
Dialogue Flow Management - Rasa

The second aspect of the conversational framework is the Dialogue Management Model. A dialogue management model predicts the response or action that the chatbot should take based on the stage of the conversation. These actions/responses could be to print results on the screen, fetch data from a database, send an email to the user or simply say “Thank you!”.

For example, if a user asks, “What’s the weather like in Mumbai today?”, then the bot should fetch the results from a weather database and display them on the screen. It shouldn’t reply with a greeting message or ask for the user’s location.

The dialogue management model accomplishes the task of learning to take the correct action based on the stage of the conversation.

In the next video, you’ll learn to work with the second part of the Rasa architecture, namely, Rasa Core, which is the dialogue management layer of Rasa. Rasa Core takes structured input in the form of intents and entities (i.e., the output of Rasa NLU), and decides the next actions.

Just like any other ML model, we need training data for the dialogue management model as well. The training data is in the form of conversational stories.

A story represents one training example, and it is simply a conversation between a user and an AI assistant, expressed in a particular format.

In the stories file, user inputs are expressed as corresponding intents (and entities where necessary), while the responses of the assistant are expressed as corresponding action names.
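As a concrete illustration, a story in the Rasa 1.x Markdown stories format might look like the sketch below. The intent names (`greet`, `weather_search`) and action names (`utter_greet`, `action_fetch_weather`) are hypothetical examples, not names fixed by Rasa:

```markdown
## weather query                       <!-- story name -->
* greet                                <!-- user intent -->
  - utter_greet                        <!-- bot action -->
* weather_search{"location": "Mumbai"} <!-- intent with an entity -->
  - action_fetch_weather
```

Lines starting with `*` are user turns (expressed as intents, with entities in braces), and lines starting with `-` are the assistant’s actions.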

Aiana will now demonstrate what these ‘stories’, that is, the training data for dialogue flow, look like.

To summarise, stories are the training data for Rasa Core’s dialogue management system. To learn dialogue flow, Rasa Core uses LSTMs (Long Short-Term Memory networks), a variant of Recurrent Neural Networks (RNNs), which are ‘sequence models’.

You will learn about RNNs and LSTMs later; for now, you only need to understand that RNNs are neural-network-based sequence models (similar to HMMs and CRFs, just much more complex in architecture) that are well-suited to learning sequential processes such as speech-to-text conversion, learning dialogue flow, etc. LSTMs are a variant of RNNs whose architecture provides a facility to “remember values over arbitrary time intervals”, and hence the term ‘memory’.

In the next video, you will learn how to specify the different elements of the domain.yml file with slots, intents, entities, templates and actions. 

The domain defines the universe in which your assistant operates. It specifies the intents, entities, slots and actions that your bot should know about. Optionally, it can also include templates for the things that your bot can say.
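A small domain.yml sketch can make this concrete. The specific intent, entity, slot and action names below are illustrative, assuming a simple weather bot (the `templates` section follows the Rasa 1.x domain layout):

```yaml
intents:
  - greet
  - weather_search

entities:
  - location

slots:
  location:
    type: text

templates:
  utter_greet:
    - text: "Hello! How can I help you?"
    - text: "Hi there! What can I do for you?"

actions:
  - utter_greet
  - action_fetch_weather
```

Note that utterance templates such as `utter_greet` are also listed under `actions`, since replying with a message is itself an action the bot can take.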

You learnt how to specify different elements of the domain file: slots, entities, intents, templates and actions. Let’s summarise each of these:

  • Slots: These are ‘objects’ that you want to ‘keep track of’ during a conversation. Suppose a user says “I want to book movie tickets for Raazi”, the bot asks “For how many people?”, and then the user responds “Two”. Throughout this conversation, the bot needs to remember that ‘Raazi’ is the value of the entity movie_name and ‘2’ is the value of the entity number_of_tickets, so they can be used to, for example, query a database. In other words, slots are the ‘memory’ of the bot.
  • Intents (NLU layer): These are strings (such as ‘greet’, ‘weather_search’) that describe what the user (probably) intends to say. Intents are extracted using typical ML classifiers.
  • Entities (NLU layer): As you have learnt already, entities are extracted using models such as (sklearn) CRF, spaCy, etc. In most cases, entities are stored in slots; in cases where an entity is irrelevant for the dialogue flow, you don’t need to assign it a slot (e.g., the name of the user).
  • Templates: Templates define the way the bot will utter a statement. For example, if the bot wants to ask for a cuisine preference, then the template can define how the bot can ask for it: “What kind of cuisine would you like?”, “Please specify your cuisine preference”, etc. All of these can be specified in the templates section of the domain file. A random response is picked from the template so that your bot doesn’t sound robotic or bland.
  • Actions: The ‘actions’ component lists all the actions that the bot can perform: uttering a text message such as “Hi”, looking up a database, making an API call, asking the user a question, etc. For example, an action named ‘actions.ActionSearchRestaurants’ could look up a restaurant database to search for restaurants given entities such as location and cuisine preferences. In this case, Rasa looks up the ‘ActionSearchRestaurants’ class in the ‘actions.py’ module.
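The custom-action pattern above can be sketched in Python. In a real Rasa project the class would subclass `rasa_sdk.Action` and receive a dispatcher and a conversation tracker; the sketch below is a simplified, self-contained stand-in for that interface, with an in-memory list playing the role of the restaurant database. All restaurant data and names here are made up for illustration:

```python
# Minimal sketch of a Rasa-style custom action. In a real project this
# class would subclass rasa_sdk.Action; here a plain class with the same
# name()/run() shape is used so the example is self-contained. The
# "database" is an in-memory list of dictionaries.

RESTAURANT_DB = [
    {"name": "Spice Route", "city": "Mumbai", "cuisine": "Indian"},
    {"name": "Dragon Bowl", "city": "Mumbai", "cuisine": "Chinese"},
    {"name": "Casa Roma", "city": "Delhi", "cuisine": "Italian"},
]

class ActionSearchRestaurants:
    """Looks up restaurants matching the location and cuisine slots."""

    def name(self):
        # The identifier the dialogue manager would use to refer to
        # this action in stories and the domain file.
        return "action_search_restaurants"

    def run(self, slots):
        # `slots` stands in for the conversation tracker: the values
        # the bot has remembered so far (e.g. location and cuisine).
        matches = [
            r["name"]
            for r in RESTAURANT_DB
            if r["city"] == slots.get("location")
            and r["cuisine"] == slots.get("cuisine")
        ]
        if matches:
            return "Found: " + ", ".join(matches)
        return "Sorry, no matching restaurants."

action = ActionSearchRestaurants()
print(action.run({"location": "Mumbai", "cuisine": "Indian"}))
# → Found: Spice Route
```

The key design point is that the dialogue manager only decides *which* action to run next; the action class itself encapsulates *how* that step is carried out (a database query, an API call, or simply an utterance).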