How To Use Transformers To Automatically Generate Stories In Rasa




Introduction

In this article, I will share an idea to automatically generate training and test stories for building a bot using Rasa.

The code to reproduce the results in this article is here.

Problem Statement

Imagine you’re a data scientist building a bot for a financial services use case. You’ve decided the first feature you’d like to implement is a money transfer task.

The questions that need to be answered are:

  1. What are the ways in which a customer could ask to do a money transfer?
  2. How should the bot respond to the customer’s messages?
  3. What are the conversation flows that need to be supported?

Finding the answers to the above questions is typically the job of a Conversation Designer or Copywriter. If you are just starting out with building a bot, it is unlikely your team has such a resource, so what can you do?

A common approach is to consult your project’s stakeholders and ask them what they’d like to see deployed. The problem is that their imaginations can run wild, and you end up building features and conversation flows that are rarely used in practice. For example, they may insist that the bot must recognize transfer amounts written in words, e.g. “I want to transfer two thousand five hundred dollars”.

Therefore, a data-driven approach is preferable to asking stakeholders for their wish list.

Solution

Overview

If you have access to real conversations, e.g. call transcripts from your customer service center, you can build a language model that mimics how a real customer service agent would respond to any customer query. You can then play the role of a customer and converse with that language model to automatically generate conversations under a variety of scenarios.

This should suffice to bootstrap your bot.

Implementation

To do this in Rasa, we can implement a custom NLG Server. This server will host a language model that will generate responses based on a user’s input and prior turns in the conversation.

Let’s implement the custom NLG server using the FastAPI library.

Based on Rasa’s NLG server documentation, the body of the request to the NLG Server can be modeled as follows:

Figure 1: The body of the request to the NLG Server

and the response from the NLG server is:

Figure 2: Modelling the response from the NLG Server
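As a sketch, the request and response bodies might be modeled with Pydantic (the library FastAPI uses for request validation). The field names follow Rasa's documented NLG payload, but treat this as an illustrative reconstruction rather than the article's exact code:

```python
from typing import Any, Dict, List, Optional

from pydantic import BaseModel


class Tracker(BaseModel):
    """Subset of Rasa's conversation tracker sent with each NLG request."""
    sender_id: str
    slots: Dict[str, Any] = {}
    latest_message: Dict[str, Any] = {}
    events: List[Dict[str, Any]] = []


class NLGRequest(BaseModel):
    """Body of the POST request Rasa sends to a custom NLG server."""
    response: str  # name of the response to generate, e.g. "utter_greet"
    arguments: Dict[str, Any] = {}
    tracker: Tracker
    channel: Dict[str, Any] = {}


class NLGResponse(BaseModel):
    """Body of the reply the NLG server sends back to Rasa."""
    text: str
    buttons: List[Dict[str, Any]] = []
    image: Optional[str] = None
    elements: List[Dict[str, Any]] = []
    attachments: List[Dict[str, Any]] = []
```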

For the purpose of this article, we will use the transformers library’s pipeline feature to generate the responses. The model we will use is the blenderbot-3B model from Facebook.

This is how the pipeline is defined:

Figure 3: Defining a pipeline to interact with the model
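A minimal sketch of the pipeline definition, assuming the transformers library is installed and there is enough memory for a 3B-parameter model (the lazy-loading helper is my own convention, not necessarily the article's exact code):

```python
MODEL_NAME = "facebook/blenderbot-3B"


def get_nlg_pipeline():
    """Build a conversational pipeline around Blenderbot.

    The import and model load happen lazily so the large model is only
    downloaded and loaded into memory on first use.
    """
    from transformers import pipeline

    return pipeline("conversational", model=MODEL_NAME)
```

A "conversational" pipeline takes a transformers Conversation object and appends the model's generated reply to it.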

Generating a response requires creating a Conversation object. This object requires a user’s input, their past inputs, and the bot’s past responses. This information can be extracted from Rasa’s Tracker object:

Figure 4: Building the Conversation object
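As an illustrative sketch, the tracker's event list (each event is a dict with an "event" type such as "user" or "bot" and a "text" field) can be split into the pieces a Conversation object needs. The helper name below is hypothetical:

```python
def split_tracker_events(tracker: dict):
    """Extract (latest user input, past user inputs, past bot responses)
    from a Rasa tracker dict, for building a transformers Conversation.
    """
    user_texts, bot_texts = [], []
    for event in tracker.get("events", []):
        if event.get("event") == "user" and event.get("text"):
            user_texts.append(event["text"])
        elif event.get("event") == "bot" and event.get("text"):
            bot_texts.append(event["text"])
    # The most recent user message is the new input; the rest is history.
    *past_inputs, latest_input = user_texts
    return latest_input, past_inputs, bot_texts
```

A Conversation could then be built with `Conversation(text=latest_input, past_user_inputs=past_inputs, generated_responses=bot_texts)`.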

Finally, we need to define an endpoint that will return the response. Let’s name this endpoint nlg:

Figure 5: Creating a POST endpoint to generate the bot’s response

Example Conversation

Now we are ready to talk to the bot.

Let’s pretend to be a customer that wants to transfer money to John and see where the conversation goes:

Figure 6: Interacting with the bot when connected to the NLG server

We can start the conversation with a different prompt to build a different conversation flow:

Figure 8: Creating a different conversation flow for the same use case

Converting these conversations into the training or test story format that Rasa expects is easy with Rasa X.
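For reference, a Rasa training story in the YAML format looks roughly like this (the intent and action names below are hypothetical):

```yaml
stories:
- story: transfer money happy path
  steps:
  - intent: request_transfer
  - action: utter_ask_recipient
  - intent: inform_recipient
  - action: utter_ask_amount
```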

Next Steps

Fine-tuning On Domain-Specific Conversations

The bot responses in Figures 7 and 8 certainly sound human-like and friendly, but they do not seem right given the task at hand, i.e. the customer wanting to transfer money. For example, the bot should have asked how much money the customer wanted to transfer during the conversation in Figure 7.

This isn’t surprising considering the datasets used to train and fine-tune the model (see section 6 of the Blenderbot paper).

Fine-tuning the model on more domain-specific conversations may help the bot ask the right questions.

Automatically Generating User Messages

The current approach requires a human to talk to the bot to create a conversation. What is stopping us from substituting the human with another bot?

Conclusion

This article has shared an idea for automatically generating training and test stories to train and evaluate a Rasa bot.

Let me know in the comments what you think.
