Making a Lightweight, Low-Cost Rasa Chatbot with NGINX


There’s something about monotonous Monday morning scheduling, emailing and planning that just screams “there must be a more efficient way to do this”.  

Well, I’m happy to introduce you to TTT’s Coordination Lookup and Analysis Utility, CLAU! CLAU is a conversational AI chatbot for Slack made using the Rasa Open Source framework that we use to automate those simple, repetitive tasks that normally take up a lot of valuable work time. 

The current implementation is for our Project Management team. All they need to do is message the bot in Slack to quickly find documentation and retrieve information from internal management tools, saving time. That doesn’t sound so complicated, right? Well…

One way of externally deploying your chatbot is with Rasa X, Rasa’s own toolset. Rasa X is a tool for conversation-driven development, giving developers a UI to collect, review, and annotate data from users. We opted to deploy our chatbot without Rasa X. The main reason was that the functionality we would get from Rasa X wasn’t worth the added complexity and cost of setting it up for our particular use case at this point in the project. There may be a time in the future where Rasa X suits our use case, but for now, we wanted a cleaner, lightweight setup. It’s up to you whether Rasa X suits your needs.

If you decide not to go with Rasa X, the challenge is finding appropriate documentation. Most of the Rasa documentation that exists assumes you’re going to install Rasa X with your initial setup. Without Rasa X, there’s less documentation out there to support you.

We developed CLAU with limited documentation and ran into three key issues that I’m going to share with you. Whether you’re working on a low cost, lightweight chatbot for a personal project or for work purposes, I hope these insights save you some time. After all, that’s the whole point of the project!

For a deeper dive into chatbots in general, take a look at this previous blog of ours.

1. Deploying your Rasa chatbot with Docker

The first question I had when we decided not to use Rasa X was: is it even possible? The answer is yes. But how?

When facing a challenge that can quickly become complex, the best strategy is always to abide by best practices. The relevant best practice in this case is that you should build and test your bot locally before you move on to the remote environment. If you ever add additional services or change the scope of your project, it’s always a great idea to test everything out locally. Once you can deploy locally without a hitch, moving on to your preferred hosting service should be smooth.

The following steps will guide you through the process of local deployment. If you’re using Docker for the first time: Docker containers give developers scalability, isolation, and consistency across different environments, among other benefits.

Before we continue, make sure you have the following installed and ready to go: Docker, Docker Compose, Rasa Open Source, and ngrok.

To see if you have Docker and Docker Compose properly installed, try running:

docker --version
docker-compose --version

Initializing a simple Rasa chatbot (you can quickly make one by following Rasa’s tutorial) gives you a standard project directory structure.

We want our project directory to look like the following before attempting the next step:

Note that I placed the custom actions file (actions.py by default) into a folder called actions, alongside a requirements-actions.txt and a Dockerfile. This helps modularize our project, as everything related to our custom code now lives in this one folder.

Additionally, I moved the ngrok.exe application and added a docker-compose.yml file in the project directory for easy access from the command line.
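Putting those two changes together, the project directory looks roughly like this (a sketch; the Rasa-generated files and their names can vary slightly by version):

```text
project/
├── actions/
│   ├── actions.py               # custom actions code
│   ├── requirements-actions.txt
│   └── Dockerfile
├── data/                        # generated by rasa init
│   ├── nlu.md
│   └── stories.md
├── models/
├── config.yml
├── credentials.yml
├── domain.yml
├── endpoints.yml
├── docker-compose.yml
└── ngrok.exe
```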

Please refer to the following code blocks to understand what to place in each of the new files. If you are using custom actions, make sure to point the endpoints.yml file at the action server’s URL as well.

The Dockerfile provides Docker with instructions on how to build your actions code. Here’s a template:

#use whatever version suits you
FROM rasa/rasa-sdk:latest

#define the working directory of the Docker container
WORKDIR /app

#copy the requirements file that lists your dependencies
COPY ./requirements-actions.txt ./

#copy everything in the actions directory (your custom actions code) to /app/actions in the container
COPY ./ /app/actions

#install dependencies inside the Docker container
USER root
RUN pip install -r requirements-actions.txt
USER 1001

The requirements-actions.txt file lists the packages your actions code needs. Here’s a template:

#<package_name1>==<version of package you want> 

The docker-compose.yml provides instructions to Docker Compose on how to run your containers. Here’s a template:

version: '3.0'
services:
  rasa:
    container_name: rasa
    # go to Docker Hub / the Rasa changelog to see which version and flavour of Rasa you want
    # make sure the version you specify matches the version you pip installed
    image: rasa/rasa:1.10.5
    # map port 5005 of the local machine to port 5005 of the container
    ports:
      - 5005:5005
    # mount the current directory to the /app directory in the container
    volumes:
      - ./:/app
    command:
      - run

  app:
    container_name: actions
    image: <name of image>
    expose:
      - 5055

Here is a template endpoints.yml:

action_endpoint:
  url: "http://app:5055/webhook"

Here are the commands to build and run your bot (ngrok exposes local port 5005 through a public URL):

#inside your project directory
docker build -t <name of image> ./actions
docker-compose up -d
./ngrok http 5005

If your deployment worked, opening http://localhost:5005 in a browser shows a hello message from Rasa.

Run this curl request to check if your chatbot can talk (Rasa’s REST channel listens at /webhooks/rest/webhook):

curl --request POST 'https://<your url>/webhooks/rest/webhook' \
--data '{"sender":"Test","message":"Hi"}'

This is what you should get back: 

[{"recipient_id":"Test","text":"Hey! How are you?"}]
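If you prefer scripting the check, here is a small Python sketch of the same request, using only the standard library (build_message_request and ask_bot are hypothetical helper names, not part of Rasa):

```python
import json
from urllib import request

def build_message_request(base_url, sender, message):
    """Build a POST request for Rasa's REST channel at /webhooks/rest/webhook."""
    payload = json.dumps({"sender": sender, "message": message}).encode()
    return request.Request(
        base_url + "/webhooks/rest/webhook",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def ask_bot(base_url, sender, message):
    """Send one message to the bot and return its replies as a list of dicts."""
    with request.urlopen(build_message_request(base_url, sender, message)) as resp:
        return json.loads(resp.read())

# Example (requires the bot to be running locally):
# ask_bot("http://localhost:5005", "Test", "Hi")
```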

2. Installing transport layer security

Now that you have a Dockerized version of Rasa working locally, you probably want to connect your chatbot to an external messaging service like Slack or Messenger and deploy it on a hosting service like Google Cloud or AWS. Before you do so, your chatbot’s messages need to be encrypted. One way to do this is to install SSL certificates using NGINX, a service you can easily incorporate into your application.

NGINX (pronounced Engine X) is open-source software that acts as a server sitting in front of your application. It handles traffic and performs tasks such as reverse proxying, caching, and load balancing. This provides an extra layer of security between your chatbot and the outside world. 

You can configure NGINX as a Docker container to receive all HTTP traffic which will then be sent upstream to the Rasa container (or an authentication server). This will keep the Rasa container isolated from direct contact with the external world. Additionally, by storing SSL certificates in the NGINX container, you ensure that you can provide a secure connection between your server and servers of the messaging provider you are using (like Slack).

NGINX would be placed before the Bot User to add a layer of security.

Here’s how you can configure NGINX to add this layer of security.

Step 1, obtain an SSL certificate

To begin, you will need to get a domain name and set up a remote environment with Docker, Docker Compose, and your code. 

You can register a domain name with any domain registrar, and if you are not sure where to deploy remotely, Google Cloud is a good place to start thanks to its generous free credits. For this demo, a simple development environment was created on GCP Compute Engine using an n1-standard-1 VM (1 vCPU, 3.75 GB memory) running Ubuntu 16.04 LTS (20 GB disk). Try testing different setups and services to suit your use case.

The first step in obtaining an SSL certificate is to find a certificate authority. Let’s Encrypt is a popular non-profit authority that provides free certificates and is the option that we will go with. 

Perform the following commands to install certbot (Let’s Encrypt software to install free SSL certificates):

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

To get a certificate (you will need a domain name and email ready in this step):

sudo certbot certonly --standalone 

Next, move your freshly obtained certificates into a directory that Docker has access to. I placed them in my project directory. Wherever you put them, do not commit them to version control:

#Inside your project directory
mkdir certs 
sudo cp /etc/letsencrypt/live/<domain>/fullchain.pem ./certs
sudo cp /etc/letsencrypt/live/<domain>/privkey.pem ./certs

Before moving on, make sure that you have Docker and Docker Compose installed in your VM. Now that we have SSL certificates, we need to put them into NGINX.

Step 2, configure NGINX as a reverse proxy

When you run your application, you need to make sure that NGINX is configured to act as a reverse proxy for your Rasa container and pointed at where the new SSL certificates are stored. If it is not configured correctly, your chatbot will never see user input, because NGINX cannot forward the messages to the Rasa container.

First, make a configuration file inside the project directory (note that this is an environment-specific configuration file so it will not work on localhost without a few changes):

mkdir nginx
nano nginx/default.conf

Inside default.conf:

#sends events to the rasa container on port 5005
upstream rasa {
    server rasa:5005;
}

#listen on port 80 (default port for unencrypted traffic)
#if testing locally, <your_domain_name> is localhost
server {
    listen       80;
    server_name  <your_domain_name>;

    #reverse proxy to the rasa container
    location / {
        proxy_pass  http://rasa;
    }
}

#comment out this block if you are testing locally
#listen on port 443 (default port for encrypted traffic)
server {
    listen 443 ssl;
    server_name <your_domain_name>;

    #points to the SSL certificates that we mount into the NGINX container via Docker Compose
    ssl_certificate /etc/letsencrypt/live/<your_domain_name>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your_domain_name>/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/<your_domain_name>/fullchain.pem;

    #reverse proxy to the rasa container
    location / {
        proxy_pass  http://rasa;
    }
}
The last step is to make a new NGINX container that includes the SSL certificates and configuration file that we created and add it to the deployment.

Step 3, add NGINX into the Docker deployment

Add the source code below to your Docker Compose file and you’re good to go!

version: '3.0'
services:
  rasa:
    container_name: rasa
    # go to Docker Hub / the Rasa changelog to see which version and flavour of Rasa you want
    # if unsure, rasa/rasa:latest-full is a good default option
    image: rasa/rasa:1.10.5-full
    # this is the port on the container that is being exposed
    expose:
      - 5005
    # mount the current directory to the /app directory in the container
    volumes:
      - ./:/app
    command:
      - run

  app:
    container_name: actions
    image: app-server
    expose:
      - 5055

  nginx:
    container_name: nginx
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx:/etc/nginx/conf.d
      # I kept my SSL certs in a certs folder in the project directory (make sure to add it to .gitignore)
      - ./certs:/etc/letsencrypt/live/<domain>

#you can specify your own network if you do not like the default Docker network naming

If you don’t have your custom actions image already, you can pull it from an image repository like Docker Hub or simply build it again using the Docker build command. 

Now, see if it works! Inside the project directory, run:

docker-compose up -d

Go to your domain over https:// and, after giving it a few seconds to load, you should see the hello message from Rasa.

3. Avoiding having to curate unnecessary training data

There’s always a bit of uncertainty on Rasa’s part when it tries to recognize entities. Rasa X can help with this issue by collecting lots of user input and training new models to recognize previously unseen entities. However, without Rasa X, we wanted a better way for our chatbot to recognize entities it hasn’t seen before.

Since CLAU’s users are members of TTT’s Project Management team, the chatbot needed to be trained to recognize a wide variety of project names. At TTT, we have a steady stream of new, unstructured project names, so we devised a system to avoid having to curate unnecessary training data every time we have a new project. Sometimes you don’t need a large quantity of data when you can create high quality data.

We developed a simple parentheses notation system that gives our chatbot 100% certainty that a word (or a group of words) is a specific entity. Instead of trying to train a chatbot to recognize new words as entities, we simply trained it to recognize a simple notation instead.

Now, imagine a restaurant website with a chatbot, where the restaurant’s menu changes every day. You could push menu updates through the chatbot simply by telling it: “add (Bruschetta) as a daily special”. This way you wouldn’t have to worry about training the chatbot to recognize Bruschetta as a dish, because it’s already trained to recognize everything inside the parentheses as a dish. Here’s some example use of the notation:

#example training data 

## intent:update_menu
- put [(Sandwich)](food) as the daily special
- Can [(Clam chowder)](food) have its price updated by a dollar
- remove [(steak)](food) from the menu

# a second example involving names this time 

## intent: message_user
- remind [(Amanda)](person) of our meeting today
- send [(Darth Vader)](person) the bill for customizing his mask
- give [(Carly)](person) in [legal](department) the documents for our new project

This could be effective for you if you have to handle bite-sized entities or categories that change or need to be updated regularly.
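Outside of Rasa, the mechanism behind the notation is just pattern matching. Here is a minimal Python sketch (extract_marked_entities is a hypothetical helper, not CLAU’s actual code) showing how parenthesized spans can be pulled out with certainty:

```python
import re

def extract_marked_entities(message):
    """Return every parenthesized span in a message.

    With the notation described above, anything the user wraps in
    parentheses is guaranteed to be an entity value -- no extra
    training data is needed to recognize new words.
    """
    return re.findall(r"\(([^()]+)\)", message)

print(extract_marked_entities("add (Bruschetta) as a daily special"))
# prints ['Bruschetta']
```

Inside Rasa, the same effect comes from the training examples above: the model learns the notation itself rather than each new word.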

This special notation wouldn’t be effective in situations where you have a stream of new users who would need training on using the notation. Our users are our project managers. We can quickly show them how to use it. After all, it’s a pretty small application and this use is internal for us for now. If, for example, you run a high volume website for making appointments, it would be unrealistic to expect users to learn and understand a special notation just to make an appointment. 

At this point you should have a solid foundation for your Rasa chatbot: you’ve deployed it locally using Docker and Rasa Open Source, added transport layer security with NGINX, and perhaps incorporated a parentheses notation as a workaround for certain kinds of training data. I hope you found the instructions and source code useful, and good luck with your Rasa chatbot!