Hi there! Remember our chatbot? Well, we keep improving it, and now we’ve given it a voice. Our tech guys from ElifTech CPD (Cool Projects Department) would like to share the results of the latest experiments with the office chatbot. We’ve been working hard to improve it and made the chatbot more user-friendly and entertaining to work with.

Why We Chose the Rasa Stack

We used the Rasa Stack to upgrade the bot. It has all the machine learning tools needed to develop chatbots and AI assistants that can handle natural conversations. With Rasa, artificial intelligence chatbots go way beyond answering simple questions. Google Assistant, which has already become a permanent helper in many households, is a perfect example of this technology in action.

The Rasa Stack is excellent for developing smart conversational assistants for many reasons. This fully customizable open-source technology lets you run your Rasa AI assistant across multiple messaging and voice platforms. Rasa learns from real conversational data, which ensures much better results compared to traditional predefined rules. We used this very clear and informative Rasa article as a reference.

Google Home Mini has excellent speech recognition that enables it to handle simple conversations with fewer misunderstandings. And since we use the Google Home Mini assistant at our office, we can now talk to the bot everywhere.

How Our Bot Got Smarter

In this short video, you can catch a glimpse of what our chatbot and Google Home Mini can do together. We used Rasa Core version 0.12 to upgrade the brain of our bot. Let us explain, in basic terms, how we reached these results.

Before connecting the chatbot and Google Home Mini, we took care of markdown responses. As you probably remember, we already have a separate Node.js server for generating bot utterances and message formatting. It performs specific actions and returns the results to the bot’s brain. You can read more about the bot’s body server here.

When we connected our bot to Google Home Mini for the first time, we chuckled at its utterances. The bot said things like this:

  • MESSAGE TO SAY: “You’re welcome :) ”
  • SPEAKER SAYS: “You’re welcome, smiley face.”

This conversation did not go well, and the only good thing the bot could do was make us laugh.
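Our fix was to sanitize utterances before they reach the speaker. Here's a minimal sketch of the idea — the helper name, emoticon list, and markdown patterns are hypothetical, not our production code:

```python
import re

# Hypothetical helper: strip emoticons and simple markdown from a bot
# utterance before it is handed to a voice channel.
EMOTICONS = [":)", ":-)", ":(", ":-(", ":D", ";)"]

def clean_for_speech(text):
    for emoticon in EMOTICONS:
        text = text.replace(emoticon, "")
    # turn markdown links [label](url) into just the label
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)
    # drop emphasis markers
    text = text.replace("*", "").replace("_", "")
    # collapse leftover whitespace
    return " ".join(text.split())

print(clean_for_speech("You're welcome :) "))  # → You're welcome
```

With this in place, the speaker no longer reads "smiley face" out loud.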

Next, we disabled the chat buttons, so the bot no longer offers clickable options for choosing a location. Instead, the bot asks about the user’s location, and the user tells the bot their office location in free text (if required).
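In the Rasa domain file, this amounts to swapping a button response for a plain-text question. A minimal sketch, assuming a hypothetical utter_ask_location template (the office names are made up for illustration):

```yaml
# Before: a response with buttons -- fine in chat, meaningless on a speaker
utter_ask_location:
- text: "Which office are you in?"
  buttons:
  - title: "Office 1"
    payload: "/inform{\"location\": \"office_1\"}"
  - title: "Office 2"
    payload: "/inform{\"location\": \"office_2\"}"

# After: a plain question the user answers in free text
utter_ask_location:
- text: "Which office are you in?"
```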

Steps of Bot Evolution

On our journey to making the bot smarter and more communicative, we followed these four steps:

  1. Create a Google Assistant channel at the Rasa Core server.
  2. Create a Google Assistant skill.
  3. Connect the Rasa Core to the Google Assistant app.
  4. Test the bot locally using ngrok.

Now, let’s dig deeper into the most significant aspects of each step.

1. Create a Google Assistant Channel at the Rasa Core Server

At this stage, we already have a bot built with the Rasa Stack (Python, Rasa Core, and Rasa NLU). Plus, there’s an action server responsible for generating bot utterances and actions like API calls (Node.js).

First, we created a ga_connector.py file in the Rasa Core server root directory. It lets Google Assistant reach the bot’s webhooks. By the way, you can use a starter pack from Rasa for your new bot as well. The comments in the file explain what each function does:

# import dependencies
      from __future__ import absolute_import
      from __future__ import division
      from __future__ import print_function
      from __future__ import unicode_literals

      import json
      import logging

      from flask import Blueprint, request, jsonify
      from rasa_core.channels.channel import UserMessage
      from rasa_core.channels.channel import InputChannel
      from rasa_core.channels.channel import CollectingOutputChannel

      logger = logging.getLogger(__name__)

      # connector class; inherits the InputChannel class from Rasa Core
      class GoogleConnector(InputChannel):

          @classmethod
          def name(cls):
              # defines the webhook URL prefix: /webhooks/Google_home/
              return "Google_home"

          def blueprint(self, on_new_message):
              # defines the webhook endpoints for Google Assistant
              Google_webhook = Blueprint('Google_webhook', __name__)

              @Google_webhook.route("/", methods=['GET'])  # health check route
              def health():
                  return jsonify({"status": "ok"})

              @Google_webhook.route("/webhook", methods=['POST'])  # webhook route
              def receive():
                  payload = json.loads(request.data)
                  sender_id = payload['user']['userId']
                  intent = payload['inputs'][0]['intent']
                  text = payload['inputs'][0]['rawInputs'][0]['query']
                  if intent == 'actions.intent.MAIN':
                      # greeting sent when the user first invokes the action
                      message = "<speak>Hey! <break time=\"1\"/>How can I help you?</speak>"
                  else:
                      # regular bot response to the user's message
                      out = CollectingOutputChannel()
                      on_new_message(UserMessage(text, out, sender_id))
                      responses = [m["text"] for m in out.messages]
                      message = responses[0]
                  r = json.dumps({
                      "conversationToken": "{\"state\":null,\"data\":{}}",
                      "expectUserResponse": True,
                      "expectedInputs": [{
                          "inputPrompt": {
                              "initialPrompts": [{
                                  "ssml": message
                              }]
                          },
                          "possibleIntents": [{
                              "intent": "actions.intent.TEXT"
                          }]
                      }]
                  })
                  return r

              return Google_webhook
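To make the receive() handler easier to follow, here is a trimmed sketch of the JSON body Google Assistant POSTs to the webhook and the fields we pull out of it. Only the fields the handler reads are shown, and the concrete values are made up for illustration:

```python
import json

# A trimmed example of the request body Google Assistant POSTs to /webhook
# (illustrative values; only the fields our handler reads are included)
payload = json.loads("""
{
  "user": {"userId": "user-123"},
  "inputs": [
    {
      "intent": "actions.intent.TEXT",
      "rawInputs": [{"query": "book the meeting room"}]
    }
  ]
}
""")

# the same lookups receive() performs
sender_id = payload['user']['userId']
intent = payload['inputs'][0]['intent']
text = payload['inputs'][0]['rawInputs'][0]['query']

print(sender_id, intent, text)
# → user-123 actions.intent.TEXT book the meeting room
```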

Next, we created another Python file in the same directory and named it run_app.py. We used it to start the Rasa Core server with the Google Assistant channel.

# import dependencies
      from rasa_core.agent import Agent
      from rasa_core.interpreter import RasaNLUInterpreter
      from rasa_core.utils import EndpointConfig
      from ga_connector import GoogleConnector

      # endpoints of the Node.js action server (actions and response generation)
      action_endpoint = EndpointConfig(url="http://localhost:8282/webhook")
      generator = EndpointConfig(url="http://localhost:8282/nlg")
      nlu_interpreter = RasaNLUInterpreter('./models/current/nlu')

      agent = Agent.load('./models/current/dialogue',
                         interpreter=nlu_interpreter,
                         action_endpoint=action_endpoint,
                         generator=generator)

      # start the bot on port 5002 with the Google Assistant channel
      input_channel = GoogleConnector()
      agent.handle_channels([input_channel], 5002, serve_forever=True)

Everything is pretty straightforward here, so we’ll not get into much detail.

To start the server, we run python run_app.py.

Note: Don’t forget to train the NLU and Dialog models before starting.

2. Create a Google Assistant Skill

Next, we had to create a Google Assistant skill. For this, you have to:

  1. Go to Google Actions and click on the Add/import project button.

Actions on Google

2. Type in the name of your project and click on the Create project button.

New project dialog

3. Once you’re redirected to the Category page, scroll down and click on Actions SDK.

Actions SDK

This means we’ve chosen to use our own NLU and dialogue management system as the source for the Google Assistant app.

4. Copy the gactions command from the dialog box you’ll see next.

Add Actions on Actions SDK

gactions update --action_package PACKAGE_NAME --project cpd-mini-bot

Saved it? Good! We’ll be using it later, so don’t lose it.

5. Click on the OK button and choose the category of your bot.

6. Now, you’re on the App Configuration page. This is where you can decide how your action is invoked.

App Configuration page

There! You’re done with the skill setup.

3. Connect the Rasa Core to the Google Assistant App

We used the gactions CLI to teach the Google Assistant app two things: first, how our custom action is invoked, and second, to pass all user inputs on to our bot’s brain (Rasa Core with a Node.js action server).

To achieve this, we downloaded the gactions CLI and placed it in the bot root directory. Then, we ran ./gactions init to initialize the Google Assistant configuration file. We got an actions.json file where we had to fill in our invocation actions data and webhook URL. This is how it looked:

       {
         "actions": [
           {
             "description": "Default Welcome Intent",
             "name": "MAIN",
             "fulfillment": {
               "conversationName": "cpd mini bot"
             },
             "intent": {
               "name": "actions.intent.MAIN",
               "trigger": {
                 "queryPatterns": [
                   "talk to cpd mini bot"
                 ]
               }
             }
           },
           {
             "description": "Rasa Intent",
             "name": "TEXT",
             "fulfillment": {
               "conversationName": "rasa_intent"
             },
             "intent": {
               "name": "actions.intent.TEXT",
               "trigger": {}
             }
           }
         ],
         "conversations": {
           "cpd mini bot": {
             "name": "cpd mini bot",
             "url": "https://04024d36.ngrok.io/webhooks/Google_home/webhook",
             "fulfillmentApiVersion": 2
           },
           "rasa_intent": {
             "name": "rasa_intent",
             "url": "https://04024d36.ngrok.io/webhooks/Google_home/webhook",
             "fulfillmentApiVersion": 2
           }
         },
         "locale": "en"
       }

4. Test the Bot Locally Using ngrok

Note: We tested our Google Assistant custom action locally with the help of ngrok. We used the ngrok URL as the webhook URL in actions.json (the url fields of both entries under conversations).
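One caveat: ngrok hands out a new URL every time it restarts, so the url fields in actions.json go stale. A small hypothetical helper to rewrite them — the helper name and the shortened config are illustrative, not part of our actual setup:

```python
# Hypothetical helper: rewrite the fulfillment URLs in the actions.json
# config whenever ngrok hands out a fresh host.
def update_ngrok_urls(config, ngrok_host):
    webhook = "https://%s/webhooks/Google_home/webhook" % ngrok_host
    for conversation in config.get("conversations", {}).values():
        conversation["url"] = webhook
    return config

# shortened stand-in for the actions.json shown above
config = {
    "conversations": {
        "cpd mini bot": {"url": "https://old.ngrok.io/webhooks/Google_home/webhook"},
        "rasa_intent": {"url": "https://old.ngrok.io/webhooks/Google_home/webhook"},
    }
}
updated = update_ngrok_urls(config, "04024d36.ngrok.io")
print(updated["conversations"]["cpd mini bot"]["url"])
# → https://04024d36.ngrok.io/webhooks/Google_home/webhook
```

After rewriting the file, the configs have to be pushed to Google Actions again with gactions update.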

Here’s what we’ve got so far:

1. We created a Rasa Core connector to handle Google Assistant’s custom action requests.

2. We initialized a custom action in Google Assistant actions.

3. We wrote configs and used the gactions CLI to invoke the brain of our bot when the custom action is called.

Next, we needed to deploy the configs to Google Actions, start our bot with a custom Google Assistant connector, and give a task to Google Mini to talk to our CPD mini-bot.

Push actions.json:

./gactions update --action_package actions.json --project mini-me-8e06e

Remember that gactions command we told you to save? The time has finally come to push it.

Then, we started our bot’s brain with python run_app.py.

Hooray! We’re online!

But before talking to Google Mini, we tested it on the Simulator:

Simulator Actions on Google

You can see the results here.

Final Thoughts and Next Steps

In the near future, we plan to make a text generator for our bot. For this, we want to use the Python TensorFlow library. It has lots of high-level and low-level APIs for that. Most of them are user-friendly (or programmer-friendly, for that matter). We’re excited to explore the new opportunities chatbot software offers.

On top of that, our intelligent bot should soon be able to do more than merely book rooms. It will also manage most of the “smart stuff” our team is developing in the office.

Stay tuned for more exciting updates on AI chatbots from ElifTech’s Cool Projects Department. Who knows? Maybe soon we won’t be able to tell artificial intelligence bots from human support. We’d love to have a casual conversation with a bot. How about you?

Iron Man, J.A.R.V.I.S. (Just Another Rather Very Intelligent System) - always inspiring!