google-bioloid-premium-voicekit App

Well, after I finished up creating my proof-of-principle voice-directed Robotis Bioloid Premium robot, I was pleased but still curious about enhancing its internet interface – I wanted something more along the lines of Amazon’s Echo or Google’s Home assistants…

So, back to searching…lo-and-behold, I discovered I had missed the fact that you could create a Raspberry Pi-based Google assistant that could process commands directly between the RPi and the RBP’s CM-530 controller…

There were two things, though, that I did not particularly like – first, it would be a sort of hybrid of the Echo and Snips applications – like Snips, its action commands would come from the RPi, but, like the Echo, it needed to process the voice input in the cloud…I was concerned that this would add some latency between a voice command and the robot’s action…Snips processes everything on the RPi…the Echo processes everything in the cloud…this would be somewhere in the middle…

Second, the only trigger words you can use with the Google device are Hey Google or OK Google – I appreciate Google wanting users to know who developed the device’s workings, but at least with Snips I could use Jarvis (not the best, but OK)…

In any event, I wanted to see if things could work – I thought it would be worth the trade-offs if I could improve the personality of the robot – its “interactiveness”…so, I decided to give it a go…

Overview. I won’t go into all the detail I did for the Snips-directed Jarvis robot, but rather try to pass along the benefit of my learning curve…I will point out some of the areas that caused me confusion, in hopes that it will smooth those areas for you…

RPi/2-Mic Pi HAT/Raspbian. Getting the computer together is the same as I detailed earlier for the RPi and the Pi HAT…the DotStar setup is a bit different, so let’s wait on that…

Google Assistant SDK. The setup of your RPi for the Google Assistant SDK is detailed in several places on the internet, so I won’t go into too much here…Google has its own tutorials, and it’s best to start there…there are other sites around on developing a GooglePi, so if the Google site seems a bit ornery, you can try one of those…

As with Snips or the Echo, you need to set up an account with Google in order to start building your project – it’s covered in the various tutorials…I found the process of creating and registering my project a bit convoluted and confusing – my advice here is to be patient and realize that Google sometimes takes a while to respond to your actions during the process…in any event, do not fret, since everything is undoable and it does not cause much trouble if some of what you do is redundant…just be sure to note, at each step that results in Google assigning something a tag or number, what it is – e.g., when you set up your project, you will get a Project name and a Project ID that you will need later as you develop the files and such for your project…

To orient you to the process of setting up a Google Assistant SDK project, I have outlined it for you – read it through before you get started so you have a grasp of the overall flow – I think it will be less confusing that way:

  • You need an account with Google…
  • Once you have an account, you can gain access to the Google Cloud Platform…your project(s), once set up, will be listed there, and it is an easy place to get your basic project reference info, numbers, etc.
  • You will create your project at the Google Actions Console…go there with your PC browser…
  • In another tab on your PC, since you already have your RPi set up, you can start the tutorial here to create your project…note that some of the tutorial’s terms may shift around a bit from what you see, since this is a rather complex setup and it’s hard to keep all the documentation up-to-date…go for the gist
    • Click New Project
    • Give it a name and choose language
    • At the bottom of the next screen, click on Device registration, then on Register Model (models are used to differentiate devices that you may create that are similar but configured differently, e.g., US vs EU intended devices – you need one to get started)
      • There are 3 fields here and none of the labels are too important – note that the suggested Model id will change as you fill in the other fields…you can edit it to something you like better by clicking on the edit pen…
      • When you go to the next screen, you can download the OAuth 2.0 credentials file, or you can do this later – it is a file with a LONG name that ultimately needs to be stored on the RPi and referenced in some registration steps…probably easier to wait until you are on your RPi and have your browser open…
      • The next screen is for assigning traits to your project…like the Alexa Device environment, Google has a library of certain actions that are sort of pre-programmed for your use…you can also create your own device actions…for purposes of the Google tutorial that you may want to follow, scroll down and click on the ONOFF trait and click to move on…
      • Though it may not be apparent, your device is now registered – the button at the upper right of the screen is to register additional models…
    • When you finish the registration, you are returned to the Device Registration screen – notice it is highlighted in the left menu bar…at the top bar you are in the Develop tab…click on the Invocation item at the left bar and enter something in the Display name box – it’s not important what it is now, it just needs something…click SAVE
      • The other items on the left bar are not important now…
    • You now have a project…this is one area where the tutorial gets confusing and seems to indicate that you need to do all this registration again – you do not, you now have a project…
    • You can see it if you go to the Google Cloud Platform – at the top bar there is a dropdown where you can select your project…once you select it, you will see your Project info…you will need to refer to your Project name and Project ID often as you develop your project…
      • Note while at this screen, you may need to Enable the API

For the most part, you are done with the Actions on Google and Cloud Platform sites, other than maybe needing to refer to them for names/numbers…

OK, so now that you have a project, you need to move back to the RPi and get the Google Assistant SDK set up…the tutorial for this from Google is good, so just follow it through…

  • You will be setting up a virtual environment just like you did for the Snips application, but unlike the Snips environment, the user account for all this (unless you change it) is pi, so things are a bit easier for troubleshooting, permissions, and such…the basic commands are sketched just after this list…
  • Before you start the tutorial, now that you are on your RPi, using your browser, go to the Google Actions Console
    • click on your project, and in the Device registration screen (left bar), click on the more icon at the right-hand end of the device line
      • click on the OAuth download button
      • your browser will most likely download it to the Downloads folder – use your File Manager (or mv from a terminal) to move it to the /home/pi/ directory…
google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
      --save --headless --client-secrets /path/to/client_secret_client-id.json
  • Note the line above, which is part of the tutorial – it uses the OAuth 2.0 credentials file you generated when setting up your project, now downloaded and placed in the /home/pi/ directory, to authorize your project to run on the RPi…
    • in this part of the command: /path/to/client_secret_client-id.json
      • for the /path/to part, use /home/pi
      • for the client_id part, just substitute a wildcard asterisk, *, since that will be the only file like that in the directory and it will save you from dealing with that long id number (the shell expands the wildcard for you)…so:
 /home/pi/client_secret_*.json
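Putting the pieces together, the full command ends up looking like this (assuming the credentials file is the only client_secret file in /home/pi):

google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
      --save --headless --client-secrets /home/pi/client_secret_*.json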
  • Once you have installed and authorized the SDK, you can follow this page to run the initial test instance of the Assistant…
    • The initial assistant example uses push-to-talk (i.e., hit the <enter> key) to initiate the conversation – it will at least let you know that things are working correctly and you should do that at first…
    • Then, continuing with the tutorial, you can move on to setting up an LED to let you try out both the trait approach and the Device Actions approach – I suggest you do both – for my robot, I ultimately used the Device Actions method (see below)…
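
As promised above, here is roughly what the tutorial’s virtual-environment setup boils down to – I am quoting it from memory, so defer to the tutorial where they differ…the /home/pi/env location matches what the service file at the end of this post expects:

sudo apt-get install python3-dev python3-venv
python3 -m venv /home/pi/env
/home/pi/env/bin/python -m pip install --upgrade pip setuptools wheel
source /home/pi/env/bin/activate
python -m pip install --upgrade google-assistant-sdk[samples]
python -m pip install --upgrade google-auth-oauthlib[tool]
python -m pip install --upgrade google-assistant-library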

While you are at the Google Assistant SDK Guides screen, note in the left-hand menu bar there are two main sections, the Google Assistant Service and the Google Assistant Library

The Library has been replaced by the Service…prior to this change, the tutorial walked you through setting up an assistant with a hotword voice trigger (OK Google or Hey Google), rather than the push-to-talk trigger method that is now used in the Service tutorial…this is not obvious (that a hotword demo exists), so I was disappointed when I couldn’t use voice – some research and playing around led me to the deprecated Library and its hotword.py script…

The hotword.py script is set up to use the OK Google or Hey Google triggers…once I finished with the tutorial following the Service route, I then did the deprecated tutorial in the Library section…

I also found the Python code in the hotword.py script easier for a Python neophyte such as myself to follow and modify – the pushtotalk.py script is much more sophisticated – so I used the hotword.py script as the starting point for my rbp_robot.py demo script…you can find my files here on Github…
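
To give you the flavor, here is the heart of the device-action handling as it appears in the hotword.py sample – the EventType plumbing is Google’s, while cheer() and pound_chest() are just my placeholder helpers standing in for whatever you wire up to the CM-530:

from google.assistant.library.event import EventType

def cheer():
    # placeholder – in rbp_robot.py this triggers a CM-530 motion and a WAV
    print("cheer!")

def pound_chest():
    # placeholder
    print("pound chest!")

def process_event(event):
    # hotword.py calls this for every Assistant event; a matched queryPattern
    # delivers the deviceExecution command string from your JSON actions file
    if event.type == EventType.ON_DEVICE_ACTION:
        for command, params in event.actions:
            if command == "Cheer":
                cheer()
            elif command == "PoundChest":
                pound_chest()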

  • Note that compared to my Snips scripts, the Google code has only the base rbp_robot.py and actions_leds.py files on the RPi…in addition, on my PC, I created the rbp_robot_commands_01.json actions file that is required to implement Device Actions (as opposed to using Google’s built-in traits functions, which at this point are mostly set up for smart-house implementations)…
  • With Snips, you create a project and add or create Apps that are used when the Snips Assistant runs on the RPi – they are developed in the Snips Console (cloud) and downloaded when you install the Assistant…the Apps for Snips are basically just the utterances you want to use for the project; the rest of the related actions are created on the device
  • Similarly, with Google, you create a (one) JSON action(s) file that contains the utterances for input, but also contains the text that you want output as a result by the RPi TTS process…
    • the JSON file can be quite complicated in nature, but fortunately, for purposes of the robot, it is pretty simple
    • the name you give to the JSON file seems to be basically arbitrary, except that it cannot begin with the word action – I think that is reserved for the Google traits files…
    • one thing that is not initially apparent: you use one JSON file for all of your commands rather than one for each – be careful to include all of the required lines in each command function, and be sure to place a comma after each command function, e.g.,
...
"actions": [
    {
        "name": "Cheer",
        "availability": {
            "deviceClasses": [
                {
                    "assistantSdkDevice": {}
                }
            ]
        },
        "intent": {
            "name": "robot.device.command.Cheer",
            "trigger": {
                "queryPatterns": [
                    "cheer",
                    "do you cheer",
                    "can you cheer",
                    "do you ever cheer",
                    "please cheer"
                ]
            }
        },
        "fulfillment": {
            "staticFulfillment": {
                "templatedResponse": {
                    "items": [
                        {
                            "simpleResponse": {
                                "textToSpeech": "i cheer whenever i think of papa frank, i am so grateful he created me"
                            }
                        },
                        {
                            "deviceExecution": {
                                "command": "Cheer"
                            }
                        }
                    ]
                }
            }
        }
    },
    {
        "name": "PoundChest",
        "availability": {
            "deviceClasses": [
                {
                    "assistantSdkDevice": {}
                }
            ]
        },
        "intent": {
            "name": "robot.device.command.PoundChest",
            "trigger": {
                "queryPatterns": [
                    "pound chest",
                    "pound your chest",
                    "can you pound your chest"
                ]
            }
        },
        "fulfillment": {
            "staticFulfillment": {
                "templatedResponse": {
                    "items": [
                        {
                            "simpleResponse": {
                                "textToSpeech": "I am a proud robot"
                            }
                        },
                        {
                            "deviceExecution": {
                                "command": "PoundChest"
                            }
                        }
                    ]
                }
            }
        }
    },
    {
        "name": "Wink",
        "availability": {
            "deviceClasses": [
                {
                    "assistantSdkDevice": {}
                } ...
  • When you create your first JSON file, you will need to go through an authorization process using gactions update, and once authorized, a gactions test to push the package out to your RPi…
  • Once you have gained authorization for a named JSON file, any time you edit it you will need to resubmit it to Google, but you will not have to go through all the initial update authorization steps – just do a gactions update, then a gactions test to push out the updated package…
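
For reference, the two gactions commands end up looking something like this – run them from the directory holding your JSON file, substituting your own file name and Project ID (the ones below are just examples):

gactions update --action_package rbp_robot_commands_01.json --project your-project-id
gactions test --action_package rbp_robot_commands_01.json --project your-project-id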

LEDs and WAV files.

  • Making sure you are in the Google Assistant environment ( source env/bin/activate ), cd to the folder where your hotword.py is located (probably cd /home/pi/assistant-sdk-python/google-assistant-sdk/googlesamples/assistant/library ) and go through the steps to install the Adafruit DotStar library
    • since the needed import file – adafruit_dotstar.py – can be difficult to find, I included it in the Github files so it can just be copied into your local project directory…
    • essentially, the actions_leds.py is the same as the one used for the Snips assistant (there is a short usage sketch at the end of this section)…the other actions files using defs – chat, motions and sensors – are not really helpful, since the file structure is different for the Google assistant, and, for the sensors queries, Google seems to override the local request and feeds back general weather information, so I eliminated them from my demo…you may want to play with that a bit…
    • some of the import stuff is a bit different too so check out how it’s done in the rbp_robot.py file…
    • add the dependencies to your requirements.txt file – an example is included in the Github files…
  • I got used to having a tone sound when using the Snips assistant, so I wanted to incorporate one in the Google assistant…it was pretty straight-forward once I figured out what module to use…pyGame works well and is simple to implement:
import pygame

# pygame.init() initializes all pygame modules, including the mixer used for audio
pygame.init()

# load the WAV and set a modest volume (0.0 to 1.0)
chime = pygame.mixer.Sound("/path/to/Chime.wav")
chime.set_volume(0.2)

# play() is non-blocking, so the assistant keeps running while the chime sounds
chime.play()
  • You can use any WAV file…I found what I needed at SoundBible.com…I also used a sound file to provide some background for the cheer command in the demo file…I included the two I used in the Github files…
  • pyGame comes with Raspbian, so you should just be able to import it without an initial install – if not, the instructions are here
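
As for the DotStar LEDs themselves, here is a minimal usage sketch of driving the HAT’s three APA102s with the Adafruit library – the pin choices (clock on GPIO11, data on GPIO10) reflect my wiring, so adjust them to match what your actions_leds.py uses:

import time

import adafruit_dotstar
import board

# 3 LEDs on the 2-Mic Pi HAT: clock on GPIO11 (SCLK), data on GPIO10 (MOSI)
leds = adafruit_dotstar.DotStar(board.D11, board.D10, 3, brightness=0.2)

leds.fill((0, 255, 0))  # all green, e.g., while the assistant is listening
time.sleep(1.0)
leds.fill((0, 0, 0))    # back off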

One last thing to cover – auto-starting the Google Assistant…the Snips assistant defaults to starting up whenever you boot your RPi – the Google assistant does not…since we’re running a robot, it would be a pain to have to ssh into the RPi each time it starts, so I wanted it to autostart…

I looked into a few ways (/etc/rc.local, /home/pi/.bashrc, creating a .sh file, etc.) to get it to autostart, but there were issues that made them a bit cumbersome to deal with while still developing the project…what I settled on was to run it as a service…as a service, you can enable it to start at boot, start and stop it any time, get a status report, etc. – very convenient…

  • create a service file…
    • $ sudo nano /etc/systemd/system/assist.service
[Unit]
Description=Assist @ reboot

[Service]
Environment=XDG_RUNTIME_DIR=/run/user/1000
ExecStart=/bin/bash -c '/home/pi/env/bin/python -u /home/pi/assistant-sdk-python/google-assistant-sdk/googlesamples/assistant/library/rbp_robot.py'
WorkingDirectory=/home/pi/env/bin/
Restart=always
User=pi
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=assist

[Install]
WantedBy=multi-user.target
  • with the above snippet (an example file is in the Github), you may need to change the path to your project in the ExecStart line…
  • once you have the file in place:
    • $ sudo systemctl enable assist.service [tells system to run at startup]
    • $ sudo service assist start
    • $ sudo service assist stop
    • $ sudo service assist status
    • $ sudo systemctl disable assist.service [tells system to stop running it at startup]
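    • since the service’s output goes to syslog (note the SyslogIdentifier line), you can watch the assistant live while debugging with standard systemd tooling:
    • $ sudo journalctl -u assist.service -f [follows the log as it grows]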

In summary, there you have it – I hope there is enough here to steer you down a relatively smooth path to implementing a Google assistant for the voice-direction of your Robotis Bioloid Premium (or another such bot!)…leave a comment if you see something that could be made better/more helpful…

Github files here

3 thoughts on “google-bioloid-premium-voicekit App”

  1. Nice job. I’ve been trying to reach you, but you never replied back. I am a robotics student and own a Bioloid – I want my bot to be more interactive.

    1. Hi Michael…nice to see that you are a student – robotics has a great future! I think my site contains about all I know about robotics at this point – it was just a diversion for me…if you have any specific questions, ask away! Regards, Frank
