Hey, Alexa! Test-Driven Development with Virtual Assistants

Nate Lentz
C2 Team Member Alumni

Test-driven development (TDD). It’s a rather simple concept: writing tests for features that haven’t been written yet. We won’t cover TDD in its entirety here; if you’re interested in learning the basics, I’d recommend reading up on Agile Alliance’s article here before reading on.

The C2 Group has years of experience with browser-based solutions, tackling web projects and requests of all sizes, but this was our first opportunity to develop and test an Alexa skill. Based on research and the experiences of other developers, we knew an Alexa skill would be a great chance to try out TDD: writing tests for a product that performs a repeatable function is far more practical than writing tests for websites.

Alexa skills introduce an interesting development process. Building for a voice user interface means the end product is far less “visible”: there is no physical or front-end outcome to the responses given by the skill, just one JSON object handed to the Alexa device to interpret. That JSON object is the key, as it defines the behavior of the skill. It holds everything about the current session as a user interacts with Alexa, such as what the user asked and instructions for what Alexa should do next. These JSON responses passed between the skill and the device are what we write our tests against.
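As a sketch of the shape of that object, here is a minimal response envelope in the format Alexa expects. The field names follow the standard Alexa response format; the speech text is an illustrative placeholder, not our client’s actual skill.

```typescript
// Minimal shape of the JSON object a skill hands back to the Alexa device.
// The field names follow the Alexa response format; the text is illustrative.
interface AlexaResponse {
  version: string;
  response: {
    outputSpeech: { type: "PlainText"; text: string };
    shouldEndSession: boolean;
  };
}

const sampleResponse: AlexaResponse = {
  version: "1.0",
  response: {
    outputSpeech: { type: "PlainText", text: "Welcome to the skill." },
    shouldEndSession: false, // keep the session open for follow-up requests
  },
};

console.log(JSON.stringify(sampleResponse, null, 2));
```

Everything Alexa says and does in response to the user comes down to fields like these, which is what makes the object such a natural target for assertions.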

When Alexa determines that a request is for your skill, it dispatches the request to Lambda, the AWS cloud service where the skill’s logic lives. The Lambda function processes the request and sends a response back to the device instructing it what to do or say. This is where the significance of TDD comes into play. Using JavaScript libraries like chai.js and mocha.js, we can write tests against simulations of these requests, making it possible to test our code without communicating with Lambda or an Alexa device. This is awesome news for developers because it means we do not need to deploy to AWS each time we want to test changes in our code, nor do we need to constantly say “Hey, Alexa…” which might result in strange looks from co-workers.
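As a rough sketch of the idea (not our client’s actual code), the skill’s logic can be exercised as a plain function: feed it a simulated request object and assert on the JSON it returns. The `handleRequest` function and the `HelloIntent` name here are hypothetical stand-ins.

```typescript
// Hypothetical skill logic: maps an intent request to a response object.
// In a deployed skill this lives in the Lambda handler; as a plain function
// it can be run and tested locally without touching AWS or a device.
type IntentRequest = { request: { type: string; intent?: { name: string } } };

function handleRequest(req: IntentRequest) {
  const intentName = req.request.intent?.name ?? "";
  const text =
    intentName === "HelloIntent"
      ? "Hello from the skill."
      : "Sorry, I did not get that.";
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text },
      shouldEndSession: true,
    },
  };
}

// A simulated request, the kind of JSON an Alexa device would send via Lambda.
const simulatedRequest: IntentRequest = {
  request: { type: "IntentRequest", intent: { name: "HelloIntent" } },
};

console.log(handleRequest(simulatedRequest).response.outputSpeech.text);
// prints "Hello from the skill."
```

In the real test suite, checks like this would sit inside mocha `it()` blocks with chai’s `expect` assertions instead of a bare `console.log`.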

As a feature is developed, we run tests against changes to the code ensuring each criterion is met. Once the tests written for a feature pass, we can be confident the feature is close to completion and is ready for testing on an actual device. Ultimately, we can use TDD to cut out Lambda and an Alexa device until we’re confident the logic in our code is behaving as expected.

Below is a basic request from an Alexa device to a skill to execute an intent:

<p> CODE: https://gist.github.com/thec2group-blog/9eb2785175f120ff37e281043db1fb68.js</p>

As I mentioned before, TDD lets us isolate the entire testing process to a single computer. Using the Alexa Development Console, we can capture what the request looks like for a specific intent and use that as sample input for our tests. We can then write a test which evaluates the response generated by our code to ensure the output looks as expected.

Check out the example below of a basic test (in TypeScript) which evaluates the structure of the response constructed by the skill. If anything is wrong, the output of the test will tell us, and we can diagnose bugs as needed.

<p> CODE: https://gist.github.com/thec2group-blog/ddbd80ccef8f825c3ff2c990facdaffc.js</p>

Writing and running these individual intent tests locally is a great way to see if we’re on track. TDD lets us see which intents are failing and where, without the need to sort through logs. In my experience, it was important to use both the Alexa Development Console and the tests I had written. The console is a great introduction to developing an Alexa skill, giving you the basic JSON input from the Alexa device to test against, but once you’ve made changes to the code behind each intent, it can’t tell you what exactly in the code is causing an error. We can look at the output and try to interpret the issue, but it won’t say specifically where or what the issue is. Another downside to the Alexa Development Console is that you need to manually type in your request to Alexa each time you wish to test a skill, meaning you’d need to try every possible command to ensure the skill works correctly. Automated tests, by contrast, return results that narrow in on exactly what failed within each intent. With this in mind, continually running tests makes it easier to spot regressions introduced by the development of new features.

I had an awesome time developing and testing an Alexa skill, something outside of C2’s normal realm, while also getting to try out TDD in a more practical application. It’s not ideal or easy to write tests for websites, but it’s much easier to apply TDD to a product that performs the same function repeatedly. On the flip side, because we had to think about what each feature needed to do in addition to all the possible outcomes, it was challenging to know when we were done. To write a good test, you need to think critically about the device and its request, not just about making the test pass. As we got closer to the end of the development project, it became harder to write new tests.

Overall, I really enjoyed following a test-driven approach to create our client’s Alexa skill. Working on the project helped pull back the curtain on an Alexa device, and I picked up some AWS skills to boot. Something that appears as “black magic” to most was a unique opportunity to dissect and see the anatomy behind an Alexa skill.
