
Deploying with Alexa

This past Thursday (26th of October), I spoke at WordCamp Cape Town on Automating WordPress. Besides showcasing tools like DeployBot and WP-Make to the Cape Town community, I ended off with a demonstration of a voice-controlled deployment to a DigitalOcean VPS using the DeployBot REST API and Amazon Alexa. It was my first real hands-on experience with AWS Lambda and the Alexa Skills Kit, so my demonstration was merely a proof of concept and didn't account for every scenario or error.

As far as Cape Town goes, I'm one of the few people who own an Amazon Echo. It's not available to purchase here through commercial retailers, and Amazon doesn't ship flagship tech like the Echo to South Africa. I'm not entirely sure how my girlfriend got her hands on one for me, but I'm not complaining.

I was pleasantly surprised at how many resources are available for developing Alexa Skills. The majority of them are written in Python and Node.js. I've had experience with both technologies, but I feel a lot more comfortable with Node. My original thought was to develop a simple Node.js app using Express and Request, host it on Heroku, and expose an endpoint that the Alexa Service could call. I scrapped that idea when I came across AWS Lambda. Not needing to run a server for my server-side logic seemed like a natural fit, considering the Alexa Skills Kit is an Amazon Developer service.

Here was my thought process:

  1. I want to deploy the latest commit of my project
  2. I want DeployBot to handle it all for me (A bit of initial setup, but far from a headache)
  3. I want to be able to tell Alexa: “Ask DeployBot to deploy environment 118283”
  4. The environment ID should be a variable that Alexa would understand (data type)
  5. Send a POST request to the DeployBot API to trigger a build using the required environment_id (see the sketch after this list)
  6. Have Alexa respond and say: “Deploy to environment 118283 successful”
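
To get a feel for step 5 on its own, here's a minimal sketch of that request outside of Alexa, using the same endpoint, headers and payload as the Lambda further down. The dlm.deploybot.com subdomain and the API token are placeholders for my account; you'd substitute your own.

// Standalone DeployBot deployment trigger, no Alexa involved.
// The host subdomain and token are account-specific placeholders.
var https = require('https');

var environmentId = 118283;

var req = https.request({
   host: 'dlm.deploybot.com',
   path: '/api/v1/deployments',
   port: 443,
   method: 'POST',
   headers: {
      'Content-Type': 'application/json',
      'X-Api-Token': 'TOKEN GOES HERE'
   }
}, function (response) {
   console.log('DeployBot responded with status ' + response.statusCode);
});

req.write(JSON.stringify({ environment_id: environmentId }));
req.end();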

As that sketch shows, DeployBot has a really simple API, so triggering a deployment wasn't difficult. The Alexa Skills Kit is equally straightforward to configure, and the configuration consists of three major elements: Utterances, Intents and Slots.

Utterance

“Hey Alexa, deploy environment 118283”

Intents

Intents are defined as JSON objects that take a couple of properties, such as the type of any variables (number, string, etc.). They are tied directly to the Utterances and the Slots; in a way, an Intent is the structure that Utterances and Slots abide by.

Slot

Essentially a variable that Alexa is aware of. In this case it was the environment_id (118283).

Here's what my Intent and Slot look like:

{
  "intents": [
    {
      "slots": [
        {
          "name": "Environment",
          "type": "NUMBER"
        }
      ],
      "intent": "deployEnvironment"
    }
  ]
}

And here's what my Utterances look like (you can add as many as you like to cover the different ways people might phrase your command):

deployEnvironment deploy to environment {Environment}
deployEnvironment to deploy to environment {Environment}
deployEnvironment deploy my code to environment {Environment}
deployEnvironment please deploy my code to environment {Environment}
deployEnvironment push code to environment {Environment}

You'll notice the interpolated variable {Environment}. This is the Slot; it stores what I say when I speak to Alexa, or more specifically the environment_id. deployEnvironment is my Intent, which ties each of these utterances to my handler. The invocation phrase is configured separately and can be anything you like, such as: "Ask Deploy Bot", "Ask my wife", "Ask Santa Claus", etc.
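
For what it's worth, when the skill is invoked the Alexa service hands the Lambda a JSON event. Trimmed down to just the fields my handler actually reads (the real payload carries session, version and other metadata), it looks roughly like this:

{
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "deployEnvironment",
      "slots": {
        "Environment": {
          "name": "Environment",
          "value": "118283"
        }
      }
    }
  }
}

That's where event.request.intent.slots.Environment.value in the Lambda below comes from; note that the slot value arrives as a string.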

And now for the fun part, the AWS Lambda function:

exports.handler = function( event, context ) {

   var https = require( 'https' );

   // Options for the DeployBot trigger-deployment endpoint.
   var options = {
      host: 'dlm.deploybot.com',
      path: '/api/v1/deployments',
      port: 443,
      method: 'POST',
      headers: {
         'Content-Type': 'application/json',
         'X-Api-Token' : 'TOKEN GOES HERE'
      }
   };

   // The value spoken into the {Environment} slot, e.g. "118283".
   var targetSlot = event.request.intent.slots.Environment.value;
   var report = 'Deploy to environment ' + targetSlot + ' successful.';

   // Collect the API response, then hand the spoken reply back to Alexa.
   var callback = function( response ) {

      var str = '';

      response.on('data', function (chunk) {
         str += chunk;
      });

      response.on('end', function () {
         output(report, context);
      });
   };

   if (event.request.type === "IntentRequest") {

      var req = https.request(options, callback);
      var postData = '{"environment_id":' + targetSlot + '}';

      req.write(postData);
      req.end();
   }
};

function output( text, context ) {

   // Build the Alexa response: spoken output plus a card for the Alexa app.
   var response = {
      outputSpeech: {
         type: "PlainText",
         text: text
      },
      card: {
         type: "Simple",
         title: "DeployBot Trigger Deploy",
         content: text
      },
      shouldEndSession: true
   };

   context.succeed( {response : response} );

}

The Lambda takes care of sending the HTTP POST using the Node.js HTTPS module, returning the spoken response, and updating the Alexa app (iOS in my case) with a card containing all the details. If you dive into the code, it's very rudimentary; as I stated before, there's no error handling, because I ran out of time before WordCamp to make something truly robust.
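
If I were to harden it, the first things I'd add are a check on the status code DeployBot returns and an error listener on the request itself, so Alexa could report a failure instead of always claiming success. A rough sketch of what that would look like inside the handler, reusing the same options, report, targetSlot and output() from above:

   // Only report success when DeployBot actually accepts the request.
   var callback = function( response ) {

      var str = '';

      response.on('data', function (chunk) {
         str += chunk;
      });

      response.on('end', function () {
         if (response.statusCode >= 200 && response.statusCode < 300) {
            output(report, context);
         } else {
            output('Deploy to environment ' + targetSlot + ' failed with status ' + response.statusCode + '.', context);
         }
      });
   };

   var req = https.request(options, callback);

   // Catch network-level failures (DNS, TLS, timeouts) too.
   req.on('error', function (err) {
      output('Deploy request failed: ' + err.message, context);
   });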

N.B: For some context, the project that I am deploying is a WP-Make theme on GitHub. If I add and commit changes and push them to the remote, DeployBot will grab the latest commit and Alexa, once asked, will trigger the deployment.

With Node being asynchronous, this all happens in "real time" and makes for a really engaging user experience.

So without further ado, here's a live demonstration:

As far as smart assistants go, I’ve found Amazon Alexa to be really agile and a lot of fun to use for side projects like this, mainly because of the tight integration between the Alexa Skills Kit and AWS. Maybe one day, we’ll all have one on our desk. Feel free to ping me if you have any questions or for an impromptu code review 😉
