Monday, 30 January 2017

Alexa "Movie Expert" Skill Using a Slot, Python and the OMDB API

Another post in my series on using the Amazon Echo Dot and creating Alexa skills.

After creating my last Alexa skill that used Python and an API, I wanted to extend this by creating a skill that could vary its response widely based upon what the user asked it, i.e. one that didn't just do one thing or respond with one of a limited set of pre-defined responses.

The result is my Amazon Movie Expert skill which aims to be able to provide information on any movie (within reason).  Best to see it in action first!

I don't go back to basics in this post about creating Alexa skills.  Please look at one of my old posts or the tutorials on the interwebs for more on that.

A key point is that all the movie information comes from the Open Movie Database (OMDB) API which can be found here.  All credit to the people who maintain that API.  It's excellent!

The idea is that you're able to say something like "Alexa, ask movie expert about the movie Sing".  To understand how Alexa interprets this, let's break down what was asked:

  • "movie expert" is the invocation name that you configure.
  • "about the movie" is the first part of the utterance and this is configured to map to an Alexa "intent".  This basically points to a function in your Lambda handler.
  • "Sing" is called a slot.  This is effectively a parameter that is passed to the Lambda handler.

So your utterances in the interaction model look something like this:

So the intent is MovieIntent and {MovieName} is your slot.  This means the words spoken at the end of the utterance can be any movie name.
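The screenshot of the utterances hasn't survived in this copy of the post, but based on the description a sample utterance line would look something like this (the exact wording is my reconstruction):

```
MovieIntent about the movie {MovieName}
```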

You then define the intent structure as:

So here we define that the MovieIntent intent has a slot called MovieName.  We also say it's of type "AMAZON.Movie".  Slots are either built-in or user defined custom slots where what the user can say is pre-defined by the developer.  I think the built-in slot type of AMAZON.Movie tells Alexa to expect a movie name to be spoken and so narrows down the range of words Alexa must interpret, thus improving accuracy.  There's a whole set of built-in slots for you to use.
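The intent schema screenshot is also missing from this copy; based on the description above, the fragment defining the intent and its slot would be shaped like this:

```
{
  "intent": "MovieIntent",
  "slots": [
    {
      "name": "MovieName",
      "type": "AMAZON.Movie"
    }
  ]
}
```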

This means that the movie name spoken at the end of the utterance is passed to the AWS Lambda function as a parameter.  You then write a function to handle the request.  Here's a laughably simple architecture diagram showing how it all fits together:

Below is the Python function that handles MovieIntent.  Key points:

  • Many attributes of the intent are passed to the function in the parameter "intent".
  • You can assign the slot to a variable by accessing intent['slots'].
  • The function then forms a URL, passes it to the open movie database (OMDB) API and captures the response.
  • The response is in JSON format.  Key elements are extracted and form the string that Alexa reads back to the user.
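The function also relies on two constants, OmdbApiUrl and UrlEnding, which aren't shown in this post.  Plausible values (the exact query parameters are my assumption, based on the OMDB API's title-search convention) would be:

```python
#Hypothetical values - the post doesn't show these two constants
OmdbApiUrl = "http://www.omdbapi.com/?t="   #search by movie title
UrlEnding = "&plot=short&r=json"            #short plot, JSON response
```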

#This is the main function to handle requests for movie information
def get_movie_info(intent, session):
    card_title = intent['name']
    session_attributes = {}
    should_end_session = True

    if 'MovieName' in intent['slots']:
        #Get the slot information
        MovieToGet = TurnToURL(intent['slots']['MovieName']['value'])
        #Form the URL to use
        URLToUse = OmdbApiUrl + MovieToGet + UrlEnding
        print("This URL will be used: " + URLToUse)
        try:
            #Call the API
            APIResponse = urllib2.urlopen(URLToUse).read()
            #Get the JSON structure
            MovieJSON = json.loads(APIResponse)
            #Form the string to use
            speech_output = "You asked for the movie " + MovieJSON["Title"] + ".  " \
                            "It was released in " + MovieJSON["Year"] + ".  " \
                            "It was directed by " + MovieJSON["Director"] + ".  " \
                            "It starred " + MovieJSON["Actors"] + ".  " \
                            "The plot is as follows: " + MovieJSON["Plot"] + ".  " \
                            "Thank you for using the Movie Expert Skill.  "
        except urllib2.URLError:
            #The API call itself failed
            speech_output = "I encountered a web error getting information about that movie.  " \
                            "Please try again.  " \
                            "Thank you for using the Movie Expert Skill.  "
        except (KeyError, ValueError):
            #The response couldn't be parsed, e.g. the movie wasn't found
            speech_output = "I encountered an error getting information about that movie.  " \
                            "Please try again.  " \
                            "Thank you for using the Movie Expert Skill.  "
    else:
        #No movie name was captured in the slot
        speech_output = "I didn't catch a movie name.  Please try again."

    reprompt_text = None
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))

#Takes the information from the slot and turns it into the format for the URL,
#which puts + signs between words
def TurnToURL(InSlot):
    #Split the string into parts using the space character
    SplitStr = InSlot.split()
    OutStr = ""   #Just initialise to avoid a reference before assignment error
    #Take each component and add a + to the end
    for SubStr in SplitStr:
        OutStr = OutStr + SubStr + "+"
    #Just trim the final + off as we don't need it
    return OutStr[:-1]
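As an aside, the standard library will do this space-to-plus conversion for you (shown in Python 3 syntax here; in the Python 2 used in this post the same function lives in urllib rather than urllib.parse):

```python
#Python 3 shown; in Python 2 it's urllib.quote_plus
from urllib.parse import quote_plus

print(quote_plus("the lord of the rings"))  # the+lord+of+the+rings
```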

Monday, 23 January 2017

Amazon Alexa Skill with Python and Strava API

In my last post I described how I'd followed a step-by-step guide to create an Amazon Alexa Skill for my Amazon Echo Dot.  This used Node.js and was basically an easy "join-the-dots" guide to creating your first skill and getting it certified.

Building on this I wanted to build a skill that:

  1. Uses Python - my language of choice.
  2. Calls an API (rather than just responding with pre-canned data).
  3. Teaches me more about how to configure skills to do different things.

Here's the skill in action.  I'll then describe how I made it:

To start with I used the Amazon Python "Colour Expert" skill which can be found here.  Follow this if it's your first time with an Alexa skill as it will show you how to use the Amazon Developer site and Amazon Web Services Lambda to create a skill using Python.

My idea was to modify this skill to fetch and read out data from my Strava (exercise logging) account.  I've previously blogged on using the Strava API in posts like this and this.

To modify the Colour Expert skill I initially did the following on the Amazon Developer site on the "Skill Information" tab:

  • Name = "Sports Geek Stuff".  This is just what you'd see on the Alexa smartphone app if you published the skill.
  • Invocation name = "sports geek".  This is what you say to Alexa to specify that you're using a particular skill.  So you'd start by saying "Alexa, ask sports geek" then subsequent words define what you want the skill to do.

I then added extra configuration on the "Interaction Model" tab to define how I should interact with the skill to get the Strava data.

The "Intent Schema" basically creates a structure that maps things you say to Alexa to the associated functions that you run in the AWS Lambda Python script (more on this below).  I added the following to the bottom of the Intent Schema.

      {
        "intent": "StravaStatsIntent"
      }

I then defined an utterance (so basically a thing you say) that links to this intent.  The utterance was:

StravaStatsIntent for strava stats

...which basically means, when you say "Alexa, ask sports geek for strava stats" then Alexa calls the associated Python script in AWS Lambda with the parameter "StravaStatsIntent" to define what function to call.

Apart from ace voice to text translation, there's very little intelligence here.  You could configure:

StravaStatsIntent for a badger's sticker collection

...or even...

StravaStatsIntent for brexit means brexit

...and these crazy sayings would still result in the StravaStatsIntent being selected.

You also configure the Alexa skill to map to a single AWS Lambda function which will handle all the intents you configure.  So in simple terms an invocation name selects an Alexa skill which is linked to an AWS Lambda function.  Then utterances are configured that link to intents, each of which is handled by the Lambda function.

Here's a simple diagram of how it all hangs together:

So next you have to edit the Python Lambda function to handle the intents.  I left the colour expert skill as is and just added code for my Strava intent.  There are some other interesting aspects of the Python script that I'll explore later (these are slots and session handling) so I didn't want to remove this.

To modify the code I went to AWS, logged in, selected Lambda and chose to edit the code inline.  This gave me a screen like this that I could use to edit the Python script:

To modify the code I firstly added references to the Python urllib2 and json modules as I needed to use these (you can see them in the image above).

I also added my Strava developer API key and a Unix timestamp to use for the API call as constants.
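Both are just module-level constants.  The timestamp can be computed for whatever start date you want activities from (the names StravaToken and TheUnixTime match the code further down; the actual date I used isn't shown, so the one below is illustrative):

```python
import calendar
import time

#Illustrative start date - ask for activities after 1 January 2017 (UTC)
TheUnixTime = str(calendar.timegm(time.strptime("2017-01-01", "%Y-%m-%d")))
print(TheUnixTime)  # 1483228800
```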

I then edited the on_intent function to dispatch the StravaStatsIntent.  This is the addition at the bottom of the block below.

    # Dispatch to your skill's intent handlers
    if intent_name == "MyColorIsIntent":
        return set_color_in_session(intent, session)
    elif intent_name == "WhatsMyColorIntent":
        return get_color_from_session(intent, session)
    elif intent_name == "AMAZON.HelpIntent":
        return get_welcome_response()
    elif intent_name == "AMAZON.CancelIntent" or intent_name == "AMAZON.StopIntent":
        return handle_session_end_request()
    elif intent_name == "StravaStatsIntent":
        return handle_strava()
    else:
        raise ValueError("Invalid intent")

I then created the handle_strava() function, all of which is shown below.  Yes, I know my code is clunky!

Key points here are:
  • Making the API call using urllib2 and getting a response
  • Parsing the JSON and building an output string
  • Not using reprompt_text, which could be used to prompt the user again as to what to say
  • Setting should_end_session to True as we don't want the session to continue beyond this point
  • Calling the build_response function to actually build the response to pass back to the Alexa skill

#Get us some Strava stats
def handle_strava():
    """ If we wanted to initialize the session to have some attributes we could
    add those here """

    session_attributes = {}
    card_title = "parkrun"
    #Access the Strava API using a URL
    StravaText = urllib2.urlopen('' + StravaToken + '&per_page=200&after=' + TheUnixTime).read()
    #Parse the output to get all the information.  Set up some variables
    SwimCount = 0
    SwimDistance = 0
    RunCount = 0
    RunDistance = 0
    BikeCount = 0
    BikeDistance = 0

    #See how many Stravas there are.  Count the word 'name', there's one per record
    RecCount = StravaText.count('name')

    #Load the string as a JSON to parse
    StravaJSON = json.loads(StravaText)

    #Loop through each one
    for i in range(0,RecCount):
      #See what type it was and process accordingly
      if (StravaJSON[i]['type'] == 'Swim'):
        SwimCount = SwimCount + 1
        SwimDistance = SwimDistance + StravaJSON[i]['distance']
      elif (StravaJSON[i]['type'] == 'Ride'):
        BikeCount = BikeCount + 1
        BikeDistance = BikeDistance + StravaJSON[i]['distance']
      elif (StravaJSON[i]['type'] == 'Run'):
        RunCount = RunCount + 1
        RunDistance = RunDistance + StravaJSON[i]['distance']
    #Turn distances into km
    SwimDistance = int(SwimDistance / 1000)
    BikeDistance = int(BikeDistance / 1000)
    RunDistance = int(RunDistance / 1000)
    #Build the speech output
    speech_output = 'Swim Count = ' + str(SwimCount) + '. Swim Distance = ' + str(SwimDistance) + " kilometres.  "
    speech_output = speech_output + 'Bike Count = ' + str(BikeCount) + '. Bike Distance = ' + str(BikeDistance) + " kilometres.  "
    speech_output = speech_output + 'Run Count = ' + str(RunCount) + '. Run Distance = ' + str(RunDistance) + " kilometres."
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    # Now we set re-prompt text to None.  See notes elsewhere for what this means
    #reprompt_text = "Please tell me your favorite color by saying, " \
    #                "my favorite color is red."
    #This could be set to False if you want the session to continue
    should_end_session = True
    reprompt_text = None

    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))

You can test with an Amazon Echo device or just use the Alexa Skills Kit test capability.

Sunday, 15 January 2017

My First Amazon Alexa Skill

Recently I bought an Amazon Echo Dot as my colleagues had been raving about them.  Oh, my, what an excellent piece of kit it is.  As long as you speak clearly and think about the clarity of the words you use then the Alexa voice recognition system rarely fails.

There's plenty of reviews about Alexa and the Echo Dot on the interweb so I won't go into general usage here.  (Although the Easter Eggs are excellent fun).

As a Geek, my main driver for buying an Echo Dot was to write my own Alexa Skills.  I started using this tutorial and it's so super easy!  Usually I'd talk through the tutorial in detail on this blog but it was so easy that it's not worth going through step-by-step.

What I will do is provide a super-simple "architectural" diagram of how it all works.  Here it is:

So in simple terms, to create a skill you:

  1. Configure the skill and associated attributes in the Amazon Skills Kit from the Amazon Developer site.  This is generally about the language you'll use to interact with the skill.  The site also takes you through all the workflow from defining your Skill to testing it then certifying it.
  2. Define a function in Amazon Web Services Lambda to actually handle the logic behind your Alexa skill.

(Note you don't have to use AWS Lambda, you can define your own web service and logic to interact with the Alexa Skills Kit.  Additionally the function that handles the Alexa logic can make calls out to the internet to gather further information to augment your skill, can write to databases etc).

The tutorial mentioned above is super easy to follow.  The only step I vaguely had trouble with is where it covers setting up a node.js environment but I managed to do this by following the steps super carefully.

So I developed the skill, tested it, had it certified by Amazon and now it's available on the Amazon Alexa app to be enabled by anyone with an Echo or Echo Dot.  Proud times!  (I do realise that this was super easy to do so I shouldn't boast too much!).

Here's a video of it in action:

Sunday, 8 January 2017

Using the Resources of the Fitbit API

In previous posts I've covered the basics of using a Raspberry Pi and the Fitbit API to extract and analyse the data created by a Fitbit Fitness tracker.  In particular, in this post I covered using OAUTH2.0 to access the API.

For this post I thought I'd do a more general overview of the range of data available through the Fitbit API.  So go back to the OAUTH2.0 post to see how to get access and refresh tokens etc.  Then come back here to see what you can do with the API.

Once you've got the required tokens, all you need to do to access data is specify different URLs.  In this post I'll describe a range of URLs that can be used to access different data.  There's a massive variety of data available and almost limitless combinations so just use this as a set of worked examples then use the Fitbit Developer documentation to work out other options.
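The example URLs from the original post haven't survived in this copy, so the sketch below shows the general pattern instead (the helper function is mine; the path layout follows the Fitbit Web API convention of /1/user/-/&lt;resource&gt;/date/&lt;date&gt;.json, with the OAUTH2.0 access token sent as a Bearer header):

```python
#Hypothetical helper showing the URL pattern used throughout this post
def build_fitbit_url(resource, date):
    return "https://api.fitbit.com/1/user/-/" + resource + "/date/" + date + ".json"

url = build_fitbit_url("activities", "2016-12-27")
print(url)  # https://api.fitbit.com/1/user/-/activities/date/2016-12-27.json
#A real call then adds the access token, e.g. with urllib2:
#request = urllib2.Request(url, headers={"Authorization": "Bearer " + AccessToken})
#response = urllib2.urlopen(request).read()
```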

Remember I'm just a guy that does this for a hobby and likes to help other people along the way.  If I use the wrong terms or describe things in a less than 100% accurate manner then please take this in the right spirit or even comment below to help me correct matters.

Activity Data
The most generic data available from the API.  Here's a simple URL to give you summary of activity data for a given date:

So a simple base URL and extra elements to specify "activities" and a date to get data for.  This yields:

{"activities":[],"goals":{"activeMinutes":30,"caloriesOut":2812,"distance":8.05,"floors":25,"steps":10000},"summary":{"activeScore":-1,"activityCalories":1952,"caloriesBMR":1725,"caloriesOut":3353,"distances":[{"activity":"total","distance":16.93},{"activity":"tracker","distance":16.93},{"activity":"loggedActivities","distance":0},{"activity":"veryActive","distance":13.18},{"activity":"moderatelyActive","distance":0.59},{"activity":"lightlyActive","distance":3.16},{"activity":"sedentaryActive","distance":0}],"elevation":155.45,"fairlyActiveMinutes":15,"floors":51,"heartRateZones":[{"caloriesOut":1546.2586,"max":89,"min":30,"minutes":793,"name":"Out of Range"},{"caloriesOut":271.2272,"max":124,"min":89,"minutes":47,"name":"Fat Burn"},{"caloriesOut":21.8036,"max":151,"min":124,"minutes":2,"name":"Cardio"},{"caloriesOut":861.961,"max":220,"min":151,"minutes":57,"name":"Peak"}],"lightlyActiveMinutes":206,"marginalCalories":1332,"restingHeartRate":55,"sedentaryMinutes":725,"steps":17309,"veryActiveMinutes":83}}

So even with it in raw JSON format you can see some of the key Fitbit metrics that are returned, such as steps, floors, caloriesOut and restingHeartRate.
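Picking those metrics out programmatically is just standard JSON handling; here's a minimal sketch using a trimmed-down copy of the response above:

```python
import json

#A trimmed-down version of the activity response shown above
sample = '{"summary":{"steps":17309,"floors":51,"caloriesOut":3353,"restingHeartRate":55}}'
summary = json.loads(sample)["summary"]
print(summary["steps"])             # 17309
print(summary["restingHeartRate"])  # 55
```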

Step Data
The main reason people get their Fitbit is to count their steps!

Here's a simple example of a URL that provides data for 7 days up to and including the date you specify:

The response is as follows:


So here you can see the 7 measurements and how the value for 2016-12-27 matches that of the activity data above.

You could get the same data but by specifying a start and end date by using:

If you ask Fitbit nicely they will give you access to intraday data.  See here for more details on how to do this.  An example URL to get 15 minute segments for a single day is:

Which gives data like this:


...not that interesting for this time period as I was asleep.  It gets better later in the day when I went for a run!


Other Measurements
You can use the same URL structure for other key tracker metrics like:


(i.e. replace "/steps" in the above examples with these words).

If you have a tracker that measures sleep then you can use a URL like the one below to get sleep data:

Which gives data like this at the start:


So some generic information then (by default) a record for every minute of your sleep.  Here the values are:
3=Really awake

Then a summary at the end:


Heart Rate
Finally, if you have a tracker that also measures heart rate you can use a URL like the one below to get data:

{"activities-heart":[{"dateTime":"2016-12-27","value":{"customHeartRateZones":[],"heartRateZones":[{"caloriesOut":1546.2586,"max":89,"min":30,"minutes":793,"name":"Out of Range"},{"caloriesOut":271.2272,"max":124,"min":89,"minutes":47,"name":"Fat Burn"},{"caloriesOut":21.8036,"max":151,"min":124,"minutes":2,"name":"Cardio"},{"caloriesOut":861.961,"max":220,"min":151,"minutes":57,"name":"Peak"}],"restingHeartRate":55}}],"activities-heart-intraday":{"dataset":[{"time":"00:00:00","value":65},{"time":"00:01:00","value":65},{"time":"00:02:00","value":65},{"time":"00:03:00","value":65},{"time":"00:08:00","value":65},{"time":"00:09:00","value":65},{"time":"00:10:00","value":65},{"time":"00:11:00","value":64},{"time":"00:12:00","value":64},{"time":"00:13:00","value":65},{"time":"00:14:00","value":66},{"time":"00:15:00","value":64},{"time":"00:16:00","value":61},

So first some general data then some measurements at up to one minute intervals (if you have access to this data).
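Once parsed, the intraday readings are easy to work with; here's a minimal sketch that averages a hand-built subset of the dataset above:

```python
import json

#A small subset of the intraday heart rate dataset shown above
sample = ('{"activities-heart-intraday":{"dataset":['
          '{"time":"00:00:00","value":65},'
          '{"time":"00:01:00","value":65},'
          '{"time":"00:11:00","value":64}]}}')
dataset = json.loads(sample)["activities-heart-intraday"]["dataset"]
average = sum(reading["value"] for reading in dataset) / float(len(dataset))
print(round(average, 1))  # 64.7
```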

So that was a whistle-stop tour of using the API.  Have a play, use different URLs and see what you can get!

10 Most Boring Things Video

My children watch a lot of YouTube videos.  (Some might say too many).  Some of the videos are along the lines of "10 most awesome <something>" where <something> is amusement park rides or water flumes.

Being a bit of a contrary chap I thought it would be fun to do a video along the lines of "10 most boring things".  And here it is:

I captured the video on my Canon Digital SLR and edited it using Cyberlink PowerDirector 12.0.  It was super-easy and a lot of fun to do together.  The hardest thing was not making the components of the video too interesting!!