Saturday, 26 March 2016

Instagram API Using Raspberry Pi

Mmmm brunch, my favourite meal of the day (after breakfast, dinner, lunch, tea, high tea, elevenses and supper).


But why a picture of brunch on a serious tech blog like this?  It would seem that the young people these days all like Instagram.  Never one to not follow fashion, I thought I'd get into Instagram Geek Dad style!  That means learning how to use the Instagram API on my Raspberry Pi and getting access to pictures of brunch from my account.

There's good documentation online, but here's my step-by-step guide on how to do it.  Remember this post is for Geek Dads like me, not IT professionals, so please don't scorn me if you think this is easy meat.

Step 1 - Sign up and Register an Application
Using the Instagram API relies on OAuth 2.0, much like the Fitbit API that I blogged about here.

Step 1 is about logging on, registering your app and getting secret data for your application.

Go to the Instagram developer site at https://www.instagram.com/developer/ and log in using your Instagram account credentials (i.e. those you get when you sign up as a normal, non-Geek user).  You'll need to provide extra details like why you want to get access to the API.  I put something along the lines of "to test what the API can do with my Raspberry Pi".  I'm sure you can put whatever you like here (within reason).

Select the "Manage Clients" button (top right of the screen) and add details for your specific test client.  I just used this blogs address for the various URL fields requested and my Gmail address for the Support email address.  You'll be given a client ID and a client secret that you need to log these for the next steps.

Step 2 - Get the User (You) to Authorise Your Test App
For the next step you need to get the user (you in this case) to authorise your app to access their account.  To do this you simply point a browser at an address like the one below:

https://api.instagram.com/oauth/authorize/?client_id=<YourIDHere>&redirect_uri=http://pdwhomeautomation.blogspot.com&response_type=code&scope=basic+public_content+follower_list+comments+relationships+likes

Here the client ID is the one you got when you registered your app and the scope parameters at the end are described here.  I basically requested all possible scope items for my app.

You may have to log in with your Instagram credentials if you're not already logged in, and you will have to click a button marked "Authorise".  This effectively links your Instagram account to the developer application you created.

The browser session will be redirected to the URL you specified as your "Redirect URI" when you signed up, and the last segment of the URL is the "code" you use for step 3.  Log this!!
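If you'd rather pull the code out programmatically than by eye, here's a minimal sketch (Python 2, as used elsewhere on this blog; the URL below is just an example of what lands in your browser's address bar, the code comes back as a query string parameter):

#Minimal sketch: pull the "code" parameter out of the redirect URL
import urlparse

#Example only - use whatever appears in your browser's address bar
RedirectURL = "http://pdwhomeautomation.blogspot.com/?code=<YourCodeHere>"

#The code is a query string parameter on the redirect URI
QueryString = urlparse.urlparse(RedirectURL).query
print urlparse.parse_qs(QueryString)["code"][0]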

Step 3 - Get an Access Token
Like all OAuth 2.0 implementations I've seen, you use an access token to authorise your API requests.  However, the token seems to be long-lived; there's no periodic requesting of a new access token using a refresh token.  (That said, Instagram warn you may need to request a new access token at some point).

To get the access token you need to make an HTTP POST request with a bunch of parameters in the message body.  Instagram helpfully give you the structure of a cURL command you can run on your Raspberry Pi.  Here it is:

curl -F 'client_id=CLIENT_ID' \
  -F 'client_secret=CLIENT_SECRET' \
  -F 'grant_type=authorization_code' \
  -F 'redirect_uri=AUTHORIZATION_REDIRECT_URI' \
  -F 'code=CODE' \
  https://api.instagram.com/oauth/access_token

Simply replace the parameters in the above command with the ones you've collected through this process.
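If you'd rather stay in Python than shell out to cURL, the equivalent POST is only a few lines.  Here's a sketch using the same urllib2 style as my Fitbit scripts (note it sends the parameters form-encoded rather than as multipart form data, which is the standard OAuth 2.0 way):

#Sketch: the same token exchange as the cURL command, but in Python
import urllib
import urllib2
import json

TokenURL = "https://api.instagram.com/oauth/access_token"

#Same parameters as the cURL command above; swap in your own values
BodyText = {'client_id' : '<YourClientID>',
            'client_secret' : '<YourClientSecret>',
            'grant_type' : 'authorization_code',
            'redirect_uri' : '<YourRedirectURI>',
            'code' : '<YourCode>'}

#URL encode the parameters and POST them
TokenRequest = urllib2.Request(TokenURL, urllib.urlencode(BodyText))
TokenResponse = urllib2.urlopen(TokenRequest).read()

#Pick the access token out of the JSON response
print json.loads(TokenResponse)['access_token']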

Either way, the response will be something like this (details changed of course):
{"access_token":"3911234.abcder.;dfjlkpaefjpfj;ldfjljdfljdff","user":{"username":"MyUserName","bio":"A geek, UK\udfdfdfd\dfdff\rrr\34546\454534545\fdfhh I love my girls\u\dfdfdf\ererr\webgh\jjkgkt","website":"http:\/\/pdwhomeautomation.blogspot.co.uk","profile_picture":"https:\/\/scontent.cdninstagram.com\/t51.2885-9\/10570229_699866173_1234567_a.jpg","full_name":"The Geek Dad (Geeky Man)","id":"678765"}}

So with the access token at the start of the JSON response, here are a few things you can do on the API.

Using the API
The key thing Instagram warn about is that your app starts in "Sandbox Mode", meaning restricted access to the API in terms of API calls, number of users etc.  You would then need to submit your app for review for it to be promoted to use the full API.  Not required for someone who just wants to play.

The API has a set of endpoints that I'll describe with a few examples below.  The documentation on the Instagram developer site here is pretty good.

In the examples below, replace <YourAccessToken> with the one you got from the OAuth 2.0 request above.

Get information about yourself as a user:

https://api.instagram.com/v1/users/self?access_token=<YourAccessToken>

This provides the same JSON response as when you requested the access token (step 3 above).

You can also see recent media for you as a user by doing:

https://api.instagram.com/v1/users/self/media/recent?access_token=<YourAccessToken>

This gives you a whole bunch of JSON detailing your most recent Instagram posts.  Within this you can see parameters like creation date, location, the filter used, the caption, number of likes etc.  There is also a bunch of URLs for different size versions of the image (low resolution, thumbnail and standard resolution).  Each of these is shown below.
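As a worked example, here's a short sketch that calls the recent media endpoint and prints a few fields for each post.  The field names are the ones I saw in my JSON response, so treat them as indicative:

#Sketch: list recent posts with caption, likes and the standard resolution URL
import urllib2
import json

AccessToken = "<YourAccessToken>"
MediaURL = "https://api.instagram.com/v1/users/self/media/recent?access_token=" + AccessToken

#Fetch and parse the JSON response
MediaJSON = json.loads(urllib2.urlopen(MediaURL).read())

#Each post sits in the "data" list
for Post in MediaJSON["data"]:
    #The caption can be null if you didn't write one
    if Post["caption"]:
        print Post["caption"]["text"]
    print "Filter: " + Post["filter"] + ", likes: " + str(Post["likes"]["count"])
    print "Standard resolution: " + Post["images"]["standard_resolution"]["url"]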

[Images: low resolution, thumbnail and standard resolution versions of the picture.]

Mmmmm, brunch!

Sunday, 20 March 2016

What to do with 3.5 Million Heart Rate Monitor Readings?

Previously on Paul's Geek Dad blog I've written about how the heart rate monitor readings from my Fitbit Charge HR seem to be showing I'm getting fitter.  Here's the latest chart:


A nice trend but massively aggregated and smoothed.  I decided to play with the data in its rawest format possible to see what I could see.  Fitbit allow you access to their "intraday" data for personal projects if you ask them nicely.  The webpage explaining this is here and what they say is:

Access to the Intraday Time Series for all other uses is currently granted on a case-by-case basis. Applications must demonstrate necessity to create a great user experience. Fitbit is very supportive of non-profit research and personal projects. Commercial applications require thorough review and are subject to additional requirements. Only select applications are granted access and Fitbit reserves the right to limit this access. To request access, email api@fitbit.com.

I've previously used this intraday data to look at my running cadence, using Fitbit API derived data to see whether attempts to change my running style were actually working.  Looking at the Fitbit API documentation I saw that heart rate data could be obtained at sub-minute granularity.  Whoopee!

An example URL to get per-second data from the Fitbit API using the OAuth 2.0 method I previously blogged about is:

https://api.fitbit.com/1/user/-/activities/heart/date/2015-03-01/1d/1sec.json

...which yields this at the start (abridged):

{"activities-heart":[{"dateTime":"2015-03-01","value":{"customHeartRateZones":[],"heartRateZones":[{"caloriesOut":2184.1542,"max":90,"min":30,"minutes":1169,"name":"Out of Range"},{"caloriesOut":891.10584,"max":126,"min":90,"minutes":167,"name":"Fat Burn"},{"caloriesOut":230.65056,"max":153,"min":126,"minutes":23,"name":"Cardio"},{"caloriesOut":133.98084,"max":220,"min":153,"minutes":11,"name":"Peak"}],"restingHeartRate":66}}],"activities-heart-intraday":{"dataset":[{"time":"00:00:00","value":75},{"time":"00:00:15","value":75},{"time":"00:00:30","value":75},{"time":"00:00:45","value":75},{"time":"00:01:00","value":75},{"time":"00:01:15","value":75},{"time":"00:01:30","value":75},{"time":"00:01:45","value":75},{"time":"00:02:00","value":75},{"time":"00:02:15","value":75},{"time":"00:02:30","value":75},{"time":"00:02:45","value":75},{"time":"00:03:00","value":75},{"time":"00:03:15","value":75},{"time":"00:03:30","value":74},{"time":"00:03:40","value":72}

...and then ends...

,{"time":"23:55:15","value":62},{"time":"23:55:20","value":61},{"time":"23:55:30","value":62},{"time":"23:55:45","value":62},{"time":"23:56:00","value":62},{"time":"23:56:15","value":62},{"time":"23:56:30","value":62},{"time":"23:56:40","value":61},{"time":"23:56:55","value":61},{"time":"23:57:10","value":63},{"time":"23:57:20","value":61},{"time":"23:57:30","value":61},{"time":"23:57:45","value":61},{"time":"23:57:50","value":61},{"time":"23:58:05","value":61},{"time":"23:58:10","value":62},{"time":"23:58:25","value":62},{"time":"23:58:30","value":62},{"time":"23:58:40","value":61},{"time":"23:58:50","value":61}],"datasetInterval":1,"datasetType":"second"}}

So it seems it's not actually per-second data (i.e. one measurement per second), but rather a measurement every 10 to 15 seconds.  Which is enough I think!
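You can check this by counting the gaps between consecutive timestamps in the response.  Here's a quick sketch that does just that, assuming the JSON response has been saved to a file first (the path is just an example):

#Sketch: measure the gaps between consecutive heart rate samples
import json
from datetime import datetime
from collections import Counter

#Assumes the API response has been saved to this (example) file
with open("/home/pi/fitbit/heart.json") as FileObj:
    Dataset = json.load(FileObj)["activities-heart-intraday"]["dataset"]

#Count how often each gap (in seconds) occurs between neighbouring samples
Gaps = Counter()
for First, Second in zip(Dataset, Dataset[1:]):
    T1 = datetime.strptime(First["time"], "%H:%M:%S")
    T2 = datetime.strptime(Second["time"], "%H:%M:%S")
    Gaps[(T2 - T1).seconds] += 1

#Most common sampling intervals first
for Interval, Count in Gaps.most_common(5):
    print str(Interval) + "s gap: " + str(Count) + " samples"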

What I wanted was every single sub-one-minute record for the whole time I've had my Fitbit Charge HR (since Jan-15, so ~14 months at the time of writing).  I found that stretching out the time period for which "1sec" data is requested results in it being summarised to daily data.  Hence I needed to write a script to call the API multiple times and log the results.  Bring on the Python (my favourite programming language) on my Raspberry Pi 2.

The full script is pasted in below (you'll need to work out your own secret keys etc using my OAuth 2.0 method).  The core of it is my Fitbit OAuth 2.0 API script from before but I've added elements that take a date range and make one API call per day.  Key elements:

  • Constants "StartDate" and "EndDate" that specify the range of dates to make API calls for.
  • Function "CountTheDays" that computes the number of days between the StartDate and EndDate constants.
  • A for loop that counts down in increments of 1 from the value returned by CountTheDays to 0.  This creates an index that is used for....
  • Function "ComputeADate" that takes the index and turns it back into a date string representing the number of days before EndDate.  This means we step from StartDate to EndDate, making....
  • Call to function "MakeAPICall" to actually make the call.
  • Code to take the API response JSON, strip out the key elements and write to a simple comma separated variable text file.

#Gets the heart rate in per second format, parses it and writes it to file.

import base64
import urllib2
import urllib
import sys
import json
import os
from datetime import datetime, timedelta
import time

#Typical URL for heart rate data.  Date goes in the middle
FitbitURLStart = "https://api.fitbit.com/1/user/-/activities/heart/date/"
FitbitURLEnd = "/1d/1sec.json"

#The date fields.  Start date and end date can be altered to cover the period you want to process
StartDate = "2016-03-10"
EndDate = "2016-03-16"

#Use this URL to refresh the access token
TokenURL = "https://api.fitbit.com/oauth2/token"

#Get and write the tokens from here
IniFile = "/home/pi/fitbit/tokens.txt"

#Here's where we log to
LogFile = "/home/pi/fitbit/persecheartlog.txt"

#From the developer site
OAuthTwoClientID = "<ClientIDHere>"
ClientOrConsumerSecret = "<SecretHere>"

#Some constants defining API error handling responses
TokenRefreshedOK = "Token refreshed OK"
ErrorInAPI = "Error when making API call that I couldn't handle"

#Determine how many days to process for.  First day I ever logged was 2015-01-27
def CountTheDays(FirstDate,LastDate):
  #See how many days there have been between the first and last Fitbit dates.
  FirstDt = datetime.strptime(FirstDate,"%Y-%m-%d")    #First Fitbit date as a Python date object
  LastDt = datetime.strptime(LastDate,"%Y-%m-%d")      #Last Fitbit date as a Python date object

  #Calculate difference between the two and return it
  return abs((LastDt - FirstDt).days)

#Produce a date in yyyy-mm-dd format that is n days before the end date to be processed
def ComputeADate(DaysDiff, LastDate):
  #Parse the last date to be processed
  LastDt = datetime.strptime(LastDate,"%Y-%m-%d")      #Last Fitbit date as a Python date object

  #Subtract the day difference parameter from the last date
  DateResult = LastDt - timedelta(days=DaysDiff)
  return DateResult.strftime("%Y-%m-%d")

#Get the config from the config file.  This is the access and refresh tokens
def GetConfig():
  print "Reading from the config file"

  #Open the file
  FileObj = open(IniFile,'r')

  #Read first two lines - first is the access token, second is the refresh token
  AccToken = FileObj.readline()
  RefToken = FileObj.readline()

  #Close the file
  FileObj.close()

  #See if the strings have newline characters on the end.  If so, strip them
  if (AccToken.find("\n") > 0):
    AccToken = AccToken[:-1]
  if (RefToken.find("\n") > 0):
    RefToken = RefToken[:-1]

  #Return values
  return AccToken, RefToken

def WriteConfig(AccToken,RefToken):
  print "Writing new token to the config file"
  print "Writing this: " + AccToken + " and " + RefToken

  #Delete the old config file
  os.remove(IniFile)

  #Open and write to the file
  FileObj = open(IniFile,'w')
  FileObj.write(AccToken + "\n")
  FileObj.write(RefToken + "\n")
  FileObj.close()

#Make an HTTP POST to get a new access token
def GetNewAccessToken(RefToken):
  print "Getting a new access token"

  #Form the data payload
  BodyText = {'grant_type' : 'refresh_token',
              'refresh_token' : RefToken}
  #URL Encode it
  BodyURLEncoded = urllib.urlencode(BodyText)
  print "Using this as the body when getting access token >>" + BodyURLEncoded

  #Start the request
  tokenreq = urllib2.Request(TokenURL,BodyURLEncoded)

  #Add the headers.  First we base64 encode the client id and client secret with a : in between and create the authorisation header
  tokenreq.add_header('Authorization', 'Basic ' + base64.b64encode(OAuthTwoClientID + ":" + ClientOrConsumerSecret))
  tokenreq.add_header('Content-Type', 'application/x-www-form-urlencoded')

  #Fire off the request
  try:
    tokenresponse = urllib2.urlopen(tokenreq)

    #See what we got back.  If we've reached this part of the code it was OK
    FullResponse = tokenresponse.read()

    #Need to pick out the access token and write it to the config file.  Use a JSON manipulation module
    ResponseJSON = json.loads(FullResponse)

    #Read the access token as a string
    NewAccessToken = str(ResponseJSON['access_token'])
    NewRefreshToken = str(ResponseJSON['refresh_token'])
    #Write the access token to the ini file
    WriteConfig(NewAccessToken,NewRefreshToken)

    print "New access token output >>> " + FullResponse
  except urllib2.URLError as e:
    #Getting to this part of the code means we got an error
    print "An error was raised when getting the access token.  Need to stop here"
    print e.code
    print e.read()
    sys.exit()

#This makes an API call.  It also catches errors and tries to deal with them
def MakeAPICall(InURL,AccToken,RefToken):
  #Start the request
  req = urllib2.Request(InURL)

  #Add the access token in the header
  req.add_header('Authorization', 'Bearer ' + AccToken)

  print "I used this access token " + AccToken
  #Fire off the request
  try:
    #Do the request
    response = urllib2.urlopen(req)
    #Read the response
    FullResponse = response.read()

    #Return values
    return True, FullResponse
  #Catch errors, e.g. a 401 error that signifies the need for a new access token
  except urllib2.URLError as e:
    print "Got this HTTP error: " + str(e)
    HTTPErrorMessage = e.read()
    print "This was in the HTTP error message: " + HTTPErrorMessage
    #See what the error was
    if (e.code == 401) and (HTTPErrorMessage.find("Access token invalid or expired") > 0):
      GetNewAccessToken(RefToken)
      return False, TokenRefreshedOK
    elif (e.code == 401) and (HTTPErrorMessage.find("Access token expired") > 0):
      GetNewAccessToken(RefToken)
      return False, TokenRefreshedOK
    #Return that this didn't work, allowing the calling function to handle it
    return False, ErrorInAPI

#Main part of the code
#Declare these global variables that we'll use for the access and refresh tokens
AccessToken = ""
RefreshToken = ""

print "Fitbit API Heart Rate Data Getter"

#Get the config
AccessToken, RefreshToken = GetConfig()

#Get the number of days to process for
DayCount = CountTheDays(StartDate,EndDate)

#Open a file to log to
MyLog = open(LogFile,'a')

#Loop for the date range
#Process each one of these days stepping back in the for loop and thus stepping up in time
for i in range(DayCount,-1,-1):
  #Get the date to process
  DateForAPI = ComputeADate(i,EndDate)

  #Say what is going on
  print ("Processing for: " + DateForAPI)

  #Form the URL
  FitbitURL = FitbitURLStart + DateForAPI + FitbitURLEnd

  #Make the API call
  APICallOK, APIResponse = MakeAPICall(FitbitURL, AccessToken, RefreshToken)

  if APICallOK:
    #We got a response, let's deal with it
    ResponseAsJSON = json.loads(APIResponse)

    #Get the date from the JSON response just in case.  Then loop through the JSON getting the HR measurements. 
    JSONDate = str(ResponseAsJSON["activities-heart"][0]["dateTime"])

    #Loop through picking out values and forming a string
    for HeartRateJSON in ResponseAsJSON["activities-heart-intraday"]["dataset"]:
      OutString = JSONDate + "," + str(HeartRateJSON["time"]) + "," + str(HeartRateJSON["value"]) + "\r\n"

      #Write to file
      MyLog.write(OutString)
  else:  #Not sure I'm making best use of this logic.  Can tweak if necessary
    if (APIResponse == TokenRefreshedOK):
      print "Refreshed the access token.  Can go again"
    else:
      print ErrorInAPI

MyLog.close()

The code does the job; maybe the error handling could be better.  One thing I ran into was that Fitbit rate limit their API to 150 calls per hour.  As I was grabbing nearly 14 months of data I hit the limit and had to wait for the hour to expire before I could re-start the script (after editing the start and end dates).
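With hindsight, a simple throttle in the main loop would have avoided the babysitting.  150 calls per hour works out at one call every 24 seconds, so sleeping slightly longer than that between calls keeps you under the limit.  A sketch of the idea (with stand-in dates rather than the real date loop):

#Sketch: pace the API calls so we never hit the 150 calls per hour limit
import time

#150 calls per hour is one every 24 seconds; 25 gives a little headroom
SecondsBetweenCalls = 25

#Stand-in dates; in the real script this would be the main for loop
for DateForAPI in ["2016-03-10", "2016-03-11", "2016-03-12"]:
    print "Processing for: " + DateForAPI
    #MakeAPICall would go here, as in the full script above
    time.sleep(SecondsBetweenCalls)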

The raw data output looks like:

pi@raspberrypi ~/fitbit $ head persecheartlog.txt
2015-01-26,20:28:20,72
2015-01-26,20:31:15,70
2015-01-26,20:31:30,70
2015-01-26,20:31:35,75
2015-01-26,20:31:40,70
2015-01-26,20:31:45,68
2015-01-26,20:31:50,66
2015-01-26,20:31:55,64
2015-01-26,20:32:10,64
2015-01-26,20:32:20,84

...and...

pi@raspberrypi ~/fitbit $ tail persecheartlog.txt
2016-03-16,19:02:25,78
2016-03-16,19:02:30,76
2016-03-16,19:02:35,75
2016-03-16,19:02:45,76
2016-03-16,19:02:55,77
2016-03-16,19:03:10,76
2016-03-16,19:03:15,78
2016-03-16,19:03:20,77
2016-03-16,19:03:30,75
2016-03-16,19:03:35,66

...and contained this many measurements:

pi@raspberrypi ~/fitbit $ wc -l persecheartlog.txt
3492490 persecheartlog.txt

So 3.5 million measurements to play with.  Mmmmmmmmmmmmmm.

As I've been doing a lot recently, I used R to analyse the data.  I tried this on my Raspberry Pi 2 and, whilst I could load and manipulate the data using my Pi, R kept crashing when I tried to graph the data :-(.  Hence I resorted to using my PC, which is a bit boring but needs must...

Load the CSV file full of heart rate data:
> FitbitHeart1 <- read.csv(file="c:/myfiles/persecheartlog.txt",head=FALSE,sep=",")

Create useful column names:
> colnames(FitbitHeart1) <- c("Date","Time","HeartRate")

Add a Posix style date/time column to help with graphing:
> FitbitHeart1$DateTimePosix <- as.POSIXlt(paste(FitbitHeart1$Date,FitbitHeart1$Time,sep=" "))

And graph (this took about 5 mins to run on my PC; it got a bit hot and the fan went into overdrive)!
> library(ggplot2)
> qplot(DateTimePosix,HeartRate,data=FitbitHeart1,geom=c("point","smooth"),xlab="Date",ylab="Heart Rate (bpm)",main="Fitbit Heart Rate Data")

Yielding this interesting graph:


Hmmm, so this is what 3.5 million points plotted on a graph looks like!  Maybe a dense Norwegian fir tree forest in the dead of night!  I think there's beauty in any graph and, whilst this is one only its Dad can love, I spot:
  • A regression line (blue) which is decreasing and so matches the Fitbit summarised chart and adds further proof that I'm getting fitter.
  • Gaps in the "trees" where my Fitbit has not been working for one reason or another*.
  • The bottom of the dense set of points (the tree roots maybe) nestling at about 50 beats per minute.  Just looking at the graph, there appear to be more of these now than there were a year ago, showing my resting heart rate is decreasing.
  • The "canopy" of the forest at ~125 bpm, meaning my heart generally sits within the range 50 to 125 bpm.
  • Numerous trees peaking above 125 bpm, which must be when I exercise.  There are more of these trees now as I do more exercise.
OK, that's the Norwegian forest analogy stretched a bit too far...

So maybe I need to think a bit more as to what to do with 3.5 million heart rate data points.  Something for a future blog post...

(*This was where my Fitbit broke during an upgrade and the lovely people from Fitbit replaced it free-of-charge).

Tuesday, 15 March 2016

Strava API Lap Analysis Using Raspberry Pi, Python and R

I'm training for a Half Marathon at the moment and, without meaning to sound too full of myself, I think I'm getting fitter.  This seems to be borne out by my resting heart rate as measured by my Fitbit Charge HR which, after my previous analysis, continues to get lower:


When out for a long run on Saturday it struck me that, for the same perceived effort, it feels like I'm getting faster in terms of how long each kilometre takes me to run.  As Greg LeMond once said, "it doesn't get any easier, you just go faster".  Hence, when running, I formed a plan to look at the pace stats from my ~2 years' worth of Garmin-gathered Strava data to see how my pace is changing.

For a previous post I described how to get Strava activity data from the Strava API.  After registering for a key, an HTTP GET to an example URL such as:

https://www.strava.com/api/v3/activities?access_token=<YourKey>&per_page=200&page=1

...returns a bunch of JSON documents, each of which describes a Strava activity and each of which has a unique ID.  Then, as described in this post, you can get "lap" data for a particular activity with an HTTP GET to a URL like this:

https://www.strava.com/api/v3/activities/<ActvityID>/laps?access_token=<YourKey>

So what is a "lap"?  In  it's simplest form, you get a lap logged every time you press "Lap" on your stopwatch.  So for an old skool runner, every time you pass a km or mile marker in a race you pressed lap and looked at your watch to see if you were running at your target pace.

These days a modern smartwatch will log every lap for post-analysis and can also be set up to auto-lap on time or distance.  For the vast majority of my runs I have my watch configured to auto-lap every km, so I have a large set of data readily available to me!

As with all good data, there is also some messiness in it; specifically some runs where I've chosen to manually log laps, have had the lap function turned off (so the whole run is a single lap) or have a small sub-km distance at the end of the run that is logged as a lap.

So to analyse the data.  I chose to write a Python script on my Raspberry Pi 2 that would:
  • Extract activity data from the Strava API.  It has a limit of 200 activities per page so I had to request multiple pages.
  • Then for each activity, if it was a run, extract lap data from the Strava API.
  • Then log all the lap data, taking into account any anomalies (specifically missing heart rate data), into a file for further analysis.
Here's all the code.  The comments should describe what's going on:

import urllib2
import json

#The base URL we use for activities.  The page number gets appended to the end
BaseURLActivities = "https://www.strava.com/api/v3/activities?access_token=<YourKey>&per_page=200&page="

#The URLs we use for lap data.  The activity ID goes in the middle
StartURLLaps = "https://www.strava.com/api/v3/activities/"
EndURLLaps = "/laps?access_token=<YourKey>"

#Where we log the lap data to
LapLogFile = "/home/pi/Strava/lap_log_1.txt"

#Open the file to use
MyFile = open(LapLogFile,'w')

#Loop extracting data.  Remember it comes in pages
EndFound = False
LoopVar = 1

#Main loop - Getting all activities
while (EndFound == False):
  #Do a HTTP Get - First form the full URL
  ActivityURL = BaseURLActivities + str(LoopVar)
  StravaJSONData = urllib2.urlopen(ActivityURL).read()
  
  if StravaJSONData != "[]":   #This checks whether we got an empty JSON response and so should end
    #Now we process the JSON
    ActivityJSON = json.loads(StravaJSONData)

    #Loop through the JSON structure
    for JSONActivityDoc in ActivityJSON:
      #Start forming the string that we'll use for output
      OutStringStem = str(JSONActivityDoc["start_date"]) + "|" + str(JSONActivityDoc["type"]) + "|" + str(JSONActivityDoc["name"]) + "|" + str(JSONActivityDoc["id"]) + "|"
      #See if it was a run.  If so we're interested!!
      if (str(JSONActivityDoc["type"]) == "Run"):
        #Now form a URL and get the laps for this activity and get the JSON data
        LapURL = StartURLLaps + str(JSONActivityDoc["id"]) + EndURLLaps
        LapJSONData = urllib2.urlopen(LapURL).read()

        #Load the JSON to process it
        LapsJSON = json.loads(LapJSONData)

        #Loop through the lap, checking and logging data
        for MyLap in LapsJSON:
          OutString = OutStringStem + str(MyLap["lap_index"]) + "|" + str(MyLap["start_date_local"]) + "|" + str(MyLap["elapsed_time"]) + "|" 
          OutString = OutString + str(MyLap["moving_time"]) + "|" + str(MyLap["distance"]) + "|" + str(MyLap["total_elevation_gain"]) + "|"
          
          #Be careful with heart rate data, it might not be there if I didn't wear a strap!!!
          if "average_heartrate" not in MyLap:
            OutString = OutString + "-1|-1\n"
          else:
            OutString = OutString + str(MyLap["average_heartrate"]) + "|" + str(MyLap["max_heartrate"]) + "\n"
          
          #Print to screen and write to file
          print OutString
          MyFile.write(OutString)          
    #Set up for next loop
    LoopVar += 1
  else:
    EndFound = True

#Close the log file
MyFile.close()

So this created a log file that looked like this:

pi@raspberrypi:~/Strava $ tail lap_log_1.txt
2014-06-30T05:39:36Z|Run|Copenhagen Canter|160234567|8|2014-06-30T08:18:12Z|283|278|1000.0|6.3|-1|-1
2014-06-30T05:39:36Z|Run|Copenhagen Canter|160234567|9|2014-06-30T08:22:52Z|272|271|1000.0|16.2|-1|-1
2014-06-30T05:39:36Z|Run|Copenhagen Canter|160234567|10|2014-06-30T08:27:29Z|295|280|1000.0|18.1|-1|-1
2014-06-30T05:39:36Z|Run|Copenhagen Canter|160234567|11|2014-06-30T08:34:27Z|58|54|195.82|0.0|-1|-1
2014-06-26T11:16:34Z|Run|Smelsmore Loop|158234567|1|2014-06-26T12:16:34Z|2561|2561|8699.8|80.0|-1|-1
2014-06-20T11:09:00Z|Run|Smelsmore Loop|155234567|1|2014-06-20T12:09:00Z|2529|2484|8015.3|80.1|-1|-1
2014-06-16T16:23:19Z|Run|HQ to VW.  Strava was naughty and only caught part of it|154234567|1|2014-06-16T17:23:19Z|640|640|2169.9|39.2|-1|-1
2014-06-10T11:13:31Z|Run|Sunny squelchy Smelsmore|151234567|1|2014-06-10T12:13:31Z|2439|2429|8235.2|83.4|-1|-1
2014-06-03T10:57:58Z|Run|Lost in Donnington|148234567|1|2014-06-03T11:57:58Z|1933|1874|6266.7|86.0|-1|-1
2014-05-24T07:43:52Z|Run|Calf rehab run|144234567|1|2014-05-24T08:43:52Z|2992|2964|9977.4|170.7|-1|-1

Time to analyse the data in R!

First import the data into a data frame:
> StravaLaps1 <- read.csv(file="/home/pi/Strava/lap_log_1.txt",head=FALSE,sep="|")

Add some meaningful column names:
> colnames(StravaLaps1) <- c("ActvityStartDate","Type","Name","ActivityID","LapIndex","LapStartDate","ElapsedTime","MovingTime","Distance","ElevationGain","AveHeart","MaxHeart")

Turn the distance and time values to numbers so we can do some maths on them:
> StravaLaps1$ElapsedTimeNum = as.numeric(StravaLaps1$ElapsedTime)
> StravaLaps1$DistanceNum = as.numeric(StravaLaps1$Distance)

Now calculate the per km pace.  For the laps which were derived from the "auto-lap at 1 km" settings this just means we're dividing the elapsed time for the lap by 1.  Otherwise it scales up (for <1km laps) or down (for >1km laps) as required.
> StravaLaps1$PerKmLapTime <- StravaLaps1$ElapsedTimeNum / (StravaLaps1$DistanceNum / 1000)

The data comes off the Strava API in reverse chronological order.  Hence, to make sure it can be ordered for graphing, I need to create a Posix time column, i.e. a column that's interpreted as a date and time, not just text.  To do this I first re-format the date and time using strptime, then turn it into Posix.

> StravaLaps1$LapStartDateSimple <- strptime(StravaLaps1$LapStartDate, '%Y-%m-%dT%H:%M:%SZ')
> StravaLaps1$LapStartDatePosix <- as.POSIXlt(StravaLaps1$LapStartDateSimple)

...which gives us data like this:

> head(StravaLaps1[,c(13,14,15,17)])
  MovingTimeNum DistanceNum PerKmLapTime   LapStartDatePosix
1           269        1000          268 2016-03-12 08:55:11
2           263        1000          266 2016-03-12 08:59:44
3           264        1000          267 2016-03-12 09:04:10
4           258        1000          259 2016-03-12 09:08:37
5           271        1000          272 2016-03-12 09:12:56
6           252        1000          255 2016-03-12 09:17:30

Now to draw a lovely graph using ggplot2:
> library(ggplot2)
> qplot(LapStartDatePosix,PerKmLapTime,data=StravaLaps1,geom=c("point","smooth"),ylim=c(200,600),xlab="Date",ylab="KM Pace(s)",main="KM Pace from Strava")


Which gives this:


Now that is an interesting graph!  Each "vertical line" represents a single run, with each point being a lap for that run.  A lot of the recent points are between 250 seconds (so 4m10s per km) and 300s (so 5m per km), which is about right.

On the graph you can also see a nice even spread of runs from spring 2014 to early summer 2015.  There was then a gap while I was injured, until Sep 2015 when I returned from injury and then Dec 2015 when I started training in earnest.

The regression line is interesting, reaching its minimum point by Autumn 2015 (when I started doing short, fast 5km runs at ~4m10s per km) and then starting to increase again as my distance increased (to ~4m30s per km).

So it was interesting to just look at the most recent data.  To find the start point I scanned back in the data to the point I started running again after my injury.  This was derived by running the following command to extract just the first rows of the data frame into a new data frame:
> StravaLaps2 <- StravaLaps1[c(1:423),]

> tail(StravaLaps2[,c(1,3)])
        ActvityStartDate          Name
418 2015-11-10T07:54:51Z   Morning Run
419 2015-11-10T07:54:51Z   Morning Run
420 2015-11-10T07:54:51Z   Morning Run
421 2015-11-05T07:51:20Z Cheeky HQ Run
422 2015-11-05T07:51:20Z Cheeky HQ Run
423 2015-11-05T07:51:20Z Cheeky HQ Run

Where "Cheeky HQ Run" was a short tentative run I did as the first of my "comeback".  A plot using this data and a regression line is shown below:

> qplot(LapStartDatePosix,PerKmLapTime,data=StravaLaps2,geom=c("point","smooth"),ylim=c(200,600),xlab="Date",ylab="KM Pace(s)",main="KM Pace from Strava - Recent")


Now I REALLY like this graph.  Especially as the regression line shows I am getting faster which was the answer I wanted!  However with a bit less data you can see each run in more detail (each vertical line) and an interesting pattern emerges.

Best to look at this by delving into the data even more and just taking Feb and March data:

> StravaLaps3 <- StravaLaps2[c(1:201),]

> qplot(LapStartDatePosix,PerKmLapTime,data=StravaLaps3,geom=c("point","smooth"),ylim=c(200,600),xlab="Date",ylab="KM Pace(s)",main="KM Pace from Strava - Feb/Mar 2016")



Taking the run (vertical set of points) on the far right and moving left we see:

  • Long 21k run at a consistent pace, so lots of points clustered together.
  • Shorter, hillier run, so fewer points at a similar pace.
  • Intervals session, so some very fast laps (sub 4 min km) and some slow jogging.
  • Long 18k run at a consistent pace, but not so nicely packed together as the 21k run.

...and so on back in time, with each type of run (long, short and intervals) having its own telltale "fingerprint".  For example the second run from the right is a 5k fast (for me) Parkrun so has a small number of laps at a pretty good (for me) pace.

Overall I really like this data and what Strava, Raspberry Pi, Python and R lets me do with it.  First of all it tells me I'm getting faster which is always good.  Second it has an interesting pattern and each type of run is easily distinguishable which is nice.  Finally it's MY data; I'm playing with and learning about this stuff with my own data which is somehow more fun than using pre-prepared sample data.