Thursday 18 December 2014

Raspberry Pi - Python - MySQL - Cron Jobs and Physical Jerks #1


In two previous posts I blogged about how I've linked geekiness and exercise by playing with the Strava API.  This works well for the exercise types that Strava covers but not for the other stuff I like to do, like strength and conditioning work (very important now I am entering more mature years).

What I find is that I go through peaks and troughs with this sort of exercise.  I have periods where I have good intentions and do lots, and other periods where I just seem to forget about it.  Hence what I needed was something like Strava that keeps a record of what I've done and gives me prompts to do more.

I'm sure there are apps out there that do this sort of thing, but that's not the Geek Dad way; I wanted to build my own...

So here was my idea:
  1. When I've done some exercise I send a Tweet from my 'phone with some form of shorthand notation in it to define what I've done.
  2. My Raspberry Pi periodically checks my Twitter feed to pick up these Tweets.
  3. My Raspberry Pi logs the exercise data in a database.
  4. My Raspberry Pi sends me responses via Twitter to congratulate me for doing the exercise.
  5. At a later stage I can analyse the data on a web page.
The TUI (Twitter User Interface) is a little lazy; maybe one day when I get some more time I'll write an app with a decent GUI (Geek User Interface).

I decided to call it "Jerks", as in physical jerks.  The word has no other meaning that I know of... ;-)

To start with I needed a database.  I decided to use MySQL as it is a core part of a LAMP webserver.  This page provides an excellent tutorial on setting up and using (via Python and PHP) a MySQL database on a Raspberry Pi.  The only problem I ran into was that my installation process did not prompt me to create a root password; I followed the steps on this site and managed to resolve this.

To set up my Jerks database I simply used this MySQL syntax:

mysql> CREATE DATABASE jerks;
mysql> USE jerks;

I wanted the method for Tweeting in details of the exercises I'd done to be super simple.  Hence I decided to use the format "XX N", where XX is a two letter code defining the exercise I'd done and N is the number of repetitions.  So for example:
  • Pu 12 - Means I've done 12 press ups.
  • Yo 1 - Means I've done 1 Yoga session.
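
As a taster, here's a minimal sketch of that format check in Python (the real version appears in the full code listing at the bottom of this post):

def looks_like_jerks_tweet(text):
  #A Jerks Tweet is two space-delimited parts: a two letter code then a number
  parts = text.split(' ')
  return (len(parts) == 2) and (len(parts[0]) == 2) and parts[1].isdigit()

print(looks_like_jerks_tweet("Pu 12"))        #True
print(looks_like_jerks_tweet("Nice ride today"))   #False
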
So I needed a look-up database table that could map from these two letter codes to the full exercise name.  This was easy to create:

CREATE TABLE lookup (twoletters TEXT, longform TEXT);

...and add to:

INSERT INTO lookup (twoletters,longform) values("PU","Press Ups");
INSERT INTO lookup (twoletters,longform) values("YO","Yoga");

...meaning I had a table like this after entering lots of insert statements:

mysql> select * from lookup;
+------------+--------------------+
| twoletters | longform           |
+------------+--------------------+
| PU         | Press Ups          |
| CP         | Clap Press Ups     |
| FR         | Foam Rolling       |
| CR         | Calf Raises        |
| PI         | Pilates            |
| YO         | Yoga               |
| SQ         | Squatting          |
| BC         | Bicep Curls        |
| TC         | Tricep Curls       |
| FR         | Front Raises       |
| SR         | Side Raises        |
| GS         | General Stretching |
| LW         | Leg Weights        |
+------------+--------------------+
13 rows in set (0.01 sec)

In terms of playing with Twitter in Python, I used some example code from this awesome tutorial.  This tells you how to sign up for a Twitter developer account, read Tweets and send Tweets.  A quick pydoc twitter.Api showed me more ways to use the Python Twitter wrapper.

So (after creating a Twitter object) you can iterate through each of the Tweets using code like this:

# Display all my tweets
for tweet in twitter.statuses.home_timeline():
  print(tweet['text'])

A quick check of the Twitter Developer page showed that this call would result in only the last 20 Tweets being returned so it's not a massive overhead.
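
If I ever need more history, the underlying API call takes a count parameter which, as far as I can tell, the Python wrapper passes straight through; a sketch:

# Ask for up to 50 recent Tweets rather than the default 20
for tweet in twitter.statuses.home_timeline(count=50):
  print(tweet['text'])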

So (after checking that a Tweet is a Jerks Tweet by looking for the pattern of two space-delimited parts: first part two letters, second part a number) I needed a way to check whether a Tweet was a new one or an old one.  I decided to use another database table and to log the unique Tweet ID that is returned from the Twitter API.  Overall I needed a table to show:
  • The date I did the exercise.
  • The time I did the exercise.
  • The associated Tweet ID.
  • The exercise.
  • The count of the exercise.
So I used this syntax to create this table:
CREATE TABLE exercise (tdate DATE, ttime TIME, tweet_id TEXT, exercise TEXT, count NUMERIC);

So for every Tweet that came in I could run an SQL query to see if the Tweet ID was in the table and, if not, add a new row to the table with an INSERT statement.
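
The full listing below builds these queries with string concatenation; here's a sketch of the same check-then-insert logic using MySQLdb parameter substitution instead (a safer habit; tweet_date, tweet_time, long_form and reps are hypothetical stand-ins for values worked out from the Tweet):

#Has this Tweet already been logged?
curs.execute("SELECT * FROM exercise WHERE tweet_id = %s", (str(tweet['id']),))
if curs.rowcount == 0:
  #No - so log it (tweet_date, tweet_time, long_form and reps are placeholders)
  curs.execute("INSERT INTO exercise (tdate,ttime,tweet_id,exercise,count) VALUES (%s,%s,%s,%s,%s)",
    (tweet_date, tweet_time, str(tweet['id']), long_form, reps))
  db.commit()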

Then to finish the job I could send a Twitter direct message back to myself to give me a warm and glowing feeling that I'd done some good for my creaking body.  So my Twitter feed and direct messages look something like this:


Finally I needed to automate the Python script to get it to periodically check the Twitter feed, add to the database and direct message me.  Previously I've used endless while loops but this time I fancied using a cron job (as I've never done it before).  After struggling with the crontab command, this excellent page gave me an alternative that worked using a file in /etc/cron.d.
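
For reference, the /etc/cron.d file ends up looking something like this (the 10 minute schedule and script path here are just illustrations; note that unlike a normal crontab, files in /etc/cron.d have an extra field naming the user to run as):

# /etc/cron.d/jerks - check Twitter for new Jerks Tweets every 10 minutes
*/10 * * * * pi python /home/pi/jerks/jerks.py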

So the script works very nicely and after a few days the database is filling up:

mysql> select * from exercise;
+------------+----------+--------------------+--------------------+-------+
| tdate      | ttime    | tweet_id           | exercise           | count |
+------------+----------+--------------------+--------------------+-------+
| 2014-12-10 | 21:33:40 | 542794300512077440 | Press Ups          |     5 |
| 2014-12-10 | 19:51:30 | 542768590521259395 | Clap Press Ups     |    12 |
| 2014-12-09 | 21:46:47 | 542435212346121218 | Foam Rolling       |     1 |
| 2014-12-09 | 21:30:53 | 542431214136039808 | Press Ups          |    10 |
| 2014-12-08 | 19:21:32 | 542036123435369729 | Foam Rolling       |     1 |
| 2014-12-06 | 18:42:07 | 541300987675000064 | Clap Press Ups     |    12 |
| 2014-12-06 | 18:39:29 | 541300987683332864 | Calf Raises        |    12 |
| 2014-12-06 | 18:37:52 | 541302456700659328 | Press Ups          |    12 |
| 2014-12-10 | 21:55:04 | 542790864539329920 | Yoga               |     1 |
| 2014-12-11 | 08:12:05 | 542954961987842176 | Calf Raises        |    40 |
| 2014-12-11 | 21:59:04 | 543161234896574465 | Press Ups          |     5 |
| 2014-12-11 | 22:00:07 | 543163346444444632 | Clap Press Ups     |    12 |
| 2014-12-11 | 22:06:37 | 543164980787878298 | Press Ups          |     4 |
| 2014-12-12 | 07:44:40 | 543310451141212378 | Press Ups          |     3 |
| 2014-12-12 | 09:03:04 | 543330181834567898 | Press Ups          |     1 |
| 2014-12-12 | 09:02:50 | 543330124222222211 | Press Ups          |     1 |
| 2014-12-12 | 07:50:11 | 543311856755551132 | Press Ups          |    10 |
| 2014-12-12 | 18:40:31 | 543475502015523554 | Squatting          |     1 |
| 2014-12-12 | 18:34:33 | 543412342452578571 | Calf Raises        |    12 |
| 2014-12-12 | 18:58:36 | 543480055554545456 | Squatting          |     1 |
+------------+----------+--------------------+--------------------+-------+
20 rows in set (0.00 sec)

Here's all the code (minus sensitive bits of course):

import os
from twitter import *
import MySQLdb
from datetime import datetime
# go to https://dev.twitter.com/apps/new to create your own
# CONSUMER_KEY and CONSUMER_SECRET
# Note that you need to set the access level to read and write
# for this script to work (Found in the settings tab once you
# have created a new application)
# pydoc twitter.Api to get all the twitter documentation
CONSUMER_KEY = "Your_key_here"
CONSUMER_SECRET = "Your_secret_here"

#Database related contents
dbMainTable = "exercise"
dbLookupTable = "lookup"

#Used to take a Twitter format date and turn it into a Python date
def ParseTwitterDate(TwitterDt):
  #The Twitter date is like this -> Sat Dec 06 18:42:07 +0000 2014
  #strptime doesn't deal with the +0000 at all well. So I'll just strip it out
  DateToChange = TwitterDt[0:19] + TwitterDt[25:30]
  return datetime.strptime(DateToChange,'%a %b %d %H:%M:%S %Y')

#######################
#This is the main part of the code

#Database stuff
db = MySQLdb.connect("localhost", "user", "password", "jerks")   #host, user, password, database name
curs=db.cursor()

# get full pathname of .twitterdemo_oauth file in the
# home directory of the current user
oauth_filename = os.path.join(os.path.expanduser('~'),'.twitterdemo_oauth')

# get twitter account login info
if not os.path.exists(oauth_filename):
  oauth_dance('Raspberry Pi Twitter Demo', CONSUMER_KEY, CONSUMER_SECRET, oauth_filename)
(oauth_token, oauth_token_secret) = read_token_file(oauth_filename)


# log in to Twitter
auth = OAuth(oauth_token, oauth_token_secret, CONSUMER_KEY, CONSUMER_SECRET)
twitter = Twitter(auth=auth)
# Tweet a new status update
# twitter.statuses.update(status="Hello The World!")
# Display all my tweets
for tweet in twitter.statuses.home_timeline():
  #Jerks Tweets have two space-delimited parts (two letters + space + a number) and come from PDW.  Check all these parts
  #First check if it was from the my account
  if tweet['user']['screen_name'] == 'PDW':
    #Get all the parts of the tweet
    TweetParts = tweet['text'].split(' ')
    #Check for all the right components
    if (len(TweetParts) == 2) and (len(TweetParts[0]) == 2) and (TweetParts[1].isdigit()):
      print tweet['text']
      #This means that this is a Jerks tweet.  Need to check whether it's new or old by looking at the database.  We do this by checking the id.  First form the SQL query, then execute it
      JerksQuery = 'SELECT * FROM ' + dbMainTable + ' where tweet_id="' + str(tweet['id']) + '";'
      curs.execute (JerksQuery)
      #Check whether we got anything in response, if not we add to the database
      if (curs.rowcount == 0):
        #Execute a query to add to the database
        print "Adding to the database"
        try:
          #Form the new query then execute it.  First get the date
          MyDateTime = ParseTwitterDate(tweet['created_at'])

          #Now we need to look up the two letter code from the tweet to get its long form.  We do this via the lookup table
          JerksQuery = 'SELECT * FROM ' + dbLookupTable + ' where twoletters = "' + TweetParts[0].upper() + '";'
          print JerksQuery
          curs.execute (JerksQuery)
          #Check whether the two letter code was found in the lookup table
          if (curs.rowcount > 0):
            #We've found the long form value of the exercise in the query response.  Add to the insert query
            for LookUpResponse in curs.fetchall():
              LongFormExercise = LookUpResponse[1]

            #Form the query
            JerksQuery = 'INSERT INTO ' + dbMainTable + ' (tdate,ttime,tweet_id,exercise,count) values("' + str(MyDateTime.date()) + '","' + str(MyDateTime.time()) + '","' + str(tweet['id']) + '","' + LongFormExercise + '",' + TweetParts[1] + ');'
            print JerksQuery
            curs.execute (JerksQuery)
            db.commit()
            print "Data committed"
            MessageToTweet = 'Nice one jerk!  You did ' + str(TweetParts[1]) + ' ' + LongFormExercise
            print MessageToTweet
            twitter.direct_messages.new(user="PDW",text=MessageToTweet)
          else:
            print "Exercise not in lookup table"
            twitter.direct_messages.new(user="PDW",text='Hey jerk! ' + TweetParts[0] + ' is not a code Jerks understands')
        except:
          print "Error: the database is being rolled back"
          db.rollback()
      else:
        print "No new entries to add"

Monday 1 December 2014

Raspberry Pi and Strava API #2

In a previous post I blogged about how I've managed to cross the streams of two of my hobbies, geekery and exercise, using my Raspberry Pi and the Strava API.  I've extended this by creating a capability to use Strava API data to see if I am getting fitter through exercise activities.

One key part of this is having an exercise regime that is repeatable in a controlled fashion to allow the likes of me, who lacks an exercise lab, to do some analysis.  With half a mind on the geek potential it provided, back in September I started to do HIIT sessions using the same bike setup and the same exercise protocol.

Now generally I am skeptical of exercise fads but HIIT:
  • Seemed to have a number of reputable scientific studies backing it.
  • Enabled me to fit in sessions in very short time windows.
  • Is very basic; no HIIT-specific equipment is required, so no one is trying to sell you kit or DVDs or exercise programmes.
So my bike is set up on a turbo trainer and I always have it in the same gear and on the same turbo resistance setting.  I know that things like tyre pressure could vary (thus changing rolling resistance) but by and large the conditions are similar.

I have a heart rate monitor and speed/cadence sensor on my bike so I can log these things as I exercise.  I don't have a fancy-dan power meter.

My exercise regime is as follows:
  • 4 minute warm up
  • 6 lots of flat out for 20 seconds followed by 10 seconds rest.
  • 3 minute warm down.
...so all done in 10 minutes flat.

So in Strava I get these sorts of results:


So you can see my speed and cadence peak for every interval, and my heart rate increase gently during the warm-up, go up quickly (in steps) during the flat-out periods and then drop during the warm-down.

So all good stuff but as a geek I need to know whether the results above are any better than this other session below:


Looks like a better first couple of reps but tailed off at the end.  The Strava UI didn't seem to give me a good way to compare and contrast my ~20 HIIT sessions so I decided to find a geeky way!  A quick look at the Strava API documentation showed that there is a "laps" resource in the API so I decided to use it as my Garmin logs each section of the HIIT activity as a lap.

First you identify the Strava activity in question by listing all activities:

https://www.strava.com/api/v3/activities?access_token=<your token here>&per_page=200

You then list the laps for a specific activity using this URL:

https://www.strava.com/api/v3/activities/331779633/laps?access_token=<your token here>&per_page=200

Which gives you easily parseable JSON output like this (just 2 laps shown for clarity):

[{"id":744754123,"resource_state":2,"name":"Lap 1","activity":{"id":331779633},"athlete":{"id":4309532},"elapsed_time":240,"moving_time":241,"start_date":"2014-11-18T19:49:59Z","start_date_local":"2014-11-18T19:49:59Z","distance":1527.27,"start_index":0,"end_index":36,"total_elevation_gain":0.0,"average_speed":6.4,"max_speed":6.7,"average_cadence":69.1,"average_watts":70.7,"average_heartrate":85.2,"max_heartrate":116.0,"lap_index":1},{"id":744744535,"resource_state":2,"name":"Lap 2","activity":{"id":220668522},"athlete":{"id":4309532},"elapsed_time":20,"moving_time":19,"start_date":"2014-11-18T19:54:04Z","start_date_local":"2014-11-18T19:54:04Z","distance":245.38,"start_index":37,"end_index":45,"total_elevation_gain":0.0,"average_speed":12.3,"max_speed":12.4,"average_cadence":124.5,"average_watts":320.2,"average_heartrate":134.8,"max_heartrate":148.0,"lap_index":2}

So for each lap you get a range of interesting information such as:
  • average_cadence
  • average_watts (which I assume to be estimated as I don't have a power meter)
  • average_heartrate
  • max_heartrate
I've given every HIIT session a name in Strava using the format "HIIT YYYYMMDD", so it was easy to write some PHP to:

  • List all activities and put the resulting json into an array.
  • Loop through the array and pick out each of the HIIT activities.
  • For each HIIT session, call the API to get lap resource information.
  • Parse the json to pick out measurements for each lap.
  • Print the results into a web page.
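
The real implementation is the PHP listed at the bottom of this post, but as a rough sketch, the same steps in Python look like this (same token placeholder as before):

import json
import urllib2

token = "<your token here>"
base = "https://www.strava.com/api/v3"

#List all activities and put the resulting JSON into a list
activities = json.load(urllib2.urlopen(base + "/activities?access_token=" + token + "&per_page=200"))
for act in activities:
  #Pick out each of the HIIT activities by name
  if act['name'].startswith("HIIT"):
    #Call the API to get lap resource information for this activity
    laps = json.load(urllib2.urlopen("%s/activities/%s/laps?access_token=%s" % (base, act['id'], token)))
    for lap in laps:
      #Print the three key measurements for each lap as CSV
      print("%s,%s,%s,Ave Cadence,%s" % (act['name'], act['start_date_local'], lap['name'], lap['average_cadence']))
      print("%s,%s,%s,Ave Heart,%s" % (act['name'], act['start_date_local'], lap['name'], lap['average_heartrate']))
      print("%s,%s,%s,Max Heart,%s" % (act['name'], act['start_date_local'], lap['name'], lap['max_heartrate']))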

Full code listing is at the bottom of this blog post.  The output I get within a browser window is shown below:


So a very raw CSV print-out of 3 key metrics per lap.  I could easily take this CSV data and pull it into Excel for data analysis.  For example I could put together this graph showing average cadence per lap:


So laps 2,4,6,8,10 and 12 are the laps where I put in a lot of effort for 20 seconds.  (Lap 1 is warm up, lap 13 is the final 10 seconds after a hard effort 6 and lap 14 is warm down).

I first put this graph together a couple of weeks ago and one initial observation was that in previous HIIT sessions there was a lot of variance, both within a session and from session to session.  This was because I had no real targets for hard efforts; I just did it as hard as I could and held on for the last couple of efforts.  Hence in the last two weeks I've focussed on doing "tighter" sessions where the target is 130rpm across all the efforts.  You can see this on the graph where there's much less of a spread from lap 2 to lap 12 and they're clustered in the 120 to 135 range.

Next: more HIIT sessions, more analysis and attempting to draw graphs on the web page rather than in nasty old Excel.

Full code (remember I'm new to PHP):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Weeks Family Strava Fun</title>
    <link rel="stylesheet" type="text/css" href="/stylesheet/style1.css">
  </head>
  <body>
   <!-- Get the value from Strava-->
   <?php
     //Use cURL to get the values from Strava.  Max 200 activities per request.  After that will need to play with pages...
     $curl = curl_init();
     curl_setopt ($curl, CURLOPT_URL, "https://www.strava.com/api/v3/activities?access_token=<your token here>&per_page=200");
     curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
     $result = curl_exec ($curl);
     curl_close ($curl);
     $json_a=json_decode($result,true);
     //Good debug code
     //foreach($json_a as $num)
     //{
     //print $num['start_date']."-".$num['name']."-".$num['type']."-".$num['distance']."-".$num['total_elevation_gain']."-".$num['average_speed']."<br>";
     //}

     //Now iterate through the main results, picking out the HIITs and summarising them
     foreach($json_a as $num)
     {
       if (substr($num['name'],0,4) == "HIIT") //Check for HIIT sessions
       {
       //This is an entry with a HIIT
       $hiitNameDate = $num['name'].",".$num['start_date_local'].",";
       //Download the associated lap
       $activityNumber = $num['id'];
       //Form the URL
       $lapURL = "https://www.strava.com/api/v3/activities/".$activityNumber."/laps?access_token=<your token here>";
       //Do cURL with this URL
       $curl = curl_init();
       curl_setopt ($curl, CURLOPT_URL, $lapURL);
       curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
       $lapResult = curl_exec ($curl);
       curl_close ($curl);
       //Turn into an array we can parse
       $json_b=json_decode($lapResult,true);
       foreach($json_b as $lapNum)
         {
         echo $hiitNameDate.$lapNum[name].",Ave Cadence,".$lapNum[average_cadence]."<br>";
         echo $hiitNameDate.$lapNum[name].",Ave Heart,".$lapNum[average_heartrate]."<br>";
         echo $hiitNameDate.$lapNum[name].",Max Heart,".$lapNum[max_heartrate]."<br>";
         }
       }
     }
   ?>
      <h1>Summary of HIIT Activity</h1>
    <p>Warning: This only shows a max of 200 Stravas!  Will have to play with pages after that...</p>
  </body>
</html>


Saturday 22 November 2014

Raspberry Pi and Strava API #1

My 3 favourite things in order are:
  • Being a Dad
  • Being a Geek
  • Doing exercise (cycling, running, swimming and general physical jerks).
I recently found a way to combine the last two on this list!

I created a LAMP webserver on my RasPi (using this guide) and proceeded to muck about with various bits of HTML (basic made-up pages), PHP, CSS and Javascript.  This enabled me to create a simple home intranet that cycled through various pages of "interesting" information:



...but what I really wanted was some new and interesting data to process and present on my intranet.

For a recent major birthday I got a present that's useful for my third hobby.  It's a Garmin Forerunner 910XT, a GPS multi-sports watch allowing me to log and track runs, bike rides and swims.  In its own right it's a cool gadget for a geek; for example allowing you to track open water swims by logging GPS points as and when your wrist pops out of the water.  Here's one I did earlier that was an accurate record of a short lake swim I did:


I upload all my exercise logs to Strava and whilst browsing around the site I came across details of the Strava API.  This enables you to build your own custom web pages and applications based upon exercise data you have logged on Strava.  First you need to go to the Strava developer site to log your application and get an API key.  Then you can call the API using URLs like this (which just lists all activities):

https://www.strava.com/api/v3/activities?access_token=<your token here>&per_page=200

This yields a JSON response which you can parse; for example picking out particular activities and looking at them in detail:

So:

https://www.strava.com/api/v3/activities/188107339?access_token=<your token here>&per_page=200

...gives:


{"id":188107339,"resource_state":3,"external_id":"activity_579805014.tcx","upload_id":208194475,"athlete":{"id":43095234,"resource_state":1},"name":"Dutch Lake Swim","distance":195.3,"moving_time":59,"elapsed_time":484,"total_elevation_gain":0.0,"type":"Swim","start_date":"2014-08-22T15:51:31Z","start_date_local":"2014-08-22T08:51:31Z","timezone":"(GMT-08:00) America/Vancouver","start_latlng":[51.65,-120.06],"end_latlng":[51.65,-120.06],"location_city":null,"location_state":"British Columbia","location_country":"Canada","start_latitude":51.65,"start_longitude":-120.06,"achievement_count":0,"kudos_count":0,"comment_count":0,"athlete_count":1,"photo_count":0,"map":{"id":"a188102339","polyline":"ofgzH|jx{U@wBKCYtBf@F","resource_state":3,"summary_polyline":"ofgzH|jx{UI{BL|B"},"trainer":false,"commute":false,"manual":false,"private":false,"flagged":false,"gear_id":"g397875","average_speed":3.31,"max_speed":0.6,"truncated":null,"has_kudoed":false,"description":"Garmin Accurate on Course and Distance","calories":0,"segment_efforts":[],"gear":{"id":"g397875","primary":true,"name":"Brooks blue ones","resource_state":2,"distance":135654.0}}

So with some PHP jiggery-pokery I created this intranet page to summarise my Strava activities in a new and interesting way:


All the PHP and HTML is listed below.  Note I'm a novice in this area so 0 marks for style I know...

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Weeks Family Strava Fun</title>
    <link rel="stylesheet" type="text/css" href="/stylesheet/style1.css">
  </head>
  <body>
   <!-- Get the value from Strava-->
   <?php
     //Use cURL to get the values from Strava.  Max 200 activities per request.  After that will need to play with pages...
     $curl = curl_init();
     curl_setopt ($curl, CURLOPT_URL, "https://www.strava.com/api/v3/activities?access_token=<your key here>");
     curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
     $result = curl_exec ($curl);
     curl_close ($curl);
     $json_a=json_decode($result,true);
     //Good debug code
     //foreach($json_a as $num)
     //{
     //print $num['start_date']."-".$num['name']."-".$num['type']."-".$num['distance']."-".$num['total_elevation_gain']."-".$num['average_speed']."<br>";
     //}
     //Some code for the main summary that goes in the middle of the top of the page
     //First see if we write km or m
     if ($json_a[0]['distance'] > 999)
     {
     $distanceToWrite = round(($json_a[0]['distance'] / 1000),1)."km";
     }
     else
     {
     $distanceToWrite = round($json_a[0]['distance'],1)."m";
     }
     //Now form the last exercise string to go at the top
     $lastExercise = $json_a[0]['type']."-".$distanceToWrite;

     //Now we want some variables for a table that shows Activity,Activity Count,Total Distance (in km)
     //Also use this for fastest, longest and climbingest activities
     //Initialise the counts, totals and records first so PHP doesn't grumble about undefined variables
     $runCount = 0; $runDistance = 0; $maxRunDistance = 0; $fastestRun = 0; $mostClimbRun = 0;
     $rideCount = 0; $rideDistance = 0; $maxRideDistance = 0; $fastestRide = 0; $mostClimbRide = 0;
     $swimCount = 0; $swimDistance = 0; $maxSwimDistance = 0;
     foreach($json_a as $num)
     {
       if ($num[type] == "Run")
       {
       //Add distance and activity count
       $runCount = $runCount + 1;
       $runDistance = $runDistance + $num[distance];
       //Find the longest activity
       if ($num[distance] > $maxRunDistance)
         {
         $maxRunDistance = $num[distance];
         $maxRunExplanation = substr($num[start_date],0,10)."-".$num[name]."-".round($num[distance]/1000,2)."km";
         }
       //Find the fastest actvity
       if ($num[average_speed] > $fastestRun)
         {
         $fastestRun = $num[average_speed];
         $fastestRunExplanation = substr($num[start_date],0,10)."-".$num[name]."-".round(((1/($num[average_speed] / 1000))/60),2)." minutes per km";
         }
       //Find the activity with the most climbing
       if ($num[total_elevation_gain] > $mostClimbRun)
         {
         $mostClimbRun = $num[total_elevation_gain];
         $mostClimbRunExplanation = substr($num[start_date],0,10)."-".$num[name]."-".$num[total_elevation_gain]."m";
         }
       }
       elseif ($num[type] == "Ride")
       {
       //Add distance and activity count
       $rideCount = $rideCount + 1;
       $rideDistance = $rideDistance + $num[distance];
       //Find the longest activity
       if ($num[distance] > $maxRideDistance)
         {
         $maxRideDistance = $num[distance];
         $maxRideExplanation = substr($num[start_date],0,10)."-".$num[name]."-".round($num[distance]/1000,2)."km";
         }
       //Find the fastest activity
       if ($num[average_speed] > $fastestRide)
         {
         $fastestRide = $num[average_speed];
         $fastestRideExplanation = substr($num[start_date],0,10)."-".$num[name]."-".round((($num[average_speed]/1000)*3600),2)." km per hour";
         }
       //Find the activity with the most climbing
       if ($num[total_elevation_gain] > $mostClimbRide)
         {
         $mostClimbRide = $num[total_elevation_gain];
         $mostClimbRideExplanation = substr($num[start_date],0,10)."-".$num[name]."-".$num[total_elevation_gain]."m";
         }
       }
       elseif ($num[type] == "Swim")
       {
       //Add distance and activity count
       $swimCount = $swimCount + 1;
       $swimDistance = $swimDistance + $num[distance];
       //Find the longest activity
       if ($num[distance] > $maxSwimDistance)
         {
         $maxSwimDistance = $num[distance];
         $maxSwimExplanation = substr($num[start_date],0,10)."-".$num[name]."-".$num[distance];
         }
       }
     }
     //Turn all the distances in to km
     $runDistance = round(($runDistance / 1000),1);
     $rideDistance = round(($rideDistance / 1000),1);
     $swimDistance = round(($swimDistance / 1000),1);
     //Put in the context figures.  First the %of a channel swim (which is 34 km)
     $swimContext = round(100*($swimDistance / 34),1)."% of the distance from Dover to Calais";
     //Now the ride context which is the % of the distance from London to Sydney which is 16885 km
     $rideContext = round(100*($rideDistance / 16885),1)."% of the distance from London to Sydney";
     //Finally run context which is the distance for a LEJOG, 970km
     $runContext = round(100*($runDistance / 970),1)."% of the distance from Land's End to John o' Groats";
     ?>
      <h3>Last Strava: <?php echo substr($json_a[0]['start_date'],0,10)." - ".$json_a[0]['name'] ?></h3>
      <div id="container" style="width:750px">
        <div id="left" style="height:60px;width:250px;float:left;text-align:right">
          <img src="/images/strava.jpg" alt="Strava" width="100" height="60">
        </div>
        <div id="middle" style="height:60px;width:250px;float:left;background-color:green;text-align:center;vertical-align:middle">
          <p><font size="5"><?php echo $lastExercise ?></font></p>
        </div>
        <div id="right" style="height:60px;width:250px;float:left;">
          <img src="/images/strava.jpg" alt="Strava" width="100" height="60">
        </div>
      </div>
      <br><br><br>
      <!-- Show a summary table of all Stravas-->
      <h3>Summary of all Stravas:</h3>
      <table>
        <tr>
          <th></th>
          <th>Activity Count</th>
          <th>Distance (km)</th>
          <th>Context</th>
        </tr>
        <tr>
          <td><img src="/images/swim.PNG" alt="Swim" width="25" height="25"></td>
          <td><?php echo $swimCount ?></td>
          <td><?php echo $swimDistance ?></td>
          <td><?php echo $swimContext ?></td>
        </tr>
        <tr>
          <td><img src="/images/bike.PNG" alt="Ride" width="25" height="25"></td>
          <td><?php echo $rideCount ?></td>
          <td><?php echo $rideDistance ?></td>
          <td><?php echo $rideContext ?></td>
        </tr>
        <tr>
          <td><img src="/images/run.PNG" alt="Run" width="25" height="25"></td>
          <td><?php echo $runCount ?></td>
          <td><?php echo $runDistance ?></td>
          <td><?php echo $runContext ?></td>
        </tr>
    </table>
    <h3>Strava Records:</h3>
    <table>
      <tr>
        <th></th>
        <th><img align="center" src="/images/swim.PNG" alt="Swim" width="25" height="25"></th>
        <th><img align="center" src="/images/bike.PNG" alt="Ride" width="25" height="25"></th>
        <th><img align="center" src="/images/run.PNG" alt="Run" width="25" height="25"></th>
      </tr>
      <tr>
        <td>Longest</td>
        <td><?php echo $maxSwimExplanation ?></td>
        <td><?php echo $maxRideExplanation ?></td>
        <td><?php echo $maxRunExplanation ?></td>
      </tr>
      <tr>
        <td>Fastest</td>
        <td>n/a</td>
        <td><?php echo $fastestRideExplanation ?></td>
        <td><?php echo $fastestRunExplanation ?></td>
      </tr>
      <tr>
        <td>Most Climbing</td>
        <td>n/a</td>
        <td><?php echo $mostClimbRideExplanation ?></td>
        <td><?php echo $mostClimbRunExplanation ?></td>
      </tr>

    </table>
    <p>Warning: This only shows a max of 200 Stravas!  Will have to play with pages after that...</p>
  </body>
</html>

Monday 2 June 2014

What Google Knows About Me

Two things coincided recently:

  • I was tinkering with creating a web capability that would allow my wife to track my whereabouts.
  • Google had created the capability for people to wipe their personal data.


So that got me thinking, what did Google know about me in terms of location?  I recall setting a permission to allow applications to use my location and am aware that Google Maps always opens up at my current location and I get location-centric search results.  Hence I assume that Google periodically logs my location, something I'm reasonably comfortable with.  To me the benefits outweigh the downsides and there are some crowd sourced location capabilities (e.g. Maps Traffic overlay) that I rely on so it's only fair that I share my location.

I recall using Google Latitude in the past and that it had an API.  However Latitude was "retired" a couple of years ago and, from a bit of searching, was apparently replaced by Google+.  When I looked at my Google+ profile my location was there so I assumed that this would be available via the Google+ API.  Hence I registered for the Google+ API and got myself an API key.

This URL:
https://www.googleapis.com/plus/v1/people/<put_your_google_id_here>?key=<put_your_key_here>

Yielded this JSON:
{
 "kind": "plus#person",
 "etag": "\"YFr-hUROXQN7gOa3dpHg9dQ8eq0/0LQ1cdZ_xacNSKto12345j_fXA\"",
 "gender": "male",
 "objectType": "person",
 "id": "my_id
"displayName": "Paul Weeks",
 "name": {
  "familyName": "Weeks",
  "givenName": "Paul"
 },
 "url": "https://plus.google.com/my_id",
 "image": {
  "url": "https://lh3.googleusercontent.com/-XdUI1234CWA/ACFGTCCCAAAI/AAAABBBAAAA/4252rscbv5M/photo.jpg?sz=50"
 },
 "isPlusUser": true,
 "circledByCount": 1,
 "verified": false
}


So no location.  Additionally, no matter what URL I used, I could not get my location via the Google+ API.  Equally, no end of searching yielded a way to do it.  Hence it seems good old Google retired the Latitude API without supplying a replacement.  Nice one Google.

More searching showed me that there is a (pretty inelegant) way to get your location history.  Google provide it through Google Maps at this URL: https://maps.google.com/locationhistory/b/0.  Here you can use the web interface to set a date range and view where you've been.  You can also export to KML to get a dump of your location history, playing with the timestamp parameters to get a different time period.  This epic blog post provides more details and gives a nifty way to automate the process with cURL and PHP.


An example URL is as follows:
https://maps.google.com/locationhistory/b/0/kml?startTime=1401058800000&endTime=1401145200000


...and the timestamps are Unix style so each day is 24*60*60*1000 = 86,400,000 milliseconds long. Using this URL format I requested a year's worth of data (so startTime = endTime - {365 * 86,400,000}).  After a short delay (maybe 10 seconds) I was served with a large KML file.
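
Working out those millisecond timestamps is easy to script; here's a quick Python sketch for a year-long window ending now:

from datetime import datetime, timedelta

DAY_MS = 24 * 60 * 60 * 1000   #86,400,000 milliseconds per day

#Unix-style timestamps are milliseconds since 1 January 1970
epoch = datetime(1970, 1, 1)
end = datetime.utcnow()
endTime = int((end - epoch).total_seconds() * 1000)
startTime = endTime - (365 * DAY_MS)   #a year's worth of data
print("startTime=%d&endTime=%d" % (startTime, endTime))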


Looking at the file I spotted that it only went back to mid-October 2013, so about 7 months' worth of data.  I'm not sure whether I did something back in October '13 to trigger Google storing my location; more likely there's some form of limit as to how much data they provide access to.  I also spotted that there were 186,930 locations logged!  That's nearly 900 per day!  Many of these were duplicates of the same location, which doesn't seem to be the most efficient way to do things.  One day I'll investigate this some more.

From old tinkerings I knew you can upload a KML file to Google Fusion Tables and visualise it on a map.  However with this file and the 186,930 data points, Google wouldn't render a map.  Hence I did some Excel jiggery pokery to remove duplicates from the location list and only use lat/long to 3 decimal places and got it down to a mere 918 data points.
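
I used Excel, but the same jiggery pokery is only a few lines of Python (a sketch, assuming the KML coordinates have already been pulled out into a list of (lat, long) float pairs):

seen = set()
unique_points = []
for lat, lon in points:   #points is assumed to be the full list of 186,930 (lat, long) pairs from the KML
  #Round to 3 decimal places and keep only the first occurrence of each location
  key = (round(lat, 3), round(lon, 3))
  if key not in seen:
    seen.add(key)
    unique_points.append(key)
print(len(unique_points))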


Visualised on Google Fusion Tables it looks like this:


So you can see how I spend most of my time in England and have traveled to the Netherlands, Germany, Romania and Hungary this year. There's also a puzzling data point in Iran - somewhere that I absolutely, definitely have not traveled to ever!

The English data points are interesting:



You can see how I spend most of my time in the central South area, travel occasionally to that London and have also visited relatives in the north-west of England.  The little circle around Birmingham shows I've used the orbital motorways in both directions, and the direct line up the centre of the country was for a long distance bike ride I did.

Interesting (but slightly scary) stuff.  Must think of more ways to use this data...

Sunday 5 January 2014

An AirPi - Useful in a Powercut

In March 2013 I built a Raspberry Pi AirPi and I've had it up and running ever since.  The system has proved to be very reliable; I reckon it's only stopped working 2 or 3 times since I set it up.  The Python scripts are set up to run when the Pi starts so the AirPi will automatically start up if there's a power cut or glitch.

I do this by putting scripts in the /etc/init.d folder.  Here's an example:

#!/bin/sh
#Used this as a guide http://myraspberrypiexperience.blogspot.co.uk/2012/08/start-vnc-automatically.html
#Run this under the Pi username
export USER='pi'
eval cd ~$USER

#Run the Airpi
su $USER -c 'sudo python /home/pi/Meteoros/pdw_upload_v3.py &'

#End stuff
exit 0
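
To get this picked up at boot you make the script executable and register it; something like this (assuming it's saved as /etc/init.d/airpi):

sudo chmod +x /etc/init.d/airpi
sudo update-rc.d airpi defaults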

We've had some pretty wet and windy weather in the UK over the Christmas period (it's still poor as I write early in the new year) which has resulted in long power cuts all over the country.  We suffered a power cut at my house for 26 hours from roughly 1300 on 23/12/2013 to 1500 on 24/12/2013.  

Here's my AirPi barometric pressure chart for the period:


On the graph you can see the pressure suddenly drop; the weather conditions that coincided with this were high winds and heavy rain.  This caused trees to be blown over, which caused a power cut where I live; you can see this on the graph as the horizontal line from the 23rd to the 24th.  A power cut means no power to the RasPi and no ADSL router, hence there are no measurements for this period.

The AirPi came in useful as we were due as a family to travel to my in-laws on the 24th for Christmas.  It's a reasonably long drive so we were due to leave at 0830ish on the 24th.  With the power out we decided to stay put as we didn't know what state the house would be in when the power came back on (i.e. freezer defrosted, lights left on).  However by 1200 the power was still off and with a long drive ahead of us we decided to give a key to a neighbour, switch off everything we could, chuck out a load of food from the freezer and then crack on with the drive.

In the end the drive went well and we arrived before 1600.  Then, just after 1600, I got a Twitter direct message from my Raspberry Pi (more on that later) which let me know that it was up and running again.  Shortly after I got a text from my neighbour to tell me that all was well with the house.

The next question was whether the heating would come on properly after the power cut; it would be a bad thing not to have this on during winter.  My AirPi gave me the answer as I could see the temperature in the house going up and down twice a day.  (I could monitor this remotely via Xively).


It was interesting to look at the temperature profile (chart for the 25th shown below):

The heating came on in the early hours of the morning and then early in the afternoon, which matched the twice a day timer program I had set.  It heated up to roughly the expected temperature (the thermostat was in another room and set to 19 Celsius).  It was good to see that the room heated up quicker than it cooled down, which shows that the insulation must be doing some good.  What was interesting was that the heating was coming on several hours later than expected.  What I assumed (and later proved) was that when the power came on at 1600, the clock on the heating controller must have resumed at the time it held when the power went off (1300).  Hence the clock was running roughly 3 hours slow.

It's also interesting to compare the fairly regular temperature profile while we were away (two peaks a day, a longer one in the afternoon) with one from when we were at home the next week:

The "at home" peak is a lot more messy as we control the heating manually, tinker with the main thermostat and TRVs, open doors and windows etc.

I mentioned earlier that the thing that told me the power was back on was a direct message from the Raspberry Pi.  I have the Pi GET a temperature measurement from Xively and send it to me as a Twitter direct message (which I can pick up on my Android handset).  The fact that I get one of these every hour just tells me that the Pi is still up and working and it's also good for inter-geek boasting!

I used this utterly excellent document from the Raspberry Pi Foundation to tell me how to tweet from the Pi using Python (see page 115).  The code is below (secret values edited out) but in simple terms, every hour it picks up the latest temperature reading from Xively and posts it as a Twitter direct message.  It uses the twitter Python module.  Here's a screenshot:


#Using Twitter to communicate COSM values
#Example URL is http://api.cosm.com/v2/feeds/XXXXX.csv?datastreams=0
#This returns the last value for datastream 0
import os
from twitter import *
import time
from datetime import datetime
from httplib import HTTP

#The URL we will use
COSMURL = "api.cosm.com"
FullCOSMURL = "http://api.cosm.com/v2/feeds/104017.csv?datastreams=0"
KeyToUse = <Deleted>

#Make a HTTP request to COSM to get a string
def GetCOSM():
  try:
    #Now do the HTTP magic - Connect to the server
    h = HTTP(COSMURL)

    #Do a get
    h.putrequest('GET',FullCOSMURL)

    # setup the API Key
    h.putheader('X-ApiKey',KeyToUse)

    # we're done with the headers....
    h.endheaders()

    #Get the response
    errcode, errmsg, headers = h.getreply()
    response = h.getfile()
    data = response.read()
    h.close()
    print data
    return data
  #Catch an exception
  except Exception, err:
    #Write a log with the error
    print "Got us an exception: " + str(err)

#MAIN BODY OF CODE
#Go in to a loop sending direct messages
while True:
  #Using the @mrjamesbond account to send tweets
  #Went to https://dev.twitter.com/apps/new to set this up
  CONSUMER_KEY = <Deleted>
  CONSUMER_SECRET = <Deleted>

  # get full pathname of .twitterdemo_oauth file in the
  # home directory of the current user
  oauth_filename = os.path.join(os.path.expanduser('~'),'.twitterdemo_oauth')

  # get twitter account login info
  if not os.path.exists(oauth_filename):
    oauth_dance('Raspberry Pi Twitter Demo', CONSUMER_KEY, CONSUMER_SECRET, oauth_filename)
  (oauth_token, oauth_token_secret) = read_token_file(oauth_filename)

  # log in to Twitter
  auth = OAuth(oauth_token, oauth_token_secret, CONSUMER_KEY, CONSUMER_SECRET)
  twitter = Twitter(auth=auth)
  try:
   #Send a direct message
   MessageToSend = "AirPi Temperature Reading " + GetCOSM()
   #print MessageToSend
   twitter.direct_messages.new(user="pauldavidweeks",text=MessageToSend)

   # Tweet a new status update
   #twitter.statuses.update(status=MessageToSend)

   # Display all my tweets
   #for tweet in twitter.statuses.user_timeline():
     #print('Created at',tweet['created_at'])
     #print(tweet['text'])
     #print('-'*80)
  except:
    print "Got a Twitter error"

  time.sleep(3600)