AWS SSM Agent on an RPi 4

As part of the work I do with AWS DeepRacer I use SSM Agent on the cars and (finally) now also on the Raspberry Pi based timing system. To make things easier I thought I’d “quickly” install and activate SSM on the RPi so I can access them remotely and show the timer online as part of DeepRacer Event Manager (DREM – more on which in another blog post).

Installing SSM on a 32-bit OS on an RPi Zero or 4 was easy, it just works. On a 64-bit OS, however, I was getting errors:

dpkg: dependency problems prevent configuration of amazon-ssm-agent:armhf:
 amazon-ssm-agent:armhf depends on libc6.

dpkg: error processing package amazon-ssm-agent:armhf (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 amazon-ssm-agent:armhf

It took me a while to find the answer, as I didn’t happen to have the time to get an uninterrupted run at fixing the issue and, more importantly, testing the fix. Because I was activating SSM as part of a scripted process, each attempt meant I needed to re-install the OS on the RPi to ensure it was working correctly.

Anyway, the solution was to install libc6:armhf, so my code to install SSM on an RPi 4 running a 64-bit OS is now as follows:

sudo dpkg --add-architecture armhf
sudo apt-get update
sudo apt-get install -y libc6:armhf

mkdir /tmp/ssm
sudo curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_arm/amazon-ssm-agent.deb -o /tmp/ssm/amazon-ssm-agent.deb
sudo dpkg -i /tmp/ssm/amazon-ssm-agent.deb
rm -rf /tmp/ssm

And once installed activate SSM as normal.
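For completeness, “as normal” means registering the agent with a hybrid activation. A sketch of what that looks like, assuming you have already created the activation in the SSM console (the code, ID and region below are placeholders, not real values):

```shell
# Register the agent using a hybrid activation (placeholder values)
sudo systemctl stop amazon-ssm-agent
sudo amazon-ssm-agent -register -code "activation-code" -id "activation-id" -region "eu-west-1"
sudo systemctl start amazon-ssm-agent
```

Once the agent restarts, the RPi should appear as a managed instance in the SSM console.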

Hopefully this helps someone, and if not it will probably help future me.

Goodbye to all that

So as we come to the end of 2019 I found myself back here, mainly because I’m in the process of winding up Wirewool Limited (my freelancing company), and with that comes the fun and games of unpicking all of the hosting, websites, domain name registration and email that I’d been running for a number of customers for up to 8 years.

One of the sites was this one, and that meant I was shutting down where it had been hosted so it needed moving.

The moving part was easy enough (seriously, how hard can it be to move a WordPress site?), but then I had a look at when I had last posted, and what I had posted about in the last couple of years. There is a lot missing and a lot has changed. Some better, some just different; here’s a vague recap of what our hero has been up to….

So starting with the obvious, why shut down Wirewool?

Well, since April 2018 I’ve been gainfully employed by Amazon Web Services as a Solutions Architect. Sure, I could earn more as a freelancer, but only when working, and only if I have work. One of the things I found with freelancing was that it was very much feast and famine, and the idea of freedom of work doesn’t apply when you have bills to pay, kids to feed and a house to pay for. I managed to keep it going for 8 years, and when a decent offer came along (in fact I had a couple of offers at the same time) I took the one that felt like it would be the best for me long term. So far it’s worked out well, and now it’s time to shut down Wirewool and remove something that is costing me money to run, having finally gone through and tracked down all of the services I / Wirewool am using. (Note: something I should’ve done a long time ago; 20:20 hindsight is a wonderful thing.)

Ooh working for Amazon, gosh!

Yep, gone from a one man band to the largest company I’ve ever worked for (and will probably ever work for). It’s certainly interesting, but it’s also pretty awesome. There is the basic stuff you get when you go back to working for someone, like paid holiday and sickness cover, and monthly pay (I still freak out towards the end of each month as the account drains down), but also private healthcare (more on why this is good later). Training and education are massively important, I work with loads of awesome people to help customers, and I’m enjoying presenting, helping to educate groups of people (ex-forces transitioning into a career in IT, and Prince’s Trust “kids”) on cloud, playing with DeepRacer and showing how it can help with learning AI / ML, helping to organise Meetups, and just, well, everything. I even enjoyed going to Vegas in December 2019 for re:Invent (and I hate Las Vegas). I even get to do some code hacking every now and then, and have access to the best cloud toy box on the planet.

Introducing Brian

Everyone gets baggage that they carry around after a time; I just managed to name mine.

So I’d been feeling rubbish for a number of years: I couldn’t shift the weight that I had gained over time (more than just “getting old”), and couldn’t really train without taking days to recover (even after a slow 5km run / walk I’d need 3-4 days for my legs to start working again). I’d been to the doctor 6 years or so ago and said “this isn’t right”. They asked what I thought it might be; “well, worst case, a brain tumor” said I, and oh how we laughed….

….right until the point where I tried again with the doctors early last year. This time around I was taken more seriously and sent off for a load of blood tests, which revealed I had 0 testosterone. Back to the doctors to discuss the result and the possible causes, and it was time to test out the private health care I now had access to (see the earlier note on private health care and the joys of working for a large company). More bloods revealed I had a prolactin level of 116290 (normal is below 500); this was high enough that the day after I’d given the blood to be tested the consultant was called by the vampires, who were slightly alarmed by the results. So something was wrong. An ultrasound revealed nothing out of the ordinary with my testicles (one of the possible causes of 0 testosterone), however an MRI scan revealed a benign brain tumor that a) I christened “Brian” and b) turned out to be a macroprolactinoma approx 28mm in size.

No one wants to hear the phrase “brain tumor” when talking about their health, but some reassurance from my consultant and some digging around showed it was easily sorted (well, probably): here, take these drugs, which have some “interesting” potential side effects (never read the side effects leaflet or google for “side effects of drug I am taking”).

So within the first three months of taking the drugs the prolactin levels were back down near normal, and I was losing weight and able to train. Aside from a few incidents with a black dog (which I haven’t named) I’d say it’s all pretty good; some members of the family may have a different view, and a couple of work colleagues thought I was suffering from a terrible hangover a few times as my body adjusted to the drugs. Anyhow, since starting on the drugs I’ve lost 18 kilos in weight, taken 10 minutes or so off my half marathon PB, and managed to get my Parkrun PB to 23 minutes 13 seconds.

Over the years

So I get to this point every year, look back on my fitness activity and think “should’ve done better.” Except for 2019 I’m feeling pretty good about it, but to make sure I’m not just kidding myself I needed some data, and that meant I first needed to import 7 years of data from Garmin into Strava so all of my activity data was in one place. I actually have data going back to 2008, but it’s a bit sparse, so for the sake of the table below I’ve dropped it.

Importing all the things from Garmin into Strava was made easy thanks to garminexport (well, “easy”: I had a local archive of 1300+ activities that I then had to manually import, 25 at a time, and check were assigned to the correct activity type).

Amusing to note that as I was adding the entries I spotted a 3km run in 2015 that took me 22 minutes. In 2019 I managed to get my Parkrun PB down to 22:13 ;-)

So with the data all in one place it was time to look at it (a bit)…. Using this handy site I was able to quickly see the annual summary data below. The biggest surprise for me is the lack of swimming (oh, and a lot of the riding is made up of bike commuting). The interesting part (for me) is the improvement in my running pace over the last few years.

                          2012      2013      2014      2015      2016      2017      2018      2019
Ride
  Count                     10        12        79       151       163        53        84        74
  Distance (km)            413       262       423       399       672       250       394       350
  Time (hh:mm:ss)     21:53:51  10:50:22  23:56:42  21:09:31  34:24:52  13:34:27  21:50:58  18:00:36
Run
  Count                     58        37        33        31        85        89        71       131
  Distance (km)            358       247       201       211       238       457       270       847
  Time (hh:mm:ss)     35:49:27  26:55:34  23:54:00  23:58:01  30:28:49  52:12:56  27:29:12  78:02:32
  Avg pace (min/km)       6:00      6:32      7:08      6:50      7:42      6:51      6:07      5:32
Swim
  Count                      -         -         -         -         5         1        23        44
  Distance (km)              -         -         -         -         8         1        14        55
  Time (hh:mm:ss)            -         -         -         -  02:34:34  00:46:34  03:57:34  15:40:27
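The average pace row is just total time divided by total distance; a quick sanity check in Python (using the run numbers from the table, with a hypothetical helper name):

```python
def pace_per_km(hms: str, distance_km: float) -> str:
    """Convert a total time (hh:mm:ss) and a distance into an average pace (m:ss per km)."""
    h, m, s = (int(part) for part in hms.split(":"))
    total_seconds = h * 3600 + m * 60 + s
    seconds_per_km = total_seconds / distance_km
    minutes, seconds = divmod(round(seconds_per_km), 60)
    return f"{minutes}:{seconds:02d}"

# 2019: 847 km of running in 78:02:32
print(pace_per_km("78:02:32", 847))  # 5:32
# 2012: 358 km in 35:49:27
print(pace_per_km("35:49:27", 358))  # 6:00
```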

This is due to my “if you want to run faster, run faster” brain wave in the middle of 2018, where I suddenly worked out what I needed to do to stop my parkrun taking so long (and getting longer) and to just generally improve my running, going from a tedious and sometimes boring plod into something that almost feels like running. Just need to apply it to my swimming now ;-)

Pi + Bluetooth = Is the track boss going to have a heart attack?

So I had an idea that fell out of being Track Boss at re:Invent for a few hours each day during the DeepRacer championship: wouldn’t it be interesting to see what the heart rate and step count of the track boss was…?

So a quick bit of DuckDuckGo(ing) later I had a Polar H10 on the way and was looking at how I could get going – this post on RepRage formed the basis of some early work.

So I was able to scan for my device:

$ sudo hcitool lescan
 LE Scan …
 F4:DF:3F:95:DE:EA (unknown)
 F4:DF:3F:95:DE:EA Polar H10 65AAF325

But trying to connect to it failed using hcitool, so I switched to using gatttool with success (connection and data):

$ gatttool -t random -b F4:DF:3F:95:DE:EA -I
 [F4:DF:3F:95:DE:EA][LE]> connect
 Attempting to connect to F4:DF:3F:95:DE:EA
 Connection successful
 [F4:DF:3F:95:DE:EA][LE]> characteristics
 handle: 0x0002, char properties: 0x02, char value handle: 0x0003, uuid: 00002a00-0000-1000-8000-00805f9b34fb
 handle: 0x0004, char properties: 0x02, char value handle: 0x0005, uuid: 00002a01-0000-1000-8000-00805f9b34fb
 handle: 0x0006, char properties: 0x02, char value handle: 0x0007, uuid: 00002a04-0000-1000-8000-00805f9b34fb
 handle: 0x0008, char properties: 0x02, char value handle: 0x0009, uuid: 00002aa6-0000-1000-8000-00805f9b34fb
 handle: 0x000b, char properties: 0x20, char value handle: 0x000c, uuid: 00002a05-0000-1000-8000-00805f9b34fb

So that’s some information from the strap, now to subscribe to notifications to get the heart rate data:

 [F4:DF:3F:95:DE:EA][LE]> char-write-req 0x0011 0100
 Characteristic value was written successfully
 Notification handle = 0x0010 value: 10 40 e4 03 
 Notification handle = 0x0010 value: 10 40 92 03 
 Notification handle = 0x0010 value: 10 40 7e 03 
 Notification handle = 0x0010 value: 10 41 8f 03 76 03

For me I get ~79 notifications before an error; for now, though, getting something back is better than nothing, especially given the second “value” byte is the important one: my heart rate in hexadecimal. So we can (sort of) read the data from the strap, now to do this in code.
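For reference, those notification bytes follow the standard Bluetooth Heart Rate Measurement layout: the first byte is a flags field (bit 0 says whether the heart rate is a uint8 or uint16; bit 4, set here as 0x10, says RR-intervals follow), and with the uint8 format the second byte is the heart rate in bpm. A minimal parser for the values above:

```python
def parse_hr_measurement(data: bytes) -> int:
    """Parse a Heart Rate Measurement notification and return the heart rate in bpm."""
    flags = data[0]
    if flags & 0x01:
        # uint16 heart-rate format (bytes 1-2, little-endian)
        return int.from_bytes(data[1:3], "little")
    # uint8 heart-rate format: byte 1 is the heart rate
    return data[1]

# First notification from the gatttool session above: 10 40 e4 03
print(parse_hr_measurement(bytes.fromhex("1040e403")))  # 64
```

So the `10 40 e4 03` above decodes to 64 bpm, with `e4 03` being an RR-interval.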

Python is my current go to language so it was time to add some libraries and see what we could get working:

$ sudo apt-get install -y python3 python3-pip libglib2.0-dev
$ sudo pip3 install bluepy

Adding in an MQTT broker and the Python libraries so the data can be used in a presentation layer:

$ sudo apt-get install -y mosquitto mosquitto-clients
$ sudo systemctl enable mosquitto.service
$ sudo pip3 install argparse paho-mqtt

My code is still bombing out with a connection error, though, after ~147 notifications from bluepy. On the up side I’m not the only one hitting this issue; on the down side there doesn’t appear to be a decent fix. For me, running:

$ hcitool con
Connections:
         < LE F4:DF:3F:95:DE:EA handle 64 state 1 lm MASTER 

To find out the connection handle (in this case 64) followed by:

$ sudo hcitool lecup --handle 64 --min 250 --max 400 --latency 0 --timeout 600

Fixes the problem (todo: make this happen using magic)
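A rough sketch of what that magic could look like in Python: parse the `hcitool con` output for the handle, then shell out to `lecup`. The subprocess calls assume the output format shown above and the same parameters as the manual fix; only the parsing is exercised here.

```python
import re
import subprocess

def parse_le_handle(hcitool_output: str) -> int:
    """Extract the LE connection handle from `hcitool con` output."""
    match = re.search(r"\bLE\b.*\bhandle (\d+)", hcitool_output)
    if match is None:
        raise ValueError("no LE connection found")
    return int(match.group(1))

def bump_connection_parameters() -> None:
    """Find the current LE connection and relax its parameters via lecup."""
    out = subprocess.run(["hcitool", "con"], capture_output=True, text=True).stdout
    handle = parse_le_handle(out)
    subprocess.run(
        ["sudo", "hcitool", "lecup", "--handle", str(handle),
         "--min", "250", "--max", "400", "--latency", "0", "--timeout", "600"],
        check=True,
    )
```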

So with the code below:

import argparse
import datetime
import json

import bluepy.btle as btle
import paho.mqtt.client as mqtt

packets = 0
hr = None
timestamp = None

class MyDelegate(btle.DefaultDelegate):
    def __init__(self):
        btle.DefaultDelegate.__init__(self)

    def handleNotification(self, cHandle, data):
        # Called by bluepy for each notification; byte 1 is the heart rate in bpm
        global packets, hr, timestamp
        packets += 1
        hr = str(data[1])
        timestamp = datetime.datetime.now().time()
        print("time: {} packet: {} Handle: {} HR (bpm): {}".format(timestamp, packets, cHandle, data[1]))

parser = argparse.ArgumentParser(description="Connect to Polar H10 HRM")
parser.add_argument('device', type=str, help='HRM strap device ID')
args = parser.parse_args()
print('args: {}'.format(args.device))

# Connect to the strap (the H10 uses a random address type)
p = btle.Peripheral(args.device, addrType="random")
p.setDelegate(MyDelegate())

# Start heart rate notifications by writing 0x0100 to the first descriptor
# of the first characteristic of the Heart Rate service (0x180D)
service_uuid = 0x180D
svc = p.getServiceByUUID(service_uuid)
ch = svc.getCharacteristics()[0]
desc = ch.getDescriptors()[0]
desc.write(b"\x01\x00", True)

# MQTT broker to publish the readings to
broker_url = "10.10.10.71"
broker_port = 1883

client = mqtt.Client()
client.connect(broker_url, broker_port)

# Listen for notifications and publish each reading as JSON
while True:
    if p.waitForNotifications(1.0):
        payload = json.dumps({'time': str(timestamp), 'heart_rate': hr})
        client.publish(topic="TrackBossHRM", payload=payload, qos=0, retain=False)

I have continuous heart rate data getting added into an MQTT based queue for use elsewhere…

$ mosquitto_sub -d -t TrackBossHRM
 Client mosqsub|2671-raspberryp sending CONNECT
 Client mosqsub|2671-raspberryp received CONNACK (0)
 Client mosqsub|2671-raspberryp sending SUBSCRIBE (Mid: 1, Topic: TrackBossHRM, QoS: 0)
 Client mosqsub|2671-raspberryp received SUBACK
 Subscribed (mid: 1): 0
 Client mosqsub|2671-raspberryp received PUBLISH (d0, q0, r0, m0, 'TrackBossHRM', … (47 bytes))
 {"time": "17:26:43.999799", "heart_rate": "72"}
 Client mosqsub|2671-raspberryp received PUBLISH (d0, q0, r0, m0, 'TrackBossHRM', … (47 bytes))
 {"time": "17:26:44.997310", "heart_rate": "72"}

Now to do something with it.
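Whatever ends up consuming the queue (a dashboard, a WebSocket bridge, etc.) will need to decode that JSON payload first; a tiny helper, using one of the payloads from the mosquitto_sub session above (the function name is just for illustration):

```python
import json

def parse_reading(payload: bytes):
    """Decode a TrackBossHRM message payload into (time string, heart rate in bpm)."""
    reading = json.loads(payload)
    return reading["time"], int(reading["heart_rate"])

# One of the payloads from the mosquitto_sub session above
print(parse_reading(b'{"time": "17:26:43.999799", "heart_rate": "72"}'))  # ('17:26:43.999799', 72)
```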