Actions on Google for my Tesla

I’ve been meaning to add the ability to control the AC in my car from Google Home for a while but never got around to it. A few nights ago I finally sprang into action.

Google Home is basically Google’s take on the smart home. It is an ecosystem of things: the Google Home smart speaker is integrated with the Google Home app, which is integrated with the Google Assistant, and so on. In the Google Home app, you can set up compatible devices you have around the house, such as lights, power outlets, and thermostats. Once they are set up in the app, you can control them using Google Assistant or Google Home smart speakers. So, if I want to be able to turn on the AC in my car by saying “Ok Google, turn on car AC”, I need a device in Google Home named “car AC”, and when I issue the command “turn on”, it should send a signal to my car to turn the AC on. To do that I started looking at Actions on Google.

(Do I really need to build something for this? No, there are a couple of existing Actions on Google integrations you can simply add. However, when you do, you basically hand over control of your car to a third party. With that control and physical access to the car, someone could essentially walk up to the car, unlock it, and drive away. Now, I’m not too paranoid about this, even though I probably should be. But in this case, instead of giving someone the chance to drive away with my car, I’d much rather learn how to build this myself. After all, it’s really quite easy.)

Actions on Google lets you “integrate with Google Assistant”. This currently means that you can extend your mobile apps to work with the voice assistant, create conversational apps (basically apps that you use through voice or text… yes, chat bots), or connect to smart devices (i.e. Google Home). For my purposes, I do not want a conversational app. If I did, I would have to say things like “Ok Google, talk to my car. -Ok, talking to your car. -Hi, I’m your car, what can I do for you? -Turn on AC. -Ok, turning the AC on”. Zzz. I’d much rather just open the app on my phone and press the climate button. So instead I went for smart device integration.

Actions on Google for smart home devices is really quite simple. You need an OAuth server that can issue a token, and an HTTPS endpoint that responds to commands issued by Google Assistant. We are simply building a service that acts as a server for a particular device type. There are only four intents: Sync, Query, Execute, and Disconnect. With an access token issued by your OAuth server, Sync should respond with a list of devices the user possesses. Query should respond with the state of the queried devices. Execute should carry out whatever the particular command is (e.g. TurnOn should turn on the device), and Disconnect should do whatever needs to be done when the user unlinks their Google Home from your service.

So I first implemented the OAuth server. There are essentially two routes required: auth and token. The auth route should let a user authenticate and tell Google that it is ok for your service to share access to their devices. The server responds to Google through a redirect that includes an authorization code. The token route should take an authorization code and return an access token (as well as a refresh token that can be used to refresh the access token). For more information go here.

In my case I let the user authenticate with their Tesla account, and use their Tesla access token as the access token (together with some other data). This means I never have to store any information on my server; Google stores it in the token instead.
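To make the stateless-token idea concrete, here is a minimal Node.js sketch. The function names and payload shape are my own illustration, not the actual code, and in practice the payload should also be encrypted or signed rather than plain base64:

```javascript
// Pack the Tesla credentials into the access token handed back to Google,
// so the server itself never has to store anything. (Illustrative sketch.)

function issueAccessToken(teslaAuth) {
  // teslaAuth = { accessToken, refreshToken, ... } from the Tesla login
  return Buffer.from(JSON.stringify(teslaAuth)).toString('base64');
}

function unpackAccessToken(token) {
  // Every request from Google carries this token; decode it to recover
  // the Tesla credentials needed to call the Tesla API.
  return JSON.parse(Buffer.from(token, 'base64').toString('utf8'));
}

const token = issueAccessToken({ accessToken: 'tesla-token', refreshToken: 'r1' });
const creds = unpackAccessToken(token);
// creds.accessToken === 'tesla-token'
```

The design choice is that Google becomes the storage: whatever you need on later requests must fit in the token it hands back to you.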

Next, the action route. Google issues POST requests to the route with the access token in an Authorization header. The POST body is JSON and includes a requestId and a list of intents, where each intent specifies whether it is Sync, Query, Execute, or Disconnect.
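The route itself can then be little more than a dispatch on the intent name. A minimal sketch, with the handler bodies stubbed out (in the real service they call the Tesla API):

```javascript
// Google POSTs { requestId, inputs: [{ intent, payload }] }.
// Look up a handler per intent and wrap its result with the requestId.

const handlers = {
  'action.devices.SYNC': () => ({ agentUserId: 'me', devices: [] }), // stubbed
  'action.devices.QUERY': () => ({ devices: {} }),
  'action.devices.EXECUTE': () => ({ commands: [] }),
  'action.devices.DISCONNECT': () => ({}),
};

function handleFulfillment(body) {
  const input = body.inputs[0];
  return { requestId: body.requestId, payload: handlers[input.intent](input) };
}

const res = handleFulfillment({ requestId: '123', inputs: [{ intent: 'action.devices.SYNC' }] });
// res.requestId === '123'
```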

Sync does not send any additional data, but simply expects the devices associated with the user to be returned. For this, I look up the vehicles associated with the Tesla account (as given by the access token) and map the device data to the shape Google expects, including an id that can be used to identify the device when issuing commands to the Tesla API later. For each device, Google also expects the type of device and its traits. The device type is not really important as far as I can tell, although it dictates what the device looks like in the Google Home app. The traits, however, specify what commands Google Assistant can send. In my case I want to be able to turn the heat on or off, so I picked the device type action.devices.types.HEATER (CAR doesn’t exist as a type), and the trait action.devices.traits.OnOff.
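A sketch of the mapping for one vehicle (the field values are illustrative; the object shape follows the smart home SYNC response format):

```javascript
// Map a vehicle from the Tesla API into the device object Google expects in
// the SYNC response. The id is reused later to address the Tesla API.

function toSyncDevice(vehicle) {
  return {
    id: String(vehicle.id),
    type: 'action.devices.types.HEATER',       // no CAR type exists; HEATER is close enough
    traits: ['action.devices.traits.OnOff'],   // lets Google Assistant send on/off commands
    name: { name: 'Car AC' },
    willReportState: false,                    // Google polls state via QUERY instead
  };
}

const device = toSyncDevice({ id: 42 });
// device.id === '42'
```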

For Query, Google sends a list of devices it wants the status of. Here I simply iterate through the list and fetch the climate state of each device. (The Tesla API has four distinct states fetched through four different routes: climate, drive, vehicle, and charge.) With a device trait of OnOff, Google expects a state named on to be true or false. I set it to true if the climate is on.
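Roughly like this, where `fetchClimateState` is a stand-in for the real Tesla API call and `is_climate_on` is the climate-state field I map to Google’s `on` state:

```javascript
// Sketch of the QUERY handler: for each requested device, fetch the climate
// state and map it to the OnOff state Google expects.

async function handleQuery(payload, fetchClimateState) {
  const devices = {};
  for (const { id } of payload.devices) {
    const climate = await fetchClimateState(id);
    devices[id] = { online: true, on: Boolean(climate.is_climate_on) };
  }
  return { devices };
}

// Illustration with a fake Tesla API:
handleQuery({ devices: [{ id: '42' }] }, async () => ({ is_climate_on: true }))
  .then((r) => console.log(r.devices['42'].on)); // true
```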

Finally (I don’t bother with Disconnect as there is nothing to do), I implemented Execute. Execute is a bit of a handful, as it sends a list of commands, each with a set of devices and executions. Here I cheat majorly and assume there will be only one command, with one device and one execution. The execution of action.devices.commands.OnOff carries a parameter named on that is true when the command is to turn the device on. Depending on this value I either turn the climate on or off through the Tesla API. Google then expects you to respond with the final status of the commands for each device.
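The one-command shortcut looks roughly like this sketch, where `setClimate` stands in for the Tesla API calls that start or stop the HVAC:

```javascript
// Assume a single command with a single device and a single execution
// (action.devices.commands.OnOff), then report the final state back.

async function handleExecute(payload, setClimate) {
  const command = payload.commands[0];
  const device = command.devices[0];
  const execution = command.execution[0];
  const on = execution.params.on;
  await setClimate(device.id, on);
  return {
    commands: [{ ids: [device.id], status: 'SUCCESS', states: { on, online: true } }],
  };
}
```

Handling the full list of commands, devices, and executions would just mean nesting a few loops here, but for a single car the shortcut holds.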

Lastly, when setting up the Action in the Actions on Google Console, you simply add the auth and token routes for your OAuth server, and the action route for your Action. When complete, hitting test sets the action up for use through your own Google Assistant only.

All in all, there are a few concepts to learn around Actions on Google, but once you understand those, controlling something through Google Assistant is really as simple as writing a small REST API. Next I might add more capabilities, such as exposing both the car heater and the car charger, so that I can start or stop charging at home. I might also add the capability to set the temperature I want in my car, although I very rarely change it, so that is very far down my todo list.

And here’s the code.

Posted in Uncategorized

I am an academic again!

After a little more than two years at Iteam, I am now back in academia! This week I’m starting my position as a senior lecturer at the University of Gothenburg. I will be affiliated with the division of human-computer interaction at the department of applied IT. The division is just starting as I join, so I will be working together with Alexandra Weilenmann to establish it.

I am currently interested in understanding how mobile technology (and IT in general) affects our well-being (both positively and negatively), and will continue looking in that direction. I’m also very keen on collaborating on this with other researchers across disciplines, so please reach out if you find this topic interesting.

I am excited about being back as an academic, both to do research and to teach and inspire students.

Posted in Research

Template for PoC using NodeJS, React, and Express

Quite often I need to try a simple idea for a new service or app. My weapon of choice is currently Node.js and React. It is fairly easy to quickly hack together a server backend. Using create-react-app or Next.js, it is also quite easy to write a React frontend for said backend.

This is all well and good for a bit while you try out the idea for yourself. But no idea is really worth anything until it is shared with others. So now you need to host both the frontend and backend somehow, and point the frontend at the backend. This typically involves too many steps to make it feasible for the very simplest ideas.

What I typically want is a single server running somewhere that hosts both the backend and the frontend. Ideally just a Docker image. And I want this to just work, without bothering with builds and such. After having done this for a few projects recently, I decided to create a template so that I can reuse it whenever I need to try out the next idea.

The result is in this GitHub repo.

To use it, copy the repo. Run npm install and then npm run dev. (I use Node 10 and there is a .nvmrc in the repo.)

You now have a server on port 3000 that will restart when you change the server files, and will rebuild the client when you change the frontend. You are basically set to start materialising your idea.

Once you want to share it, you can either use ngrok, or build a Docker image using the Dockerfile in the repo and host it wherever you typically host things. When the image starts, it builds the frontend. I do it this way, instead of at image build time, so that environment variables can be set more easily on the hosting platform.

The repo sets up webpack 4, Babel 7, React, Express, and Jest. The server runs in development mode or production mode. In development mode the server restarts when the backend code changes, and rebuilds the frontend using webpack when client code changes. In production mode the server builds the frontend on startup. It is called production mode, but I would not recommend it for actual production-ready applications. For a PoC, however, it works very well.

Feel free to open issues or send PRs on the GitHub repo.


Posted in Coding

Low power sensors using nRF24L01+ and ATtiny

Smart home devices are expensive. A simple temperature sensor costs quite a bit. If you want one in each room it quickly gets costly. If you want door sensors on all doors and windows it’s pricey.

Cost is not necessarily a problem. If you know what you need, you can make a budget. But if you don’t know what you need, cost stands in the way of experimentation. I want to be able to put up a bunch of stuff in the house and see what I can do with it. It’s hard to justify that cost to anyone but oneself.

So I’ve been playing around, on and off, with different ways to create cheap sensors. Not just to have the sensors, but because it is fun to learn.

This is something of a report on my latest adventures within this.


I started out with Arduino Pro Minis as they are easy to program and test things with. And it all starts with communication. My idea of a sensor network consists of nodes that are sensors and/or actuators (think lights and motors), and gateways that can receive sensor readings and send commands to actuators. Since cost has been an issue, I’ve come to fall in love with the nRF24L01+ – super cheap and low power radio modules.

Arduinos are super convenient, but they are entire boards. You can do a bunch of things to make them power efficient. However, my thinking is that all I really need is the MCU. So instead I want my sensors to run on ATtiny84 chips. They are a little harder to flash, but you can use the same tools as for an Arduino. Libraries even come with ATtiny compatibility, so from a software standpoint there is hardly any difference. You have less memory, but these sensors only read a value and hand it to the radio module, so I don’t need much.

Getting radio to work

The nRF chip communicates using SPI. It requires no more than 3.6V – more and it breaks. It draws very little current when idle, on the order of microamps. But when it wakes up to send data, it quickly draws around 15mA. This means the power source must allow a quick power draw without causing a massive voltage drop. A capacitor helps. (The capacitor acts as a low pass filter, smoothing out a potential voltage drop.) In my setup I power the sensors either with 5V from a wall-connected DC transformer, or by battery. I therefore use a 3.3V step-down regulator and a 0.1uF capacitor. I’m not sure I actually need these when powering with batteries and will test that soon. I use two AA cells as batteries. I’ve occasionally been successful using a CR2032, but it often fails. I think I need better capacitors. A CR2032 can’t deliver a high peak current, and so can’t supply enough power when sending without too much of a voltage drop (causing the radio to stop working). A bigger one like the CR2450 should work, but I haven’t laid my hands on any yet.

Powering the device is the main thing. After that comes wiring and software, but they are easy. For software I use the RF24 library by TMRh20, which details the wiring. Make sure you get CE and CSN correct – I’ve flipped those on several occasions.

I’ve had a lot of issues getting ack messages through. I think it has to do with the power source and can possibly be fixed with capacitors. I still need to work on this. The data gets through, but it would be nice to know whether it was received or not, especially for actuators. Most sensors will send their data again in a while, so a dropped message is no biggie – except for door sensors, which I’d like to resend every second until a gateway receives them.

My gateway setup consists of an Arduino Pro Mini communicating with a computer over serial. Byte data received by the Arduino over radio is sent in ASCII over the serial connection (which simplifies debugging). Received data is then published over MQTT for anyone to subscribe to. The Arduino also reads from the serial port, and any data received is sent out over radio: an address and 32 bytes of data. There is very little logic on the gateway.
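The computer side of the gateway then boils down to turning serial lines into MQTT publishes. A rough Node.js sketch of the translation step (the line format and topic scheme here are my own illustration, not the exact format in the repo):

```javascript
// Parse an ASCII line from the Arduino, e.g. "2 21 5": the first number is the
// node address, the rest are the payload bytes. Returns the MQTT topic and
// message to publish.

function serialLineToMqtt(line) {
  const [address, ...bytes] = line.trim().split(/\s+/).map(Number);
  return { topic: `nrf/${address}`, message: JSON.stringify({ bytes }) };
}

const out = serialLineToMqtt('2 21 5');
// out.topic === 'nrf/2', out.message === '{"bytes":[21,5]}'
```

The real gateway wires this between a serial-port reader and an MQTT client; keeping the translation a pure function like this makes it trivial to test without hardware.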

The idea is to have several gateways around the house. Using Raspberry Pis would make them cheap. Using an ESP8266 or ESP32 would make them even cheaper.

The gateway computer code is written in Node.js.

Obviously I could connect the nRF chip directly to the GPIO on an RPi, but this is simpler. I also have a bunch of computers lying around acting as gateways now.

Getting them to sleep

While the radio’s power draw is low when idle, the MCU still draws power unless put to sleep. (Otherwise it just churns away at whatever clock speed you’ve configured, looking for something to do. Delays are merely loops doing nothing but counting clock cycles.) So, we must tell it to go to sleep, getting it to use as little power as possible, while still being able to wake it up occasionally to do something useful.

When sleeping, the sensors barely use any current at all. Somewhere I read that similar setups use less current than the battery loses by just lying around. This would mean that, for example, a door sensor on a window that never opens will function for the lifetime of the battery. That is quite impressive.

However, if this is the case, how will I know if the sensor is still working? To fix this I’m planning to also wake it up occasionally to send a heartbeat. That way I will be able to detect when a gateway hasn’t heard from a sensor in a while and act accordingly.

Waking up

Getting it to sleep is fairly easy: set some bits in some registers and off to sleep it goes. Getting it to wake up is done through interrupts. An interrupt lets an MCU halt its normal control flow to do something different for a while. It also happens to wake up a sleeping MCU.

I use two types of interrupts in my sensors so far. The first is the watchdog timer. It essentially lets you specify a duration after which you want to be interrupted – anything from a few milliseconds to a few seconds (8s on the ATtiny). The second is the pin change interrupt, which alerts the MCU that the logical value on a pin has changed. I use both in essentially the same way: I set up the interrupt I want, power down the radio, and go to sleep. When the MCU wakes up, I power up the radio and continue the program, which typically involves reading the value of an external sensor and sending it to the radio module.

An important lesson I learned recently was to implement an ISR callback for each type of interrupt that is enabled. E.g. there are two pin change interrupt handlers, PCINT0 and PCINT1, and one for the WDT.

In the case of the temperature sensor, I ask the MCU to sleep for 8 seconds. I do that 6 times, and then send the temperature. The reason is that I don’t need to know the temperature every 8 seconds – once a minute is fine, and 6×8s is close enough.

On the door sensor I set the interrupt to react to the door pin – the pin that is high when the door is open and low when closed. This means that the sensor is basically sleeping almost all the time.


So far I’ve got very power efficient nodes for temperature and doors. They send data to a gateway that publishes it over MQTT. I’ve implemented sending data to nodes, but have no nodes set up for it yet. Xmas lights might be a candidate, as I have a bunch of WS2812B lights lying around.

For the code for all this and more details, check out my repos:

Nrf-gateway – Node.js server gateway
Nrf-nodes – Arduino code for the nodes

Posted in Coding, Hardware

Home Assistant addon for backup to S3

I’ve been playing around with Home Assistant lately. I was surprised not to find a way to make external backups automatically. Most of what I read recommended making snapshots and copying them over a Samba mount manually. I wanted a way to directly make a backup and upload it to AWS S3. So late last night I threw an addon together. It can be found on GitHub here.

Posted in Uncategorized

At home Tesla dashboard in Serverless

So on the weekend of my birthday I finally got the chance to sit down and work on a little project I’ve been wanting to do for a while – a gift for myself. Essentially I wanted to build a simple app to show on an iPad in the hallway of our house, where my wife could see where my car is and how long it would take it to drive home in current traffic conditions. It would also allow me to make sure it was plugged in to charge overnight, as well as let me turn on the AC in the morning if it was cold outside. (Obviously the last part could be automated, but I don’t always drive in, so it would sometimes be a waste of electricity. Plus, it forces me to check the temperature and dress appropriately.) I completed the first version on the Sunday and have made some minor upgrades since, but the core functionality is done.

So how is it built? Well, my car is a Tesla Model S, and so it is essentially a computer on wheels. It has an accompanying phone app that lets me control all kinds of things, as well as shows me its location so I can find it in a parking lot. Anything controllable from a phone means there is an API somewhere that can do the same things the app can. And Tesla is no exception. The API is not official, but fortunately it has already been documented by craftier people than myself, and so far Tesla has made few attempts to prevent it from being used, so my hunch is that they kind of don’t mind.

The overall architecture of the system is essentially a React app as a frontend, talking to an API that serves the app with info about the car, an ETA for when I might be home, as well as controls for turning the AC on and off.

I have recently been working with Serverless and serverless architectures, so I wanted to use that here. In short, it means you write very small pieces of software that are invoked for a short period of time and then die. They run on servers somewhere, but you don’t have to care about where or how those are set up – just trust that your code will run when something triggers it. (There are books written on this, and this is probably the shortest explanation I can manage, so YMMV.)

The React app is nothing special. It fetches the last state of the car every 60s, and that’s about it. I built it using Next.js, which I use to export it to an S3 bucket where it is served as a static site. It looks something like this:

Tesla home dashboard


The backend is where the fun begins. There are essentially three components (plus some supporting ones). The first is a function that gets the car state from the Tesla API. Five routes are used, and the results are aggregated into a single state object. This object is then put on a message queue in AWS SQS. Another function, which saves the state to a DynamoDB table, is triggered by new messages on that queue. The update function is triggered every minute using AWS scheduled events. One minute is the smallest time frame for scheduled events; it is enough for now, but I might need to change this in the future.
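The aggregation step is essentially just merging the partial responses into one object before it goes on the queue. A hedged sketch (the state shape and field names here are my own illustration, not the actual repo code):

```javascript
// Merge the partial states fetched from the Tesla API routes into a single
// state object, stamped with the vehicle id and a timestamp, ready for SQS.

function aggregateState(vehicleId, parts) {
  return { vehicleId, timestamp: Date.now(), ...parts };
}

const state = aggregateState(42, {
  climate: { is_climate_on: false },
  drive: { latitude: 57.7, longitude: 12.0 },
  charge: { battery_level: 80 },
});
// state.charge.battery_level === 80
```

Because the function producing this object only publishes to SQS, the DynamoDB writer (and any future consumers) can evolve independently of it.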

Once the data is in the database, the last piece is an HTTP GET route that fetches the last state to display in the dashboard. Once this was done, it was simple to add two routes for turning the climate on and off. I later added a route for fetching the locations of the previous hour, shown on the map as a plot of the route I’ve taken so far, which gives my wife additional cues as to where I might be heading.

Using the Serverless framework, all components are neatly described in a serverless.yml file. Each component is dead simple and easy to test. Publishing the state to the queue, rather than saving it directly, means I can now write another function that also listens to the queue and emits new events for particular state changes. This could, for example, allow me to write a function that spots when I park somewhere other than home or work and sends me a push notification that opens the correct parking app for the location. (Or ideally, get the parking meter companies to open up their APIs so my car can pay for me automatically.)

In all this, I never had to touch a server configuration. I could focus on the code doing its thing.

Btw, here’s the code on GitHub.

Posted in Coding

In Sweden, Digital strategist at Iteam

I have not updated this in a loooooong time. Figured I should at least add some minor updates before I write any other posts.

On January 28, 2017, my family and I left Glasgow and Scotland to move to Sweden. We had no plans for what to do, so I did some freelance work. (30 years of programming has some benefits these days.) Not long after, I decided to join Iteam as a digital strategist. It meant I left academia for industry. I no longer do research, but instead help companies and organisations think in digital terms – what change digital means for existing business and practices. My long stretch of looking into the future by applying research through design now helps me see opportunities and possibilities for existing businesses and organisations as well as new enterprises. My long experience of building stuff also comes in handy at times, as I mentor more junior colleagues and help strengthen our technology skills.

I now live in Veddige, close to Varberg, south of Gothenburg, on the Swedish west coast. Feel free to give me a shout if you want to meet up!

Posted in Uncategorized

Workshop at NordiCHI on Mobile Wellbeing

We will organise a workshop at NordiCHI later this year in Gothenburg, Sweden. The workshop is called Mobile Wellbeing and focuses on mental wellbeing and the use and design of mobile technology. It is organised by me, John Rooksby (UK), Alexandra Weilenmann (SE), Thomas Hillman (SE), Pål Dobrin (SE), and Juan Ye (UK). If you are interested in the workshop theme, consider writing a position paper and joining us in Gothenburg on October 23. The deadline for position papers is August 25.

You should visit the workshop website for more information, but at the workshop we aim to discuss the following three questions:

  • In what ways do mobile technologies affect mental wellbeing?
  • How can mobile technologies be designed to support and improve mental wellbeing and to mitigate negative effects?
  • What strategies and practices can be developed for using mobile technology in ways that do not harm and instead support improvement of our wellbeing?

See you in Göteborg!


Posted in Uncategorized

Forget-me-not: Honorable Mention at CHI 2016

Last week I went to San Jose for a week to attend CHI. Among other things, I got to present my note Forget-me-not: History-less Mobile Messaging. It received an Honorable Mention, which is given to the top 5% of papers and notes at the conference.

The paper is based on work done by a group of level 3 students. Their group project was to design, build, and study a mobile text messaging app without history. This is what they did, and it turned into the app Forget-me-not. The student project lasted a year; along the way the students received best presentation at the intermediate project presentations at half-time, and finally best L3 project in Computing Science in the school. I could not be prouder of these students. Topping that off by also turning the work into a paper that receives an honorable mention surely is the cherry on the cake!

The final paper discusses mobile text messaging and the role of messaging history. To do that, we study what happens to mobile text messaging when there is no history. By interviewing 10 participants after they had used the app for 2 weeks, we gain insights into how they perceive the app and messaging through it. We found that messaging without history requires effort, but allows users to be relaxed about what they write. History turns out to be useful through the ability to “scroll up” to see past messages, which also allows for distractions; not having history instead makes messaging more engaging. We also discuss how not having history allows for sending messages you don’t want on record, such as bank details, or plans for a secret birthday party. Read the whole paper, where we describe the design of the app and discuss the method of deliberately removing history in order to study it.


Posted in Uncategorized

The Delay Experiment

One of my summer interns, Ivaylo Lafchiev, is working on an interesting project over the summer, looking at the effect delays might have on mobile phone use. He has developed an Android app that introduces a delay when the phone is unlocked. The effect is that the user has to wait for a short while (seconds) before the phone becomes available. The idea is to vary the duration of the delay to see if we can notice any effects on the overall usage of the device (for instance, whether the amount of time spent on the phone changes).

The app is now ready to be used in an experiment. It is available on the app store, and the hope is that it will attract a few users who keep it installed for at least a few weeks. This is extremely risky – why would anyone keep an app that makes the phone harder to use? The hope is that people do want to become more conscious of their phone use, and are therefore willing to participate in the experiment. The question, then, is how the data will be biased by participants already wanting to change their phone use. We are trying to mitigate (and investigate) this bias by first setting the delay to a short duration, and after some time changing it remotely to see if we can spot a difference in phone behaviour. By comparing use before and after the change, and by changing the delay differently for different users, we hope to gather evidence as to whether it has an effect or not.

I’ve made two personal observations from having the app installed myself. First, each time I unlock the phone, the delay serves as a reminder that I’m about to use my phone. This might seem like a weird thing, as the fact that I’m using the phone should be a reminder in itself. However, I tend to use the phone whenever I have nothing else to do: waiting for the subway, or waiting in line at Starbucks, for instance. With the delay, I’m reminded, and then given a short moment to contemplate whether I actually want to use the phone or should just take a few seconds to do nothing. The longer the duration, the more annoying the app is, but it also makes me more aware of my phone use. When the duration is long enough (more than 3 seconds or so) I start to think before using my phone, even before I take it out of my pocket. All of a sudden I remember that I will have to wait a few seconds before the phone becomes available, and I often choose not to subject myself to it, because the thing I was going to do with the phone was pointless anyway.

I’m looking forward to seeing what happens with this experiment. Is anyone going to download and install the app? Is anyone going to keep it installed long enough for us to collect data on their behaviour? Only time will tell. But until then, I will be more conscious and mindful of my phone use – until Ivaylo makes the duration so long that I uninstall the app and go back to mindlessly filling every void of my life with mobile phone use.

You can find more information about the project, and download the app, here.

Posted in Research