Low power sensors using nRF24L01+ and ATTiny

Smart home devices are expensive. A simple temperature sensor costs quite a bit, and if you want one in each room, plus sensors on all doors and windows, it quickly gets costly.

Cost is not necessarily a problem. If you know what you need, you can make a budget. But if you don’t know what you need, cost stands in the way of experimentation. I want to be able to put up a bunch of stuff in the house and see what I can do with it, and it’s hard to justify that kind of cost for something that only benefits oneself.

So I’ve been playing around, on and off, with different ways to create cheap sensors. Not just to have the sensors, but because it is fun to learn.

This is something of a report on my latest adventures in this area.

Hardware

I started out with Arduino Pro Minis as they are easy to program and test things with. And it all starts with communication. My idea of a sensor network consists of nodes that are sensors and/or actuators (think lights and motors), and gateways that can receive sensor readings and send commands to actuators. Since cost has been an issue, I’ve come to fall in love with the nRF24L01+ – super cheap, low power radio modules.

Arduinos are super convenient, but they are entire boards. You can do a bunch of stuff to make them power efficient, but my thinking is that all I really need is the MCU. So instead I want my sensors to run on ATTiny84 chips. They are a little harder to flash, but you can use the same Arduino tools. Libraries even come with ATTiny compatibility, so from a software standpoint there is hardly any difference. You have less memory, but these sensors only read a value and hand it to the radio module, so I don’t need much.

Getting radio to work

The nRF chip communicates using SPI. It requires no more than 3.6V – more and they break. It draws very little current when idle, on the order of microamps, but when it wakes up to send data it quickly draws around 15mA. This means the power source must handle a quick current draw without a massive voltage drop. A capacitor helps, acting as a local reservoir that smooths out the voltage dip. In my setup I power the sensors either with 5V from a wall-connected DC transformer, or by battery. I therefore use a 3.3V regulator and a 0.1uF capacitor. I’m not sure I actually need these when powering with batteries and will test that out soon. I use two AA cells as batteries. I’ve occasionally been successful using a CR2032, but it often fails; I think I need better capacitors. A CR2032 can’t deliver a high peak current, so it can’t supply enough power during transmission without too much of a voltage drop (causing the radio to stop working). A bigger one like a CR2450 should work, but I haven’t laid my hands on any yet.

Powering the device is the main thing. After that comes wiring and software, but those are easy. For software I use the RF24 library by TMRh20, whose documentation details the wiring. Make sure you get CE and CSN right – I’ve flipped those on several occasions.

I’ve had a lot of issues with getting ack messages through. I think it has to do with the power source and can possibly be fixed with capacitors; I still need to work on this. The data gets through, but it would be nice to know whether it was received or not, especially for actuators. Most sensors will send their data again in a while, so a dropped message is no big deal – except for door sensors, which I’d like to resend every second until received by a gateway.

My setup consists of an Arduino Pro Mini communicating with a computer over serial. Essentially, byte data received by the Arduino over radio is sent as ASCII over the serial connection (which simplifies debugging). Received data is published on MQTT for anything to subscribe to. The Arduino also reads from the serial port, and any data received there is sent out over radio. It takes an address and up to 32 bytes of data. There is very little logic on the gateway.

The idea is to have several gateways around the house. Using Raspberry Pis would make them cheap; using an ESP8266 or ESP32 would make them even cheaper.

The gateway computer code is written in Node.js.

Obviously I could connect the nRF chip directly to the GPIO on an RPi, but this is simpler. I also have a bunch of computers lying around acting as gateways now.

Getting them to sleep

While the radio’s power draw is low when idle, the MCU still draws power unless put to sleep. (Otherwise it just churns away at whatever clock speed you’ve configured, looking for something to do. Delays are merely loops doing nothing but counting clock cycles.) So we must tell it to go to sleep, getting it to use as little power as possible, while still being able to wake it up occasionally to do something useful.

When sleeping, the sensors barely use any current at all. Somewhere I read people saying that similar setups use less current than the battery loses through self-discharge just lying around. This would mean that e.g. a door sensor on a window that never opens will function for the entire lifetime of the battery. That is quite impressive.

However, if this is the case, how will I know if the sensor is still working? To fix this I’m planning to also wake it up occasionally to send a heartbeat. That way I can detect when a gateway hasn’t heard from a sensor in a while and act accordingly.

Waking up

Getting it to sleep is fairly easy: set some bits in some registers and off to sleep it goes. Getting it to wake up is done through interrupts. An interrupt is something that allows an MCU to halt its normal control flow to do something different for a while. It also happens to wake up a sleeping MCU.

I use two types of interrupts in my sensors so far. The first is the watchdog timer. It essentially lets you specify a duration after which you want to be interrupted, anything from a few milliseconds to a few seconds (up to 8s on the ATTiny). The second is the pin change interrupt, which alerts the MCU that the logical value on a pin has changed. I use them in essentially the same way: I set up the interrupt I want, power down the radio, and go to sleep. When the MCU wakes up I power up the radio and continue the program, which typically involves reading the value of an external sensor and sending it to the radio module.
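For the watchdog case, the wake-sleep cycle looks roughly like this – untested, Arduino-flavoured pseudocode, assuming the RF24 library’s powerDown/powerUp calls and avr-libc’s sleep and watchdog macros (note that the watchdog register is called WDTCSR on the ATTiny84 but WDTCR on some other ATTinys):

```
#include <avr/sleep.h>
#include <avr/wdt.h>
#include <avr/interrupt.h>

ISR(WDT_vect) {}                 // empty handler; its only job is to wake us

void sleepFor8s() {
  radio.powerDown();             // nRF24 down to microamps
  wdt_enable(WDTO_8S);           // watchdog fires after ~8 s
  WDTCSR |= _BV(WDIE);           // interrupt on timeout instead of reset
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_mode();                  // MCU halts here until the WDT fires
  radio.powerUp();               // awake again; ready to read and send
}
```

The pin change variant is the same shape: enable the pin change interrupt for the door pin instead of the watchdog, and the MCU sleeps until the pin toggles.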

An important lesson I learned recently was to implement an ISR callback for every type of interrupt you enable. E.g. there are two pin change interrupt handlers, PCINT0 and PCINT1, and one for the WDT. (With avr-gcc, an enabled interrupt without a handler sends execution back to the reset vector.)

In the case of the temperature sensor I ask the CPU to sleep for 8 seconds. I do that 6 times, and then send the temperature. The reason is that I don’t need to know the temperature every 8 seconds; once a minute is fine, and 6*8s is close enough.

On the door sensor I set the interrupt to react to the door pin – the pin that is high when the door is open and low when it is closed. This means the sensor is basically sleeping almost all the time.

Summary

So far I’ve got very power efficient nodes for temperature and doors. They send data to a gateway that publishes it over MQTT. I’ve implemented sending data to nodes, but have no nodes set up for it yet. Xmas lights might be a candidate as I have a bunch of WS2812B lights lying around.

I recommend anyone interested to have a look at mysensors.org.

For the code for all this and more details, check out my repos:

Nrf-gateway – Node.js server gateway
Nrf-nodes – Arduino code for the nodes

Posted in Coding, Hardware

Hass.io addon for backup to S3

I’ve been playing around with Home Assistant lately. I was surprised not to find a way to make external backups automatically. Most of what I read recommended making snapshots and manually copying them over a Samba mount. I wanted a way to directly make a backup and upload it to AWS S3, so late last night I threw an addon together. It can be found on GitHub here.

Posted in Uncategorized

At home Tesla dashboard in Serverless

So on the weekend of my birthday I finally got the chance to sit down and work on a little project I’ve been wanting to do for a while – a gift for myself. Essentially I wanted to build a simple app to show on an iPad in the hallway of our house, where my wife could see where my car is and how long it would take to drive home in current traffic. It would also let me make sure the car is plugged in to charge overnight, and let me turn on the AC in the morning if it is cold outside. (Obviously that last part could be automated, but I don’t always drive in, so it would be a waste of electricity. Plus it forces me to check the temperature and dress appropriately.) I completed the first version on the Sunday and I’ve made some minor upgrades since, but the core functionality is done.

So how is it built? Well, my car is a Tesla Model S, essentially a computer on wheels. It has an accompanying phone app that lets me control all kinds of things and gives me its location so I can find it in a parking lot. Anything controllable from a phone means there is an API somewhere that can do the same things the app can, and Tesla is no exception. The API is not official, but fortunately it has already been documented by craftier people than myself, and so far Tesla has made few attempts to prevent it being used, so my hunch is that they kind of don’t mind.

The overall architecture of the system is essentially a React app as a frontend, talking to an API that serves the app with info about the car, an ETA for when I might be home, as well as controls for turning the AC on and off.

I have recently been working with Serverless and serverless architectures, so I wanted to use that here. In short, it means you write very small pieces of software that are invoked for a short period of time and then die. The code runs on servers somewhere, but you don’t have to care about where or how they are set up – you just trust that your code will run when something triggers it. (There are books written on this, and this is probably the shortest explanation I can manage, so YMMV.)

The React app is nothing special. It fetches the last state of the car every 60s and that’s about it. I built it using Next.js and export it to an S3 bucket, where it is served as a static site. It looks something like this:

Tesla home dashboard


The backend is where the fun begins. There are essentially three components (plus some supporting ones). The first is a function that gets the car state from the Tesla API. Five routes are used, and their responses are aggregated into a state object. This object is then put on a message queue in AWS SQS. Another function, which saves the state to a DynamoDB database, is triggered by new messages on that queue. The update function is set to trigger every minute using AWS scheduled events – one minute is the smallest time frame for scheduled events. It is enough for now, but I might need to change this in the future.

Once the data is in the database, the last piece is an HTTP GET route for fetching the last state to display in the dashboard. Once this was done, it was simple to add two routes for turning the climate on and off. I later added a route for fetching the locations from the previous hour, to plot the route I’ve taken so far on the map, which gives my wife additional cues as to where I might be heading.

Using the Serverless framework, all components are neatly described in a serverless.yml file. Each component is dead simple and easy to test. Having the state published to the queue, rather than saved directly, means I can now write another function that also listens to the queue and emits new events for particular state changes. This could, for example, let me write a function that spots when I park somewhere other than home or work, and sends me a push notification that opens the correct parking app for that location. (Or ideally, get the parking meter companies to open up their APIs so my car can pay for me automatically.)

In all this, I never had to touch a server configuration. I could focus on the code doing its thing.

Btw, here’s the code on GitHub.

Posted in Coding

In Sweden, Digital strategist at Iteam

I have not updated this in a loooooong time. Figured I should at least add some minor updates before I write any other posts.

On January 28, 2017, my family and I left Glasgow and Scotland to move to Sweden. We had no plans for what to do, so I did some freelance work. (30 years of programming has some benefits these days.) Not long after, I decided to join Iteam as a digital strategist. It meant leaving academia for industry. I no longer do research; instead I help companies and organisations think in digital terms – what digital change means for existing business and practices. My long stretch of looking into the future by applying research through design now helps me see opportunities and possibilities for existing businesses and organisations as well as new enterprises. My long experience of building stuff also comes in handy at times, as I mentor more junior colleagues and help strengthen our technology skills.

I now live in Veddige, close to Varberg, south of Gothenburg, on the Swedish west coast. Feel free to give me a shout if you want to meet up!

Posted in Uncategorized

Workshop at NordiCHI on Mobile Wellbeing

We will organise a workshop at NordiCHI later this year in Gothenburg, Sweden. The workshop is called Mobile Wellbeing and focuses on mental wellbeing and the use and design of mobile technology. It is organised by me, John Rooksby (UK), Alexandra Weilenmann (SE), Thomas Hillman (SE), Pål Dobrin (SE), and Juan Ye (UK). If you are interested in the workshop theme, consider writing a position paper and join us in Gothenburg on October 23. The deadline for position papers is August 25.

You should visit the workshop website for more information, but at the workshop we aim to discuss the following three questions:

  • In what ways do mobile technologies affect mental wellbeing?
  • How can mobile technologies be designed to support and improve mental wellbeing and to mitigate negative effects?
  • What strategies and practices can be developed for using mobile technology in ways that do not harm and instead support improvement of our wellbeing?

See you in Göteborg!


Posted in Uncategorized

Forget-me-not: Honorable Mention at CHI 2016

Last week I went to San Jose for a week to attend CHI. Among other things I got to present my note Forget-me-not: History-less Mobile Messaging. The note received an Honorable Mention, which is given to the top 5% of papers and notes at the conference.

The paper is based on work done by a group of level 3 students. Their group project was to design, build, and study a mobile text messaging app without history. This is what they did, and it turned into the app Forget-me-not. The student project lasted a year, and along the way the students received best presentation at the intermediate half-time presentations, and finally best L3 project in Computing Science in the school. I could not be prouder of these students. Topping that off by turning the work into a paper that received an honorable mention is surely the cherry on the cake!

The final paper discusses mobile text messaging and the role of message history. To do that, we study what happens to mobile text messaging when there is no history. By interviewing 10 participants after they had used the app for 2 weeks, we gained insights into how they perceived the app, and how they perceived messaging through it. We found that messaging requires effort, but allows users to be relaxed about what they write. History turns out to be useful for the ability to “scroll up” to see past messages, which also allows for distractions; not having history instead makes messaging more engaging. We also discuss how not having history allows for sending messages you don’t want on record, such as bank details, or plans for a secret birthday party. Read the whole paper, where we discuss the design of the app and the method of deliberately removing history in order to study history.


Posted in Uncategorized

The Delay Experiment

One of my summer interns, Ivaylo Lafchiev, is working on an interesting project over the summer, looking at the effect delays might have on mobile phone use. He has developed an Android app that introduces a delay when the phone is unlocked, so the user has to wait a short while (seconds) before the phone becomes available. The idea is to vary the duration of the delay and see if we can notice any effects on the overall usage of the device (for instance, whether the amount of time spent on the phone changes).

The app is now ready to be used in an experiment. It is available on the app store, and the hope is that it will attract a few users who will keep it installed for at least a few weeks. This is extremely risky – why would anyone keep an app that makes the phone harder to use? The hope is that people want to become more conscious of their phone use and are therefore willing to participate in the experiment. The question, however, is how the data will be biased by participants who already want to change their phone use. We are trying to mitigate (and investigate) this bias by first setting the delay to a short time and then, after a while, changing the duration remotely to see if we can spot a difference in phone behaviour. By comparing use before and after the change, and by changing the delay differently for different users, we hope to gather evidence as to whether the delay has an effect or not.

I’ve made two personal observations from having the app installed myself. First, each time I unlock the phone, the delay serves as a reminder that I’m about to use it. This might seem like a weird thing, as the fact that I’m using the phone should be a reminder in itself. However, I tend to use the phone whenever I have nothing else to do: waiting for the subway, or waiting in line at Starbucks, for instance. With the delay, I’m reminded, and then given a short time to contemplate whether I actually want to use the phone or should just take a few seconds to do nothing. The longer the duration, the more annoying the app is, but it also makes me more aware of my phone use. When the duration is long enough (more than 3 seconds or so) I start to think before using my phone, even before I take it out of my pocket. All of a sudden I’m reminded that I will have to wait a few seconds before the phone becomes available, and I often choose not to subject myself to it, because the thing I was supposed to do with the phone was pointless anyway.

I’m looking forward to seeing what happens with this experiment. Is anyone going to download and install the app? Is anyone going to keep it installed long enough for us to collect data on their behaviour? Only time will tell. But until then, I will be more conscious and mindful of my phone use – until Ivaylo makes the duration so long that I uninstall the app and go back to mindlessly filling every void of my life with mobile phone use.

You can find more information about the project, and download the app, here.

Posted in Research

Teaching Excellence Award

Last week I was awarded a Teaching Excellence Award by the college. The award was given for my teaching activities within the school, including: the development of a set of tutorials given to students and staff across levels, supervising undergraduate and postgraduate students, and contributions to undergraduate courses.

Posted in Research

Pass The Ball at CHI 2015 in Seoul

This morning I presented a paper at CHI entitled Pass The Ball: Enforced Turn-Taking in Activity Tracking. See the abstract below:

We have developed a mobile application called Pass The Ball that enables users to track, reflect on, and discuss physical activity with others. We followed an iterative design process, trialling a first version of the app with 20 people and a second version with 31. The trials were conducted in the wild, on users’ own devices. The second version of the app enforced a turn-taking system that meant only one member of a group of users could track their activity at any one time. This constrained tracking at the individual level, but more successfully led users to communicate and interact with each other. We discuss the second trial with reference to two concepts: social-relatedness and individual-competence. We discuss six key lessons from the trial, and identify two high-level design implications: attend to “practices” of tracking; and look within and beyond “collaboration” and “competition” in the design of activity trackers.

Posted in Uncategorized

Ffmpeg and Chromecast

I’ve been struggling recently with transcoding media files and streaming to Chromecast. Starting with the excellent project castnow over on GitHub, I wanted to be able to 1) stream media files directly from RAR archives, and 2) create a web interface for starting media files. It is not meant to be overly ambitious, just something useful to use at home.

Among the several problems I’ve encountered so far, one has been especially annoying and turned out to have a very simple fix. The transcoding is done using ffmpeg. What I’ve been doing is to let ffmpeg re-encode any media file I give it into an MP4 with H.264 video and AAC audio. This works most of the time, but for some MKVs there is no image after transcoding. Casting the MKV directly to the Chromecast gives you moving pictures, but no sound (since the Chromecast does not support 5.1 audio as of yet, as far as I understand).

The first attempt at a solution was to not re-encode the video but simply copy the original, which is easy using the ffmpeg flag -vcodec copy. Unfortunately this still didn’t work. However, encoding the video to a file and then casting the file to the Chromecast works, so something goes wrong when the output from ffmpeg is streamed directly. I still haven’t worked out what, but I’ve found a solution: instead of creating an MP4 container, encoding everything into an MKV (Matroska) suddenly makes everything work just fine. The final command line is


cat some-media-file | ffmpeg -i - -f matroska -vcodec h264 -acodec aac -ac 2 pipe:1 | stream-to-chromecast

So far this seems to work every time. However, it is somewhat unnecessary to re-encode to H.264 if the video already is H.264, so in my project I check the codecs and set the ffmpeg flags accordingly.

The project is written in Node.js and is available in the following GitHub repository.

Posted in Uncategorized