I have not updated this in a loooooong time. Figured I should at least add some minor updates before I write any other posts.
On January 28, 2017, my family and I left Glasgow and Scotland to move to Sweden. We had no plans for what to do, so I did some freelance work. (30 years of programming has some benefits these days.) Not long after, I decided to join Iteam as a digital strategist, which meant leaving academia for industry. I no longer do research, but instead help companies and organisations think in digital terms – what change digital means for existing businesses and practices. My long stretch of looking into the future by applying research through design now helps me see opportunities and possibilities for existing businesses and organisations as well as for new enterprises. My long experience of building things also comes in handy at times, as I mentor more junior colleagues and help strengthen our technology skills.
I now live in Veddige, close to Varberg, south of Gothenburg, on the Swedish west coast. Feel free to give me a shout if you want to meet up!
We will organise a workshop at NordiCHI later this year in Gothenburg, Sweden. The workshop is called Mobile Wellbeing and focuses on mental wellbeing and the use and design of mobile technology. The workshop is organised by me, John Rooksby (UK), Alexandra Weilenmann (SE), Thomas Hillman (SE), Pål Dobrin (SE), and Juan Ye (UK). If you are interested in the workshop theme, consider writing a position paper and joining us in Gothenburg on October 23. The deadline for position papers is August 25.
You should visit the workshop website for more information, but at the workshop we aim to discuss the following three questions:
In what ways do mobile technologies affect mental wellbeing?
How can mobile technologies be designed to support and improve mental wellbeing and to mitigate negative effects?
What strategies and practices can be developed for using mobile technology in ways that do not harm our wellbeing but instead support and improve it?
Last week I spent a week in San Jose attending CHI. Among other things, I got to present my note Forget-me-not: History-less Mobile Messaging. This note received an Honorable Mention, which is given to the top 5% of papers and notes at the conference.
The paper is based on work done by a group of level 3 students. The group project was to design, build, and study a mobile text messaging app without history. This is what they did, and it turned into the app Forget-me-not. The student project lasted a year, and along the way the students first received best presentation at the intermediate half-time presentations, and finally best L3 project in Computing Science in the school. I could not be prouder of these students. Topping that up by also turning the work into a paper that received an Honorable Mention surely is the cherry on the cake!
The final paper discusses mobile text messaging and the role of messaging history. To do that, we study what happens with mobile text messaging when there is no history. By interviewing 10 participants after they had used the app for 2 weeks, we gained insights into how they perceived the app and how they perceived messaging through it. We found that messaging without history requires effort, but allows users to be relaxed about what they write. History turns out to be useful because the ability to “scroll up” to past messages allows for distractions; without history, messaging instead becomes more engaging. We also discuss how not having history allows for sending messages you don’t want on record, such as bank details, or plans for a secret birthday party. Read the whole paper, where we discuss the design of the app and the method of deliberately removing history in order to study history.
One of my summer interns, Ivaylo Lafchiev, is working on an interesting project over the summer, looking at the effect delays might have on mobile phone use. He has developed an Android app that introduces a delay when the phone is unlocked. The effect is that the user has to wait for a short while (seconds) before the phone becomes available to them. The idea is to vary the duration of the delay to see if we can notice any effects on the overall usage of the device (for instance, whether the amount of time spent on the phone changes).
The app is now ready to be used in an experiment. It is available on the app store, and the hope is that it will attract a few users who will keep it installed for at least a few weeks. This is extremely risky. Why would anyone keep an app that makes it harder to use the phone? The hope is that people do want to become more conscious about their phone use, and are therefore willing to participate in this experiment. The question then is in what way the data will be biased by participants already wanting to change their phone use. We are trying to mitigate this bias (and investigate it) by first setting the delay to a short duration, and after some time changing the duration remotely to see if we can spot a difference in phone behaviour. By comparing use before and after the delay change, and by changing the delay differently for different users, we hope to gather evidence as to whether this has an effect or not.
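As a rough sketch of the comparison we have in mind (the names and data shapes here are hypothetical, not taken from the actual study code), the analysis boils down to contrasting average daily use before and after the remote delay change, per participant:

// Hypothetical analysis sketch in TypeScript: compare mean daily phone use
// before and after the delay duration was changed remotely.
interface DailyUsage {
  day: string;            // e.g. "2015-07-01"
  secondsOnPhone: number; // total screen-on time that day
}

function meanDailyUse(days: DailyUsage[]): number {
  return days.reduce((sum, d) => sum + d.secondsOnPhone, 0) / days.length;
}

function usageChange(before: DailyUsage[], after: DailyUsage[]): number {
  // Positive result: the participant used the phone more after the change.
  return meanDailyUse(after) - meanDailyUse(before);
}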
I’ve made two personal observations from having the app installed myself. First, each time I unlock the phone, it serves as a reminder that I’m now about to use my phone. This might seem like a weird thing, as the fact that I’m using the phone should be a reminder in itself. However, I tend to use the phone whenever I have nothing else to do: waiting for the subway or waiting in line at Starbucks, for instance. With the delay, I’m reminded, and then given a short time to contemplate whether I actually want to use the phone or whether I should just take a few seconds to do nothing. Second, the longer the duration, the more annoying the app is, but it also makes me more aware of my phone use. When the duration is long enough (more than 3 seconds or so) I start to think before using my phone, even before I take it out of my pocket. All of a sudden I’m reminded that I will have to wait a few seconds before the phone becomes available, and I often choose not to subject myself to it, because the thing I was supposed to do with the phone was pointless anyway.
I’m looking forward to seeing what happens with this experiment. Is anyone going to download and install the app? Is anyone going to keep it installed long enough for us to collect data on their behaviour? Only time will tell. But until then, I will be more conscious and mindful of my phone use – until Ivaylo makes the duration so long that I uninstall the app and go back to mindlessly filling every void of my life with mobile phone use.
You can find more information about the project, and download the app, here.
Last week I was awarded a Teaching Excellence Award from the college. The award was given for my teaching activities within the school, including: the development of a set of tutorials given to students and staff across levels, supervising undergraduate and postgraduate students, and contributions to undergraduate courses.
We have developed a mobile application called Pass The Ball that enables users to track, reflect on, and discuss physical activity with others. We followed an iterative design process, trialling a first version of the app with 20 people and a second version with 31. The trials were conducted in the wild, on users’ own devices. The second version of the app enforced a turn-taking system that meant only one member of a group of users could track their activity at any one time. This constrained tracking at the individual level, but more successfully led users to communicate and interact with each other. We discuss the second trial with reference to two concepts: social-relatedness and individual-competence. We discuss six key lessons from the trial, and identify two high-level design implications: attend to “practices” of tracking; and look within and beyond “collaboration” and “competition” in the design of activity trackers.
I’ve been struggling recently with transcoding media files and streaming to Chromecast. Starting with the excellent project castnow over on github, I wanted to be able to 1) stream media files directly from rar archives, and 2) create a web interface for starting media files. It is not meant to be overly ambitious, just something useful to use at home.
Among the several problems I’ve encountered so far, one was especially annoying but turned out to have a very simple fix. The transcoding is done using ffmpeg. What I’ve been doing is to let ffmpeg re-encode any media file I give it into an mp4 with h264 and aac. This works most of the time, but for some mkvs there is no image when transcoded. Casting such an mkv directly to the Chromecast gives you moving pictures, but no sound (since the Chromecast does not yet support 5.1 audio, as far as I understand).
The first attempt at a solution was to not re-encode the video but simply copy the original stream, which is easy using the ffmpeg flag -vcodec copy. Unfortunately this still doesn’t work. However, encoding the video to a file and then casting the file to the Chromecast works. Thus something goes wrong when the output from ffmpeg is streamed directly. I’ve still not worked out what, but I’ve found a solution. Instead of creating an mp4 container, encoding everything into an mkv (matroska) container suddenly makes everything work just fine. The final line is roughly the following.
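Something like this, assuming the matroska output is piped to stdout for the cast (the input path and exact flags here are illustrative):

ffmpeg -i input.mkv -c:v libx264 -c:a aac -ac 2 -f matroska pipe:1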
So far this seems to work every time. However, it is somewhat unnecessary to re-encode to h264 if the video already is h264. In my project I therefore check the codecs and set the flags for ffmpeg accordingly.
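A minimal sketch of that check, in the NodeJS flavour of the project (the helper names are mine, not from the actual code):

// Probe the first video stream's codec with ffprobe, then only re-encode
// the video when it isn't already h264.
import { execFile, spawn } from "child_process";

function probeVideoCodec(file: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile("ffprobe", [
      "-v", "error",
      "-select_streams", "v:0",
      "-show_entries", "stream=codec_name",
      "-of", "default=noprint_wrappers=1:nokey=1",
      file,
    ], (err, stdout) => (err ? reject(err) : resolve(stdout.trim())));
  });
}

async function castableStream(file: string) {
  const codec = await probeVideoCodec(file);
  const videoArgs = codec === "h264" ? ["-c:v", "copy"] : ["-c:v", "libx264"];
  // Audio is always re-encoded to stereo aac, since the Chromecast cannot
  // handle the 5.1 tracks; everything goes into a matroska container on stdout.
  return spawn("ffmpeg", [
    "-i", file,
    ...videoArgs,
    "-c:a", "aac", "-ac", "2",
    "-f", "matroska",
    "pipe:1",
  ]);
}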
Today I was invited to give a talk at the Mobile Apps Group Meetup in Glasgow. I decided to talk briefly about my own app development in my research, why it involves quickly building apps, and how I tend to do that.
I first gave the premise of my work: To understand an app (or the ideas manifested in the app), it needs to be built, so that it can be studied in use. In my view, we can only know what an artefact is once it is in use. We cannot know what it is prior to that.
I then explained how I suggest people do that. None of the points are anything new, but hopefully people will start doing these things once they have heard them often enough, so I figured it was worth talking about. Concisely, I’m trying to convey that you should make decisions when they are easy to make, and refrain from making them when doing so is time consuming.
Thus the process is:
Sketch a lot of ideas. Sketch on paper or in any other material that is easy to produce and easy to discard.
Make a mockup using Sketch, Photoshop, or anything else that allows you to create exactly what you want the different screens of your app to look like. This is where decisions are made. From the sketches made previously, pick one and turn it into images that say exactly what the app will look like on screen. The purpose of this mockup is to describe what is to be implemented in the next phase.
Now you build. But make no decisions whatsoever. Just transfer the decisions manifested in the mockups, down to the font sizes, margins, and colours. As soon as you start playing around with margins, font sizes, and colours, you start losing a lot of time. It is not as easy and efficient as it would have been had you done this already when creating the mockups, so you should not do it now! If it makes it easier, pretend that the mockups come from a paying customer who pays you to implement an app that looks exactly as they have decided, leaving you no room for creative suggestions.
In my experience, following this simple rule of making all decisions while creating the mockups, and making none while implementing, makes implementation a breeze. I think one of the reasons is that when you try to make decisions in the implementation phase, you not only need to think about how you would like it to look, but also about how to make it so. Having the decision already made means you only need to think about how to make it.
I have for the longest time wanted to get into electronics. While I have been tinkering with connecting lights and motors to batteries from a very young age, and have taken several electronics courses both in high school and at university, I had never done anything practical. Recently, however, I started putting some of my grown-up reserves of money towards DIY electronics in order to finally try to make something. In the last couple of weeks I’ve learned how easy it really is once you actually have the components you need, and how the DIY culture, especially around Arduino, has made these things incredibly accessible. It’s safe to say that we’ve come a long way since the days of holding a small toy lightbulb against a 4.5V battery (which I did as a kid).
When I get into something I tend to immerse myself quite deeply, so for this Valentine’s Day I quite readily jumped at the idea of building something with my newfound skills and toys. I decided to connect a few LEDs in the shape of a heart that could flash in interesting ways, as a gift for my girlfriend. As you can see in the video, it eventually turned into a mirror with hidden heart-shaped LEDs that can be controlled through a web interface on the local network.
I had at this point built up a sensor network at home using the nRF24L01+ (an incredibly cheap yet easy-to-use RF module) for wireless communication, with both Arduino Pro Minis and ATtiny85s as brains. I therefore already had one of the radio modules connected to an Arduino communicating serially with a small server in the house. All I had to do was figure out how to control and drive a bunch of LEDs (I ended up using 14) from the Arduino, and then try to package it nicely enough to be a suitable gift.
The mirror is built using an Arduino Pro Mini 5V and an nRF24L01+, driven by a 12V power adapter. I’m pretty sure 12V is overkill, but it’s what I had lying around. The nRF24L01+ must be run on 3.3V, so there is also a voltage regulator that I put on the control board for the LEDs. The LED control consists of two 547 transistors, and the LEDs are current-limited with 470 ohm resistors. I figured that I couldn’t run the current for all 14 LEDs through one transistor (they would consume about 150mA, and I didn’t have any other transistors lying around) and therefore split them over two. I also ended up connecting the bases of the transistors to two different pins on the Arduino, so in theory I could control the upper and lower parts of the heart individually. I just ended up not doing that.
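As a rough sanity check of that estimate (assuming the 547s are BC547s, rated for about 100mA of continuous collector current): at roughly 10mA per LED, 14 LEDs draw 14 × 10mA ≈ 140mA in total, too much for one transistor, but split over two branches each transistor only has to switch about 70mA, comfortably within spec.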
The mirror continuously listens for radio traffic. When a control message arrives, it changes the mode of the LEDs. The software on the mirror is therefore incredibly simple. The server, however, had to be modified slightly. Up to this point the server had only been listening to sensor nodes, not transmitting any information back. I therefore had to change both the Arduino program, so that it listens to serial communication and forwards the data to the radio, and the software running on the server. The existing server software was written in Python, but I decided to switch to NodeJS. Since I also wanted to control the mirror from a web interface, I took the time to build a simple REST API for sending commands to the mirror over HTTP. The server software now listens on the serial port for readings from the sensors in the house and stores them in a database, and accepts HTTP requests for sending data packets out to listening nodes.
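The server side can be sketched roughly like this (the route, serial path, and command encoding are illustrative assumptions, not the actual code):

// Minimal NodeJS/TypeScript sketch: accept an HTTP request for one of the
// mirror's modes and forward a small command over serial to the
// radio-connected Arduino, which relays it via the nRF24L01+.
import express from "express";
import { SerialPort } from "serialport";

const MODES = ["off", "on", "heartbeat", "pulse"];

const serial = new SerialPort({ path: "/dev/ttyUSB0", baudRate: 9600 });
const app = express();

app.post("/mirror/:mode", (req, res) => {
  const index = MODES.indexOf(req.params.mode);
  if (index === -1) {
    res.status(400).send("unknown mode");
    return;
  }
  serial.write(`${index}\n`); // the Arduino forwards this to the mirror
  res.sendStatus(204);
});

app.listen(3000);

With something like this in place, each button on the web page only needs to issue a POST to, for example, /mirror/heartbeat.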
Finally I created a super simple web page with four buttons for selecting one of the four settings on the mirror: heart-beat, off, on, and smooth pulse.
It was a fun project, my girlfriend was happy, and I can now move on to the next projects. The project gave me experience in designing and soldering a PCB (using vero board), in cutting such a board using a drill, and in how simple it can be to put things together in not-too-shabby ways.
Today I presented our paper ‘Personal Tracking as Lived Informatics’ at CHI. The paper is based on interviews with 22 people about how they were using personal tracking devices, such as the Nike Fuelband, Jawbone Up, and Fitbit, and mobile apps (e.g. RunKeeper and MyFitnessPal).
Among the findings, we uncover that the people we talked to use multiple trackers, and track multiple things. They switch between trackers, as well as between what they track, over time. While they say that they do not share tracked data on social networks, they do track together with people in their lives. Furthermore, while they track over long periods, they rarely look at their historical data.
We discuss our findings and present an alternative view of Personal Informatics, which we term Lived Informatics. Lived Informatics emphasises the emotionality of tracking, and that tracking is done with an outlook towards the future, rather than as part of a rational, scientific process of optimising the self, as seen in the Quantified Self movement and in Personal Informatics.
We are very happy the paper got an Honorable Mention.