FFmpeg and Chromecast

I’ve been struggling recently with transcoding media files and streaming them to Chromecast. Starting with the excellent castnow project on GitHub, I wanted to be able to 1) stream media files directly from rar archives, and 2) create a web interface for starting media files. It is not meant to be overly ambitious, just something useful to use at home.

Among the several problems I’ve encountered so far, one has been especially annoying and turned out to have a very simple fix. The transcoding is done using ffmpeg. What I’ve been doing is to let ffmpeg re-encode any media file I give it into an mp4 with h264 video and aac audio. This works most of the time; however, for some mkvs there has been no image after transcoding. Casting the mkv directly to the Chromecast gives you moving pictures, but no sound (since Chromecast does not support 5.1 audio as of yet, as far as I understand).

The first attempt at a solution was then to not re-encode the video but simply copy the original stream. That is simple using the ffmpeg flag -vcodec copy. Unfortunately this still doesn’t work. However, encoding the video to a file and then casting the file to the Chromecast works. Thus there was something going on when the output from ffmpeg is streamed directly. I’ve still not worked out what it is, but I’ve found a solution: instead of creating an mp4 container, encoding everything into an mkv (or matroska) suddenly makes everything work just fine. The final command line is


cat some-media-file | ffmpeg -i - -f matroska -vcodec h264 -acodec aac -ac 2 pipe:1 | stream-to-chromecast

So far this seems to work all the time; however, it is somewhat unnecessary to re-encode h264 if the video is already h264. In my project I therefore check the codecs first and set the flags for ffmpeg accordingly.
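
One way to do that check is with ffprobe (which comes with ffmpeg). Something along these lines prints the streams and their codecs as JSON, which can then be inspected before deciding between -vcodec copy and -vcodec h264:

ffprobe -v quiet -print_format json -show_streams some-media-file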

The project is written in Node.js and is available in the following GitHub repository.

Posted in Uncategorized

Why and How to Quickly Build Apps – Make No Decisions

Today I was invited to give a talk at the Mobile Apps Group Meetup in Glasgow. I decided to talk briefly about my own app development in my research, why it involves quickly building apps, and how I tend to do that.

I first gave the premise of my work: To understand an app (or the ideas manifested in the app), it needs to be built, so that it can be studied in use. In my view, we can only know what an artefact is once it is in use. We cannot know what it is prior to that.

I then explained how I suggest people do that. None of the points are anything new, but it is hopefully something that people will start doing once they hear it often enough, so I figured it is worth talking about. Concisely, I’m trying to convey that you should make decisions when they are easy to make and refrain from making them when it is time-consuming.

Thus the process is:

  • Sketch a lot of ideas. Sketch on paper or in any other material that is easy to produce and easy to discard.
  • Make a mockup using Sketch, Photoshop, or anything else that allows you to create exactly what you want the different screens of your app to look like. This is where decisions are made. From the sketches made previously, pick one and turn it into images that say exactly what the app will look like on screen. The purpose of this mockup is to describe what is to be implemented in the next phase.
  • Now you build. But make no decisions whatsoever. Just transfer the decisions manifested in the mockups, down to the font sizes, margins, and colours. As soon as you start playing around with margins, font sizes, and colours, you start losing a lot of time. The reason is that it is not as easy and efficient as it would have been had you done this already when creating the mockups, so you should not do it now! If it makes it easier, pretend that the mockups come from a paying customer who is paying you to implement an app that looks exactly like they have decided, and that you have no room for creative suggestions.

In my experience, following this simple rule of making all decisions while creating the mockups, and making no decisions when implementing, makes the implementation a breeze. I think one of the reasons for this is that when you try to make decisions in the implementation phase, you not only need to think about how you would like it to look, but also about how to make it so. Having the decision already made means you only need to think about how to make it.

Posted in Uncategorized

Lived Informatics at CHI, Toronto

Today I presented our paper ‘Personal Tracking as Lived Informatics’ at CHI. The paper is based on interviews with 22 people about how they were using personal tracking devices, such as the Nike Fuelband, Jawbone Up, and Fitbit, as well as mobile apps (e.g. RunKeeper and MyFitnessPal).

Among the findings, we uncover how the people we talked to use multiple trackers and track multiple things. They switch between trackers, as well as what they track, over time. While they say that they do not share tracked data on social networks, they do track together with people in their lives. Furthermore, while they track for a long time, they rarely look at their historical data.

We discuss our findings and present an alternative view of Personal Informatics, which we term Lived Informatics. Lived Informatics emphasises the emotionality of tracking, and that tracking is something done with an outlook to the future rather than as part of a rational, scientific process for optimising the self, as seen in the Quantified Self movement and in Personal Informatics.

We are very happy the paper got an Honorable Mention.

The paper:

Rooksby, J., Rost, M., Morrison, A. and Chalmers, M. (2014) Personal Tracking as Lived Informatics. To appear in Proceedings of CHI’14, Toronto, Canada.

Posted in Uncategorized

Guest lecture in Gothenburg

On Friday I gave a guest lecture in a course at LinCS at the University of Gothenburg. The course, ‘Understanding and designing for social media practices’, is aimed at exploring methods for studying online social media.

The lecture was based on two previous studies of Foursquare. The first was a primarily interview-based study in which we looked at performative aspects of using Foursquare. The second was a study of log data from Foursquare. The talk discussed what it means for people to share their location by checking in (or what it means to check in), what this means for the genre of the data, and showed examples of communicative features in the Foursquare log data.

The slides can be found on slideshare.

Posted in Uncategorized

Texture problems in Android OpenGL

I encountered such a stupid thing that I do not wish anyone else ever to encounter it, so I feel I have to write a post about it. Tl;dr: BitmapFactory.decodeResource may rescale the image to match the pixel density of the device, which is not good if you care about the pixel size of the image.

I recently had a problem with textures in OpenGL ES 2.0 on Android. Everything worked like a charm on the development device I was using, and everyone was happy. But when I found a less common device in the office, I decided to try it on that device to see the performance. I did not expect it not to work, but I was prepared for there to be some performance issues. Little did I expect that nothing would show on the screen. Also, no proper error messages were shown. Just a black screen.

When loading textures in OpenGL on Android, people generally use something like

Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),R.drawable.cool_texture);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

It’s doubtful that anything would go wrong with these two lines, so I went to check the most obvious things I could think of. First off I checked if the device required power-of-two-sized textures (256×256, 512×512, etc.). I created a texture that was 256×256 and loaded it. It did not work. So I went ahead and tried all kinds of things, stripping the code down to a bare minimum to isolate where the problem was.

To cut things short: the problem was with those two lines, and the way BitmapFactory.decodeResource decodes resources… The bitmap I obtained ended up not having the dimensions of the image I created, but something slightly smaller. The fix:

final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.cool_texture, options);

Once this was added, I learned that the problem indeed was with power-of-two-sized textures, and I could take care of it.

Ps. If you want to check whether the platform supports non-power-of-two-sized textures, you can run this check:

        String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
        mSupportsNPOT = extensions.indexOf("GL_OES_texture_npot") != -1;

Potentially not the nicest way of doing it, but it works. Ds.
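
If the extension is not there, one way to take care of it is to fall back to texture settings that are legal for non-power-of-two textures in ES 2.0, i.e. no mipmaps and clamped wrapping. A minimal sketch, assuming the texture in question is currently bound:

        if (!mSupportsNPOT) {
            // Without GL_OES_texture_npot, NPOT textures must not use mipmapping or GL_REPEAT.
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        }

The alternative is of course to scale or pad the bitmap up to a power-of-two size before handing it to texImage2D.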

Posted in Uncategorized

Internships in the Populations programme

If you’re a PhD student working with things such as ubiquitous and mobile computing, statistics and inference, or formal modelling and analysis, the project I’m working on is offering internships! They are preferably over the summer, but not restricted to it.

The Populations project is trying to advance the design process and underlying science of mobile applications. We do this by designing, building and studying mobile apps through large deployments, and by investigating new methods and analysis tools that allow us to advance this area of research and practice. If you’re interested in reading more you should check out the call.

Posted in Uncategorized

Android WebView and File Input

While working on a web app that I wanted to embed in a WebView on Android to create a native (hybrid) app, I encountered some problems with file input. The app is a simple one that allows users to upload photos and shows a list of friends’ photos. (Instagram, anyone?)

These days file input on mobile devices works pretty well on iOS and Android. Adding <input type='file' /> will create a file input, and clicking it will let the user pick a file from the phone. At least in Chrome or Safari. However, when putting this in a WebView, it is currently not handled for you automatically.

Googling around to figure out what is going on reveals that this is something you have to implement yourself, by overriding a couple of functions in an instance of WebChromeClient. Unfortunately there are still a couple of issues with this. First of all, the functions that have to be overridden are undocumented. Second of all, the returned files seem to be devoid of a file type, which is not necessarily a problem for everyone, but it was a problem for me.

The functions that have to be overridden are:

mWebView.setWebChromeClient(new WebChromeClient() {

    // Android 4.1+
    @SuppressWarnings("unused")
    public void openFileChooser(ValueCallback<Uri> uploadMsg, String acceptType, String capture) {
        this.openFileChooser(uploadMsg);
    }

    // Older Android versions
    @SuppressWarnings("unused")
    public void openFileChooser(ValueCallback<Uri> uploadMsg, String acceptType) {
        this.openFileChooser(uploadMsg);
    }

    // Early Android versions
    public void openFileChooser(ValueCallback<Uri> uploadMsg) {
        mUploadMessage = uploadMsg;
        pickFile();
    }

});

The three functions illustrate a progression of Android versions, where the last one is for early versions of Android and the first one is for Android 4.1+. Potentially, then, this might change again in the future.

The point of this is to hold on to the callback in mUploadMessage, and later return the picked file by calling it.
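
For reference, mUploadMessage is simply a field on the activity that keeps the pending callback around until a file has actually been picked, something like:

private ValueCallback<Uri> mUploadMessage;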

Picking the file is easy enough using an intent call.

Intent chooserIntent = new Intent(Intent.ACTION_GET_CONTENT);
chooserIntent.setType("image/*");
startActivityForResult(chooserIntent, RESULTCODE);


And you return the file in onActivityResult:

protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
    mUploadMessage.onReceiveValue(intent.getData());
    mUploadMessage = null;
}


(Note: there is no error handling here, so remember to include that!)
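
A slightly more defensive version might look something like this (just a sketch, using the same RESULTCODE request code as above):

protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
    if (requestCode == RESULTCODE && mUploadMessage != null) {
        // Hand back null if the user cancelled or nothing was picked.
        Uri result = (resultCode == RESULT_OK && intent != null) ? intent.getData() : null;
        mUploadMessage.onReceiveValue(result);
        mUploadMessage = null;
    }
}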

So far so good. This will cause the file input’s data to be set, so we can submit it from a form or access it from JavaScript.

What I needed this image for was both to update a picture on the web page and to upload the data later on. The way I solved this was to use HTML5 local file reading through a FileReader object, reading the data using readAsDataURL:

var reader = new FileReader();
reader.readAsDataURL(file);
reader.onload = function(e) {
    var dataurl = e.target.result;
};

So we get dataurl as a data URI, which will be of the form “data:image/png;base64,…”. Or so it normally is. The problem is that, from at least Android 4.2.2 (on my Nexus 4), the file object that is returned to JavaScript from Android does not have the file type set (file.type === undefined). This makes the data URI invalid, as it only describes binary data: “data:base64,…”. This caused major problems for me, but I managed to work around it. Fortunately the filename is set correctly, so I can use the file extension. From that I can rewrite the dataurl to create a correct one:

dataurl = "data:image/" + file.name.split('.').slice(-1)[0] + ";base64," + dataurl.split(',')[1];

The rest of the script then works. You can use the dataurl to set the image src ($("#theimage").attr('src', dataurl);) and you can upload it as a string.

I’m guessing there might be a fix for this to get Android to pass a file with the correct type set, but without that fix, this one seems to work.


Posted in Uncategorized

Steve Jobs 1983

All the talk about Steve Jobs can sometimes be quite tiring. It is, however, hard to dismiss his insight into the domain in which he was working. Here is a speech from 1983, in which it is clear that he had a firm grasp on what was about to come. He gets many things wrong, such as the idea of a hand-held iPad kind of device arriving within the 1980s (it took 20 more years). But it is extremely interesting to listen to the many things he got right.

I especially like how he wants computers to be beautiful, and how he argues that industrial designers must pay attention to the computer industry and start working towards more beautiful artifacts and more beautiful experiences. He says that these things (the computers) will inevitably be all around us, in our homes and in our offices, and so we now (in 1983) have the time to make sure they are beautiful. Unfortunately the IBM desktops kind of took over, and we had those grey boring boxes for a long, long time. But I think we can see how that is changing, with companies putting more effort into making beautiful devices (computers, mobile phones, tablets). If only they had started doing that already in 1983…

Listen to it here.


Posted in Uncategorized

How to choose between web and native

Forbes has an article on how to choose between mobile web and native apps. It has a simple overview of the pros and cons of the two, and of what intermediate versions there are. Their claim is that it is more complicated than two ends of a spectrum; you can choose something in between. Most interesting is what they name a dedicated web app:

Dedicated web app, which is a mobile web site tailored to a specific platform or form factor, like the LinkedIn web app which was designed for Android and iOS, but not for other smartphones or feature phones.

as opposed to a generic mobile web app, “which are mobile web sites designed to match every web-enabled phone, like the Wikipedia mobile page”. I have definitely gone for the dedicated web app more times than the generic mobile web app.

I agree in principle with their closing point, which is that you should build whatever solution you decide on around the data. Create an API for the data, and build the app to use the API. This is really just good development style, but it can definitely help when building mobile (web) apps if you need to try different solutions later on. As some people advocate “web first, native second” (e.g. this guy), having the API and the data in place means you have already laid the groundwork.


Posted in Uncategorized