Introduction

Raspberry Pis [1] and their camera modules [2] are a popular and cheap way to stream video on the Internet. Anything video-related on the web always seems to be messy though, and although plenty of articles discuss the topic, I found it far from trivial to set up. Perhaps the most useful article was on StackExchange [3].

One reason that the task is difficult is simply that there isn’t an approach which is best for everyone. So let’s begin by deciding what we want, and what we don’t care about.

Grabbing video with raspivid

Although there are many recipes for online video streaming, they all use raspivid to get data from the camera and save it in H.264 [4] format. Raspivid is a fine choice because it knows how to use the VideoCore 4 [5] hardware in the Pi’s GPU to encode the video efficiently.

Getting the video into H.264 bodes well for displaying it on Apple hardware: both iOS and MacOS know how to decode it without any extra software.

Unsurprisingly, raspivid boasts many options, and you’ll probably want to consult the documentation [6] or even the source [7] on GitHub.

Although I don’t claim that it’s optimal, here’s the command I used:

raspivid -n -ih -t 0 -ISO 800 -ex night -w 720 -h 405 -fps 25 -b 20000000 -o -

Here’s a brief explanation:

-n
Disable preview.
-ih
Discussed below. For now, note that omitting this makes for frustrating debugging!
-t 0
Keep capturing indefinitely rather than stopping after the default timeout.
-ISO 800 -ex night
Try to get the best image in the dark.
-w 720 -h 405 -fps 25 -b 20000000
Specify the video parameters.
-o -
Send the video to STDOUT.
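
Before plumbing the camera into anything else, it’s worth checking that raspivid works on its own. A minimal sketch, using only standard raspivid flags (the filename is just an example):

# Capture five seconds of H.264 to a file, with no preview window.
raspivid -n -t 5000 -o test.h264

Copy test.h264 to a desktop machine and play it with something like VLC to confirm the camera and encoder are behaving before adding ffmpeg to the mix.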

Preparing video with ffmpeg

Having got a source of H.264 video, we need to get it to the clients. There are any number of streaming solutions, which use special protocols to send a continuous stream of data to the viewer.

However, in the interests of simplicity, we’ll use HTTP Live Streaming [8]. Although this sounds grandiose, it’s about the simplest thing which might work: chop the video into short clips, keep a playlist of the most recent ones, and serve both as ordinary files over HTTP.

Happily ffmpeg [9] can do this for us:

ffmpeg -y \
    -loglevel panic \
    -i - \
    -c:v copy \
    -map 0 \
    -f ssegment \
    -segment_time 1 \
    -segment_format mpegts \
    -segment_list "$base/stream.m3u8" \
    -segment_list_size 10 \
    -segment_wrap 20 \
    -segment_list_flags +live \
    -segment_list_type m3u8 \
    -segment_list_entry_prefix /cam/segments/ \
    "$base/segments/%03d.ts"

In essence, this takes the H.264 stream on STDIN and saves it in one-second clips to files in $base/segments. It also keeps a playlist of those segments in $base/stream.m3u8. For more details, see the documentation for ffmpeg [10].

Note that, thanks to -segment_wrap 20, ffmpeg is clever enough to reuse files for the clips, so we won’t gradually fill the disk.
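
For completeness, here’s roughly how the two commands fit together on the Pi. This is only a sketch: $base and the directory layout are assumptions, so adjust them to match wherever your webserver will look.

#!/bin/sh
# Assumed layout: the webserver will expose the files under $base.
base=/var/www/cam
mkdir -p "$base/segments"

# raspivid writes H.264 to STDOUT; ffmpeg reads it on STDIN and segments it.
raspivid -n -ih -t 0 -ISO 800 -ex night -w 720 -h 405 -fps 25 -b 20000000 -o - |
    ffmpeg -y \
        -loglevel panic \
        -i - \
        -c:v copy \
        -map 0 \
        -f ssegment \
        -segment_time 1 \
        -segment_format mpegts \
        -segment_list "$base/stream.m3u8" \
        -segment_list_size 10 \
        -segment_wrap 20 \
        -segment_list_flags +live \
        -segment_list_type m3u8 \
        -segment_list_entry_prefix /cam/segments/ \
        "$base/segments/%03d.ts"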

The biggest problem with this approach is latency. By working in discrete blocks each a second long, we might expect a few seconds of latency, but in practice 10–20 s seems common. Perhaps different parameters would improve this.

Finally we’ll need some HTML to wrap things up:

<html>
  <head>
    <title>PiVid</title>
  </head>
  <body>
    <video controls="controls" width="720" height="405" autoplay="autoplay" >
      <source src="stream.m3u8" type="application/x-mpegURL" />
    </video>
  </body>
</html>

raspivid -ih

You may recall the -ih option to raspivid, which inserts PPS and SPS headers on every I-frame. I don’t understand in detail what this means, but presumably a decoder which joins mid-stream needs those headers before it can make sense of the video. If you don’t specify it, you’ll find that clients which connect to the webcam soon after it starts will work, whilst laggards won’t.

Debugging this problem can be fun, because it manifests itself as a system which works well in testing, but then fails when you reload the stream a bit later. Given that the window in which it works depends on the clip length, it is also easy to think the problem lies there.

ffmpeg versions

At the time of writing (December 2013), it’s claimed that Raspbian ships an old version of ffmpeg which doesn’t support segmenting, so you’ll need to compile your own. This takes many hours, and runs out of memory on Pis with only 256 MB of RAM (the model A and the version 1 model B).
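
If you do go down that road, the build itself is the usual configure-and-make affair. A minimal sketch, assuming you take the source from ffmpeg’s git repository (check ffmpeg.org for the current URL and for any configure options you want):

# Fetch the ffmpeg source (URL as published on ffmpeg.org).
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg

# Configure, build and install; on a Pi this really does take hours.
./configure
make
sudo make install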

raspivid -segment

Although ffmpeg can do all manner of video conversions, it’s clear that here it’s not doing very much. Perhaps raspivid will learn how to segment the video itself.

In fact, it’s well on the way! The most recent commit [11] to the software appears to allow just that. As yet, though, there’s no support for generating the .m3u8 playlist. Still, if you’re reading this in 2014 or later, it might be worth checking before you spend ages compiling ffmpeg.
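
For what it’s worth, later versions of raspivid grew -sg (segment length in milliseconds) and -wr (wrap the file number) options, so the invocation might eventually look something like the sketch below. Treat the flag names as assumptions and check raspivid --help on your build; note too that the output is raw H.264 files with no playlist, so it doesn’t yet replace the ffmpeg recipe.

# Hypothetical sketch: let raspivid segment the stream itself.
# -sg 1000 starts a new file every 1000 ms; -wr 20 wraps the file
# number after 20 files, much like ffmpeg's -segment_wrap above.
raspivid -n -ih -t 0 -w 720 -h 405 -fps 25 -b 20000000 \
    -sg 1000 -wr 20 \
    -o /var/www/cam/segments/%03d.h264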

Serving video

The recipe above generates a handful of files:

$ ls -lR
.:
total 68
-rw-r--r-- 1 pi pi   233 Dec 12 23:04 index.html
drwxr-xr-x 2 pi pi 57344 Dec 14 09:30 segments
-rw-r--r-- 1 pi pi   525 Dec 15 15:17 stream.m3u8

./segments:
total 12832
-rw-r--r-- 1 pi pi 410028 Dec 15 15:17 000.ts
-rw-r--r-- 1 pi pi 663264 Dec 15 15:16 001.ts
-rw-r--r-- 1 pi pi 664204 Dec 15 15:16 002.ts
...
-rw-r--r-- 1 pi pi 673040 Dec 15 15:17 019.ts

To serve the video to clients, just let any old webserver see these files. I used nginx, but I’d expect Apache and lighttpd to work too.
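
For a quick end-to-end check, any static file server will do; here’s a sketch using Python’s built-in server, assuming the $base = /var/www/cam layout from earlier. Because the playlist entries are prefixed with /cam/segments/, serve the directory above $base and browse to http://<pi>:8000/cam/. For anything long-lived, stick with a real webserver such as nginx.

# Serve /var/www over HTTP on port 8000, so /cam/segments/... resolves.
cd /var/www
python -m SimpleHTTPServer 8000   # Python 2; with Python 3 use: python3 -m http.server 8000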