For decades, I’ve used programs like LaTeX where the workflow is: edit the source file, run LaTeX over it, then view the output.

At some point xdvi learnt to watch the output of LaTeX and automatically update itself when the file changed. That was nice, but one still had to run LaTeX by hand after saving the source file.

Now nice editors make it easy to invoke the command with a keystroke, but these days I often use watchdog instead.

Once running, watchdog watches the TeX files and when one changes it automatically invokes pdflatex. Here’s the relevant command:

% watchmedo shell-command -c "pdflatex top && open top.pdf" -p '*.tex' .

As you’ll see, the command’s a trifle baroque, so I tend to save it as a shell script.
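Saved as a script it is easy to reuse; here’s a minimal sketch, assuming `top.tex` is the master file as in the command above:

```shell
#! /bin/sh
# Watch the TeX sources and rebuild whenever one changes.
# "top" is the master file from the command above; "open" is macOS-specific.
exec watchmedo shell-command -c "pdflatex top && open top.pdf" -p '*.tex' .
```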

This general scheme applies to more than just LaTeX. I often write software by editing code and then invoking make: why not use watchdog instead?
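A make-driven equivalent might be sketched like this (the file patterns are purely illustrative; watchmedo separates multiple patterns with semicolons):

```shell
#! /bin/sh
# Re-run make whenever a C source or header changes.
# Adjust the patterns to suit your project.
exec watchmedo shell-command -c make -p '*.c;*.h' .
```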

Avoiding pipes to gnuplot

Whilst it’s obviously useful if the source file is being edited by a person, watchdog is also a useful replacement for various inter-process communication schemes.

For example, I’m a big fan of gnuplot, which has a perfectly good command line interface. It works well if you’ve got fixed data to plot, and you want to fettle the plotting parameters.

It’s less good if you’re generating new data as you go, and you simply want the plot output to update automatically. Obviously one approach is to get the program generating the data to open a pipe to gnuplot, and then send commands along the pipe as necessary.

In practice though, that can be a bit of a hassle to do well, especially in languages with less whipuptitude than Perl. Since I increasingly use Haskell and ghci for simple calculations, using watchdog seems to be a better approach.

Explicitly, I get the Haskell to write a gnuplot script, together with the data it plots, into a directory, gpout/.
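A sketch of that writing step might look like this (the file names and plot commands are illustrative, not the ones I actually use):

```haskell
import System.Directory (createDirectoryIfMissing)

-- A sketch: dump the data and a gnuplot script into gpout/.
-- File names and plot settings here are illustrative.
writePlot :: [(Double, Double)] -> IO ()
writePlot pts = do
  createDirectoryIfMissing True "gpout"
  writeFile "gpout/data.dat" $
    unlines [ show x ++ " " ++ show y | (x, y) <- pts ]
  writeFile "gpout/plot.gp" $ unlines
    [ "set terminal pdf"
    , "set output 'gpout/plot.pdf'"
    , "plot 'gpout/data.dat' with lines"
    ]
```

In ghci one can then just call writePlot on whatever has been computed.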

To generate the plots manually, one needs to execute a few simple commands, which I usually put in a script:

#! /bin/sh
echo "Rebuilding"
gnuplot -e "load 'gpout/'"
open gpout/*.pdf
echo "Done\n"

You’ll spot two implicit dependencies: the Haskell needs to write the script to gpout/, and the commands in the script must generate PDF files in gpout.

To automate this, we just need to persuade watchdog to run this script as needed. Happily, this is easy: here’s a suitable script:

#! /bin/sh
DIR=gpout
mkdir -p "$DIR"
echo "Watching $DIR..."
watchmedo shell-command -c tools/run-gnuplot -p '*.gp' "$DIR"

Other benefits

It’s nice to decouple generating data from plotting it, but in practice you could get a similar effect by wrapping the necessary pipes and process control into a library. In other words, having written the library I could simply call, say, runGnuplot in ghci instead of writeFile.
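Such a wrapper could be as simple as this sketch (runGnuplot is the hypothetical function named above; the paths are illustrative, and it assumes gnuplot is on the PATH):

```haskell
import System.Directory (createDirectoryIfMissing)
import System.Process (callProcess)

-- A sketch of the hypothetical runGnuplot: write the script,
-- then invoke gnuplot on it directly instead of via watchdog.
runGnuplot :: String -> IO ()
runGnuplot script = do
  createDirectoryIfMissing True "gpout"
  writeFile "gpout/plot.gp" script
  callProcess "gnuplot" ["gpout/plot.gp"]
```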

However, the watchdog solution is completely language agnostic so it’s trivial to change where the data are generated. One case is particularly useful: you can edit the gnuplot script manually and observe the effect. It’s not quite as direct as using the gnuplot command-line directly, but it’s close, and a good way to tweak the plots. Those tweaks can then be folded back into the Haskell which writes the script.


Few things are entirely positive, and watchdog is no exception. The main downside is that there’s no feedback from the downstream program. This is a particular issue when editing files by hand: a good editor will typically let you jump straight to the source of an error when you invoke the compiler from within the editor.


Overall, I think the main reason for writing this article is that watchdog seems a useful utility. The recipes sketched above aren’t optimal, and could easily be improved if you feel like a spot of yak-shaving. For example, on macOS many applications could be persuaded to update their display with a bit of AppleScript.