Martin’s Atelier<h1>The EleksMaker</h1><p>Martin Oldfield, 2023-12-14</p><p>Build and usage notes for an XY-plotter
</p><p>Ages ago I bought an EleksMaker plotter. The basic idea is to use a
couple of stepper motors to move a pen across paper, and a servo to
lift it. That’s enough to draw things!
</p><p>I think it’s fair to say the EleksMaker is at least heavily inspired by
the <a href="https://axidraw.com">AxiDraw</a> from <a href="https://www.evilmadscientist.com">Evil Mad Scientist
Labs</a>, though it’s much cheaper and
less polished. Perhaps inevitably I think the documentation has rotted
since the device was released: these are my brief notes on how to get
it all working.
</p><p>These days, I think most people building the EleksMaker tend to
install a laser rather than a pen, which can make the documentation
rather confusing. For example, the software goes to some lengths to
avoid firing the laser continuously at the same spot, lest it start a
fire.
</p><h2>Hardware
</h2><p>The machine came as a kit which was fairly easy to build. Two static
stepper motors drive a belt, which is looped in a clever way to move
the pen arbitrarily in a plane. In CNC circles it’s referred to as a
<a href="https://en.wikipedia.org/wiki/CoreXY">CoreXY</a> design.
</p><p>Vertical motion is controlled by a servo. There’s not a direct linkage
between the servo arm and the pen: rather the servo arm can lift the
pen away from the paper, but when the servo arm swings out of the way
gravity pulls the pen down. In practice I found this didn’t work very
well, but adding a spring to pull the pen down solved the problem.
</p><p>It is tempting to replace this part of the mechanism. However, it’s
not entirely trivial: it is hard to align the machine such that it
moves the head parallel to the paper, so you either need some sort of
springiness in the pen holder, or use a pen which can tolerate the
variation. This is essentially the bed-levelling problem well-known to
the owners of 3D-printers, but the tolerances here are much looser.
</p><p>When connecting a servo with the usual brown-orange-yellow colours,
the brown goes closest to the micro-USB port.
</p><h2>Firmware
</h2><p>The EleksMaker is controlled by an Arduino Nano running
<a href="https://github.com/gnea/grbl/wiki">grbl</a>. Both firmware and hardware
are essentially obsolete but are still available. Oddly most of the
grbl code lives in a library: the grblUpload.ino file which you
compile to make the firmware has but a single include:
</p><pre><code> #include <grbl.h>
</code></pre><h3>Configuring grbl
</h3><p>By default grbl assumes that there are independent X- and Y-axis
motors. To support the CoreXY geometry it’s enough to edit config.h
and enable:
</p><pre><code>#define COREXY
</code></pre><h3>Firmware size
</h3><p>When compiled my grbl firmware was 31,320 bytes long, which is too big
for the default Arduino Nano settings (30,720 bytes). Annoyingly, the
problem isn’t with the bootloader, but rather with the fuses that set
how much flash is available for user (i.e. non-bootloader) code. <a href="https://github.com/arduino/ArduinoCore-avr/issues/308">This
issue</a> comes up
repeatedly on the Arduino forum.
</p><p>So the first step in uploading firmware to a new board is to fix the
fuses: the easiest way to do this is to burn a new bootloader
pretending that the Nano is a Uno. The Uno has the same processor but
sane fuse and bootloader settings. Happily you only have to entertain
this deception once.
</p><h3>Servo control
</h3><p>Originally in grbl the servo output was used to control the speed of a
spindle on a CNC machine. As mentioned above, these days people
typically put a laser, rather than a pen, on the plotter, and use the
servo output to modulate its intensity. In both cases, the servo
output is a PWM signal, which is good for RC servo control, but the
parameters need to be changed.
</p><p>To control a <a href="https://en.wikipedia.org/wiki/Servo_%28radio_control%29">RC
servo</a> we
need a frequency of 50Hz with a pulse length of 1–2ms, which we can
achieve by changing this definition (which occurs twice) in cpu_map.h:
</p><pre><code>#define SPINDLE_TCCRB_INIT_MASK ((1<<CS22)|(1<<CS21) | (1<<CS20))
// 1/1024 prescaler ~60Hz for servo
</code></pre><p>Once patched thus, on-times of 19–31 move the servo to sensible
places.
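</p><p>As a sanity check, the pulse widths implied by those S values can
be computed from the prescaler. This is just a sketch, assuming a 16MHz
AVR clock, the /1024 prescaler set above, and that the S value is the
raw on-time count in the 8-bit timer:
</p><pre><code># Assumptions: 16 MHz CPU clock, /1024 prescaler (CS22|CS21|CS20 mask),
# 8-bit timer, S value used directly as the on-time count.
F_CPU = 16_000_000
PRESCALE = 1024
TICK = PRESCALE / F_CPU        # seconds per timer count

def pulse_ms(s):
    """Pulse width in milliseconds for a spindle-speed value S."""
    return s * TICK * 1e3

print(pulse_ms(19))        # ~1.2 ms : pen up
print(pulse_ms(31))        # ~2.0 ms : pen down
print(1 / (256 * TICK))    # frame rate ~61 Hz, matching the comment above
</code></pre><p>The numbers land neatly in the standard 1–2ms servo range.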
</p><p>grbl implements safety interlocks which couple the servo output to the
motion. This makes sense when driving a spindle or laser, but can be
confusing when you’re just moving a pen up and down.
</p><h3>grbl settings
</h3><p>grbl has a number of configuration parameters. These are the settings
I’m using:
</p><pre><code>>>> $I
[VER:1.1h.20190825:]
[OPT:VC,15,128]
ok
>>> $$
$0 = 10 (Step pulse time, microseconds)
$1 = 25 (Step idle delay, milliseconds)
$2 = 0 (Step pulse invert, mask)
$3 = 1 (Step direction invert, mask)
$4 = 0 (Invert step enable pin, boolean)
$5 = 0 (Invert limit pins, boolean)
$6 = 0 (Invert probe pin, boolean)
$10 = 1 (Status report options, mask)
$11 = 0.010 (Junction deviation, millimeters)
$12 = 0.002 (Arc tolerance, millimeters)
$13 = 0 (Report in inches, boolean)
$20 = 0 (Soft limits enable, boolean)
$21 = 0 (Hard limits enable, boolean)
$22 = 0 (Homing cycle enable, boolean)
$23 = 0 (Homing direction invert, mask)
$24 = 25.000 (Homing locate feed rate, mm/min)
$25 = 500.000 (Homing search seek rate, mm/min)
$26 = 250 (Homing switch debounce delay, milliseconds)
$27 = 1.000 (Homing switch pull-off distance, millimeters)
$30 = 255 (Maximum spindle speed, RPM)
$31 = 0 (Minimum spindle speed, RPM)
$32 = 1 (Laser-mode enable, boolean)
$100 = 100.000 (X-axis travel resolution, step/mm)
$101 = 100.000 (Y-axis travel resolution, step/mm)
$102 = 100.000 (Z-axis travel resolution, step/mm)
$110 = 5000.000 (X-axis maximum rate, mm/min)
$111 = 5000.000 (Y-axis maximum rate, mm/min)
$112 = 5000.000 (Z-axis maximum rate, mm/min)
$120 = 200.000 (X-axis acceleration, mm/sec^2)
$121 = 200.000 (Y-axis acceleration, mm/sec^2)
$122 = 200.000 (Z-axis acceleration, mm/sec^2)
$130 = 200.000 (X-axis maximum travel, millimeters)
$131 = 200.000 (Y-axis maximum travel, millimeters)
$132 = 200.000 (Z-axis maximum travel, millimeters)
ok
>>> $G
[GC:G0 G54 G17 G21 G90 G94 M5 M9 T0 F0 S0]
ok</code></pre><p>Some of these set scale factors. For example, $30 and $31 control the
mapping between the spindle speed set in G-code and the value poked
into the PWM peripheral.
</p><h3>G-code
</h3><p>To control the plotter we send it
<a href="https://en.wikipedia.org/wiki/G-code">G-code</a>. G-code covers all
sorts of machines, but we only need a small subset here.
</p><h4>Preamble
</h4><pre><code>G28 ; go to the predefined (home) position
G53 ; use machine coordinates
M03 S17 ; lift the pen
G1 X0 Y0 F1000 ; move to the origin and set default speed
</code></pre><h4>Epilogue
</h4><pre><code>M03 S17 ; lift the pen
G28 ; return to the home position
</code></pre><h4>Movement
</h4><p>This moves us to (123,45) measured in mm at the default speed.
</p><pre><code>G1 X123 Y45
</code></pre><h4>Pen control
</h4><p>To move the pen up:
</p><pre><code>M03 S17 ; lift the pen
</code></pre><p>and to move it down:
</p><pre><code>M03 S31 ; drop the pen
</code></pre><h4>Set coordinate origin
</h4><p>Sometimes it’s helpful to move the coordinate origin. Usually
I want to tell the machine to use the current position as the
origin in future.
</p><pre><code>G92 X0 Y0 ; Set (0,0) to current location
</code></pre><h2>Useful links
</h2><h3>Universal G-code Sender
</h3><p>To control the machine the <a href="http://winder.github.io/ugs_website/">Universal G-code Sender</a>
was very useful.
</p><h3>Other EleksMaker articles
</h3><ul><li><p>Jan Delgado wrote some <a href="https://github.com/jandelgado/eleksmaker_a3">notes</a> for an A3
laser engraver.
</p></li><li><p>A Danish FabLab wrote some <a href="https://fablab.ruc.dk/using-a-grbl-powered-drawing-machine-e-g-eleksmaker/">notes</a>
about a pen plotter. I found them useful but not all their comments matched my
experience (e.g. there wasn’t an A1 command in my firmware).
</p></li></ul><h3>Generating G-code
</h3><p>There are numerous tools to convert SVG to G-code. Here are
a few I found useful:
</p><ul><li><p>Dlacko wrote <a href="https://github.com/domoszlai/juicy-gcode">juicy-gcode</a> in Haskell.
</p></li><li><p>There are lots of python tools on
<a href="https://pypi.org/search/?q=gcode">PyPI</a>. I found several helpful
for code snippets, but didn’t have complete success with any of
them: that was probably due to my incompetence though.
</p></li><li><p>Plugins for Inkscape can generate G-code. Johny Mattsson has the
<a href="https://github.com/jmattsson/eleksmaker-inkscape-extension/blob/master/README.md">best
fork</a>.
</p></li></ul><h1>HDMI capture cards as monitors</h1><p>Martin Oldfield, 2023-10-28</p><p>HDMI capture cards make a serviceable monitor for non-critical
tasks. I find them ideal for watching a Raspberry Pi boot.
</p><p><img alt="[HDMI capture dongle]" class="img_noborder" src="dongle.jpg">
</p><p>I’ve long been slightly annoyed that although I usually have a nice
display to hand, it’s hard to just view a video stream on
HDMI. However, it turns out that for about twenty pounds you can buy a
little dongle which lets you capture HDMI video on a USB-C
port. Specifically, I bought a <a href="https://www.amazon.co.uk/dp/B0BW9MK247?ref=ppx_yo2ov_dt_b_product_details&th=1">Guermok Capture
Card</a>
but others might well work.
</p><p>I find it a handy monitor when I need to work out why a Raspberry Pi
isn’t booting properly. The display is perfectly sharp and legible,
but I’ve not tested the latency.
</p><p>Having sorted out the hardware, you need some software:
</p><ul><li><p>On an Apple Silicon Mac the <a href="https://apps.apple.com/gb/app/quick-camera/id598853070?mt=12">Quick
Camera</a>
app works well.
</p></li><li><p>On an iPad Pro I use the <a href="https://orion.tube">Orion</a> app, which
has more bells and whistles, but since it introduced me to the
idea I feel a certain sense of loyalty.
</p></li></ul><h1>Floppy Disks in 2023</h1><p>Martin Oldfield, 2023-10-18</p><p>Reading data from old floppy disks in 2023.
</p><p><img alt="[Floppy Disk Drive]" class="img_noborder" src="gw.jpg">
</p><p>Recently I was clearing out some old junk and found the 3.5" floppy
disks from an Atari ST which I used back in the 1990s. Naturally I
wondered if I could still read them, but lacked any hardware to do it.
</p><p>These are brief notes on how I solved this using things which were
easily available in the UK in 2023. I’m writing it both for my own
records and in the hope that it might help others who want to do
something similar.
</p><p>These are very practical notes: if you want to understand the
underlying technology and some of the history, I recommend this
<a href="https://thejpster.org.uk/blog/blog-2023-08-28/">excellent article</a> by
Jonathan Pallant.
</p><p>For my needs it was enough to know that most ST disks have eighty
tracks each of nine 512-byte sectors. That means double-sided disks
hold 720 KB, and single-sided disks 360 KB.
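</p><p>That geometry is easy to check with a line of arithmetic:
</p><pre><code>tracks, sectors, sector_bytes = 80, 9, 512

side = tracks * sectors * sector_bytes   # bytes per side
print(side // 1024, 2 * side // 1024)    # 360 KB single-sided, 720 KB double-sided
</code></pre><p>which matches the figures above.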
</p><h2>The drive I
</h2><p>Somewhat to my surprise, you can still buy floppy disc drives though
these days they usually have an integrated controller and connect to a
PC via USB. Amazon will sell you many different sorts for about £20
each.
</p><p>I bought one of these, plugged it into a Mac, and inserted a
disc. Sadly, it didn’t work: presumably something doesn’t recognise
the Atari format. It’s possible that you might be able to fix this,
but I didn’t try.
</p><h2>The drive II
</h2><p>Although it’s hard to find new drives, eBay is full of used 3.5"
floppy drives just like I remember using decades ago. The Mitsumi
D359M3D seemed a popular choice, and I bought one for about
£10. Annoyingly I managed to buy one without the fascia but happily
that's just cosmetic.
</p><p>Like drives from thirty years ago, these have a 34-pin IDC data
connector, and a four-pin power connector. I’d forgotten a few details
though:
</p><ul><li><p>Most 34-pin floppy cables supported two identical drives on the
same cable. To accomplish such magic part of the cable is twisted
between the two drives: without the twist the signals for drive B
are presented to the drive; with the twist drive A’s signals
appear. Annoyingly that means that if you use a cable without the
twist you’ll see drive B, not drive A. I guess this stems from a
time when termination mattered.
</p></li><li><p>Although the 4-pin power connector nominally needs both a 12V and
5V supply, in practice the 12V supply isn’t needed for most modern
drives.
</p></li></ul><h2>The Greaseweazle
</h2><p>Modern Macs don’t have 34-pin sockets for floppy drives, but
after a bit of searching I found that people make fancy drive
controllers which sit on USB.
</p><p>Foremost amongst these is the
<a href="https://github.com/keirf/greaseweazle/wiki">Greaseweazle</a>, a 200MHz
Cortex M-4 based design. There are multiple versions, and the designs
are open enough that you could build your own if you wished. Being
lazy I just bought one of the V4 models from the designer’s eBay shop
for about £25.
</p><p>The Greaseweazle appears to the host computer as a USB serial
device. To control it, you need to install the <a href="https://github.com/keirf/greaseweazle/wiki/Software-Installation">host
tools</a>.
On the Mac these come as a python package.
</p><h2>The case
</h2><p>Happily, there’s a perfectly functional <a href="https://www.thingiverse.com/thing:5522437">printable
case</a> on Thingiverse.
</p><h2>Reading Atari ST disks
</h2><p>The <code>gw</code> command is installed as part of the host tools, and talks
to the hardware. To read disks you typically want something like this:
</p><pre><code>% gw read --drive 1 --format atarist.720 a.img
Reading c=0-79:h=0-1 revs=2
Format atarist.720
T0.0: IBM MFM (9/9 sectors) from Raw Flux (87474 flux in 400.33ms)
T0.1: IBM MFM (9/9 sectors) from Raw Flux (94299 flux in 400.33ms)
...</code></pre><p>The <code>--format</code> flag is a shorthand way to set the number of tracks and
so on. You can see a full list in
<a href="https://github.com/keirf/greaseweazle/blob/master/src/greaseweazle/data/diskdefs.cfg">diskdefs.cfg</a>.
To read Atari ST disks, you need the atarist.nnn format where nnn ∈ {
360, 400, 440, 720, 800, 880 }.
</p><p>If you get the number of sectors wrong, you’ll see helpful
diagnostics:
</p><pre><code>% gw read --drive 1 --format atarist.800 b.img
Reading c=0-79:h=0-1 revs=2
Format atarist.800
T0.0: IBM MFM (9/10 sectors) from Raw Flux (87475 flux in 400.44ms)
T0.0: IBM MFM (9/10 sectors) from Raw Flux (218639 flux in 1000.68ms) (Retry #1.1)
T0.0: IBM MFM (9/10 sectors) from Raw Flux (349804 flux in 1600.89ms) (Retry #1.2)
T0.0: IBM MFM (9/10 sectors) from Raw Flux (480969 flux in 2201.08ms) (Retry #1.3)
T0.0: Giving up: 1 sectors missing
...</code></pre><p>If you specify too many tracks the error will be obvious, too few and
you’ll just miss data.
</p><h2>Summary
</h2><p>Even in 2023 it’s easy to read old floppy disks. One recipe is:
</p><ul><li><p>Buy a used floppy disk drive from eBay.
</p></li><li><p>Buy a <a href="https://github.com/keirf/greaseweazle">Greaseweazle</a>.
</p></li><li><p>Print a <a href="https://www.thingiverse.com/thing:5522437">case</a>.
</p></li></ul><p>Enjoy!
</p><h1>A nice Haskell snippet</h1><p>Martin Oldfield, 2023-06-16</p><p>A succinct Haskell snippet using the View Patterns extension.
</p><p>Haskell code is often pleasingly elegant, but I particularly liked
this code which uses the <a href="https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/view_patterns.html">View
Patterns</a>
extension to good effect:
</p><pre><code>{-# LANGUAGE ViewPatterns #-}
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Map.Strict as M
import qualified Data.Text as T
....
r :: T.Text -> T.Text
r (T.stripPrefix "inc:" -> Just k) = handleInc k
r (T.stripPrefix "h1:" -> Just k) = header k
r ((`M.lookup` snippets) -> Just h) = h
r k = unknown k
</code></pre><p>I suppose the key insight is that in the context of View Patterns
functions of type <code>u -> Maybe v</code> are great to optionally match against
some <code>u</code> and extract a <code>v</code>.
</p><p>Some Haskell libraries
e.g. <a href="https://hackage.haskell.org/package/text-2.0.2/docs/src/Data.Text.html">Data.Text</a>
have
<a href="https://hackage.haskell.org/package/text-2.0.2/docs/Data-Text.html#g:22">functions</a>
specifically to enable this sort of thing.
</p><p>It struck me as a more principled version of this sort of Perl
code:
</p><pre><code>if ($foo =~ /^h1:(.*)/) { header($1) }
</code></pre><p>You can do much more with View Patterns: the GHC wiki lists <a href="https://gitlab.haskell.org/ghc/ghc/-/wikis/view-patterns">some
examples</a>.
</p><h1>Strobing with the Pico PWM</h1><p>Martin Oldfield, 2023-06-14</p><p>Building a crude stroboscope with the Raspberry Pi Pico’s
PWM module.
</p><p><img alt="[Stroboscope]" class="img_noborder_small" src="strobe.jpg">
</p><p>Recently I built a toy
<a href="https://en.wikipedia.org/wiki/Stroboscope">stroboscope</a> which used
the PWM timer on a Raspberry Pi Pico to flash a high-power LED.
</p><h2>Hardware
</h2><p>The hardware is simple. Besides the Pico, LED and a battery, we need
two other components to flash the LED:
</p><ol><li><p>A MOSFET to switch the current flowing through the LED.
</p></li><li><p>A chunky capacitor to stiffen the output of the battery.
</p></li></ol><p>There’s also a <a href="https://www.aliexpress.com/w/wholesale-tm1638.html">TM1638
module</a> which
provides buttons and 7-segment LEDs for a simple user interface.
</p><h2>Software
</h2><p>The software is straightforward too: if you ignore the UI code it’s
basically a case of just configuring the Pico’s PWM. Happily, this is
clearly explained in Section 4.5 of the <a href="https://datasheets.raspberrypi.com/rp2040/rp2040-datasheet.pdf">RP2040
datasheet</a>.
</p><p>Searching for <a href="https://www.google.com/search?q=pico+pwm+examples">Pico PWM examples</a> will furnish you with plenty
of code written in the language of your choice.
</p><h3>PWM control
</h3><p>I found that I thought about PWM configuration in a slightly different
way after the project, which I thought worth noting down for my future
self.
</p><p>If you’re just interested in generating a signal with a given period
\(\tau\), it boils down to finding \(a\), \(b\), and
\(c\) such that,
</p><p>$$
\tau = \tau_0 \times a \times b \times c,
$$
</p><p>where,
</p><p>$$
\begin{eqnarray}
\tau_0 &=& 8\textrm{ns}, \\
a &\in& [1, 2], \\
b &\in& [1, 256], \\
c &\in& [1, 65536].
\end{eqnarray}
$$
</p><p>Here \(a\) is determined by whether the PWM is running in
phase-correct mode (where it counts up then down) or not (where it
just counts up). In phase-correct mode \(a\) is 2, otherwise it’s 1.
</p><p>\(b\) is the clock divider. The hardware supports a fractional
divisor, but for simplicity’s sake we consider only integer values
here.
</p><p>\(c\) is the counter limit.
</p><p>In many cases, the choice will not be unique: for example you might be
able to double \(a\) and halve \(b\). Keeping the counter limit
high and the divider low usually helps make the duty-cycle more
precise.
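</p><p>Given those constraints, a brute-force search is quick. This
sketch (with \(\tau_0\) = 8ns, i.e. the Pico’s default 125MHz clock)
returns the combination with the smallest period error:
</p><pre><code>TAU0 = 8e-9   # system clock period at 125 MHz

def pwm_params(tau):
    """Find (error, a, b, c) minimising the error in tau = TAU0 * a * b * c."""
    best = None
    for a in (1, 2):                    # phase-correct doubles the period
        for b in range(1, 257):         # integer clock divider
            c = round(tau / (TAU0 * a * b))
            if 1 <= c <= 65536:         # counter limit must fit in 16 bits
                err = abs(TAU0 * a * b * c - tau)
                if best is None or err < best[0]:
                    best = (err, a, b, c)
    return best

print(pwm_params(0.02))   # 50 Hz
</code></pre><p>For a 50Hz servo-style signal this finds an exact combination.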
</p><h4>Parameter space
</h4><p>It’s worth noting that there are 25 bits of configuration, which is
about \(3 \times 10^7\) settings (or about the number of seconds in a year). I think
that’s too many to iterate over if you want a real-time response, but
it’s perfectly reasonable to explore offline.
</p><p>For example, suppose you want to find a divisor which gives good
approximations to a set of frequencies: just consider all the
divisors, accumulate some sort of misfit statistic for each target
frequency, then pick the best. No thought is required!
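</p><p>For example, here is a sketch of that search (again assuming a
125MHz system clock): score every divisor against the whole set of
target frequencies, then take the best.
</p><pre><code>F_CLK = 125_000_000   # assumed Pico system clock

def best_divider(freqs):
    """Integer divider (1..256) minimising total relative frequency error."""
    def misfit(b):
        total = 0.0
        for f in freqs:
            c = min(65536, max(1, round(F_CLK / (b * f))))  # counter limit
            total += abs(F_CLK / (b * c) - f) / f
        return total
    return min(range(1, 257), key=misfit)

print(best_divider([10.0, 25.0, 50.0, 100.0]))
</code></pre><p>Here a divider of 200 reproduces all four frequencies exactly.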
</p><p>This was obvious in retrospect, but I wasted time thinking about
clever ways to do it all in real-time on the Pico.
</p><h4>Multiple frequencies
</h4><p>Suppose we’re going to change between a discrete set of
frequencies. In some applications, if you make a small change in the
period it’s helpful if the time between the last pulse of the old
regime and the first pulse of the new one is roughly a whole number of
periods. If you don’t do this, you’re effectively adding a phase
jump.
</p><p>The easiest way to do this is to keep the divisor the same: that way
the counter remains set to a sensible number after the change.
</p><p>If you do change the divisor then you either need to change the count
value as well, or do the update when the counter’s at zero.
</p><p>Again, all this was obvious in retrospect.
</p><h3>The UI
</h3><p>The TM1638 board gave me eight push-buttons which I treated as four pairs:
</p><ol><li><p>Increase/decrease the duty-cycle.
</p></li><li><p>Increase/decrease the frequency with auto-repeat and acceleration.
</p></li><li><p>Increase/decrease the frequency by 0.1Hz, no repeat.
</p></li><li><p>Increase/decrease the frequency by 0.01Hz, no repeat.
</p></li></ol><p>It worked better to group the buttons by speed rather than function so
that the button to reverse the last change was adjacent to the button
which caused it.
</p><h1>Wakeup Timers</h1><p>Martin Oldfield, 2023-01-18</p><p>Experiments with a couple of low power wakeup timers.
</p><p>I’d like to be able to turn on a gadget roughly once a day, then
guarantee that it’s turned off a few minutes later—or sooner
if the gadget’s finished its task. This is a reasonably common
problem, and gadgets which solve it are usually called wakeup timers.
</p><p>As you might have guessed the gadget here is a microcontroller which
needs to take some measurements, transmit them to a server, then do
nothing until the next day. One could simply put the microcontroller
into a deep sleep when it’s finished, and rely on a low-power timer to
wake it the next day. For this project though I wanted stronger
guarantees that it would work properly. So, I built a low-current
timer which controls things. Happily there are lots of chips to make
this very straightforward. I looked at two: the 74HC4060 and the
LTC2956.
</p><h2>The 74HC4060
</h2><p>The 74HC4060 contains both a 14-stage counter, and the inverters
needed to build an RC oscillator. If we ignore limiting the on-time,
but just generate a square-wave with a period of about a day, then we
need two resistors and a capacitor besides the IC.
</p><p><img alt="[74HC4060 Schematic]" class="img_noborder_small" src="74hc4060.png">
</p><p>$$
\begin{eqnarray}
T_{period} &\approx& 2.2 \; R_1 \; C_1 \\
R_2 &\approx& 2\; R_1.
\end{eqnarray}
$$
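</p><p>Plugging in the component values used in the experiment below
(R<sub>1</sub> = 390 kΩ, C<sub>1</sub> = 100 nF) gives predictions in
the right ballpark:
</p><pre><code>R1, C1 = 390e3, 100e-9

t_osc = 2.2 * R1 * C1      # oscillator period, ~86 ms
f4 = 1 / (16 * t_osc)      # Q4 output: oscillator frequency / 16
t14 = 2**14 * t_osc        # Q14 period: 2^14 oscillator periods

print(f4, t14)             # ~0.73 Hz and ~1400 s vs 0.65 Hz and 1600 s measured
</code></pre><p>These are the same ballpark as the measured values below.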
</p><h3>Experimental results
</h3><p>For the experiment below I used a
<a href="https://www.ti.com/product/CD74HC4060">CD74HC4060E</a> from Texas
Instruments. It’s an old chip: the datasheet is dated February
1998.
</p><table class="cspaced_sml">
<tr><th>Vcc / V</th> <td>3.3</td> <td>3.3</td> </tr>
<tr><th>R<sub>1</sub> / kΩ</th> <td>390</td> <td>390</td> </tr>
<tr><th>R<sub>2</sub> / kΩ</th> <td>840</td> <td>840</td> </tr>
<tr><th>C<sub>1</sub> / nF</th> <td>1</td> <td>100</td> </tr>
<tr><th>f<sub>4</sub> / Hz</th> <td>66.7</td> <td>0.65</td> </tr>
<tr><th>t<sub>14</sub> / s</th> <td>15.3</td> <td>1,600</td> </tr>
<tr><th>I / µA</th> <td>360</td> <td>275</td> </tr>
</table>
<p>Notes:
</p><ul><li><p>f<sub>4</sub> is the frequency on Q<sub>4</sub>, one sixteenth the oscillator frequency.
</p></li><li><p>t<sub>14</sub> is the period of Q<sub>14</sub>.
</p></li></ul><p>Whilst this works, and it was easy to set up, it draws about 0.3mA
which is much too high. It’s possible that a different variation of
the chip would draw less power. To get to a period of a day, we’d need
to slow the oscillator down by a factor of about 50, which would
probably also reduce the power consumption.
</p><p>To provide a complete solution we could either trigger a monostable to
give a fixed on-time, or use a flip-flop set on the rising-edge of
Q<sub>14</sub> and reset when some of the lower bits go high. In both
cases the microcontroller could shorten the on-time.
</p><p>However, we can do very, very much better than the 74HC4060.
</p><h2>The LTC2956
</h2><blockquote><p>The <a href="https://www.analog.com/en/products/ltc2956.html">LTC2956</a> is
described as a ‘Wake-Up Timer with Pushbutton Control’. We don’t care
about the button, but the upshot is that the device will do exactly
what we want given a handful of discrete components, whilst drawing
just 800nA.
</p></blockquote><p>I am indebted to Parker Dillman of the <a href="https://macrofab.com/blog/podcast/">MacroFab Engineering
Podcast</a> for making me aware of
this part: he’s using it in his <a href="https://longhornengineer.com/2023/01/11/ltc2956-wake-up-timer/">Cat Feeder
Unreminder</a>
project.
</p><p>There are a couple of variants of the LTC2956: the -1 part has an active
high output designed for driving the enable pin of a voltage
regulator; the -2 part has an active low output suitable for driving
the gate of a p-channel high-side MOSFET switch.
</p><p>Ignoring some of the chip’s features, the low-frequency oscillator is
made from a faster oscillator and a divider chain, which triggers a
timer. The frequency of the oscillator and the quotient for the
divider are both set with resistors, and the period of the timer by a
capacitor.
</p><p><img alt="[LTC2956 Schematic]" class="img_noborder_small" src="ltc2956.png">
</p><p>$$
\begin{eqnarray}
T_{period} &=& \frac{R_{period} \times N_{range}}{400}, \\
T_{on} &=& \frac{C_{on}} {75}.
\end{eqnarray}
$$
</p><p>Times are in seconds, resistances in kΩ and capacitances in
nF. N<sub>range</sub> is the divider ratio, set by a resistor
according to the following table:
</p><table class="cspaced_sml">
<tr>
<th>RECOMMENDED PERIOD</th>
<th>N<sub>range</sub></th>
<th>R<sub>range</sub> / kΩ</th>
</tr>
<tr><td>0.25s to 0.8s</td> <td>1</td> <td>9.76</td></tr>
<tr><td>0.4s to 3.2s</td> <td>4</td> <td>17.4</td></tr>
<tr><td>1.6s to 12.8s</td> <td>16</td> <td>26.1</td></tr>
<tr><td>6.4s to 51.2s</td> <td>64</td> <td>35.7</td></tr>
<tr><td>25.6s to 3.4min</td> <td>256</td> <td>47.5</td></tr>
<tr><td>102s to 14min</td><td>1,024</td> <td>61.9</td></tr>
<tr><td>6.8min to 55min</td><td>4,096</td> <td>78.7</td></tr>
<tr><td>27min to 3.6hr</td><td>16,384</td> <td>100.0</td></tr>
<tr><td>1.82hr to 15hr</td><td>65,536</td> <td>127.0</td></tr>
<tr><td>7.28hr to 58hr</td><td>262,144</td> <td>162.0</td></tr>
<tr><td>29hr to 233hr</td><td>1,048,576</td> <td>210.0</td></tr>
<tr><td>233hr to 932hr</td><td>4,194,304</td> <td>280.0</td></tr>
</table>
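<p>As a worked example (my numbers, not from the datasheet): for a
daily wake-up with a five-minute on-time we might pick
N<sub>range</sub> = 262,144 (the 7.28hr to 58hr row) and compute the
other components from the formulas above:
</p><pre><code>t_period = 24 * 60 * 60        # one day, in seconds
n_range  = 262144              # divider ratio from the table

r_period = 400 * t_period / n_range   # kOhm, from T = R * N / 400
c_on     = 75 * 5 * 60                # nF,   from T_on = C / 75

print(r_period, c_on)   # ~132 kOhm and 22,500 nF (about 22 uF)
</code></pre><p>So standard values of 130 kΩ and 22 µF would be close.
</p>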
<h3>Period changes
</h3><p>If T<sub>on</sub> is bigger than T<sub>period</sub> shown above, the
period is stretched to T<sub>on</sub> + 125ms.
</p><p>The on time can be shortened by sending a short pulse to the <span style="text-decoration:overline">SLEEP</span> pin. Sending a long
pulse (as configured here, long means more than 16.384s) will turn the
timer off. For more details consult the data sheet.
</p><h2>Conclusions
</h2><p>It is perhaps not surprising that single chip wakeup timers
exist. Nevertheless, I was pleasantly surprised that they need so
little current to function.
</p><p>One potential gotcha: if you connect the output of the LTC2956 to test
gear with an input impedance of 1MΩ then a few Volts will drive a
current of a few µA. Normally it would be safe to ignore such
currents, but here they dominate the 0.8µA needed to run the chip
itself.
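</p><p>The arithmetic is simple enough to check (assuming a 3.3 V rail;
the article only says “a few Volts”):
</p><pre><code>v, r = 3.3, 1e6          # volts across a 1 MOhm probe input
i_probe = v / r          # current drawn by the probe
i_chip  = 0.8e-6         # the timer's own 800 nA

print(i_probe / i_chip)  # the probe draws about four times the chip's current
</code></pre><p>So the probe, not the timer, dominates the measurement.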
</p><h1>RESTful Hardware APIs</h1><p>Martin Oldfield, 2022-12-08</p><p>Putting hardware devices online via a REST API implemented
with FastAPI.
</p><p>This article shows a way to connect hardware to the local network via
a small Linux computer like a Raspberry Pi. Connecting hardware to a
computer often makes it more useful: it makes it easier to log data
and automate things. Making the connection via a network rather than,
say, USB, offers a couple of extra benefits. Most obviously it means
that the computer can be some distance away from the hardware it’s
controlling. Equally usefully, the computer is electrically isolated
from the hardware reducing the risk of disaster.
</p><p>Unlike many <a href="https://en.wikipedia.org/wiki/Internet_of_things">IoT</a>
devices we will speak
<a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol">HTTP</a>
rather than <a href="https://en.wikipedia.org/wiki/MQTT">MQTT</a>, and by making
the device the HTTP server we don’t need any other infrastructure. In
essence we will write a simple website, but instead of manipulating
records in a database, our back-end will be hardware. This approach
leads us to the tools and techniques of server-side web development
rather than embedded software. For example, we will write code in
python, send messages encoded in
<a href="https://en.wikipedia.org/wiki/JSON">JSON</a>, and use
<a href="https://en.wikipedia.org/wiki/Representational_state_transfer">RESTful</a>
ideas to design the API.
</p><h3>RESTful design
</h3><p>RESTful design is an important principle for designing APIs. One key
idea is that we map URLs to things which are meaningful in our
application, then make HTTP requests. For example, we might
<code>GET /api/1.0/bbq/temperature</code> to see if it’s time to cook, or
<code>PUT /api/1.0/kitchen/light</code> to brighten things up. Both the action
e.g. <code>GET</code> in the request, and the status code returned e.g. <code>200</code>
should be taken seriously.
</p><p>Another important idea is that the server should know nothing of the
client state: the API covers only the state of the server, which helps
to keep the design modular.
</p><h3>FastAPI
</h3><p>The FastAPI library is a good way to implement APIs in Python. For
example given a <code>temp_read()</code> function which talks to the hardware,
here is a wrapper which exposes it in the API:
</p><pre><code>@app.get(apiroot + "/temperature")
def get_temp():
    return { 'temperature': temp_read() }</code></pre><p>You can see that FastAPI uses
<a href="https://docs.python.org/3/glossary.html#term-decorator">decorators</a>
to map URLs to functions.
</p><p>Here is another stub, this time to set the brightness of a light:
</p><pre><code>@app.put(apiroot + "/light")
def put_light(b: float):
    set_brightness(b)
    return { 'brightness': get_brightness() }</code></pre><p>Here we use <a href="https://docs.pydantic.dev/#example">pydantic type
annotations</a> to define the type of
data the API accepts. Note too that we <code>PUT</code> rather than <code>POST</code> the LED
state because
<a href="https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4">RFC7231</a> says
that <code>PUT</code> is the appropriate verb when the new state obliterates the
old. By contrast <code>POST</code> is appropriate when we are creating a new thing,
and <code>PATCH</code> when we are modifying the state of an existing thing.
</p><h4>Error handling
</h4><p>Suppose we want to extend the example above to signal an error if the
brightness is outside the valid range. This is done by returning a
different status code and an error message. HTTP status codes are
defined in <a href="https://datatracker.ietf.org/doc/html/rfc9110">RFC9110</a>:
code 400 used here indicates that the problem lies in the request made
by the client.
</p><pre><code>def client_error(t):
    return PlainTextResponse(content = t,
                             status_code = status.HTTP_400_BAD_REQUEST)

@app.put(apiroot + "/light")
def put_light(b: float):
    if b < 0.0 or b > 1.0:
        return client_error(f"Brightness {b} out of range [0,1]")
    else:
        led_set(b)
        return led_get()</code></pre><h3>Resource discovery
</h3><p>Devices on the local network usually have neither a DNS record nor a
fixed IP address, which makes it inconvenient for clients trying to
talk to them. Happily, though, it is easy to get Linux boxes to advertise
themselves in the <code>.local</code> domain.
</p><h2>A Toy Server
</h2><p>Let’s build a minimal toy server which puts the Pi’s temperature sensor
and LED online. Happily these devices are easily accessible in <code>/sys</code>,
so we don’t need to do any low-level work, though we do need <code>sudo</code> to
get the relevant access permission.
</p><h4>Temperature
</h4><p>The Linux kernel makes the CPU temperature available in the <code>/sys</code> filesystem:
</p><pre><code>$ cat /sys/class/thermal/thermal_zone0/temp
37810</code></pre><p>The temperature is returned in milliCelsius, so 37810 corresponds to
an overprecise 37.81°C.
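</p><p>The conversion is a one-liner; this sketch formats the raw reading in
the same way as the server code below:
</p><pre><code>def format_temp(raw):
    # /sys reports milliCelsius, e.g. "37810" -> "37.8°C"
    return f"{int(raw) / 1000.0:.1f}\N{DEGREE SIGN}C"

print(format_temp("37810"))   # prints 37.8°C</code></pre><p>Rounding to one decimal place avoids implying more precision than the
sensor really offers.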
</p><h4>LED
</h4><p>The green LED on the Raspberry Pi is controlled by the Linux
kernel. Usually it is configured to show disk activity, but this can
be changed:
</p><pre><code>$ sudo sh -c "echo none > /sys/class/leds/led0/trigger"
$ sudo sh -c "echo 1 > /sys/class/leds/led0/brightness"
$ sudo sh -c "echo 0 > /sys/class/leds/led0/brightness"</code></pre><h3>Software preliminaries
</h3><p>Let’s assume that we have a new Raspberry Pi called <code>restapi</code>, so that
we can connect to it at <code>restapi.local</code>. Starting from the stock
Raspberry Pi OS distribution, we need to install the FastAPI library and
uvicorn, which FastAPI uses to make a webserver. We also grab the
source code from GitHub:
</p><pre><code>$ sudo apt install python3-fastapi python3-uvicorn uvicorn
$ git clone https://github.com/mjoldfield/restful-hardware-api.git
$ cd restful-hardware-api
$ ls
api.py index.html</code></pre><h3>The server
</h3><p>The server code is in <code>api.py</code>:
</p><pre><code>from fastapi import FastAPI, status
from fastapi.responses import FileResponse, PlainTextResponse
from pydantic import BaseModel

import subprocess

#
# Define functions to talk to the hardware
#
# Here we use toy examples from /sys appropriate to
# a Raspberry Pi.
#
# They might need root access, so run stuff in a shell
# inside sudo.
#
def run_as_root(cmd):
    x = subprocess.run(["sudo", "su", "-c", cmd]
                      , capture_output=True
                      , text=True
                      , check=True)
    return x.stdout.strip()

def temp_read():
    raw = run_as_root("cat /sys/class/thermal/thermal_zone0/temp")
    t = f"{int(raw) / 1000.0:.1f}\N{DEGREE SIGN}C"
    return { 'temperature': t }

def led_init():
    run_as_root("echo none > /sys/class/leds/led0/trigger")

def led_set(x):
    b = 255 if x > 0 else 0
    run_as_root(f"echo {b} > /sys/class/leds/led0/brightness")

def led_get():
    b = float(run_as_root("cat /sys/class/leds/led0/brightness"))
    x = 1.0 if b > 0 else 0.0
    return { 'brightness': x }

#
# The main HTTP server starts here
#
led_init()

app = FastAPI()

apiroot = "/api"

def client_error(t):
    return PlainTextResponse(content = t
                            , status_code = status.HTTP_400_BAD_REQUEST)

@app.get('/')
def get_index():
    return FileResponse('index.html')

@app.get(apiroot)
def get_all():
    return (temp_read() | led_get())

@app.get(apiroot + "/temperature")
def get_temp():
    return temp_read()

@app.get(apiroot + "/light")
def get_light():
    return led_get()

class LightControl(BaseModel):
    brightness: float

@app.put(apiroot + "/light")
def put_light(m: LightControl):
    b = m.brightness
    if b < 0.0 or b > 1.0:
        return client_error(f"Brightness {b} out of range [0,1]")
    else:
        led_set(b)
        return led_get()</code></pre><p>Hopefully this code is reasonably clear even on the first reading.
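</p><p>Incidentally, the range check could instead live in the model itself:
pydantic fields can carry constraints, and FastAPI then rejects bad input
before the handler runs. A sketch of that alternative (not what the
server above does):
</p><pre><code>from pydantic import BaseModel, Field, ValidationError

class LightControl(BaseModel):
    # Constrain brightness to the closed interval [0,1]
    brightness: float = Field(ge=0.0, le=1.0)

LightControl(brightness=0.5)        # fine
try:
    LightControl(brightness=5.0)    # rejected by the model itself
except ValidationError as e:
    print("rejected:", e.errors()[0]['loc'])</code></pre><p>With this approach FastAPI returns a 422 automatically, at the cost of
a less tailored error message.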
</p><p>There are several things to note:
</p><ul><li><p>We specify the parameters used to set the brightness by defining a
custom class. It’s a bit contrived here, but it does show the
general principle. Using a class also makes the automatically
generated documentation a little bit clearer.
</p></li><li><p>If someone <code>GET</code>s <code>/</code> we return the HTML document saved in
<code>index.html</code>.
</p></li><li><p>If someone <code>GET</code>s <code>/api</code> we return the union of lower-level
data. This reduces the latency which would be incurred if we made
multiple sequential requests.
</p></li></ul><h3>Liftoff!
</h3><p>Finally, we need to run the server:
</p><pre><code> $ uvicorn api:app --host 0.0.0.0</code></pre><p><a href="https://www.uvicorn.org">Uvicorn</a> is a python HTTP server which runs
code conforming to the ASGI specification and thus supports FastAPI.
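</p><p>The ASGI interface itself is tiny: an application is just an async
callable which takes a connection scope and two message channels. A
minimal sketch, independent of FastAPI:
</p><pre><code>import asyncio

async def app(scope, receive, send):
    # A bare ASGI app: reply "hello" to any HTTP request
    assert scope["type"] == "http"
    await send({"type": "http.response.start",
                "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello"})

# Drive the app by hand, standing in for uvicorn
async def demo():
    sent = []
    async def receive():
        return {"type": "http.request", "body": b""}
    async def send(message):
        sent.append(message)
    await app({"type": "http", "path": "/"}, receive, send)
    return sent

print(asyncio.run(demo())[0]["status"])   # prints 200</code></pre><p>Uvicorn's job is to turn real network traffic into these messages;
FastAPI builds its routing and validation on top of them.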
</p><ul><li><p>The <code>api</code> in <code>api:app</code> means that we run the server in <code>api.py</code>.
</p></li><li><p>We specify the host address of 0.0.0.0 so that the server runs on
all the Pi’s network interfaces: without this we could only access
the API from the Pi itself.
</p></li><li><p>By default, uvicorn listens on port 8000, so the root URL of the
server is <a href="http://restapi.local:8000">http://restapi.local:8000</a>
</p></li></ul><h3>From the browser
</h3><p>Visiting <a href="http://restapi.local:8000/api">http://restapi.local:8000/api</a> returns the server state
encoded in JSON. A more friendly interface is available at
<a href="http://restapi.local:8000/docs">http://restapi.local:8000/docs</a> which allows you to explore and test
the API without writing any code. Better yet, because this page is
automatically generated from the code which is actually running on
the server, it will stay in sync as the code changes.
</p><h3>From the command line
</h3><p>Alternatively, we can access the API from the comment line with
curl. Happily the web UI tells us exactly the command to use:
</p><pre><code>$ curl -X 'PUT' \
'http://restapi.local:8000/api/light' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{ "brightness": 0 }'
{"brightness":0.0}</code></pre><p><a href="https://httpie.io">httpie</a> is a more modern command-line tool which makes this
a bit cleaner:
</p><pre><code>$ http PUT http://restapi.local:8000/api/light brightness=5.0
HTTP/1.1 400 Bad Request
content-length: 33
content-type: text/plain; charset=utf-8
date: Sun, 01 Jan 2023 16:14:02 GMT
server: uvicorn
Brightness 5.0 out of range [0,1]</code></pre><h3>From python
</h3><p>The <a href="https://requests.readthedocs.io/en/latest/#">requests</a> library
is an easy way to talk to the API from python:
</p><pre><code>$ python3
>>> import requests
>>> r = requests.put('http://restapi.local:8000/api/light'
...                  , json={ 'brightness': 0.2})
>>> r.content
b'{"brightness":1.0}'</code></pre><h2>A user interface of sorts
</h2><p>Although the <code>/docs</code> page is a good way to explore the API, in
practice we need a simpler UI for day-to-day work. A static HTML
page suffices for this: we can embed JavaScript in the HTML to
interact with the API.
</p><p>The code below uses the
<a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API">fetch</a> API to
make HTTP requests: it's a modern replacement for <code>XMLHttpRequest</code>.
</p><pre><code><!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <style>
      ...
    </style>
    <title>Toy restAPI</title>
    <script>
      const url_root = "/api";

      // Start here: call this when the page is loaded
      function init() {
          fetch_state();
          setInterval(fetch_state, 1000);
      }

      // Query state from the API, and update display.
      function fetch_state() {
          fetch(url_root)
              .then((response) => response.json())
              .then((data) => update_display(data))
      }

      // Update the display: HTML element IDs must match returned keys
      function update_display(d) {
          for (var k in d) {
              var e = document.getElementById(k);
              if (e) {
                  e.innerText = d[k];
              }
          }
      }

      // Handy function to wrap a PUT command. Do it, then fetch the
      // new state.
      function put(url, args) {
          fetch(url_root + url, {
              method: 'PUT',
              headers: { 'Content-Type': 'application/json' },
              body: JSON.stringify(args),
          })
          .then((r) => fetch_state());
      }

      function set_light(x) {
          put("/light", { "brightness": x });
      }
    </script>
  </head>
  <body onload="init()">
    <h1>Toy restAPI UI</h1>
    <h2>Temperature</h2>
    <p><span id="temperature"></span></p>
    <h2>Light</h2>
    <p>State: <span id="brightness"></span></p>
    <div class="f">
      <button onclick="set_light(0.0)">OFF</button>
      <button onclick="set_light(1.0)">ON</button>
    </div>
    <p><a href="/docs">API documentation</a> is available.</p>
  </body>
</html></code></pre><h2>Conclusions
</h2><p>Using HTTP to control hardware devices isn’t the most efficient way to
do it, nor does it give the best performance. If you wanted to read
data every 10ms or transfer vast amounts of data then this might not
be a good solution. On the other hand, if the task is to read a few
numbers or tweak the settings once a minute then it seems fine.
</p><p>The whole thing is easy to set up, making it feasible to deploy
on the fly.
</p>F18784E0-E6FB-11DD-B71D-CEAD2AF057D32009-01-20T14:08:57:57Z2022-08-17T16:39:47:47ZPlaces to eat in LondonMartin Oldfield<p>Some brief notes on places to eat in London. </p><h2>Great Queen Street</h2>
<p>A straightforward, decent gastropub found between Covent Garden and Holborn. It belongs to the same stable as Southwark’s ‘Anchor and Hope’, but is more convenient for the centre of town and, to my mind, has both a better atmosphere and nicer food.</p>
<p>It’s one of my favourite places for imaginative, honest grub.</p>
<p>A minor caveat: on my most recent visit (January 2012) things seemed to have slipped a bit, though of course I might just have been unlucky.</p>
<p>For more details phone them on +44 20 7242 0622, or see <a href="http://maps.google.com/maps?q=N+51+30.912+W+0+7.301">Google Maps.</a></p>
<p><small><em>Last visited January 2012.</em></small></p>
<h2>Dinner by Heston Blumenthal</h2>
<p>Second album syndrome seems a common problem in the music industry, but whilst Dinner is no Fat Duck, Mr Blumenthal has created another wonderful place.</p>
<p>One of Dinner’s gimmicks is that all the dishes hark back to some earlier, sometimes much earlier, recipe. Although it’s a nice idea, I’m not sure that it adds much to the experience.</p>
<p>The first, and perhaps best, dish on the menu is meat fruit: a perfect liver parfait skinned in citrus jelly. It appears to the diner as a perfectly normal tangerine, served on a plain wooden board, and accompanied by perfectly toasted brioche. Meat fruit claims a pedigree stretching back to 1500, but today’s dish is pure 21st-century.</p>
<p>The rest of the menu is less theatrical but the ingredients, recipes and presentation are all top notch. The staff too are all you could wish for: enthusiastic, attentive, and knowledgeable. It’s true that there’s not the same sense of fun one finds in Bray, but then ‘The Fat Duck’ is a very special place.</p>
<p>Dinner’s not without its own charms though: several of the normal tables overlook the kitchens and you can watch the chefs at work. It’s a truly impressive sight to see such a well trained team practising their art, and like <a href="http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/archive/feynman/idp.swf">Feynman’s appreciation of a flower</a> I think seeing how things are done adds to the enjoyment.</p>
<p>For more details visit their <a href="http://www.dinnerbyheston.com/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.120+W+0+9.596">Google Maps.</a></p>
<p><small><em>Last visited January 2012.</em></small></p>
<h2>Les Deux Salons</h2>
<p>Although London and Paris both have a goodly number of really fine places to eat, I’m often disappointed that London loses out in the brasserie stakes.</p>
<p>Happily ‘Les Deux Salons’ addresses this, albeit only <em>un petit peu.</em> Put simply, it’s a very fine brasserie just off Trafalgar Square. I’ve eaten here several times now, and always enjoyed it.</p>
<p>Both surroundings and the fare seem plausibly French, and even though Paris is easier to visit than ever, this is even more convenient!</p>
<p>Update: I visited in September 2012 looking for a simple <em>steak frites</em>. Although the meat was fine, the chips were awful!</p>
<p>For more details visit their <a href="http://www.lesdeuxsalons.co.uk/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.569+W+0+7.587">Google Maps.</a></p>
<p><small><em>Last visited September 2012.</em></small></p>
<h2>Hawksmoor Guildhall</h2>
<p>There are now three Hawksmoor restaurants in London, and the Guildhall branch is a fairly large subterranean affair. There’s a definite buzz to the place, which verged on being too noisy for my taste.</p>
<p>Hawksmoor claim to serve the best steak in London. I can’t vouch for that, but it’s certainly the best I’ve had. They serve a damn good gimlet too!</p>
<p>For more details visit their <a href="http://thehawksmoor.com/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.949+W+0+5.471">Google Maps.</a></p>
<p><small><em>Last visited February 2012.</em></small></p>
<h2>Spuntino</h2>
<p>Rupert Street is hardly the most auspicious site, given that Spuntino is surrounded by strip joints and massage parlours: indeed the restaurant dares not advertise its name!</p>
<p>Should you dare to enter you’ll find a small room of quasi-industrial faux-grunge Americana. The food’s a fun take on casual fare: truffled cheese-on-toast, a quartet of posh burgers, and wonderfully stringy fries in my case. The pulled-pork burger got top marks: melting meat and crisp crackling.</p>
<p>For more details visit their <a href="http://spuntino.co.uk/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.731+W+0+8.023">Google Maps.</a></p>
<p><small><em>Last visited March 2012.</em></small></p>
<h2>L’Atelier du Joël Robuchon</h2>
<p>This is the second of Robuchon’s eight ateliers I’ve had the pleasure of sampling, and the skills and artistry seem to have survived the short trip from Paris quite perfectly.</p>
<p>I tried the tasting menu this time and was treated to a wonderful procession of elegant and beautiful dishes, which tasted even better than you’d expect.</p>
<p>Warmly recommended.</p>
<p>For more details visit their <a href="http://www.joelrobuchon.co.uk/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.789+W+0+7.709">Google Maps.</a></p>
<p><small><em>Last visited March 2012.</em></small></p>
<h2>Roux at Parliament Square</h2>
<p>I’ve long been a great fan of the bar upstairs in Roux’s establishment near Parliament Square: they make the most interesting cocktails around. Sadly though, the restaurant’s always been booked when I’ve wanted to eat there.</p>
<p>This changed quite recently and it was worth the wait. Elegant French food, presented in a quiet relaxing surroundings. A blessed relief from the hustle and bustle of life.</p>
<p>The dessert was probably the best course: a wonderfully refreshing pear creation.</p>
<p>For more details visit their <a href="http://rouxatparliamentsquare.co.uk">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.075+W+0+7.793">Google Maps.</a></p>
<p><small><em>Last visited December 2012.</em></small></p>
<h2>Dean Street Townhouse</h2>
<p>A perfectly delightful dining room in Soho. As you might expect, the townhouse has rooms too, but I just had lunch. The food was good, and the ambience excellent. It’s not haute cuisine, but I think you could happily eat here regularly and never get bored.</p>
<p>For more details visit their <a href="http://www.deanstreettownhouse.com">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.808+W+0+7.946">Google Maps.</a></p>
<p><small><em>Last visited January 2013.</em></small></p>
<h2>The Gilbert Scott</h2>
<p>A most conveniently sited restaurant given its proximity to Kings Cross station, and thus trains to Cambridge. The food’s good too, though arguably a trifle overpriced. In practice I think the architecture is the real star of the show: the dining room sports an elegant curve which lends the place an open, airy feel: though that’s surely in part down to the gorgeous high ceilings.</p>
<p>So, despite the location, I find myself only wanting to dine here with friends. If I’ve got time to kill before a train, I tend to head instead for the Booking Office Bar or the Hansom Lounge of the St Pancras Renaissance Hotel, where fine cocktails and delicious bar-snacks are served in more relaxed surroundings.</p>
<p>For more details visit The Gilbert Scott’s <a href="http://www.thegilbertscott.co.uk">website,</a> the St Pancras Hotel’s <a href="http://www.marriott.co.uk/hotels/hotel-information/restaurant/lonpr-st-pancras-renaissance-london-hotel/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+31.755+W+0+7.563">Google Maps.</a></p>
<p><small><em>Last visited January 2013.</em></small></p>
<h2>Bocca di Lupo</h2>
<p>A nice Italian restaurant in Soho, boasting a kitchen-facing bar. A fairly wide variety of simple dishes are on offer, and almost all of them are available in two sizes: three ‘small’ dishes makes for an interesting lunch.</p>
<p>I visited in truffle season when patrons are encouraged to bring their own. Should you lack such foresight, the fine gelateria across the street, <a href="http://www.gelupo.com"><em>Gelupo</em></a>, will sell you one.</p>
<p>For more details visit their <a href="http://www.boccadilupo.com/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.695+W+0+8.031">Google Maps.</a></p>
<p><small><em>Last visited November 2013.</em></small></p>
<h2>Brasserie Zédel</h2>
<p>Their website calls this ‘a grand Parisian brasserie transported to the heart of London’, and I find it hard to improve on that. I’d add that it’s a stone’s throw from Piccadilly Circus (exit 1 from the Tube station), and that the prix fixe menu is a steal!</p>
<p>For more details visit their <a href="http://www.brasseriezedel.com/brasserie-zedel">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.638+W+0+8.128">Google Maps.</a></p>
<p><small><em>Last visited December 2015.</em></small></p>
<h2>Dabbous</h2>
<p>Fabulous and imaginative dishes and drinks, flawlessly executed. Highly recommended.</p>
<p>Highlights for me were the buttered kale with chestnuts, which had all the good things about a cottage pie in a novel and interesting way, and the wonderful avocado and sorrel based dessert.</p>
<p>For more details visit their <a href="http://www.dabbous.co.uk">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+31.214+W+0+8.104">Google Maps.</a></p>
<p><small><em>Last visited January 2016.</em></small></p>
<h2>Scott’s</h2>
<p>Google makes it easy to find Scott’s: just search for ‘best fish restaurant in London’. Happily, reality matches Google’s view: the ingredients seemed top-notch, and the preparation highlighted this.</p>
<p>I had some memorably fresh sashimi, followed by miso salmon. Although I’ve eaten similar things before, the balance of flavours in this salmon dish seemed to be the ideal which previous attempts were approximating.</p>
<p>Good cocktails too!</p>
<p>For more details visit their <a href="http://www.scotts-restaurant.com/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.589+W+0+9.055">Google Maps.</a></p>
<p><small><em>Last visited February 2016.</em></small></p>
<h2>Latium</h2>
<p>Whilst there’s a time when you really want food which delights you with its imagination and novelty, there are other times when nothing’s better than a comforting menu of dishes, and the confidence that they’ll be prepared perfectly. For such times, Latium is ideal.</p>
<p>My main course stands for the whole: <i>filetto di manzo</i> but so carefully sourced and lovingly cooked that I’ll savour it for years.</p>
<p>For more details visit their <a href="http://www.latiumrestaurant.com/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+31.060+W+0+8.220">Google Maps.</a></p>
<p><small><em>Last visited February 2016.</em></small></p>
<h2>Murano</h2>
<p>If Latium covers tradition, Murano is its perfect complement. Here the perfect execution is matched with imagination and flair: rabbit with a palette of contrasting flavours; baked celeriac and pear.</p>
<p>As you’d guess from the name, there’s some funky glassware to enjoy too, both on the table and the lights. Overall though, the ambience is just lovely.</p>
<p>For more details visit their <a href="http://www.muranolondon.com">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.424+W+0+8.833">Google Maps.</a></p>
<p><small><em>Last visited March 2016.</em></small></p>
<h2>The Greenhouse</h2>
<p>An astonishing oasis of tranquility in Mayfair.</p>
<p>I’m making these notes six months afterwards, and I still remember how the short walk from the street to the restaurant seemed to take you miles away from the city.</p>
<p>The food was excellent: I found it hard to fault either the ingredients or the cooking.</p>
<p>For more details visit their <a href="http://www.greenhouserestaurant.co.uk">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.464+W+0+8.946">Google Maps.</a></p>
<p><small><em>Last visited April 2016.</em></small></p>
<h2>Bar Boulud</h2>
<p>The Piccadilly Line has many virtues, but its trains lack any kind of restaurant car. This can be awkward if you get a bit peckish on the way back from Heathrow!</p>
<p>Happily, if you leave the train at Knightsbridge, you can walk straight into Bar Boulud, part of the Mandarin Oriental. Inside it’s a perfect bistro: simple food, perfectly prepared.</p>
<p>They are proud of their burgers: I think with good reason.</p>
<p>For more details visit their <a href="http://www.barboulud.com/london/">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.119+W+0+9.612">Google Maps.</a></p>
<p><small><em>Last visited summer 2016.</em></small></p>
<h2>Blanchette</h2>
<p>A fun French tapas place in Soho. A fairly low-key affair, but a great place to have a slightly funky lunch.</p>
<p>For more details visit their <a href="http://www.blanchettesoho.co.uk">website,</a> or see <a href="http://maps.google.com/maps?q=N+51+30.878+W+0+8.164">Google Maps.</a></p>
<p><small><em>Last visited summer 2016.</em></small></p>
<h2>Hélène Darroze at the Connaught</h2>
<p>Their <a href="http://www.helenedarroze.com/en/destination-london.html">website</a> describes the restaurant as ‘this London temple of French gastronomy’, and I find that hard to better. Food, ambience and staff were all faultless.</p>
<p>The menus are varied and deep, and they have a funky ordering system: each course maps to a marble, and you just place as many marbles as you want onto a cute little plate.</p>
<p>One of the very best, and most expensive, meals I’ve had anywhere. Warmly recommended, but only infrequently.</p>
<p>For more details see <a href="http://maps.google.com/maps?q=N+51+30.611+W+0+8.976">Google Maps.</a></p>
<p><small><em>Last visited early 2017.</em></small></p>
<h2>Pollen Street Social</h2>
<p>Fine, modern cooking, just south of Oxford Circus. Somewhat more down-to-earth and much better value than many Michelin restaurants, which makes a nice change.</p>
<p>They have a most amusing ice-cream surprise too!</p>
<p>For more details see their <a href="http://pollenstreetsocial.com">website,</a> or <a href="http://maps.google.com/maps?q=N+51+30.804+W+0+8.539">Google Maps.</a></p>
<p><small><em>Last visited March 2017.</em></small></p>
<h2>Hide</h2>
<p>Back in January 2016, I had the great pleasure of dining at Ollie Dabbous’ eponymous restaurant in Fitzrovia: you can read what I thought of it above. That restaurant has closed now, but happily Mr Dabbous has opened Hide on Piccadilly. It shares many of the virtues of the old: imaginative, precise and enjoyable cooking, but improves on the general ambience.</p>
<p>All of the mains and starters were excellent; the dessert was spectacular but the initial excitement faded somewhat. On the other hand, they served a very fine cocktail: whisky based, but for once I was glad to be drinking the cocktail rather than the pure spirit.</p>
<p>In short, very warmly recommended.</p>
<p>For more details see their <a href="https://hide.co.uk/home/hide">website,</a> or <a href="http://maps.google.com/maps?q=N+51+30.369+W+0+8.660">Google Maps.</a></p>
<p><small><em>Last visited July 2022.</em></small> </p>56B0F260-FB8E-11EC-B909-A19317C054282022-07-04T10:45:42:42Z2022-07-05T10:03:35:35ZLoRa and The Things NetworkMartin Oldfield<p>Initial experiments talking to The Things Network with a LILYGO T-Beam over LoRa.
</p><h2>Introduction
</h2><p>I’ve long been interested in
<a href="https://en.wikipedia.org/wiki/LoRa">LoRa</a>, a Long-Range, low-power
radio technology. The specs talk about a 10km range, but I wanted to
get a feel for how it worked in practice. I was keen to try the
simplest experiment which might work, rather than optimize for a
particular application.
</p><p>Much of my background knowledge comes from <a href="https://www.youtube.com/playlist?list=PL3XBzmAj53Rkkogh-lti58h_GkhzU1n7U">Andreas Spiess’
videos</a>
so if the choices below are sensible, he deserves the credit:
</p><ul><li><p>Having a single RF device isn’t much fun, so for my experiments I
connected to <a href="https://www.thethingsnetwork.org">The Things
Network</a>.
</p></li><li><p>For my node I used the <a href="http://www.lilygo.cn/claprod_view.aspx?TypeId=62&Id=1401&FId=t28:62:28">LILYGO
T-Beam</a>
which partners a LoRa RF chip with an ESP32, a GPS receiver, and a
natty OLED screen. There’s also support for an 18650 cell which
makes it easy to use in the field. To be specific I bought a
T-Beam T22_V1.1 20210222.
</p></li><li><p>I found a case on
<a href="https://www.thingiverse.com/thing:4753247/files">Thingiverse</a>
which kept things neat and tidy.
</p></li></ul><h2>Firmware
</h2><p>One advantage of the T-Beam is that people have been connecting it to
The Things Network for years, and <a href="https://www.thethingsnetwork.org/forum/t/ttgo-t-beam-topic/15297/355">documenting the
process</a>. That
said, there are several different versions of the T-Beam, and some of
the firmware available freely online doesn’t work with the latest
hardware.
</p><ol><li><p>The <a href="https://github.com/meshtastic/Meshtastic-device">Meshtastic firmware
</a> appeared to
work, though having only one device it was hard to be sure. It
doesn’t use The Things Network, so this firmware is only really
useful to test the toolchain and basic hardware.
</p></li><li><p>I couldn’t get <a href="https://github.com/lnlp/LMIC-node">LMIC-node</a> or
<a href="https://github.com/roelwolf/LMIC-node-gps-tracker">LMIC-node-gps-tracker</a>
to work: the hardware initialization failed, though it wasn’t clear
why. I didn’t spend long on them though, and it’s quite possible I
was making a stupid mistake.
</p></li><li><p>Version 1.2.1 of the <a href="https://github.com/kizniche/ttgo-tbeam-ttn-tracker">TTGO T-Beam Tracker
</a> worked after
fixing the duplicate definition of <code>hal_init</code>. It connects to The
Things Network and successfully logs its location. Version 1.2.2
released in July 2022 should compile without problems.
</p></li></ol><p>Furthermore, the USB-serial chip on recent versions of the board isn’t
properly supported on the Mac (neither Intel nor Apple Silicon, both
running Monterey 12.4). I didn’t explore third-party drivers, but just
copied the files to a Raspberry Pi, and ran esptool there instead.
</p><h2>Experiments
</h2><p>Calling these experiments is rather pretentious. Having found that the
device could send messages to the network, I wanted to know where it
worked. Since the device has its own GPS, this was easy: turn it on
and move it around.
</p><p>I used the <a href="https://ttnmapper.org/heatmap/">TTN Mapper</a> integration to
view the data I sent to The Things Network, so there’s quite a long
pipeline between the GPS chip and the thing I’m viewing. Not all the
messages made it to the end, but I’ve not attempted to work out where
they get lost.
</p><p>I tried three things:
</p><ol><li><p>Leaving the device stationary in my home in Cambridge, UK. This
works, but a significant fraction of packets are lost.
</p></li><li><p>Cycling from home to the centre of Cambridge with the device in my
pannier. This works reasonably well: about 75% of the packets sent
made it to the map and it’s easy to see the route I took. Most of
the gateways are closer to the centre of Cambridge than to my
house, so on average the distance to the gateways is lower on this
journey than at home.
</p></li><li><p>Driving to Somerset, stopping irregularly en route, with the device
on the front-seat of my car. This really didn’t work: for most of
the journey no packets arrived. I suspect that speed is a key
factor: those packets which are displayed were from roads where I
was driving slowly. That’s not the whole story though: I parked in
Henley for about an hour and saw but one packet.
</p></li></ol><h2>Packet statistics
</h2><p>The statistics below are purely representative. They were taken at
different times of day, and in different weather conditions. It seems
safe to conclude that high spreading factors give us more range, and
that whatever you do it is likely that some packets will be lost.
</p><p>I was surprised to find that quite a lot of packets don’t arrive, and quite
a few are received by multiple gateways.
</p><table class="cspaced_sml">
<tr>
<th rowspan="2">Location</th>
<th rowspan="2">Spreading Factors</th>
<th rowspan="2">Lost</th>
<th colspan="5">Received</th>
</tr>
<tr>
<th>Once</th>
<th>Twice</th>
<th>Thrice</th>
<th>Four times</th>
<th>Five times</th>
</tr>
<tr><th>Garden</th><td>7</td><td>46%</td><td>44%</td><td>10%</td><td>0%</td><td>0%</td><td>0%</td></tr>
<tr><th>Home</th><td> 7</td><td>43%</td><td>28%</td><td>30%</td><td> 0%</td><td> 0%</td><td> 0%</td></tr>
<tr><th>Home</th><td> 8</td><td>15%</td><td>15%</td><td>58%</td><td>12%</td><td> 0%</td><td> 0%</td></tr>
<tr><th>Home</th><td> 9</td><td> 6%</td><td>85%</td><td> 9%</td><td> 0%</td><td> 0%</td><td> 0%</td></tr>
<tr><th>Home</th><td>10</td><td>13%</td><td> 3%</td><td>53%</td><td>31%</td><td> 0%</td><td> 0%</td></tr>
<tr><th>Home</th><td>11</td><td>38%</td><td>34%</td><td>28%</td><td> 0%</td><td> 0%</td><td> 0%</td></tr>
<tr><th>Home</th><td>12</td><td>10%</td><td>26%</td><td> 4%</td><td>11%</td><td>31%</td><td>19%</td></tr>
<tr><th>Cycle Ride</th><td>7</td><td>25%</td><td>56%</td><td>17%</td><td>3%</td><td>0%</td><td>0%</td></tr>
</table>
<p>The firmware steadily ramped up the Spreading Factor when it was left
to run overnight. As expected this increased the range, so much so
that nearly a fifth of the packets were received by five gateways. A
tenth of packets were lost, but perhaps I exceeded the maximum allowed
usage here.
</p><h2>Conclusions
</h2><p>In broad terms I think this was a success: I have a device which logs
its position over The Things Network. Happily, it was neither
particularly expensive nor difficult to set up.
</p><p>Furthermore, it really does seem to be long-range. Assuming that the
TTN gateway locations are accurate, the device is routinely sending
data over 8km.
</p><p>However there are caveats:
</p><ul><li><p>Even when the device is stationary, a surprisingly high fraction
of packets don’t arrive. This is mitigated—but not solved—by
using a larger spreading factor. Sadly, the device seems
oblivious that the packets have vanished into the aether.
</p></li><li><p>When the device is moving quickly (~70 mph) very few observations
get through.
</p></li><li><p>I can’t quantify this, but I got the feeling that things happened
a bit more slowly than I was expecting. In practical terms I
found that if something didn’t appear to be working, just leaving
it alone for ten minutes was sometimes enough to get the desired
result.
</p></li></ul><p>I don’t think any of these problems are insurmountable, and it’s quite
possible that they could be fixed with different configuration or
software tweaks. Overall, it seems that LoRa and The Things Network
might well be useful, but some care is needed when using it. Notably,
in this setup, packets were lost without the node being aware of it.
</p>CA93379C-EDDD-11DD-8EFE-F3FE530FD1A72009-01-29T08:20:30:30Z2022-05-25T21:35:42:42ZPlaces to eat in ParisMartin Oldfield<p>Some brief notes on places to eat in Paris. </p><h2>Websites</h2>
<p>Recently I’ve tried a few places from <a href="http://parisbymouth.com/our-guide-to-paris-restaurants/">Paris by Mouth</a> and they’ve been good.</p>
<h2>In the 1st</h2>
<h3>Les Fines Gueules, 43 rue Croix des Petits Champs, 75001.</h3>
<p>Although one could just come here for a drink, I think it’s firmly on the food side of the restaurant/wine-bar divide. Happily that’s because the food is good rather than the wine substandard!</p>
<p>One minor nit: my <i>saignant</i> steak was decidedly <i>à point</i> if not <i>bien cuit!</i></p>
<p>August 2014 update: A fine dinner: nice wine, and a memorably good tuna tartare.</p>
<p>May 2017 update: Still well up to scratch!</p>
<p>For more details visit <a href="http://www.lesfinesgueules.fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.908+E+2+20.440">Google Maps.</a></p>
<p><small><em>Last visited May 2017.</em></small></p>
<h3>Spring, 6 rue Bailleul, 75001.</h3>
<p>Not a bad place, tucked away in a little side-street a stone’s throw from le Louvre.</p>
<p>I sat at the downstairs bar, which obviously can’t match the ambience upstairs, but it’s still nice enough. They have the zero-choice menu gimmick, which works well enough, but sadly have used this excuse to serve a gazillion small dishes: I think that in almost all cases things would be improved were they to reduce the number, increase the size, and polish things a little further.</p>
<p>Oddly the main meat course, some fine veal, was rather large and got rather cold by the time I’d finished it: less meat or a hotter plate please.</p>
<p>Such nits aside, stuff is cooked with flair and precision, and the quality of the ingredients struck me as good to great.</p>
<p>For more details visit <a href="http://www.springparis.fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.655+E+2+20.532">Google Maps.</a></p>
<h3>Pirouette, 5 rue de Mondétour, 75001.</h3>
<p>A laid back restaurant whose quasi-casual appearance belies seriously good food. Excellent, affordable wine too.</p>
<p>April 2022 update: Still lovely!</p>
<p>For more details, visit <a href="https://www.restaurantpirouette.com">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.779+E+2+20.877">Google Maps.</a></p>
<p><small><em>Last visited April 2022.</em></small></p>
<h2>In the 4th</h2>
<h3>Café Beaubourg, 43 rue Saint-Merri, 75004.</h3>
<p>I’ve been coming here for decades, and come to value the good food, the fine atmosphere, and its unchanging nature. I suspect that I pay a visit on most of my trips to Paris.</p>
<p>For more details visit <a href="https://cafebeaubourg.com/en/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.601+E+2+21.061">Google Maps.</a></p>
<p><small><em>Last visited April 2022.</em></small></p>
<h2>In the 5th</h2>
<h3>Le Reminet, 3 rue des Grands Degrés, 75005.</h3>
<p>A delightful little bistrot, which serves marvellous food.</p>
<p>March 2015 update: I am delighted to say that things here are at least as good as they were in the past.</p>
<p>For more details visit <a href="http://www.lereminet.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.076+E+2+21.005">Google Maps.</a></p>
<p><small><em>Last visited March 2015.</em></small></p>
<h3>L'Agrume, 15, rue des Fossés St-Marcel, 75005.</h3>
<p>A small, simple restaurant serving excellent food. I had a memorably good pigeon breast with mushrooms.</p>
<p>For more details visit <a href="http://restaurant-lagrume.fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+50.328+E+2+21.370">Google Maps.</a></p>
<p><small><em>Last visited May 2017.</em></small></p>
<h2>In the 6th</h2>
<h3>L’Atelier de Joël Robuchon, 5, rue de Montalembert, 75006.</h3>
<p>I’ve been meaning to try this place for ages, and I finally found myself in roughly the right place at roughly the right time—6:30pm.</p>
<p>As so often in fine restaurants, the starter was the best bit: exquisitely cooked scallops with truffles. One of the best dishes I’ve eaten anywhere.</p>
<p>If I had to find fault it would be with the dessert: an elegant chocolate sphere which was both food and magic-trick. Although perfectly constructed, the waiter stole the prestige—it would have been much more fun had I discovered the surprise for myself.</p>
<p>Warmly recommended though!</p>
<p>For more details visit <a href="http://www.joel-robuchon.net/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.403+E+2+19.655">Google Maps.</a></p>
<p><small><em>Last visited November 2010.</em></small></p>
<h3>Les Bouquinistes, 53 quai des Grands Augustins, 75006.</h3>
<p><em>As of March 2015, I no longer recommend this place.</em></p>
<p>I’ve been lucky to dine here many times over the years and the food’s always between good and great.</p>
<p>The highlight on my last visit was the dessert: a riff on apples cooked in different ways which, I was told, won an award a few years ago.</p>
<p>They also serve a crazy French whisky which although obviously related to a fine Scottish malt is altogether more ethereal. It’s nice about once a year, which by happy coincidence is about the interval between my visits to Paris.</p>
<p>October 2011 update: sadly I think the desserts were a bit pedestrian this time, and the current ‘Pomme’ is a pale imitation of last year’s apple-based delight. On the plus side I had a stunningly good tuna starter, and a wonderfully gamey hare.</p>
<p>March 2015 update: Oh dear! Whilst the food here is still quite reasonable, the service was dreadful. Rather than enjoying the ambience and atmosphere, I felt so rushed that by the end of the meal I was as glad to leave as they apparently were to get rid of me. Walking back to the apartment, I felt deeply saddened by the whole affair. Good lamb though!</p>
<p>March 2016 update: Although I wouldn’t think of going in, I did walk past and saw the place almost empty.</p>
<p>For more details visit <a href="http://www.lesbouquinistes.com/">their website,</a> or <a href="http://maps.google.com/maps?q=N+48+51.314+E+2+20.487">Google Maps.</a></p>
<p><small><em>Last visited March 2015.</em></small></p>
<h3>Moustache, 3 rue Ste Beuve, 75006.</h3>
<p>A fine restaurant, serving high-quality French fare with a subtle Asian twist. A fine ambience too: subtle lighting and simple furnishings.</p>
<p>Both starter and main course were excellent, whilst the dessert—a splendid Valrhona mousse—was positively sinful!</p>
<p>July 2017 update: fewer Asian influences, but still lovely.</p>
<p>For more details visit <a href="http://www.moustache-restaurant.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+50.651+E+2+19.787">Google Maps.</a></p>
<p><small><em>Last visited July 2017.</em></small></p>
<h3>Pouic Pouic, 9 rue Lobineau, 75006.</h3>
<p>Update: Sadly their website now announces <i>«Pouic Pouic a fermé ses portes.»</i></p>
<p>Just lovely! Fine french food in a classic setting.</p>
<p>I had the most lovely starter here: foie gras and a 63℃ egg on a bed of lentils, with tiny croutons for extra texture. The rest of the meal was perfect too: go and see for yourself.</p>
<p>For more details visit <a href="http://www.pouicpouicstgermain.fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.101+E+2+20.147">Google Maps.</a></p>
<p><small><em>Last visited March 2015.</em></small></p>
<h3>Ze Kitchen Galerie, 4, Rue des Grands Augustins, 75006.</h3>
<p>A fine modern restaurant next door to Les Bouquinistes, which I used to love but now can’t recommend.</p>
<p>It’s always nice to find a new restaurant which breaks free from convention, and relies on the chef to combine fine ingredients in unusual ways. By and large, Ze Kitchen Galerie succeeds. Their dishes don’t have that sense of being obviously right (if only after you’ve tried them), but there’s clearly thought and talent of a high order at work.</p>
<p>March 2016 update: Absolutely great cooking: imaginative, well executed, and beautiful. Just amazing!</p>
<p>For more details see <a href="http://www.zekitchengalerie.fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.304+E+2+20.479">Google Maps.</a></p>
<p><small><em>Last visited March 2016.</em></small></p>
<h3>Le Relais de l'Entrecôte, 20 rue Saint-Benoît, 75006.</h3>
<p>The place scores highly in numerous ‘best <i>steak frites</i>’ lists, but I’ve not tried it before. They offer a pleasingly simple menu: you need only pick your <i>cuisson</i> and wine. I suspect 5 bits suffice even without compression.</p>
<p>Happily the quality is great, though not without gimmicks. One of these seems sensible though: you effectively get two identical courses, so both steak and frites stay hot.</p>
<p>For more details see <a href="http://relaisennr.cluster011.ovh.net/?page_id=396%2F">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.270+E+2+19.968">Google Maps.</a></p>
<p><small><em>Last visited March 2016.</em></small></p>
<h3>Le Christine, 1 rue Christine, 75006.</h3>
<p>Elegant food in an elegant setting. Warmly recommended.</p>
<p>For more details see <a href="https://lechristine.becsparisiens.fr">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.262+E+2+20.415">Google Maps.</a></p>
<p><small><em>Last visited April 2022.</em></small></p>
<h3>Blueberry Maki Bar, 6 Rue du Sabot, 75006.</h3>
<p>I am not a sushi expert but I thought this was absolutely fabulous. Really nice fish, imaginatively presented.</p>
<p>For more details see <a href="https://www.blueberrymakibar.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.167+E+2+19.887">Google Maps.</a></p>
<h2>In the 8th</h2>
<h3>L’Envue, 39 rue Boissy d’Anglas, 75008.</h3>
<p>A slightly crazy, bohemian place near Place de la Madeleine, warmly recommended as a fun place to eat. The food’s usually good, and always presented with flair and sparkle.</p>
<p>For more details visit <a href="http://www.lenvue.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+52.193+E+2+19.345">Google Maps.</a></p>
<p><small><em>Last visited December 2008.</em></small></p>
<h3>Le Mini Palais, Grand Palais, Avenue Winston Churchill, 75008.</h3>
<p>A fine place for dinner, particularly in summer, hidden on a terrace in the Grand Palais.</p>
<p>The terrace affords a fine view of both people and architecture. The food and drink don’t disappoint either.</p>
<p>For more details, visit <a href="http://www.minipalais.com">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+51.906+E+2+18.786">Google Maps.</a></p>
<p><small><em>Last visited August 2014.</em></small></p>
<h2>In the 10th</h2>
<h3>Terminus Nord, 23 rue de Dunkerque, 75010.</h3>
<p>Restaurants outside railway stations are obviously convenient, but that always makes me worry about the quality. No such issues here though: <em>Terminus Nord</em> is an ideal place to wait for Eurostar, and the food is good.</p>
<p>For more details visit <a href="http://terminusnord.com">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+52.781+E+2+21.306">Google Maps.</a></p>
<h2>In the 14th</h2>
<h3>Crêperie de Josselin, 67 rue de Montparnasse, 75014.</h3>
<p>In a street full of crêperies this one boasted the finest array of Zagat awards and the like.</p>
<p>Inside it’s small, densely packed, and serves excellent galettes and crêpes! What more could you ask of it?</p>
<p>For more details phone them on +33 1 43 20 93 50, or see <a href="http://maps.google.com/maps?q=N+48+50.518+E+2+19.527">Google Maps.</a></p>
<p><small><em>Last visited October 2012.</em></small></p>
<h3>La Coupole, 102 Boulevard du Montparnasse, 75014.</h3>
<p>Such famous places always seem likely to fade into a mere tourist-trap, and in some sense one feels that <em>La Coupole</em> did that many years back. However the food's still good and the ambience quite unique.</p>
<p>I had a nice breast of duck, served with what amounted to a toasted fruit sandwich: poached apples and peaches served between two pieces of white bread.</p>
<p>For more details visit <a href="http://www.lacoupole-paris.com/en/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+50.541+E+2+19.681">Google Maps.</a></p>
<p><small><em>Last visited March 2015.</em></small></p>
<h3>Swann et Vincent, 22 place Denfert-Rochereau, 75014.</h3>
<p>A good place for a quick lunch, obviously popular with people who work nearby. I had some splendid pork in a cream sauce, and left full and happy.</p>
<p>For more details visit <a href="http://swann-vincent.fr/fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+50.036+E+2+19.855">Google Maps.</a></p>
<p><small><em>Last visited March 2015.</em></small></p>
<h2>In the 15th</h2>
<h3>L’atome Café, 29 Boulevard de Grenelle, 75015.</h3>
<p>There are times in life when all one really wants is a decent confit de canard, and I warmly recommend this place for such times.</p>
<p>For more details see <a href="http://maps.google.com/maps?q=N+48+51.164+E+2+17.446">Google Maps.</a></p>
<p><small><em>Last visited November 2009.</em></small></p>
<h2>In the 17th</h2>
<h3>Les Fougères, 10 rue Villebois-Mareuil, 75017.</h3>
<p>Simply exquisite. Refined and elegant cooking in refined and elegant surroundings.</p>
<p>Just one caveat: I was told that the restaurant is about to close because the chef, Stéphane Duchiron, is opening a larger place.</p>
<p>For more details visit <a href="http://www.restaurant-les-fougeres.fr/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+52.788+E+2+17.650">Google Maps.</a></p>
<p><small><em>Last visited October 2012.</em></small></p>
<h3>Mamma Primi, 71, Rue des Dames, 75017.</h3>
<p>A fine, vibrant, Italian trattoria. The menu is full of the usual fare: interesting pasta, good pizza, random antipasti. However the execution is at least good and sometimes great: I can still smell the truffles from a simple <em>pâtes à la truffe</em>.</p>
<p>They have a no-reservation policy: instead people queue hopefully and hungrily along the street. Doors open at 7pm, but from a small sample I guess you need to be there soon after 6:30. No idea what happens if it’s raining though!</p>
<p>For more details visit <a href="https://www.bigmammagroup.com/fr/trattorias/mamma-primi">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+53.012+E+2+19.241">Google Maps.</a></p>
<p><small><em>Last visited July 2017.</em></small></p>
<h2>In the 18th</h2>
<h3>Au Virage Lepic, 61 Rue Lepic, 75018.</h3>
<p>A fine bistro in Montmartre. There’s little more to be said: take an appetite and enthusiasm for good food, and you’ll come away delightfully sated.</p>
<p>For more details call them on 01 42 52 46 79, or see <a href="http://maps.google.com/maps?q=N+48+53.262+E+2+20.072">Google Maps.</a></p>
<p><small><em>Last visited December 2008.</em></small></p>
<h3>Le Moulin de la Galette, 83 Rue Lepic, 75018.</h3>
<p>I think any restaurant in Montmartre runs the risk of turning into a tourist trap, and this seems all the more likely if it’s blessed with its own <a href="https://en.wikipedia.org/wiki/Moulin_de_la_Galette">windmill,</a> and portraits by <a href="https://en.wikipedia.org/wiki/Le_Moulin_de_la_Galette_(Van_Gogh_series)">Van Gogh,</a> and <a href="https://en.wikipedia.org/wiki/Bal_du_moulin_de_la_Galette">Renoir.</a></p>
<p>However, it looks great from the outside, and the menu seems understated enough to be written for mouths rather than cameras. Happily these impressions are right: I had lunch here on a beautiful summer’s day, enjoying green gazpacho then a perfect steak on a quiet, calm, patio. Good desserts too!</p>
<p>Objectively, I suspect it’s a little on the pricey side, but subjectively I was happy to pay for good food in such nice surroundings.</p>
<p>For more details visit <a href="http://www.lemoulindelagalette.fr/en/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+48+53.237+E+2+20.242">Google Maps.</a></p>
<p><small><em>Last visited July 2017.</em></small> </p>6A483B62-D843-11EC-A493-554596076A462022-05-20T13:46:04:04Z2022-05-20T13:46:04:04ZA breadboard-based IR receiverMartin Oldfield<p>Notes on an infra-red receiver for the breadboard.
</p><p>These are brief notes on an IR receiver designed for use in my <a href="https://coord.info/GC86BQY">To
the Birdhouse geocache</a>.
</p><p>It is essentially a clever Vishay detector chip which receives a coded
signal embedded on a 40kHz carrier, then amplifies it. All of the
audio information is encoded on the transmitted signal.
</p><p><a href="det-bb.jpg">
<img alt="[The Receiver]" class="img_noborder" src="det-bb.jpg">
</a>
</p><h2><em>Desiderata</em>
</h2><p>As discussed <a href="ir-fun.html">above</a> the detector will be built by people
who might have very little experience of building
electronics. Accordingly the components in the design all look
different to reduce the chances of someone e.g. confusing resistors of
different value. Two different resistors are used in this circuit, but
different power-ratings make them physically quite different.
</p><h2>Schematic
</h2><p><a href="rx-bb-schem.svg">
<img alt="[Schematic]" class="img_noborder" src="rx-bb-schem.svg">
</a>
</p><p>There is little to say about the circuit.
</p><h3>Sensor
</h3><p>The sensor has significant output impedance, and R2 was set to 110kΩ
to feed about 1V into the op-amp.
</p><h3>Op-amp
</h3><p>The op-amp isn’t critical: indeed although the schematic calls for an
OP-07, I managed to find a cheap source of NE5534s and used them
instead. Either way, it is configured as a unity-gain buffer.
</p><h2>Assembly manual
</h2><p>The manual is produced from some hacky Haskell code, which isn’t
really fit for public consumption. Do contact me if you’re interested
though.
</p><p><a href="rx-bb-inst.pdf">
<img alt="[Schematic]" class="img_border" src="rx-bb-inst.pdf">
</a>
</p><p>The reference to ‘The Bloomsbury Group’ is related to the geocache
puzzle: please ignore it.
</p><h2>Design Files
</h2><p>All the design files can be downloaded from
<a href="https://github.com/mjoldfield/ir-morse-toys/tree/main/rx-bb">GitHub</a>. The electronics
was designed with KiCad. Random bits of Haskell and python were used too.
</p>A96B4E80-D7AB-11EC-AABF-B22B96076A462022-05-19T19:38:37:37Z2022-05-19T19:38:37:37ZA PCB-based IR receiverMartin Oldfield<p>Notes on a PCB-based infra-red receiver.
</p><p>These are brief notes on an IR receiver designed for use in my <a href="https://coord.info/GC86BQY">To
the Birdhouse geocache</a>.
</p><p>It is essentially a clever Vishay detector chip which receives a
coded signal embedded on a 40kHz carrier, then amplifies and low-pass
filters the output. All of the audio information is encoded on the
transmitted signal.
</p><p><a href="det-pcb.jpg">
<img alt="[The Receiver]" class="img_noborder" src="det-pcb.jpg">
</a>
</p><h2><em>Desiderata</em>
</h2><p>As discussed <a href="ir-fun.html">above</a> the detector will be built by
people who might have very little experience of building
electronics. Accordingly the components in the design all look
different to reduce the chances of someone e.g. confusing resistors of
different value.
</p><h2>Schematic
</h2><p><a href="rx-pcb-schem.svg">
<img alt="[Schematic]" class="img_noborder" src="rx-pcb-schem.svg">
</a>
</p><p>There is little to say about the circuit.
</p><h3>Sensor
</h3><p>The sensor has significant output impedance, and R1 was set to 100kΩ
to feed about 1V into the op-amp.
</p><h3>Op-amp
</h3><p>The op-amp isn’t critical: indeed although the schematic calls for an
OP-07, I managed to find a cheap source of NE5534s and used them
instead. Either way, it is configured as a unity-gain
<a href="https://en.wikipedia.org/wiki/Sallen–Key_topology">Sallen-Key</a>
low-pass filter with a cut-off at about 1.45kHz and a Q of 0.5.
</p><p>The cut-off frequency probably isn’t ideal, but I wanted to use only
one resistor value, so the only freedom was in the choice of
capacitor. Happily not all capacitors look the same, so it was easy to
choose a new dielectric and thus a new value.
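</p><p>As a sanity check, the quoted numbers hang together: a unity-gain
Sallen-Key stage built with equal resistors and equal capacitors has
Q = 0.5 and a cut-off of 1/(2πRC). The sketch below assumes C = 1.1nF,
which is my guess at a value that reproduces the quoted 1.45kHz; the
actual capacitor value is in the design files.
</p>

```python
import math

# Unity-gain Sallen-Key low-pass with equal Rs and equal Cs: Q = 0.5 and
# f_c = 1/(2*pi*R*C).  R is from the text; C = 1.1nF is an assumed value
# chosen to reproduce the quoted cut-off.
R = 100e3   # ohms
C = 1.1e-9  # farads

f_c = 1 / (2 * math.pi * R * C)
print(round(f_c))  # 1447, i.e. about 1.45kHz
```

<p>Q = 0.5 corresponds to a double real pole, so the stage behaves like
two buffered RC sections, which is why a single resistor value suffices.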
</p><h2>PCB
</h2><p>Most of the components are mounted on the top of the board. By
mounting the battery below it can be used as a convenient stand or
handle. The sensor is on the underside to shield it from the sun: I
have no idea if this is necessary or not.
</p><h2>Assembly manual
</h2><p>I used Jaroslav Malec’s <a href="https://github.com/yaqwsx/PcbDraw">pcbdraw</a>
plugin for KiCad to generate an instruction manual. I wrote some hacky
code around pcbdraw, which isn’t really fit for public consumption. Do
contact me if you’re interested though.
</p><p><a href="rx-pcb-inst.pdf">
<img alt="[Schematic]" class="img_border" src="rx-pcb-inst.pdf">
</a>
</p><p>The reference to ‘The Bloomsbury Group’ is related to the geocache
puzzle: please ignore it.
</p><h2>Design Files
</h2><p>All the design files can be downloaded from
<a href="https://github.com/mjoldfield/ir-morse-toys/tree/main/rx-pcb">GitHub</a>. The electronics
was designed in KiCad. Random bits of Haskell and python were used too.
</p>7B48621A-8D26-11EC-863F-3710A882190F2022-02-13T23:39:45:45Z2022-03-10T21:21:11:11ZAn IR transmitter for the BirdhouseMartin Oldfield<p>Notes on an infra-red transmitter suitable
for outdoor deployment.
</p><p>These are brief notes on an IR transmitter designed for use in my <a href="https://coord.info/GC86BQY">To
the Birdhouse geocache</a>. The transmitter
is deployed inside a birdhouse hidden in woods near Cambridge, UK.
</p><table>
<tr>
<td width="50%" style="text-align: center">
<a href="em-bb-front.jpg">
<img src="em-bb-front.jpg" alt="[Front View]" class="img_noborder" />
</a>
</td>
<td width="50%" style="text-align: center">
<a href="em-bb-back.jpg">
<img src="em-bb-back.jpg" alt="[Back View]" class="img_noborder" />
</a>
</td>
</tr>
</table>
<h2><em>Desiderata</em>
</h2><ul><li><p>The emitter has to flash an infra-red LED at 40kHz modulated with
both a 800Hz tone, and the dit-dah patterns of Morse Code.
</p></li><li><p>It needs to run reliably for about a year on a battery.
</p></li><li><p>Besides beaming the pre-programmed message, the device should also
send the battery level.
</p></li><li><p>I want to use <a href="https://jlcpcb.com">JLCPCB's</a> assembly service,
and at the time of ordering that implied using components from
<a href="https://lcsc.com">LCSC</a>.
</p></li><li><p>Given that the emitter will be based around an STM32, it would be
fun to explore using some of the fancy timer peripherals.
</p></li></ul><p>There’s also one non-issue: only a few units will be needed, so the
unit cost isn’t that important.
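</p><p>The first requirement stacks three on/off layers: the Morse keying
gates the 800Hz tone, which in turn gates the 40kHz carrier. A minimal
sketch of the idea, assuming square modulation at every layer (the real
firmware drives this from timers):
</p>

```python
# Instantaneous LED state at time t (in seconds): the LED is lit only when
# the Morse key, the 800Hz tone, and the 40kHz carrier are all in the 'on'
# half of their cycles.  Square waves at every layer are assumed.
def led_on(t, morse_key):
    carrier = (t * 40_000) % 1.0 < 0.5
    tone = (t * 800) % 1.0 < 0.5
    return morse_key and tone and carrier
```

<p>Multiplying the duty cycles (50% for the carrier, 50% for the tone,
and the Morse on-fraction) gives the roughly 10% overall on-time used in
the battery-life estimates.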
</p><h2>Hardware
</h2><p>As you can see from the schematic below, the hardware is essentially
trivial. The key components are a microcontroller, some infra-red LEDs
switched by a MOSFET, and a voltage regulator. There are also a
scattering of connectors and configuration links. The programming
header matches the 0.1" pitch <a href="https://1bitsquared.de/products/jtag-swd-100mil-pitch-breakout">breakout
board</a>
for the Black Magic Probe programmer/debugger.
</p><p>The microcontroller, an
<a href="https://www.st.com/en/microcontrollers-microprocessors/stm32l4x1.html">STM32L431</a>,
was chosen on the basis that it was the only STM32L4 part stocked by
LCSC. It is possible that an F series part would have worked just as
well: I just assumed that the low-power L series would be a better fit
for the limited power available.
</p><p>The LEDs and associated current limiting resistor are about the
simplest thing which might work tolerably well. The resistor does
waste power, but stops anything disastrous happening if the LED
gets stuck on e.g. because of a software bug.
</p><p>The voltage across the bottom two cells of the battery is tapped as
an easy way to measure the battery voltage.
</p><p>If making more of these, it would be prudent to add reverse voltage
protection, and clamp the battery sense voltage. My bad!
</p><p>All the design files can be downloaded from <a href="https://github.com/mjoldfield/ir-tx-rx/src/main/tx-stm32l431/pcb/">GitHub</a>.
</p><h3>Schematic
</h3><p><a href="em-bb-sch.svg">
<img alt="[Back View]" class="img_noborder" src="em-bb-sch.svg">
</a>
</p><h3>PCB
</h3><table width="100%">
<tr>
<td width="50%" style="text-align: center">
<a href="em-bb-3d-f.png">
<img src="em-bb-3d-f.png" alt="[Front View]" class="img_noborder" />
</a>
</td>
<td width="50%" style="text-align: center">
<a href="em-bb-3d-b.png">
<img src="em-bb-3d-b.png" alt="[Back View]" class="img_noborder" />
</a>
</td>
</tr>
</table>
<table width="100%">
<tr>
<td width="50%" style="text-align: center">
<a href="em-bb-pcb-f.svg">
<img src="em-bb-pcb-f.svg" alt="[Front View]" class="img_noborder" />
</a>
</td>
<td width="50%" style="text-align: center">
<a href="em-bb-pcb-b.svg">
<img src="em-bb-pcb-b.svg" alt="[Back View]" class="img_noborder" />
</a>
</td>
</tr>
</table>
<p>The curious board shape and mounting holes are compatible with the
<a href="https://www.hammfg.com/electronics/small-case/plastic/1554?referer=1244">Hammond
1554B2GYCL</a>
waterproof enclosure, though that doesn't leave any space for the
battery.
</p><h3>LED choice
</h3><p>IR LEDs drop about 1.2–1.3V which is larger than the voltage supplied
by a half-empty alkaline cell. So it makes more sense to have, say,
four cells driving 3 LEDs. On the other hand, four cells deliver about
6.5V when fully charged which is about 2.2V across each LED: enough to
destroy them. Accordingly a 15Ω resistor limits the current to about
150mA.
</p><p>Energy is lost in the resistor: between a third (at 6.5V) and a tenth
(at 4.0V) of the power.
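</p><p>These fractions follow from the empirical current model quoted
below: the fraction of supply power burnt in the resistor is
I²R/(VI) = IR/V.
</p>

```python
# Fraction of supply power dissipated in the 15 ohm current-limiting
# resistor, using the empirical model from the text: I(mA) = 55.86*(V - 3.59).
def resistor_loss_fraction(v, r=15.0):
    i = 55.86e-3 * (v - 3.59)   # LED string current in amps
    return i * r / v            # P_R / P_total = (I*I*R) / (V*I)

print(round(resistor_loss_fraction(6.5), 2))  # 0.38: about a third at 6.5V
print(round(resistor_loss_fraction(4.0), 2))  # 0.09: about a tenth at 4.0V
```

<p>This is only as good as the model, of course, but it matches the
figures above nicely.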
</p><p>Empirically the current (in mA) through the LEDs in this setup is well
modelled by
</p><pre><code>I = 55.86 * (V - 3.59)
</code></pre><h3>Cell choice
</h3><p>The desire for simplicity led me to the humble alkaline D cell, which
a capacity variously claimed to be in the range of 10–20Ah.
</p><p>Wikipedia claim 12Ah as a lower bound so let’t take that. Although we
might take 50mA as the average LED current when on, the signal is
modulated by the 40kHz carrier, the 800Hz tone, and the Morse
signal. Thus the LED is only on for about 10% of the time, which leads
to an average LED current of 5mA. This implies that the battery
might last about 2,400 hours or 100 days, which is about half an order
of magnitude too small.
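</p><p>The arithmetic behind that estimate is easy to sketch:
</p>

```python
# Back-of-envelope battery life: 12Ah of capacity, 50mA of average LED
# current while keyed, and an overall on-fraction of about 10%.
capacity_mah = 12_000
i_on_ma = 50
duty = 0.10

i_avg_ma = i_on_ma * duty          # 5mA average drain
hours = capacity_mah / i_avg_ma
days = hours / 24

print(hours, days)  # 2400.0 100.0
```

<p>A year is 365/100 ≈ 3.7 times longer than this, which is the half
order of magnitude mentioned above.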
</p><p>Modelling the battery voltage didn’t seem easy, so I did some
<a href="https://github.com/mjoldfield/ir-tx-rx/src/main/tx-stm32l431/experiments/">experiments</a>
instead. In practice the D cells lasted about 120 days, a bit better
than expected.
</p><p>Given that the D cells last for months, doing experiments with them is
rather time consuming. Using AAA cells instead makes things run
roughly a dozen times faster: about ten days to discharge the
batteries.
</p><p>Finally, the four D cells fit neatly into a Hammond
<a href="https://www.hammfg.com/electronics/small-case/plastic/1554?referer=1244">1554EE</a>
box.
</p><p><img alt="[Battery Box]" class="img_border" src="batbox.jpg">
</p><h2>Firmware
</h2><p>The basic task of the firmware is to flash a message on the
LEDs. Multiple messages can be sent in sequence, and different
sequences can be selected by means of the configuration jumpers. One
message includes the half-battery voltage.
</p><p>The firmware makes significant use of the fancy timers on the
STM32L431, so you should probably have a copy of the <a href="https://www.st.com/resource/en/reference_manual/rm0394-stm32l41xxx42xxx43xxx44xxx45xxx46xxx-advanced-armbased-32bit-mcus-stmicroelectronics.pdf">reference
manual</a>
to hand to follow the register setup. That aside, the code runs in an
interrupt handler, invoked whenever a new dit or dah has to be sent.
</p><h3>Battery monitoring
</h3><p>The voltage of half the battery is measured by an ADC every few
minutes. So as not to interfere with the transmission, the ADC
operations are spread over multiple interrupt calls. To avoid
misreads, a checksum is appended to the voltage: the sum of all digits
(mod 10) should be 0.
</p><p>So 25328 is a valid reading because 2 + 5 + 3 + 2 + 8 = 20, and 20
mod 10 = 0. It corresponds to a half-battery voltage of 2.532V.
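</p><p>A sketch of the check on the receiving side follows. Splitting the
reading into a four-digit millivolt value plus one checksum digit is my
reading of the example; the firmware may format things differently.
</p>

```python
# Validate a transmitted battery reading: the sum of all its digits must
# be 0 (mod 10).  The first four digits are taken to be the half-battery
# voltage in millivolts, with the checksum digit appended.
def valid_reading(r):
    return sum(int(d) for d in str(r)) % 10 == 0

def decode_voltage(r):
    return int(str(r)[:4]) / 1000 if valid_reading(r) else None

print(valid_reading(25328))   # True: 2+5+3+2+8 = 20
print(decode_voltage(25328))  # 2.532
```

<p>Any single corrupted digit breaks the checksum; swapped or
multiply-corrupted digits can slip through, but it is cheap insurance.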
</p><h3>LED control
</h3><p>The current through the LEDs drops significantly as the battery runs
down, and with it the brightness of the LEDs. To maintain the emitted
power, the code increases the mark/space ratio of the signal as the
battery voltage falls. This is done very crudely: I measured the
current through the LEDs for a range of measured voltages, so I can
work out the minimum time needed to send at least a given amount of
charge to the LEDs in each pulse. There comes a point though when the
battery is too weak to deliver the charge even at a duty-cycle of 50%.
</p><p>The timing is crude: the precision is determined by the ratio of the
PWM clock to the carrier frequency: here that’s 800kHz to 40kHz so we
have an apparent factor of 20. However, it only makes sense to use
duty-cycles up to 50%, so we have ten possible time intervals ranging
from 5% to 50%: one order of magnitude.
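</p><p>Putting the current model and this granularity together, the
compensation amounts to picking the smallest of the ten duty-cycle steps
which delivers the target charge in one 25µs carrier period. The step
search below is my reconstruction, not the firmware itself:
</p>

```python
# Smallest duty-cycle (5%..50% in 5% steps) delivering at least q_target
# coulombs per 40kHz carrier period at battery voltage v, using the
# empirical model from the text: I(mA) = 55.86*(V - 3.59).
T = 1 / 40_000                    # carrier period: 25 microseconds

def duty_for_charge(v, q_target):
    i = 55.86e-3 * (v - 3.59)     # LED current in amps
    for step in range(1, 11):     # 0.05, 0.10, ..., 0.50
        d = step / 20
        if i * d * T >= q_target:
            return d
    return None                   # too weak even at 50% duty

print(duty_for_charge(6.5, 1e-6))  # 0.25 on a fresh battery
print(duty_for_charge(4.0, 1e-6))  # None: 1µC per pulse is out of reach
```

<p>Returning None corresponds to the point where the battery can no
longer deliver the charge even at a 50% duty-cycle.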
</p><p>There are eight different target charge levels covering the range 1nC
to 3µC per pulse. The choice is set by means of configuration jumpers
which are read at boot time.
</p><p>By reducing the charge this way, it is easy to increase the battery
lifetime by a factor of four without significantly affecting the range
at which the signal could be detected. This leads to a battery life of
480 days.
</p><h3>Design Files
</h3><p>The firmware and PCB design can be downloaded from
<a href="https://github.com/mjoldfield/ir-morse-toys/tree/main/tx-stm32l431">GitHub</a>. The
electronics was designed with KiCad, the firmware is written in C and uses
<a href="https://libopencm3.org">libopencm3</a>. Random bits of Haskell and
python were used too.
</p><h2>Lessons learned
</h2><p>Perhaps the most important lesson is that it worked: four D cells are
enough to power the device for a year. There is little point in
extending the life further: it is good practice to check the device
every six months or so anyway.
</p><p>It was very helpful to make all the frequencies divisors of the CPU
clock frequency. With an 800kHz clock, 40kHz is easy to make, but
36kHz would have been trickier.
</p><p>Although it was fun to play with the fancy timers, I'm not sure they
added much practical benefit here.
</p><p>If one did want to get better efficiency, I think the key is to drive
the LEDs in a more sophisticated way. It would be nice to have a way
to efficiently dump a given amount of charge into the LED regardless
of battery voltage. If you could do that, then the CPU could run at
40kHz rather than 800kHz. Further, if you could move the 40kHz and
800Hz generators out of the CPU, you could reduce the CPU frequency to
a few kHz, assuming it would run that slowly.
</p>02CEFC8E-94F8-11EC-9643-DAFCE6F509542022-02-23T22:28:28:28Z2022-02-23T22:28:28:28ZAn IR transmitter for testingMartin Oldfield<p>Notes on an infra-red transmitter based around the
inexpensive Blue-Pill STM32F103 development boards.
</p><p>These are brief notes on an IR test transmitter designed for use in my
<a href="https://coord.info/GC86BQY">To the Birdhouse geocache</a>.
</p><p><a href="em-bp.jpg">
<img alt="[The Emitter]" class="img_noborder" src="em-bp.jpg">
</a>
</p><h2><em>Desiderata</em>
</h2><ul><li><p>This emitter has to flash an infra-red LED at 40kHz modulated with
both a 800Hz tone, and the dit-dah patterns of Morse Code.
</p></li><li><p>We need a push button to select different Morse messages.
</p></li><li><p>It will typically run on someone’s desk, so USB power is convenient.
</p></li><li><p>Beyond this, it should be cheap and simple for me to build.
</p></li></ul><h2>Hardware
</h2><p>At the time I designed it, it was cheaper to buy Blue Pill STM32 dev
boards than the bare processors, so I designed a daughter board with
a couple of LEDs (one IR, one visible), a current limiting resistor,
and a push button.
</p><p>The daughter board fits over the programming header on the Blue Pill. A
little finesse is needed to avoid blocking the header.
</p><h3>Schematic
</h3><p>As you'll see, the schematic is almost trivial:
</p><p><a href="em-bp-schem.svg">
<img alt="[The Schematic]" class="img_noborder" src="em-bp-schem.svg">
</a>
</p><h3>PCB
</h3><p>The PCB is simple too. The two unplated holes allow the user to see
the status LEDs on the Blue Pill.
</p><p><a href="em-bp-3d.png">
<img alt="[The PCB]" class="img_noborder" src="em-bp-3d.png">
</a>
</p><h2>Firmware
</h2><p>The STM32F103 on the Blue Pill is perfectly fast enough to
drive the LED with simple bit-banging. There is nothing clever
about the code.
</p><ul><li><p>All of the action happens during the <code>sys_tick</code> interrupt which
is called every 12.5μs.
</p></li><li><p>The data for the Morse messages is defined in <code>morseout.c</code> which is
generated by
<a href="https://github.com/mjoldfield/morsetool">morsetool</a>. Morsetool lets you
define multiple messages, and the push button cycles through them.
</p></li><li><p><a href="http://libopencm3.org/docs/latest/html/index.html">libopencm3</a> is
used to initialize some of the hardware; the source is laid out in
the libopencm3 way.
</p></li></ul><h3>Design Files
</h3><p>The firmware and PCB design can be downloaded from
<a href="https://github.com/mjoldfield/ir-morse-toys/tree/main/tx-bluepill">GitHub</a>. The
electronics was designed with KiCad, the firmware is written in C and uses
<a href="https://libopencm3.org">libopencm3</a>. Random bits of Haskell and
Python were used too.
</p>ED7B8618-8039-11EC-B67B-F14779BB8AF42022-01-26T23:58:55:55Z2022-01-26T23:58:55:55ZGames with IR commsMartin Oldfield<p>A few years ago, I explored sending Morse Code messages over
an infra-red link. The ideas were incorporated in a geocache
near Cambridge UK: to solve the puzzle the player has to build
a simple detector for the IR signals.
</p><h2>Introduction
</h2><p>A few years ago I set up <a href="https://coord.info/GC86BQY">a geocache</a>
where the coordinates of the container were flashed in Morse Code by
infra-red LEDs hidden in a bird-box. To receive the signal, people had
to construct a detector from a kit of parts provided in an earlier
stage of the puzzle.
</p><p>Belatedly, here are a few notes on the hardware.
</p><h2>The basic idea
</h2><p>In essence the idea is simple: flash an infra-red LED so that it sends
a message in <a href="https://en.wikipedia.org/wiki/Morse_code">Morse code.</a>
Given that most people can’t see infra-red, some sort of detector will
be needed, and this was something that the person doing the geocache
would have to build. I thought it would be more fun if the detector
made a noise rather than just flashed a light. To keep the detector as
simple as possible, the tone is generated in the transmitter and
broadcast in the IR signal.
</p><p>By generating all the signals in the transmitter, the detector could
be as simple as a photodiode AC-coupled to an amplifier driving a pair
of headphones. It might be nice to have some sort of sensitivity or
volume adjustment too.
</p><p>As is probably obvious, the emitter will be a microcontroller of some
sort driving a few IR LEDs.
</p><p><img class="img_noborder" src="block.svg">
</p><h3>Basic timings
</h3><p>All the timings in Morse Code transmissions are multiples of the time
taken to send a dit, which is about 50ms for a reasonably fast operator. So,
sending A (which is <code>.-</code> in Morse code) looks like this:
</p><p><img class="img_noborder" src="morse-a.svg">
</p><p>Morse is often transmitted at 800Hz, and if we adopt this convention
each dit will be 40 tone cycles long:
</p><p><img class="img_noborder_small" src="morse-800hz.svg">
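</p><p>These timings are easy to sanity-check in Python (the
(on, dits) envelope representation is my own, purely illustrative):
</p>

```python
# Timings from the text: a 50ms dit and an 800Hz sidetone.
dit_s   = 0.050
tone_hz = 800

cycles_per_dit = tone_hz * dit_s      # 800 cycles/s * 0.05s
print(cycles_per_dit)                 # 40 tone cycles per dit

# Keying envelope for 'A' (.-): dit on, one-dit gap, dah (three dits) on.
envelope_a = [(True, 1), (False, 1), (True, 3)]
total_dits = sum(dits for _, dits in envelope_a)
print(total_dits * dit_s)             # 0.25s to key 'A', before the letter gap
```

<p>The same envelope idea extends naturally to whole messages.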
</p><h3>Extra Modulation
</h3><p>Although this plan seemed straightforward, I was worried about the
sun. Any photographer knows just how much visible light there is
outside, and black-bodies being what they are, the same is presumably
true of infra-red too. It seemed quite plausible to me that the
sun’s IR would swamp the signal from my transmitter. Even if it
didn’t, the detector might have to extract a small AC signal (from
my transmitter) from a large DC offset (from the sun).
</p><p>To complicate matters, I wasn’t confident that I could set up a
representative test, and I did my early outdoor experiments in autumn
when the sun wasn’t as bright as it would be in summer.
</p><p>Set against these concerns is the well known result that infra-red
remote controllers work well even in very sunny rooms, despite having
small batteries which last for ages. Part of the explanation for this
is that remote controls modulate their control signals, typically at a
frequency of 30–50kHz. A detector can look for this signal and be
relatively immune to the sun and other sources of noise. As you might
expect people make inexpensive sensors which contain an IR-photodiode
plus a filter and amplifier with automatic-gain-control (AGC). Besides
making life easier for us, I suspect it’s easier to make a stable
high-gain amplifier when it’s so close to the sensor.
</p><p>For example Vishay make the <a href="https://www.vishay.com/docs/82806/tsop134.pdf">TSOP13xxx
series</a>. To fully
specify the part you need to give both the carrier frequency and
the algorithm used for the AGC. If you think about it, you’ll
realise that the optimal AGC algorithm must depend on the signal being
sent.
</p><p>40kHz seemed a good choice because it’s a multiple of the 800Hz tone
frequency, and an exact divisor of common CPU clocks e.g. 8MHz. In
particular, we can fit exactly 25 40kHz cycles into the on time of the
800Hz tone.
</p><p><img class="img_noborder_small" src="morse-40khz.svg">
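</p><p>The whole frequency ladder divides exactly, which a couple of lines
of Python confirm:
</p>

```python
# The frequency ladder used here: 8MHz CPU clock, 40kHz carrier, 800Hz tone.
cpu_hz, carrier_hz, tone_hz = 8_000_000, 40_000, 800

assert cpu_hz % carrier_hz == 0    # the carrier divides the CPU clock...
assert carrier_hz % tone_hz == 0   # ...and the tone divides the carrier

print(cpu_hz // carrier_hz)        # 200 CPU cycles per carrier cycle
print(carrier_hz // tone_hz)       # 50 carrier cycles per tone cycle
print(carrier_hz // tone_hz // 2)  # 25 carrier cycles in the tone's on-half
```

<p>Because every ratio is an integer, all three signals stay phase-locked
to the CPU clock.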
</p><h2>Detector Design
</h2><p>Most of the complexity disappears inside the Vishay detector:
accordingly very few technical design decisions need to be made once
we’ve decided to use it. The digital output from the detector merely
needs to be amplified to drive headphones. A volume control seems
civilized, and I think there’s merit in a bit of low-pass filtering
too, so the listener hears a purer tone than a raucous square-wave.
</p><h3><em>Desiderata</em>
</h3><p>The detector will be built by people who might have very little
experience of building electronics, and the key objectives are:
</p><ul><li>to end up with a working detector;
</li><li>to have fun;
</li><li>to feel a sense of accomplishment.
</li></ul><p>Thus it seemed sensible that:
</p><ul><li>different components should look significantly different;
</li><li>the design should be simple;
</li><li>the design shouldn’t be <em>too</em> simple.
</li></ul><p>Finally, something I did <em>not</em> worry about: I doubted I’d need more
than a hundred detectors, so cost wasn’t really an issue.
</p><p>My original plan was to force people to solder up the detector, but on
balance that seemed too daunting. So I designed both a permanent PCB-based
board and a simpler design which could be built on a breadboard. Both
detectors do basically the same job, though the breadboard version is
a bit simpler.
</p><p>The detectors are discussed in more detail in separate articles:
</p><table>
<tr>
<td width="50%" style="text-align: center">
<a href="rx-pcb.html">
<img src="det-pcb.jpg" alt="[PCB-based detector]" class="img_noborder" />
<p>PCB-based detector</p>
</a>
</td>
<td width="50%" style="text-align: center">
<a href="rx-bb.html">
<img src="det-bb.jpg" alt="[Breadboard-based detector]" class="img_noborder" />
<p>Breadboard-based detector</p>
</a>
</td>
</tr>
</table>
<h2>Emitter Design
</h2><p>I needed a couple of different emitters for the puzzle.
</p><p>One design was for field deployment inside a birdbox hidden in
the middle of nowhere. The main constraint here was that it had to
run reliably for at least a year on one set of batteries. I only
wanted a handful of these, so again cost didn’t matter.
</p><p>The other emitter was a test transmitter included with the detector
kits. In the end, adding a daughter board to a <a href="https://stm32-base.org/boards/STM32F103C8T6-Blue-Pill.html">Blue
Pill</a>
STM32 board was easier and cheaper than spinning my own STM32 board.
</p><p>The two emitters are discussed in more detail in separate articles:
</p><table>
<tr>
<td width="50%" style="text-align: center">
<a href="ir-emitter-bb.html">
<img src="em-bb-front.jpg" alt="[Emitter for birdbox]" class="img_noborder" />
<p>Emitter for birdbox</p>
</a>
</td>
<td width="50%" style="text-align: center">
<a href="ir-emitter-bp.html">
<img src="em-bp.jpg" alt="[Bluepill-based emitter]" class="img_noborder" />
<p>Bluepill-based emitter</p>
</a>
</td>
</tr>
</table>
<h2>Design Files
</h2><p>All the design files can be downloaded from
<a href="https://github.com/mjoldfield/ir-morse-toys">GitHub</a>. The electronics
was designed in KiCad, the firmware is written in C and uses
<a href="https://libopencm3.org">libopencm3</a>. Random bits of Haskell and
Python were used too.
</p><h2>Conclusions
</h2><p>Although I’m sure the designs aren’t optimal, they work. Novices have
built the detectors, and the emitters have worked reliably.
</p>C0B3B04A-5EC1-11EC-B1F9-B7BEF22C6C4C2021-12-16T20:35:04:04Z2021-12-17T17:19:21:21ZCar Battery MonitorMartin Oldfield<p>A simple gadget to warn me if my car battery is going flat.
</p><h2>Introduction
</h2><p>My car tends to eat batteries! Sometimes, particularly during pandemic
summers, many weeks pass without the car being used, and during this
time the battery discharges. I suspect the underlying
problem is that somewhere current is leaking to ground, but that seems
hard to fix. Instead, I’d be happy to just monitor the situation and
charge the battery if needed.
</p><p>The car is usually parked in range of the home WiFi network, so a convenient
solution would be to have the car send me email about its battery every
once in a while. This article describes how to do just that.
</p><p><img alt="[Car Battery Monitor]" class="img_border_small" src="cbm.jpg">
</p><p>All the files for this project are on
<a href="https://github.com/mjoldfield/car-battery-monitor/">GitHub</a>. The code
is written in CircuitPython, the PCB was designed in KiCad, and the
3D-printable case was designed in OpenSCAD.
</p><h2>Design constraints
</h2><p>The only real constraint is that the device mustn’t draw too much
current. To get a feel for the numbers, say the car battery has a
capacity of 100 amp hours, and in the absence of other drains we’d
like a lifetime of about ten years (so that a month is roughly 1%
of the capacity).
</p><p>There are about 8,766 hours in a year, call it 10,000, and so about
100,000 hours in a decade. Our 100 amp hour battery can supply about
1mA for that time, so we should make sure that our gadget draws less
than this.
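</p><p>The back-of-envelope budget is easily reproduced (a sketch of the
arithmetic only):
</p>

```python
# Budget from the text: 100 amp hours spread over roughly ten years.
capacity_ah = 100.0
hours = 10 * 8766            # ~8,766 hours per year

budget_ma = capacity_ah / hours * 1000
print(round(budget_ma, 2))   # ~1.14mA, i.e. "about 1mA"
```

<p>So anything drawing well under a milliamp on average is in budget.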
</p><p>Having satisfied the current constraint, I was keen to optimize for
ease of implementation rather than anything else. Rather than put
components on a PCB, I was quite happy to use modules. Doubtless much
smaller and cheaper designs exist, but I wasn’t fussed about that
here.
</p><h2>Basic hardware
</h2><p>These days I think the default choice for random WiFi gadgets is
something from the ESP32 family. I recently experimented with the
ESP32S2 on the TinyS2 board, using a (slightly different) Pololu DC-DC
converter to drop a 12V supply to 5V. You can <a href="../11/tiny-power.html">read the
details</a> but the key numbers are:
</p><table class="cspaced_sml">
<tr><th>State</th><th>Current consumption</th></tr>
<tr><td>Deep sleep</td><td>90µA</td></tr>
<tr><td>WiFi active</td><td>45mA</td></tr>
</table>
<p>So, provided the WiFi isn’t on very often the current consumption
might be an order of magnitude below the target.
</p><p>I wanted to hear from the car every day but I don’t trust the
accuracy of the clock, so I thought it best to send email every
six hours. That way I can be reasonably confident of getting an
email at night even if the car is away during the day.
</p><p>It takes about 10s to connect to WiFi and send an email, so the duty
cycle is 10s in 6 hours, or about 1 in 2000. This means that the
average current for doing things is about 25µA, which when added to
the sleep current of 90µA brings us to 115µA.
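</p><p>The same estimate as a Python sketch (the numbers are those quoted
above; the article rounds slightly differently):
</p>

```python
# Numbers from the text: 90µA asleep, 45mA with WiFi active,
# and a 10s transmission every 6 hours.
sleep_ua  = 90.0
active_ma = 45.0
duty = 10 / (6 * 3600)               # roughly 1 in 2000

avg_ua = sleep_ua + active_ma * 1000 * duty
print(round(avg_ua))                 # ~111µA, close to the ~115µA quoted
```

<p>Even doubling the WiFi on-time would keep the average comfortably under
the 1mA budget.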
</p><h3>Schematic
</h3><p><a href="pcb.svg"><img alt="[Schematic]" class="img_border_small" src="pcb.svg"></a>
</p><h3>Bill of Materials
</h3><table class="cspaced_sml">
<tr><th>Designator</th><th>Part</th><th>Footprint</th></tr>
<tr><td>U1</td><td><a href="https://www.pololu.com/product/3791">Pololu D36V6F3 3.3V buck converter</a></td><td></td></tr>
<tr><td>U2</td><td><a href="https://unexpectedmaker.com/tinys2">TinyS2</a></td><td></td></tr>
<tr><td>RV1</td><td><a href="https://www.littelfuse.com/products/varistors/surface-mount/auml/v18aumla1210.aspx">V18AUMLA1210NH</a> MLV</td><td>1210</td></tr>
<tr><td>F1</td><td><a href="https://www.littelfuse.com/products/polyswitch-resettable-pptcs/surface-mount/nanoasmd/nanoasmdc010f_2.aspx">NANOASMDC010F-2</a> 100mA polyfuse</td><td>1206</td></tr>
<tr><td>D1</td><td><a href="https://www.taiwansemi.com/en/products/details/SS24">SS24</a> Schottky diode</td><td>DO-214AA (SMB)</td></tr>
<tr><td>R1</td><td>180k resistor</td><td>0805</td></tr>
<tr><td>R2</td><td>33k resistor</td><td>0805</td></tr>
<tr><td>C1</td><td>100nF X7R capacitor</td><td>0805</td></tr>
</table>
<h3>Getting power
</h3><p>The gadget just needs a 12V power supply which is on even when the
car is stopped and the engine’s off. I powered it from a cigarette
lighter socket.
</p><p>I bought a plug from Amazon which had both a fuse and LED. I replaced
the fuse with one with a much lower current rating (500mA). The LED I
removed completely: it drew a few mA!
</p><h3>Protection
</h3><p>I am no expert but I understand that the car’s 12V supply can be
rather noisy so I made some attempts to protect the electronics.
The DC-DC converter claims to support input voltages of up to 50V, so
I just wanted something to handle spikes.
</p><p>The delightfully named Littelfuse company make a wonderful range of
varistors for precisely this task. These devices start to conduct if
the voltage across them (in either direction) exceeds some
threshold. If this went on for very long the magic smoke would escape,
so I put a polyfuse in series with the supply.
</p><p>Finally there’s a Schottky diode for reverse polarity protection.
</p><h3>Voltage sensing
</h3><p>The ESP32S2 has inbuilt ADCs, so all that’s needed is an external
potential divider. The total resistance is about 200kΩ, which implies
a current of about 60µA when fed with 12V.
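</p><p>Both numbers follow from the divider values in the bill of materials
(that R1 is the top leg and R2 the bottom is my assumption):
</p>

```python
# Divider values from the bill of materials: R1 = 180k, R2 = 33k.
r1, r2 = 180e3, 33e3
v_in = 12.0

i_ua  = v_in / (r1 + r2) * 1e6       # current through the divider
v_adc = v_in * r2 / (r1 + r2)        # voltage seen by the ADC pin

print(round(i_ua))                   # ~56µA, i.e. "about 60µA"
print(round(v_adc, 2))               # ~1.86V for a 12V input
```

<p>A 15V input maps to about 2.3V at the ADC pin, safely below the 3.3V
supply.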
</p><p>Empirically a simple linear model was a good map from ADC count to
input voltage. Fitting over the range 10–15V gives:
</p><pre><code>V = 4.0418 * (ADC / 10000) + 0.8294</code></pre><p>which makes these predictions
</p><table class="cspaced_sml">
<tr><th>True voltage / V</th><th>ADC count / 10,000</th><th>Model value / V</th></tr>
<tr><td>6.0</td><td>1.2392</td><td>5.929</td></tr>
<tr><td>7.0</td><td>1.5132</td><td>7.021</td></tr>
<tr><td>8.0</td><td>1.7575</td><td>7.996</td></tr>
<tr><td>9.0</td><td>2.0196</td><td>9.041</td></tr>
<tr><td>10.0</td><td>2.2639</td><td>9.996</td></tr>
<tr><td>11.0</td><td>2.5221</td><td>11.015</td></tr>
<tr><td>12.0</td><td>2.7643</td><td>12.011</td></tr>
<tr><td>13.0</td><td>3.0146</td><td>13.010</td></tr>
<tr><td>14.0</td><td>3.2529</td><td>13.959</td></tr>
<tr><td>15.0</td><td>3.5071</td><td>14.973</td></tr>
</table>
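<p>The fitted model is trivial to wrap in a helper (a sketch; the
coefficients are those quoted above):
</p>

```python
# The linear calibration fitted above; ADC counts are in units of 10,000.
def adc_to_volts(adc_10k):
    return 4.0418 * adc_10k + 0.8294

# Spot-check against the 12V row of the table: the published coefficients
# are presumably rounded, so the last decimal places differ slightly.
print(round(adc_to_volts(2.7643), 2))   # ~12.0V
```

<p>Over the fitted 10–15V range the model stays within a few tens of
millivolts of the true voltage.</p>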
<h2>Construction
</h2><p>Although the circuit is simple, I made a PCB to make things neat and
tidy. I’d not realised that GPIO pin 0 is used by the bootloader, so
had to bodge things a little. The files in the <a href="https://github.com/mjoldfield/car-battery-monitor/tree/main/pcb">GitHub
repo</a>
have been updated to fix this problem.
</p><p><img alt="[PCB]" class="img_noborder_small" src="pcb-brd.svg">
</p><h2>Software
</h2><p>All the software was written in CircuitPython. It is straightforward,
but a few points are worth making:
</p><ol><li><p>We take all the ADC readings before turning on the WiFi because
the wireless part draws lots of current which fluctuates rapidly.
</p></li><li><p>Results are sent by SMTP to a mail server on the local network. I’ve
<a href="../11/python-smtp.html">written about this</a> before.
</p></li></ol><p>There are three Python files:
</p><ul><li><p><a href="https://github.com/mjoldfield/car-battery-monitor/blob/main/code/code.py"><code>code.py</code></a> which contains all the application code;
</p></li><li><p><a href="https://github.com/mjoldfield/car-battery-monitor/blob/main/code/tinys2.py"><code>tinys2.py</code></a> this comes with the TinyS2 and contains a driver for the onboard LED;
</p></li><li><p><a href="https://github.com/mjoldfield/car-battery-monitor/blob/main/code/secrets.py"><code>secrets.py</code></a> this contains the WiFi access information, and details about the email setup.
</p></li></ul><h3>Installation
</h3><p>There are four layers of software to consider.
</p><ol><li><p>The hardware bootloader in the ESP32. You need special software e.g. <a href="https://github.com/espressif/esptool">esptool</a> to communicate with this.
</p></li><li><p>A <a href="https://github.com/microsoft/uf2">UF2</a> bootloader, which provides a USB mass-storage device to which firmware can be uploaded.
</p></li><li><p>The CircuitPython binary, which is available as a UF2 image. When installed it provides a USB mass-storage device to which your Python code can be uploaded.
</p></li><li><p>The Python files which contain your application.
</p></li></ol><p>The CircuitPython website has
<a href="https://circuitpython.org/board/unexpectedmaker_tinys2/">instructions</a>
for installing all this, and both the UF2 bootloader and CircuitPython UF2 images.
</p><p>These are the steps:
</p><ol><li><p>Reset the board holding the BOOT button down.
</p></li><li><p>Use <code>esptool.py</code> to install the UF2 bootloader (<code>combined.bin</code>).
</p></li><li><p>Reset the board.
</p></li><li><p>Install the CircuitPython UF2 image.
</p></li><li><p>Reset the board.
</p></li><li><p>Copy the three .py files to the board.
</p></li><li><p>Reset the board.
</p></li></ol><h2>Case
</h2><p>I designed a simple 3D-printable case in OpenSCAD. It is rather larger
than necessary, but gives space to knot the incoming power cable to stop
it being pulled out.
</p><h2>Conclusions
</h2><p>The device basically works as desired. The measured sleep current is
about 130µA, so adding in the extra current used when active I expect
the average to be about 155µA. This is a bit higher than the 115µA I’d
estimated, probably because this DC-DC converter draws a little more
quiescent current than the one in my earlier tests.
</p><p>If I had time, it would be nice to fit all the components inside the
cigarette-lighter plug but I doubt I’ll bother.
</p><p>I am struck by just how easy it is to build something like this from
pre-existing modules and Python, and still only sip power. I think we
should all be thankful for the work that people have donated to the
free commons which makes this possible.
</p>6B27589E-47F0-11EC-A81F-8332CA3B95742021-11-17T20:35:47:47Z2021-11-17T20:35:47:47ZTinyS2 PowerMartin Oldfield<p>Power measurements for the TinyS2, an ESP32-S2 board.
</p><h2>Power Measurements
</h2><p>Having been rather disappointed to find that the Raspberry Pi Pico
and ESP32 Wireless Pack drew so much current, I thought it would
be worth repeating the experiment with a pure ESP32 solution.
</p><p>The <a href="https://unexpectedmaker.com/tinys2">TinyS2</a> board was easy to
acquire, so that’s what I did! It boasts an
<a href="https://www.espressif.com/en/products/socs/esp32-s2">ESP32-S2</a>
mounted on a small board with glue circuitry and a USB-C socket. The
ESP32-S2 is a single core processor with WiFi, BLE, and native USB
support. The last-named allows CircuitPython to export its filesystem
to a host computer.
</p><p><img alt="[TinyS2]" class="img_border_small" src="tinys2.jpg">
</p><p>All the tests below use CircuitPython v6.2 which came installed on the
board.
</p><p>Normal operation is what the name suggests, and includes time sleeping
with the <code>time.sleep()</code> call. Light sleep made no difference and so I
ignored it; deep sleep uses the
<code>alarm.exit_and_deep_sleep_until_alarms()</code> call.
</p><h2>Results
</h2><p>I ran three series of tests: 5V supply to the 5V pin; 5V supply to the battery
pin; 12V dropped to 5V which supplied the battery pin. When powered from the
battery pin, the red power LED is not illuminated, which saves about 1.5mA:
as you’ll see from the table below, this is very significant when in deep sleep.
</p><table class="cspaced_sml">
<tr><th>State</th><th>5V pin</th><th>Battery pin</th><th>12V → 5V → Battery pin</th></tr>
<tr><th>Normal</th><td>31.5mA</td><td>30.0mA</td><td>18mA</td></tr>
<tr><th>Deep sleep</th><td>1.5mA</td><td>40µA</td><td>90µA</td></tr>
<tr><th>WiFi active</th><td>100mA</td><td>100mA</td><td>45mA</td></tr>
</table>
<p>Most measurements are accurate to about 0.5mA. The low currents drawn during
deep sleep are accurate to about 10µA. The current drawn whilst WiFi was active
fluctuated a lot, so should be taken as indicative only.
</p><h3>Peak current consumption
</h3><p>When the WiFi was active, short-term peaks of about 200mA were seen. The current-limit
on my bench power supply tripped if it was set lower than 210mA.
</p><h3>Quiescent current
</h3><p>The <a href="https://www.espressif.com/sites/default/files/documentation/esp32-s2_datasheet_en.pdf">ESP32-S2 datasheet</a>
quotes a current consumption of 20µA when everything but the
Real-Time-Clock is disabled.
</p><p>The
<a href="https://www.renesas.com/us/en/document/dst/isl85410-datasheet?r=529061">datasheet for the ISL85410</a>
in the Pololu <a href="https://www.pololu.com/product/2831">D24V10F5</a> DC-DC
converter quotes a typical quiescent current of 80µA which dominates
the current drawn whilst the ESP32 is in deep sleep.
</p><h2>Conclusions
</h2><p>I was interested in the comparison between running CircuitPython on
the TinyS2 and on the Raspberry Pi Pico with an ESP32 WiFi
addon. Simply put, Tiny power is a lot smaller than Pico Power,
especially when sleeping.
</p>76030ECC-46FE-11EC-BA46-F978C93B95742021-11-16T15:40:53:53Z2021-11-16T15:40:53:53ZPi Pico PowerMartin Oldfield<p>Power measurements for the Raspberry Pi Pico.
</p><p><em>Update on 17th November 2021: having now played with an ESP32-S2
based board, I should say that it draws a lot less power than the combo described here.</em>
</p><h2>Power Measurements
</h2><p>I was thinking about using a Raspberry Pi Pico with an ESP32-based Wireless
Pack for a battery-powered monitor, so I was interested to see how much
current it draws.
</p><p>All the tests below use Adafruit’s Circuit Python. Normal operation is
what the name suggests, and includes time sleeping with the
<code>time.sleep()</code> call.
</p><p>Light sleep uses the <code>alarm.light_sleep_until_alarms()</code> call; deep
sleep uses the <code>alarm.exit_and_deep_sleep_until_alarms()</code> call.
</p><h3>Pico only
</h3><p>The first tests use only the Pico board, powered at 5V through the USB port.
</p><table class="cspaced_sml">
<tr><th>State</th><th>Current draw / mA</th><th>Power consumption / mW</th></tr>
<tr><th>Normal operation</th><td>18</td><td>90</td></tr>
<tr><th>Light sleep</th><td>12</td><td>60</td></tr>
<tr><th>Deep sleep</th><td>5</td><td>25</td></tr>
</table>
<p>Accuracy for the current measurement is about ±0.5mA. There’s a bit of
high-frequency noise, presumably from the DC-DC converter, but
very little variation otherwise.
</p><h3>Pico and Wireless Pack
</h3><p>The second set of tests use the Pico board with a <a href="https://shop.pimoroni.com/products/pico-wireless-pack">Pimoroni Wireless
Pack</a>, all
powered at 5V through the USB port.
</p><table class="cspaced_sml">
<tr><th>State</th><th>Current draw / mA</th><th>Power consumption / mW</th></tr>
<tr><th>WiFi not in use</th><td>36</td><td>180</td></tr>
<tr><th>WiFi in use</th><td>91</td><td>455</td></tr>
<tr><th>Light sleep</th><td>29</td><td>145</td></tr>
<tr><th>Deep sleep</th><td>22</td><td>110</td></tr>
</table>
<p>As can be seen, just plugging in the Wireless Pack increases the
current consumption by about 17mA. Thus if the Pico is running normal
code the current consumption roughly doubles; if it’s sleeping, it
more than quadruples. It is rather a shame that the sleep current is
so high.
</p><p>If you actually use the WiFi, consumption rises sharply: nearly three
times higher. I have no idea if the current will change in different
settings: in my test the ESP32 was about a metre away from the WiFi
Access Point, so presumably very little RF power was needed. The
single number above hides significant variation when the WiFi is
active.
</p><h3>Pico, Wireless Pack, 12V supply
</h3><p>I used a <a href="https://www.pololu.com/product/2831">Pololu D24V10F5</a> buck
converter to drop 12V to 5V, and then supplied that to the
Pico. Internally it uses the Intersil ISL85410 DC-DC converter.
</p><table class="cspaced_sml">
<tr><th>State</th><th>Current draw / mA</th><th>Power consumption / mW</th><th>Efficiency</th></tr>
<tr><th>WiFi not in use</th><td>16.5</td><td>198</td><td>91%</td></tr>
<tr><th>WiFi in use</th><td>40.5</td><td>486</td><td>94%</td></tr>
<tr><th>Light sleep</th><td>13.5</td><td>162</td><td>90%</td></tr>
<tr><th>Deep sleep</th><td>10</td><td>120</td><td>92%</td></tr>
</table>
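<p>The efficiency column is just the ratio of 5V-side power (from the
earlier table) to 12V-side power, e.g. for the idle case:
</p>

```python
# "WiFi not in use" row: 36mA at 5V (earlier table) vs 16.5mA at 12V.
p_out_mw = 36 * 5          # power delivered on the 5V side
p_in_mw  = 16.5 * 12       # power drawn from the 12V supply

print(round(p_out_mw / p_in_mw, 2))   # 0.91, i.e. 91% efficient
```

<p>The other rows work out the same way.</p>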
<h2>Conclusions
</h2><p>I learned a couple of things:
</p><ul><li><p>The Pico Wireless Pack draws a lot of current, even when it’s not
doing anything.
</p></li><li><p>The Pololu buck converter really does deliver 90% efficiency in a
real application.
</p></li></ul><p>So one disappointment and one nice surprise!
</p><p><em>Update: If you care about low-power, it is probably more sensible to use
a pure ESP32 design.</em>
</p>D3D45FFC-434C-11EC-BCDF-399686B77A502021-11-11T21:51:23:23Z2021-11-13T14:59:12:12ZPi Pico WirelessMartin Oldfield<p><em>Aidés-memoires</em> for using the Pimoroni Pico Wireless Pack.
</p><p><em>Update on 17th November 2021: having now played with an ESP32-S2
based board, I think it is much better than the Pico/ESP32 combination.</em>
</p><h2>Introduction
</h2><p>Although the <a href="https://www.raspberrypi.com/products/raspberry-pi-pico/specifications/">Raspberry Pi
Pico</a>
is a fine board, it lacks a WiFi connection. To solve this problem one
can add a “WiFi-coprocessor”. Using an
<a href="https://en.wikipedia.org/wiki/ESP32">ESP32</a> is sufficiently popular
for this role that open firmware is available. You could use almost any
ESP32 board, but Pimoroni make the <a href="https://shop.pimoroni.com/products/pico-wireless-pack">Pico
Wireless Pack</a>
which fits neatly onto the Pico. The Wireless Pack also boasts a microSD
card slot, but I’ve not played with that.
</p><p><img alt="[Pico Wireless Pack]" class="img_border_small" src="picowireless.jpg">
</p><p>Such a coprocessor is not a new idea. The
<a href="https://en.wikipedia.org/wiki/ESP8266">ESP8266</a> (the forerunner to
the ESP32) first appeared as the ESP-01 module designed for this very
role. It allowed any device with a serial port to connect to WiFi via
an extension of the <a href="https://en.wikipedia.org/wiki/Hayes_command_set">Hayes Modem
Commands</a>. By
contrast the ESP32 often sits on the main processor’s SPI bus for
extra speed. The Pimoroni board is very similar to the <a href="https://www.adafruit.com/product/4201">Adafruit
AirLift</a> and includes the same
RGB LED connected to the ESP32. In turn, the AirLift drew inspiration
from the <a href="https://store.arduino.cc/products/arduino-uno-wifi-rev2">Arduino UNO Wifi
Rev.2</a> which
uses a ublox NINA-W102 for WiFi. Peer inside the ublox module though
and you’ll find an ESP32.
</p><p>The ESP32 is a powerful microcontroller in its own right though, and
it’s arguably overkill to use it purely to handle the WiFi connection:
I suspect many applications using a Pico and ESP32 could be ported to
run wholly on the ESP32. Personally though, I somewhat prefer
developing code for the Pico. For large volumes or when space or unit
costs matter, dropping the Pico is probably sensible; otherwise I think
the Pico adds value.
</p><p><em>Update: Having now explored using a pure ESP32-S2 solution, it would
be remiss not to point out a couple of drawbacks of the Pico/ESP32
combo. Firstly the Wireless Pack draws about 17mA when idle which is
rather thirsty; secondly the CircuitPython API isn’t compatible with
the normal <code>wifi</code> library used on boards with integrated WiFi.</em>
</p><h2>Software
</h2><p>Given that the Pico/ESP32 combo makes little sense in purely hardware
terms, it is disappointing that the Pimoroni software supporting the
board is poorly documented. There are a few examples which illustrate
how to do HTTP things from MicroPython and C++, but doing more seems
hard. I couldn’t find good documentation for the API or a schematic
for the hardware. Perhaps I am being unfair, so do look for yourself.
</p><p>Happily the software that Adafruit provide for their AirLift board
works perfectly well with the Wireless Pack too. The main
incompatibility is that Adafruit use CircuitPython rather than
MicroPython, so you have to upload new firmware to the Pico. Happily
that’s easy: just follow <a href="https://circuitpython.org/board/raspberry_pi_pico/">the
instructions</a>. Once
installed the differences between Circuit- and Micro-Python didn’t
bother me. Although Adafruit are keen on the Mu IDE, you can use
Thonny too (but you have to tell Thonny it’s talking to CircuitPython,
not MicroPython).
</p><p>The ESP32 firmware provided by Pimoroni works with the Adafruit
CircuitPython libraries, so there’s nothing to change there.
</p><h3>GPIO mapping
</h3><p>Unsurprisingly, the Adafruit documentation doesn’t cover using the
Wireless Pack, so we have to sort out which GPIO pins to use by
ourselves. Happily, though, this is the extent of the work we have to
do. I wasn’t able to find a proper schematic, but there’s a helpful
diagram on the <a href="https://shop.pimoroni.com/products/pico-wireless-pack">Pimoroni
page</a>. The key
connections are:
</p><table class="cspaced_sml">
<tr><th> Adafruit Code </th><th>Pimoroni Diagram </th><th>GPIO </th></tr>
<tr><td> esp32_cs </td><td style="text-decoration:overline">ESP_CS </td><td>GP7 </td></tr>
<tr><td> esp32_ready </td><td>BUSY </td><td>GP10 </td></tr>
<tr><td> esp32_reset </td><td style="text-decoration:overline">RESET </td><td>GP11 </td></tr>
<tr><td> SCK </td><td>SCLK </td><td>GP18 </td></tr>
<tr><td> MOSI </td><td>MOSI </td><td>GP19 </td></tr>
<tr><td> MISO </td><td>MISO </td><td>GP16 </td></tr>
</table>
<p>Other Pico pins are used: some for the SD card, others for a serial
connection to the ESP32 which (I think) are used to flash new ESP32
firmware. I’ve not played with these at all.
</p><h2>A working example
</h2><p>Adafruit provide a simple example on <a href="https://learn.adafruit.com/adafruit-airlift-breakout/circuitpython-wifi">their
website</a>
which scans for wireless networks and reports which ones it finds. If this
works, then the basic system is sound.
</p><h3>Code
</h3><pre><code>import board
import busio
from digitalio import DigitalInOut
from adafruit_esp32spi import adafruit_esp32spi
import adafruit_requests as requests

print("ESP32 SPI hardware test")

esp32_cs = DigitalInOut(board.GP7)
esp32_ready = DigitalInOut(board.GP10)
esp32_reset = DigitalInOut(board.GP11)

spi = busio.SPI(board.GP18, board.GP19, board.GP16)
esp = adafruit_esp32spi.ESP_SPIcontrol(spi, esp32_cs, esp32_ready, esp32_reset)

if esp.status == adafruit_esp32spi.WL_IDLE_STATUS:
    print("ESP32 found and in idle mode")

print("Firmware vers.", esp.firmware_version)
print("MAC addr:", [hex(i) for i in esp.MAC_address])

for ap in esp.scan_networks():
    print("\t%s\t\tRSSI: %d" % (str(ap['ssid'], 'utf-8'), ap['rssi']))

print("Done!")
</code></pre><p>In CircuitPython this file must be saved as <code>code.py</code>.
</p><h3>Dependencies
</h3><p>Besides the code above, you also need a couple of CircuitPython libraries:
</p><ul><li><p>adafruit_esp32spi
</p></li><li><p>adafruit_requests
</p></li></ul><p>Both can be extracted manually from the <a href="https://github.com/adafruit/Adafruit_CircuitPython_Bundle/releases">Adafruit CircuitPython
Bundle</a>,
and copied to the Pico.
</p><h3>Walkthrough
</h3><ul><li><p>Assemble the hardware
</p></li><li><p>Put the CircuitPython firmware on the Pico:
</p><ul><li><p>Download <a href="https://circuitpython.org/board/raspberry_pi_pico/">the .uf2 file</a>.
</p></li><li><p>Boot the Pico with the button held down.
</p></li><li><p>Copy the .uf2 file to the Pico.
</p></li><li><p>The Pico will reboot, and when it reappears the USB drive will be called
<code>CIRCUITPY</code>.
</p></li></ul></li><li><p>Copy the file above to the root of the <code>CIRCUITPY</code> drive as <code>code.py</code>.
</p></li><li><p>Copy the dependencies to the Pico:
</p><ul><li><p>Download the relevant <a href="https://circuitpython.org/libraries">library bundle</a>.
</p></li><li><p>Unzip it.
</p></li><li><p>Copy <code>lib/adafruit_requests.mpy</code> and <code>lib/adafruit_esp32spi</code> to the root of the <code>CIRCUITPY</code> drive. The latter is a directory: copy the whole tree.
</p></li></ul></li></ul><p>If you run the code e.g. with Thonny or Mu, you should see this on the Serial port:
</p><pre><code>ESP32 SPI hardware test
ESP32 found and in idle mode
Firmware vers. bytearray(b'1.7.3\x00')
MAC addr: ['0x44', '0x97', '0x8e', '0x57', '0xdd', '0xc4']
XXXXXXX RSSI: -49
XXXXXXX RSSI: -51
XXXXXXX RSSI: -55
XXXXXXX RSSI: -73
XXXXXXX RSSI: -76
XXXXXXX RSSI: -77
XXXXXXX RSSI: -77
Done!
</code></pre><h2>Other examples
</h2><p>As you might guess from the code above, the <code>adafruit_esp32spi</code>
library contains the code which talks to the ESP32. Their
<a href="https://github.com/adafruit/Adafruit_CircuitPython_ESP32SPI">repo</a> on
GitHub contains many <a href="https://github.com/adafruit/Adafruit_CircuitPython_ESP32SPI/tree/main/examples">other
examples</a>.
</p><p>Most of these examples actually connect to a WiFi network, and so need
credentials for that network. Conventionally this is done by supplying
a <code>secrets.py</code> file which looks like this:
</p><pre><code>secrets = {
    'ssid' : 'My Network SSID',
    'password' : 'My WiFi Password',
}
</code></pre><p>then calling:
</p><pre><code>esp.connect_AP(secrets["ssid"], secrets["password"])
</code></pre><h2>Conclusion
</h2><p>Although the Pico lacks native WiFi support, adding an ESP32 solves that
problem easily if inelegantly.
</p><p><em>Update: Although this works, I think you’ll probably be happier using
an ESP32 as the main processor if you want WiFi.</em>
</p>5C414D8C-4492-11EC-8CD2-CBB986B77A502021-11-13T14:58:49:49Z2021-11-13T14:58:49:49ZSimple SMTP from PythonMartin Oldfield<p>How to send simple email from Python, particularly from
embedded devices.
</p><p><em>Update on 17th November 2021: Added a second example using the <code>wifi</code> library.</em>
</p><h2>Introduction
</h2><p>Recently I’ve been using the Raspberry Pi Pico to record
data and send it to me over a WiFi network. Although there
are many ways to do this, receiving the results by email
had a certain retro appeal. The Pico doesn’t have native
WiFi support, so I used an ESP32 as a WiFi coprocessor, and
drove it all from Adafruit’s CircuitPython stack.
</p><p>The data are collected on my local LAN, where I already run an open
(to the LAN)
<a href="https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol">SMTP</a>
server. I hoped that there would already be SMTP support in one
of the CircuitPython libraries, but I couldn’t find it.
Happily though for this very simple case (no authentication, no
fancy 8-bit extensions), it was easy to roll my own.
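</p><p>For comparison, on a machine running full desktop Python the standard
library already handles all of this. A minimal sketch (the addresses and
server here are placeholders) might look like:
</p>

```python
import smtplib
from email.message import EmailMessage

# Build a simple plain-text message; these addresses are placeholders.
msg = EmailMessage()
msg["From"] = "foo@wibble.com"
msg["To"] = "bar@wibble.com"
msg["Subject"] = "Hello World!"
msg.set_content("Your text goes here")
print(msg)

# On a machine that can reach an open relay on port 25 you would then do:
# with smtplib.SMTP("192.168.1.1", 25) as smtp:
#     smtp.send_message(msg)
```

<p>CircuitPython has no <code>smtplib</code>, which is why the code below speaks SMTP by hand.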
</p><h2>Hello World
</h2><p>Annoyingly the <code>wifi</code> library used for ports with native WiFi support
and the <code>adafruit_esp32spi</code> library used for ESP32 WiFi coprocessors
have different APIs.
</p><h3><code>wifi</code> version
</h3><p>Most CircuitPython boards with integrated WiFi support seem to use the <code>wifi</code>
library for low-level stuff and <code>socketpool</code> above it. I tested the code
below on a <a href="https://unexpectedmaker.com/tinys2">Tiny S2</a> ESP32-S2 board.
</p><pre><code>import board
import time
import alarm
import digitalio
import wifi
import socketpool
from secrets import secrets
def wifi_connect():
    print("My MAC addr:", [hex(i) for i in wifi.radio.mac_address])
    print("Available WiFi networks:")
    for network in wifi.radio.start_scanning_networks():
        print("\t%s\t\tRSSI: %d\tChannel: %d" % (str(network.ssid, "utf-8"),
                                                 network.rssi, network.channel))
    wifi.radio.stop_scanning_networks()
    print("Connecting to AP...")
    wifi.radio.connect(secrets["ssid"], secrets["password"])
    network = wifi.radio.ap_info
    print("Connected to {} via {}, RSSI = {}".format(
        network.ssid, network.authmode, network.rssi))
    print("My IP address is", str(wifi.radio.ipv4_address))

def mail_open_socket():
    server = secrets["smtp_server"]
    print("Connecting to mail server ", server)
    pool = socketpool.SocketPool(wifi.radio)
    sock = pool.socket()
    addr = (server, 25)
    sock.connect(addr)
    return sock

def mail_rxtx(s, msg):
    if msg is not None:
        print('> ' + msg)
        s.send(msg.encode('ascii') + b'\n')
    buff_size = 1024
    buff = bytearray(buff_size)
    s.recv_into(buff)
    x = buff.decode('ascii')
    print('< ' + x.rstrip())
    return x

def mail_send(m_subj, m_msg):
    m_from = secrets["mail_from"]
    m_to = secrets["mail_to"]
    s = mail_open_socket()
    mail_rxtx(s, None)
    mail_rxtx(s, "HELO pico")
    mail_rxtx(s, "MAIL FROM:{}".format(m_from))
    mail_rxtx(s, "RCPT TO:{}".format(m_to))
    mail_rxtx(s, "DATA")
    mail_rxtx(s, "From: {}\nTo: {}\nSubject: {}\n\n{}\n.".format(m_from, m_to, m_subj, m_msg))

sleep_time = 10  # settle time after connecting, in seconds

print("connect to wifi")
wifi_connect()
time.sleep(sleep_time)

print("Send email")
m_subj = "Hello World!"
m_msg = "Your text goes here"
mail_send(m_subj, m_msg)
</code></pre><h3><code>ESP32SPI</code> version
</h3><p>The main program owes much to an Adafruit <a href="https://github.com/adafruit/Adafruit_CircuitPython_ESP32SPI/blob/main/examples/esp32spi_tcp_client.py">socket
example</a>,
though the pins have been changed to suit a Pimoroni Wireless Pack and
Raspberry Pi Pico.
</p><pre><code>import board
import busio
from digitalio import DigitalInOut
import adafruit_esp32spi.adafruit_esp32spi_socket as socket
from adafruit_esp32spi import adafruit_esp32spi
from secrets import secrets
def rxtx(s, msg):
    if msg is not None:
        print('> ' + msg)
        s.send(msg.encode('ascii') + b'\n')
    x = s.recv(1024).decode('ascii')
    print('< ' + x.rstrip())
    return x
print("ESP32 SMTP Client")
esp32_cs = DigitalInOut(board.GP7)
esp32_ready = DigitalInOut(board.GP10)
esp32_reset = DigitalInOut(board.GP11)
spi = busio.SPI(board.GP18, board.GP19, board.GP16)
esp = adafruit_esp32spi.ESP_SPIcontrol(spi, esp32_cs, esp32_ready, esp32_reset)
while not esp.is_connected:
    try:
        print("Connecting to AP...")
        esp.connect_AP(secrets["ssid"], secrets["password"])
    except RuntimeError as e:
        print("could not connect to AP, retrying: ", e)
        continue
print("Connected to", str(esp.ssid, "utf-8"), "\tRSSI:", esp.rssi)
print("My IP address is", esp.pretty_ip(esp.ip_address))
socket.set_interface(esp)
socketaddr = socket.getaddrinfo(secrets["smtp_server"], 25)[0][4]
s = socket.socket()
s.settimeout(10)
print("Connecting to mail server")
s.connect(socketaddr)
m_from = secrets["mail_from"]
m_to = secrets["mail_to"]
m_subj = "Hello World!"
m_msg = "Your text goes here"
rxtx(s, None)
rxtx(s, "HELO pico")
rxtx(s, "MAIL FROM:{}".format(m_from))
rxtx(s, "RCPT TO:{}".format(m_to))
rxtx(s, "DATA")
rxtx(s, "From: {}\nTo: {}\nSubject: {}\n\n{}\n.".format(m_from, m_to, m_subj, m_msg))
</code></pre><h3>Common code
</h3><p>Besides the usual WiFi information in the <code>secrets.py</code> file, you
also need to define the mail server and a couple of addresses.
</p><pre><code>secrets = {
    'ssid' : 'XXXXX',
    'password' : 'XXXXXXXXXXXXXX',
    'smtp_server': '192.168.1.1',
    'mail_from': '<foo@wibble.com>',
    'mail_to': '<bar@wibble.com>'
}
</code></pre><h2>Conclusion
</h2><p>This email recipe won’t work if you want to use a server with access
control, or if you want to send attachments. For simple tasks though
it works well. It is nice to write code against <a href="https://datatracker.ietf.org/doc/html/rfc821">an API which is forty
years old</a> but still works.
</p>7ECA5D1A-0F4C-11EC-B92D-1897FAC0D1802021-09-06T19:34:18:18Z2021-09-06T19:34:18:18ZPlaces to eat on SkyeMartin Oldfield<p>Some brief notes on places to eat on the Isle of Skye.
</p><h2>Edinbane Lodge
</h2><p>The best food I found on Skye. The Edinbane Lodge is a 16th-century
hunting lodge about halfway between Portree and Dunvegan. The food was
classy; the atmosphere casual. Great produce, cooked with flair and
confidence.
</p><p>For more details visit <a href="https://www.edinbanelodge.com/restaurant">their website</a> or
see <a href="http://maps.google.com/maps?q=N+57+28.169+W+6+25.936">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>Antlers Bar and Grill, Portree
</h2><p>This great restaurant is in the Portree Hotel. It serves delicious
food, nicely presented, and cooked with more flair than the menu
suggests. Dinner is particularly good.
</p><p>For more details visit <a href="http://theportreehotel.com/dine-with-us/">their website</a>
or see <a href="http://maps.google.com/maps?q=N+57+24.778+W+6+11.625">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>Cuchullin, Portree
</h2><p>Excellent food with a Scottish seafood bent in Portree.
</p><p>For more details call the restaurant on 01478 612750 or
see <a href="http://maps.google.com/maps?q=N+57+24.789+W+6+11.618">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>Pizza in the Skye, Portree (?)
</h2><p>Improbably good wood-fired pizza, cooked in a mobile food-truck. It really
is fabulous pizza in absolute terms, rather than just good for something
cooked in a truck.
</p><p>For more details, including their location, see <a href="https://pizzaintheskye.com/">their website.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>Birch, Portree
</h2><p>An elegant cafe, serving excellent espresso and fine cakes. There’s
a Scandi-hipster vibe to the place, which seems to suit it, despite
the surroundings.
</p><p>For more details, see <a href="http://maps.google.com/maps?q=N+57+24.758+W+6+11.604">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>The Old Inn, Carbost
</h2><p>A lovely gastropub a stone’s throw from the Talisker distillery. They describe their
food as wholesome, which doesn’t do it justice to my mind.
</p><p>For more details visit <a href="https://www.theoldinnskye.co.uk">their website</a>
or see <a href="http://maps.google.com/maps?q=N+57+18.110+W+6+21.149">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>The Oyster Shed, Carbost
</h2><p>Primarily a seafood shop, though you can eat here too sitting on
benches outside. The produce is wonderful. The establishment is hidden
up a little road behind Carbost, and it’s popular enough in summer to
make parking quite an adventure: a very worthwhile one though.
</p><p>For more details visit <a href="https://www.theoysterman.co.uk">their website</a>
or see <a href="http://maps.google.com/maps?q=N+57+17.967+W+6+21.459">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>Cafe Lephin, Glendale
</h2><p>A lovely roadside cafe that I will long remember for its haggis
panini: a true work of genius.
</p><p>For more details visit <a href="http://www.cafelephin.co.uk">their website</a>
or see <a href="http://maps.google.com/maps?q=N+57+26.915+W+6+42.494">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p><h2>Three Chimneys, Colbost
</h2><p>I think it’s hard to talk about good food on Skye without mentioning
the Three Chimneys—how many restaurants have their <a href="https://en.wikipedia.org/wiki/The_Three_Chimneys">own page</a>
on Wikipedia?
</p><p>In practice, the food is excellent, the rooms very comfortable, and
the hospitality perfect. Perhaps I expected more, but it seemed to
lack a bit of <em>je ne sais quoi</em>.
</p><p>For more details visit <a href="https://www.threechimneys.co.uk">their website</a>
or see <a href="http://maps.google.com/maps?q=N+57+26.609+W+6+38.503">Google Maps.</a>
</p><p><em><small>Last visited in August 2021.</small></em>
</p>0984B4C0-291E-11EB-BF5E-A9DE63CDBC1C2020-11-17T21:39:16:16Z2020-12-12T12:20:35:35ZCanon lenses and the HQ cameraMartin Oldfield<p>Fun and games with FD-Mount SLR lenses on the
Raspberry Pi High Quality Camera.
</p><h2>Introduction
</h2><p>In 2020, Raspberry Pi released a new <a href="https://www.raspberrypi.org/products/raspberry-pi-high-quality-camera/">High Quality
Camera</a>
which unlike previous cameras doesn’t come attached to a lens. Instead
the camera sports a standard
<a href="https://en.wikipedia.org/wiki/C_mount#CS_mount">CS-mount</a> thread to
which a variety of lenses can be attached.
</p><p><img alt="[High Quality Camera]" class="img_border_small" src="hqc-2.jpg">
</p><p>Most of the recommended lenses are fairly small, and have focal
lengths less than 50mm. This article explores lenses for Canon (D)SLR
cameras which have significantly longer focal lengths, and can thus
resolve small objects further away. Such lenses are big and quite
heavy, so you’ll probably need to use a tripod, and indeed a better
tripod mount than provided by the HQ camera itself. A longer ribbon
cable between the camera and the Raspberry Pi is also convenient.
</p><p>Most contemporary Canon DSLR cameras use the
<a href="https://en.wikipedia.org/wiki/Canon_EF_lens_mount">EF-mount</a> standard
which mandates that the aperture is controlled electronically by the
camera. Normally this is very convenient, but the lack of manual
control is frustrating if you’re using the lens without a Canon
camera. Happily earlier
<a href="https://en.wikipedia.org/wiki/Canon_FD_lens_mount">FD-mount</a> lenses
are still available in the second-hand market and are entirely
mechanical. This makes them easy to adjust manually, and thus more
suitable than EF lenses for our application. They’re also a lot
cheaper than newer lenses.
</p><p>Many third parties made FD-mount lenses and some are available at
even lower prices than those made by Canon. I stuck to Canon for
the shorter focal lengths, but tried some exotic long lenses too.
</p><h2>The High Quality Camera
</h2><p>The <a href="https://www.raspberrypi.org/products/raspberry-pi-high-quality-camera/">High Quality
Camera</a>
is based around the Sony
<a href="https://www.sony-semicon.co.jp/products/common/pdf/IMX477-AACK_Flyer.pdf">IMX477R</a>
sensor. The sensor has 4056 x 3040 active pixels, or about 12.3M
pixels in total. Each pixel is 1.55µm square, giving a sensor
diagonal of about 7.9mm.
</p><p>As a point of comparison with other cameras, it might be helpful to
compare this sensor to those in DSLR (and similar) cameras. Let’s
begin with the sensor’s size. Wikipedia has a <a href="https://en.wikipedia.org/wiki/Image_sensor_format">helpful
page</a> which shows
the sizes of sensors used in many different cameras. Of particular
note is the 35mm ‘full-frame’ sensor which matches the 35mm SLR film
standard, and is a benchmark for photographers. This sensor has a
diagonal of about 43mm so it’s about 5.4 times the linear size of the
HQ camera’s sensor and covers nearly thirty times the area.
</p><p>To see the consequences of this, imagine taking a photo with a 50mm
lens on a full-frame camera. If we now replace the sensor with the one
from the HQ camera, then most of the image will fall outside the
sensor. Put another way, the field-of-view of the HQ camera will be
much smaller than for the full-frame sensor: it will see about 18% of
the image size (or about a thirtieth of the area).
</p><p>Roughly speaking, a 50mm lens on the HQ camera will thus show the same
view as a 270mm lens on the full-frame camera. As another example, if
someone recommends that you should use a 50mm lens on a full-frame
camera to get a particular composition, on the HQ camera you should
use a 10mm lens instead. More excitingly, a relatively normal 200mm
lens on the HQ camera will have a similar reach to a 1000mm lens on a
full-frame sensor.
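</p><p>The arithmetic above is easy to reproduce. This sketch uses the
approximate sensor diagonals quoted earlier:
</p>

```python
# Rough 'crop factor' of the HQ camera relative to a full-frame sensor,
# computed from the sensor diagonals quoted above.
full_frame_diag_mm = 43.0   # approximate full-frame diagonal
hq_diag_mm = 7.9            # HQ camera (IMX477) diagonal

crop_factor = full_frame_diag_mm / hq_diag_mm
print("Crop factor: %.1f" % crop_factor)

# A lens on the HQ camera frames like a much longer lens on full-frame:
for f_mm in (10, 50, 200):
    print("%dmm on the HQ camera ~ %.0fmm full-frame" % (f_mm, f_mm * crop_factor))
```

<p>The 50mm row reproduces the 270mm figure above, give or take rounding.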
</p><p>It’s also useful to compare the pixel size of the sensors. The current
top-end sensors from Canon and Sony have about 30M and 61M pixels
spread across a full-frame sensor, or equivalently have 5.4µm and
3.7µm pixels. Bigger pixels mean less resolution for a given sensor
size, but they collect more light so we’d expect them to have better
performance in low-light.
</p><p>Finally, suppose we took photographs with the HQ camera and 100mm
lens, and with the Sony full-frame camera and 500mm lens. As discussed
above, these should have similar fields of view but the Sony sensor
divides the image into 60M pixels to the HQ camera’s 12M: we can think
of each pixel on the HQ camera as equivalent to 2.3 x 2.3 pixels on
the Sony. The Sony system is likely to be a lot better in other ways
too, but it will also be vastly more expensive.
</p><h3>CS and C mounts
</h3><p>In mechanical terms two things matter about the mount: how does it
attach to the lens, and how much distance lies between the sensor and
the back of the lens—the <a href="https://en.wikipedia.org/wiki/Flange_focal_distance">flange focal
distance (FFD)</a>.
</p><p>The HQ camera has a
<a href="https://en.wikipedia.org/wiki/C_mount#CS_mount">CS-mount</a> which takes
the form of a 1" diameter thread with 32 threads per inch, and a FFD of
12.526mm.
</p><p>The same thread is used for the
<a href="https://en.wikipedia.org/wiki/C_mount#CS_mount">C-mount</a>, but the FFD
is longer: 17.526mm. To attach a C-mount lens to a CS-mount body is
easy: just use a 5mm extender.
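</p><p>The 5mm figure is just the difference between the two flange focal
distances:
</p>

```python
# Flange focal distances (mm) for the two mount standards quoted above.
cs_mount_ffd = 12.526
c_mount_ffd = 17.526

spacer = c_mount_ffd - cs_mount_ffd
print("C-mount lens on a CS-mount body needs a %.0fmm spacer" % spacer)
```

<p>Going the other way (a CS-mount lens on a C-mount body) would need a
negative spacer, which is why that direction doesn’t work.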
</p><p>If you screw a C-mount lens directly to a CS-mount body like the HQ
camera, it will appear to work, but you’ll probably find it impossible
to focus the lens properly.
</p><h2>Canon FD lenses
</h2><p>As mentioned in the introduction, I think <a href="https://en.wikipedia.org/wiki/Canon_FD_lens_mount">Canon FD
lenses</a> are a great
choice for the HQ camera. They are totally mechanical, which makes it
easy to adjust things manually, and readily available at good prices on
<a href="https://www.ebay.co.uk/sch/i.html?_nkw=canon+fd+lens">eBay</a>. All the
lenses I bought worked, though some would benefit from cleaning.
</p><p>The FFD for FD lenses is 42mm, comfortably longer than both the C- and
CS-mount standards so the adapter needs only to have the right
mountings and to hold the lens the right distance away from the
sensor.
</p><p>I couldn’t find FD- to CS-mount adapters, but FD- to C-mount adapters
are plentiful. Remember to use the 5mm ring to convert the native
CS-mount of the HQ camera to C-mount. Without it, the converter will
fit, but it’s likely that you won’t be able to focus.
</p><p><img alt="[FD-mount adapter fitted to camera]" class="img_border_small" src="hqc-3.jpg">
</p><h3>Attaching the lens
</h3><p>I’m not sure if it’s a lack of familiarity, but I found the FD lenses
to be much fiddlier to attach to a camera body than Canon’s newer EF
lenses.
</p><p>For most FD lenses, the key is to align the three flanges on the
body/adapter with the lens, then rotate the lens and body relative to
each other until you hear and feel the lens lock. To unlock things
there’s usually a button on the lens.
</p><p>One of the flanges is larger, has a small stud on the lens, and is
marked by a red dot. If these aren’t aligned, you need to rotate the
inner part of the lens to fix it.
</p><p><img alt="[FD-mount lens]" class="img_border_small" src="hqc-1.jpg">
</p><p>The FD- to C-mount converters I bought have a ring on them which is
labelled “Lock ⟷ Open”. This unlocks the aperture and has nothing to
do with attaching the lens to the body. I think it should be in the
“Open” position when you’re attaching the lens, then moved to the
“Lock” position to unlock the iris.
</p><p>Some older lenses have a loose silver ring which you rotate to attach
the lens. On such lenses the lens and body don’t rotate with respect
to each other.
</p><p>There are plentiful YouTube videos showing all this, and good
articles too. I liked one on <a href="http://cholla.mmto.org/photography/gear/canon_fd/introduction.html">Tom’s Digital Photography</a>.
</p><h3>Aperture lock
</h3><p>Although the FD lens is mechanical, the aperture ring only affects
the aperture when the lens is attached to a body. I think you can
unlock it by cunningly pressing the right spots on the lens, but it’s
easier to just mate it with the adapter. Sometimes you have to rotate
a ring on the adapter to unlock the iris.
</p><p>It’s worth remembering this if you buy a lens and think the aperture
control is broken.
</p><h3>Lens hoods
</h3><p>It always seemed to me that old books on photography made too much
fuss about lens hoods. Having now played with some older lenses, I
realise that modern optical coatings are completely amazing, and
if you use lenses without them you really <em>do</em> need to worry about
stray light falling on the surface of the lens.
</p><p>Using uncoated lenses without a hood usually gives washed out,
poor contrast, images.
</p><h3>FD lenses on EF bodies
</h3><p>Having found some splendid old lenses, I was naturally interested
to try them on a modern Canon body too.
</p><p>The FFD for EF lenses is 44mm, 2mm longer than for FD, and the problem
is exacerbated by the finite thickness of the adapter which adds
5–10mm. To make things work properly you need an extra lens in the
adapter: without it you won’t be able to focus properly.
</p><p>Such adapters exist, but are obviously more expensive than those
without optics. Sadly adapters <em>without</em> optics also exist, so care is
needed when buying them.
</p><h2>Canon EF lenses
</h2><p>For completeness, I include a few notes about using EF lenses on
the HQ camera.
</p><p>As with FD lenses, the FFD (44mm) is longer than for a CS- or C-mount,
so no optics are required in the adapter. Again the adapters seem to
target C-mounts, so you’ll need the extension tube.
</p><p>Most EF lenses support manual focus, so the only problem is
controlling the aperture.
</p><p>There is at least one open-source project which tries to control the
lens: Jan Henrik Hemsing’s
<a href="https://github.com/Jan--Henrik/EF-S-Adapter">EF-S-Adapter</a>. Other
information is
<a href="https://pickandplace.wordpress.com/2011/10/05/canon-ef-s-protocol-and-electronic-follow-focus/">scattered</a>
<a href="https://www.dslr-forum.de/showthread.php?t=649529">around</a>
<a href="https://www.dslr-forum.de/showthread.php?t=649529&page=61">the</a>
<a href="http://oliford.co.uk/phys/canon-lens-protocol/">Internet</a>.
At least some people claim to have damaged lenses doing this,
so proceed at your own risk.
</p><p>If you could get all this to work without letting the magic smoke out,
you could get the Raspberry Pi to actually control the lens without
manual intervention. That sounds like a cool project!
</p><h2>Sample image
</h2><p>The image below shows a <a href="https://en.wikipedia.org/wiki/Lego_minifigure">Lego
minifigure</a> taken from
about 20m away using a 400mm Prinz Galaxy lens which I bought on eBay
for about £35. Even on a reasonable tripod, aligning and focussing
this was a bit tricky using the 7" Raspberry Pi display, but I was
delighted with the result.
</p><p><img alt="A lego minifigure from 20m" class="img_border_small" src="minifig.jpg">
</p><p>Even if we crop the image, it is still relatively sharp; you can clearly
see the figure smiling:
</p><p><img alt="Cropped view" class="img_border_small" src="minifigz.jpg">
</p><h2>Conclusions
</h2><p>Putting a nice lens on the High Quality Camera makes it a lot more
versatile, and lets you capture a wider range of images. Although
relatively old, FD-mount lenses are an inexpensive way to
explore this.
</p>59EF6780-ED2D-11E4-AA11-F7DA0D0BC63C2015-04-27T22:32:10:10Z2020-09-07T14:19:15:15ZMonoids in HaskellMartin Oldfield<p>Brief notes on monoids in Haskell. </p><h2>Introduction</h2>
<p>Some very brief notes summarizing Haskell’s monoids. It’s my crib sheet, written partly to straighten matters in my own mind and partly for future reference.</p>
<p>Most of the information here comes from elsewhere: see a list at the end of the article. I’m also indebted to Dominic Prior for many helpful discussions. Dominic is collecting <a href="https://docs.google.com/document/d/1DvbcQTibeUEOVmoLO14vvRa27kf6y29sObUmQpyFn9g/pub">useful and interesting code</a> on Google Docs.</p>
<h2>Groups</h2>
<p>Many people are familiar with <a href="http://en.wikipedia.org/wiki/Group_(mathematics)">groups.</a> Every group has:</p>
<ul>
<li>an associative, binary operation ⊕;</li>
<li>a set of elements closed under ⊕;</li>
<li>an identity element;</li>
<li>an inverse for every element.</li>
</ul>
<p>For example, consider the integers under addition.</p>
<p>We have:</p>
<ul>
<li>associativity: (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c);</li>
<li>an identity, 0: a ⊕ 0 = 0 ⊕ a = a;</li>
<li>an inverse, -a: a ⊕ (-a) = (-a) ⊕ a = 0.</li>
</ul>
<h2>Semigroups</h2>
<p>Now, consider instead the positive integers under addition. We still have an interesting structure, but because the set of elements does not include 0 there’s no identity element. Similarly the lack of negative numbers means there’s no inverse.</p>
<p>Such a structure is called a <a href="https://en.wikipedia.org/wiki/Semigroup">semigroup</a></p>
<p>Moving away from the integers, the set of strings of finite, non-zero length forms a semigroup under concatenation.</p>
<h2>Monoids</h2>
<p>Perhaps throwing away both the inverses and the identity is too much. If we have inverses we must have the identity, but the converse isn’t true. So, let’s consider a group without inverses. This is a monoid.</p>
<p>Three examples come easily to mind:</p>
<table class="spaced" cellspacing="0"><tr><th align="center">Set</th><th align="center">Operation</th><th align="center">Identity</th></tr><tr><td align="center">Natural Numbers</td><td align="center">+</td><td align="center">0</td></tr><tr><td align="center">Positive Integers</td><td align="center">*</td><td align="center">1</td></tr><tr><td align="center">Strings</td><td align="center">++</td><td align="center">""</td></tr></table>
<p>Given that such basic things admit a monoidal structure, it is not surprising to find more complicated things do too. For example, Brent Yorgey’s fine <a href="http://projects.haskell.org/diagrams/">diagrams</a> package provides a <a href="http://projects.haskell.org/diagrams/doc/manual.html#semigroups-and-monoids">monoidal instance</a> for diagrams. In words, we can combine two diagrams to make a new diagram.</p>
<h2>Data.Monoid</h2>
<p>In Haskell the monoid typeclass lives in <a href="https://downloads.haskell.org/~ghc/latest/docs/html/libraries/base/Data-Monoid.html">Data.Monoid</a> which gives us:</p>
<ul>
<li>the associative operator <code>mappend</code> or <code><></code>,</li>
<li>the identity element <code>mempty</code>,</li>
</ul>
<p>both are subject to laws:</p>
<ul>
<li><code>mempty</code> <code><></code> <code>a</code> = <code>a</code>,</li>
<li><code>a</code> <code><></code> <code>mempty</code> = <code>a</code>,</li>
<li>(<code>a</code> <code><></code> <code>b</code>) <code><></code> <code>c</code> = <code>a</code> <code><></code> (<code>b</code> <code><></code> <code>c</code>).</li>
</ul>
<p>Note that you really ought to use <code>mappend</code> when implementing your own monoids, but it’s just too ugly for me.</p>
<p>Further, there is a <code>mconcat</code> method which combines a list of elements. There’s a default implementation which simply folds <code><></code>, but instances might be able to implement it more efficiently.</p>
<p>You can see that the function names are somewhat inspired by the list instance:</p>
<pre><code>instance Monoid [a] where
  mempty  = []
  mappend = (++)</code></pre>
<p>So we can concatenate lists more abstractly:</p>
<pre><code>> "the " <> "quick"
"the quick"
> mempty <> "quick"
"quick"
> mconcat [ "the ", "quick ", "brown " ]
"the quick brown "</code></pre>
<p>Now let’s turn to the integers. Recall that there are two different monoids: one under multiplication and the other under addition. Haskell handles this with two newtype wrappers: <code>Product</code> and <code>Sum</code> respectively.</p>
<pre><code>> Product 2 <> Product 3
Product {getProduct = 6}
> Product 2 <> mempty
Product {getProduct = 2}
> Sum 2 <> Sum 3
Sum {getSum = 5}
> Sum 2 <> Sum 0
Sum {getSum = 2}
> Sum 2 <> mempty
Sum {getSum = 2}
> mconcat $ map Sum [1..10]
Sum {getSum = 55}
> mconcat $ map Product [1..10]
Product {getProduct = 3628800}</code></pre>
<p>The instance implementation looks like this:</p>
<pre><code>newtype Product a = Product { getProduct :: a }
    deriving (Eq, Ord, Read, Show, Bounded, Generic, Generic1, Num)

instance Num a => Monoid (Product a) where
  mempty = Product 1
  Product x <> Product y = Product $ x * y
</code></pre>
<h2>The Maybe monoid</h2>
<p>We can often think of the Maybe type as being a special case of lists with at most one element, and so unsurprisingly there’s a monoid instance for Maybe too:</p>
<pre><code>> Just "a" <> Just "b"
Just "ab"
> Nothing <> Just "b"
Just "b"
> mempty <> Just "b"
Just "b"
> mconcat $ map (\x -> Just [x]) ['a' .. 'f']
Just "abcdef" </code></pre>
<p>An implementation is straightforward:</p>
<pre><code>instance Monoid a => Monoid (Maybe a) where
  mempty = Nothing
  Just a  <> Just b  = Just $ a <> b
  Just a  <> Nothing = Just a
  Nothing <> Just b  = Just b
  Nothing <> Nothing = Nothing</code></pre>
<p>Assuming that <code>mempty</code> = <code>Nothing</code> the last three equations follow from the monoid laws, but we have more freedom when evaluating</p>
<pre><code> Just a <> Just b</code></pre>
<p>Ignoring <code>Just $ a <> b</code> there are only two choices:</p>
<ul>
<li><code>Just a</code></li>
<li><code>Just b</code></li>
</ul>
<p>and it turns out that both choices have been instantiated as <code>First</code> and <code>Last</code>. We’ll consider <code>First</code> in more detail:</p>
<pre><code>newtype First a = First { getFirst :: Maybe a }
    deriving (Eq, Ord, Read, Show, Generic, Generic1,
              Functor, Applicative, Monad)

instance Monoid (First a) where
  mempty = First Nothing
  First (Just a) <> First (Just b) = First $ Just a
  First (Just a) <> First Nothing  = First $ Just a
  First Nothing  <> First (Just b) = First $ Just b
  First Nothing  <> First Nothing  = First Nothing</code></pre>
<p>So we have a way of picking out the first or last interesting entry. For example, let’s set up a little database with just a couple of interesting characters in it: a and b:</p>
<pre><code>> let interesting = [ 'a', 'b' ]
> let q c = if c `elem` interesting then Just c else Nothing
> q 'a'
Just 'a'
> q 'c'
Nothing</code></pre>
<p>Now let’s look at the monoids:</p>
<pre><code>> mconcat $ map (First . q) "cabinet"
First {getFirst = Just 'a'}
> mconcat $ map (Last . q) "cabinet"
Last {getLast = Just 'b'}
> mconcat $ map (Last . q) "desk"
Last {getLast = Nothing}</code></pre>
<p>Note that because we are <em>selecting</em> one of the existing values and <em>not creating</em> one, we don’t need the underlying data type to be a monoid itself. This isn’t the case with the plain Maybe monoid.</p>
<h2>Maximum, <span class="caps">AND,</span> OR</h2>
<p>Given a set of numbers we could form another monoid over maximum. There’s no standard instance, but it’s easy to write one. In fact, it’s easy to write two!</p>
<p>The key decision is <code>mempty</code>. We could just reuse Maybe:</p>
<pre><code>newtype MaxM a = MaxM { getMaxM :: Maybe a }
deriving (Eq, Ord, Read, Show)
instance Ord a => Monoid (MaxM a) where
mempty = MaxM Nothing
a <> MaxM Nothing = a
MaxM Nothing <> b = b
MaxM (Just a) <> MaxM (Just b) = MaxM . Just $ max a b</code></pre>
<p>Alternatively we could make <code>mempty</code> the lower bound for the type in question (if such a thing exists):</p>
<pre><code>newtype MaxB a = MaxB { getMaxB :: a }
deriving (Eq, Ord, Read, Show, Bounded, Generic, Generic1, Num)
instance (Bounded a, Ord a) => Monoid (MaxB a) where
mempty = minBound
mappend = max</code></pre>
<p>Note that because <code>minBound</code> depends on the type, we’ll often have to supply one explicitly:</p>
<pre><code>> MaxB 1 <> MaxB 2 :: MaxB Int
MaxB {getMaxB = 2}</code></pre>
<p>We can play games with different types:</p>
<pre><code>> import Data.Int
> import Data.Word
> mempty :: MaxB Int16
MaxB {getMaxB = -32768}
> mempty :: MaxB Word16
MaxB {getMaxB = 0}
> mempty :: MaxB Int
MaxB {getMaxB = -9223372036854775808}</code></pre>
<p>But not <code>Integer</code>: being unbounded it doesn’t have a bound!</p>
<pre><code>> mempty :: MaxB Integer
<interactive>:...:
No instance for (Bounded Integer) arising from a use of ‘mempty’
In the expression: mempty :: MaxB Integer
In an equation for ‘it’: it = mempty :: MaxB Integer</code></pre>
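<p>As an aside, more recent versions of <code>base</code> ship something similar: <code>Max</code> from <code>Data.Semigroup</code>, whose Monoid instance likewise demands a <code>Bounded</code> type (check your <code>base</code> version for the exact constraints):</p>
<pre><code>import Data.Semigroup (Max(..))

-- The standard Max wrapper: <> takes the larger value, and
-- mempty is minBound (hence the Bounded constraint).
largest :: [Int] -> Int
largest = getMax . mconcat . map Max</code></pre>
<p>For unbounded types like <code>Integer</code> you’re back to the Maybe-style wrapper, just as with <code>MaxM</code> above.</p>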
<p>At the other end of the size scale, consider 1-bit integers. With the usual equivalences, 0 ≡ False and 1 ≡ True, we find <code>max</code> ≡ <code>||</code> and <code>min</code> ≡ <code>&&</code>. These instances are standard ones: <code>Any</code> and <code>All</code>:</p>
<pre><code>> Any True <> Any True
Any {getAny = True}
> Any True <> Any False
Any {getAny = True}
> All True <> All True
All {getAll = True}
> All True <> All False
All {getAll = False}</code></pre>
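<p>Lifting a predicate into <code>Any</code> or <code>All</code> and combining the results is essentially how the Prelude’s <code>any</code> and <code>all</code> behave. A quick sketch:</p>
<pre><code>import Data.Monoid (Any(..), All(..))

-- Lift a predicate into the monoid, then collapse with mconcat.
anyEven, allEven :: [Int] -> Bool
anyEven = getAny . mconcat . map (Any . even)
allEven = getAll . mconcat . map (All . even)</code></pre>
<p>Note the identity elements: an empty list gives <code>False</code> for <code>anyEven</code> (since <code>mempty = Any False</code>) but <code>True</code> for <code>allEven</code>.</p>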
<h2>Ordering</h2>
<p>Haskell defines a comparison function for the <code>Ord</code> typeclass. <code>a `compare` b</code> will return:</p>
<ul>
<li><code>EQ</code> if <code>a</code> = <code>b</code>;</li>
<li><code>GT</code> if <code>a</code> > <code>b</code>;</li>
<li><code>LT</code> if <code>a</code> < <code>b</code>.</li>
</ul>
<p>If we define a monoid instance akin to First, where EQ plays the role of Nothing, then we’ll get first-is-most-significant comparisons.</p>
<pre><code>instance Monoid Ordering where
mempty = EQ
EQ <> b = b
a <> _ = a
> mconcat $ zipWith compare [1,8,9] [3,4,5]
LT</code></pre>
<p>However, the real trick, which I first saw on <a href="http://www.reddit.com/r/programming/comments/7cf4r/monoids_in_my_programming_language/c06adnx">reddit</a> is to append two comparison functions:</p>
<pre><code>> :t comparing length
comparing length :: [a] -> [a] -> Ordering
> :t compare
compare :: Ord a => a -> a -> Ordering
> :t comparing length <> compare
comparing length <> compare :: Ord a => [a] -> [a] -> Ordering
> sortBy (comparing length <> compare) $ words "the quick brown fox"
["fox","the","brown","quick"]</code></pre>
<h2>The Writer Monad</h2>
<p>Finally, a common use for monoids is the Writer monad: the things we log must be monoidal. In modern parlance we should refer to the <a href="http://hackage.haskell.org/package/mtl-2.2.1/docs/Control-Monad-Writer-Lazy.html#g:1">MonadWriter class.</a></p>
<p>Following <a href="http://stackoverflow.com/questions/11684321/how-to-play-with-control-monad-writer-in-haskell">Chris Taylor on StackOverflow,</a> let’s define a toy action, parameterized by the logging method:</p>
<pre><code>import Control.Monad.Writer
> let toyAction l = do { a <- l 3; b <- l 5; return (a*b) }</code></pre>
<p>Let’s start with a fairly traditional log:</p>
<pre><code>> let logS x = writer (x, "Got " ++ show x ++ "\n")
> runWriter $ toyAction logS
(15,"Got 3\nGot 5\n")</code></pre>
<p>or a list of numbers encountered:</p>
<pre><code>> let logN x = writer (x, [x])
> runWriter $ toyAction logN
(15,[3,5])</code></pre>
<p>or just a count of them:</p>
<pre><code>> let logA x = writer (x, Sum 1)
> runWriter $ toyAction logA
(15,Sum {getSum = 2})</code></pre>
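<p>Since a pair of monoids is itself a monoid, we could even keep both logs at once. No Writer machinery is needed to see the idea:</p>
<pre><code>import Data.Monoid (Sum(..))

-- Combining a String log and a counter in one go, using the
-- tuple monoid: each component is combined separately.
combinedLog :: (String, Sum Int)
combinedLog = ("Got 3\n", Sum 1) <> ("Got 5\n", Sum 1)</code></pre>
<p>which evaluates to <code>("Got 3\nGot 5\n", Sum 2)</code>.</p>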
<h2>Endo</h2>
<p><a href="https://en.wikipedia.org/wiki/Endomorphism">Endomorphisms</a> are maps from a thing to itself, which in Haskell terms means functions of type <code>(a -> a)</code>. You can make a monoid from these under function composition:</p>
<pre><code>> (+2) . (+3) $ 10
15
> (appEndo $ Endo (+2) <> Endo (+3)) 10
15</code></pre>
<h2>Foldable</h2>
<p>The Foldable typeclass models reducing a set of things to a single value. The minimal implementation is either <code>foldr</code> or <code>foldMap</code>. The latter is perhaps most interesting here:</p>
<pre><code>foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m</code></pre>
<p>If we specialize to lists, we get an intuitive picture:</p>
<pre><code>foldMap :: (Monoid m) => (a -> m) -> [a] -> m
foldMap f = mconcat . fmap f</code></pre>
<p>In other words to apply <code>foldMap</code>, first map things into a Monoid, and then collapse the structure with <code>mconcat</code>. We used exactly this construction above, and so those examples can be expressed more succinctly with <code>foldMap</code>. For example:</p>
<pre><code>> foldMap Sum [1..10]
Sum {getSum = 55}
> foldMap Product [1..10]
Product {getProduct = 3628800}</code></pre>
<p>The trick to expressing <code>foldr</code> in terms of <code>foldMap</code> is to note that the step function in <code>foldr</code> has type:</p>
<pre><code>(a -> b -> b) = (a -> (b -> b))</code></pre>
<p>and that <code>(b -> b)</code> forms a monoid under composition (see Endo above). So to <code>foldr</code> on a list of <code>[a]</code>:</p>
<ul>
<li>map all the <code>a</code> into transformation functions with type <code>b -> b</code>;</li>
<li>compose those functions into one overall transform;</li>
<li>apply that to the starting <code>b</code>.</li>
</ul>
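<p>Those three steps translate directly into code; as far as I can tell the default definition in <code>base</code> is essentially this:</p>
<pre><code>import Data.Monoid (Endo(..))

-- foldr reconstructed from foldMap: map each element to a
-- transformation (b -> b), compose them all, apply to the seed.
foldrViaEndo :: (a -> b -> b) -> b -> [a] -> b
foldrViaEndo f z xs = appEndo (foldMap (Endo . f) xs) z</code></pre>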
<p>You can read a fuller explanation of this in the <a href="https://en.wikibooks.org/wiki/Haskell/Foldable">Haskell wikibook</a>.</p>
<h2>Other discussions</h2>
<p>Most of the material here has been stolen from other pages. If you want to consult these primary sources I recommend:</p>
<ul>
<li><a href="https://wiki.haskell.org/Typeclassopedia#Monoid">The typeclassopedia;</a></li>
<li><a href="http://blog.sigfpe.com/2009/01/haskell-monoids-and-their-uses.html">a 2009 article by Dan Piponi;</a></li>
<li>and <a href="http://apfelmus.nfshost.com/articles/monoid-fingertree.html">Heinrich Apfelmus’ article on annotated trees.</a> </li>
</ul>03CAC346-DC82-11EA-ADEC-1B15263E50072020-07-28T16:58:01:01Z2020-08-13T15:51:45:45ZDiagrams with HaskellMartin Oldfield<p>Thoughts on using Haskell
</p><h2>Introduction
</h2><p>I like using Brent Yorgey’s
<a href="https://archives.haskell.org/projects.haskell.org/diagrams/">diagrams</a>
package to create images by writing
<a href="https://www.haskell.org">Haskell</a>. Here’s a small example:
</p><p><img alt="[]" class="img_noborder" src="img-0004.svg">
</p><p>and the code to generate it:
</p><pre><code>hello :: Diagram B
hello = vsep 5
. zipWith letterRow [ "HELLO", "WORLD!" ]
$ L.tails cols
where letterRow ls = centerX . hsep 5 . zipWith letterDisc ls
letterDisc l c = letter l <> circle 10 # lw 2.5 # fc c
cols = cycle [ red, green, blue ]
letter c = stroke (textSVG [c] 20)
		 # fc yellow # lw 1.0 # lc yellow
</code></pre><p>In the example above graphics primitives <code>circle</code>, <code>stroke</code>, and
<code>textSVG</code> are combined to make the final image. The combinators
include <code>hsep</code> and <code>vsep</code>, which take lists of elements and stack them
separated by a space, and the <code><></code> operator which puts one diagram on
top of another.
</p><p>There are lots of little modifiers e.g. <code>lw</code> which sets the line
width: these are just functions, but the <code>#</code> operator lets us write
the object being styled before the styling.
</p><p>This article covers a few points which struck me as being
particularly interesting, or which I wanted to think about more
carefully. All the information is included in the fine, official
documentation:
</p><ul><li><p><a href="https://archives.haskell.org/projects.haskell.org/diagrams/doc/quickstart.html">A quick-start tutorial</a>;
</p></li><li><p><a href="https://archives.haskell.org/projects.haskell.org/diagrams/doc/manual.html">the user manual</a>;
</p></li><li><p><a href="http://hackage.haskell.org/package/diagrams">the diagrams package on Hackage</a>.
</p></li></ul><h2>A principled package
</h2><p>Like Haskell the diagrams package has a strong theoretical
underpinning. As an example, an important distinction is made between
a location in space (a Point) and a displacement in space (a
Vector). Although both can be represented by a coordinate tuple, they
are very different animals:
</p><ul><li><p>It makes no sense to add a Point to a Point and get a Point, but
it is perfectly natural to add a Vector to a Vector and get a Vector.
You could also add a Vector to a Point and get another Point.
</p></li><li><p>If you translate a Point it becomes a different Point; translating a
Vector leaves it unchanged. If this seems odd it might help to think
of a Vector as the displacement between two Points, both of which will
move in the same way when translated.
</p></li><li><p>You can’t turn a Vector into a Point unless you specify an Origin.
</p></li></ul><p>Some software conflates Points and Vectors, perhaps because they often
have the same representation: the Haskell diagrams package doesn’t. If
you think the distinction is worthwhile, then I think you’ll enjoy
using diagrams; on the other hand, if you think it’s just pedantry I
suspect you’ll be frustrated.
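</p><p>The distinction is easy to sketch in plain Haskell. This toy code is
mine, not the diagrams API (which builds the real types on the linear
package):
</p><pre><code>newtype Vec = Vec (Double, Double) deriving (Eq, Show)
newtype Pt  = Pt  (Double, Double) deriving (Eq, Show)

-- Vector + Vector and Point + Vector both make sense...
addVV :: Vec -> Vec -> Vec
addVV (Vec (a, b)) (Vec (c, d)) = Vec (a + c, b + d)

addPV :: Pt -> Vec -> Pt
addPV (Pt (a, b)) (Vec (c, d)) = Pt (a + c, b + d)

-- ...and translation moves a Point but leaves a Vector alone:
translateXP :: Double -> Pt -> Pt
translateXP dx p = addPV p (Vec (dx, 0))

translateXV :: Double -> Vec -> Vec
translateXV _ v = v</code></pre><p>There is deliberately no <code>Pt -> Pt -> Pt</code> addition here: that
operation has no geometric meaning.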
</p><p>The official diagrams documentation has a helpful <a href="https://diagrams.github.io/doc/vector.html">introduction to
vectors and points</a> which
discusses all this in more detail.
</p><h3>Silly games with Vectors and Points
</h3><p>The basic type of a two-dimensional vector is <code>V2 n</code> where <code>n</code>
tells us the underlying scalar type e.g. <code>Double</code>. You can
make such a vector in lots of ways:
</p><pre><code>*Main> V2 1.0 2.0
V2 1.0 2.0
*Main> r2 (3.0, 4.0)
V2 3.0 4.0
*Main> (5.0 ^& 6.0) :: V2 Double
V2 5.0 6.0</code></pre><p>You can make Points in similar ways, though note that there’s no
<code>P2</code> constructor:
</p><pre><code>*Main> p2 (1.0, 2.0)
P (V2 1.0 2.0)
*Main> (3.0 ^& 4.0) :: P2 Double
P (V2 3.0 4.0)</code></pre><p>Although it’s an internal detail, we make a Point by wrapping
a Vector. For example we could have written the last example
above as:
</p><pre><code>*Main> (3.0 ^& 4.0) :: Point V2 Double
P (V2 3.0 4.0)</code></pre><p>Having created Vectors and Points, we can now transform them. Here
we translate a Vector and a Point, noting that the former is
unchanged:
</p><pre><code>*Main> translateX 10 $ r2 (0,1)
V2 0 1
*Main> translateX 10 $ p2 (0,1)
P (V2 10 1)</code></pre><p>As you might expect, we can do all this in three-dimensions too e.g.:
</p><pre><code>> (0.0 ^& 1.0 ^& 2.0) :: V3 Double
V3 0.0 1.0 2.0
*Main> translateX 10 $ p3 (0,1,2)
P (V3 10 1 2)
</code></pre><h3>Polymorphism
</h3><p>The astute reader will have noticed that we applied <code>translateX</code> to
both Points and Vectors, in both two- and three-dimensions. Clearly it’s
a polymorphic function so let’s look at its type:
</p><pre><code>*Main> :t translateX
translateX
:: (Additive (V t), Num (N t), R1 (V t), Transformable t) =>
N t -> t -> t</code></pre><p>This rather scary signature needs a bit of unpicking. Ignoring the stuff
before the fat arrow, we have the type:
</p><pre><code>N t -> t -> t</code></pre><p>Having seen how it’s used, we know that <code>t</code> is something like a Point
or a Vector, and <code>N t</code> is a scalar of the appropriate type. In the
examples above, we had e.g.:
</p><pre><code>t ~> V2 Double
N t ~> Double</code></pre><p>So it’s clear that <code>N</code> is a type-level function which extracts the
underlying type from a more complicated thing. Looking now at the
constraints before the fat arrow, we also see <code>V t</code> which is the
vector-space in which <code>t</code> lives.
</p><p>Most of the constraints on <code>t</code> are straightforward: it needs
to be transformable, the underlying type has to be numeric, and
so on. The most interesting term is <code>R1 (V t)</code> which loosely
means that the vector-space in which <code>t</code> lives has to have a
first dimension: <code>R1</code> extracts that coordinate.
</p><p>By contrast if we look at <code>translateZ</code>,
</p><pre><code>*Main> :t translateZ
translateZ
:: (Additive (V t), Num (N t), R3 (V t), Transformable t) =>
N t -> t -> t</code></pre><p>the <code>R1</code> constraint is now <code>R3</code> which constrains the vector-space
to have a third-dimension. In practical terms this means that if we
try to translate a two-dimensional point in the Z-direction, it will
fail at compile time:
</p><pre><code>*Main> translateZ 10 $ p2 (0,1)
<interactive>:66:1: error:
• Could not deduce (R3 V2) arising from a use of ‘translateZ’
from the context: Num n
bound by the inferred type of it :: Num n => P2 n
at <interactive>:66:1-24
• In the expression: translateZ 10
In the expression: translateZ 10 $ p2 (0, 1)
In an equation for ‘it’: it = translateZ 10 $ p2 (0, 1)</code></pre><p>The meaning of this might not be immediately obvious to the casual observer.
</p><h2>Type classes
</h2><p>It’s worth stating explicitly that many of the functions in the
diagrams API don’t take a particular type: rather they take any type
which conforms to the relevant type class constraints. This is elegant
and powerful, but it can lead to unwieldy signatures and Byzantine
error messages. The User Manual has some useful <a href="https://archives.haskell.org/projects.haskell.org/diagrams/doc/manual.html#tips-and-tricks">tips and tricks</a> on this topic.
</p><p>More positively, if we return to the <code>Transformable</code> type class
above, we can find <a href="https://archives.haskell.org/projects.haskell.org/diagrams/haddock/diagrams-core/Diagrams-Core-Transform.html#g:4">many
instances</a>.
Unsurprisingly you can apply <code>translateX</code> to all sorts of things,
including diagrams and other transformations. It’s nice that one
function can move so many things.
</p><p>As with the translation examples above, particular transformations may
place other constraints on the objects which are being
transformed. However any type will work if it has the necessary
instances to satisfy the constraints.
</p><p>The diagrams manual has a good <a href="https://diagrams.github.io/doc/manual.html#type-reference">Type class
reference</a>
which explains all this and more.
</p><h2>Monoids
</h2><p>A general theme in Haskell is that abstract mathematical ideas are
often translated into a Haskell type class. If you create something
which obeys the laws of the type class, you can make an instance of
the type class which both saves writing code and unifies syntax.
</p><p>For example, a <a href="https://en.wikipedia.org/wiki/Monoid">monoid</a> is a
structure with a single associative operation and an identity
element. Essentially this means that we can take two things and
combine them into another thing of the same type, and that there’s a
particular element which doesn’t change things when you combine with
it. If there isn’t such an identity element you formally have a semigroup,
not a monoid, but I’ll gloss over that distinction here.
</p><p>The Haskell type class corresponding to a monoid is
<a href="http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-Monoid.html">Data.Monoid</a>,
and a while ago I wrote <a href="../../2015/04/monoid.html">some notes</a> about
it. Rather than rehashing that theory, let’s just look at some
examples to illustrate the general idea.
</p><p>You can make a list monoid where the operation is concatenation, and
the identity element the empty list:
</p><pre><code>[1,2,3] <> [4,5,6] = [1,2,3,4,5,6]
[1,2,3] <> [] = [1,2,3]
[] <> [1,2,3] = [1,2,3]</code></pre><p>It’s easy to see that this is associative:
</p><pre><code>([1,2] <> [3,4]) <> [5,6] = [1,2,3,4] <> [5,6]
= [1,2,3,4,5,6]
[1,2] <> ([3,4] <> [5,6]) = [1,2] <> [3,4,5,6]
= [1,2,3,4,5,6]</code></pre><p>We could also make a monoid from the integers under addition with zero
as the identity (or a different one under multiplication):
</p><pre><code>1 <> 2 = 3
1 <> 0 = 1
0 <> 1 = 1
(1 <> 2) <> 3 = 3 <> 3
= 6
1 <> (2 <> 3) = 1 <> 5
= 6</code></pre><p>Although the meaning of the <code><></code> operator changes, it obeys the same rules
in both cases.
</p><p>Similarly, we can make a monoid for diagrams. Here, the operator
means putting one diagram on top of the other:
</p><p><img alt="[]" class="img_noborder" src="img-0000.svg">
</p><p>I think it’s clear that the empty diagram is a perfectly good
identity element here.
</p><p>Turning back to the operator, the order matters, as it does with
lists. Mathematically, we’d say that the operator isn’t commutative:
</p><p><img alt="[]" class="img_noborder" src="img-0001.svg">
</p><p>However, the operator is associative and that’s all that matters if
you want to be a monoid:
</p><p><img alt="[]" class="img_noborder" src="img-0002.svg">
</p><p><img alt="[]" class="img_noborder" src="img-0003.svg">
</p><h3>More diagrammatic Monoids
</h3><p>Besides diagrams themselves, the diagram package has many other
monoid instances.
</p><p>For example, if you have two transformations you can either apply them
sequentially, or combine them into one uber-transformation and then
apply that. So we can make a monoid instance for transformations.
</p><p>Other examples abound: the word ‘Monoid’ appears nearly fifty times
in the documentation for <a href="https://hackage.haskell.org/package/diagrams-core-1.4.2/docs/Diagrams-Core.html"><code>Diagrams.Core</code></a>.
</p><h2>Making a <code>#</code> of things
</h2><p>Diagrams makes extensive use of <code>#</code> which is flipped function
application. <code>fc red</code> is a function which makes the foreground-colour
of a diagram red. You might use it thus:
</p><pre><code>	fc red (circle 2)</code></pre><p>but it is more elegant to say:
</p><pre><code>	circle 2 # fc red</code></pre><p>It’s worth emphasizing that there’s nothing diagrams specific about
<code>#</code>. You could also say things like:
</p><pre><code> > "Wibble" # length
6</code></pre><p>The <code>&</code> function in <code>Data.Function</code> is similar, but has a lower
precedence (1 vs 8). If we used this instead, we would typically
need more parentheses, and we’re writing Haskell not Lisp.
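</p><p>As far as I can tell there’s no magic in <code>#</code> itself: it’s
just reversed function application with a high precedence, roughly:
</p><pre><code>infixl 8 #

-- Flipped application: put the argument first, the function second.
(#) :: a -> (a -> b) -> b
x # f = f x</code></pre><p>so a chain like <code>circle 2 # fc red # lw 2</code> reads
left-to-right as a pipeline.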
</p><h2>Named diagrams
</h2><p>Diagrams (including subdiagrams) can be named, and then subsequently
referred to by name. This is extremely helpful because it allows you
to refer to some element of a diagram after it’s been composed.
</p><p>For me it greatly extended the scope of the diagrams I could make
<a href="https://archives.haskell.org/projects.haskell.org/diagrams/doc/manual.html?ref#using-absolute-coordinates">without using explicit coordinates</a>.
)
</p><p>There are many ways to use names, but for simple cases I use
<a href="http://hackage.haskell.org/package/diagrams-lib-1.4.3/docs/Diagrams-Names.html#v:named"><code>named</code></a>
to give something a name:
</p><pre><code>circle 1 # fc green # named "Foo"</code></pre><p>and <a href="http://hackage.haskell.org/package/diagrams-lib-1.4.3/docs/Diagrams-Names.html#v:withName"><code>withName</code></a>
to operate on a named subdiagram:
</p><pre><code>addMark n = withName n $
atop . place (circle 0.1 # fc red) . location</code></pre><p>Names don't have to be strings: you can use any instance of
<a href="http://hackage.haskell.org/package/diagrams-lib-1.4.3/docs/Diagrams-Names.html#t:IsName``"><code>isName</code></a>.
</p><h3>Chess board example
</h3><p>To show how useful names can be, consider the example below which
draws a chess board with named squares, then fills it with pieces.
</p><p><img alt="[]" class="img_noborder" src="img-0005.svg">
</p><pre><code>data Square = Square Char Int
deriving (Eq, Ord, Show)
instance IsName Square
chessBoard :: PreparedFont Double -> Diagram B
chessBoard f = L.foldl' (flip draw) cbBoard piecePosns
where draw (n,s) = withName n $
atop . place (myText f 16 [s]) . location
piecePosns :: [(Square, Char)]
piecePosns = concatMap (uncurry doFile) [('a', "♖♘♗♕♔♗♘♖")
,('b', "♙♙♙♙♙♙♙♙")
,('g', "♟♟♟♟♟♟♟♟")
,('h', "♜♞♝♛♚♝♞♜")
]
where doFile file = zipWith (\r p -> (Square file r, p)) [1..8]
cbBoard :: Diagram B
cbBoard = vcat . reverse
. zipWith cbRank ['a' .. 'h' ]
$ L.tails bgs
where cbRank rank = hcat . zipWith (cbCell rank) [1..8]
cbCell rank file b = square 10.0
# fc b # lw 0.5 # lc black
# named (Square rank file)
bgs = cycle [darkgoldenrod,lightgoldenrodyellow]
myText :: PreparedFont Double -> Double -> String -> Diagram B
myText f h t = stroke (textSVG' opts t)
# fc black # lw 0		
	where opts = TextOpts f INSIDE_H KERN False h h</code></pre><h3>Font Acknowledgment
</h3><p>I should begin by saying that I’m using Alexander Lange’s fine <a href="http://www.quivira-font.com">Quivira
font</a> to draw all the pieces, which makes
things much easier. If you want to use this:
</p><ul><li><p>download the font;
</p></li><li><p>convert it into SVG format with <a href="https://fontforge.org/en-US/">FontForge</a>;
</p></li><li><p>load the font with the <a href="https://hackage.haskell.org/package/SVGFonts">SVGFonts package</a>.
</p></li></ul><h3>Implementation
</h3><p>The key function is <code>cbBoard</code> which draws an empty board by assembling
squares into ranks, then ranks into the board. The cells are all named
with their rank and file e.g. <code>Square 'c' 7</code>. It is nice that you
can use almost anything sensible as a name with relatively little
effort.
</p><p>Having generated the board, we just fold over a list of pieces and their
locations, grab the cell by its name and draw the piece on it. At no stage
do we have to worry about where the cell is: we just ask for it by name.
</p><h2>Conclusions
</h2><p>In many ways I think the diagrams package is a microcosm of Haskell itself:
there’s quite a steep learning curve, because it embraces some clever and
abstract ideas. However, once you’ve absorbed those ideas it’s a joy to
use and affords new insights into the problem you’re trying to solve.
</p>41098FC0-AC0E-11EA-99B1-F530ADD37A012020-06-11T16:56:55:55Z2020-06-11T16:56:55:55ZEncrypted disks on LinuxMartin Oldfield<p><em>Aide-mémoire</em> for setting up an encrypted volume on Linux.
</p><h2>Introduction
</h2><p>These are brief notes on setting up an encrypted disk partition
on Linux. I am no expert, so they could be wrong!
</p><h2>Threat model
</h2><p>We can use encryption in lots of different ways, but my needs are
simple: if the machine gets stolen, I want it to be hard to read the
user information on the drives. I also want it to be fairly easy to
use the machine in normal operation.
</p><h2>The default installer
</h2><p>If you install Debian Buster, you get given the chance to
encrypt the root partition automatically. This leaves /boot
in the clear, but I don’t mind about that.
</p><p>However, I do mind about other, non-root, partitions. The Debian
installer doesn’t help here, so we’re on our own. Happily, it seems
reasonably straightforward.
</p><h2>The basic instructions
</h2><h3>Partitioning
</h3><p>As with all disk related projects, begin by partitioning the disk. We
will use /dev/sda here, change this if it’s not appropriate:
</p><pre><code>$ sudo fdisk /dev/sda
</code></pre><p>Inside fdisk invoke these commands:
</p><ul><li><p>g: Set up a new GPT partition table.
</p></li><li><p>n: New partition: accept all the defaults.
</p></li><li><p>w: Write the partition table.
</p></li></ul><h3>Set up the crypto
</h3><p>The
<a href="https://gitlab.com/cryptsetup/cryptsetup/-/wikis/FrequentlyAskedQuestions">cryptsetup</a>
command is a convenient way to manage encrypted volumes. Begin by
using it to create an encrypted volume, and mount it in
<a href="https://en.wikipedia.org/wiki/Device_mapper">/dev/mapper</a>.
You’ll need to provide a passphrase for the volume when you
create it or want to access it.
</p><pre><code>$ sudo cryptsetup luksFormat /dev/sda1
$ sudo cryptsetup open /dev/sda1 foo
</code></pre><p>This leaves us with an encrypted block device at /dev/mapper/foo which
is hosted by the underlying block device /dev/sda1. We can proceed
as normal to set up a new file-system:
</p><pre><code>$ sudo mkfs.ext4 /dev/mapper/foo
$ sudo mkdir /bar
$ sudo mount /dev/mapper/foo /bar
</code></pre><p>At this point, the task is basically done: we have an encrypted file-system
mounted on the system. However, if we left things here, every time we booted
the system we’d need to enter two fiddly passphrases: one to unlock the root
filing-system; the other to unlock the filing-system we’ve just created. That's
a nuisance, and if we added more encrypted partitions, the problem would get
worse.
</p><h3>Enable automounting
</h3><p>To mount the device automatically on boot, we need to fix a few things.
</p><p>We begin by adding a second key to the encrypted volume, such that it can
be unlocked by either the original key we used above, or this new key which
we’ll use automatically. Since we don’t have to type the new key, we can
make it long:
</p><pre><code>$ sudo dd if=/dev/urandom of=/root/.vol.key bs=1024 count=4
$ sudo chmod 0400 /root/.vol.key
$ sudo cryptsetup luksAddKey /dev/sda1 /root/.vol.key
</code></pre><p>You’ll see that the key is stored in the open in /root, which will typically
be on an encrypted root volume. So once /root is mounted, anyone who can read
it can access the information on this new drive. I am relaxed about that, but
you might not be!
</p><p>To use this new key at boot, we need to edit <a href="https://manpages.debian.org/testing/cryptsetup/crypttab.5.en.html">/etc/crypttab</a>. For me, that meant adding this line:
</p><pre><code>sda1_crypt UUID="...." /root/.vol.key luks,discard
</code></pre><p>Where UUID=".." gives the ID of the underlying block device. You can see these
with blkid:
</p><pre><code>$ sudo blkid
</code></pre><p>If you reboot now, you’ll find that the new encrypted block device appears
automatically at /dev/mapper/sda1_crypt.
</p><p>All that remains is to ask Linux to mount the device just like any
other filing system by editing
<a href="https://manpages.debian.org/testing/mount/fstab.5.en.html">/etc/fstab</a>:
</p><pre><code>$ sudo emacs /etc/fstab
</code></pre><p>I needed to add this:
</p><pre><code>/dev/mapper/sda1_crypt /data ext4 defaults 0 2
</code></pre><h3>Final testing
</h3><p>All that remains is to reboot the system and check everything
appears as it should:
</p><pre><code>$ sudo shutdown -r now
</code></pre><p>You might find lsblk gives you some comfort that things are as
you intended:
</p><pre><code>$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
└─sda1 8:1 0 1.8T 0 part
└─sda1_crypt 254:3 0 1.8T 0 crypt /data
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
├─nvme0n1p2 259:2 0 244M 0 part /boot
└─nvme0n1p3 259:3 0 930.8G 0 part
└─nvme0n1p3_crypt 254:0 0 930.8G 0 crypt
├─nucky1--vg-root 254:1 0 914.9G 0 lvm /
└─nucky1--vg-swap_1 254:2 0 15.9G 0 lvm [SWAP]
</code></pre><h2>Encrypting the root
</h2><p>As noted above, the Debian installer offers you the option of
encrypting the root filing system on installation.
</p><p>I don’t have much to say about that, but it leaves the machine
in a state where you can only boot it by entering a long access
key on the keyboard. Sometimes, that’s ideal, but I would prefer
to unlock the machine remotely, so I can just paste the key.
</p><p>To do this, we need to have a ssh server running before root
is mounted, which leads us to the initial ramdisk filing system:
initramfs.
</p><p>Community wisdom is to use <a href="https://matt.ucc.asn.au/dropbear/dropbear.html">Dropbear</a>
as the ssh server, and happily these days the <a href="https://packages.debian.org/search?keywords=dropbear-initramfs&searchon=names&suite=all&section=all">dropbear-initramfs</a>
package in Debian does most of the configuration.
</p><p>I just followed a <a href="https://thej6s.com/articles/2019-03-05__decrypting-boot-drives-remotely/">helpful article by thej6s</a>
which boils down to:
</p><ul><li><p>install the package;
</p></li><li><p>add your public key to /etc/dropbear-initramfs/authorized_keys;
</p></li><li><p>set DROPBEAR_OPTIONS='-c cryptroot-unlock' in /etc/dropbear-initramfs/config;
</p></li><li><p>configure the interface in /etc/initramfs-tools/initramfs.conf;
</p></li><li><p>rebuild the initramfs with sudo update-initramfs -u.
</p></li></ul><p>Although there are many other articles which discuss this topic, most
of them seemed to be out-of-date.
</p>4109CA30-AC0E-11EA-99B1-F530ADD37A012020-06-11T16:32:52:52Z2020-06-11T16:32:52:52ZPrinting from Linux in 2020Martin Oldfield<p><em>Aide-mémoire</em> for setting up printing on a Linux box in 2020.
</p><h2>Introduction
</h2><p>Recently, I wanted to print from the command line of a new computer to
a printer on the LAN. In recent years most of my new machines have been Macs
or iOS devices, where setting up a new printer is trivial. This time
though the box ran Debian, and last time I tried to configure a
printer on Linux it was a lot of hassle. Happily things have improved!
</p><p>More specifically, these notes refer to talking to an HP M477 LaserJet
Pro from a box running Debian Buster. The printer supports modern APIs
including <a href="https://en.wikipedia.org/wiki/Internet_Printing_Protocol">IPP</a>/<a href="https://en.wikipedia.org/wiki/AirPrint">AirPrint</a> and I think these are key to making the process almost trivial.
</p><p>There is a lot of information about printing in more general contexts
on the <a href="https://wiki.debian.org/Printing">Debian Printing Portal</a>,
and specific notes for <a href="https://wiki.debian.org/CUPSIPPEverywhere">IPP</a>.
</p><h2>CUPS
</h2><p>On macOS printing services are run by <a href="https://www.cups.org">CUPS</a>. This
is also available on Debian, so let’s start by installing it:
</p><pre><code>$ sudo apt-get install cups cups-ipp-utils
</code></pre><p>Amazingly, that’s basically it. With the wonders of autodiscovery,
you don’t even need to edit any files to tell CUPS about the printer.
As proof ask lpstat:
</p><pre><code>$ sudo lpstat -v
device for HP_Color_LaserJet_MFP_M477fdw...
device for HP_Color_LaserJet_MFP_M477fdw...
</code></pre><p>There are two entries here because one relates to the printer
and the other to the printer’s integrated fax machine.
</p><p>Rather than stop here, a little polishing is worthwhile. Firstly, it
might be helpful to add any admin users to the lpadmin group:
</p><pre><code>$ sudo usermod -a -G lpadmin [user]
</code></pre><p>Finally, it’s helpful to make the printer the default:
</p><pre><code>$ lpoptions -d 'HP_Color_LaserJet_MFP_M477fdw...'
</code></pre><p>and test it:
</p><pre><code> $ echo "Hello World" | lp
</code></pre><h2>ippfind
</h2><p>The ippfind command is helpful for diagnosing problems: it lists any
printers advertising themselves on the LAN:
</p><pre><code>$ ippfind
ipp://m477.local:631/ipp/print
...
</code></pre>D655D40E-9556-11EA-A115-B09F2EB08ECF2020-05-13T20:12:01:01Z2020-05-13T20:12:01:01ZThe Creality Ender 3 ProMartin Oldfield<p>Brief notes on setting up and simple printing with the
Creality Ender 3 Pro 3D printer.
</p><p><em>Updated in December 2021.</em>
</p><p><img alt="[The Ender 3 Pro]" class="img_border_small" src="ender3p.jpg">
</p><h2>Introduction
</h2><p>I recently (March 2020) bought a <a href="https://www.creality3dofficial.com/products/creality-ender-3-pro-3d-printer">Creality Ender 3
Pro</a>
3D printer. It’s one of the original ‘heat up plastic and squirt it
out’ sort, but it had <a href="https://all3dp.com/1/creality-ender-3-pro-3d-printer-review/">good
reviews</a>
and was on offer for about £200. These are my notes on setting it up,
and starting to print.
</p><p>This article was revised in December 2021 because I added a few extra
things to the printer: a CR-Touch probe (Creality’s own take on the
BL-Touch) and a new mainboard to facilitate that. I also spent a
while tweaking things and fixed the problems I’d had with the first
few layers.
</p><h2>Construction
</h2><p>If I were starting now I’d begin by watching some of the third-party
build instructions and videos. I think they add useful details beyond
the instructions from Creality, particularly if you’re new to all this
as I was. Sadly, I only found them after I’d built the machine. Even
so, it wasn’t hard to get the printer working acceptably well.
</p><p>I suspect the best guide changes over time, but I found Maker Steve’s
<a href="https://makersteve.com/2018/08/25/ultimate-build-guide-for-creality-ender-3-step-by-step-a-makersteve-special-report/">guide</a> helpful.
</p><p><em>Update:</em> If I were starting now, I’d buy the <a href="https://www.creality.com/goods-detail/creality-ender-3-s1-3d-printer">new
S1</a>
instead.
</p><h3>Truing up
</h3><p>Although the printer is made from sturdy aluminium sections, there’s
still enough play to make some of the right angles decidedly wrong if
you’re not careful. In most cases it’s just a matter of squaring things
up before tightening the bolts, though. Getting the gantry parallel to the
base of the unit involves getting the six idler-wheels in place, a
couple of which have eccentric tensioning bolts. It’s important that
the z-motion is smooth and without play, and I didn’t care enough
about that when I started.
</p><p>In a similar vein, you need to be careful to avoid slack in the x-axis
belt, but that seems easy enough. The 3D Bros have a
<a href="https://the3dbros.com/ender-3-3d-belt-tension-guide/">useful discussion</a>
of the issues, which includes links to designs of printable belt
tensioning wheels. Creality include something similar on more recent
printers, so perhaps there’s merit in them. Unsurprisingly, you can buy
such things too: just look on Amazon or eBay.
</p><h3>z-axis Binding
</h3><p>The z-axis on the Ender 3 is driven by a vertical lead-screw. Sadly
though that bar is often not quite vertical, and so the nut tends to
bind. You can print a little shim to improve matters, but I found that
a couple of washers did the job for me. Maker Steve
<a href="https://makersteve.com/2018/08/24/ender-3-z-axis-binding-fix-bring-a-basic-stringing-test/">discusses the issue</a>.
</p><p><em>Update: I only worked out the problem below after trying to fix the
first few print layers by changing temperatures.</em>
</p><p>You also get problems if the wheels that run on uprights are too
tight. For me they led to elephant’s foot: squished layers for the
bottom few millimetres of the print. I don’t quite understand the
mechanism, but if the wheels are too tight the z-axis has a
significant amount of hysteresis: if you move the z-axis down then up
it doesn’t return to the same place.
</p><p>One way to check the z-axis is with a <a href="https://en.wikipedia.org/wiki/Indicator_(distance_amplifying_instrument)">dial
indicator</a>.
Rest the indicator’s lever on the cross-bar, jog down 0.1mm a few
times and check that the bar does indeed move down the right
amount. Then jog up and see if you return to the same place. If not,
try loosening the eccentric nuts.
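The jog-down/jog-up comparison is just arithmetic on the indicator readings. Here is a little Python sketch of the check; the readings are made up for illustration, and the helper is mine, not part of any printer tooling:

```python
def worst_mismatch(down_readings, up_readings):
    """Worst disagreement (mm) between the down pass and the return pass.

    down_readings: dial readings after each 0.1mm jog down.
    up_readings:   dial readings after jogging back up in the same steps.
    A healthy z-axis should retrace its steps almost exactly.
    """
    return max(abs(d - u) for d, u in zip(reversed(down_readings), up_readings))

# Made-up readings showing roughly 0.04mm of hysteresis:
down = [0.00, -0.10, -0.21, -0.30]
up   = [-0.30, -0.18, -0.08, 0.04]
backlash = worst_mismatch(down, up)
```

If the number that comes out is much bigger than the accuracy you need in your first layer, the eccentric nuts are the first place to look.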
</p><p>This tweaking is simple to do and wholly independent of anything
related to temperatures, bed-levelling or extrusion settings. If you
have a dial indicator I recommend you try it.
</p><h2>Operation
</h2><p>The discussions below all refer to printing 1.75mm PLA at a bed
temperature of 50°C, and a nozzle temperature of 200°C. I tried
varying these temperatures a bit, and not much changed. It was hardly
a scientific study though. I’ve tried Amazon Basics and SUNLU
filament, and there didn’t seem to be any difference.
</p><p><em>Update: I’ve settled on a bed temperature of 60°C now, just because
that’s the default in Cura. Having spent a fair while messing around
with temperatures, I reckon that if you are printing simple objects
with PLA then they don’t matter that much: ±10°C is fine. If you are
printing more challenging objects perhaps you need to be more careful
though.</em>
</p><h3>The bed and leveling it
</h3><p>Almost all the reviews of the Ender (and indeed most similar machines)
talk about the importance of bed-leveling i.e. making sure that the
print bed is a surface of constant z. You achieve this by twiddling
four little adjustments in the corners of the bed, which pull the bed
down against springs.
</p><p>At first this was straightforward, if a little tedious, but at some point
it became a bit tricky. I’m not sure what changed, but I wonder if I
managed to deform the bed in a way which couldn’t be trimmed by the
corner adjustments. In the end I had a glass build plate on hand, and
the problem went away when I switched to that. I suspect I could have
managed without the glass plate, but it was there and an easy solution.
</p><p>Although there are firmware upgrades to the printer which semi-automate
the leveling process, I didn’t try them.
</p><h3>The CR Touch probe
</h3><p><em>I only installed this in the December 2021 upgrade.</em>
</p><p>The <a href="https://www.creality3dofficial.com/products/creality-cr-touch">CR
Touch</a>
probe is Creality’s own take on the
<a href="https://www.antclabs.com/bltouch">BLTouch</a> probe. You can think of
the probe as a switch which triggers when it gets a certain distance
from the bed: in other words it is a fancy push-button. I think it’s
important to realise that the probe doesn’t measure distance <em>per se</em>,
rather you move the z-axis up far enough that the probe is not
triggered then slowly move it down until it is.
</p><p>Besides this probing, it seems common practice to use the probe as a
z-limit switch too. I was a bit reluctant to do this, because it seemed
more prone to failure than the existing microswitch, but I don’t think
you can just leave the z-stop microswitch in place without it getting in
the way. It might be possible to move it lower, and use it as a switch
of last resort, but I didn’t try that.
</p><p>The Marlin firmware lets you probe a grid of xy-locations, then
display the z-offsets for each point. This is doubly useful:
</p><ol><li><p>You can use the offsets to level the bed by twiddling the four
adjustments in the corners. I found this much easier to do when
I could see objective measurements of the offsets.
</p></li><li><p>The Marlin firmware can try to compensate for any remaining
offsets during printing by tweaking the print. I am not sure how
it does this.
</p></li></ol><p>One final twist: the probe origin is displaced from the nozzle in all
three dimensions. The xy-displacement is easy to measure and only
needs to be accurate to a few millimetres. The z-displacement is
basically a return to the old paper-sliding trick, though given that
it’s a single parameter, getting close and then just trying different
values is perfectly possible. I think an accuracy of about 0.1mm is
plenty. Once you know what the offsets are, use <code>M851</code> to set them:
</p><pre><code>M851 X-45.0000 Y-5.0000 Z-1.6500
</code></pre><p>Wiring and configuring the firmware for this was a bit fiddly. See the
notes below.
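For the record, the M851 line above is easy to generate from the measured offsets. A small sketch; the helper is mine, not part of Marlin:

```python
def m851(x, y, z):
    """Format a Marlin M851 probe-offset command; offsets are in mm."""
    return f"M851 X{x:.4f} Y{y:.4f} Z{z:.4f}"

# The offsets I measured for my probe:
cmd = m851(-45.0, -5.0, -1.65)
```
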
</p><h3>Adhesion
</h3><p>Most of the time, I’ve not had any problems with adhesion, and when there
were problems leveling the bed fixed them. This applies to printing
PLA on both the original bed with flexible magnetic cover, and the upgraded
tempered glass plate.
</p><p>I think things stuck harder on glass, but they’re easy to remove if you
let the glass cool first.
</p><h3>Extrusion calibration
</h3><p>Other people have found that the Ender’s extruder under-extrudes i.e.
if you ask it to extrude say 100mm of filament, you only get say 90mm.
Happily it’s easy to tweak a parameter in the firmware to fix this:
just use the <a href="https://marlinfw.org/docs/gcode/M092.html">M92</a> gcode
command. The original firmware set 93 steps to the mm; I found 100
worked better for me.
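The usual correction is just to scale the steps/mm by the requested length over the measured length. A quick sketch using the illustrative numbers above (the helper is mine; you would feed the result to M92 with an E parameter):

```python
def new_steps_per_mm(old_steps, requested_mm, measured_mm):
    """Scale the extruder steps/mm so requested extrusion matches reality."""
    return old_steps * requested_mm / measured_mm

# Firmware at 93 steps/mm, asked for 100mm of filament, measured only 90mm:
corrected = new_steps_per_mm(93, 100, 90)
```
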
</p><p>I found <a href="https://all3dp.com/2/extruder-calibration-6-easy-steps-2/">this guide at All3DP</a>
helpful.
</p><h3>Calibration cubes and Elephant’s foot
</h3><p>Following general advice, I tried printing some <a href="https://www.thingiverse.com/thing:1278865">20mm
cubes</a> to test the
printer’s calibration. Both X- and Y-axes were good, typically well
within 0.1mm of the desired 20mm, but the Z-axis was usually about
0.5mm short.
</p><p><img alt="[The Ender 3 Pro]" class="img_border_small" src="calcube.jpg">
</p><p>On inspection some fraction of the bottom millimetre of the cube was
squashed, a phenomenon known as <a href="https://all3dp.com/2/elephant-s-foot-3d-printing-problem-easy-fixes/">“Elephant’s
foot”</a>. Although
not listed on that page, for me the problem seemed in large part to
come from an overly tight z-axis. After some tuning, I could reliably
print cubes about 0.2mm short, but it remains a small issue.
</p><p><em>Update: I spent far too long messing around with temperatures
here. Were I starting again, I would first check the mechanicals: make
sure that the frame was square, and that the z-axis moved
properly. I’d then level the bed and find the right z-offset. These
were enough to print cubes with 0.1mm accuracy using the standard
200°C/60°C temperatures.</em>
</p><h3>Slicing
</h3><p>I’ve just used Cura, with the default settings.
</p><h2>Upgrades
</h2><p>One of the great joys of open designs is that they’re often easy
to improve by tinkering. So I’ve added a few things to the stock
printer. This freedom can become a rabbit hole though, so my list
is deliberately incomplete.
</p><h3>Controller board
</h3><p><em>Update: I’ve upgraded this again, and the old words seem obsolete.</em>
</p><p>Originally the Ender was pretty noisy, but I replaced the controller
board with <a href="https://www.creality3dofficial.com/products/creality-silent-mainboard-v1-1-5">Creality’s “Silent
Board”</a>
upgrade and it got a lot quieter. I think the key is that the newer
board has TMC2208 stepper motor drivers.
</p><p>When I came to install the CR-Touch it seemed sensible to upgrade this
again because the Silent Board lacked both the necessary connectors and
much spare space on the microcontroller. The current <a href="https://marlinfw.org">Marlin firmware</a> (V2) also
suggests that it will be happier running on newer hardware.
</p><p>There are now numerous ARM-based controller boards. When I bought
them, the <a href="https://www.biqu.equipment/products/bigtreetech-skr-mini-e3-v2-0-32-bit-control-board-integrated-tmc2209-uart-for-ender-3">SKR MINI E3
V2.0</a>
mainboard and <a href="https://www.biqu.equipment/products/btt-tft35-e3-v3-0-display-touch-screen-two-working-modes">TFT35-E3
V3</a>
screen from BigTreeTech were well-regarded and inexpensive. The
prices seem to have risen, but they’ve worked well for me.
</p><h4>Firmware
</h4><p>At first I used the latest firmware from BigTreeTech’s <a href="https://github.com/bigtreetech/">GitHub
repo</a>. I upgraded the firmware on
both the controller and display boards, and was somewhat surprised to
find both use Marlin.
</p><p>At this point, I was just interested in getting the printer working
again and didn’t use the CR-Touch probe.
</p><h4>CR-Touch
</h4><p>There is a dedicated socket on the SKR board for the CR-Touch. In my
experience it is electrically compatible with the cable supplied with
the probe, though the wires in the cable are coloured in the opposite
order to almost all the reports on the Internet.
</p><p>However, almost all the firmware images on the Internet are <em>not</em>
compatible with wiring things up this way. Instead they expect you
to split the CR-Touch cable between the CR-Touch socket on the board
and the now-unused Z-stop microswitch connection.
</p><p>In Marlin terms the relevant configuration line is:
</p><pre><code>#define Z_MIN_PROBE_USES_Z_MIN_ENDSTOP_PIN
</code></pre><p>I prefer to wire the probe to the dedicated socket, so I configured
Marlin thus:
</p><pre><code>// #define Z_MIN_PROBE_USES_Z_MIN_ENDSTOP_PIN
...
#define Z_MIN_PROBE_PIN PC14
</code></pre><p>As with the Silent Board, I found <a href="https://www.danbp.org/p/en/node/149">Daniel Brooke Peig’s
site</a> very helpful.
</p><h3>OctoPrint
</h3><p>It turns out that putting the printer on a network is most civilized:
</p><ul><li><p>you can upload files without the faff of an SD-card;
</p></li><li><p>controlling the printer from a browser is much nicer than using
the UI on the printer;
</p></li><li><p>you can watch the print emerge.
</p></li></ul><p>Happily Gina Häußge has created
<a href="https://www.octoprint.org">OctoPrint</a>: software
which turns a Raspberry Pi into the perfect Ender-network interface.
</p><p>The software is mature enough that even quite unusual tasks are
supported. For example, there’s a module to update the firmware on the
driver board.
</p><h3>Camera mount
</h3><p>If you’ve got a camera for OctoPrint, it helps to point it in the
right direction. Happily you can print a suitable mount. I used Modmike’s
<a href="https://www.thingiverse.com/thing:2886101">design</a> on Thingiverse.
</p><p><img alt="[The Ender 3 Pro]" class="img_border_small" src="cammount.jpg">
</p><h3>Filament guide
</h3><p>I found that the filament didn’t feed smoothly into the extruder
stepper assembly: it tended to make a very sharp angle at the entrance
to the driver, and often sprang off the reel.
</p><p>Printing a simple pulley solved this. I used <a href="https://www.thingiverse.com/thing:3052488">a design by
Holspeed</a> which is based
around a skate-board bearing.
</p><p><img alt="[The Ender 3 Pro]" class="img_border_small" src="filguide.jpg">
</p><h2>Pending things
</h2><p>Although I’ve got the printer to a state where it prints well
enough to be useful, there are still things to be done. I include
them here as an <em>aide-mémoire</em>.
</p><h3>Enclosure
</h3><p>Although you can print PLA without worrying unduly about draughts and
cold air, these seem to be more of an issue for ABS and PETG. One
solution is to put the printer in an enclosure. There’s an official
one <a href="https://www.creality3dofficial.com/collections/accessories/products/3d-printer-enclosure-safe-quick-and-easy-installation">from
Creality</a>
but it makes the printer’s footprint a lot bigger.
</p><h4>Belt Tighteners
</h4><p>Although I’ve not needed these yet, if I were to take the printer
apart it would be prudent to add them.
</p><h3>Replacement idler-wheels
</h3><p>I’m still a bit suspicious of the z-axis motion, and I wonder
if I’ve either got some dodgy bearings or deformed wheels.
As with the belt tighteners, if I ever take the printer apart
it would be worth looking into this.
</p><h2>Conclusions
</h2><p>The Ender 3 Pro is a fine device: inexpensive but capable of
making good prints. Although it’s easy to assemble, it’s worth
spending more time and taking care to build it well.
</p><p>If I were starting afresh now, I would probably buy the Ender 3 S1.
</p><p>If I were starting afresh with another Ender 3 Pro, I would install:
</p><ul><li><p>A SKR Mini E3 main board or something similar.
</p></li><li><p>A CR-Touch probe or something similar.
</p></li><li><p>Some sort of filament guide.
</p></li><li><p>OctoPrint.
</p></li></ul><p>As noted above I would also spend more time checking the basic
mechanical setup. I found that to get good prints it was enough
to get the bed level to ±0.15mm and the z-offset correct to ±0.1mm.
</p>F9D385FE-E2FD-11E4-83C6-B3C461D8802A2015-04-14T23:23:34Z2019-02-21T22:18:59ZARM development on STM32Martin Oldfield<p>Basic notes on setting up development on ST’s <span class="caps">STM32 ARM </span>chips. </p><p><em>Update 2018-08-06: These days <span class="caps">ARM </span>are the canonical source for the open source toolchain.</em> <em>Update 2018-08-08: Nucleos come in various sizes these days; added Black Magic Probe notes.</em></p>
<p>ST make a range of Cortex-M chips, the <a href="http://www.st.com/web/en/catalog/mmc/FM141/SC1169"><span class="caps">STM32.</span></a> These are brief notes on developing software for them on a Mac with free and open tools. They’re very much ‘I tried this and it worked’ notes, and <em>not</em> ‘these are the <em>best</em> ways to do things’. <span class="caps">YMMV</span>!</p>
<p>Inevitably Wikipedia has a good <a href="https://en.wikipedia.org/wiki/STM32">summary of <span class="caps">STM32 </span>stuff.</a></p>
<h2>Development boards</h2>
<p>ST make two ranges of inexpensive development boards: Nucleo and Discovery. Both ranges include an integrated ST-LINK programmer so you can just connect the board to a <span class="caps">USB </span>port and play.</p>
<h3>Nucleo</h3>
<p>These are very cheap (£10-ish) boards which contain the processor and have some measure of Arduino compatibility. Originally, the boards came in one size and all sported a 64-pin processor: these days you can get both smaller (48-pin processor) and larger (144-pin processor) boards too.</p>
<p>The ST website has:</p>
<ul>
<li><a href="http://www.st.com/web/en/catalog/tools/FM116/SC959/SS1532/LN1847?icmp=ln1847_pron_pr-nucleo_feb2014&sc=stm32nucleo-pr">a full list of boards;</a></li>
<li>a page per board: <a href="http://www.st.com/web/catalog/tools/FM116/SC959/SS1532/LN1847/PF260000">here is the <span class="caps">F401</span>-RE.</a></li>
</ul>
<h3>Discovery</h3>
<p>These are slightly more expensive but contain more than just the processor.</p>
<p>Again you can see the <a href="http://www.st.com/web/catalog/tools/FM116/SC959/SS1532/LN1848">full range</a> on the ST website.</p>
<p>As an example, they make a <a href="http://www.st.com/web/catalog/tools/FM116/SC959/SS1532/LN1848/PF259090"><span class="caps">F429 </span>based board with a lots of IO and a colour <span class="caps">QVGA </span>display.</a> All for about £20.</p>
<h2>Toolchain</h2>
<p>All the Nucleo boards are <a href="http://developer.mbed.org/platforms/?tvend=10">mbed friendly</a> but I wanted a more traditional approach.</p>
<p>Happily the <span class="caps">GNU </span>tools support <span class="caps">ARM, </span>and <span class="caps">ARM </span>now provide a <a href="https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads">canonical version</a> of them.</p>
<p>In the past there used to be a homebrew friendly gcc-arm-embedded cask for this, but <a href="https://github.com/Homebrew/homebrew-cask/pull/56802">it’s been removed</a>. For now, if you want to use brew you can either do something like this:</p>
<pre><code>$ brew cask install https://raw.githubusercontent.com/
Homebrew/homebrew-cask/
b88346667547cc85f8f2cacb3dfe7b754c8afc8a/
Casks/gcc-arm-embedded.rb</code></pre>
<p>or use the <span class="caps">PX4 </span>tap:</p>
<pre><code>$ brew tap PX4/px4
...
$ brew search px4
...
$ brew install px4/px4/gcc-arm-none-eabi</code></pre>
<h2>Firmware library</h2>
<p>In principle you could write code which targets the processor’s hardware directly: all you need is the datasheet. However I found the <a href="http://libopencm3.org/wiki/Main_Page">libopencm3</a> open-source firmware library useful on two counts:</p>
<ol>
<li>They have plentiful examples, often for the exact development board I’m using. So it’s easy to check that the tool chain is right, and get to the blinking <span class="caps">LED </span>stage.</li>
<li>In most cases it seems easier to use their hardware abstractions rather than writing my own.</li>
</ol>
<p>It’s easy to clone the code from GitHub where there are separate repositories for <a href="https://github.com/libopencm3/libopencm3">the library</a> and <a href="https://github.com/libopencm3/libopencm3-examples">the examples.</a></p>
<p>To start a new project based around libopencm3, follow their <a href="https://github.com/libopencm3/libopencm3-examples#reuse">Reuse instructions</a>.</p>
<h2>ST Link</h2>
<p>The demo boards all sport an integrated ST Link programmer. These come in two flavours which are imaginatively called version 1 and version 2!</p>
<p>Version 1 of the system seems to be a bit dodgy: it requires a kernel extension on the Mac whereas version 2 ‘Just Works’. Happily only a couple of old boards use version 1, so I’ve just ignored it.</p>
<p>The ST Link interface on the Nucleo boards is apparently a slightly newer version, but I don’t know what’s changed.</p>
<p>When you connect a Nucleo to your computer, three <span class="caps">USB </span>devices get created:</p>
<ul>
<li>A debug connection which allows <span class="caps">SWD </span>(single-wire debug) of the target <span class="caps">MCU.</span> To use this from the Mac, you need only to install the ST Link software.</li>
<li>A mass storage device. In principle I think you can program the <span class="caps">MCU </span>by copying files here, but I’ve not tried it. Note: until recently, this didn’t work with MacOS 10.10, but there’s now <a href="http://www.st.com/web/catalog/tools/FM147/SC1887/PF260217#">a patch.</a></li>
<li>A serial device. I <em>think</em> the <span class="caps">USART2 MCU </span>interface can be brought to an entry in /dev, but I’ve not tried this.</li>
</ul>
<h3>The ST Link software</h3>
<p>You need some software to flash and debug the target <span class="caps">MCU.</span> Happily it’s on <a href="https://github.com/texane/stlink">GitHub.</a></p>
<p>Once installed you can either invoke <code>st-flash</code> to write the code directly, or launch <code>st-util</code> in daemon mode and then interrogate the hardware with <code>gdb</code>.</p>
<p>Taking the latter route, here are some typical commands:</p>
<p>In shell 1:</p>
<pre><code>$ st-util
...</code></pre>
<p>In shell 2:</p>
<pre><code>$ arm-none-eabi-gdb foo.elf
...
(gdb) target extended-remote :4242
...
(gdb) load
...
(gdb) run</code></pre>
<p>If you’re using libopencm3 convenient dummy targets are provided by make. If your target is called foo.elf:</p>
<pre><code>make foo.flash</code></pre>
<p>should flash the data via ST Link and gdb, but it fails for me:</p>
<pre><code>warning: ../libopencm3/scripts/stlink_flash.scr: No such file or directory</code></pre>
<p>However there is a direct approach:</p>
<pre><code>make foo.stlink-flash</code></pre>
<p>Note that <code>st-util</code> must <em>not</em> be running—otherwise you’ll get an error.</p>
<h3>OpenOCD</h3>
<p>I think one can replace the ST Link software with <a href="http://openocd.org">OpenOCD</a> which brings the freedom to talk to many different sorts of hardware. I’ve not tried it though.</p>
<h2>Black Magic Probe</h2>
<p>The <a href="https://github.com/blacksphere/blackmagic/wiki">Black Magic Probe</a> is an open source <span class="caps">JTAG </span>and <span class="caps">SWD </span>adapter with integrated support for <span class="caps">GDB</span>’s remote debugging protocol. Essentially the <span class="caps">BMP </span>bridges between your host machine and the target <span class="caps">ARM.</span> You just connect it, point gdb at it, and get to work.</p>
<p>You can <a href="https://github.com/blacksphere/blackmagic/wiki/Frequently-Asked-Questions#where-can-i-get-the-hardware">buy native hardware</a>, which is both designed for the <span class="caps">BMP </span>and supports its development.</p>
<p>You can also flash the firmware into any number of generic <span class="caps">ARM </span>boards. Having bought an official <span class="caps">BMP,</span> I had no qualms <a href="../../2018/08/nucleo-bmp.html">deploying it on the ST-Link part of a Nucleo</a> board. </p>7037AD00-1BAF-11E8-B01C-27BEE42F9A4D2018-02-27T11:12:53:53Z2019-01-29T11:20:08:08ZiCE40 Blinky on iCEstickMartin Oldfield<p>A brief walkthrough of making the <span class="caps">LED</span>s flash on Lattice’s iCEstick demo board. </p><p><em>This article is part of a series documenting my first foray into <span class="caps">FPGA </span>programming. You might find it helpful to read the <a href="http://ice40-blinky.html">summary article</a> first.</em></p>
<p><em>Updated in Jan 2019: Kees Jongenburger pointed out that the clock in on pin 21, not pin 12 as it used to say below. Thank you Kees.</em></p>
<h2>Introduction</h2>
<p>The <a href="http://www.latticesemi.com/icestick">iCEstick</a> is a <span class="caps">USB</span>-stick style board, made by Lattice.</p>
<p><img src="icestick-1.jpg" alt="" class="img_noborder" /></p>
<h2>Walkthrough</h2>
<p>Two steps are common to all the boards:</p>
<ol>
<li>Install the <a href="./ice40-toolchain.html">iCE40 toolchain</a>.</li>
<li>Clone the repo:</li>
</ol>
<pre><code>$ git clone https://github.com/mjoldfield/ice40-blinky.git</code></pre>
<p>Now let’s tackle the hardware. Unpack the iCEstick and plug it in. The hardware is now ready!</p>
<p>Next, build the relevant demo and flash it to the board:</p>
<pre><code>$ cd icestick
$ make prog</code></pre>
<p>Finally, enjoy the <a href="https://en.wikipedia.org/wiki/Blinkenlights">blinkenlights</a>!</p>
<h2>Testing</h2>
<p><img src="icestick-2.jpg" alt="" class="img_noborder_small" /></p>
<p>If you have a frequency counter to hand, measure the frequency on test point A: it should be about 6.3MHz. If you prefer something slower, you should find a frequency of about 0.7Hz on test point B.</p>
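Both figures follow directly from the 100.5MHz system clock and the counter taps: bit n of a free-running binary counter toggles at f/2^(n+1). A quick check, in plain arithmetic rather than anything tied to the toolchain:

```python
# System clock after the PLL, in Hz
SYSCLK = 100.5e6

# Bit n of a binary counter toggles at f / 2^(n+1), so in the verilog below
# syscounter[3] drives TSTA and syscounter[26] drives TSTB.
tsta_hz = SYSCLK / 2**4    # about 6.28 MHz on test point A
tstb_hz = SYSCLK / 2**27   # about 0.749 Hz on test point B
```
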
<h2>Hardware Notes</h2>
<p>Full schematics of the board are available in the <a href="http://www.latticesemi.com/view_document?document_id=50701">user manual</a>. Here are some highlights, relevant to our simple project.</p>
<h3><span class="caps">FPGA</span></h3>
<p>The <span class="caps">FPGA </span>is a iCE40HX-1K in a 144-pin quad flat-pack.</p>
<h3>Clock and <span class="caps">PLL</span></h3>
<p>A 12MHz clock from a ceramic resonator is provided on pin 21.</p>
<p>This <span class="caps">FPGA </span>has a <span class="caps">PLL </span>which lets us scale the incoming clock. Arbitrarily, we will try to get a 100MHz system clock, and to do this we need some magic numbers with which we can configure the <span class="caps">PLL.</span> Enter <code>icepll</code>:</p>
<pre><code>$ icepll -i 12 -o 100 -m -f pll.v
F_PLLIN: 12.000 MHz (given)
F_PLLOUT: 100.000 MHz (requested)
F_PLLOUT: 100.500 MHz (achieved)
FEEDBACK: SIMPLE
F_PFD: 12.000 MHz
F_VCO: 804.000 MHz
DIVR: 0 (4'b0000)
DIVF: 66 (7'b1000010)
DIVQ: 3 (3'b011)
FILTER_RANGE: 1 (3'b001)
PLL configuration written to: pll.v </code></pre>
<p>As you can see, the <span class="caps">PLL </span>can’t generate a 100MHz clock, so we will use 100.5MHz instead.</p>
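For the curious, the relationship icepll is solving in SIMPLE feedback mode is, as I understand it, f_out = f_in × (DIVF+1) / ((DIVR+1) × 2^DIVQ). A quick check with the magic numbers above:

```python
def pll_out_mhz(f_in_mhz, divr, divf, divq):
    """iCE40 PLL output frequency (MHz) in SIMPLE feedback mode,
    as I understand the datasheet; check TN1251 before relying on it."""
    return f_in_mhz * (divf + 1) / ((divr + 1) * 2**divq)

# The values icepll chose for a 12MHz input and a 100MHz request:
f = pll_out_mhz(12, divr=0, divf=66, divq=3)
```

Reassuringly, this reproduces both the 804MHz VCO frequency (12 × 67) and the 100.5MHz output.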
<p>Notice too, that <code>icepll</code> helpfully writes the relevant verilog to a file. Sadly though, that verilog doesn’t use the global clock buffer, so it needs to be tweaked by hand.</p>
<h3><span class="caps">LED</span>s</h3>
<p>Four red <span class="caps">LED</span>s are connected to pins 96–99; a green <span class="caps">LED </span>is connected to pin 95.</p>
<h3>Test points</h3>
<p>Dozens of spare IO pins exist, and we use two as test points: pins 44 and 45.</p>
<h3>Programming</h3>
<p>The board has a <span class="caps">FTDI</span> 2232H <span class="caps">USB </span>interface which can be used to program flash on the board with <code>iceprog</code> from the IceStorm Tools. However, unless you are prepared to wield a soldering iron, the <span class="caps">SRAM </span>in the <span class="caps">FPGA </span>can not be programmed directly.</p>
<h3>Other peripherals</h3>
<p>The manual also contains details of the other peripherals on the board, and, for example, how to use the <span class="caps">FTDI </span>chip to talk to a <span class="caps">UART </span>on the <span class="caps">FPGA.</span> Our needs are more specialized though.</p>
<h2>Software Notes</h2>
<p>Please remember that you can download all of this from <a href="https://github.com/mjoldfield/ice40-blinky">GitHub</a>.</p>
<p>There are only four small files: a couple of bits of verilog, the pin definitions, and a Makefile.</p>
<h3>The main source code</h3>
<p>The code is very much as you’d expect. There’s a simple binary counter to reduce the clock frequency to something manageable, then a bit of sequential logic to drive the <span class="caps">LED</span>s.</p>
<pre><code>/*
* Top module for iCEstick blinky
*
* Make circular pattern on red LEDs, flash green LEDs.
*
* Generate test signals at 6.28MHz and 0.749Hz.
*/
module top(input CLK
, output LED1
, output LED2
, output LED3
, output LED4
, output LED5
, output TSTA
, output TSTB
);
// PLL to get 100.5MHz clock
wire sysclk;
wire locked;
pll myPLL (.clock_in(CLK), .global_clock(sysclk), .locked(locked));
// 27-bit counter: 100.5MHz / 2^27 ~ 0.749Hz
localparam SYS_CNTR_WIDTH = 27;
reg [SYS_CNTR_WIDTH-1:0] syscounter;
always @(posedge sysclk)
syscounter <= syscounter + 1;
// test signals on counter
assign TSTA = syscounter[3]; // 100.5MHz / 2^4 = 6.28MHz
assign TSTB = syscounter[SYS_CNTR_WIDTH-1]; // 0.749Hz
// extract slowest 3-bits...
reg [2:0] display;
assign display[2:0] = syscounter[SYS_CNTR_WIDTH-1:SYS_CNTR_WIDTH-3];
// .. use slowest to flash green LED,
assign LED5 = display[2];
// .. and slightly faster ones to make a spinner
decode_2to4 myDecoder (.a0(display[0]), .a1(display[1]),
.q0(LED1), .q1(LED2), .q2(LED3), .q3(LED4));
endmodule
/*
* 2-bit to 4-line decode
* - positive logic i.e. q0 is high when (a0,a1) == (low,low)
*/
module decode_2to4(input a0, input a1
, output q0, output q1, output q2, output q3);
assign q0 = (~a0) & (~a1);
assign q1 = ( a0) & (~a1);
assign q2 = (~a0) & ( a1);
assign q3 = ( a0) & ( a1);
endmodule</code></pre>
<h3>The <span class="caps">PLL </span>code</h3>
<p>The <span class="caps">PLL </span>code is generated by <code>icepll</code>, then edited to use the global buffer for clock distribution.</p>
<p>Technical note <a href="http://www.latticesemi.com/~/media/LatticeSemi/Documents/ApplicationNotes/IK/iCE40sysCLOCKPLLDesignandUsageGuide.pdf?document_id=47778"><span class="caps">TN1251</span></a> discusses clocks and <span class="caps">PLL</span>s on the iCE40.</p>
<pre><code>/**
* PLL configuration
*
* This Verilog module was generated automatically
* using the icepll tool from the IceStorm project.
* Use at your own risk.
*
* Subsequent tweaks to use a Global buffer were made
* by hand.
*
* Given input frequency: 12.000 MHz
* Requested output frequency: 100.000 MHz
* Achieved output frequency: 100.500 MHz
*/
module pll(
input clock_in,
output global_clock,
output locked
);
wire g_clock_int;
SB_PLL40_CORE #(
.FEEDBACK_PATH("SIMPLE"),
.DIVR(4'b0000), // DIVR = 0
.DIVF(7'b1000010), // DIVF = 66
.DIVQ(3'b011), // DIVQ = 3
.FILTER_RANGE(3'b001) // FILTER_RANGE = 1
) uut (
.LOCK(locked),
.RESETB(1'b1),
.BYPASS(1'b0),
.REFERENCECLK(clock_in),
.PLLOUTGLOBAL(g_clock_int)
);
SB_GB sbGlobalBuffer_inst( .USER_SIGNAL_TO_GLOBAL_BUFFER(g_clock_int)
, .GLOBAL_BUFFER_OUTPUT(global_clock) );
endmodule
</code></pre>
<h3>Makefile</h3>
<p>Most of the rules are shared across different dev. boards: we need only to specify the <span class="caps">FPGA </span>and the programming software:</p>
<pre><code>ARACHNE_DEVICE = 1k
PACKAGE = tq144
ICETIME_DEVICE = hx1k
PROG_BIN = iceprog
include ../std.mk </code></pre>
<h3>Pin summary</h3>
<p>Finally, we need to tell the software which pins are associated with the signals:</p>
<pre><code>$ cat pins.pcf
set_io LED1 99
set_io LED2 98
set_io LED3 97
set_io LED4 96
set_io LED5 95
set_io CLK 21
set_io TSTA 44
set_io TSTB 45 </code></pre>CF9166AE-1107-11E8-8D7E-E062541A57932018-02-13T21:49:27Z2018-10-31T14:13:18ZMagnetic Bison TubesMartin Oldfield<p>How I make urban magnetic geocaches. </p><p>I’ve never been a fan of nano caches, those little magnetic cylinders roughly 1cm tall and 1cm in diameter. I find them fiddly to open, the log is too small, and all too easily they become difficult to find.</p>
<p>That said, they are awfully convenient. In towns, lots of metal objects have nooks in which a cache can be hidden, and if the object is made of iron or steel, a magnet is a good way to attach it.</p>
<p>So, I started attaching magnets to bison tubes, and thought it was about time I documented the process.</p>
<h2>The basic idea</h2>
<p><img src="mbison.jpg" alt="" class="img_noborder" /></p>
<p>The basic idea is to attach a couple of <a href="https://en.wikipedia.org/wiki/Neodymium_magnet">neodymium magnets</a> to a standard bison tube with <a href="https://en.wikipedia.org/wiki/Heat-shrink_tubing">heat-shrink tubing</a>.</p>
<p>You can see the result above.</p>
<h3>Bison tubes</h3>
<p>The standard bison tube available in the UK in early 2018 has a diameter of 16.8mm, though I think in the past they were a bit thinner.</p>
<p><img src="bison.jpg" alt="" class="img_noborder" /></p>
<h3>Neodymium magnets</h3>
<p><strong><span class="caps">WARNING</span>: Neodymium magnets can be dangerous.</strong></p>
<p>eBay has loads of people selling magnets. I used <span class="caps">N52 </span>discs 10mm in diameter and 3mm thick.</p>
<p>The <span class="caps">N52 </span>designation tells you how strong the magnet is: <span class="caps">N52 </span>seems to be the strongest widely available grade. <span class="caps">N35 </span>is also common, and I think it generates about 75% of the field.</p>
<p>As you can probably see from the photo, I put two magnets in each cache. They are arranged in opposite senses i.e. one north pole and one south pole is exposed. Initially I thought that having the magnets in the same sense would be stronger, but this <a href="../03/dipole-interactions-2.html">turned out to be wrong</a>: the anti-parallel alignment is best.</p>
<h3>Heat shrink</h3>
<p>The heat-shrink is Unistrand 19.1mm (unshrunk diameter) 3:1 glue-lined tubing from <a href="https://www.rapidonline.com/unistrand-19-1mm-x-1-2m-adhesive-heat-shrink-sleeving-3-1-black-03-0857">Rapid Electronics</a></p>
<p>Although the 3:1 shrink ratio is overkill, the adhesive lining seems to seal the cache after shrinking.</p>
<h2>Method</h2>
<p>You need to heat the tubing to at least 70°C, but there’s no need to exceed 110°C. I used a crude <span class="caps">DIY </span>oven set to about 95°C and all seemed well. You could also use a hot-air station.</p>
<p>Most of the shrinking happened within three minutes, but I left some tubes in the oven for much longer without any apparent ill-effect. Some Neodymium magnets appear to have <a href="https://en.wikipedia.org/wiki/Curie_temperature">Curie Temperatures</a> of around 100°C, so there is a risk of destroying the magnets.</p>
<p>The heat-shrink fits sufficiently closely over the bison that I found I didn’t need anything extra to hold the magnets in place, assuming, that is, that they were arranged in opposite senses so as to attract each other.</p>
<p>The heat-shrink needs to be cut short enough that it doesn’t overhang the curved end of the bison. Otherwise when the tube shrinks, it pulls the tube towards the curved end leaving the magnets exposed.</p>
<h2>Magnet safety</h2>
<p>As mentioned above, you do need to take care with the magnets. Wikipedia has an explicit <a href="https://en.wikipedia.org/wiki/Neodymium_magnet#Hazards">hazards</a> section on its magnet page. I also include a non-exhaustive list below.</p>
<p>Although these magnets are small they are easily strong enough to give you a nasty nip if your skin gets caught between them.</p>
<p>Another potential problem is the magnets shattering if dropped, or if one flies into another.</p>
<p>In general you need to remember that modern magnets are strong, and the forces typically increase as distance decreases. So, if something is sufficiently attracted to start moving, it is likely to accelerate very rapidly. That applies to magnets, but also to small steel things like screwdrivers and knife blades.</p>
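<p>That “accelerate very rapidly” point can be made quantitative. In the far field the force between two magnetic dipoles falls off as the fourth power of separation, so halving the distance multiplies the force by sixteen. Here is a quick Python illustration using the standard coaxial-dipole formula; note that treating real disc magnets as point dipoles is only a rough approximation at close range.</p>

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space

def coaxial_dipole_force(m1, m2, r):
    # far-field attractive force between two coaxial dipoles:
    #   F = 3 * mu0 * m1 * m2 / (2 * pi * r**4)
    # m1, m2 are magnetic moments in A m^2, r the separation in metres
    return 3 * MU0 * m1 * m2 / (2 * math.pi * r ** 4)

# halving the distance gives 2**4 = 16 times the force
ratio = coaxial_dipole_force(1, 1, 0.01) / coaxial_dipole_force(1, 1, 0.02)
```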
<h2>Plastic pots</h2>
<p>I tried, without much success, to wrap plastic sample containers. The basic problem seems to be that even if the container doesn’t melt, it does become flexible, and thus gets deformed as the tubing shrinks. Were I trying it again, I’d experiment with a short immersion in boiling water in the hope that the tube wouldn’t have time to melt. </p>A2F497BE-D88B-11E8-B126-07A3A1C0BB4E2018-10-25T19:24:32:32Z2018-10-25T20:47:34:34ZThe AD9833Martin Oldfield<p>Fun with the <span class="caps">AD9833 </span>clock generator. </p><h2>Introduction</h2>
<p>About three years ago, I spent some time generating <a href="../../2015/06/ad9850-lissajous.html">Lissajous figures</a> with a couple of <a href="https://www.analog.com/en/products/ad9850.html"><span class="caps">AD9850</span></a> <span class="caps">DDS </span>synthesizers.</p>
<p>Today (October 2018) the <span class="caps">AD9850 </span>boards on eBay go for about £10, but you can buy a similar product based around the <a href="https://www.analog.com/en/products/ad9833.html"><span class="caps">AD9833</span></a> for about £2.50 or £5 with buffers and fancy connectors.</p>
<p>The <span class="caps">AD9850 </span>handles higher-frequencies (up to 60MHz) whilst the <span class="caps">AD8933 </span>is limited to 12.5MHz. That’s fine for me though: I’m interested in frequencies below 100kHz. To be pedantic the <span class="caps">AD9833 </span>can handle frequencies up to half the frequency of the reference clock, but all the eBay boards use a 25MHz oscillator.</p>
<p>I was interested in using these to build a swept-frequency generator i.e. to create a signal which is basically sinusoidal with slowly increasing frequency.</p>
<p>Incidentally you can also get similar boards based around the Silicon Labs <a href="https://www.silabs.com/documents/public/data-sheets/Si5351-B.pdf">Si5351</a> chip. These generate multiple clocks and cost about £10.</p>
<h2>A conceptual view</h2>
<p>At the heart of the <span class="caps">AD9833 </span>is a phase accumulator which increments at a rate determined by both the external master clock and the value loaded into one of the chip’s registers. The official block diagram shows this:</p>
<p><img src="ad9833-fbl.png" alt="" class="img_noborder" /></p>
<p>You can see there is more than I’ve described. In particular:</p>
<ul>
<li>The <span class="caps">AD9833 </span>lets you store two different frequencies and then switch between them easily. This makes it easier to implement <a href="https://en.wikipedia.org/wiki/Frequency-shift_keying">frequency-shift keying</a>.</li>
<li>The phase of the output can also be tweaked. This makes it easier to implement <a href="https://en.wikipedia.org/wiki/Phase-shift_keying">phase-shift keying</a>.</li>
</ul>
<p>However, we can ignore these if we just want to generate a slowly changing sine wave.</p>
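<p>To make the phase-accumulator idea concrete, here is a minimal Python simulation (my sketch, not the chip’s actual logic): on every master-clock tick a 28-bit tuning word is added to the accumulator, and the accumulator value indexes one cycle of a sine wave. It also shows why the frequency resolution is 25MHz / 2<sup>28</sup>, about 0.093Hz.</p>

```python
import math

F_CLK = 25e6      # master clock on the eBay boards
ACC_BITS = 28     # width of the AD9833 phase accumulator

def tuning_word(f_out):
    # register value N such that f_out ~= F_CLK * N / 2**ACC_BITS
    return int(round(f_out * (1 << ACC_BITS) / F_CLK))

def dds_samples(f_out, n):
    # first n output samples of a simulated phase accumulator
    word = tuning_word(f_out)
    acc, out = 0, []
    for _ in range(n):
        out.append(math.sin(2 * math.pi * acc / (1 << ACC_BITS)))
        acc = (acc + word) & ((1 << ACC_BITS) - 1)
    return out

# smallest frequency step: one count of the tuning word
resolution = F_CLK / (1 << ACC_BITS)   # about 0.093 Hz
```

Because the accumulator simply wraps around, changing the tuning word changes the frequency without any discontinuity in phase, which is exactly the behaviour we want for a slow sweep.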
<h2>Arduino interfacing</h2>
<p>Google will furnish many articles on <a href="https://www.google.com/search?q=arduino+ad9833">ad9833 arduino</a>, but their relative merits are not obvious.</p>
<p>Most of the libraries implement a C++ class which hides the functionality of the chip behind a nice <span class="caps">API.</span> Sadly the one I tried seemed to reset the phase accumulator when the frequency changed, leading to discontinuities in the output.</p>
<p>Given the datasheet, it is simple to program the chip to generate a single frequency, so I thought it better to just drive it directly.</p>
<p>There is one wrinkle though: although the interface to the <span class="caps">AD9833 </span>is essentially <span class="caps">SPI, </span>blog posts commonly talk about finding it hard to use the system <span class="caps">SPI </span>and instead use bespoke bit-banging code. I followed that advice here, and stole the <span class="caps">SPI </span>code from Marco Colli’s <a href="https://github.com/MajicDesigns/MD_AD9833">library</a>.</p>
<pre><code>// for chip info see https://www.analog.com/en/products/ad9833.html
// SPI code taken from https://github.com/MajicDesigns/MD_AD9833/
const uint8_t _dataPin = 11;
const uint8_t _clkPin = 13;
const uint8_t _fsyncPin = 10;
// send raw 16-bit word
void spiSend(const uint16_t data)
{
digitalWrite(_fsyncPin, LOW);
uint16_t m = 1UL << 15;
for (uint8_t i = 0; i < 16; i++)
{
digitalWrite(_dataPin, data & m ? HIGH : LOW);
digitalWrite(_clkPin, LOW); //data is valid on falling edge
digitalWrite(_clkPin, HIGH);
m >>= 1;
}
digitalWrite(_dataPin, LOW); //idle low
digitalWrite(_fsyncPin, HIGH);
}
void setFreq(double f)
{
const uint16_t b28 = (1UL << 13);
const uint16_t freq = (1UL << 14);
const double f_clk = 25e6;
const double scale = 1UL << 28;
const uint32_t n_reg = f * scale / f_clk;
const uint16_t f_low = n_reg & 0x3fffUL;
const uint16_t f_high = (n_reg >> 14) & 0x3fffUL;
spiSend(b28);
spiSend(f_low | freq);
spiSend(f_high | freq);
}
void setup() {
pinMode(_clkPin, OUTPUT);
pinMode(_fsyncPin, OUTPUT);
pinMode(_dataPin, OUTPUT);
digitalWrite(_fsyncPin, HIGH);
digitalWrite(_clkPin, LOW);
}
void loop() {
setFreq(800.0);
delay(50);
setFreq(1600.0);
delay(50);
}</code></pre>
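<p>The loop above simply alternates between two tones, a crude form of the frequency-shift keying mentioned earlier. The register traffic generated by setFreq() is easy to check on paper; this little Python transcription of the same arithmetic (mine, not library code) reproduces the three 16-bit words sent for a given frequency:</p>

```python
F_CLK = 25e6          # reference clock on the eBay AD9833 boards
B28   = 1 << 13       # control bit: load FREQ0 as two 14-bit writes
FREQ0 = 1 << 14       # register-address bits selecting FREQ0

def ad9833_words(f):
    # mirror of the C setFreq(): one control word, then the low
    # and high 14 bits of the 28-bit tuning word
    n_reg = int(f * (1 << 28) / F_CLK)
    f_low = n_reg & 0x3fff
    f_high = (n_reg >> 14) & 0x3fff
    return [B28, FREQ0 | f_low, FREQ0 | f_high]
```

For 800Hz the tuning word is 8589, which fits entirely in the low 14 bits, so the third word is just the bare FREQ0 address.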
<h2>Raspberry Pi</h2>
<p>The code above is entirely self-contained, so it is easy to port it to the Raspberry Pi. I switched the code from C to Python, targeting the <a href="https://gpiozero.readthedocs.io/en/stable/">gpiozero</a> <span class="caps">API.</span></p>
<pre><code>import gpiozero
class AD9833:
def __init__(self, data, clk, fsync):
self.dataPin = gpiozero.OutputDevice(pin = data)
self.clkPin = gpiozero.OutputDevice(pin = clk)
self.fsyncPin = gpiozero.OutputDevice(pin = fsync)
self.fsyncPin.on()
self.clkPin.on()
self.dataPin.off()
self.clk_freq = 25.0e6
def set_freq(self, f):
flag_b28 = 1 << 13
flag_freq = 1 << 14
scale = 1 << 28
n_reg = int(f * scale / self.clk_freq)
n_low = n_reg & 0x3fff
n_hi = (n_reg >> 14) & 0x3fff
self.send16(flag_b28)
self.send16(flag_freq | n_low)
self.send16(flag_freq | n_hi)
def send16(self, n):
self.fsyncPin.off()
mask = 1 << 15
for i in range(0, 16):
self.dataPin.value = bool(n & mask)
self.clkPin.off()
self.clkPin.on()
mask = mask >> 1
self.dataPin.off()
self.fsyncPin.on()
ad = AD9833(10, 11, 8)
while 1:
for f in range(10,10000):
ad.set_freq(f)
</code></pre>
<h2>Code</h2>
<p>You can get the code from <a href="https://github.com/mjoldfield/ad9833">GitHub</a>. </p>A5C40266-9BE8-11E8-AC19-98CD95A8631C2018-08-09T15:26:34:34Z2018-08-10T16:52:18:18ZBlack Magic Probe on NucleoMartin Oldfield<p>Explicit instructions for flashing the Black Magic Probe firmware onto the ST-Link part of a Nucleo dev board. </p><h2>Introduction</h2>
<p>The <a href="https://github.com/blacksphere/blackmagic/wiki">Black Magic Probe</a> is an open source <span class="caps">JTAG </span>and <span class="caps">SWD </span>adapter with integrated support for <span class="caps">GDB</span>’s remote debugging protocol. Essentially the <span class="caps">BMP </span>bridges between your host machine and the target <span class="caps">ARM.</span> You just connect it, point gdb at it, and get to work.</p>
<p>As you can see below, the <span class="caps">BMP </span>is tiny and connects to the target with cables, which is great for some projects, but I found myself missing the single, solid <span class="caps">PCB </span>of <span class="caps">STM</span>’s Nucleo and Discovery boards.</p>
<p><a href="./bmp-nuc.jpg"><img src="bmp-nuc.jpg" alt="" class="img_border" /></a></p>
<p>However, the <span class="caps">BMP </span>firmware explicitly mentions <a href="https://github.com/blacksphere/blackmagic/wiki/Debugger-Hardware#sw-link">ST-Link compatibility</a>, so I decided to replace the ST-Link firmware on one of my Nucleo boards. Although a lot of helpful information is available online, the process proved to be quite a palaver, so I am making these notes in case I want to do it again. I should add that there’s nothing intrinsically difficult, but I’d forgotten or didn’t know a few things which made it rather frustrating.</p>
<p>The <a href="http://nuft.github.io/arm/2015/08/24/blackmagic-stlink.html">best single article</a> about replacing the ST-Link firmware comes from <a href="https://github.com/nuft">Michael Spieler</a> and discusses flashing the Nucleo’s ST-Link from a Discovery board. I wish I’d found this at the start of the job, rather than the end. Thanks to Chuck McManis in the blacksphere/blackmagic Gitter channel for telling me about it.</p>
<h2>Tools</h2>
<p>I do most of my development on a Mac, so these notes are Mac-centric, though I expect they’d work on Linux without many changes.</p>
<p>As we will discuss below, the standard ST-Link firmware is locked, so whichever tool you use to flash the new firmware must be able to unlock the flash first.</p>
<p>To program <span class="caps">ARM</span>s I usually use texane’s <a href="https://github.com/texane/stlink">stlink</a> software to drive the ST-Link hardware, but it can’t do this job. Happily <a href="http://openocd.org">openocd</a> can. So with a nod to symmetry I used a second Nucleo board and openocd to flash the firmware on the target ST-Link.</p>
<p>I think other <span class="caps">ARM </span>programmers could also be used: notably the Black Magic Probe itself, or indeed the ST-Link when driven by ST’s official Windows tools.</p>
<h2>Building the firmware</h2>
<p>This is well documented all over the place e.g. on <a href="http://esden.net/2014/12/29/black-magic-discovery/">Piotr Esden-Tempski’s blog</a>.</p>
<p>Assuming that you have a suitable <span class="caps">ARM </span>toolchain already installed:</p>
<pre><code class="small">$ git clone git@github.com:blacksphere/blackmagic.git
$ cd blackmagic
$ git submodule init
$ git submodule update
$ make
$ cd src
$ make clean
$ make PROBE_HOST=stlink</code></pre>
<h2>Flashing in detail</h2>
<p>The ST-Link part of the Nucleo is based around the <span class="caps">STM32F103CBT6 </span>processor. It is flashed with ST’s firmware, which provides the ST-Link functionality and also a proprietary firmware upgrade feature.</p>
<p>Some work has been done to reverse engineer the update process but it’s not complete. I found articles by <a href="http://www.taylorkillian.com/2013/01/retrieving-st-linkv2-firmware-from.html">Taylor Killian</a> and <a href="https://lujji.github.io/blog/reverse-engineering-stlink-firmware/">lujji</a> instructive: if I wanted to revisit this problem I’d start there.</p>
<p>Instead I think the best approach is to wipe the ST-Link stuff entirely and replace it with the Black Magic Probe firmware. We divide the task into three stages: preparation; flashing; tidying-up.</p>
<p><span class="caps">NOTE</span>: Having wiped the ST-Link firmware, it is difficult to put it back, at least with the tools I found for the Mac. On the other hand, Nucleos are cheap, so I was quite happy to regard this as an irreversible step.</p>
<h3>Preparation</h3>
<p>Our aim here is to connect the host ST-Link we will use for programming to the target ST-Link. Sadly this requires a soldering iron!</p>
<p>On the bottom of the target ST-Link you need to remove four zero-ohm bridges from the ‘DEFAULT’ positions and make bridges across the four ‘RESERVED’ positions. This connects the four-pin <span class="caps">CN2 </span>header to the <span class="caps">ARM </span>in the ST-Link rather than the normal target <span class="caps">ARM </span>on the Nucleo board.</p>
<p><a href="./nbot.png"><img src="nbot.png" alt="" class="img_border" /></a></p>
<p>Having tweaked the links, we now need to connect the target board to the other Nucleo. The four pins on <span class="caps">CN2 </span>are the only connections you need: they provide both power and the programming signals. Explicitly, the pins are Vcc, <span class="caps">SWCLK,</span> Ground, and <span class="caps">SWDIO.</span></p>
<p>Remove the two jumpers from <span class="caps">CN2 </span>and connect four short wires to them instead. Keep the jumpers safe though.</p>
<p><a href="./ntop.png"><img src="ntop.png" alt="" class="img_border" /></a></p>
<p>On the host Nucleo, remove the jumpers from <span class="caps">CN2 </span>so that the programming signals aren’t routed to the on-board target. Three of the signals we need come from the six-pin <span class="caps">SWD </span>header <span class="caps">CN4. </span> We also need a 3.3V rail for Vcc. There are various sources but the safest is probably from the block of power headers.</p>
<p><a href="./ntop-host.png"><img src="ntop-host.png" alt="" class="img_border" /></a></p>
<p>At this point the hardware is all set-up.</p>
<h3>Flashing</h3>
<p>Recall that we will use openocd to unlock the flash on the target ST-Link and upload the new firmware. Begin by starting the openocd daemon, here on port 4567:</p>
<pre><code class="small">$ openocd -f interface/stlink-v2-1.cfg -c 'transport select hla_swd'
-f target/stm32f1x.cfg -c 'telnet_port 4567'
Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
hla_swd
Info : The selected transport took over low-level target ...
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v20 API v2 SWIM v4 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 3.228402
Info : stm32f1x.cpu: hardware has 6 breakpoints, 4 watchpoints</code></pre>
<p><span class="caps">NOTE</span>: The Nucleo needs stlink-v2-1.cfg; the Discovery stlink-v2.cfg. If you get this wrong, you get an unhelpful error message:</p>
<pre><code class="small">...
Info : clock speed 950 kHz
Error: open failed
in procedure 'init'
in procedure 'ocd_bouncer'</code></pre>
<p>Asking Google to help with this error really leads you down the rabbit hole! The ‘open failed’ error is reasonably generic and covers all manner of problems: accordingly most of the solutions suggested by the Internet are irrelevant.</p>
<p>Having started the daemon we can now issue the programming commands from a second shell:</p>
<pre><code class="small">$ telnet localhost 4567
...
Escape character is '^]'.
Open On-Chip Debugger
> init
> reset halt
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08002e0c msp: 0x20001ce0
> stm32f1x unlock 0
device id = 0x20036410
STM32 flash size failed, probe inaccurate - assuming 128k flash
flash size = 128kbytes
Device Security Bit Set
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0x20001ce0
stm32x unlocked.
INFO: a reset or power cycle is required for the new settings to
take effect.
> reset halt
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0xfffffffe msp: 0xfffffffc
> stm32f1x mass_erase 0
stm32x mass erase complete
> flash write_bank 0 blackmagic.bin 0x2000
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0xfffffffc
wrote 70932 bytes from file blackmagic.bin to flash bank 0
at offset 0x00002000 in 2.148439s (32.242 KiB/s)
> flash write_bank 0 blackmagic_dfu.bin 0
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0xfffffffc
wrote 7284 bytes from file blackmagic_dfu.bin to flash bank 0
at offset 0x00000000 in 0.276390s (25.736 KiB/s)
> exit
Connection closed by foreign host.</code></pre>
<p>At this point the firmware has been upgraded!</p>
<p>To check:</p>
<pre><code class="small">$ system_profiler SPUSBDataType | fgrep Black
Black Magic Probe (STLINK), (Firmware v1.6.1-205-gc5c0783):
Manufacturer: Black Sphere Technologies</code></pre>
<h4>Future upgrades</h4>
<p>Having converted the ST-Link to a Black Magic Probe, you can upgrade the firmware with the usual <a href="https://en.wikipedia.org/wiki/USB#DFU"><span class="caps">USB DFU</span></a> tools.</p>
<h3>Tidying up</h3>
<p>There are basically only a couple of tasks to do. Firstly power-up the soldering iron, and swap the solder-bridges on the underside of the board back to the ‘DEFAULT’ side, breaking the ‘RESERVED’ set. I usually end up with lots of flux residue at this point, and so clean the board.</p>
<p>Finally replace the two jumpers in <span class="caps">CN2 </span>so that the new Black Magic Probe can see the target <span class="caps">ARM </span>on the Nucleo board.</p>
<p>When that’s done, you can do a final check:</p>
<pre><code class="small">$ arm-none-eabi-gdb
GNU gdb ...
...
(gdb) target extended-remote /dev/cu.usbmodemB5C8D9E1
Remote debugging using /dev/cu.usbmodemB5C8D9E1
(gdb) monitor swdp_scan
Target voltage: unknown
Available Targets:
No. Att Driver
1 STM32F401E</code></pre>
<p>Note that you need to use the /dev/cu.usbmodem* device, not the /dev/tty.usbmodem* device.</p>
<p>Note too, that jtag_scan appears to fail:</p>
<pre><code class="small">(gdb) monitor jtag_scan
Target voltage: unknown
JTAG device scan failed!</code></pre>
<h2>Conclusions</h2>
<p>In practice, if everything just works, converting the ST-Link portion of a Nucleo board to a Black Magic Probe takes about twenty minutes. The change is effectively permanent though, so if you want a normal Nucleo again talk to Digikey or Farnell. </p>F543EDF2-7BE4-11E8-A039-97EF87A8D55E2018-06-29T21:39:23:23Z2018-08-02T09:16:04:04ZA HomeKit LightMartin Oldfield<p>A very simple, HomeKit compatible, light. </p><p><em>Update 2018-07-11: Added note about Pi model, evdev installation, and gamma correction.</em> <em>Update 2018-08-02: Deprecated QR variant of <span class="caps">HAP </span>python.</em></p>
<h2>Introduction</h2>
<p>The article describes how to make a very simple light which you can control through Apple’s <a href="https://www.apple.com/uk/ios/home/">HomeKit</a> service.</p>
<p>It is very much a proof of principle, rather than a practical device, and all the hard work has been done by other people!</p>
<h2>Hardware</h2>
<p>The key element in the project is a Raspberry Pi with an internet connection. Originally I used an old Model A, but I found this often led to ‘Accessory not responding’ errors. In practice a Pi Zero W worked much better.</p>
<p>The notes below assume you’ve set up the Pi roughly along <a href="./rpi-setup.html">these lines</a>.</p>
<p>The next part of the hardware is to connect an <span class="caps">LED </span>to <span class="caps">GPIO</span> 18.</p>
<p>If you’ve installed the gpiozero package, you can identify pin 18 with the pinout command.</p>
<p>And that’s the hardware done! You can see my version, cobbled together with stuff lying within easy reach on my desk:</p>
<p><a href="./hap-light.jpg"><img src="./hap-light.jpg" alt="" class="img_border" /></a></p>
<h3>Testing</h3>
<p>To test the hardware do this (note that you shouldn’t need to be root):</p>
<pre><code class="small">$ echo 18 > /sys/class/gpio/export
$ echo out > /sys/class/gpio/gpio18/direction
$ echo 1 > /sys/class/gpio/gpio18/value
$ echo 0 > /sys/class/gpio/gpio18/value</code></pre>
<p>You should see the <span class="caps">LED </span>turn on, then off.</p>
<h2><span class="caps">PWM </span>support</h2>
<p>Of course, any self-respecting light these days can be dimmed, so we should add that. Happily we can use the <span class="caps">PWM </span>drivers built into the kernel. To do this, edit /boot/config.txt and add this line:</p>
<pre><code class="small">dtoverlay=pwm</code></pre>
<p>Now reboot the machine.</p>
<p>The runes above tell the Raspberry Pi to use a devicetree overlay to set up a <a href="https://en.wikipedia.org/wiki/Pulse-width_modulation">pulse-width modulation</a> device on <span class="caps">GPIO </span>pin 18. If you want to know more about this you might find <a href="../../2017/03/rpi-devicetree.html">an article I wrote about devicetree</a> interesting.</p>
<p>If you just want to control the <span class="caps">LED </span>though, you just need to know about the sysfs interface:</p>
<pre><code class="small">$ echo 0 > /sys/class/pwm/pwmchip0/export
$ echo 1000000 > /sys/class/pwm/pwmchip0/pwm0/period
$ echo 500000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle
$ echo 1 > /sys/class/pwm/pwmchip0/pwm0/enable
$ echo 900000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle
$ echo 100000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle</code></pre>
<p>The units above are all nanoseconds, and so these commands:</p>
<ul>
<li>Set up a 1kHz signal on <span class="caps">GPIO18</span>;</li>
<li>Initially set the signal on 50% of the time;</li>
<li>Change this fraction to 90%, then 10%.</li>
</ul>
<p>One of the nice features of all this is that we have been able to handle all the hardware by just configuring the standard kernel software. This leaves our application code free from any need to talk directly to hardware, free from any need to have elevated permissions, and free to run without significant timing constraints.</p>
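<p>A related practical point (the gamma correction mentioned in the update note at the top): perceived brightness is far from linear in duty cycle, so a dimmer that writes the brightness fraction straight into duty_cycle crams most of the visible change into the bottom of the range. Raising the fraction to a power of roughly 2.2 before scaling is a common fix; the exponent is an assumption borrowed from display gamma, so tune it by eye. A sketch of the arithmetic:</p>

```python
def duty_cycle_ns(brightness, period_ns=1_000_000, gamma=2.2):
    # map a 0..1 brightness fraction to a sysfs duty_cycle value
    # (in nanoseconds), applying a rough perceptual gamma correction;
    # gamma=2.2 is an assumption, not a calibrated value
    if not 0.0 <= brightness <= 1.0:
        raise ValueError("brightness must be between 0 and 1")
    return int(period_ns * brightness ** gamma)
```

With gamma=1.0 this reduces to the linear mapping used in the sysfs examples above; the result would be written to /sys/class/pwm/pwmchip0/pwm0/duty_cycle as before.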
<h2>The HomeKit Accessory Protocol</h2>
<p>These days, it is relatively easy to see the <a href="https://developer.apple.com/support/homekit-accessory-protocol/">official documentation</a> from Apple. There are still hoops to jump through if you want to do this commercially, but those don’t concern us here.</p>
<p>Now, actually implementing the Protocol from scratch is a major task but happily someone has done it: Ivan Kalchev’s <a href="https://github.com/ikalchev/HAP-python"><span class="caps">HAP</span>-python</a> library. Thank you Ivan!</p>
<p>I should say that there are other similar projects: some in python, many more in Javascript. <a href="https://github.com/home-assistant">Home Assistant</a> uses <span class="caps">HAP</span>-python though, and that’s enough to break the symmetry for me.</p>
<p>To install <span class="caps">HAP</span>-python:</p>
<pre><code class="small">$ sudo apt-get install libavahi-compat-libdnssd-dev
$ pip3 install HAP-python</code></pre>
<p>Sadly this can take quite a lot of time.</p>
<p>Previous versions of this post used the QR version of <span class="caps">HAP</span>-python which displays the pairing code in a pretty QR-code which the iPhone can read. It is great on the command line, but awful if it gets sent to syslog.</p>
<h2>Our software</h2>
<p>Having used existing software to control the <span class="caps">LED </span>and to talk to HomeKit, we now need to write some code to connect the two. Happily it is all very straightforward.</p>
<h3>lamp.py</h3>
<p>Let’s begin with the main program:</p>
<pre><code class="small">import sys
sys.path.append('lib')
import logging
import signal
from pyhap.accessory_driver import AccessoryDriver
from LightBulb import LightBulb
logging.basicConfig(level=logging.INFO)
driver = AccessoryDriver(port=51826)
lamp = LightBulb(driver, 'ToyLamp')
driver.add_accessory(accessory=lamp)
signal.signal(signal.SIGTERM, driver.signal_handler)
driver.start()</code></pre>
<p>This code is based heavily on the <a href="https://github.com/ikalchev/HAP-python/blob/master/main.py">example</a> in the <span class="caps">HAP</span>-python repository. In essence, it just instantiates the accessory, the driver, then tells the driver to get on with it.</p>
<h3>Lightbulb.py</h3>
<p>Although this is the largest bit of new code, it is hardly enormous. Happily the <span class="caps">HAP</span>-python repository includes an <a href="https://github.com/ikalchev/HAP-python/blob/dev/accessories/LightBulb.py">example LightBulb</a> accessory: the chief differences below are:</p>
<ul>
<li>I want to be able to dim the bulb so there’s a new Brightness characteristic.</li>
<li>The <span class="caps">LED </span>is attached to a <span class="caps">PWM </span>device which is controlled via the sysfs <span class="caps">API, </span>rather than a binary <span class="caps">GPIO </span>line.</li>
</ul>
<pre><code class="small">from pwm import PWM
from pyhap.accessory import Accessory
from pyhap.const import CATEGORY_LIGHTBULB
class LightBulb(Accessory):
category = CATEGORY_LIGHTBULB
def __init__(self, *args, pwm_channel=0, **kwargs):
super().__init__(*args, **kwargs)
chars = [ ( 'On', self.set_on )
, ( 'Brightness', self.set_brightness )
]
server = self.add_preload_service(
'Lightbulb', chars = [ name for (name,_) in chars ])
for (name, setter) in chars:
server.configure_char(name, setter_callback = setter)
self.pwm_channel = pwm_channel
self.brightness = 1.0 # fraction
self.is_on = False
# Initialize this now, so that it has time to initialize
# properly before we call it
self.pwm_device = PWM(self.pwm_channel)
self.pwm_device.export()
def set_on(self, value):
self.is_on = bool(value)
self.set_bulb()
def set_brightness(self, value):
# HAP spec says brightness is specified as a percentage
self.brightness = float(value) / 100.0
self.set_bulb()
# push local state to PWM
def set_bulb(self):
if self.is_on:
self.set_pwm_state(self.brightness)
else:
self.set_pwm_state(0)
# actually drive PWM device
def set_pwm_state(self, f):
pwm_period = 1000000 # 1ms = 1000000ns => 1kHz
self.pwm_device.period = pwm_period
self.pwm_device.duty_cycle = int(f * pwm_period)
self.pwm_device.enable = True
def stop(self):
super().stop()</code></pre>
<h3>pwm.py</h3>
<p>To access the <span class="caps">PWM </span>sysfs <span class="caps">API,</span> I’m using code written by Scott Ellis. You can get his original from <a href="https://raw.githubusercontent.com/scottellis/pwmpy/master/pwm.py">GitHub</a></p>
<p>The only local difference is that I added code to wait for the <span class="caps">PWM </span>device to be exported: particularly on older hardware, this seems to take a while. A better patch might check that the udev rules which fix the <span class="caps">PWM </span>device permissions have had time to fire too.</p>
<pre><code class="small"> ...
def export(self):
"""Export the channel for use through the sysfs interface.
Required before first use.
"""
if not self.is_exported():
with open(self.base + '/export', 'w') as f:
f.write('{:d}'.format(self._channel))
# wait for the device to appear, so that immediate attempts
# to configure it don't fail
max_wait = 10.0 # in seconds
sleepq = 0.1 # in seconds
timeout = time.monotonic() + max_wait
while not self.is_exported():
if time.monotonic() > timeout:
raise TimeoutError("Unable to export PWM device")
time.sleep(sleepq)
...</code></pre>
<h2>Basic operation</h2>
<p>As you’ll know if you’ve used HomeKit before, you have to add new accessories to the system. Happily <span class="caps">HAP</span>-python makes this easy: the first time the accessory is run, it displays a QR code on the command line, which the Home app on your iPhone understands.</p>
<p>Our accessory isn’t certified, so you have to explicitly approve it.</p>
<p>The AccessoryDriver persists its state in a local file, by default called accessory.state, so subsequent invocations of the accessory don’t go through the pairing routine and generate the QR code. If you should need to re-pair the accessory, delete the state file.</p>
<h2>systemd</h2>
<p>Having paired the accessory, you will probably want to start it automatically when the system boots.</p>
<p>To do this, we use <a href="https://en.wikipedia.org/wiki/Systemd">systemd</a>. Specifically, with this script:</p>
<pre><code class="small">[Unit]
Description="HomeKit Lamp"
[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/mjo-homekit/code
ExecStart=/usr/bin/python3 /home/pi/mjo-homekit/code/lamp.py
StandardInput=null
StandardOutput=syslog
StandardError=syslog
Restart=on-failure
[Install]
WantedBy=multi-user.target</code></pre>
<p>I don’t claim this is optimal, but it appears to work. Some caveats:</p>
<ul>
<li>It is not robust with respect to the network disappearing and reappearing.</li>
<li>It runs as the standard pi user: it would be better to set up a new one.</li>
</ul>
<h2>Local control</h2>
<p>Although it’s handy to be able to control the light from the Internet, I think it’s helpful to have controls on the light too.</p>
<p>When I <a href="../../2017/05/yauiotl.html">last built an IoT light</a> I included a combined push-button and rotary-encoder to control both brightness and whether the light was on. Unsurprisingly, I’ve now replaced the software I wrote then with this HomeKit version, and so I want similar local controls.</p>
<p><a href="../../2017/05/yauiotl-case.jpg"><img src="../../2017/05/yauiotl-case.jpg" alt="" class="img_border_small" /></a></p>
<h3>Events</h3>
<p>Just as it is convenient to use a devicetree overlay to create a <span class="caps">PWM </span>device which drives the light, we can also use overlays to create input devices for the rotary-encoder and power button.</p>
<p>I’ve made more <a href="../../2017/03/rpi-devicetree.html">detailed notes</a>, but the key lines to include in /boot/config.txt are:</p>
<pre><code>dtoverlay=gpio-key,gpio=25,label=MYBTN,keycode=0x101
dtoverlay=rotary-encoder,pin_a=7,pin_b=8,relative_axis=1</code></pre>
<p>As you can probably guess, you should connect the rotary-encoder to <span class="caps">GPIO</span>s 7 and 8, and the power button to <span class="caps">GPIO</span> 25. The overlays enable pull-ups in the processor, so you don’t need extra resistors.</p>
<p>Having installed the overlays and connected the hardware, devices appear in /dev/input/eventN. You can test them with evtest, or talk to them from Python with the evdev library.</p>
<p>Sadly evdev isn’t packaged in raspbian, but you can install it easily with pip3:</p>
<pre><code>$ pip3 install evdev</code></pre>
<h3>New code</h3>
<p>We need to make several changes to our accessory code: add code to parse the events, add code to push local changes to the HomeKit world, and finally arrange for the code to be called appropriately.</p>
<p>Parsing the events is made simpler because we don’t have to worry about distinguishing different buttons.</p>
<pre><code class="small">async def event_handler(self, dev):
    async for e in dev.async_read_loop():
        # all events _might_ update local state variables...
        t = e.type
        v = e.value

        # key presses toggle state...
        if t == evdev.ecodes.EV_KEY and v == 1:
            self.is_on = not(self.is_on)
            self.local_update = True

        # rotary encoder changes brightness (and turns light on)...
        elif t == evdev.ecodes.EV_REL and v != 0:
            if not self.is_on:
                self.is_on = True
                self.local_update = True

            b = clamp(0, 1.0, self.brightness + v * self.bri_delta)
            if self.brightness != b:
                self.brightness = b
                self.local_update = True

        # if the state has changed push those changes to the bulb
        if self.local_update:
            self.set_bulb()</code></pre>
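<p>The handler relies on a <code>clamp</code> helper which isn’t defined in the listing. Judging from the call site, <code>clamp(0, 1.0, …)</code>, a minimal definition would be the following (the argument order is an assumption inferred from that call):</p>

```python
def clamp(lo, hi, x):
    """Constrain x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))
```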
<p>You’ll see that this code is marked <a href="https://docs.python.org/3/library/asyncio-task.html">async</a> which marks it as an asynchronous coroutine. This makes it easy to integrate into the main AccessoryDriver code. In the init method just add:</p>
<pre><code>for d in evdevs:
    driver.async_add_job(self.event_handler(d))</code></pre>
<p>Where evdevs is a list of devices passed to the accessory when it’s created.</p>
<p>To push changes back to HomeKit, we use another async event, though <span class="caps">HAP</span>-python provides some syntactic sugar to make the code a bit sweeter:</p>
<pre><code class="small">@Accessory.run_at_interval(1)
def run(self):
    if self.local_update:
        self.char['On'].set_value(self.is_on)
        self.char['Brightness'].set_value(int(100.0 * self.brightness))
        self.local_update = False</code></pre>
<p>Note that we only push updates if there are any changes: this both reduces spurious traffic and stops local and remote updates from fighting.</p>
<h2>Improvements</h2>
<p>In practice, setting the <span class="caps">PWM </span>duty-cycle to the fractional brightness gives very crude control at low levels, and the perceived change in intensity isn’t consistent. Better results come from employing <a href="https://en.wikipedia.org/wiki/Gamma_correction">gamma correction</a>, here with γ = 2.5:</p>
<pre><code># push local state to PWM
def set_bulb(self):
    if self.is_on and self.brightness > 0:
        theta = self.brightness ** 2.5
        self.set_pwm_state(theta)
    else:
        self.set_pwm_state(0)</code></pre>
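<p>To see the effect, here is the same mapping pulled out as a stand-alone sketch (the <code>duty_cycle</code> name is mine): at half perceived brightness the duty-cycle is only about 18%, rather than 50%.</p>

```python
# Perceived-brightness fraction -> PWM duty-cycle, with gamma = 2.5
# as in set_bulb() above.
def duty_cycle(brightness, gamma=2.5):
    return brightness ** gamma

for b in (0.1, 0.25, 0.5, 1.0):
    print(f"brightness {b:.2f} -> duty {duty_cycle(b):.3f}")
```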
<h2>GitHub</h2>
<p>You can get all the code for this from <a href="https://github.com/mjoldfield/mjo-homekit">GitHub</a>.</p>
<h2>Conclusions</h2>
<p>This project worked well: the final light works reliably, integrates seamlessly into the Apple HomeKit ecosystem, and the whole process wasn’t particularly difficult.</p>
<p>Much of the credit for this goes to Ivan Kalchev’s <span class="caps">HAP</span>-python library, which does all the heavy-lifting. Abstracted a little, you can see it as yet another nice project enabled by cheap commodity electronics and fine open-source software.</p>
<p>I think some credit is also due to Apple though. By placing HomeKit devices firmly on the local network, and only allowing remote access through some sort of Hub (e.g. Apple TV or HomePod), Apple have made the security problems much easier. In particular we don’t have to worry about setting up crypto-certificates so as to limit access to the right people.</p>
<p>Overall, then, it was a fun thing to do, and if you’ve got Apple devices around then I recommend you start building your own HomeKit trinkets.</p>
<p><b>A HomeKit Enviro pHAT Sensor</b> (Martin Oldfield, 2018-08-01)</p>
<p>Publishing sensor data (temperature and ambient-light level) from the Raspberry Pi Enviro pHAT to HomeKit.</p><h2>Introduction</h2>
<p>Having built a <a href="../06/homekit-light.html">HomeKit compatible light</a> I thought it was time to build a sensor too. Rather than build a finished product, I wanted a proof-of-concept to test reliability and convenience. So, the biggest design concern was simplicity and speed of construction.</p>
<p>One slightly longer term project is to automate the lights at home, so that they come on when it gets dark. Some sort of ambient light sensor seems a key part of this. I am also quite interested to monitor the environment at home, so ideally I’d like temperature and humidity sensors too.</p>
<h2>The Enviro pHAT</h2>
<p>To keep things simple, I wanted to use off-the-shelf hardware and so acquired an <a href="https://shop.pimoroni.com/products/enviro-phat">Enviro pHAT</a> board from Pimoroni. This board lacks a humidity sensor, but has a convenient python library to simplify the software. The board also measures motion and barometric pressure, but I’m ignoring these for now.</p>
<p>Happily Pimoroni provide a nice python library which <a href="https://github.com/pimoroni/enviro-phat">talks to the board</a>.</p>
<p>Temperature sensing is done by the <a href="https://www.bosch-sensortec.com/bst/products/all_products/bmp280"><span class="caps">BMP280</span></a> pressure sensor, which isn’t ideal. The data sheet says:</p>
<blockquote><p>Temperature measured by the internal temperature sensor. This temperature value depends on the <span class="caps">PCB </span>temperature, sensor element self-heating and ambient temperature and is typically above ambient temperature.</p></blockquote>
<p>Besides these issues, the sensor is reasonably close to the <span class="caps">CPU </span>on the Pi which gets quite warm. The net effect of all this is that the sensor reads significantly high: as I write this a thermometer reads about 22°C, the <span class="caps">BMP280 </span>about 30°C. Although these problems could be mitigated by calibration, for real work a different sensor placed some distance from the Pi is probably warranted.</p>
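<p>The simplest such mitigation is a one-point calibration: subtract a fixed offset measured against a trusted thermometer. A sketch using the readings quoted above (illustrative only: in reality the offset varies with CPU load and ambient conditions):</p>

```python
# One-point calibration: the BMP280 read about 30 degC when a trusted
# thermometer read about 22 degC, suggesting a fixed offset of 8 degC.
TEMP_OFFSET_C = 30.0 - 22.0

def calibrated_temperature(raw_c, offset_c=TEMP_OFFSET_C):
    """Correct a raw BMP280 reading by subtracting the measured offset."""
    return raw_c - offset_c

print(calibrated_temperature(30.0))   # -> 22.0
```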
<p>Light is sensed by a <a href="https://ams.com/documents/20143/36005/TCS3472_DS000390_2-00.pdf"><span class="caps">TCS3472</span></a>. This is a full <span class="caps">RGB </span>sensor, but I only use the total brightness value. The HomeKit documentation says I should provide a value in lux: I am just using the number returned by the python library. Subjectively, I want to turn on the lights when the level falls to about 100.</p>
<h2>Software</h2>
<p>You can grab all the code from <a href="https://github.com/mjoldfield/mjo-homekit">GitHub</a>.</p>
<p>As with the light, Ivan Kalchev’s <a href="https://github.com/ikalchev/HAP-python"><span class="caps">HAP</span>-python</a> library handles all the HomeKit stuff. Thank you again Ivan!</p>
<p>The main code for the accessory is shown below:</p>
<pre><code class="small">from pyhap.accessory import Accessory
from pyhap.const import CATEGORY_SENSOR

from envirophat import light, weather

class Ephat(Accessory):

    category = CATEGORY_SENSOR

    def __init__(self, driver, *args, **kwargs):
        super().__init__(driver, *args, **kwargs)

        chars = { 'LightSensor':
                      [ ( 'CurrentAmbientLightLevel', lambda: light.light() ) ]
                , 'TemperatureSensor':
                      [ ( 'CurrentTemperature', lambda: weather.temperature() ) ]
                , 'Switch':
                      [ ( 'On', lambda: light.light() < 100 ) ]
                }

        self.chars = []
        for sname, charlist in chars.items():
            cnames  = [ name for (name,_) in charlist ]
            service = self.add_preload_service(sname, chars = cnames)
            for (name, getter) in charlist:
                c = service.configure_char(name)
                self.chars.append((c, getter))

    @Accessory.run_at_interval(3)
    def run(self):
        for (char, getter) in self.chars:
            v = getter()
            char.set_value(v)</code></pre>
<p>The local <code>chars</code> dictionary defines all the sensors. We walk this structure both to initialize the <span class="caps">HAP</span> Accessory, and to compile a list of characteristics and callbacks in the Accessory’s <code>chars</code> property.</p>
<p>Note that the names for the services and characteristics e.g. LightSensor and CurrentTemperature must match the official Apple standards. You can’t just invent your own (which is why I didn’t add a characteristic for atmospheric pressure).</p>
<p>You will also see that I’ve added a virtual Switch characteristic which turns on when the light level falls below a threshold. This makes it easier to automate things in the Home app: just tell the light to come on when the Switch closes. I really should add other virtual switches with slightly different thresholds so that different lights turn on at slightly different times.</p>
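<p>Generating such a family of switches only needs closures over different thresholds. Here is a hardware-independent sketch in the spirit of the <code>chars</code> table above (<code>read_light</code> stands in for <code>light.light</code>, and wiring each pair into its own Switch service is omitted):</p>

```python
def make_light_switches(read_light, thresholds=(50, 100, 150)):
    """Return one ('On', getter) characteristic per threshold; each getter
    reports True once the light level falls below its threshold."""
    return [('On', (lambda t=t: read_light() < t)) for t in thresholds]

# With a fake sensor reading of 120, only the 150 switch is closed:
switches = make_light_switches(lambda: 120)
print([getter() for _, getter in switches])   # [False, False, True]
```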
<p>Finally, we set up a periodic task (here every three seconds) to make new measurements and update the Accessory’s state.</p>
<h2>Walkthrough</h2>
<p>The notes below assume you’ve set up the Pi roughly along <a href="../06/rpi-setup.html">these lines</a>.</p>
<p>The Enviro pHAT sensors communicate with the Pi over the I²C bus, so you’ll need to enable that e.g. by running raspi-config.</p>
<p>Now install the dependencies:</p>
<pre><code class="small">$ sudo apt-get install libavahi-compat-libdnssd-dev git python3-envirophat
$ pip3 install HAP-python</code></pre>
<p>You will notice that <span class="caps">HAP</span>-python is installed without QR-code support. Although QR codes are convenient if you attach the accessory to your Home from the command line, the QR code is unreadable in system logs (and makes a real mess of them).</p>
<p>Once python3-envirophat has been installed you can use Pimoroni’s test code to check that the hardware is working:</p>
<pre><code class="small">$ wget https://raw.githubusercontent.com/pimoroni/enviro-phat/master/examples/all.py
$ python3 all.py</code></pre>
<p>Finally grab the Accessory code from GitHub, and run it:</p>
<pre><code class="small">$ git clone https://github.com/mjoldfield/mjo-homekit.git
$ cd mjo-homekit/code
$ python3 ephat.py</code></pre>
<p>You should then be able to add the Accessory in the Home app on your iPhone.</p>
<h3>systemd</h3>
<p>If you want to start all this automatically, you can use the systemd script in mjo-homekit/systemd/ephat.service.</p>
<h2>Discussion</h2>
<p>In essence, that’s all there is to this. The sensor works reliably with a minimum of fuss. Clearly things could be improved: it would be nice to collect more data, with better fidelity.</p>
<p>There is one issue: sometimes the sensor gets ‘stuck’ when powered down and then restarted. You can follow this on <a href="https://github.com/ikalchev/HAP-python/issues/140">GitHub</a>.</p>
<p><b>YAUIoTL</b> (Martin Oldfield, 2017-05-20; updated 2018-07-04)</p>
<p>Yet Another Useless Internet of Things Lamp.</p><p><i>Update June 2018: Although I continue to use the hardware described below, I have completely replaced the software with a stack targeting Apple’s HomeKit <span class="caps">API.</span> You can read about that in <a href="../../2018/06/homekit-light.html">another article</a>.</i></p>
<p><a href="./yauiotl-case.jpg"><img src="yauiotl-case.jpg" alt="" class="img_border" /></a></p>
<h2>Introduction</h2>
<p>For a while I’ve wanted a lamp on my desk which throws a reasonable amount of light around, and I took the opportunity to scratch a few ‘I want to play with’ itches. Although the final object looked destined to be Yet Another Useless Internet of Things Lamp, it turned out to be useful! In the process of building it I learnt a lot about Amazon’s IoT service, and generally about using Linux in a more embedded environment.</p>
<h3><em>desiderata</em></h3>
<p>So I want a light on the Internet. Parts of the design are obvious: there needs to be something which emits light, something which controls it, and some sort of network connection.</p>
<p>I also want:</p>
<ul>
<li>diffuse light;</li>
<li>to limit machining to a laser-cutter;</li>
<li>the light to work if the Internet breaks.</li>
</ul>
<p>Happily I wasn’t too fussed about the cost of this. Obviously these aren’t normal engineering constraints: I’m optimizing for fun and educational value, rather than for profit or efficiency. I’m also aware of my limitations: if I try to cut or drill stuff, it always ends up a bit skew-whiff.</p>
<h2>Illumination</h2>
<p>These days there’s an obvious way to make a light: pass a current through a high-power <span class="caps">LED.</span> To avoid direct illumination, it’s not quite clear whether it’s better to shine the light upwards and bounce it off the ceiling, or sideways through some sort of diffuser. I chose the latter, which, coupled with the desire for symmetry, led to a basically square light with four <span class="caps">LED</span>s: one per face.</p>
<p>Although more efficient than incandescent bulbs, <span class="caps">LED</span>s still get hot, so some sort of heatsink will probably be needed. That’s probably going to be made of aluminium, so it might be sensible to treat this as a structural element. However, making this work well would probably violate the ‘no machining’ rule, so I didn’t do it. Instead the <span class="caps">LED</span>/heatsink assembly is clamped into place by a couple of laser-cut parts.</p>
<h3>Choice of <span class="caps">LED</span>s</h3>
<p>These days there are any number of high-quality <span class="caps">LED</span>s designed specifically for lighting, so many in fact that it’s time consuming to consider them all. To make things easier, I limited my search to <span class="caps">LED</span>s from <a href="http://www.cree.com">Cree</a> essentially because they have a good reputation in random parts of the Internet.</p>
<p>Cree make a vast number of plausible products in their <a href="http://www.cree.com/led-components/products/xlamp-leds">XLamp range</a>. The <span class="caps">CXB1304 </span>is the smallest of their most recent family of <span class="caps">LED </span>arrays: you can pump nearly 10W into it and get out roughly 500 lumens of light. Besides choosing the forward voltage between 9V, 18V and 36V, you also have some control over the colour and intensity of light. The full range is enormous, but Digikey only <a href="https://www.digikey.co.uk/products/en?keywords=CXB1304">stock a few options</a>.</p>
<p>I went for the <a href="https://www.digikey.co.uk/product-detail/en/cree-inc/CXB1304-0000-000C0UB230G/CXB1304-0000-000C0UB230G-ND/5124948"><span class="caps">CXB1304</span>-0000-000C0UB230G</a>, which trades a warmer light for less brightness and runs off 9V. In small quantities they cost about £2.50 each. The viewing half-angle, i.e. the angle at which the intensity halves, is 57.5°.</p>
<h4>Mounting clip</h4>
<p>The <span class="caps">LED </span>doesn’t have mounting holes, but Molex have the answer. They make the <a href="http://www.molex.com/molex/products/datasheet.jsp?part=active%2F1805550002_SOLID_STATE_LIGHTI.xml&channel=Products&Lang=en-US">1805550002</a> mounting clip which holds the <span class="caps">LED </span>in place, and connects a couple of flying leads to it. Digikey sell them for roughly the same price as the <span class="caps">LED</span>!</p>
<h3>Heatsinks</h3>
<p>The choice of heatsink is perhaps wider than that of <span class="caps">LED</span>s: after all there are a vast number of different shapes, quite apart from such basic parameters as thermal resistance.</p>
<p>Just as I arbitrarily restricted my <span class="caps">LED </span>search to those from Cree, I similarly looked only at heatsinks from <a href="http://www.fischerelektronik.de/en/home-en/">Fischer Electronics</a>.</p>
<p>With the <span class="caps">LED </span>firing horizontally, having vertical fins seemed sensible: encouraging convection will help cooling. Although there are some <a href="http://www.fischerelektronik.de/web_fischer/en_GB/heatsinks/B03.1/Heatsinks%20for%20LEDs/$search_result_naviActualPage/1/$search_result_naviLinesPerPage/100/search.xhtml">specialized parts</a> the generic <a href="http://www.fischerelektronik.de/web_fischer/en_GB/heatsinks/A01/Standard%20extruded%20heatsinks/PR/SK48_/index.xhtml">SK 48 50</a> seems to fit the bill. Somewhat against my desire I had to drill a couple of holes in each heatsink to attach the <span class="caps">LED </span>mounting clip, but it worked out all right in the end.</p>
<p>The basic thermal resistance of the SK 48 50 is quoted at roughly 3°C/W, so, in an ideal world, if the entire 9W is dissipated into the heatsink, the temperature should rise by 27°C. The set up isn’t ideal though, and specifications are often optimistic, so I expect the real rise will be higher. For one thing, the clamp which holds the heatsink surely impedes cooling.</p>
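<p>The arithmetic here is just the thermal analogue of Ohm’s law, ΔT = P × R<sub>θ</sub>:</p>

```python
# Back-of-envelope heatsink check: temperature rise = power * thermal resistance.
def temp_rise_c(power_w, r_theta_c_per_w):
    return power_w * r_theta_c_per_w

print(temp_rise_c(9, 3))   # the full 9 W into a 3 degC/W heatsink -> 27 degC
```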
<p><a href="./yauiotl-innards.jpg"><img src="yauiotl-innards.jpg" alt="" class="img_border" /></a></p>
<h3><span class="caps">LED </span>driver</h3>
<p>There are lots of ways to control an <span class="caps">LED, </span>but ultimately its brightness is governed by the current flowing through it. So we can either vary that current, or modulate a fixed current with a <span class="caps">PWM </span>signal.</p>
<p>Given that this will be driven by a digital signal, the latter seems more sensible. This is a reasonably common task, and unsurprisingly dedicated ICs are available. One such is the <a href="https://www.diodes.com/products/power-management/led-drivers/medium-voltage-dc-dc-led-drivers/part/AL8805"><span class="caps">AL8805</span></a> from Diodes Incorporated. It’s a <span class="caps">PWM</span>-friendly, constant-current, buck converter, which handles up to 1A in a <span class="caps">SOT25 </span>package. The external components are limited to a handful of resistors and capacitors, and a single inductor.</p>
<p>Of particular note in this prototype is that Sparkfun sell the <a href="https://www.sparkfun.com/products/13705">Picobuck</a>, which contains three <span class="caps">AL8805</span>s configured for 330mA or 660mA, or up to 1A if you wield a soldering iron. This saves spinning a board to drive the <span class="caps">LED</span>s.</p>
<p>Being a buck converter, the <span class="caps">AL8805 </span>can only drop the voltage, so we’ll need to supply at least 9.5V (the voltage across the <span class="caps">LED </span>at full brightness). Allowing for some drop across the device, and rounding up to a nice number, 12V seems a convenient lowest supply voltage to quote. At the high-end, the <span class="caps">AL8805 </span>is happy up to 36V.</p>
<h3>Diffuser</h3>
<p>Without any better ideas, I used 3mm frosted acrylic to diffuse the light. There’s an enormous choice, so again I pruned the search to one manufacturer: <a href="https://perspex.com/">Perspex from Lucite</a>. However, this still leaves a <a href="https://perspex.com/product-ranges/">vast range</a> to consider, and I’m far from sure I chose the best.</p>
<p>One interesting range is called <a href="https://perspex.com/product-ranges/perspex-textures/frost/">Frost</a>, which has a nice matt finish on both sides. I experimented with a couple:</p>
<ul>
<li>S2 1T41 ‘Moonlight White’. This looks great: a sort of milky-white colour, but lets through very little light.</li>
<li>S2 030 ‘Polar White’. This looks fine, though more ‘plasticky’ than ‘Moonlight’, but is significantly more transparent.</li>
</ul>
<p>Rather than try to blast vast amounts of light through the former, I went with the latter. Mounted about 2.5cm from the <span class="caps">LED, </span>the half-intensity ring should have a diameter of 8cm. In practice the diffuser’s height is 10cm which seems to work well: its width is larger to accommodate the heatsink.</p>
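<p>That 8cm figure follows from the 57.5° viewing half-angle quoted earlier: the half-intensity ring has diameter 2d·tan(θ). A quick check:</p>

```python
import math

# Diameter of the half-intensity ring a distance d from the LED,
# given the LED's viewing half-angle.
def ring_diameter_cm(d_cm, half_angle_deg):
    return 2 * d_cm * math.tan(math.radians(half_angle_deg))

print(f"{ring_diameter_cm(2.5, 57.5):.1f} cm")   # ~7.8 cm, i.e. roughly 8 cm
```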
<p>Incidentally, if you want Perspex near Cambridge, <span class="caps">UK,</span> I recommend <a href="http://www.edplastics.co.uk/index.html">Engineering & Design Plastics</a>. Besides being both nice and knowledgeable people, they have a handy sample card hanging in their reception.</p>
<h3>Summary</h3>
<p>The unsurprising conclusion is that you can easily make a perfectly respectable light by shining a <span class="caps">LED </span>through a piece of frosted acrylic. In practice, four <span class="caps">LED</span>s driven at 330mA (so they draw roughly 3W each) illuminate my room nicely.</p>
<p>An optimist might expect the heatsink temperature to rise by about 3W × 3°C/W = 9°C, but in practice the rise seems to be roughly twice this: about 20°C.</p>
<p>Although the diffuser is <span class="caps">OK, </span>were I designing the lamp again, I would investigate some of the other Perspex ranges: <a href="https://perspex.com/product-ranges/perspex-textures/silk-and-satin/">Silk</a>, and <a href="https://perspex.com/product-ranges/perspex-light/diffuse/">Light</a>.</p>
<h2>Case</h2>
<p>The finished lamp is 17.5cm square and stands about 15cm tall. It is made from 3mm, laser-cut, sheets of plywood and frosted acrylic. The plywood is quite pale, so after cutting it, I stained it with light-oak stain, then a couple of coats of Danish oil, both from <a href="https://www.rustins.ltd/rustins/">Rustins</a>.</p>
<p>Rather than use <span class="caps">CAD </span>software, plans for the case were drawn by hand, or at least by <a href="https://www.haskell.org">Haskell</a>. Explicitly, the <span class="caps">PDF </span>files for the laser-cutter were produced by a Haskell program using the <a href="http://projects.haskell.org/diagrams/">diagrams library</a>.</p>
<h3>Joints</h3>
<p>Panels join each other at 90° in rows of shallow mortice-and-tenon joints. Although the laser-cutter is very accurate and has very little <a href="https://en.wikipedia.org/wiki/Saw#Terminology">kerf</a>, in practice getting tight joints needed a fair bit of fiddling. Instead I opted for reasonably loose fits, and held things together with bolts.</p>
<p>Most of the uncertainty comes from variation in the thickness of the materials: typically ±0.5mm. I suspect I could have made the mortices quite tight lengthwise, whilst leaving them loose widthwise.</p>
<h3>Plans</h3>
<p>The laser cutter is driven by, of all things, Corel Draw, and <span class="caps">PDF</span>s are a convenient way to import designs. The simple convention is that red lines indicate where the laser should cut; black where it should etch.</p>
<p>The files below all fit nicely onto A4 for convenient printing, but very little work has been done to optimize the arrangement of parts to reduce the material used. You’ll see that many parts are duplicated: the files below contain a full set of parts for the light.</p>
<table class="cspaced" cellspacing="0"><tr><td><a href="./case-pdfs/out-ply3mm-000.pdf"><img src="case-pdfs/out-ply3mm-000.png" alt="" /></a></td><td><a href="./case-pdfs/out-ply3mm-001.pdf"><img src="case-pdfs/out-ply3mm-001.png" alt="" /></a></td><td><a href="./case-pdfs/out-ply3mm-002.pdf"><img src="case-pdfs/out-ply3mm-002.png" alt="" /></a></td></tr><tr><td>Top plate</td><td>Mid plate</td><td>Base plate</td></tr><tr><td><a href="./case-pdfs/out-ply3mm-003.pdf"><img src="case-pdfs/out-ply3mm-003.png" alt="" /></a></td><td><a href="./case-pdfs/out-ply3mm-004.pdf"><img src="case-pdfs/out-ply3mm-004.png" alt="" /></a></td><td><a href="./case-pdfs/out-whiteacrylic-003.pdf"><img src="case-pdfs/out-whiteacrylic-003.png" alt="" /></a></td></tr><tr><td>Front & back</td><td>Sides</td><td>Heatsink clamp</td></tr><tr><td><a href="./case-pdfs/out-whiteacrylic-000.pdf"><img src="case-pdfs/out-whiteacrylic-000.png" alt="" /></a></td><td><a href="./case-pdfs/out-whiteacrylic-001.pdf"><img src="case-pdfs/out-whiteacrylic-001.png" alt="" /></a></td><td><a href="./case-pdfs/out-whiteacrylic-002.pdf"><img src="case-pdfs/out-whiteacrylic-002.png" alt="" /></a></td></tr><tr><td>Diffusers</td><td>Diffusers</td><td>Shades</td></tr></table>
<p>These files along with the Haskell which generated them are included in the github repository.</p>
<h2>Control</h2>
<p>Having discussed the illumination part of the project, we turn now to the brains. This is a one-off project, so we’d like to use something off-the-shelf and easy to use. Further:</p>
<ul>
<li>we have plentiful space and power;</li>
<li>we need Internet connectivity;</li>
<li>the software doesn’t have to do anything complicated or quickly.</li>
</ul>
<p>Consequently it seemed sensible to use a <a href="https://www.raspberrypi.org">Raspberry Pi</a> running <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian</a>, a Debian derived Linux.</p>
<p>This choice makes the software easier than it would have been on a traditional embedded platform. The development environment is rich with both tools and languages, and there’s enough power floating around to implement everything in, say, Python, without worry.</p>
<p>Further, on Linux there’s already code to talk to lots of devices, reducing the amount of code which needs to be written, debugged, and maintained for this project. Quite apart from the networking stack, I found I could use existing code to handle most of the hardware too.</p>
<p>There are many models of Pi these days. The cool kids would doubtless use a dinky <a href="https://www.raspberrypi.org/products/pi-zero-w/">Pi Zero W</a>, but I had an old <a href="https://www.raspberrypi.org/products/model-a/">Model A</a> lying around, so used that instead. This lacks any networking hardware, so I added a cheap <span class="caps">USB</span> WiFi dongle.</p>
<p>The Pi needs a 5V supply, which is provided by a cheap Chinese buck converter module from eBay.</p>
<h3><span class="caps">PWM</span></h3>
<p>The Pi has hardware support for a <span class="caps">PWM </span>output, which you can configure with <a href="../03/rpi-devicetree.html">Devicetree</a>. However, I used the <a href="http://abyz.co.uk/rpi/pigpio/">pigpio</a> library which offers up to 31 <span class="caps">PWM </span>channels: I used four channels to give each <span class="caps">LED </span>its own brightness control.</p>
<p>Occasionally, I’ve seen glitches with the light: a few seconds of flickering. I don’t know if this is a problem with pigpio or my application. Even if there is a problem in the version of pigpio I’m using (v60), it might be fixed in more recent ones.</p>
<p>On the other hand, I always set the channels to the same value, so it would probably be simpler to switch back to a single <span class="caps">PWM </span>channel and remove the dependency on pigpio.</p>
<h2>The Internet of Things</h2>
<p>When building the light, I wanted to connect it to the Internet. I have vague future plans for hooking it into the Apple’s Homekit or Amazon’s Alexa, and it might be fun to try some sort of automation too.</p>
<p>For now though, I just wanted to put the light on the Internet in some way. One approach would be to embed a web-server into the lamp, then provide something like a <span class="caps">REST API.</span> However, if I wanted to access the lamp from outside the <span class="caps">LAN,</span> I’d have to poke a hole in the firewall, which always seems risky.</p>
<h3><span class="caps">AWS</span> IoT</h3>
<p>Instead, I embraced the wonder of the <a href="https://aws.amazon.com/documentation/iot/">Amazon Web Services IoT <span class="caps">API</span></a>, which:</p>
<blockquote><p>...enables secure, bi-directional communication between Internet-connected things ... and the <span class="caps">AWS </span>cloud over <span class="caps">MQTT </span>and <span class="caps">HTTP.</span></p></blockquote>
<p>There are other similar services, but <span class="caps">AWS </span>generally seems to do things well, and I am reasonably confident that Amazon are in this game for the long-term.</p>
<p><span class="caps">AWS</span> IoT allows you to create a <a href="http://docs.aws.amazon.com/iot/latest/developerguide/iot-thing-shadows.html">device shadow</a> in the <span class="caps">AWS </span>cloud, which in this case stands as a proxy for the lamp. In particular, the shadow has a brightness attribute, which clients can set. The physical lamp is also an <span class="caps">AWS</span> IoT client, and sets the light’s brightness to match the shadow’s brightness.</p>
<p>Rather than just a single brightness level, the device shadow has the notion of a desired level and a reported one. If we want the light to brighter, we change the desired level. When the <span class="caps">PWM </span>duty-cycle changes to actually make the light brighter, we update the reported level.</p>
<p>All this works efficiently, in the sense that the lamp doesn’t have to poll to get status: instead the lamp subscribes to the shadow, which then sends updates when the state changes. In practice, I typically see 200 updates a day, but it’s late spring as I write this, so the light gets used most evenings but rarely during the rest of the day.</p>
<p>At an early stage in development, all the updates went via the <span class="caps">AWS</span> IoT service, because I was interested to know whether introducing such latency between the knob and the light would be annoying. In practice, even though the light is in the UK and I’m using the <span class="caps">AWS</span> IoT service in North Virginia, the latency is low enough that this isn’t a problem. The ping time to the <span class="caps">AWS </span>server is about 85ms, so perhaps it’s not that surprising. Incidentally light takes about 43ms to travel 8,000 miles in a vacuum.</p>
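<p>The 43ms figure is easy to verify:</p>

```python
# Time for light to cover 8,000 miles in a vacuum.
MILES_TO_KM = 1.609344
C_KM_PER_S  = 299_792.458

def light_time_ms(miles):
    return miles * MILES_TO_KM / C_KM_PER_S * 1000.0

print(f"{light_time_ms(8000):.0f} ms")   # ~43 ms
```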
<p>These days, updates from the knob on the box are handled directly, but the remote controller still sends its commands via <span class="caps">AWS </span>even though it seems daft to send signals about 8,000 miles to control a lamp 30cm away.</p>
<p>I should add the system doesn’t work perfectly: there are times when it becomes unresponsive. It has the feel of a connection dying without anyone noticing, at least until a message is sent. I’ve not checked this though, so it’s just a hunch.</p>
<p>Amazon provide a <a href="https://github.com/aws/aws-iot-device-sdk-python">python <span class="caps">SDK</span></a> for the <span class="caps">MQTT API </span>which was reasonably straightforward to use. Much of the apparent verbosity comes from setting up the connection in a secure way. The <span class="caps">SDK </span>is agnostic about how you provide the information: I encapsulated it all in a more opinionated wrapper.</p>
<p>I’m not sure I got the encapsulation quite right: it would be nice to revisit it at some point, and perhaps look at using a generic <span class="caps">MQTT </span>library instead. There’s also scope for taking a rather more thoughtful look at the provision of credentials to access the service. For now, I generated these manually, and copied them to the devices. Easy enough, but it doesn’t scale.</p>
<h3>Ongoing costs</h3>
<p>Although many of the benefits of embracing the IoT are ‘future work’, one downside is clear and present now: Amazon charge for sending messages!</p>
<p>As of now (May 2017), they charge $5 per million messages. At 200 messages per day that’s roughly ¢3 per month. However, as discussed above I think this is unrepresentative, and I suspect the annual bill will be closer to $2.</p>
<p>In practice it is easy to send many more messages during development. I expect this month I’ll clock up charges of perhaps $1, and I’ve not done much work on it. Some apparently innocent ideas can be expensive too. For example, imagine sending a message every second: that’s nearly 3 million per month, for which Amazon will charge you about $13. Not a problem for a one-off, but not scalable.</p>
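<p>The arithmetic is simple enough to capture in a couple of lines of Python. This is just a back-of-the-envelope sketch using the May 2017 price quoted above; <code>monthly_cost</code> is my name, not anything from the <span class="caps">AWS </span>SDK:</p>

```python
PRICE_PER_MILLION = 5.00  # AWS IoT messaging price in USD, May 2017


def monthly_cost(messages_per_day, days=30):
    """Estimated monthly messaging bill in USD."""
    return messages_per_day * days * PRICE_PER_MILLION / 1_000_000


print(monthly_cost(200))     # 200 messages/day: about 3 cents a month
print(monthly_cost(86_400))  # one message a second: about $13 a month
```

<p>It makes the scaling obvious: the bill grows linearly with message rate, so a chatty device quickly stops being free.</p>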
<p>Were I deploying this sort of thing for real, I think more thought would be needed here. For control over the <span class="caps">LAN, </span>fast response is clearly useful, but it’s less important if the commands are coming over the Internet from miles away. So it probably makes sense to do <span class="caps">LAN </span>control independently of <span class="caps">AWS, </span>and only update the device shadow when things have settled.</p>
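<p>The ‘only update the shadow when things have settled’ idea can be sketched in a few lines of Python. This is purely illustrative — the class and names are mine, not part of the lamp code — but it shows the shape: record local changes, and only push once the knob has been quiet for a settle period:</p>

```python
import time

class ShadowDebouncer:
    """Push state to the cloud shadow only after local changes settle."""

    def __init__(self, push, settle=2.0, clock=time.monotonic):
        self.push = push          # callback taking the state to publish
        self.settle = settle      # seconds of quiet before pushing
        self.clock = clock
        self.pending = None
        self.last_change = None

    def local_change(self, state):
        # record the change and restart the settle timer
        self.pending = dict(state)
        self.last_change = self.clock()

    def tick(self):
        # call this periodically from the main loop
        if self.pending is not None and \
           self.clock() - self.last_change >= self.settle:
            self.push(self.pending)
            self.pending = None
```

<p>Local (<span class="caps">LAN</span>) control stays immediate; only the consolidated state crosses the Atlantic, which also keeps the message count down.</p>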
<h2>Input devices</h2>
<p>One benefit of using Linux to control the lamp is that the kernel already knows how to talk to various devices. For example, to control the device we have a rotary encoder to set the brightness and a button to toggle it on and off. Code to handle all this is in the kernel: we just need to configure it with some suitable <a href="../03/rpi-devicetree.html">device tree overlays</a>.</p>
<p>Using kernel device drivers removes all the speed-critical code from our program, which means we can write it in a high-level language: here we use Python. Once enabled, the devices appear under /dev/input, and we can conveniently access them with Python’s <a href="https://python-evdev.readthedocs.io/en/latest/">evdev</a> bindings.</p>
<p>Here’s some code which sets up the devices:</p>
<pre><code class="small">import evdev
...
def getDevices():
    devices = {}
    for f in evdev.list_devices():
        d = evdev.InputDevice(f)

        tag = None
        name = d.name.lower()
        if "soc:knob" in name:
            tag = "knob"
        if "soc:keypad" in name or "keyboard" in name:
            tag = "button"

        if tag:
            devices[d.fd] = { "tag": tag, "dev": d }

    return devices</code></pre>
<p>Here’s some code which parses the events:</p>
<pre><code class="small">import evdev
from evdev import ecodes
from select import select
...
r, w, x = select(devices, [], [], idleTime)
for fd in r:
    d = devices[fd]
    for e in d["dev"].read():
        updateState(localState, d["tag"], e)
...
def updateState(s, tag, e):
    brightness = s[brightnessKey]

    # handle rotation events
    if tag == "knob" and e.type == ecodes.EV_REL:
        brightness += e.value * deltaBrightness

    # handle button down events
    if tag == "button" and e.type == ecodes.EV_KEY and e.value == 1:
        # e.code tells us which button was pushed
        # custom button toggles state
        if e.code == 256:
            if brightness < maxBrightness * 0.05:
                brightness = maxBrightness
            else:
                brightness = 0
        if e.code == ecodes.KEY_UP:
            brightness += deltaBrightness
        if e.code == ecodes.KEY_DOWN:
            brightness -= deltaBrightness
        if e.code == ecodes.KEY_LEFT:
            brightness = 0
        if e.code == ecodes.KEY_RIGHT:
            brightness = maxBrightness

    brightness = clampTo(0, maxBrightness, brightness)
    s[brightnessKey] = brightness</code></pre>
<p>Another benefit to using the kernel code is that there’s a clean separation between the device driver and application logic. Apart from aesthetic benefits, it means we can test hardware and software independently. For example, the evtest program lets us check that the hardware’s working without writing a test harness.</p>
<h2>Temperature sensing</h2>
<p>It’s useful to be able to sense the temperature of the <span class="caps">LED </span>heatsinks, and again we can do this without writing much software.</p>
<p>First the sensors. Maxim make the <a href="https://www.maximintegrated.com/en/products/analog/sensors-and-sensor-interface/DS18S20.html"><span class="caps">DS18S20</span></a>, a cheap, widely available temperature sensor. It sits on a 1-wire bus, which makes the wiring easy: besides the sensors themselves, we need only a single pull-up resistor.</p>
<p>Both the 1-wire bus and the <span class="caps">DS18S20 </span>are supported by Raspbian, and kind people on the Internet have already <a href="https://www.modmypi.com/blog/ds18b20-one-wire-digital-temperature-sensor-and-the-raspberry-pi">documented it</a>.</p>
<p>Having set this up, measuring the temperature from the command-line is easy:</p>
<pre><code class="small">$ cat /sys/bus/w1/devices/*/w1_slave | fgrep t=
c7 01 4b 46 7f ff 09 10 01 t=28437
d5 01 4b 46 7f ff 0b 10 42 t=29312
c6 01 4b 46 7f ff 0a 10 17 t=28375
da 01 4b 46 7f ff 06 10 31 t=29625</code></pre>
<p>The t= lines show the temperature in milli-Celsius on the four <span class="caps">DS18S20 </span>sensors in the lamp: one on each heatsink.</p>
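<p>Reading these from Python is equally easy: slurp the w1_slave files and pick out the t= values. This is a sketch rather than anything from a library — <code>parse_w1_temps</code> is my own name — but it parses exactly the lines shown above:</p>

```python
def parse_w1_temps(lines):
    """Extract temperatures in Celsius from w1_slave 't=' lines."""
    temps = []
    for line in lines:
        if "t=" in line:
            # the sensor reports milli-Celsius after the final 't='
            temps.append(int(line.rsplit("t=", 1)[1]) / 1000.0)
    return temps

# two of the heatsink sensor readings shown above
sample = [
    "c7 01 4b 46 7f ff 09 10 01 t=28437",
    "d5 01 4b 46 7f ff 0b 10 42 t=29312",
]
print(parse_w1_temps(sample))  # [28.437, 29.312]
```

<p>In the real application you’d feed it the contents of /sys/bus/w1/devices/*/w1_slave rather than literal strings.</p>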
<h2>Application code</h2>
<p>The application is really rather simple. Essentially we look for changes from either our local controls or the <span class="caps">AWS </span>device shadow, decide on a new brightness then push that to both the device shadow and the <span class="caps">LED </span>controller.</p>
<p>The only essential complexity is to multiplex changes made locally by the physical controls with other updates to the device shadow.</p>
<p>There’s also some accidental complexity because the input events are handled by a select, but the <span class="caps">AWS </span>stuff invokes a callback. In the code below, <code>remoteState</code> gets tweaked at a distance: it might be nicer to put all the <span class="caps">AWS </span>stuff in a separate thread which talks to the main code over a socket which we can pass to select. The choices here feel entwined with extending the code to handle Homekit or Alexa, so it will do for now.</p>
<pre><code class="small">while True:
    if bot is None:
        bot = connectToAWS(args.config, remoteState)

    r, w, x = select(devices, [], [], idleTime)

    oldState = localState.copy()
    for fd in r:
        d = devices[fd]
        for e in d["dev"].read():
            updateState(localState, d["tag"], e)

    t = time.time()

    if localState != oldState:
        remote_lockout_expire = t + remote_lockout_period

    if t > update_lockout_expire and sentState != localState:
        awsiot.pushState(bot, localState)
        sentState = localState.copy()
        update_lockout_expire = t + update_lockout_period

    if t > remote_lockout_expire:
        localState.update(remoteState)
        sentState = localState.copy()

    if args.hasLED and localState != oldState:
        setBrightness(gpio, bot, localState)</code></pre>
<h2>General Linux configuration</h2>
<p>Raspbian is pretty reasonable out-of-the-box. However, besides the hardware specific things above I changed a few system things to suit better an embedded device.</p>
<h3>systemd</h3>
<p>In the past, to start programs at boot, you would typically write a script in /etc/init.d. Those days are gone, so <a href="https://en.wikipedia.org/wiki/Systemd#Criticism">for good or ill</a> we now write scripts for systemd.</p>
<p>Here’s the script I used for the light:</p>
<pre><code class="small">[Unit]
Description="Toy IOT Light - LED"
[Service]
Type=simple
User=mjo
WorkingDirectory=/home/mjo/aws-iot-toys
ExecStartPre=/usr/bin/sudo /home/mjo/aws-iot-toys/fix-pwm-perms
ExecStart=/usr/bin/python /home/mjo/aws-iot-toys/bin/toy-light-led.py
StandardInput=null
StandardOutput=syslog
StandardError=syslog
Restart=on-failure
[Install]
WantedBy=network-online.target </code></pre>
<h3>unattended updates</h3>
<p>It’s important that anything on a network gets regular security patches, but it’s a pain to have to login to lots of systems, or even to read mail from them. So I throw caution to the wind, and enable <a href="https://wiki.debian.org/UnattendedUpgrades">unattended updates</a> without configuring mail.</p>
<p>I’m reasonably sure this increases the chances of the light crashing, but reduces the risk of security holes being unpatched, and that’s the right trade-off for me.</p>
<h3>syslog</h3>
<p>I have one machine which runs a syslog server open to other machines on the <span class="caps">LAN.</span> So I configure the lamp to send log messages there:</p>
<pre><code>$ less /etc/rsyslog.d/remote.conf
*.* @@logger.local</code></pre>
<p>It would be nice to send other messages here, e.g. if software isn’t being patched.</p>
<h2>Remote controller</h2>
<p><a href="./yauiotl-remote.jpg"><img src="yauiotl-remote.jpg" alt="" class="img_noborder" /></a></p>
<p>Having built a light and put it on the Internet, it makes sense to add a remote control. Although it seems odd at first, in retrospect it’s clear that to make a remote control we just clone the lamp code and remove the bit which responds to a new brightness level by driving the <span class="caps">LED.</span></p>
<p>In particular, it’s clear that the remote control has to track the shadow’s state, because it needs to respect any changes made on other devices.</p>
<p>Given that the device is so similar, it makes sense to implement it in a similar way too: a Raspberry Pi plus a rotary encoder.</p>
<h2>Code and plans</h2>
<p>All of the code for this is on <a href="https://github.com/mjoldfield/yauiotl">github</a> but it’s rather more a brain dump than a nicely formatted repository.</p>
<p>Feel free to contact me if you’re interested. </p>942AAB8A-7A51-11E8-8589-DEA687A8D55E2018-06-27T21:28:42:42Z2018-06-29T20:07:35:35ZSetting up a Raspberry PiMartin Oldfield<p>Brief notes on building a new Raspberry Pi project. </p><h2>Introduction</h2>
<p>These are brief notes on what I do when starting a new Raspberry Pi project. By the end, I have a basically working machine, on the network, with a minimal set of development tools.</p>
<h2>Flash the SD card</h2>
<p>Follow the instructions on the <a href="https://www.raspberrypi.org/documentation/installation/installing-images/mac.md">Raspberry Pi website</a>. Find out the relevant device:</p>
<pre><code>$ diskutil list</code></pre>
<p>Now flash it:</p>
<pre><code>$ diskutil unmountDisk /dev/diskN
$ sudo dd bs=1m if=raspbian-stretch-lite.img \
conv=sync of=/dev/rdiskN
$ diskutil unmountDisk /dev/diskN</code></pre>
<p>You can now remove the SD card.</p>
<h2>First boot</h2>
<p>The aim here is to get to a point where everything else can be done remotely.</p>
<p>Connect the Raspberry Pi to:</p>
<ul>
<li>a monitor;</li>
<li>a keyboard;</li>
<li>the network.</li>
</ul>
<p>On a machine with no on-board networking and only one <span class="caps">USB </span>port you might have problems!</p>
<p>Apply power and check that it boots properly.</p>
<p>Run raspi-config and change:</p>
<ul>
<li>the pi account password;</li>
<li>hostname;</li>
<li>enable Wi-Fi (note that the country is GB not UK);</li>
<li>enable <span class="caps">SSH</span>d (in ‘Interfacing Options’).</li>
</ul>
<p>Reboot.</p>
<h2>First login</h2>
<p>You should now be able to login remotely. Even the lite version of Raspbian has Bonjour support so you can just</p>
<pre><code>$ ssh pi@foo.local
pi@foo.local's password:</code></pre>
<p>It is a pain to keep using passwords, so copy over any relevant <span class="caps">SSH </span>public keys.</p>
<h2>Nesting</h2>
<p>On all installations I want some basic development tools:</p>
<pre><code>$ sudo apt-get update && sudo apt-get dist-upgrade
$ sudo apt-get autoremove
$ sudo apt-get install emacs-nox python3-pip ipython python3-gpiozero</code></pre>
<p>Other useful packages include i2c-tools and pimoroni.</p>
<h3>Unattended updates</h3>
<p>Life is too short for me to manually update all the Raspberry Pis around the place so I live dangerously and enable <a href="https://wiki.debian.org/UnattendedUpgrades">Unattended Upgrades</a>.</p>
<pre><code>$ sudo apt-get install unattended-upgrades
$ sudo dpkg-reconfigure -plow unattended-upgrades</code></pre>
<h3>Syslog</h3>
<p>To send the system logs to a central machine, edit the config thus:</p>
<pre><code>$ less /etc/rsyslog.d/remote.conf
*.* @@logger.local
$ sudo service rsyslog restart</code></pre>
<h3>udev rules for <span class="caps">PWM </span>permissions</h3>
<p>By default, the <span class="caps">GPIO </span>devices in sysfs are in the <span class="caps">GPIO </span>group which makes it easy for non-root programs to access them:</p>
<pre><code>pi@zowie2:~ $ ls -l /sys/class
...
drwxrwx--- 2 root gpio 0 Jun 28 20:17 gpio
...
drwxr-xr-x 2 root root 0 Jun 29 11:26 pwm
... </code></pre>
<p>This is done by rules in /etc/udev/rules.d/99-com.rules but sadly there aren’t analogous entries for the <span class="caps">PWM </span>devices. So I append this:</p>
<pre><code class="small">SUBSYSTEM=="pwm*", PROGRAM="/bin/sh -c '\
chown -R root:gpio /sys/class/pwm && chmod -R 770 /sys/class/pwm;\
chown -R root:gpio /sys/devices/platform/soc/*.pwm/pwm/pwmchip* \
&& chmod -R 770 /sys/devices/platform/soc/*.pwm/pwm/pwmchip*\
'"</code></pre>
<h2>Prudence</h2>
<p>It’s probably worth taking an image of the SD card at this point. You can then clone the card to make other machines rather more quickly: all that needs to be done is to change the hostname with raspi-config. </p>90507968-2930-11E7-A059-DAB633E6D3122017-04-24T20:56:19:19Z2018-06-29T20:06:51:51ZDevicetree on the Raspberry PiMartin Oldfield<p>Using Devicetree, particularly on the Raspberry Pi. </p><p><em>Note: When I revisited this in June 2018, I found that I could accomplish most of my goals by simply configuring devicetree overlays included by default in Raspbian.</em></p>
<h2>Introduction</h2>
<p>The Raspberry Pi has a little <span class="caps">LED </span>which flashes when you access the SD card. The hardware for this is trivial: a <span class="caps">LED </span>connected to a <span class="caps">GPIO </span>pin. The software is more interesting though. For a start, there isn’t any code in the SD card block device driver which talks to the <span class="caps">LED</span>’s <span class="caps">GPIO </span>pin. Instead, an instance of the Linux <span class="caps">LED </span>device is created which acquires the <span class="caps">GPIO </span>pin and hooks into the block device using a well-defined <span class="caps">API.</span> A nice design which separates the code for driving the <span class="caps">LED </span>from the code for handling the SD card, then composes them to get the desired behaviour.</p>
<p>The code for all this is in the kernel, but we still need to specify details: for example which <span class="caps">GPIO </span>pin we are using. One approach is to write a trivial kernel module to instantiate the devices with hard-coded parameters: I did this when I built <a href="/atelier/2015/02/seabass.html">Seabass,</a> a small Intel based computer. However there is a better way: <a href="https://www.devicetree.org">devicetree.</a></p>
<p>Devicetree provides a way for the kernel to load configuration data at an early stage of the boot process, which can then be used to bring up the rest of the system. As the name suggests devicetree creates a hierarchical tree of configuration data. Although the examples discussed here are all reasonably peripheral to the system’s operation, devicetree also describes core system data. Here’s a snippet showing some facet of interrupt handling:</p>
<pre><code class="small">interrupt-controller@7e00b200 {
reg = <0x7e00b200 0x200>;
compatible = "brcm,bcm2835-armctrl-ic";
#interrupt-cells = <0x2>;
phandle = <0x1>;
interrupt-controller;
};</code></pre>
<p>These days (early 2017) you can also load devicetree overlays dynamically, potentially letting us reconfigure a running system. However, not all devices support this: for instance, you can’t dynamically configure the <span class="caps">LED </span>subsystem.</p>
<p>Separating configuration and code makes it easier to use kernel modules without having to compile and maintain a local fork. Happily devicetree allows us to split the configuration data across multiple files so it’s easy to separate local changes from the standard system tree: we can patch different bits of the tree with overlay files.</p>
<p>Happily there’s good official support for all of this: both <a href="https://www.raspberrypi.org/documentation/configuration/device-tree.md">general documentation</a> and a <a href="https://www.raspberrypi.org/forums/viewforum.php?f=107">dedicated forum</a> for specific questions.</p>
<h2>sysfs</h2>
<p>Devicetree is a good way to configure kernel devices, but how do we talk to them ? Happily, in just the same way we do without devicetree. For example, we can control the <span class="caps">GPIO </span>pins via devices in <a href="https://www.kernel.org/doc/Documentation/gpio/sysfs.txt">/sys/class/gpio.</a> Accessing the <span class="caps">GPIO </span>pins this way has some advantages:</p>
<ul>
<li>You only need the standard file access <span class="caps">API </span>to make things work, so you can use any language. The <a href="http://elinux.org/Main_Page">Embedded Linux Wiki</a> has <a href="http://elinux.org/RPi_GPIO_Code_Samples#sysfs">an example written in C.</a></li>
<li>It is consistent across different platforms: I can use exactly the same client code on any hardware, modulo changing the path of the <span class="caps">GPIO </span>pin.</li>
<li>It plays well with the Unix permission system. By default, udev in Raspbian assigns the /sys/class/gpio path to the <code>gpio</code> group, so we need only make sure the user running the code belongs to the gpio group. We <em>don’t</em> need to grant the client code ‘root access’.</li>
</ul>
<p>We could control a <span class="caps">LED </span>this way, but Linux gives us a more abstract route: if we tell the <span class="caps">LED </span>subsystem that there’s a <span class="caps">LED </span>connected to a particular <span class="caps">GPIO </span>pin, we can control it at a higher level. Specifically we can use devicetree to associate a <span class="caps">LED </span>device with a particular <span class="caps">GPIO </span>pin. Having done this, access to the pin via /sys/class/gpio disappears, but we now have a <a href="https://www.kernel.org/doc/Documentation/leds/">/sys/class/led</a> device to play with instead:</p>
<pre><code class="small">$ cd /sys/class/leds/
$ ls
led0 led1</code></pre>
<p>We can see that led0 indicates disc activity on mmc0:</p>
<pre><code class="small">$ cat led0/trigger
none kbd-scrolllock kbd-numlock kbd-capslock kbd-kanalock kbd-shiftlock
kbd-altgrlock kbd-ctrllock kbd-altlock kbd-shiftllock kbd-shiftrlock
kbd-ctrlllock kbd-ctrlrlock [mmc0] timer oneshot heartbeat backlight
gpio cpu0 default-on input rfkill0</code></pre>
<p>Jumping forward slightly, here’s a snippet from the devicetree which accomplishes it all:</p>
<pre><code class="small">leds {
compatible = "gpio-leds";
phandle = <0x36>;
act {
gpios = <0xa 0x10 0x1>;
label = "led0";
linux,default-trigger = "mmc0";
phandle = <0x20>;
};
};</code></pre>
<p>Roughly:</p>
<ul>
<li>gpio-leds refers to the <a href="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/drivers/leds/leds-gpio.c">leds-gpio.c</a> kernel module.</li>
<li>The 0xa part of the gpios line is a handle to the specific <span class="caps">GPIO </span>controller we’re using: in practice we’d use a symbolic reference in our source file.</li>
<li>The 0x10 part of the gpios line says use <span class="caps">GPIO</span> 16 (which you can verify by looking at pin C1 on the <a href="https://www.raspberrypi.org/app/uploads/2012/04/Raspberry-Pi-Schematics-R1.0.pdf">relevant schematic</a>).</li>
<li>The 0x1 part of the gpios line says that the <span class="caps">GPIO </span>line is active low i.e. the light comes on when the output is low.</li>
<li>The default-trigger line says that once instantiated, the <span class="caps">LED </span>should be triggered by activity on the mmc0 block device.</li>
</ul>
<p>I used the <span class="caps">LED </span>subsystem on <a href="../../2014/12/mbmx-leds.html">seabass</a> where the <span class="caps">LED</span>/GPIO map was embedded in a bespoke kernel module, but the sysfs <span class="caps">API </span>is the same.</p>
<h2>Caveats</h2>
<p>Many people used to writing embedded code on tiny devices would doubtless recoil in horror at the inefficiency involved in controlling a <span class="caps">LED </span>through so many levels of indirection. Such people do have a point: if you want to toggle the <span class="caps">LED </span>really quickly, this isn’t the way to go. On the other hand, if you’re writing that kind of code, perhaps user-mode Linux isn’t the right environment anyway.</p>
<p>Still, most of the time it seems only sensible to put code so closely tied to the hardware on the kernel side of an <span class="caps">API.</span> If we can afford the inefficiencies, it is also good engineering to establish a clear divide between application code and hardware drivers.</p>
<h2>A Devicetree cookbook</h2>
<p>You can read proper documentation on the Raspberry Pi <a href="https://www.raspberrypi.org/documentation/configuration/device-tree.md#part3">website</a> but roughly speaking, the kernel consults files in /boot during boot. In particular it loads an appropriate devicetree blob from one of the /boot/*.dtb files, then consults /boot/config.txt for more information.</p>
<p>Typically it will then load devicetree overlays from /boot/overlays to satisfy dtoverlay commands in /boot/config.txt. For example dtoverlay=foo will load /boot/overlays/foo.dtbo</p>
<p>Both the initial blob and the overlays are binary files, but happily you can compile them losslessly from text files with the <code>dtc</code> command (note the odd filename convention):</p>
<pre><code>$ dtc -@ -I dts -O dtb -o foo.dtbo foo-overlay.dts</code></pre>
<p>Having compiled the overlay, you’ll probably want to copy it into /boot then reboot to load it:</p>
<pre><code class="small">$ sudo cp foo.dtbo /boot/overlays/
$ shutdown -r now</code></pre>
<p>The lossless nature of the compilation means we can disassemble a binary file too:</p>
<pre><code>$ dtc -I dtb -O dts /boot/overlays/i2s-mmap.dtbo
Warning (unit_address_vs_reg): Node /fragment@0 has a unit name, but no reg property
/dts-v1/;
/ {
    compatible = "brcm,bcm2708";
fragment@0 {
target = <0xdeadbeef>;
__overlay__ {
brcm,enable-mmap;
};
};
__fixups__ {
i2s = "/fragment@0:target:0";
};
}; </code></pre>
<p>Or even parse the devicetree of a running kernel:</p>
<pre><code class="small">$ dtc -I fs -O dts /proc/device-tree
...
/dts-v1/;
/ {
model = "Raspberry Pi Model B Rev 2";
compatible = "brcm,bcm2708";
memreserve = <0x1c000000 0x4000000>;
...</code></pre>
<h2>Devicetree in practice (2018 edition)</h2>
<p>When I started playing with devicetree in 2017, I did most things from first principles. Revisiting things in June 2018, I think most simple things can be done simply by using the overlays included by default in e.g. Raspbian. To be precise here, the notes which follow are based on the 2018-06-27 release of Raspbian Stretch Lite.</p>
<p>The bottom line is that if you just want to <em>use</em> devicetree on the Raspberry Pi, begin by perusing <a href="https://raw.githubusercontent.com/raspberrypi/firmware/master/boot/overlays/README">/boot/overlays/README</a> and see if your problem is covered.</p>
<h3><span class="caps">LED </span>devicetree</h3>
<p>There is only minimal support for this, so I roll my own.</p>
<h3><span class="caps">PWM </span>devicetree</h3>
<p>Rather than just switching a pin on or off, sometimes it’s helpful to wiggle it around so that it’s on for say 80% of the time. The proper name for this is <a href="https://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-Width Modulation</a> (PWM) and happily we have both hardware and software support for this on the Raspberry Pi.</p>
<p>On the hardware side the <span class="caps">BCM2835 </span>has a dedicated <span class="caps">PWM </span>peripheral, documented in chapter 9 of the <a href="https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf">datasheet.</a> In the Raspberry Pi world, it is common to control the <span class="caps">PWM </span>peripheral from user code, but there is a perfectly functional <a href="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/drivers/pwm/pwm-bcm2835.c">kernel module</a> too.</p>
<p>There are a couple of overlays: one to enable a single <span class="caps">PWM </span>channel; the other to enable two of them for twice the fun.</p>
<p>The relevant incantations for /boot/config.txt are:</p>
<pre><code class="small">dtoverlay=pwm
dtoverlay=pwm-2chan</code></pre>
<p>There is scope to use different pins, but the hardware has some constraints. Refer to the <span class="caps">README </span>for more details.</p>
<p>The overlays above will make working <span class="caps">PWM </span>devices in /sys/class/pwm but they will be owned by root. By contrast, gpio devices are mapped into the gpio group by a udev script. In the past (mid-2017) the kernel didn’t generate the necessary events for udev, but happily today (June 2018) it does!</p>
<p>Sadly though udev doesn’t have suitable rules. I added these to /etc/udev/rules.d/99-com.rules:</p>
<pre><code class="small">SUBSYSTEM=="pwm*", PROGRAM="/bin/sh -c '\
chown -R root:gpio /sys/class/pwm && chmod -R 770 /sys/class/pwm;\
chown -R root:gpio /sys/devices/platform/soc/*.pwm/pwm/pwmchip* \
&& chmod -R 770 /sys/devices/platform/soc/*.pwm/pwm/pwmchip*\
'"</code></pre>
<p>An alternative is the rude workaround I used in the past: simply run a script to fix the permissions before running the client code; systemd has a hook for such tasks, which maintains the split between system-provided permissions and client code running under a normal user.</p>
<p>Having installed the pwm overlay, you can test it by connecting an <span class="caps">LED </span>between <span class="caps">GPIO</span> 18 and ground (via a suitable resistor) and then:</p>
<pre><code class="small">$ echo 0 > /sys/class/pwm/pwmchip0/export
$ echo 1000000 > /sys/class/pwm/pwmchip0/pwm0/period
$ echo 500000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle
$ echo 1 > /sys/class/pwm/pwmchip0/pwm0/enable
$ echo 900000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle
$ echo 100000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle</code></pre>
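<p>The same sysfs dance is easy to wrap in Python. The sketch below is illustrative, not part of the lamp code: <code>SysfsPWM</code> is my own name, and taking the pwmchip path as a parameter means nothing here is Pi-specific.</p>

```python
import os

class SysfsPWM:
    """Minimal wrapper around one channel of a /sys/class/pwm pwmchip."""

    def __init__(self, chip="/sys/class/pwm/pwmchip0", channel=0):
        self.chip = chip
        self.channel = channel
        self.pwm = os.path.join(chip, "pwm%d" % channel)

    def _write(self, path, value):
        # sysfs attributes are plain text files taking integers
        with open(path, "w") as f:
            f.write("%d\n" % value)

    def export(self):
        # ask the kernel to create the pwmN directory, if not done already
        if not os.path.isdir(self.pwm):
            self._write(os.path.join(self.chip, "export"), self.channel)

    def configure(self, period_ns, duty_ns, enable=True):
        self._write(os.path.join(self.pwm, "period"), period_ns)
        self._write(os.path.join(self.pwm, "duty_cycle"), duty_ns)
        self._write(os.path.join(self.pwm, "enable"), 1 if enable else 0)

# the shell session above, restated:
#   p = SysfsPWM(); p.export(); p.configure(1000000, 500000)
```
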
<h3>Button devicetree</h3>
<p>Devicetree isn’t just for outputs. It is easy to connect a push button to a <span class="caps">GPIO </span>pin and sense it, but someone has to debounce the input so that each press generates but a single event. This is a fairly general problem, so unsurprisingly the kernel has support for it: we just need to configure it.</p>
<pre><code class="small">dtoverlay=gpio-key,gpio=25,label=MYBTN,keycode=0x101</code></pre>
<p>The line above sets up a new button, which generates events when pressed:</p>
<pre><code class="small">$ evtest /dev/input/event1
Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x1 product 0x1 version 0x100
Input device name: "19.button"
Supported events:
Event type 0 (EV_SYN)
Event type 1 (EV_KEY)
Event code 257 (BTN_1)
Properties:
Testing ... (interrupt to exit)
Event: time 1530301202.391360, type 1 (EV_KEY), code 257 (BTN_1), value 1
Event: time 1530301202.391360, -------------- SYN_REPORT ------------
Event: time 1530301202.571365, type 1 (EV_KEY), code 257 (BTN_1), value 0
Event: time 1530301202.571365, -------------- SYN_REPORT ------------</code></pre>
<p>It isn’t clear to me what the label argument does, nor if it’s possible to give the device a nicer path. The latter can be done if you write your own overlay.</p>
<p>To write applications, I found the python <a href="https://python-evdev.readthedocs.io/en/latest/">evdev</a> bindings convenient.</p>
<h3>Rotary encoder devicetree</h3>
<p>Rotary encoders are another popular input device: they generate a pair of quadrature pulse-trains which can be decoded to tell us how much the encoder has been turned. To instantiate the device add this to /boot/config.txt:</p>
<pre><code class="small">dtoverlay=rotary-encoder,pin_a=7,pin_b=8,relative_axis=1</code></pre>
<p>Then watch the events as the knob gets turned:</p>
<pre><code class="small">$ evtest /dev/input/event0
Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x0 product 0x0 version 0x0
Input device name: "7.rotary"
Supported events:
Event type 0 (EV_SYN)
Event type 2 (EV_REL)
Event code 0 (REL_X)
Properties:
Testing ... (interrupt to exit)
Event: time 1530301590.130047, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1530301590.130047, -------------- SYN_REPORT ------------
Event: time 1530301590.679675, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1530301590.679675, -------------- SYN_REPORT ------------
Event: time 1530301591.295263, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1530301591.295263, -------------- SYN_REPORT ------------
Event: time 1530301592.056198, type 2 (EV_REL), code 0 (REL_X), value -1
Event: time 1530301592.056198, -------------- SYN_REPORT ------------
Event: time 1530301592.578682, type 2 (EV_REL), code 0 (REL_X), value -1
Event: time 1530301592.578682, -------------- SYN_REPORT ------------
Event: time 1530301593.355554, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1530301593.355554, -------------- SYN_REPORT ------------
Event: time 1530301593.822017, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1530301593.822017, -------------- SYN_REPORT ------------</code></pre>
<p>To write applications, I found the python <a href="https://python-evdev.readthedocs.io/en/latest/">evdev</a> bindings convenient.</p>
<h3>1-Wire devicetree</h3>
<p>Maxim Integrated make nice little digital thermometers which sit on a 1-wire bus; of these, the <a href="https://www.maximintegrated.com/en/products/sensors/DS18S20.html"><span class="caps">DS18S20</span></a> is reasonably common.</p>
<p>Happily there’s an overlay to bit-bang a 1-wire interface:</p>
<pre><code class="small">dtoverlay=w1-gpio,gpiopin=4</code></pre>
<p>Then you can attach a bunch of sensors and read the temperatures (in milliCelsius):</p>
<pre><code class="small">$ cat /sys/bus/w1/devices/*/w1_slave | fgrep t=
a9 01 4b 46 7f ff 07 10 85 t=26562
a0 01 4b 46 7f ff 10 10 6e t=26000
a4 01 4b 46 7f ff 0c 10 da t=26250
9f 01 4b 46 7f ff 01 10 40 t=25937</code></pre>
<h2>Devicetree from first principles</h2>
<p><em>These days (June 2018) you might find that you don’t need to do these anymore: see the section above. I’ve left the notes below not because I recommend them now, but because they might be useful. Note also that the <span class="caps">PWM </span>system has improved somewhat over the last year.</em></p>
<p>There is considerable overlap and duplication with the previous section: apologies for that.</p>
<p>Happily, enough people use the Raspberry Pi that not only are the general principles well tested and documented, but you can also often find someone who has implemented something close to the specific thing you want.</p>
<h3><span class="caps">LED </span>devicetree</h3>
<p>Above we saw a snippet from the devicetree which handles the disc activity <span class="caps">LED.</span> Here’s the full source of an overlay for a system heartbeat display:</p>
<pre><code class="small">/dts-v1/;
/plugin/;
/ {
compatible = "brcm,bcm2835", "brcm,bcm2708";
fragment@0 {
target = <&leds>;
__overlay__ {
hb_led: led {
label = "led1";
linux,default-trigger = "heartbeat";
gpios = <&gpio 17 0>;
};
};
};
};</code></pre>
<p>As you can guess this <span class="caps">LED</span>:</p>
<ul>
<li>is connected to <span class="caps">GPIO</span> 17;</li>
<li>is active high;</li>
<li>is driven by the <a href="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/drivers/leds/trigger/ledtrig-heartbeat.c">heartbeat</a> trigger.</li>
</ul>
<p>The &gpio is a reference to the Pi’s main <span class="caps">GPIO </span>controller. In principle we could add other <span class="caps">GPIO </span>controllers, e.g. via <span class="caps">SPI, </span>create a <span class="caps">GPIO </span>controller node for them with a devicetree overlay, then put the heartbeat <span class="caps">LED </span>on the new <span class="caps">GPIO </span>controller by using a different reference.</p>
<p>Expansion <a href="https://www.raspberrypi.org/documentation/configuration/device-tree.md#part3.4"><span class="caps">HAT</span>s should implement this</a> transparently, by including a suitable devicetree overlay in an <span class="caps">EEPROM.</span> I’ve not played with this though.</p>
<p>The remaining difference between this overlay and the snippet above is that the overlay has to say where it fits in the existing devicetree. That’s done by the target = <&leds> line: another symbolic reference.</p>
<p>More complicated overlays might want to modify different parts of the devicetree: happily they can do that by including more than one fragment.</p>
<h3><span class="caps">PWM </span>devicetree</h3>
<p>The <span class="caps">PWM </span>subsystem is reasonably complicated. Not only do we need a <span class="caps">GPIO </span>pin on which to generate the signal, but we also need to configure the clock going into the <span class="caps">PWM </span>hardware.</p>
<p>Happily people have already worked all this out and written it up. I found an article by <a href="http://www.jumpnowtek.com/rpi/Using-the-Raspberry-Pi-Hardware-PWM-timers.html">Jumpnow Technologies</a> helpful, but that refers to other, earlier, work.</p>
<p>In case of link rot, here’s the code I cribbed, though you’re probably better off getting it from <a href="https://github.com/jumpnow/meta-rpi/tree/krogoth/recipes-kernel/linux/linux-raspberrypi-4.4/dts">GitHub</a>:</p>
<pre><code class="small">/*
 * Legal pin,function combinations for each channel:
 *   PWM0: 12,4(Alt0) 18,2(Alt5) 40,4(Alt0)            52,5(Alt1)
 *   PWM1: 13,4(Alt0) 19,2(Alt5) 41,4(Alt0) 45,4(Alt0) 53,5(Alt1)
 * N.B.:
 *   1) Pin 18 is the only one available on all platforms, and
 *      it is the one used by the I2S audio interface.
 *      Pins 12 and 13 might be better choices on an A+, B+ or Pi2.
 *   2) The onboard analogue audio output uses both PWM channels.
 *   3) So be careful mixing audio and PWM.
 */
/dts-v1/;
/plugin/;

/ {
    compatible = "brcm,bcm2835", "brcm,bcm2708";

    fragment@0 {
        target = <&gpio>;
        __overlay__ {
            pwm_pins: pwm_pins {
                brcm,pins = <18>;
                brcm,function = <2>; /* Alt5 */
            };
        };
    };

    fragment@1 {
        target = <&clk_pwm>;
        __overlay__ {
            // Rename the fixed "pwm" clock to avoid a clash
            clock-output-names = "fake_pwm";
        };
    };

    fragment@2 {
        target = <&pwm>;
        __overlay__ {
            #clock-cells = <1>;
            clocks = <&cprman 30>; /* 30 is the BCM2835_CLOCK_PWM */
            assigned-clocks = <&cprman 30>;
            assigned-clock-rates = <10000000>;
            pinctrl-names = "default";
            pinctrl-0 = <&pwm_pins>;
            status = "okay";
        };
    };

    fragment@3 {
        target = <&cprman>;
        __overlay__ {
            status = "okay";
        };
    };

    __overrides__ {
        pin  = <&pwm_pins>,"brcm,pins:0";
        func = <&pwm_pins>,"brcm,function:0";
    };
};</code></pre>
<p>This is clearly much more complicated than the <span class="caps">LED </span>example above, and it includes four fragments each patching a different part of the devicetree.</p>
<p>Ignoring the implementation details, three things are important:</p>
<ul>
<li>the <span class="caps">PWM </span>system will drive <span class="caps">GPIO</span> 18;</li>
<li>the <span class="caps">PWM </span>clock will run at 10MHz;</li>
<li>to some extent the <span class="caps">PWM </span>clock and audio clock are intertwined.</li>
</ul>
<p>As of 2018, the clock shenanigans above are no longer necessary. In practice I just use the standard <span class="caps">PWM </span>overlays though, so I’ve not revised this.</p>
<p>You’ll still have to worry about the permissions/owner of the sysfs device. See above for a discussion.</p>
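<p>When you do drive the <span class="caps">PWM </span>from sysfs, note that both the period and the duty cycle are specified in nanoseconds. A small helper makes the conversion less error-prone; this is only a sketch, and the function name is my own:</p>

```python
def pwm_settings(freq_hz: float, duty_frac: float) -> tuple:
    """Return (period_ns, duty_cycle_ns) for the sysfs PWM interface."""
    if not 0.0 <= duty_frac <= 1.0:
        raise ValueError("duty_frac must be between 0 and 1")
    period_ns = round(1e9 / freq_hz)
    return period_ns, round(period_ns * duty_frac)

# a 50Hz servo signal with a 1.5ms pulse:
# pwm_settings(50, 0.075) == (20000000, 1500000)
```

The returned values are what you would write to the period and duty_cycle files under /sys/class/pwm/pwmchip0/pwm0/.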
<h4><span class="caps">PWM LED </span>control</h4>
<p>In principle we could create a Linux <span class="caps">LED </span>device with a <span class="caps">PWM </span>backend. The <a href="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/drivers/leds/leds-pwm.c">code is in the kernel tree</a> but the module isn’t compiled in the current version of Raspbian. A fun future project.</p>
<h3>Button devicetree</h3>
<p>Happily someone has already written an article showing how to do this: see ShiftPlusOne's post in the <a href="https://www.raspberrypi.org/forums/viewtopic.php?f=107&t=115394">gpio_keys device tree overlay</a> thread.</p>
<p>I reduced his example to a single button which generates keycode 256 when a button connected to <span class="caps">GPIO</span> 25 is pressed.</p>
<pre><code class="small">/dts-v1/;
/plugin/;

/ {
    compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";

    fragment@0 {
        target-path = "/soc/gpio";
        __overlay__ {
            butt_pins: butt_pins {
                brcm,pins = <25>;
                brcm,function = <0>;
                brcm,pull = <2>;
            };
        };
    };

    fragment@1 {
        target-path = "/soc";
        __overlay__ {
            keypad: keypad {
                compatible = "gpio-keys";
                #address-cells = <1>;
                #size-cells = <0>;
                pinctrl-names = "default";
                pinctrl-0 = <&butt_pins>;
                button@13 {
                    label = "Test BTN0";
                    linux,code = <0x100>;
                    gpios = <&gpio 25 1>;
                };
            };
        };
    };
};</code></pre>
<p>We need a couple of fragments here: the first configures the <span class="caps">GPIO </span>pin; the second interprets the input as a keypad. The latter refers to the former with the &butt_pins reference.</p>
<p>Note that when we configure our <span class="caps">GPIO </span>pin, we use the brcm,pull line to configure the pin’s internal pull-up resistor, which saves installing one on the board.</p>
<h4>Watching the events</h4>
<p>Having loaded the overlay, an input device appears at /dev/input/by-path/platform-soc\:keypad-event. It’s convenient to view the events with the <a href="https://packages.debian.org/sid/utils/evtest">evtest</a> package.</p>
<pre><code class="small">$ sudo evtest /dev/input/by-path/platform-soc\:keypad-event
Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x1 product 0x1 version 0x100
Input device name: "soc:keypad"
Supported events:
  Event type 0 (EV_SYN)
  Event type 1 (EV_KEY)
    Event code 256 (BTN_0)
Properties:
Testing ... (interrupt to exit)
Event: time 1493118156.530675, type 1 (EV_KEY), code 256 (BTN_0), value 1
Event: time 1493118156.530675, -------------- EV_SYN ------------
Event: time 1493118156.700677, type 1 (EV_KEY), code 256 (BTN_0), value 0
Event: time 1493118156.700677, -------------- EV_SYN ------------
Event: time 1493118157.470660, type 1 (EV_KEY), code 256 (BTN_0), value 1
Event: time 1493118157.470660, -------------- EV_SYN ------------</code></pre>
<p>To write applications, I found the Python <a href="https://python-evdev.readthedocs.io/en/latest/">evdev</a> bindings convenient.</p>
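<p>If you’d rather avoid the dependency, the events are easy enough to unpack by hand: each record read from the device node is a fixed-size struct input_event. The sketch below assumes a 64-bit system, where the timestamp is a pair of longs:</p>

```python
import struct

# struct input_event: a struct timeval (two longs), then
# type and code (unsigned shorts) and value (signed int)
EVENT_FMT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FMT)

def decode_event(buf: bytes) -> dict:
    sec, usec, etype, code, value = struct.unpack(EVENT_FMT, buf)
    return {"time": sec + usec / 1e6,
            "type": etype, "code": code, "value": value}

# e.g. with open('/dev/input/by-path/platform-soc:keypad-event', 'rb') as f:
#          ev = decode_event(f.read(EVENT_SIZE))
```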
<h3>Rotary encoder devicetree</h3>
<p>Rotary encoders are another popular input device: they generate a pair of quadrature pulse-trains which can be decoded to tell us how much the encoder has been turned. The kernel has <a href="https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/drivers/input/misc/rotary_encoder.c">a driver</a> to do the decoding: we need only to configure it, for which <a href="https://www.kernel.org/doc/Documentation/input/rotary-encoder.txt">the documentation</a> is helpful.</p>
<p>Here’s the overlay file I used:</p>
<pre><code class="small">/dts-v1/;
/plugin/;

/ {
    compatible = "brcm,bcm2835", "brcm,bcm2708", "brcm,bcm2709";

    fragment@0 {
        target-path = "/soc/gpio";
        __overlay__ {
            knob_pins: knob_pins {
                brcm,pins = <7 8>;
                brcm,function = <0>;
                brcm,pull = <2>;
            };
        };
    };

    fragment@1 {
        target-path = "/soc";
        __overlay__ {
            knob: knob {
                compatible = "rotary-encoder";
                #address-cells = <1>;
                #size-cells = <0>;
                pinctrl-names = "default";
                pinctrl-0 = <&knob_pins>;
                gpios = <&gpio 7 1>, <&gpio 8 1>;
                linux,axis = <0>; /* REL_X */
                rotary-encoder,relative-axis;
            };
        };
    };

    __overrides__ {
        relative_axis = <&knob>,"rotary-encoder,relative-axis";
        linux_axis    = <&knob>,"linux,axis";
        rollover      = <&knob>,"rotary-encoder,rollover";
        half-period   = <&knob>,"rotary-encoder,half-period";
        steps         = <&knob>,"rotary-encoder,steps";
    };
};</code></pre>
<p>You can see that it’s similar to the keypad example above, but uses a couple of <span class="caps">GPIO </span>lines: 7 and 8.</p>
<p>There’s also a new overrides section which lets us change the configuration of the device by adding dtparam options to /boot/config.txt.</p>
<h4>Watching the events</h4>
<p>Having loaded the overlay, an input device appears at /dev/input/by-path/platform-soc\:knob-event. It’s convenient to view the events with the <a href="https://packages.debian.org/sid/utils/evtest">evtest</a> package.</p>
<pre><code class="small">$ sudo evtest /dev/input/by-path/platform-soc\:knob-event
Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x0 product 0x0 version 0x0
Input device name: "soc:knob"
Supported events:
  Event type 0 (EV_SYN)
  Event type 2 (EV_REL)
    Event code 0 (REL_X)
Properties:
Testing ... (interrupt to exit)
Event: time 1493118601.236576, type 2 (EV_REL), code 0 (REL_X), value -1
Event: time 1493118601.236576, -------------- EV_SYN ------------
Event: time 1493118601.335016, type 2 (EV_REL), code 0 (REL_X), value -1
Event: time 1493118601.335016, -------------- EV_SYN ------------
Event: time 1493118601.393779, type 2 (EV_REL), code 0 (REL_X), value -1
Event: time 1493118601.393779, -------------- EV_SYN ------------
Event: time 1493118601.456277, type 2 (EV_REL), code 0 (REL_X), value -1
Event: time 1493118601.456277, -------------- EV_SYN ------------
Event: time 1493118602.207050, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1493118602.207050, -------------- EV_SYN ------------
Event: time 1493118602.259268, type 2 (EV_REL), code 0 (REL_X), value 1
Event: time 1493118602.259268, -------------- EV_SYN ------------</code></pre>
<p>To write applications, I found the Python <a href="https://python-evdev.readthedocs.io/en/latest/">evdev</a> bindings convenient.</p>F421B754-5F99-11E8-AF43-5AF3111ED6612018-05-24T21:31:56Z2018-05-24T22:16:54ZGeocaching with FPGAsMartin Oldfield<p>A silly application for <span class="caps">FPGA</span>s: solving a niche class of geocaching puzzles.</p><h2>Introduction</h2>
<p>Geocache puzzles are varied, and as befits a game enjoyed by geeks, many geocaches can only be found after you’ve solved a geeky puzzle. Some such puzzles involve electronics, and within this niche we find those which involve digital logic.</p>
<p>I’ve solved about half-a-dozen of them, and even set one: <a href="https://www.geocaching.com/geocache/GC40ZBM_digital-electronics-theory?guid=533d0f0b-d247-4a8b-a5fa-b5c56f50fe6b"><span class="caps">GC40ZBM</span></a> near Cambridge in the <span class="caps">UK.</span></p>
<h2>A puzzle</h2>
<p>To make things more concrete, consider the following circuit:</p>
<p><a href="./trin.pdf"><img src="trin.png" alt="" class="img_noborder" /></a></p>
<p>After some deliberation you might realize that the seven-segment display shows the sequence <code>N</code>, <code>5</code>, <code>2</code>, <code>1</code>, <code>2</code>, <code>4</code>, <code>1</code>, <code>7</code> which you might recognize as the Northing of the <a href="https://en.wikipedia.org/wiki/Trinity_Great_Court">Great Court</a> Fountain.</p>
<p>Solving the problem by hand isn’t difficult: the circuit is structured to make manual simulation easy, and the task is guided by the test points shown on the left-hand side of the schematic.</p>
<p>However, it’s always nice to make things with flashing lights, and I had an iCE40 <span class="caps">FPGA </span>demo board on my desk, so this seemed worth a try:</p>
<p><img src="fpga-gc.gif" alt="" class="img_border" /></p>
<h2>More <span class="caps">LED</span>s!</h2>
<p>The <span class="caps">FPGA </span>has many pins, so it seemed a shame to drive only a single digit:</p>
<p><img src="fpga-gc.jpg" alt="" class="img_border" /></p>
<h2>Verilog</h2>
<p>Converting the schematic to verilog wasn’t hard, though there’s an off-by-one error somewhere which I bodged around.</p>
<p>The code below only covers the elements shown in the schematic: you need a bit more to generate the input clock and wire things up.</p>
<pre><code>module puzzle(input clk
             , input rst
             , output [5:0] ta
             , output [7:0] tb
             , output [7:0] seg
             , output [7:0] dig);

   wire [2:0] qs;

   counter c1 (.clk(clk), .rst(rst), .qs(qs));
   gates   gs (.q(qs), .ta(ta), .tb(tb), .seg(seg));
   digits  ds (.notSel(tb), .digs(dig));

endmodule

module counter(input clk, input rst, output [2:0] qs);

   reg q0, q1, q2;

   always @(posedge clk)
     q0 <= (rst) ? 0 : !q0;

   always @(posedge !q0)
     q1 <= (rst) ? 0 : !q1;

   always @(posedge !q1)
     q2 <= (rst) ? 0 : !q2;

   // qs is already a wire by virtue of the output declaration above,
   // so it must not be redeclared here
   assign qs = {q2, q1, q0};

endmodule

module digits(input [7:0] notSel,
              output [7:0] digs);

   // odd offset here!
   assign digs[0] = (notSel[6]) ? 0 : 1;
   assign digs[1] = (notSel[5]) ? 0 : 1;
   assign digs[2] = (notSel[4]) ? 0 : 1;
   assign digs[3] = (notSel[3]) ? 0 : 1;
   assign digs[4] = (notSel[2]) ? 0 : 1;
   assign digs[5] = (notSel[1]) ? 0 : 1;
   assign digs[6] = (notSel[0]) ? 0 : 1;
   assign digs[7] = (notSel[7]) ? 0 : 1;

endmodule

module gates(input [2:0] q,
             output [5:0] ta,
             output [7:0] tb,
             output [7:0] seg);

   assign ta[0] = !q[2];
   assign ta[1] = q[2];
   assign ta[2] = !q[1];
   assign ta[3] = q[1];
   assign ta[4] = !q[0];
   assign ta[5] = q[0];

   assign tb[0] = !(ta[1] & ta[3] & ta[5]);
   assign tb[1] = !(ta[1] & ta[3] & ta[4]);
   assign tb[2] = !(ta[1] & ta[2] & ta[5]);
   assign tb[3] = !(ta[1] & ta[2] & ta[4]);
   assign tb[4] = !(ta[0] & ta[3] & ta[5]);
   assign tb[5] = !(ta[0] & ta[3] & ta[4]);
   assign tb[6] = !(ta[0] & ta[2] & ta[5]);
   assign tb[7] = !(ta[0] & ta[2] & ta[4]);

   assign seg[0] = !(tb[0] & tb[1] & tb[3]);
   assign seg[1] = !tb[5];
   assign seg[2] = !(tb[2] & tb[4]);
   assign seg[3] = tb[2] & tb[4] & tb[5];
   assign seg[4] = tb[2] & tb[4] & tb[6];
   assign seg[5] = tb[1] & tb[5] & tb[6];
   assign seg[6] = tb[1] & tb[2] & tb[4] & tb[5];
   assign seg[7] = tb[2] & tb[4];

endmodule</code></pre>
<h2>Conclusion</h2>
<p>So there you have it. This isn’t a particularly good way to solve the puzzle but it didn’t take long and has an impressive <a href="https://en.wikipedia.org/wiki/Blinkenlights">blinkenlight</a> score. </p>4ADC93FA-2964-11E8-B658-E0065FFBD7F02018-03-16T21:51:33:33Z2018-05-13T21:05:50:50ZBayesian AB-testingMartin Oldfield<p>Some thoughts on AB-testing in a Bayesian framework. </p><h2>Abstract</h2>
<p>This article illustrates how one might tackle AB-testing in a full Bayesian framework. In particular it compares the Evidence for a model which distinguishes between the coins with a model which lumps them together. This appears to be a good way to decide whether to explore the coins’ properties or exploit our existing knowledge.</p>
<h2>AB-testing</h2>
<p>The aim of <a href="https://en.wikipedia.org/wiki/A/B_testing">AB-testing</a> is to decide which of two alternatives is better. These days the classic example is which of two adverts a website should display, in days gone by we might ask which of two coins is more likely to land heads-up. Looking forward, and generalizing to the case of many choices, it is a key issue in <a href="https://en.wikipedia.org/wiki/Monte_Carlo_tree_search">Monte Carlo tree search</a> where we have to decide which branch of the search tree to explore. We will usually talk about coins in the discussion below, but we’ll be tossing them in an environment which rewards us for getting heads.</p>
<p>There are two different problems to consider. In the first, we seek to make the best inference from a fixed set of data: as with all inferences this means we will apply <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes’ theorem</a>.</p>
<p>In the second problem, we seek an algorithm which decides which coin to toss so as to maximize the number of heads at the end of the day. It seems likely that we will want to infer things here, but having made the inferences we will need to make decisions based on them too. Often there will be a tension between exploring the choices we have available to us, and exploiting the best choice.</p>
<p>In this article, we marry a careful Bayesian inference to very simple decision rules. The inference explicitly includes the case where our data prefer to not distinguish between the alternatives. These algorithms are conceptually straightforward and easy to think about, and perform reasonably well in synthetic experiments.</p>
<p>We limit our comparisons to just two coins, though it could be extended to more alternatives with a bit of thought.</p>
<h2>Preliminary demonstrations</h2>
<p>To get a feel for the problem look at the graph below which shows the number of heads seen from a thousand coin tosses.</p>
<p class="indented">We will use the phrase a ‘\(n\)% coin’ to mean a coin which has a probability \(n\)% of landing heads-up when tossed.</p>
<p>Five different coins are used: 1%, 3%, 9%, 10%, 11%. Coin tossing is a random business, and so the graph shows 1,000 samples of the thousand-toss experiment. For each sample, the coin is chosen at random, and so we expect about 200 traces for each coin.</p>
<p><a href="./ab/rnd-1000.pdf"><img src="ab/rnd-1000.png" alt="" class="img_noborder" /></a></p>
<p>It is reasonably difficult to resolve the three different coins with probabilities \(0.09\), \(0.10\) and \(0.11\) on this graph, but things become clearer if we toss the coins for longer:</p>
<p><a href="./ab/rnd-10000.pdf"><img src="ab/rnd-10000.png" alt="" class="img_noborder" /></a></p>
<p>Suppose you have 10%- and 11%-coins. It is clear from the graph that you’d need about 1,000 tosses to see much of a difference. Even if you tossed the coins 10,000 times, you couldn’t be sure that the coin which <em>showed</em> more heads was indeed more <em>likely</em> to show heads.</p>
<p>To be quantitative, after a large number \(n\) of tosses where the chance of a head is \(p\) on each toss, the number of heads \(n_t\) is roughly Gaussian:</p>
\[
n_t = n p \pm \sqrt{n p (1-p)}.
\]
<p>Putting in some numbers, the table below shows the number of heads we expect to see expressed as mean and standard deviation:</p>
<table class="spaced" cellspacing="0"><tr><th rowspan="2">Coin</th><th colspan="4">Number of tosses</th></tr><tr><th>100</th><th>1,000</th><th>10,000</th><th>100,000</th></tr><tr><th>9%</th><td align="right">9 ± 2.9</td><td align="right">90 ± 9.0</td><td align="right">900 ± 28.6</td><td align="right">9,000 ± 90.5</td></tr><tr><th>10%</th><td align="right">10 ± 3.0</td><td align="right">100 ± 9.5</td><td align="right">1000 ± 30.0</td><td align="right">10,000 ± 94.9</td></tr><tr><th>11%</th><td align="right">11 ± 3.1</td><td align="right">110 ± 9.9</td><td align="right">1100 ± 31.3</td><td align="right">11,000 ± 98.9</td></tr></table>
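<p>The table entries are easy to reproduce: a couple of lines of Python give the mean and standard deviation for any coin:</p>

```python
from math import sqrt

def heads_stats(n: int, p: float) -> tuple:
    """Mean and standard deviation of the number of heads in n tosses."""
    return n * p, sqrt(n * p * (1 - p))

# heads_stats(10000, 0.1) reproduces the 10% row: roughly 1000 +/- 30
```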
<p>Given two coins with probabilities \(p \pm \Delta p\), the one standard deviation points will match roughly when:</p>
\[
\begin{eqnarray} \sqrt{n p (1-p)} &\approx& \frac{1}{2} n \, \Delta p, \\\
n &\approx& \frac{4 p (1-p)}{(\Delta p)^2}. \end{eqnarray}
\]
<p>Here \(p \approx 0.1\), \(\Delta p \approx 0.01\) and so \(n \approx 3{,}600\), which is consistent with the demonstrations above. The details don’t matter, but it is important to realize that you’ll need a lot of tosses to resolve the difference between similar coins.</p>
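<p>The back-of-envelope formula is trivially encoded, should you want to play with other coins; the function name here is my own:</p>

```python
def tosses_to_resolve(p: float, dp: float) -> float:
    """Approximate tosses needed to resolve coins with probabilities p and p + dp."""
    return 4.0 * p * (1.0 - p) / dp ** 2

# tosses_to_resolve(0.1, 0.01) is about 3600, as in the text
```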
<h2>Basic Bernoulli Inference</h2>
<p>In formal terms the standard model for tossing a coin is a Bernoulli process. Given the fixed probability of getting a head, \(p\), the likelihood of getting \(h\) heads in \(n\) tosses is</p>
\[
\textrm{pr}(h|p) = \frac{n!}{h!(n-h)!} \, p^h\, (1-p)^{n-h}.
\]
<p>We will generally assume a flat prior on \(p\), save for requiring it to be bounded by zero and one i.e.</p>
\[
\textrm{pr}(p) = \begin{cases} 1 & \text{when } 0 \leq p \leq 1 \\\
0 & \text{otherwise}. \end{cases}
\]
<p>As an aside, note that some kinds of prior information can be encoded as a beta-function with hyper-parameters \(a\), and \(b\):</p>
\[
\textrm{pr}(p|a,b) = \frac{(a + b + 1)!}{a!\, b!} p^a (1-p)^b.
\]
<p>This function says that before I toss the coin it is as though I have already witnessed \(a\) heads and \(b\) tails: it is a reasonable way to encode \(p\) being roughly some value with a given precision. It is mathematically convenient because the prior has the same functional form as the likelihood, and so much of the algebra below is easy to extend.</p>
<p>There is no reason to limit \(a\) and \(b\) to being integers: if you do make this generalization then you need to replace the factorials with Gamma functions.</p>
<p>Returning to the flat prior on \(p\), and assuming everywhere that \(0 \leq p \leq 1\), we can calculate the joint distribution of \(p\) and \(h\):</p>
\[
\textrm{pr}(p,h) = \frac{n!}{h! (n-h)!} p^{h} (1-p)^{n-h}.
\]
<p>the Evidence,</p>
\[
\begin{eqnarray} \textrm{pr}(h) &=& \int_0^1 \textrm{pr}(p,h) \, \textrm{d}p, \\\
&=& \frac{1}{n + 1}, \end{eqnarray}
\]
<p>and the posterior,</p>
\[
\textrm{pr}(p|h) = \frac{(n + 1)!}{h! (n-h)!} \; p^{h} (1-p)^{n-h}.
\]
<p>It is helpful to write this in terms of the binomial coefficient,</p>
\[
\textrm{pr}(p|h) = (n + 1) \, \binom{n}{h} \, p^{h} (1-p)^{n-h}.
\]
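<p>It is worth pausing on how simple this Evidence is: every value of \(h\) is equally likely a priori. A quick sanity check using exact rational arithmetic from the standard library:</p>

```python
from fractions import Fraction
from math import comb, factorial

def evidence(n: int, h: int) -> Fraction:
    # pr(h) = C(n,h) * Integral of p^h (1-p)^(n-h) dp
    #       = C(n,h) * h! (n-h)! / (n+1)!  =  1 / (n+1)
    return Fraction(comb(n, h) * factorial(h) * factorial(n - h),
                    factorial(n + 1))

# evidence(10, h) == Fraction(1, 11) for every h in 0..10
```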
<h2>Two coins</h2>
<p>Perhaps the simplest way to model AB-testing is to assume that A and B are completely independent. This means that all the variables above sprout a suffix indicating their allegiance, and the probabilities multiply.</p>
<p>For example the posterior distribution of \(p_A\) and \(p_B\) given the data:</p>
\[
\begin{eqnarray} \textrm{pr}(p_A, p_B|h_A, h_B) &=& \textrm{pr}(p_A|h_A) \; \textrm{pr}(p_B|h_B) \\\
&=& (n_A + 1) \binom{n_A}{h_A} p_A^{h_A} (1-p_A)^{n_A-h_A} \; (n_B + 1) \binom{n_B}{h_B} p_B^{h_B} (1-p_B)^{n_B-h_B}, \\\
&=& (n_A + 1)(n_B + 1) \binom{n_A}{h_A} \binom{n_B}{h_B} \; p_A^{h_A} (1-p_A)^{n_A-h_A} \; p_B^{h_B} (1-p_B)^{n_B-h_B}. \end{eqnarray}
\]
<p>To assess the probability that A is better than B just integrate the posterior over the region where \(p_A > p_B\):</p>
\[
\textrm{pr}(p_A > p_B) = \int_0^1 \, \textrm{d}p_A \, \int_0^{p_A} \textrm{d}p_B \, \textrm{pr}(p_A, p_B|h_A, h_B).
\]
<p>Sadly though this is messy:</p>
\[
\textrm{pr}(p_A > p_B) = Q \int_0^1 \, \textrm{d}p_A \, \int_0^{p_A} \textrm{d}p_B \, p_A^{h_A} (1-p_A)^{n_A-h_A} \; p_B^{h_B} (1-p_B)^{n_B-h_B},
\]
<p>where \(Q\) denotes all the terms from above not involving \(p_i\). The integral is particularly hard in the cases that matter: if A and B are roughly as good as each other, the line \(p_A = p_B\) which marks one boundary of the area over which we are integrating will slice through a significant mass of probability.</p>
<p>That said, it is only a 2D-integral of a smooth function over a bounded region: solving it numerically is perfectly feasible for particular values of \(n_i\) and \(h_i\).</p>
<p>Happily, though, there is a better way to proceed.</p>
<h2>Mr Inclusive</h2>
<p>If we’re conscientious, we should compare the two-parameter model above with a simpler one-parameter model which assumes that both coins have the same chance of giving a head.</p>
<p>Bayesian model-comparison embodies an automatic <a href="http://mlg.eng.cam.ac.uk/zoubin/papers/occam.pdf">Occam’s Razor</a> which prefers simpler models unless the data provide a compelling reason to prefer a more complicated one. In the case of AB-testing we might hope that initially the simpler model will be preferred at first: essentially the good Reverend Bayes shrugs and says “I don’t have enough data to justify putting the coins in separate classes.”</p>
<p>Introducing \(\mathscr{H}_1\) for the 1-parameter hypothesis, the joint distribution is</p>
\[
\begin{eqnarray} \textrm{pr}(h_A, h_B, p | \mathscr{H}_1) &=& \binom{n_A}{h_A} \, p^{h_A} (1-p)^{n_A-h_A} \; \binom{n_B}{h_B} \, p^{h_B} (1-p)^{n_B-h_B} \\\
&=& \binom{n_A}{h_A} \, \binom{n_B}{h_B} \; p^{h_A + h_B} \; (1-p)^{n_A + n_B - h_A - h_B}, \end{eqnarray}
\]
<p>and thus the Evidence,</p>
\[
\textrm{pr}(h_A, h_B | \mathscr{H}_1) = \frac{1}{n_A + n_B + 1}\; \binom{n_A}{h_A}\, \binom{n_B}{h_B} \Big/ \binom{n_A + n_B}{h_A + h_B}.
\]
<p>By contrast for the two-parameter model the Evidence is,</p>
\[
\textrm{pr}(h_A, h_B | \mathscr{H}_2) = \frac{1}{(n_A + 1)(n_B + 1)},
\]
<p>and thus, assuming we consider each possibility equally probable <em>a priori</em>,</p>
\[
\begin{eqnarray} \frac{\textrm{pr}(\mathscr{H}_2|h_A, h_B)}{\textrm{pr}(\mathscr{H}_1|h_A, h_B)} &=& \frac{\textrm{pr}(h_A, h_B | \mathscr{H}_2)}{\textrm{pr}(h_A, h_B | \mathscr{H}_1)} \\\
&=& \frac{n_A + n_B + 1}{(n_A + 1)(n_B + 1)}\; \binom{n_A + n_B}{h_A + h_B} \Big/ \left[ \binom{n_A}{h_A}\, \binom{n_B}{h_B} \right]. \end{eqnarray}
\]
<p>If this ratio is bigger than one, the best inference is that the two coins have different probabilities with distribution:</p>
\[
\textrm{pr}(p_A, p_B|h_A, h_B, \mathscr{H}_2) = \prod_{i \in \{A,B\}} (n_i + 1) \binom{n_i}{h_i} \; p_i^{h_i} (1-p_i)^{n_i-h_i};
\]
<p>otherwise we don’t distinguish the coins, and the single probability \(p\) with distribution,</p>
\[
\textrm{pr}(p|h_A, h_B, \mathscr{H}_1) = (n_A + n_B + 1) \binom{n_A + n_B}{h_A + h_B} \; p^{h_A + h_B} \; (1-p)^{n_A-h_A + n_B - h_B}.
\]
<p>Here endeth the inference.</p>
<h3>An illustration</h3>
<p>The expressions above are difficult to grasp intuitively, so instead we shall plot them! The plot below shows \(\textrm{pr}(\mathscr{H}_2|D)\) for different combinations of heads when two coins were each tossed 100 times.</p>
<p><a href="./ab/evratio-100.pdf"><img src="ab/evratio-100.png" alt="" class="img_noborder" /></a></p>
<p>Areas in green are those where \(\mathscr{H}_2\) is more likely; those in purple favour \(\mathscr{H}_1\).</p>
<p>It is easy to see the symmetry between heads and tails, and between coins A & B. Also, note the relatively sharp transition between the two regimes.</p>
<h2>Decision theory</h2>
<p>The inferences above are the optimal conclusions to draw from a given set of data. If all we had were the results of a fixed experiment, we’d be done. However, suppose we can get more data. It is clear that we should do so, both because we might learn more about the properties of the coins, and because we get rewarded for getting heads. This naturally forces us to choose: which coin should we toss?</p>
<p>Although Bayes doesn’t tell us what to do, the Evidence ratio tells us whether we have collected enough data to distinguish between the coins: pedantically, which of the one- and two-parameter models is more probable given the data.</p>
<p>This suggests a simple way to proceed: if the Evidence-ratio favours the one-parameter model focus on improving our inference; otherwise just exploit the best model.</p>
<p>I claim without proof that a sensible way to exploit things is to toss the coin with the highest fraction of heads. Remember we’re only going to do this when we infer that the data we’ve collected distinguish between the two coins.</p>
<p>In the exploration phase, a couple of simple ideas appeal: prefer the rarest, and just guess!</p>
<h3>Prefer the rarest</h3>
<p>Intuitively it seems reasonable that if we can’t distinguish between the coins, we should explore the one we’ve tossed least often.</p>
<p>The following python snippet implements the algorithm:</p>
<pre><code>log_ev1 = (log_binomial(na, ha)
           + log_binomial(nb, hb)
           - log_binomial(na + nb, ha + hb)
           - math.log(na + nb + 1))
log_ev2 = -(math.log(1 + na) + math.log(1 + nb))

if (log_ev2 < log_ev1):
    # one-coin model preferred: explore the coin we have tossed least often
    if (na < nb):
        return 0
    elif (na > nb):
        return 1
else:
    # two-coin model preferred: exploit the coin with the higher head fraction
    if (ha * nb < hb * na):
        return 1
    elif (ha * nb > hb * na):
        return 0

# break ties randomly to avoid bias
return random.randint(0,1)</code></pre>
<h3>Random exploration</h3>
<p>If we can’t distinguish between the coins, just choosing between them randomly will probably work tolerably well too.</p>
<p>The code to implement this is even shorter:</p>
<pre><code>log_ev1 = (log_binomial(na, ha)
           + log_binomial(nb, hb)
           - log_binomial(na + nb, ha + hb)
           - math.log(na + nb + 1))
log_ev2 = -(math.log(1 + na) + math.log(1 + nb))

if (log_ev2 > log_ev1):
    # two-coin model preferred: exploit the coin with the higher head fraction
    if (ha * nb < hb * na):
        return 1
    elif (ha * nb > hb * na):
        return 0

# otherwise explore (or break a tie) by choosing randomly
return random.randint(0,1)</code></pre>
<h3>Implementation details</h3>
<p>There are a few points of note:</p>
<ol>
<li>It is extremely important not to introduce bias into the choice when e.g. \(n_A = n_B\): above we break the tie by choosing randomly between the choices. These coincidences occur quite often because we often keep choosing the same coin until a statistic balances.</li>
<li>The Evidences grow rapidly with more data, so it is more sensible to work with logs. Even so, at some point floating point precision will become an issue in the calculations.</li>
<li>For large \(n\) <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation">Stirling’s approximation</a> is often a good way to calculate \(\log(n!)\). Alternatively there’s a <code>gammaln</code> function in <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.gammaln.html">scipy</a>.</li>
</ol>
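<p>The snippets above assume a log_binomial helper. Following point 3, here is one minimal implementation using the standard library’s math.lgamma:</p>

```python
import math

def log_binomial(n: int, k: int) -> float:
    """log of the binomial coefficient C(n, k), safe for large n."""
    return (math.lgamma(n + 1)
            - math.lgamma(k + 1)
            - math.lgamma(n - k + 1))

# math.exp(log_binomial(10, 3)) recovers C(10, 3) = 120, up to rounding
```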
<p>The code on GitHub abstracts most of the common behaviour into an abstract base class. Comparisons are often implemented in terms of <code>ternary_cmp</code>:</p>
<pre><code># compare a & b then return
#   x_lt if a < b
#   x_eq if a == b
#   x_gt if a > b
def ternary_cmp(a, b, x_lt, x_eq, x_gt):
    if (a < b):
        return x_lt
    elif (a > b):
        return x_gt
    else:
        return x_eq</code></pre>
<p>which explicitly forces us to consider the case where the arguments are equal. This makes it harder to implicitly include a bias.</p>
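<p>For example, the exploit step of the earlier snippets can be written with it. This is a hypothetical rewrite rather than the code from the repository, with ternary_cmp repeated so the sketch runs on its own:</p>

```python
import random

def ternary_cmp(a, b, x_lt, x_eq, x_gt):
    # as defined above
    if a < b:
        return x_lt
    elif a > b:
        return x_gt
    else:
        return x_eq

def exploit(na, ha, nb, hb):
    """Toss the coin with the higher observed head fraction, breaking ties randomly."""
    # compare ha/na with hb/nb without dividing
    return ternary_cmp(ha * nb, hb * na,
                       1,                      # B has the higher fraction
                       random.randint(0, 1),   # tie: choose at random
                       0)                      # A has the higher fraction
```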
<h2>Results</h2>
<p>The following section shows how well the algorithm performs in a variety of different scenarios. One could run many experiments; the sample here hopefully gives some insight into how the code behaves, without making any claim to be comprehensive.</p>
<p>We compare four different algorithms. Two, <span class="caps">UCB1 </span>and the Annealing Epsilon Greedy (AEG), are commonly mentioned for such problems. John Myles White’s excellent book <a href="http://shop.oreilly.com/product/0636920027393.do">Bandit Algorithms for Website Optimization</a> has good explanations of them both. John helpfully provided <span class="caps">MIT </span>licensed versions of the code on <a href="https://github.com/johnmyleswhite/BanditsBook">GitHub</a>: I am using both his implementation of those algorithms and his general code structure. Thank you John.</p>
<p>I should say that I’ve not tried to tune the <span class="caps">UCB1 </span>or <span class="caps">AEG </span>algorithms, so a skilled practitioner might get better performance from them.</p>
<p>Besides <span class="caps">AEG </span>and <span class="caps">UCB1, </span>we also include the two Bayesian algorithms above. These differ only in their strategy when exploring i.e. when \(\textrm{pr}(\mathscr{H}_1|D) > \textrm{pr}(\mathscr{H}_2|D)\). In these circumstances “Bayes” chooses the coin which has been tossed least frequently; “Bayes Rnd” chooses randomly with equal weighting.</p>
<p>We track three numbers during the simulations:</p>
<ul>
<li>The total score i.e. how many heads we have tossed so far.</li>
<li>The “average coin”, or equivalently the fraction of times we have tossed coin B. Thus if coin A is better we hope this will be close to zero, and if coin B is better we hope it will be close to one.</li>
<li>\(\textrm{pr}(\mathscr{H}_2|D)\) i.e. the probability of the two-parameter model given the Data. This only makes sense for the Bayesian algorithms.</li>
</ul>
<p>In some cases we report only the final value of the statistic, averaged over 10,000 runs. In others we plot the statistic’s evolution during a run by drawing 200 samples from the simulation.</p>
<p>In all simulations, coin A is always a 10% coin. To avoid biases we run the code twice, reversing the order of the coins between runs: this shouldn’t make a difference, but it will if an algorithm is biased towards, say, the first coin in the list.</p>
<p>Some parameters change across runs:</p>
<ul>
<li>Coin B is variously a 1%, 3%, 9%, 11%, 30% or 90% coin.</li>
<li>We consider three different lengths: 100 tosses, 1,000 tosses, and 10,000 tosses.</li>
</ul>
<h3>Reproducibility</h3>
<p>All the code used to generate the data can be downloaded from <a href="https://github.com/mjoldfield/bayesian-ab">GitHub</a>. Assuming that you have a full Python 3 installation:</p>
<pre><code>$ git clone https://github.com/mjoldfield/bayesian-ab.git
$ cd bayesian-ab
$ ./do-simulations
$ ./do-plots</code></pre>
<p>will leave all the images and table data in the <code>plots</code> subdirectory. Note that neither script is fast. On my iMac the simulations take about 27 hours; the plots about 20 minutes. If you’re running the code a lot, it might be sensible to parallelize it.</p>
<h3>1,000 tosses</h3>
<p>We begin by exploring runs of 1,000 tosses. Our earlier exploratory work suggests that we will only be able to distinguish significantly different coins in such a run. For example, we don’t expect to distinguish reliably between 9% and 10% coins.</p>
<p>Perhaps the best measure of how well the algorithm is doing is to see how often it tosses the best coin. This measure will always lie between zero and one, so we can directly compare say a 1% coin with a 30% coin, even though we’d expect the latter to see a lot more heads.</p>
<p><a href="./ab/res-1000.pdf"><img src="ab/res-1000.png" alt="" class="img_noborder" /></a></p>
<table class="cspaced_sml" cellspacing="0"><tr><th rowspan="2">Algorithm</th><th colspan="6">Coin B</th></tr><tr><th>1%</th><th>3%</th><th>9%</th><th>11%</th><th>30%</th><th>90%</th></tr><tr><th>Bayes</th><td>0.912</td><td>0.831</td><td>0.511</td><td>0.513</td><td>0.964</td><td><strong>0.998</strong></td></tr><tr><th>Bayes Rnd</th><td><strong>0.916</strong></td><td>0.838</td><td>0.516</td><td>0.515</td><td><strong>0.968</strong></td><td>0.998</td></tr><tr><th><span class="caps">AEG</span></th><td>0.886</td><td><strong>0.872</strong></td><td><strong>0.582</strong></td><td><strong>0.588</strong></td><td>0.903</td><td>0.911</td></tr><tr><th><span class="caps">UCB</span> 1</th><td>0.730</td><td>0.688</td><td>0.529</td><td>0.530</td><td>0.868</td><td>0.983</td></tr></table>
<p>As is probably clear, none of the algorithms manages to pick the good coin consistently when the coins are similar. Most do little better than 50%; <span class="caps">AEG </span>is the best of the bunch, tossing the good coin nearly 60% of the time.</p>
<p>However, when the coins can be resolved, the Bayesian algorithms are bolder in their inference and get the best score. Quantitatively, when choosing between 10% and 30% coins, both Bayesian methods choose the good coin over 95% of the time; <span class="caps">AEG </span>about 90% of the time.</p>
<p>Turning now to the final score, the basic pattern is repeated. One point is worthy of note: although the <span class="caps">AEG </span>algorithm does a better job of distinguishing the coins when they are similar, this doesn’t make much difference to the total number of heads at the end of the run. After all, if the coins are very similar it won’t matter much which one you choose!</p>
<table class="cspaced_sml" cellspacing="0"><tr><th rowspan="2">Algorithm</th><th colspan="6">Coin B</th></tr><tr><th>1%</th><th>3%</th><th>9%</th><th>11%</th><th>30%</th><th>90%</th></tr><tr><th>Bayes</th><td>92.110</td><td>88.189</td><td>95.101</td><td>105.029</td><td>293.032</td><td><strong>898.725</strong></td></tr><tr><th>Bayes Rnd</th><td><strong>92.393</strong></td><td>88.782</td><td>95.118</td><td>105.175</td><td><strong>293.579</strong></td><td>898.513</td></tr><tr><th><span class="caps">AEG</span></th><td>89.786</td><td><strong>91.208</strong></td><td><strong>95.923</strong></td><td><strong>105.948</strong></td><td>280.570</td><td>828.705</td></tr><tr><th><span class="caps">UCB</span> 1</th><td>75.716</td><td>78.166</td><td>95.306</td><td>105.327</td><td>273.677</td><td>886.324</td></tr></table>
<p>Perhaps the proper conclusion to draw is that you need to think carefully about the appropriate measure when evaluating the different algorithms.</p>
<p>We turn now to (many) colourful plots. Each column of three plots below corresponds to a single entry in the table above and a single dot on the graph.</p>
<p>The first graph in each column shows the total number of heads, which increases monotonically as the run progresses. The slope of the graph shows the average number of heads per toss, and I think it isn’t entirely fanciful to suggest that in some cases you can see the slope increase as the algorithm learns which coin is best.</p>
<p>The third graph shows which coin we are tossing. It is averaged over a small number of tosses, and noise is added to separate otherwise identical traces. There are clear differences in the way the algorithms work:</p>
<ul>
<li>The Bayesian algorithms start tossing each coin roughly 50:50, then switch almost entirely to one coin, which is almost always the better one. Before we switch, which corresponds to \(\mathscr{H}_1\) being preferred, the random variant is noisier.</li>
<li>The <span class="caps">AEG </span>algorithm seems to embrace one coin quite quickly, but then updates its choice over time if it seems to be wrong.</li>
<li>The <span class="caps">UCB1 </span>algorithm stays close to 50:50, and drifts towards the correct coin <em>en masse</em>.</li>
</ul>
<p>Finally, the second graph shows which model the Bayesian algorithms consider to be best. Formally, we plot the probability of \(\mathscr{H}_2\) given the data. Again we add noise to give a better representation of the probability density.</p>
<p>In all cases, \(\mathscr{H}_1\) and \(\mathscr{H}_2\) are equally probable <em>a priori</em>, and so the traces all start at 0.5. Initially we do not have enough data to support the two-parameter model and so the trace moves down; over time \(\mathscr{H}_2\) fits the data better and so the trace moves up.</p>
<p>When \(\mathscr{H}_2\) becomes more probable than \(\mathscr{H}_1\), we move into the exploiting phase. At this point, the better coin is tossed exclusively, so we stop learning about the other coin. If the limiting factor on being sure that we have different coins is the uncertainty about the poorer coin, then this won’t change over time. This is why the traces don’t typically proceed to \(\textrm{pr}(\mathscr{H}_2|D) = 1\), but instead fill the half-space \(\textrm{pr}(\mathscr{H}_2|D) > \frac{1}{2}\).</p>
<p>Finally we should also discuss the possibility that the algorithm picks the wrong coin when it switches to \(\mathscr{H}_2\). This might happen if, for example, the good coin has a run of bad luck at the beginning and is never investigated again. In practice this does happen, but apparently not often enough to affect the overall result significantly. Mitigations for this will be discussed later.</p>
<p>Note that the converse situation, where the bad coin has a lucky streak, isn’t as big a problem: we would keep tossing the bad coin and eventually its lucky streak would end.</p>
<p><a href="./ab/paths-1000-0_01.pdf"><img src="ab/paths-1000-0_01.png" alt="" class="img_noborder" /></a></p>
<p><a href="./ab/paths-1000-0_03.pdf"><img src="ab/paths-1000-0_03.png" alt="" class="img_noborder" /></a></p>
<p>The two plots above correspond to cases where the second coin is much less likely to give a head. In the first case, the difference is large enough to be identified within 1,000 tosses; in the second the algorithms are still not sure.</p>
<p><a href="./ab/paths-1000-0_09.pdf"><img src="ab/paths-1000-0_09.png" alt="" class="img_noborder" /></a></p>
<p><a href="./ab/paths-1000-0_11.pdf"><img src="ab/paths-1000-0_11.png" alt="" class="img_noborder" /></a></p>
<p>These two plots show coins which are very similar. As expected the Bayesian algorithms only rarely move into the exploitation phase. In one case, it exploits the wrong coin, choosing to toss the 9% coin instead of the 10%. As discussed in the text, this mistake might persist for a long time because the algorithm isn’t tossing the 10% coin and so will persist in its delusion that it’s a bad choice.</p>
<p><a href="./ab/paths-1000-0_30.pdf"><img src="ab/paths-1000-0_30.png" alt="" class="img_noborder" /></a></p>
<p><a href="./ab/paths-1000-0_90.pdf"><img src="ab/paths-1000-0_90.png" alt="" class="img_noborder" /></a></p>
<p>Finally, these plots show how the algorithms handle the second coin being relatively very likely to land heads. It is clear that these are easier scenarios: all the algorithms swiftly pick the good coin and exploit it.</p>
<h3>100 tosses</h3>
<p>Below, we zoom in on the early steps of the run, looking at only 100 steps i.e. the first tenth of the run above. The plot below shows the overall performance, and as you’d expect it is somewhat worse than for the longer runs. Unsurprisingly the easy scenarios are affected less.</p>
<p><a href="./ab/res-100.pdf"><img src="ab/res-100.png" alt="" class="img_noborder" /></a></p>
<p>Beyond the somewhat facile observation above, there are a few interesting things in the detailed evolution for the extreme coins.</p>
<p><a href="./ab/paths-100-0_01.pdf"><img src="ab/paths-100-0_01.png" alt="" class="img_noborder" /></a></p>
<p>The graphs above show 1% and 10% coins. Usually, we don’t expect to resolve the difference between the coins in 100 tosses, so we expect the Bayes algorithm will often toss each coin 50 times: the data bear this out.</p>
<p>With this pair of coins, there is about a 70% chance we will end in one of only ten discrete states \(n_A \in \{3,4,5,6,7\}, n_B \in \{0,1\}\). This explains the discrete lines seen in the plot of \(\textrm{pr}(\mathscr{H}_2|D)\).</p>
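<p>The 70% figure is easy to check. Assuming, as above, that the Bayes algorithm tosses each coin 50 times, a quick sketch with the binomial distribution:</p>

```python
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k heads in n tosses of a p-coin."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 50 tosses each of the 10% coin (A) and the 1% coin (B);
# the states refer to the number of heads seen on each coin.
p_a = sum(binom_pmf(50, k, 0.10) for k in range(3, 8))   # heads on A in {3..7}
p_b = sum(binom_pmf(50, k, 0.01) for k in (0, 1))        # heads on B in {0,1}
print(p_a * p_b)   # ≈ 0.70
```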
<p>The situation is similar for the Bayes Rnd algorithm but the number of times each coin is tossed is noisy, and this blurs the traces.</p>
<p><a href="./ab/paths-100-0_90.pdf"><img src="ab/paths-100-0_90.png" alt="" class="img_noborder" /></a></p>
<p>Finally the plots above show an easy inference problem: the superiority of the 90% coin becomes evident in less than ten tosses. Consequently the traces here show the only times the 10% coin will be used in the entire run, even if it lasts much longer.</p>
<p>Note that we might remain quite uncertain about how bad the weaker coin is: we need only know that it is weaker than the 90% coin. Having decided that, we no longer toss it, and thus no longer learn about it. Given that this happens so quickly, the long-term uncertainty in the 10% coin is likely to persist at one of the handful of values consistent with, say, \(n_A \approx 5\). This is even more obvious in the graph of \(\textrm{pr}(\mathscr{H}_2|D)\) for 1,000 tosses in the previous section.</p>
<h3>10,000 tosses</h3>
<p>Finally let us explore longer time scales, and look what happens when we toss the coins 10,000 times.</p>
<p><a href="./ab/res-10000.pdf"><img src="ab/res-10000.png" alt="" class="img_noborder" /></a></p>
<p>One interesting observation: the <span class="caps">AEG </span>algorithm still tosses the bad coin nearly 10% of the time here, which is bad for performance. Perhaps different tuning would help.</p>
<p>For the Bayesian algorithms, the interesting data concern the 9% and 11% coins: can we resolve the difference between one of them and the benchmark 10% coin? As we might perhaps expect, no!</p>
<p><a href="./ab/paths-10000-0_09.pdf"><img src="ab/paths-10000-0_09.png" alt="" class="img_noborder" /></a></p>
<p><a href="./ab/paths-10000-0_11.pdf"><img src="ab/paths-10000-0_11.png" alt="" class="img_noborder" /></a></p>
<h3>Shock response</h3>
<p>As a final experiment with these Bayesian algorithms, let’s explore how they do when a new coin appears in the middle of a run. In the examples below, we toss the 10% coin 1,000 times letting the algorithm observe the results. Then we allow the algorithm to pick the coin, and see what happens over the next 100 tosses.</p>
<p><a href="./ab/paths-w-100-0_01.pdf"><img src="ab/paths-w-100-0_01.png" alt="" class="img_noborder" /></a></p>
<p>In the first case, the new option is a 1% coin. As usual, the Bayes algorithm is the easiest to understand. It starts by tossing the new coin exclusively, concludes that it’s inferior after about 35 tosses, and then switches back.</p>
<p>\(\textrm{pr}(\mathscr{H}_2|D)\) is a useful diagnostic: it starts off at a half, reflecting the equal <em>a priori</em> probabilities, then falls because the Occam factors prefer \(\mathscr{H}_1\) until we have enough data to support \(\mathscr{H}_2\). Note that the probability only increases very slowly once we have switched back to the 10% coin. We aren’t very sure whether we have one class or two because we don’t have enough data to characterize the second coin very well. The state persists because we have gone back to tossing the first coin.</p>
<p>Over longer periods, \(\textrm{pr}(\mathscr{H}_2|D)\) does rise, presumably driven by a bad run on coin A favouring \(\mathscr{H}_1\) which lets us explore coin B for a while.</p>
<p><a href="./ab/paths-w-100-0_30.pdf"><img src="ab/paths-w-100-0_30.png" alt="" class="img_noborder" /></a></p>
<p>By contrast the simulation above shows what happens when then second coin is better: here a 30% coin. Although the immediate response of the algorithm is the same—toss the new coin—it rapidly becomes clear that the switch is advantageous and we keep tossing it.</p>
<p>In information terms this means we keep getting more information about the poorly characterized coin, and become confident that it is better. Accordingly \(\textrm{pr}(\mathscr{H}_2|D)\) climbs rapidly to one.</p>
<p>There is one further observation: as noted above, the algorithms will start to toss the new coin immediately just because it is new. If the new coin happens to be better than the old, the algorithm might appear to be preternaturally astute by switching to the new coin so quickly. In other words, in the short term the algorithm’s performance is dominated by whether the new coin is better or worse than the old, so care is needed when assessing its quality here.</p>
<h2>Extreme tossing</h2>
<p>Finally I present a different algorithm, which does extremely well on the tests shown here.</p>
<p><a href="./ab/res-100.pdf"><img src="ab/all-res-100.png" alt="" class="img_noborder" /></a></p>
<table class="cspaced_sml" cellspacing="0"><tr><th rowspan="2">Algorithm</th><th colspan="6">Coin B</th></tr><tr><th>1%</th><th>3%</th><th>9%</th><th>11%</th><th>30%</th><th>90%</th></tr><tr><th>Bayes</th><td>0.550</td><td>0.531</td><td>0.503</td><td>0.504</td><td>0.729</td><td>0.982</td></tr><tr><th>Bayes Rnd</th><td>0.560</td><td>0.540</td><td>0.505</td><td>0.505</td><td>0.750</td><td>0.982</td></tr><tr><th>Bayes Ext</th><td><strong>0.835</strong></td><td><strong>0.766</strong></td><td><strong>0.537</strong></td><td><strong>0.539</strong></td><td><strong>0.888</strong></td><td><strong>0.982</strong></td></tr><tr><th><span class="caps">AEG</span></th><td>0.694</td><td>0.664</td><td>0.525</td><td>0.523</td><td>0.779</td><td>0.845</td></tr><tr><th><span class="caps">UCB</span> 1</th><td>0.595</td><td>0.575</td><td>0.511</td><td>0.511</td><td>0.698</td><td>0.921</td></tr></table>
<p><a href="./ab/cur-paths-100-0_30.pdf"><img src="ab/cur-paths-100-0_30.png" alt="" class="img_noborder" /></a></p>
<p><a href="./ab/res-1000.pdf"><img src="ab/all-res-1000.png" alt="" class="img_noborder" /></a></p>
<table class="cspaced_sml" cellspacing="0"><tr><th rowspan="2">Algorithm</th><th colspan="6">Coin B</th></tr><tr><th>1%</th><th>3%</th><th>9%</th><th>11%</th><th>30%</th><th>90%</th></tr><tr><th>Bayes</th><td>0.912</td><td>0.831</td><td>0.511</td><td>0.513</td><td>0.964</td><td>0.998</td></tr><tr><th>Bayes Rnd</th><td>0.916</td><td>0.838</td><td>0.516</td><td>0.515</td><td><strong>0.968</strong></td><td>0.998</td></tr><tr><th>Bayes Ext</th><td><strong>0.981</strong></td><td><strong>0.949</strong></td><td>0.575</td><td>0.572</td><td>0.966</td><td><strong>0.998</strong></td></tr><tr><th><span class="caps">AEG</span></th><td>0.886</td><td>0.872</td><td><strong>0.582</strong></td><td><strong>0.588</strong></td><td>0.903</td><td>0.911</td></tr><tr><th><span class="caps">UCB</span> 1</th><td>0.730</td><td>0.688</td><td>0.529</td><td>0.530</td><td>0.868</td><td>0.983</td></tr></table>
<p>The algorithm is still basically a Bayesian approach: like the examples above we use the Evidence ratio to choose when to switch into the exploitation phase; and to exploit we always choose the coin with the higher fraction of heads.</p>
<p>However, while exploring we pick the coin which is most likely to keep us in the exploring phase.</p>
<p>It’s time for some algebra. From above the Evidence ratio is</p>
\[
Q(n_a, n_b) \equiv \frac{\textrm{pr}(\mathscr{H}_1|h_A, n_A, h_B, n_B)}{\textrm{pr}(\mathscr{H}_2|h_A, n_A, h_B, n_B)} = \frac{n_A + n_B + 1}{(n_A + 1)(n_B + 1)}\; \binom{n_A + n_B}{h_A + h_B} \Big/ \binom{n_A}{h_A}\, \binom{n_B}{h_B}.
\]
<p>where large \(Q\) corresponds to a high probability of \(\mathscr{H}_1\).</p>
<p>When we toss a coin, we choose either A or B. Then the coin gives us either a head or a tail. So, after the next toss, we will be in one of four states. Assume, without loss of generality, that we pick coin A. Further, assume that the coin gives more tails than heads. Thus, we are likely to get a tail on the next toss, after which the most likely state is that:</p>
\[
\begin{eqnarray} n_A &\to& n_A + 1, \\\
h_A &\to& h_A, \\\
n_B &\to& n_B, \\\
h_B &\to& h_B. \end{eqnarray}
\]
<p>It is a matter of algebra to show that,</p>
\[
Q(n_a + 1, n_b) = Q(n_a, n_b) \, \frac{n_A + n_B + 2}{n_A - h_A + n_B - h_B + 1} \, \frac{n_A - h_A + 1}{n_A + 2},
\]
<p>swapping A & B tells us,</p>
\[
Q(n_a, n_b + 1) = Q(n_a, n_b) \, \frac{n_A + n_B + 2}{n_A - h_A + n_B - h_B + 1} \, \frac{n_B - h_B + 1}{n_B + 2}.
\]
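<p>Both update formulas are easy to verify numerically. A minimal sketch using exact rational arithmetic (the counts chosen here are arbitrary):</p>

```python
from fractions import Fraction
from math import comb

def q(na, ha, nb, hb):
    """Evidence ratio pr(H1|D) / pr(H2|D) for uniform priors."""
    return (Fraction(na + nb + 1, (na + 1) * (nb + 1))
            * Fraction(comb(na + nb, ha + hb), comb(na, ha) * comb(nb, hb)))

na, ha, nb, hb = 20, 7, 15, 4

# Coin A tossed and comes up tails: n_A -> n_A + 1, h_A unchanged.
assert q(na + 1, ha, nb, hb) == (q(na, ha, nb, hb)
    * Fraction(na + nb + 2, na - ha + nb - hb + 1)
    * Fraction(na - ha + 1, na + 2))

# Coin B tossed and comes up tails: n_B -> n_B + 1, h_B unchanged.
assert q(na, ha, nb + 1, hb) == (q(na, ha, nb, hb)
    * Fraction(na + nb + 2, na - ha + nb - hb + 1)
    * Fraction(nb - hb + 1, nb + 2))
```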
<p>So our best guess to maximize \(Q\) after the next toss is to pick coin A when</p>
\[
\begin{eqnarray} Q(n_a + 1, n_b) &>& Q(n_a, n_b + 1) \\\
\frac{n_A - h_A + 1}{n_A + 2} &>& \frac{n_B - h_B + 1}{n_B + 2}, \end{eqnarray}
\]
<p>which after some algebra gives,</p>
\[
\frac{h_A + 1}{n_A + 2} < \frac{h_B + 1}{n_B + 2}.
\]
<p>As can easily be seen, these ratios are the fraction of heads where we add one to the count of heads and one to the count of tails. It is the result you’d get from assuming a beta-distribution prior with one head and one tail, though it’s not clear to me why this is the case. Happily it is always well-behaved numerically, even before we have any data.</p>
<p>The analysis above assumes that we don’t get a head on the new toss: if we assume instead that we do, we get,</p>
\[
\frac{h_A + 1}{n_A + 2} > \frac{h_B + 1}{n_B + 2}.
\]
<p>One way to see this is to consider the dual problem where we consider the numbers of tails instead.</p>
<p>You can summarize both results by saying that to maximize the chance of preferring \(\mathscr{H}_1\) after the next toss, we should toss the more extreme coin.</p>
<p>Here’s a Python implementation:</p>
<pre><code>log_ev1 = (log_binomial(na, ha)
           + log_binomial(nb, hb)
           - log_binomial(na + nb, ha + hb)
           - math.log(na + nb + 1))
log_ev2 = -(math.log(1 + na) + math.log(1 + nb))
if log_ev2 < log_ev1:
    # Still exploring: the Evidence favours the one-parameter model H1.
    lhs = (ha + 1) * (nb + 2)   # (ha + 1)/(na + 2), cross-multiplied
    rhs = (hb + 1) * (na + 2)   # (hb + 1)/(nb + 2), cross-multiplied
    h = ha + hb                 # total heads seen so far
    n = na + nb                 # total tosses so far
    # Good coins (mostly heads) and bad coins (mostly tails) need the
    # modified fractions compared in opposite directions.
    if 2 * h > n:
        (u, v) = (rhs, lhs)
    elif 2 * h < n:
        (u, v) = (lhs, rhs)
    else:
        return random.randint(0, 1)
    if u > v:
        return 0
    elif u < v:
        return 1
    else:
        # Modified fractions tie: fall back to the raw fractions of heads.
        if ha * nb < hb * na:
            return 1
        elif ha * nb > hb * na:
            return 0
        return random.randint(0, 1)</code></pre>
<p>Note that the code makes a crude comparison to decide whether it is trying to minimize or maximize the modified fraction of heads. In cases where both coins are close to 50% this might well not work well, but I have not explored that case.</p>
<h3>Discussion</h3>
<p>Although the motivation for this algorithm came from looking at the likely Evidence ratio, it isn’t clear that this is the best way to look at it.</p>
<p>For one thing, it is empirically better to choose on the basis of the <em>most probable</em> future Evidence ratio rather than on the basis of its <em>expectation</em>.</p>
<p>For another, the final result is a simple ratio which could be derived in other ways: in other words there might be a different general principle which happens to give the same exploration algorithm in this case. For example, it would be interesting to see if there’s a good information theoretic justification.</p>
<p>Nevertheless, the algorithm appears to be locally good: not only is it better to work with the most-likely rather than the average prediction, but quantitatively \(a = 1\) appears to be the best choice for the fraction</p>
\[
\frac{h_i + a}{n_i + 2a}.
\]
<p>Half-hidden in the algorithm is the question of whether to explore or exploit when the predicted Evidence ratio is unity. Having tested both options, it seems not to matter much, which is at least consistent with the algorithm choosing an optimal time to switch.</p>
<p>These comments are all predicated on the particular tests shown here: before doing much more work it would be sensible to test it with different coin parameters.</p>
<p>Not only is there a question about which direction to optimize when the numbers of heads and tails are similar; there is also a qualitative difference between optimizing good coins and bad: with good coins we want to keep tossing the extreme one, with bad coins we want to swap.</p>
<h2>Conclusions</h2>
<p>Although it doesn’t seem to be part of the standard AB-testing repertoire, I think the Bayesian framework outlined above has much to recommend it. Not only does it give good results, it is also a principled approach in the sense that much of the algorithm comes from the direct application of more abstract mathematics.</p>
<p>That said, the tests here are not exhaustive and so it is premature to conclude that the results are robust with respect to the details of the problem before us.</p>
<p>For one thing, all the tests use a 10% coin in one arm of the comparison. This might be appropriate when we are trying to optimize an event which happens reasonably often, but we might draw different conclusions if we used a 90% coin, or indeed more extreme values: say 1% or 99%. However, this article is already rather too long, and so I’m happy to file these tasks under ‘Future Research’.</p>
<p>Another issue is that we have assumed the coins are stationary i.e. that the underlying probability doesn’t change. That might make sense for a coin, but it isn’t hard to imagine that the desirability of a particular advert would depend on the time of day, or the proportion of soufflés which fail to rise might depend on the temperature in the kitchen. If the underlying probabilities change, then the risk is that the algorithm won’t notice because it considers the data it has already gleaned definitive.</p>
<p>As such, this problem is similar to the situation where one coin has a run of really bad luck and is unfairly rejected, never to be sampled again. I can see a couple of obvious ways to tackle this: one could either impose a hard limit on how biased the tossing algorithm is, or one could expire old data.</p>
<p>For example, it would be easy enough to make sure that in every hundred tosses, we always toss both coins at least once. Such an approach is likely to make the average performance worse but make the worst performance better. Of course there’s nothing to say that 100 is the right number to use: it will depend on the properties of the two coins and whether we subjectively care more about the extreme or expected result.</p>
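<p>One way to sketch the hard-limit idea: wrap whatever chooser we are using so that any coin which hasn’t appeared in the current block of tosses gets forced on the block’s last toss. The <code>choose</code> callable and the block length here are illustrative assumptions, not part of the algorithms above:</p>

```python
def with_forced_exploration(choose, block=100):
    """Wrap a two-coin chooser so both coins appear in every block of tosses."""
    history = []                      # coins chosen so far
    def wrapped(*args):
        pos = len(history) % block    # position within the current block
        if pos == block - 1:
            # Last toss of the block: force any coin not yet seen in it.
            seen = set(history[-pos:]) if pos else set()
            missing = {0, 1} - seen
            coin = missing.pop() if missing else choose(*args)
        else:
            coin = choose(*args)
        history.append(coin)
        return coin
    return wrapped

# A degenerate chooser that always exploits coin 0 ...
always_0 = with_forced_exploration(lambda: 0)
tosses = [always_0() for _ in range(200)]
# ... is still forced to toss coin 1 once per block of 100.
print(tosses.count(1))   # 2
```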
<p>Expiring old data seems a more direct approach to worrying about the coins’ properties changing over time. However, if the variation has some underlying structure it might be better to extend our model to capture it. Explicitly if we think the temperature matters, we might only use past data with an appropriate temperature, or replace the fixed value \(p\) in our analysis with a function of the temperature. We would need to extend our analysis, but that is conceptually straightforward.</p>
<p>Again it would be nice to test all this on real data in a real setting. In particular, it would be helpful to know whether the superior performance shown by the simulations here is also found in production. </p>A8222004-20B6-11E8-88EC-C41E7139F8472018-03-05T20:48:57:57Z2018-03-07T19:33:19:19ZSimple Dipole Interactions IIMartin Oldfield<p>A better model for magnets. </p><h2>Introduction</h2>
<p>Recently I wrote about arranging magnets to hold geocaches onto magnetic objects. In particular, given a couple of magnets should their dipoles be aligned so as to be parallel, or anti-parallel ?</p>
<p>The simple dipole model I used there suggested that the anti-parallel arrangement would be a bit better, but not significantly so. However, given that this arrangement is much easier to make, there wasn’t a conflict and so the conclusion was clear.</p>
<h2>Experiments</h2>
<p>Of course, the physicist in me wanted to know whether reality matched the model, and that means an experiment. So, I made some little acrylic holders then measured how much weight they’d support when attracted to a 40mm square mild-steel beam.</p>
<p>The table below shows the results: the maximum mass the magnets could support in different orientations. Two different pairs of magnets were used.</p>
<table class="spaced" cellspacing="0"><tr><th rowspan="2">Pair</th><th colspan="2">Orientation</th><th rowspan="2">Difference</th></tr><tr><th>Parallel</th><th>Anti-parallel</th></tr><tr><th>A</th><td align="center">1.6kg</td><td align="center">2.0kg</td><td align="center">22%</td></tr><tr><th>B</th><td align="center">1.45kg</td><td align="center">1.7kg</td><td align="center">16%</td></tr></table>
<p><span class="caps">N.B.</span> Difference = 2 (max - min) / (max + min)</p>
<p>The experiment was crude, and it was hard to get consistent readings, but the conclusion is still clear: the anti-parallel arrangement is indeed better, and by a significant margin!</p>
<p>To quantify the imprecision I expect the measurements have an uncertainty of about 100g.</p>
<p>There also seemed to be some preferred spots on the bar which makes me wonder if it was being magnetized to some degree.</p>
<p>Nevertheless, despite having so few data and a rather crude experiment, it seems clear that the simple model does a poor job of capturing reality.</p>
<h2>The old model</h2>
<p>Formally, it can be shown (or looked up in <a href="https://en.wikipedia.org/wiki/Magnetic_dipole&ndash;dipole_interaction">Wikipedia</a>) that the force between two dipoles \(\textbf{m}_1\) and \(\textbf{m}_2\) a distance \(\textbf{r}\) apart is given by</p>
\[
\textbf{F} = \frac{3 \mu_0}{4 \pi r^4} \Bigl( (\hat{\textbf{r}} \times \textbf{m}_1 ) \times \textbf{m}_2 + (\hat{\textbf{r}} \times \textbf{m}_2 ) \times \textbf{m}_1 - 2 \, \hat{\textbf{r}} \, (\textbf{m}_1 . \textbf{m}_2) + 5 \, \hat{\textbf{r}} \, \bigl((\hat{\textbf{r}} \times \textbf{m}_1) . (\hat{\textbf{r}} \times \textbf{m}_2)\bigr) \Bigr).
\]
<p>In our case the dipoles are both in the \(z\) direction, and we are interested in the \(z\)-component of the force. Writing</p>
\[
\begin{align} \textbf{r} &= (x,y,z), \\\
\textbf{m}_1 &= (0,0,m), \\\
\textbf{m}_2 &= (0,0,m), \end{align}
\]
<p>it is a matter of algebra to show that,</p>
\[
F_z = \frac{3 \mu m^2}{4 \pi} \, \frac{z(3x^2 + 3y^2 - 2z^2)}{(x^2 + y^2 + z^2)^{7/2}}.
\]
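<p>Dropping the prefactor \(3 \mu m^2 / 4 \pi\), this expression is easy to code up, which will be handy below. A small Python sketch with a sanity check on the \(z\)-axis:</p>

```python
def fz(x, y, z):
    """z-component of the force between two z-aligned dipoles,
    with the prefactor 3*mu*m^2/(4*pi) dropped."""
    r2 = x * x + y * y + z * z
    return z * (3 * x * x + 3 * y * y - 2 * z * z) / r2 ** 3.5

# On the z-axis (x = y = 0, z = 2a) the reduced force collapses to
# 2a * (-8a^2) / (2a)^7 = -1/(8 a^4).
a = 2.5
print(fz(0, 0, 2 * a))   # ≈ -0.0032 = -1/(8 * 2.5**4)
```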
<p>Now, recall this diagram from the previous article and calculate the force on the dipole at A due to the dipoles at A', and B'.</p>
<p><img src="md2-fig1.png" alt="" class="img_border" /></p>
\[
\begin{align} F_z(A') &= - \frac{3 \mu m^2}{32 \pi}\,\frac{1}{a^4}, \\\
F_z(B') &= \frac{3 \mu m^2}{64 \pi}\,\frac{a(3b^2 - 2a^2)}{(a^2 + b^2)^{7/2}}. \end{align}
\]
<p>Now, the magnetic adhesion force is given by</p>
\[
F_{tot} = 2 \bigl( F_z(A') + \theta F_z(B') \bigr),
\]
<p>where \(\theta\) encodes the relative orientation of the dipoles, and the prefactor of two comes from including the force on B too.</p>
<p>Finally, to calculate the difference between the parallel and anti-parallel alignments we need</p>
\[
\begin{align} \chi &= 2 \frac{F_{tot}(\theta = -1) - F_{tot}(\theta = +1)}{F_{tot}(\theta = -1) + F_{tot}(\theta = +1)},\\\
&= -2 \frac{F_z(B')}{F_z(A')},\\\
&= \frac{a^5(3 b^2 - 2 a^2)}{(a^2 + b^2)^{7/2}}. \end{align}
\]
<p>Substituting \(a = 1.5\textrm{mm}\), \(b = 5\textrm{mm}\) into this gives</p>
\[
\chi \approx 0.005.
\]
<p>Happily, this is in agreement with the previous result (the factor of two comes from a different measure of difference).</p>
<p>In the actual experiment there’s a 1mm thick sheet of acrylic between the magnet and the bar, so it’s more appropriate to use \(a = 2.5\textrm{mm}, b = 5\textrm{mm}\), which gives</p>
\[
\chi \approx 0.036.
\]
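<p>Both numbers can be reproduced in a couple of lines of Python from the closed form for \(\chi\) above:</p>

```python
def chi(a, b):
    """Anti-parallel advantage chi = a^5 (3 b^2 - 2 a^2) / (a^2 + b^2)^(7/2)."""
    return a**5 * (3 * b**2 - 2 * a**2) / (a**2 + b**2) ** 3.5

print(chi(1.5, 5.0))   # ≈ 0.005, magnet flush against the bar
print(chi(2.5, 5.0))   # ≈ 0.036, with the 1mm acrylic sheet in place
```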
<h2>A better model</h2>
<p>The original model represented each magnet by a dipole at its centre. So, an obvious improvement would be to assume that the magnet is made up of a continuum of dipoles arranged uniformly.</p>
<p>Mathematically we need to replace \(F_z\) with integrals over the volume of the magnets:</p>
\[
\begin{align} G_z &= \int_{V1} \textrm{d}V_1 \int_{V2} \textrm{d}V_2 F_z \\\
&= \frac{3 \mu \sigma^2}{4 \pi} \int_{V1} \textrm{d}V_1 \int_{V2} \textrm{d}V_2 \frac{z(3x^2 + 3y^2 - 2z^2)}{(x^2 + y^2 + z^2)^{7/2}}. \end{align}
\]
<p>where we’ve replaced the dipole moment \(m\) with a density \(\sigma\).</p>
<p>Introduce cylindrical polars to integrate over the magnets and assume the magnet centres are offset by \((u,v,w)\) to give:</p>
\[
\begin{align} x &= u + \rho_2 \cos\theta_2 - \rho_1 \cos\theta_1, \\\
y &= v + \rho_2 \sin\theta_2 - \rho_1 \sin\theta_1, \\\
z &= w + \tau_2 - \tau_1. \end{align}
\]
<p>and thus, if the magnets have thickness \(t\) and radius \(s\):</p>
\[
\int_{V} \textrm{d}V \equiv \int_0^s \int_0^{2\pi} \int_{-t/2}^{t/2} \rho \, \textrm{d}\tau \, \textrm{d}\theta \, \textrm{d}\rho,
\]
<p>An analytic result seems optimistic, so let’s ask Mathematica to do the calculation numerically. We are only interested in the ratio of forces, so I’ve dropped the prefactor.</p>
<pre><code>Fz[{x_, y_, z_}] :=
z * (3 x^2 + 3 y^2 - 2 z^2) / (x^2 + y^2 + z^2)^(7/2)</code></pre>
<p>Checking with the result above for \(a = 2.5\textrm{mm}, b = 5\textrm{mm}\) and recalling that e.g. \(w = 2a\),</p>
<pre><code>In[]:= -2 * N[Fz[{10, 0, 5}] / Fz[{0, 0, 5}]]
Out[]= 0.0357771</code></pre>
<p>Now define the integral:</p>
<pre><code>Gz[s_, t_, {u_, v_, w_}] :=
NIntegrate[
Fz[{u + \[Rho]2 * Cos[\[Theta]2] - \[Rho]1 * Cos[\[Theta]1],
v + \[Rho]2 * Sin[\[Theta]2] - \[Rho]1 * Sin[\[Theta]1],
w + \[Tau]2 - \[Tau]1}],
{\[Tau]1, -t/2, t/2},
{\[Tau]2, -t/2, t/2},
{\[Theta]1, 0, 2 * Pi},
{\[Theta]2, 0, 2 * Pi},
{\[Rho]1, 0, s},
{\[Rho]2, 0, s},
Method -> "AdaptiveMonteCarlo"]</code></pre>
<p>If we use this to model the simple dipole case by setting \(s = 0.1\) and \(t = 0.1\), we get \(\chi \approx 0.03586\), which agrees well with the true simple dipole result of \(\chi \approx 0.03578\):</p>
<pre><code>In[]:= -2 * Gz[0.1, 0.1, {10, 0, 5}] / Gz[0.1, 0.1, {0, 0, 5}]
Out[]= 0.0358565</code></pre>
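<p>For those without Mathematica, the same integral can be estimated with a crude Monte Carlo in Python. This is only a sketch — <code>NIntegrate</code>’s adaptive scheme is far more efficient — but the constant volume factors cancel in the ratio, so uniform sampling over the two cylinders suffices:</p>

```python
import math
import random

def fz(x, y, z):
    """Reduced dipole-dipole force, prefactor dropped."""
    return z * (3 * x * x + 3 * y * y - 2 * z * z) / (x * x + y * y + z * z) ** 3.5

def gz(s, t, u, v, w, n=50_000, rng=random.Random(42)):
    """Monte Carlo estimate (up to a constant factor) of G_z for two
    cylindrical magnets of radius s, thickness t, centres offset (u, v, w).
    Radii are drawn as s*sqrt(U) so points are uniform over each disc."""
    total = 0.0
    for _ in range(n):
        r1 = s * math.sqrt(rng.random()); th1 = 2 * math.pi * rng.random()
        r2 = s * math.sqrt(rng.random()); th2 = 2 * math.pi * rng.random()
        t1 = t * (rng.random() - 0.5);    t2 = t * (rng.random() - 0.5)
        total += fz(u + r2 * math.cos(th2) - r1 * math.cos(th1),
                    v + r2 * math.sin(th2) - r1 * math.sin(th1),
                    w + t2 - t1)
    return total / n

# Point-dipole limit: tiny magnets should reproduce chi ~ 0.0358.
print(-2 * gz(0.01, 0.01, 10, 0, 5) / gz(0.01, 0.01, 0, 0, 5))   # ≈ 0.0358
```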
<p> Substituting the true magnet size \(s = 5\textrm{mm}, t = 3\textrm{mm}\) and positions \(a = 2.5\textrm{mm}, b = 5\textrm{mm}\), gives</p>
<pre><code>In[]:= -2 * Gz[5, 3, {10, 0, 5}] / Gz[5, 3, {0, 0, 5}]
Out[]= 0.152685</code></pre>
<p>which agrees pretty well with the experiment: 15% for theory vs 16-22% from experiment.</p>
<h2>Discussion</h2>
<p>As noted above it’s hard to measure the maximum weight the magnets will support, so I’d be wary of saying anything stronger than that the difference is between 14% and 25%.</p>
<p>Our theoretical result is similar but probably a little on the small side. We’ve also assumed that the magnets are equally strong, which seems a bit optimistic. Any imbalance will reduce the difference. Thus, the experimental value is probably still a bit larger than this model predicts.</p>
<p>One potential contribution is that when the dipoles are aligned in parallel, they really want to stagger themselves. The diagram below sketches equilibrium arrangements for magnets lying on a non-magnetic surface, where the magnets are constrained to lie in roughly the same plane.</p>
<p><img src="md2-fig2.png" alt="" class="img_border" /></p>
<p>If this happens, then in the parallel case the magnets will be a bit further from the bar and thus be held less strongly.</p>
<p>Quantifying this is hard: you’d need to model the way the thin sheet of perspex flexes, or find a way to hold the magnets to prevent the movement: probably glue!</p>
<p>I also have some concerns that the bar becomes magnetized, which is obviously outside the scope of the model.</p>
<h2>Conclusions</h2>
<p>This whole project started because I was curious about how best to arrange a pair of magnets on a geocache, and that’s now answered to my satisfaction. Just do the easy thing and align their dipoles to be anti-parallel.</p>
<p>Another lesson is that it’s fairly easy to tackle numerical solutions to this sort of thing in Mathematica.</p>
<p>Finally, and perhaps most germane to the geocache question, it is nice to know that a couple of small magnets can hold a bison tube firmly enough to a steel bar that you can hang well over a kilogramme from it! </p>832684A8-1368-11E8-8584-F25B551A57932018-02-16T22:26:21:21Z2018-03-07T16:56:07:07ZSimple Dipole InteractionsMartin Oldfield<p>A simple model to explore the best arrangement for magnets near metal. </p><p>Recently I wrote about making <a href="./magnetic-bisons.html">magnetic bison tubes</a>: little aluminum geocaches which stick to iron and steel by virtue of having two disc magnets attached to them.</p>
<p>When I made these, I arranged the magnets in opposite senses so that they stick together. This makes it easier to assemble the geocaches, but I wondered if such expediency led to inferior sticking force.</p>
<p>In other words, in the sketch below, which of the green geocaches is held more strongly to the metal?</p>
<p><img src="md-fig1.png" alt="" class="img_border" /></p>
<p>Is it the geocache on the left where the magnets have the same polarity, or the one on the right where they point in opposite directions?</p>
<p>To answer this, let’s build a simple model.</p>
<h2>One magnet</h2>
<p>Perhaps the simplest possible model for a magnet sticking to a flat metal surface is to assume that:</p>
<ul>
<li>The magnet is well represented by a simple dipole, perpendicular to the metal surface.</li>
<li>The metal is sufficiently thick to be well represented by a half-space e.g. \(y < 0\).</li>
<li>Magnetically the metal has a high relative-permeability \(\mu \gg 1\), but we can neglect any intrinsic magnetization.</li>
</ul>
<p>This last point is perhaps dodgy: we are effectively saying that the ‘metal’ is paramagnetic but has a permeability typical of a ferromagnet.</p>
<p>Such problems, particularly in electrostatics, are conveniently solved by <a href="https://en.wikipedia.org/wiki/Method_of_image_charges">the method of image-charges</a>. The image-charge trick is to combine solutions which don’t satisfy the boundary conditions into one which does. We can do this because Maxwell’s equations are all linear in the fields, so any combination of solutions is also a solution.</p>
<p>Now, although magnetic monopoles do not exist, they are useful for calculations. For example, we can represent a dipole as two monopoles with opposite charges slightly displaced from each other. To find the image of the dipole, we need only superimpose the images of the two monopoles.</p>
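<p>As a quick numerical illustration of that construction (my own sketch, not from the original article): two opposite monopoles whose separation \(d\) shrinks while \(q d = m\) stays fixed reproduce the dipole potential \(m y / 4 \pi r^3\).</p>

```python
# Two opposite magnetic 'charges' +q and -q, separated by d along y,
# with q * d = m: as d -> 0 this tends to a point dipole of moment m.
import math

def monopole_pair_potential(x, y, z, m, d):
    q = m / d
    def mono(charge, y0):
        r = math.sqrt(x**2 + (y - y0)**2 + z**2)
        return charge / (4 * math.pi * r)
    return mono(q, d / 2) + mono(-q, -d / 2)

def dipole_potential(x, y, z, m):
    r = math.sqrt(x**2 + y**2 + z**2)
    return m * y / (4 * math.pi * r**3)

# With d much smaller than r, the two agree closely
print(monopole_pair_potential(1.0, 2.0, 0.5, 1.0, 1e-4))
print(dipole_potential(1.0, 2.0, 0.5, 1.0))
```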
<p>That just leaves the problem of finding the image of a magnetic monopole above a metal.</p>
<h3>The image of a monopole</h3>
<p>Maxwell’s equations tell us that the <a href="https://en.wikipedia.org/wiki/Interface_conditions_for_electromagnetic_fields">interface conditions</a> are:</p>
<ul>
<li>The normal component of \(\textbf{B}\) is continuous across the interface;</li>
<li>the parallel component of \(\textbf{H}\) is continuous across the interface.</li>
</ul>
<p>Inside the metal \(\textbf{H} = \textbf{B} / \mu\), \(\textbf{B}\) is finite, and so in the limit \(\mu \to \infty\), \(\textbf{H}\) vanishes. Thus, from the interface conditions, outside the metal the parallel component of \(\textbf{H}\) vanishes too. Since outside the metal \(\textbf{B} = \mu_0 \textbf{H}\), the parallel component of \(\textbf{B}\) also vanishes there, and \(\textbf{B}\) is normal to the metal surface.</p>
<p><img src="md-fig2.png" alt="" class="img_border" /></p>
<p>Note, that we are only interested in the field above the metal surface. It is reasonably easy to see that this field is the same as we would get were we to remove the metal and replace it with a negative image charge below the metal surface.</p>
<p><img src="md-fig3.png" alt="" class="img_border" /></p>
<p>In the more general case where the field doesn’t completely vanish inside the metal, the same basic principle applies, but the algebra is more complicated.</p>
<p>Having a negative image charge is just like the case of an electrically-charged particle above a perfectly conducting sheet in electrostatics. Conversely, a magnetic monopole above a superconductor has an image with the same sign.</p>
<h3>The image of a dipole</h3>
<p>Recall that we studied the monopole because we can make a dipole by combining two monopoles. So, to work out the image of a dipole, we:</p>
<ul>
<li>decompose the dipole into positive and negative monopoles;</li>
<li>work out the images of those monopoles;</li>
<li>combine the images into an image dipole.</li>
</ul>
<p>This is done graphically below:</p>
<p><img src="md-fig4.png" alt="" class="img_border" /></p>
<p>By drawing coloured dots, it is easy to see that the image of a magnetic dipole perpendicular to the surface has the same sign as the original.</p>
<p>This is good news, because we expect the magnet and metal to attract each other!</p>
<p>Incidentally, if the dipole were aligned parallel to the surface then its image would have the opposite sign. Happily they’d still attract though!</p>
<h3>Getting Quantitative</h3>
<p>Having shown how to solve the problem we now need to actually do the calculation.</p>
<p><img src="md-fig5.png" alt="" class="img_border" /></p>
<p>The sketch above shows the problem we’re trying to solve: what’s the force on the dipole at A due to the dipole at A'?</p>
<p>The absence of currents means that we can solve the problem with a scalar potential \(\psi(\textbf{x})\). The potential seen by the dipole at A comes from the image dipole at A':</p>
\[
\begin{align} \psi(\textbf{r}) &= \frac{\textbf{m} . \textbf{r}}{4 \pi r^3},\\\
&= \frac{m(y + a)}{4 \pi} (x^2 + (y + a)^2 + z^2)^{-3/2}. \end{align}
\]
<p>From here, we can calculate the field:</p>
\[
\textbf{H}(\textbf{r}) = - \nabla \psi.
\]
<p>We only need the \(y\)-component of \(\textbf{B}\),</p>
\[
B_y(\textbf{r}) = \frac{m \mu_0}{4 \pi} \Bigl( 3 \frac{(y + a)^2}{(x^2 + (y + a)^2 + z^2)^{5/2}} - \frac{1}{(x^2 + (y + a)^2 + z^2)^{3/2}} \Bigr),
\]
<p>to calculate the force,</p>
\[
\begin{align} \textbf{F} &= \nabla \bigl(\textbf{m} . \textbf{B} \bigr), \\\
&= m \nabla B_y, \\\
&= \frac{3 m^2 \mu_0}{4 \pi r^7} \Bigl(x \, (r^2 - 5 y'^2),\; y' \, (3 r^2 - 5 y'^2),\; z \, (r^2 - 5 y'^2)\Bigr), \end{align}
\]
<p>where</p>
\[
\begin{align} y' &= y + a, \\\
r^2 &= x^2 + y'^2 + z^2. \end{align}
\]
<p>Substituting the position of the magnet, \((x,y,z) = (0,a,0)\):</p>
\[
\textbf{F} = \frac{3 m^2 \mu_0}{32 \pi a^4} \Bigl( 0, -1, 0 \Bigr).
\]
<p>As we’d expect, the magnet feels a force directly towards the metal.</p>
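<p>If you’d rather not grind through the algebra, the result is easy to check symbolically. Here’s a sketch in sympy (my check; the original numerical work used Mathematica), with the same coordinates as above:</p>

```python
# Check the perpendicular-dipole force: real dipole m y-hat at (0, a, 0),
# image dipole m y-hat at (0, -a, 0), as in the derivation above.
import sympy as sp

x, y, z, a, m, mu0 = sp.symbols('x y z a m mu_0', positive=True)

# Scalar potential of the image dipole
r2 = x**2 + (y + a)**2 + z**2
psi = m * (y + a) / (4 * sp.pi * r2**sp.Rational(3, 2))

# H = -grad psi; outside the metal B = mu_0 H
By = -mu0 * sp.diff(psi, y)

# F = grad(m . B), so F_y = m dB_y/dy evaluated at the magnet
Fy = sp.simplify(m * sp.diff(By, y).subs({x: 0, y: a, z: 0}))
print(Fy)
```

sympy confirms \(F_y = -3 m^2 \mu_0 / 32 \pi a^4\): the magnet is pulled straight towards the metal.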
<h3>Parallel dipoles</h3>
<p>We could also calculate the force expected from a dipole <em>parallel</em> to the surface, remembering that in this case the image dipole points in the opposite direction.</p>
<p>The algebra is similar, so here’s the answer:</p>
\[
\textbf{F} = \frac{3 m^2 \mu_0}{64 \pi a^4} \Bigl( 0, -1, 0 \Bigr).
\]
<p>In other words, half the force we found above.</p>
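<p>The parallel case succumbs to the same symbolic treatment (again a sympy sketch of my own), remembering that the image dipole now has the opposite sign:</p>

```python
# Parallel case: real dipole m x-hat at (0, a, 0), image dipole -m x-hat
# at (0, -a, 0); now F_y = m dB_x/dy at the magnet.
import sympy as sp

x, y, z, a, m, mu0 = sp.symbols('x y z a m mu_0', positive=True)

r2 = x**2 + (y + a)**2 + z**2
psi = -m * x / (4 * sp.pi * r2**sp.Rational(3, 2))   # image points along -x

Bx = -mu0 * sp.diff(psi, x)
Fy = sp.simplify(m * sp.diff(Bx, y).subs({x: 0, y: a, z: 0}))
print(Fy)
```

This reproduces \(-3 m^2 \mu_0 / 64 \pi a^4\): exactly half the perpendicular result.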
<h2>Two dipoles</h2>
<p>Having solved the simplified problem for one dipole, we can return to the real problem: two dipoles.</p>
<p><img src="md-fig6.png" alt="" class="img_border" /></p>
<p>Recall the central question: how does the holding force depend on the orientation of the second dipole? Suppose that this has a dipole moment</p>
\[
\textbf{m'} = \bigl(0, \theta m, 0 \bigr).
\]
<p>The calculation proceeds as above. Introduce the short-hand for the scalar-potential from a dipole in the \(y\)-direction:</p>
\[
\psi(x,y,z,\theta) = \frac{\theta m y}{4 \pi} \Bigl( x^2 + y^2 + z^2 \Bigr)^{-\frac{3}{2}}.
\]
<p>so that we can write the total potential due to the dipoles at A' and B' as:</p>
\[
\Psi(x,y,z) = \psi(x - b, y + a, z, 1) + \psi(x + b, y + a, z, \theta).
\]
<p><em>Note, previously I included the potential from B here too: that’s a mistake because its position is fixed relative to A. Happily though the force turns out to be zero: it doesn’t change the calculation below.</em></p>
<p>At this point the algebra gets awfully messy, so I won’t reproduce it here. Close to the surface, we expect \(b \gg a\). So, introduce</p>
\[
\beta = \frac{a}{b},
\]
<p>noting that small \(\beta\) corresponds to widely spaced magnets.</p>
<p>Then we can expand in terms of \(\beta\), keeping only the first \(\beta\)-term for each component:</p>
\[
\textbf{F} \approx \frac{3 m^2 \mu_0}{32 \pi a^4} \Bigl(\theta \beta^4, -(1 - \frac{3}{2} \theta \beta^5), 0 \Bigr).
\]
<p>Three observations:</p>
<ul>
<li>If the magnets are aligned in the same direction they will indeed repel each other.</li>
<li>The magnets will actually attract the metal more strongly if they are aligned in opposite directions.</li>
<li>The effect of the interactions is pretty weak.</li>
</ul>
<p>To quantify the last point, note that a typical magnet might have a diameter of 10mm and a thickness of 3mm, so \(\beta = 0.3\). The truncated series approximation gives:</p>
\[
\textbf{F} \approx \frac{3 m^2 \mu_0}{32 \pi a^4} \Bigl(0.0081 \, \theta,\; -(1 - 0.0036 \, \theta),\; 0 \Bigr),
\]
<p>against the exact result,</p>
\[
\textbf{F} \approx \frac{3 m^2 \mu_0}{32 \pi a^4} \Bigl(0.0060 \, \theta,\; -(1 - 0.0025 \, \theta),\; 0 \Bigr).
\]
<p>Exact here just means avoiding the error introduced by truncating the series. It doesn’t mean that the model is faithful to reality.</p>
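<p>The messy two-dipole algebra is also easy to delegate to a computer algebra system. The sympy sketch below (my check; the original work used Mathematica) rebuilds the potential \(\Psi\) above, computes the force on A, and recovers both the truncated series and the exact coefficient at \(\beta = 0.3\):</p>

```python
# Dipole A at (b, a, 0) with moment m y-hat; image dipoles at (b, -a, 0)
# and (-b, -a, 0), the second scaled by theta, as in Psi above.
import sympy as sp

x, y, z, a, b, m, mu0, theta, beta = sp.symbols(
    'x y z a b m mu_0 theta beta', positive=True)

def psi(X, Y, Z, t):
    return t * m * Y / (4 * sp.pi * (X**2 + Y**2 + Z**2)**sp.Rational(3, 2))

Psi = psi(x - b, y + a, z, 1) + psi(x + b, y + a, z, theta)
By = -mu0 * sp.diff(Psi, y)

# y-component of the force on A, then write b = a / beta
Fy = (m * sp.diff(By, y)).subs({x: b, y: a, z: 0}).subs(b, a / beta)

# Series in beta: expect -(3 m^2 mu_0 / 32 pi a^4)(1 - 3/2 theta beta^5)
series = sp.series(Fy, beta, 0, 6).removeO()
print(sp.simplify(series))

# Exact theta-coefficient at beta = 0.3, cf. the 0.0025 quoted above
c = 1 + Fy.subs({theta: 1, beta: sp.Rational(3, 10),
                 a: 1, m: 1, mu0: 1}) * 32 * sp.pi / 3
print(float(c))
```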
<h3>Summing up</h3>
<p>The calculation above is for one magnet, but there are of course two. Summing up the total forces:</p>
\[
\textbf{F}_{tot} \approx - \frac{3 m^2 \mu_0}{16 \pi a^4} \Bigl(0, 1 - \frac{3}{2} \theta \beta^5, 0 \Bigr).
\]
<p>To a surprisingly good approximation, two magnets hold about twice as well as one!</p>
<h2>Conclusions</h2>
<p>To the extent that this is a good model of reality, the pleasing conclusion is that if you’re using a couple of magnets to hold geocaches in place then putting them in opposite orientations is both easier to assemble and sticks slightly more strongly.</p>
<p>The more general conclusion is that when you’re using dipoles the fields and hence forces decay rapidly with distance. So, unless the magnets are really close you don’t go far wrong by ignoring this interaction.</p>
<p>It would be nice to verify these claims experimentally.</p>
<h1>iCE40 Blinky</h1>
<p><em>Martin Oldfield, 2018-02-26. Brief notes on getting to Blinky on an iCE40 <span class="caps">FPGA </span>demo board.</em></p>
<h2>Introduction</h2>
<p>This is a top-level index page, covering my first foray into <span class="caps">FPGA </span>programming. All of the experiments use the IceStorm toolchain running on a Mac to program iCE40 <span class="caps">FPGA</span>s from Lattice.</p>
<p>The articles exist both to clarify things in my own mind now, and because I think my future self might find explicit instructions for making an <span class="caps">LED </span>blinky useful. Who knows, maybe other people will too!</p>
<p>I should say that although the verilog presented here appears to work, these are my first verilog programs, and so are doubtless far from model code.</p>
<p>One article covers <a href="./ice40-toolchain.html">the toolchain</a>: both installing and invoking the tools.</p>
<p>Others cover different development boards: the <a href="./ice40-blinky-icestick.html">IceStick</a> and <a href="./ice40-blinky-hx8k-breakout.html"><span class="caps">HX8K</span> Breakout</a> boards from Lattice; a <a href="./ice40-blinky-olimex-hx1k.html"><span class="caps">HX1K</span></a> board from Olimex.</p>
<p>Were I starting afresh now, I’d stick to the <span class="caps">HX8K </span>board from Lattice. It boasts a significantly more capable <span class="caps">FPGA, </span>can be programmed more quickly, and sits nicely on the desk.</p>
<h2>Verilog Resources</h2>
<p>This is just a random sample of interesting articles which seemed good to me, but I put large error-bars on that judgement.</p>
<h3>Introductory articles</h3>
<ul>
<li>A <a href="http://www.asic-world.com/verilog/veritut.html">verilog tutorial</a>, though a lot of the early stuff isn’t relevant for <a href="http://www.asic-world.com/verilog/synthesis.html">synthesis</a>.</li>
<li><a href="http://zipcpu.com">Gisselquist Technology</a> has a list of <a href="http://zipcpu.com/blog/2017/08/21/rules-for-newbies.html">Rules for new <span class="caps">FPGA </span>designers</a>.</li>
<li><a href="https://embeddedmicro.com/">Embedded Micro’s</a> page of <a href="https://embeddedmicro.com/pages/verilog">tutorials</a>.</li>
</ul>
<h3>Specific designs</h3>
<ul>
<li><a href="https://opencores.org">OpenCores</a> a smörgåsbord of open designs.</li>
<li><a href="http://zipcpu.com/dsp/2017/10/27/lfsr.html">Pseudorandom numbers</a> at Gisselquist.</li>
<li><a href="http://zipcpu.com/blog/2017/07/29/fifo.html"><span class="caps">FIFO</span>s</a> at Gisselquist.</li>
<li><a href="http://zipcpu.com/blog/2017/08/04/debouncing.html">Buttons</a> at Gisselquist.</li>
<li><a href="http://zipcpu.com/dsp/2017/09/04/pwm-reinvention.html"><span class="caps">PWM</span></a> at Gisselquist.</li>
<li>Pointers to <a href="https://github.com/cliffordwolf/icestorm/issues/51"><span class="caps">UART</span>s</a>.</li>
</ul>
<h3>Philosophical articles</h3>
<ul>
<li>At <a href="http://zipcpu.com/blog/2017/06/23/my-dbg-philosophy.html">Gisselquist</a>, and a newer post about <a href="http://zipcpu.com/blog/2017/10/19/formal-intro.html">formal methods</a>.</li>
</ul>
<h3>Wishbone Bus</h3>
<p>The <a href="https://en.wikipedia.org/wiki/Wishbone_(computer_bus)">Wishbone Bus</a> is a standard for connecting different parts of design to each other.</p>
<p>It seems common at OpenCores.</p>
<p>Gisselquist have loads of good articles about it, including:</p>
<ul>
<li><a href="http://zipcpu.com/blog/2017/06/05/wb-bridge-overview.html">A Wishbone-UART bridge</a> to connect the host computer to the bus.</li>
<li><a href="https://github.com/ZipCPU/wbscope">A Wishbone scope</a> which lets you watch the design’s behaviour from within (<a href="http://zipcpu.com/blog/2017/07/08/getting-started-with-wbscope.html">tutorial</a>).</li>
<li><a href="http://zipcpu.com/zipcpu/2017/05/29/simple-wishbone.html">A Wishbone slave</a>.</li>
</ul>
<p>Clifford Wolf’s <a href="https://github.com/cliffordwolf/picorv32">PicoRV <span class="caps">RISC</span>-V <span class="caps">CPU</span></a> has a Wishbone version.</p>
<h1>iCE40 tools</h1>
<p><em>Martin Oldfield, 2018-02-27. Brief notes on open-source tools for programming iCE40 <span class="caps">FPGA</span>s.</em></p>
<p><em>This article is part of a series documenting my first foray into <span class="caps">FPGA </span>programming. You might find it helpful to read the <a href="./ice40-blinky.html">summary article</a> first.</em></p>
<h2>Project IceStorm</h2>
<p>Whilst it is true that Lattice provide their <a href="http://www.latticesemi.com/Products/DesignSoftwareAndIP/">own tools</a> for programming their <span class="caps">FPGA</span>s, they don’t run natively on the Mac. I suspect they’re also rather large, complicated, <span class="caps">GUI </span>beasts. Instead, I’ve been using <a href="http://www.clifford.at/icestorm/">Project IceStorm</a>, a third-party, open-source toolchain.</p>
<h2>The Toolchain</h2>
<p>The core toolchain comes in three parts:</p>
<ul>
<li>The <a href="https://github.com/cliffordwolf/icestorm">IceStorm</a> tools which understand the low-level details of the iCE40 binary bitstream.</li>
<li>The <a href="https://github.com/cseed/arachne-pnr">Arachne-pnr</a> place-and-route tool. This takes a netlist describing the circuit and converts it into a textual bitstream.</li>
<li>The <a href="http://www.clifford.at/yosys/">Yosys</a> Open Synthesis Suite which compiles verilog into a netlist.</li>
</ul>
<p>Whilst IceStorm and Arachne-pnr seem to be rather focussed tools, Yosys is called a suite for good reason. I think you could use it for many other tasks where logic needs to be manipulated.</p>
<h2>Installation walkthrough</h2>
<p>Note: These relate to installing the tools on MacOS in early 2018, using <a href="http://brew.sh">homebrew</a>.</p>
<p>All of these are quite easily installed from GitHub, though first we must install some build tools. Essentially we just follow the instructions on the <a href="http://www.clifford.at/icestorm/">IceStorm website</a>.</p>
<p>The shell snippets below all install under <code>$ROOT_DIR</code>, so you should either define that or edit the scripts appropriately.</p>
<h3>Dependencies</h3>
<p>The set below could easily be incomplete. Please let me know if you find any omissions.</p>
<pre><code>$ brew install python3 libffi libftdi0 readline \
    pkg-config bison flex git mercurial</code></pre>
<h3>IceStorm</h3>
<pre><code>$ cd $ROOT_DIR
$ git clone https://github.com/cliffordwolf/icestorm.git icestorm
$ cd icestorm
$ make
$ make install</code></pre>
<h3>Arachne-PNR</h3>
<pre><code>$ cd $ROOT_DIR
$ git clone https://github.com/cseed/arachne-pnr.git arachne-pnr
$ cd arachne-pnr
$ make
$ make install</code></pre>
<p>Arachne-PNR has an implicit dependency on chip databases included with IceStorm, so if you modify those you should reinstall Arachne-PNR.</p>
<h3>Yosys</h3>
<pre><code>$ cd $ROOT_DIR
$ git clone https://github.com/cliffordwolf/yosys.git yosys
$ cd yosys
$ make
$ make install</code></pre>
<p>Homebrew actually includes a <a href="http://formulae.brew.sh/formula/yosys">formula for yosys</a> which can be installed directly:</p>
<pre><code>$ brew install yosys</code></pre>
<p>I built the stock version from GitHub for consistency with the rest of the toolchain. The <a href="https://github.com/Homebrew/homebrew-core/commit/e9d10d69a949fae64576d8bc9cf17a5916b7d235">homebrew formula</a> makes a couple of changes:</p>
<ul>
<li>it tweaks the paths and build options;</li>
<li>it removes the need to install mercurial to get a dependency (I just installed mercurial instead).</li>
</ul>
<h2>A Makefile</h2>
<p>It obviously makes sense to control the build with a Makefile. For simple projects, like the blinky demonstrations, I’ve used this general file:</p>
<pre><code>TARGET_STEM  = blink

PINS_FILE    = pins.pcf

YOSYS_LOG    = synth.log
YOSYS_ARGS   = -v3 -l $(YOSYS_LOG)

VERILOG_SRCS = $(wildcard *.v)

BIN_FILE     = $(TARGET_STEM).bin
ASC_FILE     = $(TARGET_STEM).asc
BLIF_FILE    = $(TARGET_STEM).blif

all: $(BIN_FILE)

$(BIN_FILE): $(ASC_FILE)
	icepack $< $@

$(ASC_FILE): $(BLIF_FILE) $(PINS_FILE)
	arachne-pnr -d $(ARACHNE_DEVICE) -P $(PACKAGE) -o $(ASC_FILE) -p $(PINS_FILE) $<

$(BLIF_FILE): $(VERILOG_SRCS)
	yosys $(YOSYS_ARGS) -p "synth_ice40 -blif $(BLIF_FILE)" $(VERILOG_SRCS)

prog: $(BIN_FILE)
	$(PROG_BIN) $<

timings: $(ASC_FILE)
	icetime -tmd $(ICETIME_DEVICE) $<

clean:
	rm -f $(BIN_FILE) $(ASC_FILE) $(BLIF_FILE) $(YOSYS_LOG)

.PHONY: all clean prog timings</code></pre>
<p>Each project needs only to define the <span class="caps">FPGA </span>chip and programming command, then include the standard file above:</p>
<pre><code>ARACHNE_DEVICE = 1k
PACKAGE = tq144
ICETIME_DEVICE = hx1k
PROG_BIN = iceprog
include ../std.mk</code></pre>
<p>Note that <code>arachne-pnr</code> and <code>icetime</code> both need the device type, but sadly in an inconsistent way.</p>
<p>Then you can build the code:</p>
<pre><code>$ make</code></pre>
<p>or flash it:</p>
<pre><code>$ make prog</code></pre>
<p>or generate a timing profile:</p>
<pre><code>$ make timings</code></pre>
<p>Regrettably, there’s no test infrastructure in the Makefile: a serious omission for all but the most trivial project.</p>
<h2>Other tools</h2>
<p>You should regard the following as things which people have claimed to be useful, but I’ve little or no direct experience of them.</p>
<h3>Icarus Verilog</h3>
<p>This can be handy for simulations. Waveforms can be viewed with GtkWave. Both are in homebrew:</p>
<pre><code>$ brew install icarus-verilog
$ brew cask install gtkwave</code></pre>
<h3>Verilator</h3>
<p><a href="https://www.veripool.org/wiki/verilator">Verilator</a> is another Verilog simulator.</p>
<h3>ice40_viewer</h3>
<p>This displays a configuration graphically in the browser. You can grab the code from <a href="https://github.com/knielsen/ice40_viewer">GitHub</a>, or just <a href="https://knielsen.github.io/ice40_viewer/ice40_viewer.html">run it online</a>.</p>
<h1>iCE40 Blinky on the Olimex HX1K</h1>
<p><em>Martin Oldfield, 2018-03-01. A brief walkthrough of making the <span class="caps">LED</span>s flash on Olimex’s iCE40HX-1K board.</em></p>
<p><em>This article is part of a series documenting my first foray into <span class="caps">FPGA </span>programming. You might find it helpful to read the <a href="./ice40-blinky.html">summary article</a> first.</em></p>
<h2>Introduction</h2>
<p>Olimex make a <a href="https://www.olimex.com/Products/FPGA/iCE40/iCE40HX1K-EVB/open-source-hardware">development board for the <span class="caps">HX1K</span></a>. The board’s design is entirely open: it’s on <a href="https://github.com/OLIMEX/iCE40HX1K-EVB">GitHub</a>.</p>
<p>Unlike the boards from Lattice, it does <em>not</em> contain a programmer: rather Olimex suggest using one of their Arduino clones to do the task. There’s also no <span class="caps">USB </span>connection for a computer: it seems much more a standalone product.</p>
<p><img src="olimex-hx1k-1.jpg" alt="" class="img_noborder_small" /></p>
<h2>Walkthrough</h2>
<p>Two steps are common to all the boards:</p>
<ol>
<li>Install the <a href="./ice40-toolchain.html">iCE40 toolchain</a>.</li>
<li>Clone the repo:</li>
</ol>
<pre><code>$ git clone https://github.com/mjoldfield/ice40-blinky.git</code></pre>
<p>Next you will need to acquire an <span class="caps">FPGA </span>programmer. Olimex suggest programming an Arduino: this is documented <a href="#ard_prog">below</a>. Having configured the Arduino, connect it to the <span class="caps">FPGA </span>board:</p>
<p><img src="olimex-hx1k-4.jpg" alt="" class="img_noborder_small" /></p>
<p>Now we must attend to the power. There isn’t a <span class="caps">USB </span>port on this board, so you can’t directly power it from a computer. Instead you have to choose between:</p>
<ul>
<li>connecting a 5V DC supply to the jack socket on the board;</li>
<li>supplying power to the 3.3V pin on the programming header.</li>
</ul>
<p>The latter is more convenient, but you need to close an open solder-bridge on the bottom of the <span class="caps">PCB </span>to enable it.</p>
<p><img src="olimex-hx1k-2.jpg" alt="" class="img_noborder_small" /></p>
<p>Now you can build the relevant demo, and flash it to the board:</p>
<pre><code>$ cd olimex-hx1k/
$ make flash</code></pre>
<p>Finally, enjoy the <a href="https://en.wikipedia.org/wiki/Blinkenlights">blinkenlights</a>!</p>
<h2>Testing</h2>
<p><img src="olimex-hx1k-3.jpg" alt="" class="img_noborder_small" /></p>
<p>If you have a frequency counter to hand, measure the frequency on pin 34 of the <span class="caps">IDC </span>connector. You should see a 6.25MHz square wave.</p>
<h2>Hardware Notes</h2>
<p>The board’s design is entirely open: it’s on <a href="https://github.com/OLIMEX/iCE40HX1K-EVB">GitHub</a>.</p>
<h3><span class="caps">FPGA</span></h3>
<p>The <span class="caps">FPGA </span>is a iCE40HX-1K in a 100-pin flatpack.</p>
<h3>Clock and <span class="caps">PLL</span></h3>
<p>A 100MHz oscillator module drives pin 15.</p>
<p>In this package the <span class="caps">HX1K </span><em>has no <span class="caps">PLL</span></em>.</p>
<h3><span class="caps">LED</span>s</h3>
<p>Two red <span class="caps">LED</span>s are provided on pins 40 and 51.</p>
<h3>Test point</h3>
<p>Many spare IO pins are brought out to the headers, and we use pin 24 as a test point.</p>
<h2>Software Notes</h2>
<p>Please remember that you can download all of this from <a href="https://github.com/mjoldfield/ice40-blinky">GitHub</a>.</p>
<p>There are only three files: the verilog, the pin definitions, and a Makefile.</p>
<h3>The main source code</h3>
<p>The code is very simple: there are just two <span class="caps">LED</span>s and they flash at 1Hz.</p>
<pre><code>/*
* Top module for Olimex iCE40HX1K-EVB blinky
*
* Bounce LEDs
*
* Generate test signal: 6.25MHz
*/
module top(input CLK
, output LED1
, output LED2
, output TSTA
);
// No PLL, so use 100MHz external clock
wire sysclk;
assign sysclk = CLK;
// We want to do a 2-cycle pattern in 1s, i.e. tick at
// 2Hz. log_2 (100M / 2) = 25.6. so use a 26-bit counter
localparam ANIM_PERIOD = 100000000 / 2;
localparam SYS_CNTR_WIDTH = 26;
reg [SYS_CNTR_WIDTH-1:0] syscounter;
reg led_strobe;
always @(posedge sysclk)
if (syscounter < ANIM_PERIOD-1)
begin
syscounter <= syscounter + 1;
led_strobe <= 0;
end
else
begin
syscounter <= 0;
led_strobe <= 1;
end
reg ledState;
always @(posedge sysclk)
if (led_strobe)
ledState <= !ledState;
assign LED1 = ledState;
assign LED2 = !ledState;
// test signal: 100MHz / 2^4 = 6.25MHz
assign TSTA = syscounter[3];
endmodule</code></pre>
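<p>The counter-sizing arithmetic in the comments generalises nicely. Here’s a small Python helper (purely illustrative, not part of the project) which computes the width for any clock and tick rate:</p>

```python
import math

def counter_width(f_clk_hz, f_tick_hz):
    """Bits needed for a counter that divides f_clk down to f_tick."""
    period = f_clk_hz // f_tick_hz          # clock cycles per tick
    return max(1, math.ceil(math.log2(period)))

print(counter_width(100_000_000, 2))    # 26, as in the code above
print(counter_width(96_000_000, 16))    # 23, as on the HX8K board
```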
<h3>Makefile</h3>
<p>Most of the rules are shared across different dev. boards: we need only to specify the <span class="caps">FPGA </span>and the programming software:</p>
<pre><code>ARACHNE_DEVICE = 1k
PACKAGE = vq100
ICETIME_DEVICE = hx1k
PROG_DEV = $(wildcard /dev/cu.usbmodem*)
PROG_BIN = iceprogduino -I$(PROG_DEV)
include ../std.mk</code></pre>
<h3>Pin summary</h3>
<p>Finally, we need to tell the software which pins are associated with the signals:</p>
<pre><code>$ cat pins.pcf
set_io LED1 40
set_io LED2 51
set_io CLK 15
set_io TSTA 24</code></pre>
<h2 id="ard_prog">The Arduino <span class="caps">FPGA </span>programmer</h2>
<p>Olimex recommend using their <a href="https://www.olimex.com/Products/Duino/AVR/OLIMEXINO-32U4/open-source-hardware">Olimexino-32U4</a> board for this. I think it’s roughly the same as an Arduino Leonardo, but it comes with a <span class="caps">UEXT </span>header which has precisely the right pin out for the programming header on the Olimex <span class="caps">FPGA </span>board.</p>
<p>If you try to use a different Arduino, make sure it uses 3.3V signals, rather than 5V.</p>
<p>The sketch for the board is in the <span class="caps">FPGA </span>board’s <a href="https://github.com/OLIMEX/iCE40HX1K-EVB/tree/master/programmer/olimexino-32u4%20firmware">GitHub repo</a>, but you also need a particular <span class="caps">SPIF</span>lash library. More explicit details can be found on <a href="https://www.olimex.com/wiki/ICE40HX1K-EVB#Preparing_OLIMEXINO-32U4_as_programmer">the Olimex website</a>.</p>
<p>Having flashed the hardware, we now need to install software on the Mac to drive it. That too is from <a href="https://github.com/OLIMEX/iCE40HX1K-EVB/tree/master/programmer/iceprogduino">GitHub</a>:</p>
<pre><code>$ make
$ make install</code></pre>
<p>On the Mac, the Arduino appears as a usbmodem device in /dev:</p>
<pre><code>$ ls /dev/*usbmodem*
/dev/cu.usbmodem14431 /dev/tty.usbmodem14431</code></pre>
<p>Of these, you need the cu.usbmodem version. The number in the device name reflects its place on the <span class="caps">USB </span>buses, and so changes if you plug it in to a different socket. If you just have one such device though, you can use a wildcard, and program the <span class="caps">FPGA </span>thus:</p>
<pre><code>$ iceprogduino -I/dev/cu.usbmodem* foo.bin</code></pre>
<h1>iCE40 Blinky on HX8K Breakout</h1>
<p><em>Martin Oldfield, 2018-02-27. A brief walkthrough of making the <span class="caps">LED</span>s flash on Lattice’s iCE40HX-8K breakout board.</em></p>
<p><em>This article is part of a series documenting my first foray into <span class="caps">FPGA </span>programming. You might find it helpful to read the <a href="./ice40-blinky.html">summary article</a> first.</em></p>
<h2>Introduction</h2>
<p>Lattice make a <a href="http://www.latticesemi.com/en/Products/DevelopmentBoardsAndKits/iCE40HX8KBreakoutBoard">breakout board</a> for their iCE40HX-8K <span class="caps">FPGA.</span> It is a significantly bigger array than the <span class="caps">HX1K </span>chip on the <a href="http://www.latticesemi.com/icestick">iCEstick</a>.</p>
<p>For full documentation on the board, see the <a href="http://www.latticesemi.com/view_document?document_id=50373">user guide</a>.</p>
<p><img src="hx8k-1.jpg" alt="" class="img_noborder_small" /></p>
<h2>Walkthrough</h2>
<p>Two steps are common to all the boards:</p>
<ol>
<li>Install the <a href="./ice40-toolchain.html">iCE40 toolchain</a>.</li>
<li>Clone the repo:</li>
</ol>
<pre><code>$ git clone https://github.com/mjoldfield/ice40-blinky.git</code></pre>
<p>Now let’s tackle the hardware.</p>
<p>By default, the programmer on the board programs the external flash chip. However, it is more convenient and faster to use the <span class="caps">SRAM </span>inside the <span class="caps">FPGA.</span> Moreover, the programming configuration below assumes that you’re programming the <span class="caps">SRAM.</span></p>
<p>To enable <span class="caps">SRAM </span>programming, you need to change a few links on the board:</p>
<p><img src="hx8k-3.jpg" alt="" class="img_noborder_small" /></p>
<p>For more details, see pages 5 and 6 of the <a href="http://www.latticesemi.com/view_document?document_id=50373">user guide</a>.</p>
<p>Having moved the links, connect the board to a <span class="caps">USB </span>port.</p>
<p>Now, build the relevant demo, and flash it to the board:</p>
<pre><code>$ cd HX8K-breakout/
$ make prog</code></pre>
<p>Finally, enjoy the <a href="https://en.wikipedia.org/wiki/Blinkenlights">blinkenlights</a>!</p>
<h2>Testing</h2>
<p><img src="hx8k-2.jpg" alt="" class="img_noborder_small" /></p>
<p>If you have a frequency counter to hand, measure the frequency on test point A: it should be exactly 6MHz. If you prefer something slower, you should find a frequency of exactly 1Hz, with a duty cycle of 1/16 on test point B.</p>
<h2>Hardware Notes</h2>
<p>Full schematics of the board are available in the <a href="http://www.latticesemi.com/view_document?document_id=50373">user manual</a>. Here are some highlights, relevant to our simple project.</p>
<h3><span class="caps">FPGA</span></h3>
<p>The <span class="caps">FPGA </span>is a iCE40HX-8K in a 256-pin <span class="caps">LFBGA.</span></p>
<h3>Clock and <span class="caps">PLL</span></h3>
<p>A 12MHz clock from a ceramic resonator is provided on pin <span class="caps">J3.</span></p>
<p>This <span class="caps">FPGA </span>has a <span class="caps">PLL </span>which lets us scale the incoming clock. Arbitrarily, we will try to get a 96MHz system clock, and to do this we need some magic numbers with which we can configure the <span class="caps">PLL.</span> Enter <code>icepll</code>:</p>
<pre><code>$ icepll -i 12 -o 96 -m -f pll.v
F_PLLIN: 12.000 MHz (given)
F_PLLOUT: 96.000 MHz (requested)
F_PLLOUT: 96.000 MHz (achieved)
FEEDBACK: SIMPLE
F_PFD: 12.000 MHz
F_VCO: 768.000 MHz
DIVR: 0 (4'b0000)
DIVF: 63 (7'b0111111)
DIVQ: 3 (3'b011)
FILTER_RANGE: 1 (3'b001)
PLL configuration written to: pll.v </code></pre>
<p>As you can see the <span class="caps">PLL </span>can generate this clock exactly.</p>
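<p>For what it’s worth, in <span class="caps">SIMPLE </span>feedback mode I believe these magic numbers relate to the output frequency by \(f_{out} = f_{in} (DIVF + 1) / (2^{DIVQ} (DIVR + 1))\); a quick Python check reproduces the numbers above:</p>

```python
def pll_out_mhz(f_in_mhz, divr, divf, divq):
    # iCE40 PLL, SIMPLE feedback (my reading of the docs):
    # VCO = f_in * (DIVF + 1) / (DIVR + 1), output = VCO / 2**DIVQ
    f_vco = f_in_mhz * (divf + 1) / (divr + 1)
    return f_vco / 2**divq

print(pll_out_mhz(12, 0, 63, 3))   # 96.0
```

Note that \(f_{VCO} = 12 \times 64 = 768\,\textrm{MHz}\), matching the icepll output.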
<p>Notice too, that <code>icepll</code> helpfully writes the relevant verilog to a file. Sadly though, the verilog doesn’t use global clock buffers, so it needs to be tweaked by hand.</p>
<h3><span class="caps">LED</span>s</h3>
<p>The board sports eight red <span class="caps">LED</span>s arranged in a line. They are attached to pins <span class="caps">B5, B4, A2, A1, C5, C4, B3, </span>and <span class="caps">C3.</span></p>
<h3>Test points</h3>
<p>As befits the name breakout board, many spare IO pins exist, and we use two as test points: B1 and <span class="caps">B2.</span></p>
<h3>Programming</h3>
<p>The board has an <span class="caps">FTDI</span> 2232H <span class="caps">USB </span>interface which can be used to program both external flash and internal <span class="caps">SRAM </span>with <code>iceprog</code> from the IceStorm Tools. You must supply the <code>-S</code> flag to <code>iceprog</code> when programming the <span class="caps">SRAM.</span></p>
<p>Note: jumpers J6 and J7 on the board govern whether the flash or <span class="caps">SRAM </span>is programmed. As shipped they are set for flash, but the walkthrough above moves them to <span class="caps">SRAM </span>mode.</p>
<h2>Software Notes</h2>
<p>Please remember that you can download all of this from <a href="https://github.com/mjoldfield/ice40-blinky">GitHub</a>.</p>
<p>There are only four small files: a couple of bits of verilog, the pin definitions, and a Makefile.</p>
<h3>The main source code</h3>
<p>The code is much as you’d expect, though it takes slightly more care than its counterpart for the <a href="./ice40-blinky-icestick.html">iCEstick</a>.</p>
<p>In particular, we use the <span class="caps">PLL</span>’s locked signal to reset things on power-up. Rather than driving the animation from a free-running binary counter, we generate a precise 16Hz tick so that the 16-cycle animation should take exactly one second (modulo the accuracy of the master oscillator).</p>
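<p>The counter sizing is easy to verify with a back-of-envelope check (a sanity check in Python, not part of the project):</p>

```python
import math

# A 16 Hz animation tick derived from the 96 MHz system clock.
f_sys = 96_000_000
f_anim = 16

anim_period = f_sys // f_anim               # cycles between ticks
width = math.ceil(math.log2(anim_period))   # counter bits required

print(anim_period, width)                   # 6000000 23
```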
<pre><code>/*
* Top module for HX8K breakout blinky
*
* Sweep light along LED array
*
* Generate test signals at 6.0MHz and 1Hz.
*/
module top(input CLK
, output LED0
, output LED1
, output LED2
, output LED3
, output LED4
, output LED5
, output LED6
, output LED7
, output TSTA
, output TSTB
);
// PLL to get 96MHz clock
wire sysclk;
wire locked;
pll myPLL (.clock_in(CLK), .global_clock(sysclk), .locked(locked));
// We want to do a 16-cycle pattern in 1s, i.e. tick at
// 16Hz. log_2 (96M / 16) = 22.516.. so use a 23-bit counter
localparam ANIM_PERIOD = 96000000 / 16;
localparam SYS_CNTR_WIDTH = 23;
reg [SYS_CNTR_WIDTH-1:0] syscounter;
reg anim_stb;
always @(posedge sysclk)
if (locked && syscounter < ANIM_PERIOD-1)
begin
syscounter <= syscounter + 1;
anim_stb <= 0;
end
else
begin
syscounter <= 0;
anim_stb <= 1;
end
// animation phase: 4-bits so 16 cycles
reg [3:0] anim_phase;
// a register holding LED state.
reg [7:0] leds;
always @(posedge sysclk)
if (!locked)
anim_phase <= 0;
else if (anim_stb)
begin
anim_phase <= anim_phase + 1;
case (anim_phase)
4'b0000: leds <= 8'b00000001;
4'b1000: leds <= 8'b10000000;
default:
if (anim_phase[3])
leds <= leds >> 1;
else
leds <= leds << 1;
endcase
end // if (anim_stb)
assign { LED0, LED1, LED2, LED3, LED4, LED5, LED6, LED7 } = leds;
// test signals on counter
assign TSTA = syscounter[3]; // 96MHz / 2^4 = 6MHz
assign TSTB = anim_phase == 0; // 1Hz
endmodule</code></pre>
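<p>The animation logic can be checked outside a Verilog simulator too: this little Python model (mine, purely illustrative) mirrors the <code>case</code> statement above and prints the full 16-step ping-pong sweep:</p>

```python
# Model the 16-phase LED animation from the Verilog above.
def step(phase, leds):
    if phase == 0:
        return 0b00000001           # reload at the start of the sweep
    if phase == 8:
        return 0b10000000           # reload at the turn-around
    if phase & 0b1000:              # second half: walk right
        return leds >> 1
    return (leds << 1) & 0xFF       # first half: walk left

leds = 0
seen = []
for phase in range(16):
    leds = step(phase, leds)
    seen.append(leds)

print([f"{x:08b}" for x in seen])
```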
<h3>The <span class="caps">PLL </span>code</h3>
<p>The <span class="caps">PLL </span>code is generated by <code>icepll</code>, then edited to use global buffers to distribute the clock and locked status.</p>
<p>Technical note <a href="http://www.latticesemi.com/~/media/LatticeSemi/Documents/ApplicationNotes/IK/iCE40sysCLOCKPLLDesignandUsageGuide.pdf?document_id=47778"><span class="caps">TN1251</span></a> discusses clocks and <span class="caps">PLL</span>s on the iCE40.</p>
<pre><code>/**
* PLL configuration
*
* This Verilog module was generated automatically
* using the icepll tool from the IceStorm project.
* Use at your own risk.
*
* Subsequent tweaks to use a Global buffer were made
* by hand.
*
* Given input frequency: 12.000 MHz
* Requested output frequency: 96.000 MHz
* Achieved output frequency: 96.000 MHz
*/
module pll(
input clock_in,
output global_clock,
output locked
);
wire g_clock_int;
wire g_lock_int;
SB_PLL40_CORE #(
.FEEDBACK_PATH("SIMPLE"),
.DIVR(4'b0000), // DIVR = 0
.DIVF(7'b0111111), // DIVF = 63
.DIVQ(3'b011), // DIVQ = 3
.FILTER_RANGE(3'b001) // FILTER_RANGE = 1
) uut (
.LOCK(g_lock_int),
.RESETB(1'b1),
.BYPASS(1'b0),
.REFERENCECLK(clock_in),
.PLLOUTGLOBAL(g_clock_int)
);
SB_GB clk_gb ( .USER_SIGNAL_TO_GLOBAL_BUFFER(g_clock_int)
, .GLOBAL_BUFFER_OUTPUT(global_clock) );
SB_GB lck_gb ( .USER_SIGNAL_TO_GLOBAL_BUFFER(g_lock_int)
, .GLOBAL_BUFFER_OUTPUT(locked) );
endmodule
</code></pre>
<h3>Makefile</h3>
<p>Most of the rules are shared across different dev. boards: we need only to specify the <span class="caps">FPGA </span>and the programming software:</p>
<pre><code>ARACHNE_DEVICE = 8k
PACKAGE = ct256
ICETIME_DEVICE = hx8k
# the -S flag says program the SRAM, not flash
PROG_BIN = iceprog -S
include ../std.mk</code></pre>
<p>Note that the programming command now sports a <code>-S</code> flag: this means program the <span class="caps">SRAM, </span>not the external flash chip.</p>
<h3>Pin summary</h3>
<p>Finally, we need to tell the software which pins are associated with the signals:</p>
<pre><code>$ cat pins.pcf
set_io LED0 B5
set_io LED1 B4
set_io LED2 A2
set_io LED3 A1
set_io LED4 C5
set_io LED5 C4
set_io LED6 B3
set_io LED7 C3
set_io CLK J3
set_io TSTA B1
set_io TSTB B2 </code></pre>B1E895E6-023F-11E2-A874-CE98A777C6DD2012-09-19T09:52:05:05Z2018-01-04T23:01:27:27ZUseful Geocaching LinksMartin Oldfield<p>Helpful links for geocaching: particularly puzzle solving. </p><h2>Codes and Cyphers</h2>
<ul>
<li><a href="http://ref.wikibruce.com">Wikibruce</a></li>
<li><a href="http://vc.airvectors.net/ttcode.html">Greg Goebel's</a> collection of codes.</li>
<li>The <a href="http://easyciphers.com">Easy Ciphers</a> page.</li>
<li><a href="http://rumkin.com/tools/cipher/">Rumkin's tools</a></li>
<li><a href="http://www.braingle.com/brainteasers/codes/">Braingle's codes</a></li>
<li>The US Army <a href="http://www.umich.edu/~umich/fm-34-40-2/">‘Basic Cryptanalysis’ Field Manual</a></li>
</ul>
<h2>Online solvers</h2>
<ul>
<li>Magic Eye/Random Dot Stereogram <a href="http://magiceye.ecksdee.co.uk">solver</a> and <a href="http://graphics.stanford.edu/~kekoa/talks/gcafe-20030417/gcafe-20030417.pdf">some maths.</a></li>
<li><a href="https://quipqiup.com">Quipquip</a> substitution solver.</li>
<li><a href="https://www.guballa.de/vigenere-solver">Guballa</a> Vigenère solver.</li>
<li><a href="http://bionsgadgets.appspot.com"><span class="caps">BION</span>'s gadgets</a></li>
<li><a href="https://www.dcode.fr/tools-list">dcode.fr</a></li>
</ul>
<h2>Geographical things</h2>
<ul>
<li><a href="http://www.fieldenmaps.info/cconv/cconv_gb.html">Coordinate conversion:</a> handles OS GB grid references too.</li>
<li>Official <a href="https://www.ordnancesurvey.co.uk/gps/transformation/">Ordnance Survey coordinate transformation tool</a></li>
<li><a href="https://osmaps.ordnancesurvey.co.uk">Ordnance Survey Maps:</a></li>
<li><a href="http://www.streetmap.co.uk">Streetmap:</a> seems the best way to find streets and places by name.</li>
</ul>
<h2>General Resources</h2>
<ul>
<li><a href="http://geocachingtoolbox.com">The Geocaching Toolbox:</a> lots of useful tools.</li>
<li><a href="http://bcaching.wordpress.com/category/puzzle-caches/">Bcaching's</a> notes on puzzle caches.</li>
<li><a href="http://parmstro.weebly.com/solving-puzzles.html">Peter Armstrong's</a> notes on puzzle caches.</li>
<li><a href="http://perplexcitywiki.com/wiki/puzzle_tools">Puzzle Tools</a></li>
<li> <a href="http://www.geocaching.com/seek/cache_details.aspx?guid=c0c63967-52b0-4862-ad8d-36f5bfe9b1da"><span class="caps">GC25WQJ</span>:</a> a puzzle tutorial cache in Michigan.</li>
</ul>
<h2>Miscellanea</h2>
<ul>
<li><a href="http://en.wikipedia.org/wiki/Digital_root">Digital Root:</a> I can never remember the definition. </li>
</ul>30E00006-D633-11E7-8EFB-FAC1F7B7024D2017-12-01T01:00:48:48Z2017-12-01T10:28:00:00ZIlluminating Face IDMartin Oldfield<p>A quick look at Face ID illumination in the time domain. </p><h2>Introduction</h2>
<p>Apple’s new iPhone X uses <a href="https://support.apple.com/en-gb/HT208108">Face ID</a> instead of a fingerprint sensor. The basic idea, as explained on <a href="https://en.wikipedia.org/wiki/Face_ID">Wikipedia</a>, is to take a 3D, infra-red image of your face, and unlock the phone if it matches an internal model.</p>
<p>Tech Insider have a <a href="http://uk.businessinsider.com/how-face-id-iphone-x-works-infrared-dots-scan-technology-2017-11">nice video</a> showing how the phone projects a pattern of dots onto the subject, but I wondered how things looked in the time-domain.</p>
<p>So, I connected a photo-transistor to my oscilloscope and plotted the results. The circuit is in no sense optimal; rather, I just threw together things I had lying around. The sensor, a <a href="https://www.vishay.com/photo-detectors/list/product-81549/">Vishay <span class="caps">TEFT4300</span></a> photo-transistor, has a light current of about 3mA, which should drop about 2V across 620Ω.</p>
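<p>The expected signal level is just Ohm’s law (a one-line sanity check, nothing more):</p>

```python
# ~3 mA of light current through the 620 ohm load resistor.
i_light = 3e-3    # A
r_load = 620.0    # ohm

v_drop = i_light * r_load
print(v_drop)     # about 1.86, i.e. roughly the 2 V quoted above
```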
<p><img src="schematic.png" alt="" class="img_noborder_small" /></p>
<p>I must emphasize that this whole experiment was very crude: I held the detector close to my face, then pointed the phone at me. Given that we know Face ID projects dots of light, the variations below could easily come from small changes in the relative positions of the sensor and phone.</p>
<h2>Results</h2>
<p>As you might expect, it’s far from just constant illumination, but I was surprised by how much structure I found. I suppose that, in part, all this enables some sort of synchronous detection, which makes the system resistant to changes in ambient illumination and prevents two nearby iPhone Xs from interfering with each other.</p>
<h2>Basic traces</h2>
<p>We begin with full traces of two unlocking manœuvres.</p>
<p><a href="./seg1.png"><img src="seg1.png" alt="" class="img_noborder" /></a></p>
<p><a href="./seg2.png"><img src="seg2.png" alt="" class="img_noborder" /></a></p>
<p>You can see there are many pulses of varying width and intensity. Both traces show two very high pulses, one of which is the last pulse in the sequence. I’ve no idea if this is generally true.</p>
<h3>Timing</h3>
<p>Although the two pulse trains are quite different, they both last about the same time: a shade under 450ms. That sets some sort of scale on how quickly Face ID can unlock the phone.</p>
<p><a href="./seg1-t.png"><img src="seg1-t.png" alt="" class="img_noborder" /></a></p>
<p><a href="./seg2-t.png"><img src="seg2-t.png" alt="" class="img_noborder" /></a></p>
<h2>Spikes</h2>
<p>Although both traces show two large spikes, they are not delayed by the same amount. In the first trace, they are 267ms apart; in the second 325ms.</p>
<p><a href="./seg1-spike-dt.png"><img src="seg1-spike-dt.png" alt="" class="img_noborder_2up" /></a> <a href="./seg2-spike-dt.png"><img src="seg2-spike-dt.png" alt="" class="img_noborder_2up" /></a></p>
<h2>Pulse detail</h2>
<p>On the other hand, the pulse widths do seem to be consistent. Short pulses last 3ms; long ones 9ms. The gaps aren’t nice multiples of 3ms though, so perhaps the time quantum is smaller.</p>
<p>There is also intensity variation in the bright pulses.</p>
<p><a href="./seg1-z1.png"><img src="seg1-z1.png" alt="" class="img_noborder_2up" /></a> <a href="./seg2-z1.png"><img src="seg2-z1.png" alt="" class="img_noborder_2up" /></a></p>
<p><a href="./seg1-z2.png"><img src="seg1-z2.png" alt="" class="img_noborder_2up" /></a> <a href="./seg2-z2.png"><img src="seg2-z2.png" alt="" class="img_noborder_2up" /></a></p>
<p><a href="./seg1-z3.png"><img src="seg1-z3.png" alt="" class="img_noborder_2up" /></a> <a href="./seg2-z3.png"><img src="seg2-z3.png" alt="" class="img_noborder_2up" /></a></p>
<h2>Conclusions</h2>
<p>It’s nice to see that even very simple experiments can give interesting results. The data here show that the Face ID scan takes about 450ms, and that it is modulated with pulses of 3ms and 9ms duration.</p>
<p>It would be interesting to see a video of the scan, taken with an IR camera at about 1000 frames per second. Sadly I don’t have such an animal! </p>65CBAA82-CFD9-11E7-94A9-A3F3ECE1C9C82017-11-22T23:02:44:44Z2017-11-25T22:37:40:40ZJansjö dimmerMartin Oldfield<p>A simple dimmer for the Ikea Jansjö work lamp. </p><h2>Introduction</h2>
<p>The Ikea Jansjö work lamp is a simple, cheap (~£10) <span class="caps">LED </span>light. It comes in <a href="http://www.ikea.com/gb/en/products/lighting/work-lamps/jansj&ouml;-led-work-lamp-black-art-00169659/">desktop</a>, and <a href="http://www.ikea.com/gb/en/products/lighting/work-lamps/jansj&ouml;-led-wall-clamp-spotlight-white-art-00315651/">clamp</a> variants, and you even have (some) choice of colour!</p>
<p>I’ve got a bunch of them around the house, but don’t take my word for it: Ben Krasnow <a href="https://youtu.be/YBQp04glQqc?t=4m8s">likes them too</a>!</p>
<p>The only problem with them is that sometimes they’re too bright, so I thought I’d make a dimmer for them.</p>
<p><img src="dimmer.jpg" alt="" class="img_border" /></p>
<h2>Details</h2>
<p>From what I can tell, both desk and clamp lights have a simple <span class="caps">LED </span>in the lamp body, a switch in the cable, and a wall-wart with some sort of current control.</p>
<p>The supplies for my desk lamps claim an output of 4V, whereas the clamp supply boasts a more generous 7V. This might reflect different vintages of lights, or just different batches.</p>
<h2><span class="caps">PWM </span>for the win</h2>
<p>Given that there are different designs in the wild, I thought the most robust way to control the brightness was to just modulate the <span class="caps">LED</span>’s output with a relatively slow pulse-train, and vary its duty cycle.</p>
<p>Switching the power with a <span class="caps">MOSFET </span>is an obvious choice, and after consulting <a href="https://artofelectronics.net"><em>The Art of Electronics</em></a> a simple <a href="https://en.wikipedia.org/wiki/555_timer_IC">555</a>-based oscillator seemed a sensible way to drive it.</p>
<p>The classic 555 oscillator circuit needs a little fettling to produce duty-cycles less than 50%. I just copied the book to give this schematic:</p>
<p><a href="./dimmer-schematic.pdf"><img src="dimmer-schematic.png" alt="" class="img_noborder" /></a></p>
<h2>Component details</h2>
<p>The circuit is trivial, but there are a few points to note:</p>
<ul>
<li>The classic 555 needs a supply voltage of at least 4.5V, which won’t do. So, I used a cheap <span class="caps">CMOS </span>version too: the <a href="http://www.ti.com/product/LMC555"><span class="caps">LMC555</span></a>.</li>
<li>The <a href="https://www.infineon.com/dgdl/irf530nspbf.pdf?fileId=5546d462533600a4015355e38eb4199c"><span class="caps">IRF530</span></a> <span class="caps">MOSFET </span>is complete overkill: it can switch up to 12A, but it seemed to me that this board might be more generally useful to warrant some headroom. 4V is just enough to saturate the device.</li>
<li>The potentiometer is a linear slide potentiometer from <a href="http://www.bourns.com/products/potentiometers/slide-potentiometers/product/PTA">Bourns’ <span class="caps">PTA </span>series</a>. The fixed series resistor sets a lower-bound on the duty-cycle i.e. a minimum brightness for the light.</li>
<li>Although not shown here, there’s also a power switch in the main power rail. Small rocker switches seemed hard to find, I used a <a href="http://www.nkkswitches.com/wp-content/themes/impress-blank/search/inc/part.php?part_no=CWT12AAS1"><span class="caps">CWT12AAS1</span></a> from <span class="caps">NKK.</span></li>
</ul>
<h2>Performance</h2>
<p>The circuit should oscillate at</p>
\[
f_{OSC} = \frac{1.44}{R C},
\]
<p>and here, \(R = 11\textrm{k}Ω\) and \(C = 1μ\textrm{F}\). So we expect \(f_{OSC} ≈ 130\textrm{Hz}\).</p>
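<p>Plugging the numbers into the formula above (a quick check of the arithmetic):</p>

```python
# 555 oscillator estimate: f = 1.44 / (R * C).
r = 11e3    # ohm
c = 1e-6    # farad

f_osc = 1.44 / (r * c)
print(f_osc)   # about 130.9 Hz
```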
<h2>Characterization</h2>
<p>I did most of my testing with a 7V light. Without the dimmer in place, the power-supply generates 6.90V which dropped to 6.89V when the <span class="caps">LED </span>was connected. The <span class="caps">LED </span>drew a suspiciously round 200mA of current.</p>
<p>The circuit’s description in <em>The Art of Electronics</em> talks about being able to change the duty-cycle over almost the whole range without affecting the frequency much, so I thought I’d measure this.</p>
<p><a href="./phi-f.pdf"><img src="phi-f.png" alt="" class="img_noborder" /></a></p>
<p>The end points on this plot correspond to the extreme positions of the pot. You can see that the duty-cycle ranges from about 15% to 95%: the asymmetry comes from the fixed resistor which provides a lower bound on the duty-cycle. It has a value of about 10% of the pot, so the basic performance of the circuit is roughly 5% to 95%.</p>
<p>At a duty-cycle of about 50%, the frequency is about 136Hz: about 5% high. That seems well within the expected tolerance of the capacitor, an <span class="caps">X7R</span> 0603 ceramic.</p>
<p>We also see about a 5% change in frequency as the duty-cycle changes: high duty-cycle corresponds to low frequency and vice versa.</p>
<p>Redoing the experiment at a different time of day gave frequencies about 1.5Hz lower than shown above: I think this is probably a temperature effect.</p>
<h3>High-frequency structure</h3>
<p>The figure below shows the current flowing to the <span class="caps">LED </span>at a duty-cycle of about 63%, measured using a <a href="http://www.eevblog.com/projects/ucurrent/">µCurrent</a>.</p>
<p><img src="2cycles.png" alt="" class="img_noborder" /></p>
<p>If we zoom in on the region where the <span class="caps">MOSFET </span>is on, we can see regular spikes, which presumably come from a switching regulator in the wall-wart. In the trace shown they are spaced 25.4μs apart, which corresponds to a frequency of 39.4kHz. This isn’t constant though: at other times I’ve seen significantly lower frequencies.</p>
<p><img src="on-noise.png" alt="" class="img_noborder" /></p>
<p>There is also some oscillation in the period when the <span class="caps">MOSFET </span>is off. I’m not sure where this comes from, nor whether it’s just a measurement artefact or a real issue. The frequency is odd: about 13kHz.</p>
<p><img src="off-noise.png" alt="" class="img_noborder" /></p>
<h3>The effect of voltage</h3>
<p>I measured the performance of a couple of 4V lamps non-invasively with a photo-diode:</p>
<table class="cspaced" cellspacing="0"><tr class="toprowborder"><th class="leftborder bottomborder" rowspan="2">Lamp</th><th class="leftborder bottomborder" colspan="2">Dim</th><th class="lrborder bottomborder" colspan="2">Bright</th></tr><tr class="bottomrowborder"><th class="leftborder">f / Hz</th><th class="leftborder">duty-cycle</th><th class="leftborder">f / Hz</th><th class="lrborder">duty-cycle</th></tr><tr><td class="leftborder">7V</td><td class="leftborder">137.9</td><td class="leftborder">14.1%</td><td class="leftborder">133.1</td><td class="lrborder">95.5%</td></tr><tr><td class="leftborder">4V #1</td><td class="leftborder">128.3</td><td class="leftborder">15.8%</td><td class="leftborder">118.7</td><td class="lrborder">95.4%</td></tr><tr class="bottomrowborder"><td class="leftborder">4V #2</td><td class="leftborder">131.9</td><td class="leftborder">16.7%</td><td class="leftborder">123.0</td><td class="lrborder">95.8%</td></tr></table>
<h3>Summary</h3>
<p>It is easy to point to ways in which the circuit is not a perfect fixed-frequency oscillator with a duty-cycle adjustable between 0 and 100%. The imperfections are slightly more marked at 4V than 7V, presumably because at least some of them come from the forward voltage of the diode, which becomes more significant as the voltage falls.</p>
<p>Although three samples isn’t enough to draw serious conclusions, the oscillator appears to run faster at higher voltages. I wonder if this comes from the <a href="https://en.wikipedia.org/wiki/Ceramic_capacitor#Voltage_dependence_of_capacitance">voltage dependence of the capacitance</a> of the <span class="caps">MLCC </span>timing capacitor.</p>
<p>Taking a step back though, these issues are minor. The circuit works as a perfectly good <span class="caps">PWM </span>controller.</p>
<h2>Design files</h2>
<p><a href="./dimmer-pcb.pdf"><img src="dimmer-pcb.png" alt="" class="img_noborder" /></a></p>
<h3>Schematic and <span class="caps">PCB</span></h3>
<p>The electronics were designed in KiCad and you can download the files from <a href="https://github.com/mjoldfield/jansjo-dimmer">GitHub</a>.</p>
<p>The <span class="caps">PCB </span>is a somewhat generous 48mm long and 28mm wide: the former’s governed by the length of the pot., and the latter by the desire to mount the pot centrally whilst allowing space for board connectors.</p>
<p>With the exception of the pot and connectors, construction is surface mount, with 0603 passives.</p>
<h3>Enclosure and front panel</h3>
<p><a href="./dimmer-panel.pdf"><img src="dimmer-panel.png" alt="" class="img_noborder" /></a></p>
<p>The <span class="caps">PCB </span>fits neatly into a Camden Boss <a href="http://camdenboss.com/enclosures/potting-boxes/potting-boxes-with-lid-hb"><span class="caps">RX2KL07</span></a> potting box. I replaced the lid with 3mm laser-cut acrylic, and a <span class="caps">PDF </span>for the cuts is also in the repo.</p>
<p>Although the <span class="caps">PCB </span>has mounting holes, in practice the whole assembly is fixed to the front panel via the mounting holes on the potentiometer.</p>
<p>Somewhat idiosyncratically, the <span class="caps">PDF </span>for the laser cutter is produced by a <a href="https://www.haskell.org">Haskell</a> program.</p>
<h2>Installation</h2>
<p>This was relatively easy: I cut out the existing switch and replaced it with the dimmer module.</p>
<p>Somewhat annoyingly the strain-relief holes in the <span class="caps">PCB </span>were a bit too small and so couldn’t be used. </p>A98E4C5E-C34D-11E7-8604-9FEFF894CC522017-11-06T23:53:12:12Z2017-11-11T22:19:31:31ZPureScriptMartin Oldfield<p>Notes on PureScript. </p><h2>Introduction</h2>
<p>Recently, I’ve been playing with <a href="http://www.purescript.org">PureScript</a>, a functional, Haskell-like, language which compiles to JavaScript. These are my notes on the adventure, and lean heavily on comparisons with Haskell.</p>
<p>My motivation was the desire to build little webpage widgets, mainly because I’ve wanted to be able to embed little demonstrations into online articles. One could write these in JavaScript directly, but given a free choice I’d rather write Haskell. Sadly though, the anecdotal evidence I have for doing this with something like <a href="https://github.com/ghcjs/ghcjs">ghcjs</a> is that it is a painful experience.</p>
<p>Instead, the cool kids seem to like <a href="http://elm-lang.org">elm</a> and <a href="http://www.purescript.org">PureScript</a>. At a cursory level, the latter looked nicer to my eye.</p>
<h2>Comparison with Haskell</h2>
<p>It is clear that PureScript owes a lot to Haskell. For example, the “Hello World” example on PureScript’s homepage is readable to anyone familiar with Haskell:</p>
<pre><code>import Prelude
import Control.Monad.Eff.Console (log)
greet :: String -> String
greet name = "Hello, " <> name <> "!"
main = log (greet "World") </code></pre>
<p>Of course there are differences. Indeed the PureScript repo on GitHub has a <a href="https://github.com/purescript/documentation/blob/master/language/Differences-from-Haskell.md#evaluation-strategy">page discussing them</a>.</p>
<h3>Prelude</h3>
<p>In the first line above we <em>explicitly</em> import the Prelude, whereas in Haskell this happens <em>implicitly</em>. In this sense Haskell is perhaps more opinionated, and as with all opinions some people disagree. The Haskell Wiki discusses how to <a href="https://wiki.haskell.org/No_import_of_Prelude">avoid importing the Prelude</a>, and it has become quite fashionable to promote <a href="http://www.stephendiehl.com/posts/protolude.html">changing the Prelude</a>.</p>
<p>To some extent I think PureScript embraces many ideas popular in the contemporary Haskell community, rather than say <a href="https://www.haskell.org/onlinereport/haskell2010/haskellch6.html#x13-1270006.3">Haskell 2010</a>.</p>
<p>For example, if you look in old Haskell documentation, you’ll find that:</p>
<pre><code>maximum :: Ord a => [a] -> a</code></pre>
<p>but today, after the <a href="https://wiki.haskell.org/Foldable_Traversable_In_Prelude"><span class="caps">FTP</span></a>, it is polymorphic:</p>
<pre><code>maximum :: (Foldable f, Ord a) => f a -> a</code></pre>
<p>PureScript follows the modern convention, and goes a little further: its <code>maximum</code> returns a <code>Maybe</code>, making the empty-container case total:</p>
<pre><code>> import Data.Foldable
> :t maximum
forall a f. Ord a => Foldable f => f a -> Maybe a</code></pre>
<p>You’ll also see that the <code>forall</code> keyword, which is often <a href="https://en.wikibooks.org/wiki/Haskell/Existentially_quantified_types">suppressed in Haskell</a>, remains in PureScript.</p>
<h3>Strict</h3>
<p>A more substantive difference between the languages is that Haskell is <a href="https://en.wikipedia.org/wiki/Lazy_evaluation">lazy</a>, but PureScript is <a href="https://github.com/purescript/documentation/blob/master/language/Differences-from-Haskell.md#evaluation-strategy">strict</a>.</p>
<p>One of the standard arguments in favour of laziness is that it allows you to compose things more easily because it separates definition from evaluation. Sure enough, I quickly found an example which I would implement using an infinite list in Haskell: <a href="https://en.wikipedia.org/wiki/Rejection_sampling">rejection sampling</a> to generate random numbers.</p>
<p>This is a technique for taking random samples from some distribution, where sometimes the sample is invalid and needs to be rejected: specifically I’ll write code which returns a <code>Just Double</code> if things go well and <code>Nothing</code> otherwise. If I get the latter I need to sample again until I get something. Inevitably there’s hidden state here—otherwise there would be no point in repeating the calculation—so the whole thing lives in a monad.</p>
<p>For example, suppose I want to generate uniform deviates on [0,0.5[ by taking deviates on [0,1.0[ then rejecting any greater than 0.5. This is <em>not</em> a sensible solution but demonstrates the technique. Here's some Haskell:</p>
<pre><code>import System.Random
import Data.Maybe
foo :: Double -> Maybe Double
foo x | x < 0.5 = Just x
| otherwise = Nothing
randomFoos :: RandomGen g => g -> [Double]
randomFoos = catMaybes . map foo . randoms</code></pre>
<p>Which you can run thus:</p>
<pre><code>ghci> take 2 . randomFoos <$> getStdGen
[0.390256615512681,0.33410830694521]</code></pre>
<p>All reasonably straightforward: generate an infinite list of randoms, mark some as rejected, then keep the rest. We need the functorial <code><$></code> because the generator is in <code>IO</code>.</p>
<p>By contrast, in PureScript we need more machinery:</p>
<pre><code>untilJust :: forall a eff. Eff eff (Maybe a) -> Eff eff a
untilJust f = f >>= go
where go Nothing = untilJust f
go (Just x) = pure x
foo :: forall eff. Eff (random :: RANDOM | eff) (Maybe Number)
foo = do
u <- random
pure $ if u < 0.5 then (Just u) else Nothing
randomFoo :: forall eff. Eff (random :: RANDOM | eff) (Number)
randomFoo = untilJust foo</code></pre>
<p>This is a slightly unfair example in the sense that the Haskell infinite list consumes the entire future supply of random numbers. That’s fine if all the randomness you want is concentrated in those numbers, but otherwise you’ll need to <a href="https://hackage.haskell.org/package/random-1.1/docs/System-Random.html#v:split">split</a> the generator first. So, more machinery, and a constraint on the random algorithms available.</p>
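<p>For readers following in neither language, the retry loop shared by both versions can be sketched imperatively (here in Python, purely for illustration):</p>

```python
import random

# Rejection sampling: draw uniforms on [0, 1) and keep only those
# below 0.5. As in the text, this demonstrates the technique and is
# not a sensible way to get deviates on [0, 0.5).
def foo():
    u = random.random()
    return u if u < 0.5 else None   # None plays the role of Nothing

def until_just(f):
    while True:                      # retry until a sample is accepted
        x = f()
        if x is not None:
            return x

samples = [until_just(foo) for _ in range(5)]
print(samples)
```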
<h3>Row polymorphism</h3>
<p>In the example above, you can see the <a href="https://github.com/purescript/documentation/blob/master/guides/Eff.md">Eff monad</a> where PureScript encodes the details of the environment. It’s a bit like IO in Haskell.</p>
<p>However, in PureScript, we have fine-grained control over the effects with <a href="https://github.com/paf31/purescript-book/blob/master/text/chapter8.md#extensible-effects">row polymorphism</a>. In the example above, you can read the type of <code>foo</code> as saying we need an Eff monad with <span class="caps">RANDOM </span>functionality and whatever else you like.</p>
<p>For example this type:</p>
<pre><code>main :: Eff (console :: CONSOLE, random :: RANDOM) Unit</code></pre>
<p>means that we have both <span class="caps">RANDOM </span>and <span class="caps">CONSOLE </span>access. The lack of the <code>| eff</code> bit makes the row of effects <em>closed</em>: we are saying that our code doesn’t use any other effects, even in a subcomputation.</p>
<p>In Haskell, I think the natural way to tackle this would be with Monad Transformers.</p>
<h3>Stack overflow</h3>
<p>The only <a href="../10/foreachE.html">proper crash</a> I managed to create with PureScript was blowing the stack when looping over graphics calls in the <span class="caps">CANVAS </span>part of Eff. The short version is that this works:</p>
<pre><code>foreachE (1..100000)
fillRect ...</code></pre>
<p>but this doesn’t:</p>
<pre><code>for (1..100000)
fillRect ...</code></pre>
<h2>Useful information</h2>
<p>The canonical website for PureScript is <a href="http://www.purescript.org">purescript.org</a>. There is also a wealth of information on <a href="https://github.com/purescript/documentation">GitHub</a>, and a <a href="https://leanpub.com/purescript/">fine book</a>.</p>
<p>For specific <span class="caps">API </span>documentation, <a href="https://pursuit.purescript.org">Pursuit</a> is the place to go.</p>
<h3>Cookbook</h3>
<p>Most of the time you accomplish tasks using <a href="https://github.com/purescript-contrib/pulp"><code>pulp</code></a> or <a href="https://bower.io"><code>bower</code></a>. Both projects are well documented, and what follows is a subjectively useful subset.</p>
<h4>Building for the browser</h4>
<p>Given a fresh repository, this is how I build stuff to deploy in a browser:</p>
<pre><code>$ npm install
$ bower install
$ pulp browserify --optimize --to dist/Main.js
$ open html/index.html</code></pre>
<p>Obviously you don’t need to run the install commands subsequently.</p>
<h4>Starting a new project</h4>
<p>To start a new project, use <code>pulp</code>:</p>
<pre><code>$ pulp init</code></pre>
<p>Having done this, the key information is saved in <code>bower.json</code> e.g.</p>
<pre><code>$ cat bower.json
{
"name": "foo",
"ignore": [
"**/.*",
"node_modules",
"bower_components",
"output"
],
"dependencies": {
"purescript-prelude": "^3.1.1",
"purescript-console": "^3.0.0"
},
"devDependencies": {
"purescript-psci-support": "^3.0.0"
}
}</code></pre>
<h4>Adding a library</h4>
<p>To add a new library, and update <code>bower.json</code>:</p>
<pre><code>$ bower install purescript-foo --save</code></pre>
<h4>The <span class="caps">REPL </span>and other executions</h4>
<p>To launch psci, the <span class="caps">REPL</span>:</p>
<pre><code>$ pulp repl</code></pre>
<p>To run the code in node:</p>
<pre><code>$ pulp run</code></pre>
<p>To run the tests:</p>
<pre><code>$ pulp test</code></pre>
<h2>Browser Support</h2>
<p>Much of the time, I write PureScript to deploy in a browser. To do this we need an <span class="caps">API.</span> Happily the popular <a href="https://reactjs.org">React</a> JavaScript library is both already <a href="https://pursuit.purescript.org/packages/purescript-react">packaged</a> and described in the book.</p>
<p>By and large it was easy to use, and were I starting afresh I’d use it again. There is a higher-level library called <a href="https://pursuit.purescript.org/packages/purescript-thermite">Thermite</a>, but I preferred to stick to something used widely in JavaScript land.</p>
<p>Given that I wanted to embed a widget written in PureScript into a webpage which was written elsewhere, I could assume that the widget would be simply connected rather than spread over the page, or interleaved with explanatory text. This was fortunate, because I couldn’t quite see how to do that.</p>
<p>Another problem I had concerned layout: it seemed hard to make the widget’s width 80% of the container. This issue comes up when people use React in Javascript too: on <a href="https://stackoverflow.com/questions/33939974/make-view-80-width-of-parent-in-react-native">Stack Overflow</a>, or <a href="https://facebook.github.io/react-native/docs/height-and-width.html">facebook’s react repo</a>. Rather than translate those solutions to PureScript, it sufficed to set the width of the top-level React object with a simple <span class="caps">CSS </span>rule.</p>
<h3>Canvas</h3>
<p>For graphics, there are <a href="https://pursuit.purescript.org/packages/purescript-canvas">PureScript bindings</a> to the <a href="https://en.wikipedia.org/wiki/Canvas_element"><span class="caps">HTML5</span> Canvas Element</a> which works just as you’d expect.</p>
<h2>Conclusions</h2>
<p>The zeroth order conclusion is that this works. I found PureScript a good way to write Haskell like code which can then be deployed in the browser. The toolchain is easy to install; the abstractions are good so you don’t have to write PureScript but debug JavaScript.</p>
<p>Browser integration works well too: React is pleasant, and the Canvas works as you’d expect.</p>
<p>That all said, I still prefer Haskell. If nothing else PureScript’s convinced me that lazy really is best, and Haskell still seems classier than anything else.</p>
<p>I suppose the final conclusion is that next time I want to write JavaScript I’ll write PureScript instead, but given the choice I’d still prefer to write Haskell. </p>AF823200-BB92-11E7-A637-DEC0BB32B7162017-10-27T21:18:41:41Z2017-10-29T09:48:52:52ZDiffing directoriesMartin Oldfield<p>How I compare big directories. </p><h2>The problem</h2>
<p>Recently I’ve wanted to compare a few versions of my home directory: I want to know which files change over time, and to check that my backups actually contain the files I expect, with the data in them I expect.</p>
<p>The notes below relate to MacOS, but I expect they’d work in approximately the same way on any Unix box.</p>
<p>In rough terms, I want a recursive <a href="https://en.wikipedia.org/wiki/Diff_utility">diff</a> i.e.</p>
<pre><code>$ diff --recursive a b</code></pre>
<p>There are a couple of problems with this:</p>
<ul>
<li>Sometimes it is not convenient to put both directory trees on the same machine.</li>
<li>When I tried doing this recently on my home directory (~700GB), I killed the job after a few days because it hadn’t finished and I got bored.</li>
</ul>
<h2>Intrusion Detection Systems</h2>
<p>This problem is close to one tackled by <a href="https://en.wikipedia.org/wiki/Host-based_intrusion_detection_system">host-based intrusion detection</a> systems, which alert a system administrator to nefarious changes in files.</p>
<p>On Linux, I’ve used <a href="https://github.com/integrit/integrit">integrit</a> and <a href="https://github.com/Tripwire/tripwire-open-source">tripwire</a> for this, <a href="http://aide.sourceforge.net"><span class="caps">AIDE</span></a> seems common too.</p>
<p>However none of these are trivial to install on MacOS, and even if they were I think they spend lots of time securing against deliberate subterfuge which necessarily makes them harder to use when you just want to compare arbitrary trees.</p>
<h2>The checksum trick</h2>
<p>One of the key ideas in things like tripwire is that instead of comparing the files directly, it is a good approximation to compute a <a href="https://en.wikipedia.org/wiki/Hash_function">hash</a> of each file and then compare those instead. The idea is an old one, but remains popular: for example I think it’s central to the way <a href="https://en.wikipedia.org/wiki/Git">git</a> works.</p>
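<p>As an aside, the idea is easy to sketch outside the shell: here is a minimal Python version of the per-file hashing (illustrative only; the actual pipeline below uses <span class="caps">GNU </span>coreutils).</p>

```python
import hashlib

def file_hash(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in fixed-size
    chunks so arbitrarily large files need only constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

<p>Two trees probably match if the digests of corresponding files match: comparing 64-character strings is much cheaper than shipping the files around.</p>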
<p>So, can we easily generate hashes of all the files we care about ? This being Unix, we can indeed, in a single <a href="https://www.gnu.org/software/findutils/manual/html_node/find_html/Invoking-find.html#Invoking-find">find</a> command:</p>
<pre><code>$ find a -type f -exec gsha256sum {} + > a.csums</code></pre>
<p>This generates <a href="https://en.wikipedia.org/wiki/SHA-2"><span class="caps">SHA</span>-2</a> hashes for all the files under <code>a</code>, and saves them to <code>a.csums</code>. I am not sure that <span class="caps">SHA</span>-2 is the best choice here, but I’m reasonably confident that it’s not entirely stupid.</p>
<p>The <code>gsha256sum</code> command from <span class="caps">GNU</span>'s <a href="https://www.gnu.org/s/coreutils/">coreutils</a> package actually does the hashing. It isn’t installed on stock MacOS, but is in <a href="https://brew.sh">homebrew</a>. Assuming that you have homebrew, you can install coreutils thus:</p>
<pre><code>$ brew install coreutils</code></pre>
<p>It is worth noting that the find command ignores both non-files (e.g. links) and any file metadata. These struck me as advantages, but <span class="caps">YMMV.</span></p>
<h3>Sample output</h3>
<p>The output of gsha256sum looks like this:</p>
<pre><code>6a2b70adfcf22278f71f75fe532a254b981dffc303925d6008ee4240b10f7317 bu
f8401d2de8c7094ca2c170dc93603179b64d5dfdcef8ea23e965a250e813e588 tm</code></pre>
<p>You could easily use a different hashing program, but sadly the format of MacOS’s standard md5 command isn’t suitable:</p>
<pre><code>$ md5 *
MD5 (bu) = 4849350721dec3431f6a506d27655641
MD5 (tm) = 6a21f7b1583cde5daeecaa0a0609fb2e</code></pre>
<h2>Sorting for the win</h2>
<p>The file full of checksums above suffers from a problem: it is ordered in the order in which <code>find</code> traversed the directory tree, which isn’t something we care about.</p>
<p>We can remove this excess entropy by simply sorting the file:</p>
<pre><code>$ sort a.csums > a.scs</code></pre>
<p>At first I balked at the idea of sorting this enormous (~500MB) file, but I tried it anyway and it took about a minute. It’s easy to forget just how fast modern machines are, particularly when running code written when resources were more limited and so people took more care to write efficient code.</p>
<h2>Diff mangling</h2>
<p>So, given a couple of files of sorted checksums, the only problem left is to compare them. The naïve approach gets us a long way there:</p>
<pre><code>$ diff a.scs b.scs
15d14
...
< 000098c3fac6be1dcad03b4f75280db0c14c4d3a3f34ad02350f16f8df646dd0 foo/bar
...
> 000dfe1a15ea0bf343d536611c71cd5c2d676d72ec380f47c015071b54b746b3 foo/baz
...</code></pre>
<p>There are three classes of line:</p>
<ul>
<li>lines which begin with a number, which tell us where the differences between the files are located: we can ignore these;</li>
<li>lines which begin <code><</code>: these are lines which occur in the first file but not the second;</li>
<li>lines which begin <code>></code>: these are lines which occur in the second file but not the first.</li>
</ul>
<p>With many files it is a pain to absorb all these changes by eye, so we need a program. This is the one part of the solution which doesn’t exist, so we’ll need to write it: happily two dozen lines of Perl will suffice:</p>
<pre><code>#! /usr/bin/perl
use strict;
use warnings;
my %diffs;
while(<>)
{
next unless /^[<>]/;
chomp;
my ($dir, $hash, $file) = split(/\s+/, $_, 3);
$diffs{$file}->{$dir} = $hash;
}
foreach my $file (sort keys %diffs)
{
my $h = $diffs{$file};
my $k = (!defined $h->{'<'}) ? ' >'
: (!defined $h->{'>'}) ? '< '
: ($h->{'<'} ne $h->{'>'}) ? '<>'
: '==';
printf "%s %s\n", $k, $file;
} </code></pre>
<p>As is probably obvious the code accumulates all the diffs in a hash, keyed by the file name. It then iterates over the hash printing a list of the files marked with the kind of change:</p>
<ul>
<li><code><</code> means that it’s in the first tree but not the second;</li>
<li><code>></code> means that it’s in the second tree but not the first;</li>
<li><code><></code> means that it’s in both trees but the contents are different.</li>
</ul>
<p>In the unlikely event that <code>diff</code> flags both files as different, yet they have the same hash, the mark is <code>==</code>. I’ve not seen this ever appear, but it seems prudent to include the case.</p>
<p>Assuming that you’ve saved the code on your $PATH as <code>munge-diff-output</code>, you can do the full comparison thus:</p>
<pre><code>$ find a -type f -exec gsha256sum {} + > a.csums
$ sort a.csums > a.scs
$ find b -type f -exec gsha256sum {} + > b.csums
$ sort b.csums > b.scs
$ diff a.scs b.scs | munge-diff-output
< foo/bar
> foo/baz
<> foo/banana</code></pre>
<p>As is probably obvious, all of the information from the directory tree is distilled into the .scs files. So, the three steps above could all be performed on different machines.</p>
<p>My home directory has about 700GB of files in it. The checksum file is a bit less than 500MB, and gzip compresses it to about 150MB.</p>
<h3>Handling lots of changes</h3>
<p>As noted above the code builds a hash of changes in memory. We do this because the files are listed in order of their checksum, so the same file will probably occur at very different places in the two checksum files.</p>
<p>If holding everything in memory is a problem, it might be better to sort the output first e.g.:</p>
<pre><code>$ diff a.scs b.scs | sort -k 3 -k 1</code></pre>
<p>This brings all mentions of a file together, with <code><</code> lines before <code>></code> lines, so the stream could be post-processed with minimal memory use. I have not tried this though.</p>
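<p>If you did want to try it, here is a sketch of such a streaming post-processor, in Python rather than Perl (hypothetical code in the same spirit as the script above, and untried in anger):</p>

```python
def munge_sorted(lines):
    """Post-process `diff a.scs b.scs | sort -k 3 -k 1` output.

    Because the stream is sorted by file name, both mentions of a
    file are adjacent, so we hold at most one pending line in
    memory.  Yields (mark, file) pairs: '< ', ' >', '<>' or '=='.
    """
    pending = None              # (dir, hash, file) awaiting a partner
    for line in lines:
        if not line.startswith(("<", ">")):
            continue            # skip diff's location and '---' lines
        d, h, f = line.rstrip("\n").split(None, 2)
        if pending and pending[2] == f:
            yield ("<>" if pending[1] != h else "==", f)
            pending = None
        else:
            if pending:
                yield ("< " if pending[0] == "<" else " >", pending[2])
            pending = (d, h, f)
    if pending:
        yield ("< " if pending[0] == "<" else " >", pending[2])
```

<p>As in the Perl version, the direction marker, hash, and file name are split apart; but here only one line is ever held, because the sort guarantees both sightings of a file arrive together.</p>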
<h2>Conclusion</h2>
<p>There’s very little new here and I expect most people fluent at the command line could do this for themselves without much thought. Certainly it’s taken me longer to write this up than to concoct it.</p>
<p>On the other hand, it might save some people some time, and it’s nice to be reminded how well the Unix shell still works. </p>BBFDF132-B6FA-11E7-BB64-FBB086D556772017-10-22T07:25:09:09Z2017-10-29T09:43:25:25ZGull’s LighthouseMartin Oldfield<p>Gull’s 1988 Lighthouse problem, with an interactive demonstration. </p><h2>Introduction</h2>
<p>In his 1988 paper <a href="http://bayes.wustl.edu/sfg/why.pdf">Bayesian Inference and Maximum Entropy</a> Gull discusses how to find a lighthouse. He says:</p>
<blockquote><p>A lighthouse is somewhere off a piece of straight coastline at a position \(x\) along the coast and a distance \(y\) out to sea. It emits a series of short, highly collimated flashes at random intervals and hence random azimuths. These pulses are intercepted on the coast by photo-detectors that record only the fact that a flash has occurred, but <em>not</em> the azimuth from which it came. \(N\) flashes so far have been recorded at positions \(\{a_i\}\). Where is the lighthouse ?</p></blockquote>
<p>Now, you might expect that this will be a problem about triangulation: if the detectors measured the azimuths we could indeed solve it that way. However, we will have to work harder.</p>
<p>Formally, our task is to infer the position of the lighthouse \((x, y)\), given the position of the flashes: \( \{a_i\} \). The diagram below shows the setup:</p>
<p><img src="gl/fig1.png" alt="" class="img_noborder" /></p>
<p>Since Gull’s paper, the problem has been discussed in various places including <a href="https://www.amazon.com/dp/0198568320">Sivia’s and Skilling’s book</a>. The solution hasn’t changed, though these days people like to <a href="http://www.di.fc.ul.pt/~jpn/r/bugs/lighthouse.html">solve it</a> by using <span class="caps">MCMC </span>packages like <a href="http://mc-stan.org">Stan</a>.</p>
<p>This article says nothing new either: I was interested in the problem as a test-case for demonstrating how Bayesian inference works by writing an interactive widget, and ended up writing down the algebra too.</p>
<h2>Intuition</h2>
<p>It is sensible to ask if this task is reasonable, so let’s start by simulating the problem. Consider a flash emitted at an angle \(\theta_1\): trigonometry shows that it will hit the coast at:</p>
<p><img src="gl/fig2.png" alt="" class="img_noborder" /></p>
\[
a_1 = x + y \tan \theta_1.
\]
<p>Thus we can generate representative data by picking a random angle between \([-\pi / 2, \pi / 2]\), and working out the corresponding \(a\). The histogram below shows the position of 10,000 random flashes assuming that the lighthouse’s position \((x, y) = (1, 1)\).</p>
<p><img src="gl/hist_1_1.svg" alt="" class="img_noborder" /></p>
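<p>That simulation takes only a couple of lines in any language; for example, a Python sketch (illustrative, and not the code actually used to draw the figures):</p>

```python
import math
import random

def flash_positions(x, y, n, seed=42):
    """Simulate n flashes from a lighthouse at (x, y): draw a uniform
    azimuth theta in (-pi/2, pi/2), then project onto the shore at
    a = x + y * tan(theta)."""
    rng = random.Random(seed)
    return [x + y * math.tan(rng.uniform(-math.pi / 2, math.pi / 2))
            for _ in range(n)]

samples = flash_positions(1.0, 1.0, 10_000)
# A histogram of `samples` peaks near a = x = 1.
```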
<p>The most obvious feature is that we see a peak centred on \(a = 1\), i.e. the \(x\)-coordinate of the lighthouse. So, it certainly seems plausible that we’ll be able to infer \(x\): it will be roughly where the flashes arrive most often.</p>
<p>Now, what about \(y\) ? If we imagine scaling the diagram above around the point \((a, 0)\), then I hope it’s clear that if we move the lighthouse twice as far from the shore then the flash will move twice as far along the shore. So, moving the lighthouse twice as far from shore makes the distribution of flashes twice as wide. Let’s test this by calculating a new histogram assuming that \( (x, y) = (1, 2)\):</p>
<p><img src="gl/hist_1_2.svg" alt="" class="img_noborder" /></p>
<p>The centre of the distribution is in roughly the same place, but the peak is broader. Although the histogram scales are the same, the central bin has only about 300 hits rather than 600 above.</p>
<h3>Technical details</h3>
<p>The histograms both group the observations into 100 bins spaced evenly in the range \(-10 < a < 10\). Some observations lie outside this range though: in the first example the samples ranged from about -7,000 to +2,000; in the second -12,000 to 6,000. These distributions have very wide tails!</p>
<h2>A formal treatment</h2>
<p>To tackle the problem formally, we need the probability of \(a\) given \((x, y)\), although the parameters are continuous so we will actually be dealing with probability densities.</p>
<p>Specifically, we assume that flashes are equally likely to occur in all directions and so:</p>
\[
\textrm{p}(\theta)\,d\theta = \frac{d\theta}{\pi}.
\]
<p>Remember \(\theta\) is bounded by \([ -\pi/2, \pi/2 ]\), i.e. the lighthouse always flashes towards the shore.</p>
<p>Now, using the transformation law that,</p>
\[
\textrm{p}(a)\, da = \textrm{p}(\theta)\, d\theta,
\]
<p>we have</p>
\[
\begin{eqnarray} \textrm{p}(a) da &=& \textrm{p}(\theta)\ \frac{d\theta}{da} \, da, \\\
&=& \frac{1}{\pi} \frac{y}{y^2 + (a-x)^2} \,da. \end{eqnarray}
\]
<p>This is a standard form: the <a href="https://en.wikipedia.org/wiki/Cauchy_distribution">Cauchy distribution</a>. You can see that it matches the data if we overlay it on the histogram above:</p>
<p><img src="gl/hist_1_1-c.svg" alt="" class="img_noborder" /></p>
<p>To be pedantic, the distribution is conditioned on \((x, y)\) and so we should write:</p>
\[
\textrm{p}(a|x,y) = \frac{1}{\pi} \frac{y}{y^2 + (a-x)^2}.
\]
<p>We recognize this is the <a href="https://en.wikipedia.org/wiki/Likelihood_function">likelihood</a> of the arrival position \(a\) given the lighthouse’s position \((x, y)\).</p>
<h3>Multiple flashes</h3>
<p>It is easy to extend this to more than one flash: all are <a href="https://en.wikipedia.org/wiki/Independence_(probability_theory)">independent</a> so we just see a product of the individual likelihoods:</p>
\[
\begin{eqnarray} \textrm{p}(\{a_i\}|x,y) &=& \prod_i \, \textrm{p}(a_i|x,y),\\\
&=& \prod_i \frac{1}{\pi} \frac{y}{y^2 + (a_i-x)^2}. \end{eqnarray}
\]
<h2>Bayes’ theorem</h2>
<p>Returning to our original problem, we want to infer the position of the lighthouse given the arrival locations of the flashes i.e. we want to calculate</p>
\[
\textrm{p}(x,y|\{a_i\}).
\]
<p>Happily <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes’ theorem</a> relates this to the likelihood:</p>
\[
\textrm{p}(x,y|\{a_i\}) = \frac{\textrm{p}(\{a_i\}|x,y)\, \textrm{p}(x,y)}{\textrm{p}(\{a_i\})}.
\]
<h3>Our prior</h3>
<p>As with all Bayesian inferences, we need to choose a <a href="https://en.wikipedia.org/wiki/Prior_probability">prior</a> for \((x,y)\), which represents our state of knowledge before we get any data. We hope that our conclusions will be driven by the data we receive, and so this choice won’t matter much.</p>
<p>In the interests of simplicity, assume a flat, bounded, prior e.g.</p>
\[
\textrm{p}(x,y) = \begin{cases} (2d \times l)^{-1} & \text{for } -l < x < l \text{ and } 0 < y < d, \\\
0 & \text{otherwise} \end{cases}.
\]
<p>Or in other words we’re saying that we know the lighthouse is somewhere in a rectangular region, and every spot in that region is equally likely.</p>
<h3>The posterior</h3>
<p>Putting all of this together, if \(-l < x < l\) and \(0 < y < d\),</p>
\[
\begin{eqnarray} \textrm{p}(x,y|\{a_i\}) &=& \left(\pi^N \times 2d \times l \times \textrm{p}(\{a_i\})\right)^{-1} \prod_{i = 1}^N \frac{y}{y^2 + (a_i-x)^2}, \\\
&=& \frac{1}{Z} \prod_{i = 1}^N \frac{y}{y^2 + (a_i-x)^2}. \end{eqnarray}
\]
<p>Where \(Z\) absorbs all the constants, including \(\textrm{p}(\{a_i\})\) which has no \(x,y\) dependence.</p>
<p>Sadly it is hard to proceed analytically, but it is easy enough to compute the probability numerically on a grid and observe the results.</p>
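<p>A sketch of that numerical computation, in plain Python on a coarse grid with the same flat prior (the demonstration below does the equivalent in PureScript; the grid size and box bounds here are arbitrary choices):</p>

```python
import math

def posterior_grid(flashes, l=2.0, d=2.0, n=50):
    """Posterior density of (x, y) on an n-by-n grid over the prior
    box -l < x < l, 0 < y < d.  Work with log-likelihoods for
    numerical stability, then normalise so the grid sums to 1."""
    xs = [-l + 2.0 * l * (i + 0.5) / n for i in range(n)]
    ys = [d * (j + 0.5) / n for j in range(n)]
    logp = [[sum(math.log(y / (y * y + (a - x) ** 2)) for a in flashes)
             for x in xs] for y in ys]
    m = max(max(row) for row in logp)      # subtract the max before exp
    p = [[math.exp(v - m) for v in row] for row in logp]
    z = sum(sum(row) for row in p)
    return [[v / z for v in row] for row in p], xs, ys
```

<p>With a few hundred flashes the grid cell of maximum probability sits close to the true \((x, y)\).</p>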
<h2>A demonstration</h2>
<div id="data" width="80%"></div>
<p><script type="text/javascript" src="gl/Main.js"></script></p>
<p>The black square above represents the area in which the lighthouse is hiding. When you click on the ‘Flash!’ button, a new flash is observed as shown in the blue bar. The square is then shaded to show the posterior probability density of the lighthouse’s position.</p>
<p>The colourmap shades the most probable pixel in bright yellow, even if it’s not particularly likely in an absolute sense. So you only really know where the lighthouse is when you see a <em>small</em> bright area.</p>
<p>Flashes outside the blue box are still used in the computation, but they’re shown in red at the box edge. The height of the flash mark is chosen randomly to give a better idea of the density of flashes as more arrive.</p>
<p>The ‘Reset’ button moves the lighthouse to a new, random, location, and throws away all the old observations.</p>
<p>Finally, the small blue circle shows the true position of the lighthouse. As more data arrive, the posterior density should converge on this point.</p>
<h3>Technical details</h3>
<p>The code generates the probabilities on a 100×100 square grid, which seems to run happily in my browser up to about 100 observations.</p>
<p>The browser sees a chunk of JavaScript, but the actual code is written in <a href="http://www.purescript.org">PureScript</a>.</p>
<h2>Discussion</h2>
<p>The posterior probability distribution encodes all the information we know about the position of the lighthouse once we’ve taken into account the information given by the arrival positions of the flashes.</p>
<p>Given this distribution we could compute a mean position, and a variance around that, but it isn’t obvious that this would be helpful. If you run a few demonstrations then you’ll probably see some pretty arcs and whorls of probability. Although we <em>could</em> calculate the mean, it wouldn’t be very representative: there is no sense in which such posteriors are well approximated by a nice little ellipse.</p>
<p>Happily, when we have lots of data the lighthouse’s position is tied down. Then it makes more sense to calculate and quote a mean and covariance.</p>
<h2>(In)sufficient statistics</h2>
<p>When we analyse data with Gaussian errors, it is often helpful to calculate the mean and variance of the data, because they contain all the information we need to do the full Bayesian analysis. In other words, we can find the parameters of the Gaussian posterior distribution from just the mean and variance of the data.</p>
<p>You might wonder whether the mean and variance of the data would be helpful here. It seems unlikely because the Bayesian analysis above gives the right answer, and it contains neither of them.</p>
<p>Moreover, although we <em>can</em> calculate the mean and variance of the particular \(\{a_i\}\) we see, the mean and variance of the underlying Cauchy distribution are not defined.</p>
\[ \int_{-\infty}^{\infty} a\, \textrm{p}(a) \, da = \frac{1}{\pi} \int_{-\infty}^{\infty} a\, \frac{y}{y^2 + (a-x)^2} \,da,
\]
<p>does not exist, and</p>
\[ \int_{-\infty}^{\infty} a^2\, \textrm{p}(a) \, da = \frac{1}{\pi} \int_{-\infty}^{\infty} a^2\, \frac{y}{y^2 + (a-x)^2} \,da
\]
<p>diverges. This issue is discussed in more detail in section 7 of the <a href="http://www.randomservices.org/random/special/Cauchy.html">Cauchy page on Random Services</a>, at the <a href="http://www.math.uah.edu/stat/">University of Alabama</a>.</p>
<h2>Fat tails</h2>
<p>The underlying issue here is that the Cauchy distribution has enormous weight in its tails i.e. it’s not that unlikely to receive flashes a long way down the beach.</p>
<p>There is a practical issue here: if we imagine having only a finite length to our detector array it is likely that we will miss some important flashes, which will lead to an underestimate of the spread of \(a\). In turn this will lead to an underestimate of \(y\).</p>
<p>To fix this, we’d need to modify the likelihood: conceptually easy but fiddly in practice.</p>
<h2>Source code</h2>
<p>The code to generate the histograms and demonstration are available on <a href="https://github.com/mjoldfield/gulls-lighthouse">GitHub</a>. </p>2642ED6A-AABF-11E7-9F00-87F529508F342017-10-06T17:52:21:21Z2017-10-06T18:09:50:50ZforeachE and stack spaceMartin Oldfield<p>A solution to looping over lots of things in Eff without busting the stack when writing Purescript. </p><h2>Introduction</h2>
<p>The code below is a lightly modified version of an example in chapter 9 of the fine <a href="https://leanpub.com/purescript/">“Purescript by Example” book</a>.</p>
<pre><code>module Example.ManyRect where
import Prelude
import Control.Monad.Eff (Eff, foreachE)
import Data.Maybe (Maybe(..))
import Graphics.Canvas (CANVAS, fillRect, setFillStyle, getContext2D,
getCanvasElementById)
import Partial.Unsafe (unsafePartial)
import Data.Foldable (for_)
import Data.Array ((..))
import Data.Int (toNumber)
main :: Eff (canvas :: CANVAS) Unit
main = void $ unsafePartial do
Just canvas <- getCanvasElementById "canvas"
ctx <- getContext2D canvas
_ <- setFillStyle "#0000FF" ctx
for_ (1..1000) (\x -> void $
fillRect ctx
{ x: toNumber x, y: toNumber x, w: 100.0, h: 100.0 })
_ <- setFillStyle "#00ff00" ctx
void $ fillRect ctx { x: 0.0, y: 200.0, w: 50.0, h: 50.0 }</code></pre>
<p>As you might guess, it draws 1,000 overlapping blue squares, and then a single green one.</p>
<p>This code works, but if you increase the number of blue squares, say to 1,000,000 then it crashes. If you’ve got a Javascript console open then you’ll see an error:</p>
<pre><code>Uncaught RangeError: Maximum call stack size exceeded</code></pre>
<p>Otherwise you’ll just wonder why the green square doesn’t get drawn.</p>
<h2>Recursion</h2>
<p>The key to understanding this problem is to realize that in many functional languages, including Purescript, loops are often implemented recursively. So, naively, each step of the loop corresponds to a subroutine call.</p>
<p>In Javascript the size of the call stack is limited. Inevitably, this has been discussed on Stack Overflow:</p>
<ul>
<li>a 2011 discussion about <a href="https://stackoverflow.com/questions/7826992/browser-javascript-stack-size-limit">the size limit</a>;</li>
<li>a 2016 article pointing out that <a href="https://stackoverflow.com/questions/34570551/call-stack-size-in-recursive-functions-maximum-call-stack-size-is-lower-than-ex">it is stack size not recursion depth</a> which matters.</li>
</ul>
<p>A general rule seems to be that if you’re doing lots of recursion in Purescript, you might hit a Javascript limit.</p>
<p>This problem isn’t new. There’s a specific term for converting the recursive calls into a loop: “tail call elimination”. Unsurprisingly, Wikipedia has a good <a href="https://en.wikipedia.org/wiki/Tail_call">article</a> about it.</p>
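<p>The same failure, and the same cure, can be shown in any language without tail-call elimination. Here is a Python illustration (Python, like JavaScript, caps its call stack; this is an analogy only, not a description of PureScript’s internals):</p>

```python
def count_down_recursive(n):
    """One stack frame per step: blows the stack for large n."""
    if n == 0:
        return 0
    return count_down_recursive(n - 1)

def count_down_loop(n):
    """The recursion eliminated by hand: constant stack space,
    so n can be as large as you like."""
    while n > 0:
        n -= 1
    return n

# count_down_loop(1_000_000) returns 0, while
# count_down_recursive(1_000_000) raises RecursionError.
```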
<p>Here though, it’s hard to manipulate the code run by the Javascript engine, because it’s generated by the Purescript compiler.</p>
<h2>A proper solution</h2>
<p>Happily though, there is a very easy way to solve the problem. The Eff monad has a specific looping construct <code>foreachE</code>:</p>
<pre><code>foreachE :: forall e a. Array a -> (a -> Eff e Unit) -> Eff e Unit</code></pre>
<p>You’ll see that this only works if the effect returns <code>Unit</code>, but that’s fine here: we care about the associated action e.g. drawing a rectangle, rather than the return value.</p>
<p>So, to fix the program above we need only replace:</p>
<pre><code> for_ (1..1000) ...</code></pre>
<p>with</p>
<pre><code> foreachE (1..1000000) ...</code></pre>
<p>and we’re done! </p>699F2C30-7537-11E7-BB94-267F8CF12A4E2017-07-30T14:57:11:11Z2017-07-30T16:30:39:39ZAn ESP8266-based switchMartin Oldfield<p>A simple switch for my IoT light built around an <span class="caps">ESP8266. </span></p><p><img src="esp8266-switch.jpg" alt="" class="img_border" /></p>
<h2>Introduction</h2>
<p>When <a href="./mongoose.html">playing</a> with <a href="https://mongoose-os.com">Mongoose OS</a> on the <a href="http://espressif.com/en/products/hardware/esp8266ex/overview"><span class="caps">ESP8266</span></a>, I came to the conclusion it was great for building simple things.</p>
<p>So, I built a simple thing with it: a switch for my <a href="../05/yauiotl.html">IoT light</a>.</p>
<h2>Schematic</h2>
<p>The <span class="caps">BOM </span>has two items: a NodeMCU style <span class="caps">ESP8266 </span>board; and a switch. We can use the <span class="caps">ESP8266</span>’s pullups so we don’t need any external resistors. We won’t need a sophisticated <span class="caps">CAD </span>program:</p>
<p><img src="esp8266-switch-schem.png" alt="" class="img_noborder" /></p>
<h3><span class="caps">ESP8266 </span>board</h3>
<p>Using a NodeMCU style board makes construction easy: electrically everything we need is on the board; mechanically it comes with handy mounting holes.</p>
<p>You could make a much smaller switch by choosing a different board, but I'm not sure that would be an advantage.</p>
<h3>The switch</h3>
<p>Getting a rotary encoder to work reliably was a hassle, so I used a two way Mom-Off-Mom rocker switch instead. Digikey had a <a href="https://www.digikey.co.uk/product-detail/en/te-connectivity-alcoswitch-switches/6-1571986-4/450-1529-ND/1021826">nice one</a> made by TE Connectivity (part number <a href="http://www.te.com/commerce/DocumentDelivery/DDEController?Action=srchrtrv&DocNm=1571986&DocType=Customer+Drawing&DocLang=English">6-1571986-4</a>).</p>
<p>Annoyingly the switch cost more than the <span class="caps">ESP8266 </span>board!</p>
<h3>The case</h3>
<p>I made a case by stacking slices of 3mm plywood, laser-cut to shape. <span class="caps">PDF</span>s for the case are available in the <a href="https://github.com/mjoldfield/esp8266-aws-switch">GitHub repo</a>.</p>
<h2>Firmware</h2>
<p>The <span class="caps">ESP8266 </span>runs a lightly modified version of the Mongoose <span class="caps">AWS </span>sample app.</p>
<p>Assuming that you’ve set up the necessary certificates and configured the <span class="caps">MQTT </span>device, you need only replace <code>init.js</code>.</p>
<h3>Functionality</h3>
<p>When the switch is clicked up (down), the <span class="caps">ESP8266 </span>sets the related Amazon Device Shadow to a higher (lower) value. If the switch is held down, it gets set to the maximum (minimum) value.</p>
<h3>Implementation</h3>
<p>Firstly, the quality isn’t great.</p>
<p>The only non-trivial code is a crude task manager: this should be refactored into a separate library.</p>
<p>If you still want the code with these caveats, visit the <a href="https://github.com/mjoldfield/esp8266-aws-switch">GitHub repo</a>. </p>653E948A-64BD-11E7-9F2D-262CEBF0FE562017-07-09T15:42:29:29Z2017-07-11T10:35:25:25ZMongoose OS and the ESP8266Martin Oldfield<p>Brief notes on <a href="https://mongoose-os.com">Mongoose OS</a>, an operating system focussed on IoT applications, with good <span class="caps">ESP8266 </span>support. </p><h2>Introduction</h2>
<p>The <a href="https://en.wikipedia.org/wiki/ESP8266"><span class="caps">ESP8266</span></a> is a cheap hardware platform for WiFi-enabled devices: you can put something on the internet for well under a <a href="https://en.wikipedia.org/wiki/Bank_of_England_&pound;5_note">fiver</a>.</p>
<p>However, we need to consider the software too. The default <span class="caps">ESP8266 </span>firmware makes the device into something like a WiFi modem: it connects to the main processor over a serial link, and accepts <a href="https://www.espressif.com/sites/default/files/documentation/4a-esp8266_at_instruction_set_en.pdf">AT commands</a>. This isn’t ideal, because you’ll need a second processor in the box to handle the application code, even though in many cases the <span class="caps">ESP8266 </span>can do it all.</p>
<p><a href="https://mongoose-os.com">Mongoose OS</a> is one alternative. It provides a replacement firmware for the <span class="caps">ESP8266 </span>which includes a pseudo-Javascript interpreter and webserver. So, a typical <span class="caps">ESP8266</span> Mongoose OS project contains:</p>
<ul>
<li>The <span class="caps">OS, </span>by which I mean both the traditional OS which handles networking and the like, plus extensions to the Javascript engine to handle hardware. Like other operating systems, it includes a number of daemons, some open to the network.</li>
<li>‘User-space’ files, which can include Javascript files containing the ‘application logic’.</li>
<li>C code compiled at build time, and linked into the firmware.</li>
</ul>
<p>Mongoose claim support for other processors too: more of this anon.</p>
<h2>Installation</h2>
<p>The Mongoose website includes <a href="https://mongoose-os.com/software.html">installation instructions</a>, which in the modern style amount to piping the output from curl into bash! There’s also some <a href="https://mongoose-os.com/docs/quickstart/setup.html">more technical documentation</a>.</p>
<p>I’m wary of such a plan so I looked at the script. As of today (July 2017), on the Mac, the main action is to download a single executable, <code>mos</code>, from,</p>
<pre><code>https://mongoose-os.com/downloads/mos/mac/mos</code></pre>
<p>The script then checks for <code>libusb</code> and <code>libftdi</code>, which I had already installed via <a href="https://brew.sh">Homebrew</a>.</p>
<p>Once installed you can run <code>mos</code> to get a list of options:</p>
<pre><code class="small">$ ./mos --help
The Mongoose OS command line tool, v. 20170706-152142/master@871c1644.
Checking updates... Up to date.
Usage:
./mos <command>
Commands:
ui Start GUI
init Initialise firmware directory structure in the current directory
build Build a firmware from the sources located in the current directory
flash Flash firmware to the device
flash-read Read a region of flash
console Simple serial port console
ls List files at the local device's filesystem
get Read file from the local device's filesystem and print to stdout
put Put file from the host machine to the local device's filesystem
rm Delete a file from the device's filesystem
config-get Get config value from the locally attached device
config-set Set config value at the locally attached device
call Perform a device API call. "mos call RPC.List" shows available methods
aws-iot-setup Provision the device for AWS IoT cloud
update Self-update mos tool
wifi Setup WiFi - shortcut to config-set wifi...
Global Flags:
--verbose Verbose output. Optional, default value: "false"
--logtostderr log to standard error instead of files. Optional, default value: "false"</code></pre>
<p>You’ll see that the program is very keen to check we’re running the most recent version. Again following the modern fashion, new versions are published frequently.</p>
<p>The code for <code>mos</code> lives on <a href="https://github.com/cesanta/mongoose-os/tree/master/mos">GitHub</a>. It’s written in Go, so you’ll need to install a Go toolchain to compile it.</p>
<h3>Blinky</h3>
<p>If you attach a suitable <span class="caps">ESP8266 </span>board to a <span class="caps">USB </span>port e.g. the <a href="https://github.com/nodemcu/nodemcu-devkit-v1.0">NodeMCU devkit</a>, then making a <span class="caps">LED </span>blink is simply a case of:</p>
<pre><code class="small">$ ./mos flash esp8266
Fetching https://mongoose-os.com/downloads/esp8266.zip...
Loaded default/esp8266 version 1.0 (20170706-161740/???)
Using port /dev/cu.SLAB_USBtoUART
Opening /dev/cu.SLAB_USBtoUART...
Connecting to ESP8266 ROM, attempt 1 of 10...
Connected
Running flasher @ 460800...
Flasher is running
Flash size: 4194304, params: 0x0240 (dio,32m,40m)
Deduping...
  128 @ 0x3fc000 -> 0
Writing...
  4096 @ 0x0
  4096 @ 0x7000
  262144 @ 0x8000
  675840 @ 0x100000
  4096 @ 0x3fb000
Wrote 950272 bytes in 20.89 seconds (355.47 KBit/sec)
Verifying...
  2592 @ 0x0
  4096 @ 0x7000
  262144 @ 0x8000
  673824 @ 0x100000
  4096 @ 0x3fb000
  128 @ 0x3fc000
Booting firmware...
All done!</code></pre>
<p>The code above downloads a blob containing the OS and a file-system from https://mongoose-os.com/downloads/esp8266.zip, then flashes it to the board. At this point an <span class="caps">LED </span>should start flashing.</p>
<p>Some gotchas:</p>
<ul>
<li>If you have a directory called esp8266, <code>mos</code> will try to find a firmware blob locally instead.</li>
<li>If you have multiple plausible serial devices, you might need to tell <code>mos</code> which one to use with the <code>--port</code> option.</li>
<li>If your <span class="caps">LED </span>is on a different <span class="caps">GPIO, </span>you’ll need to edit the code.</li>
</ul>
<h3>WiFi</h3>
<p>At this point, you might wish to configure the WiFi connection:</p>
<pre><code>$ ./mos wifi SSID PASSWORD</code></pre>
<p>In practice, it is often easier to do this from the browser-based <span class="caps">IDE.</span></p>
<h3>Mac Flashing</h3>
<p>On my Mac I found it impossible to flash boards which use the <span class="caps">CH340 USB</span>-serial bridge. Once flashed, everything worked well though. Rather than explore this particular rabbit hole, I flashed these boards from a Linux box instead.</p>
<h2>The integrated <span class="caps">IDE</span></h2>
<p>Mongoose contains a full web-based <span class="caps">UI, </span>which allows you to do all the things you can do from the command line and more. For example, you can flash the <span class="caps">ESP8266 </span>or configure the WiFi with the <span class="caps">IDE </span>instead of at the command line. To invoke the <span class="caps">IDE</span>:</p>
<pre><code class="small">$ ./mos</code></pre>
<p>At this point your web-browser should open something like http://127.0.0.1:1992/#files, which of course is a web server embedded into <code>mos</code>.</p>
<p>You should see a pretty UI which lets you explore the device. For example, you can browse files on the <span class="caps">ESP8266 </span>by clicking on the ‘Device Files’ link on the left-hand-side.</p>
<p>init.js is a key file: it’s essentially what gets run at boot, and so by looking at it, we can tell what the device is going to do. You can either use the in-browser file manager, or the command line:</p>
<pre><code class="small">$ ./mos get init.js
load('api_config.js');
load('api_gpio.js');
load('api_mqtt.js');
load('api_sys.js');
load('api_timer.js');

// Helper C function get_led_gpio_pin() in src/main.c returns built-in LED GPIO
let led = ffi('int get_led_gpio_pin()')();

let getInfo = function() {
  return JSON.stringify({total_ram: Sys.total_ram(), free_ram: Sys.free_ram()});
};

// Blink built-in LED every second
GPIO.set_mode(led, GPIO.MODE_OUTPUT);
Timer.set(1000 /* 1 sec */, true /* repeat */, function() {
  let value = GPIO.toggle(led);
  print(value ? 'Tick' : 'Tock', 'uptime:', Sys.uptime(), getInfo());
}, null);

// Publish to MQTT topic on a button press. Button is wired to GPIO pin 0
GPIO.set_button_handler(0, GPIO.PULL_UP, GPIO.INT_EDGE_NEG, 200, function() {
  let topic = '/devices/' + Cfg.get('device.id') + '/events';
  let message = getInfo();
  let ok = MQTT.pub(topic, message, 1);
  print('Published:', ok ? 'yes' : 'no', 'topic:', topic, 'message:', message);
}, null);</code></pre>
<p>So we can see that besides flashing an <span class="caps">LED, </span>the NodeMCU has also been configured to make an <a href="https://en.wikipedia.org/wiki/MQTT"><span class="caps">MQTT</span></a> request when a push-button is pressed.</p>
<p>The <span class="caps">MQTT </span>server is configured in the <code>confN.json</code> files: these form a crude overlay database where settings defined in e.g. <code>conf9.json</code> override those in <code>conf0.json</code>.</p>
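<p>To illustrate the overlay idea (the file names come from above, but the keys and values here are invented for the example), a setting in <code>conf9.json</code> simply shadows the same setting in <code>conf0.json</code>:</p>
<pre><code class="small"># conf0.json -- defaults shipped with the firmware
{"mqtt": {"enable": false, "server": ""}}

# conf9.json -- local overrides, which win
{"mqtt": {"enable": true, "server": "broker.example.com:1883"}}</code></pre>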
<p>You can also configure <span class="caps">MQTT </span>from the ‘Device Config’ section of the <span class="caps">UI, </span>or by using <code>mos config-set</code>.</p>
<h2>The filesystem</h2>
<p>Having got to this stage, it is easy to edit files on the <span class="caps">ESP8266.</span> You can either get and put files with the <code>mos</code> tool, or just edit them live in the <span class="caps">UI.</span> Either way, you’ll probably have to reboot the <span class="caps">ESP8266 </span>for the changes to take effect.</p>
<p>For example, having flashed the default firmware, you can change the <span class="caps">LED</span>’s period by just editing the 1000ms interval between timer events. You can then save this new init.js, reboot, and see the change. There is no need to reflash the <span class="caps">ESP8266 </span>from scratch. Instead when you save the file, you make an <span class="caps">RPC </span>call to the Mongoose OS on the <span class="caps">ESP8266, </span>which puts the data into the filesystem.</p>
<p>So, one approach to building a new Mongoose OS system from scratch is to:</p>
<ul>
<li>Flash some generic firmware.</li>
<li>Upload whichever files you want on the device.</li>
<li>Set config variables as needed: it seems better to use <code>mos</code> for this rather than writing a <code>confX.json</code> file directly.</li>
</ul>
<p>For example, once flashed, I run the following script:</p>
<pre><code class="small">#! /bin/sh

for f in fs/*
do
    ./mos put "$f"
done

./mos config-set \
    wifi.sta.enable=true \
    wifi.ap.enable=false \
    wifi.sta.ssid='XXXX' wifi.sta.pass=XXXX \
    aws.shadow.thing_name='XXXX' \
    mqtt.enable=true \
    mqtt.server=xxx.iot.us-east-1.amazonaws.com:8883 \
    mqtt.ssl_cert=cert.pem \
    mqtt.ssl_key=private.pem \
    mqtt.ssl_ca_cert=ca_cert.pem</code></pre>
<p>It will probably become obvious quite quickly that Mongoose has a flat filesystem i.e. there are no directories.</p>
<h3>Network access</h3>
<p>It’s probably worth emphasizing that once you’ve flashed the basic firmware, all the subsequent interactions need only to exchange blobs of data. So, although it might be convenient to keep using the <span class="caps">USB</span>-serial bridge, once the <span class="caps">ESP8266 </span>is on the network, you can do most of this remotely too.</p>
<p>You can see some examples of this in the <a href="https://mongoose-os.com/docs/overview/rpc.html"><span class="caps">RPC </span>documentation</a> on the Mongoose website.</p>
<p>This flexibility obviously has some security implications: if you do something over the network without any kind of access control, so can anyone else!</p>
<h2>Building an application</h2>
<p>If you step back a bit, it is clear that all this configuration malarkey just changes bits in the filesystem, so we could avoid the configuration step entirely if we flashed a correctly configured image.</p>
<p>Mongoose makes this easy, and probably even encourages such a development model. In Mongoose jargon, a customized blob like this is called an ‘app’, and the process for building them is <a href="https://mongoose-os.com/docs/overview/apps.html">well documented</a>.</p>
<p>Abstracting away from details, executing <code>mos build</code> takes the app recipe specified in <code>mos.yml</code> and turns it into a blob suitable for flashing to the <span class="caps">ESP8266.</span></p>
<p>The toolchain for building this is supplied as a Docker container. You can run it either in the cloud or locally.</p>
<p>There are advantages to this approach over flashing random firmware then tweaking it:</p>
<ul>
<li>Conceptually it is nice to have a single blob containing all the code and data.</li>
<li>You can configure the kernel as you wish, in particular disabling daemons you don’t want, which both reduces the attack surface and saves disk space.</li>
<li>Given that you have to flash the device anyway, it is quicker to flash it with the correct data than to flash a generic image and then update it.</li>
</ul>
<p>However, there is a penalty to pay too. It takes a while to build the image, and you have to flash the whole device. So, if you’re just changing a few things in e.g. init.js, it is much faster to edit the files on the <span class="caps">ESP8266.</span></p>
<p>As mentioned above, flashing <span class="caps">CH340 </span>boards didn’t work from my Mac which made the process even slower for me. <a href="https://en.wiktionary.org/wiki/your_mileage_may_vary"><span class="caps">YMMV</span></a>!</p>
<h2>Javascript</h2>
<p>Although Mongoose talks about writing code in JavaScript, this isn’t quite true. You actually write code in <a href="https://github.com/cesanta/mjs">mJS</a>, a limited subset of JavaScript.</p>
<p>I think it would be better for all concerned if the Mongoose documentation made this clearer. I wasted ages trying to get some code to work, only to discover that mJS does not support closures. The mJS GitHub repo does include <a href="https://github.com/cesanta/mjs#restrictions">a list of restrictions</a>, but I didn’t know that at the time.</p>
<p>The lack of closures means that many callbacks take a <code>void *</code> pointer to <a href="https://github.com/cesanta/mjs#callbacks">userdata</a>.</p>
<h2><span class="caps">AWS</span> IoT</h2>
<p>One of Mongoose’s headline features is <a href="https://mongoose-os.com/aws_integration.html">support</a> for <a href="https://aws.amazon.com/documentation/iot/">Amazon Web Services IoT</a>. You can see examples both on the <a href="https://mongoose-os.com/aws-internet-button.html">Mongoose</a>, and <a href="https://aws.amazon.com/blogs/apn/aws-iot-on-mongoose-os-part-1/"><span class="caps">AWS</span></a> websites.</p>
<p>As <span class="caps">AWS</span> IoT supports <span class="caps">MQTT, </span>the marginal work to get this working is to create the relevant certificates for access control, and configure objects on <span class="caps">AWS.</span></p>
<p><code>mos</code> provides a helpful <code>aws-iot-setup</code> command which makes a series of <span class="caps">AWS </span>calls on your behalf.</p>
<p>I was slightly wary of this, and so did things manually. To do the job properly you could study the <a href="https://github.com/cesanta/mongoose-os/blob/master/mos/aws.go">mos source</a>, but I just did this:</p>
<pre><code class="small">$ aws --region us-east-1 iot create-keys-and-certificate \
    --set-as-active \
    --certificate-pem-outfile=fs/cert.pem \
    --public-key-outfile=fs/public.pem \
    --private-key-outfile=fs/private.pem</code></pre>
<p>and then used the <span class="caps">AWS </span>console to attach a suitable policy and thing to these certificates.</p>
<p>You’ll need to configure the <span class="caps">ESP8266 </span>too:</p>
<pre><code class="small">$ mos config-set \
    wifi.sta.enable=true \
    wifi.ap.enable=false \
    wifi.sta.ssid='XXXXX' wifi.sta.pass=XXXX \
    aws.shadow.thing_name='xxxxxxxx' \
    mqtt.enable=true \
    mqtt.server=a2uxxxxxxxxxxxx.iot.us-east-1.amazonaws.com:8883 \
    mqtt.ssl_cert=cert.pem \
    mqtt.ssl_key=private.pem \
    mqtt.ssl_ca_cert=ca_cert.pem</code></pre>
<p>I think the main difference from calling <code>mos</code> is that I made <span class="caps">RSA </span>certificates, but it might be better to use <span class="caps">ECDSA </span>instead. As <a href="https://forum.mongoose-os.com/discussion/1224/connection-to-aws-iot-without-aws-iot-setup">this forum post</a> explains, <span class="caps">ECDSA </span>will be a lot faster if you have an <span class="caps">ATECC508A </span>crypto-chip. On the other hand, I don’t have such a chip!</p>
<p>It is worth pointing out that connecting to <span class="caps">AWS</span> IoT does take ages: about half-a-minute in my experience. This isn’t Amazon’s fault: the <a href="https://forum.mongoose-os.com/discussion/1150/esp8266-cpu-frequency-aws-iot-connection-time"><span class="caps">ESP8266 </span>is slow</a>.</p>
<p>Once the <span class="caps">AWS</span> IoT stuff is working, you can use it to <a href="https://mongoose-os.com/blog/secure-remote-device-management-with-mongoose-os-and-aws-iot-for-esp32-esp8266-ti-cc3200-stm32/">manage the device remotely</a> as well.</p>
<p>Whilst clever and potentially interesting, it is worth remembering that connecting a Mongoose device to <span class="caps">AWS</span> IoT does not open a connection used purely for data: the same channel also carries management traffic.</p>
<h2><span class="caps">STM32</span></h2>
<p>Another headline Mongoose feature is support for other chips. Although I’ve mentioned <span class="caps">ESP8266 </span>a lot above, I’d hoped I could substitute <span class="caps">ESP32 </span>and <span class="caps">STM32 </span>without a problem.</p>
<p>However, whilst Mongoose stand by their <span class="caps">ESP32 </span>support, the <span class="caps">STM32 </span>stuff is presently <a href="https://forum.mongoose-os.com/discussion/1132/installing-on-stm32">‘really flaky’</a>.</p>
<h2>Conclusions</h2>
<p>Overall I’ve been both delighted and disappointed by Mongoose <span class="caps">OS.</span></p>
<p>On the plus side, it makes it very easy to build a certain class of projects based around the <span class="caps">ESP8266.</span> You can get source code from <a href="https://github.com/cesanta">GitHub</a>, you don’t have to go far to get <a href="https://mongoose-os.com/docs/quickstart/setup.html">reasonable documentation</a>, and the <a href="https://forum.mongoose-os.com">forums</a> are great.</p>
<p>On the other hand I think my expectations were set rather too high:</p>
<ul>
<li>You don’t program in JavaScript but in a limited subset of it.</li>
<li>The <span class="caps">STM32 </span>support doesn’t work.</li>
</ul>
<p>Also I think the <span class="caps">ESP8266 </span>just isn’t fast enough to let you write even slow device handlers in mJS. For example, I wanted to connect a rotary encoder, but it was easy to turn it too fast to track.</p>
<p>In practice I think anything which goes up to the JavaScript layer incurs a random latency of a few milliseconds. For example, I hooked a scope to the blinky example and found that although the average flashing period was pretty accurate, the standard deviation was about 1ms, and after a thousand iterations there was about a 4ms spread between the minimum and maximum periods. I think that means that if you want to handle frequencies higher than a few Hertz, you’ll need to use C.</p>
<p>Writing in C isn’t a problem, but I’m not sure I want to write lots of it against the Mongoose <span class="caps">API</span>s or in the Mongoose environment. I think I’d prefer to work in a more traditional setting, but that might just be ignorance.</p>
<p>There’s also a subjective, aesthetic, issue. I feel there’s quite a lot of magic built into <code>mos</code> so you either have to take things on trust, or spend time and thought working around <code>mos</code>.</p>
<p>Take the <span class="caps">AWS </span>credential issues: I wasn’t particularly keen to divulge my <span class="caps">AWS </span>secrets to Mongoose, nor am I particularly keen to outsource certificate creation to Mongoose. I would have been much happier if <code>mos</code> had generated a script which I could eyeball before executing.</p>
<p>I think this is a particularly serious issue with anything security related, so I see a dark side to many of the clever remote management features.</p>
<p>My tentative conclusion is that Mongoose is a great way to build things which:</p>
<ul>
<li>are based on the <span class="caps">ESP8266</span>;</li>
<li>live on safe networks;</li>
<li>contain only hardware which is supported by existing <span class="caps">API</span>s.</li>
</ul>
<p>Happily, that’s a reasonably interesting subset! It’s probably also fair to say that with more experience, I expect I’d be happy to enlarge this domain. </p>74ECB610-6048-11E7-BAE3-25F1EBF0FE562017-07-03T00:13:57:57Z2017-07-06T23:38:49:49ZThe ESP8266Martin Oldfield<p>Brief notes on the <span class="caps">ESP8266 </span></p><h2>Introduction</h2>
<p>A few years ago, I came to the conclusion that it made sense to ignore most non-ARM embedded processors. <span class="caps">ARM </span>seemed ubiquitous and enjoyed enormous network effects, and so I thought that if anything interesting came out in the non-ARM world one of the many <span class="caps">ARM </span>licensees would release something similar without undue delay.</p>
<p>So, when I first heard about Espressif’s <span class="caps">ESP8266 </span>and found out that it had Tensilica’s Xtensa core, I didn’t pay it the attention it deserved. Instead I waited for a cheap <span class="caps">ARM</span>-equivalent to appear and dominate the market, but that doesn’t appear to have happened.</p>
<p>So, belatedly, I’ve been playing with the <span class="caps">ESP8266, </span>and these are some brief notes on the topic. Much of the information here can actually be found on <a href="https://en.wikipedia.org/wiki/ESP8266">Wikipedia</a>, but I didn’t absorb it before I’d spent a while messing around.</p>
<h2>Hardware</h2>
<p>The basic chip is a 3.3V 32-bit <span class="caps">RISC CPU </span>running at 80MHz. It has loads of hardware for networking, including most of the stuff you need for Wi-Fi, and a mixture of the usual interfaces: <span class="caps">SPI,</span> I²C, &c. Espressif’s website has a good <a href="http://espressif.com/en/products/hardware/esp8266ex/overview"><span class="caps">ESP8266 </span>section</a>, including a <a href="http://espressif.com/sites/default/files/documentation/0a-esp8266ex_datasheet_en.pdf">data sheet</a>.</p>
<p>Espressif also has a useful <a href="https://github.com/espressif/esptool">GitHub repo</a> which contains <code>esptool.py</code>, a Python script for low-level access to the <span class="caps">ESP8266 </span>over a serial connection.</p>
<h2>Modules</h2>
<p>In practice, rather than use the bare chip, it’s easier to use a module which usually adds some flash, a Wi-Fi antenna, and an oscillator. Some modules need to be incorporated into a larger design, whilst others essentially stand alone, needing nothing more than a <span class="caps">USB </span>cable to work.</p>
<p>Many of the basic modules come from AI-Thinker, and many of the <span class="caps">USB</span>-connected modules use an AI-Thinker module rather than the <span class="caps">ESP8266 </span>chip directly.</p>
<p>The list below isn’t exhaustive, but covers the things which I found on a random trawl of eBay and Amazon in mid-2017. You can find a more complete list at e.g. the <a href="http://www.esp8266.com/wiki/doku.php?id=esp8266-module-family%23esp-07"><span class="caps">ESP8266 </span>community wiki</a>.</p>
<h2>Modules without <span class="caps">USB</span></h2>
<h3>The <span class="caps">ESP</span>-01 from AI-Thinker</h3>
<p><img src="esp-01.jpg" alt="" class="img_border" /></p>
<p>This is a small <span class="caps">PCB </span>with the <span class="caps">ESP8266,</span> 512kB of flash (on my boards a <a href="https://www.winbond.com/resource-files/w25q40bw%20revf%20101113.pdf"><span class="caps">W25Q40BW</span></a>), and an 8-pin <span class="caps">DIL </span>header. There’s a <span class="caps">PCB </span>antenna, but no shielding can.</p>
<p>The main benefit of this module is that it’s cheap, but the flash is smaller than most other boards which limits the software you can install. <a href="http://hackaday.com/2016/07/16/your-esp8266-needs-more-memory/">Upgrading the flash chip</a> isn’t hard, but it’s not worth the time.</p>
<h3>The <span class="caps">ESP</span>-07 and <span class="caps">ESP</span>-12 from AI-Thinker</h3>
<p><img src="esp-espnn.jpg" alt="" class="img_border" /></p>
<p>These are small modules with the <span class="caps">ESP8266, </span>some flash, and an antenna. The electronics live in a shielded can, and Wikipedia claims that the <span class="caps">ESP</span>-12 is <span class="caps">FCC </span>approved.</p>
<p>The differences:</p>
<ul>
<li>1MB of flash on the <span class="caps">ESP</span>-07, 4MB on the <span class="caps">ESP</span>-12.</li>
<li>Ceramic antenna with a <a href="https://en.wikipedia.org/wiki/Hirose_U.FL"><span class="caps">U.FL </span>connector</a> on the <span class="caps">ESP</span>-07, <span class="caps">PCB </span>antenna on the <span class="caps">ESP</span>-12.</li>
</ul>
<p>The <span class="caps">ESP</span>-12 module comes in -E and -F variants: it’s not clear to me how they differ. Some of the modules have an extra side of pins, but I think most of these are used to talk to the flash chip.</p>
<p>Both modules have both pinholes and castellated edges on a 2mm pitch.</p>
<p>Neither module has any mounting holes.</p>
<h4>Programming</h4>
<p><img src="esp-programmer.jpg" alt="" class="img_border" /></p>
<p>The lack of any <span class="caps">USB </span>support means that you’ll have to connect a serial connection yourself. I pondered building a board to do this, but found that <a href="http://www.ebay.co.uk/usr/smdking?_trksid=p2057872.m2749.l2754">smdking on eBay</a> already made <a href="http://www.ebay.co.uk/itm/Flexible-NodeMCU-ESP8266-programmer-for-ESP-12-ESP-08-ESP-07-/302180828486?hash=item465b618546%3Ag%3AD%7EYAAOSw2xRYZVIz">such a thing</a>. Cunningly it uses horizontal pogo-pins to connect to the board.</p>
<p>Against smdking’s advice, I added a bodge wire to get 5V from the <span class="caps">USB </span>port. The limited current available from <span class="caps">USB </span>ports means that using an external <span class="caps">PSU </span>is better.</p>
<h3>Olimex <span class="caps">MOD</span>-WIFI-ESP8266-DEV</h3>
<p><img src="esp-olimex.jpg" alt="" class="img_border" /></p>
<p>In some ways this is a bit like the <span class="caps">ESP</span>-01: the <span class="caps">ESP8266 </span>and flash on a board with a <span class="caps">PCB </span>antenna, and no screening can.</p>
<p>However:</p>
<ul>
<li>it is physically bigger;</li>
<li>it has 2MB of flash rather than 512k;</li>
<li>it brings all the pins to connectors rather than just a few.</li>
</ul>
<p>So it’s a much more capable board than the <span class="caps">ESP</span>-01, but I can’t see any reason to buy this over the <span class="caps">ESP</span>-12 unless you want the <span class="caps">DIL </span>form-factor.</p>
<h2>Modules with <span class="caps">USB</span></h2>
<p><img src="esp-usb.jpg" alt="" class="img_border" /></p>
<h3>NodeMCU</h3>
<p><a href="https://en.wikipedia.org/wiki/NodeMCU">NodeMCU</a> applies both to a dev. board and some firmware, though here we’ll just consider the hardware.</p>
<p>The NodeMCU hardware is an <span class="caps">ESP</span>-12F module on a board, plus a <span class="caps">USB </span>serial interface and a 5V to 3.3V voltage regulator. The serial interface uses a <a href="http://www.silabs.com/products/interface/usb-bridges/usbxpress-usb-bridges"><span class="caps">CP2102</span></a> <span class="caps">USB </span>bridge.</p>
<p>Connection to the board is made via Micro-USB and a 30-pin <span class="caps">DIL </span>header with 0.9” spacing. Happily the board has M3 mounting holes in the corners.</p>
<p>Clones of the NodeMCU abound. I bought some LoLin brand boards which are a bit cheaper, somewhat larger, and use a <span class="caps">CH340 USB</span>-serial bridge. Size matters here: the <span class="caps">DIL </span>pins are 1.2” apart, so it’s not plug-compatible with the official boards.</p>
<h3>Wemos D1 mini</h3>
<p><a href="https://www.wemos.cc">Wemos</a> make a range of <span class="caps">ESP8266 </span>boards, of which the most common one seems to be the <a href="https://wiki.wemos.cc/products:d1:d1_mini">D1 mini</a>.</p>
<p>Functionally, it’s identical to the NodeMCU board, but it’s physically smaller. One downside: no mounting holes.</p>
<h2>Programming</h2>
<p>The <span class="caps">ESP8266 </span>comes with a bootloader, so to flash new firmware all you need is a serial connection. All the usual serial chips have been pressed into use: <span class="caps">CH340, CP2102, </span>&c.</p>
<p>To flash the firmware nRST and <span class="caps">GPIO0 </span>on the <span class="caps">ESP8266 </span>must be <a href="https://github.com/espressif/esptool/wiki/ESP8266-Boot-Mode-Selection">driven appropriately</a>: reset while <span class="caps">GPIO0 </span>is low. These lines are often connected to the modem control signals <span class="caps">RTS </span>and <span class="caps">DTR.</span></p>
<p>I think this standard started with the NodeMCU; either way, it seems sensible to copy their <a href="https://github.com/nodemcu/nodemcu-devkit-v1.0/blob/master/NODEMCU_DEVKIT_V1.0.PDF">schematic</a>.</p>
<h3><span class="caps">CH340 </span>on the Mac</h3>
<p>Some programming software, notably the mongoose OS <code>mos</code> tool has trouble with <span class="caps">CH340 </span>based programmers on the Mac. I avoided this by using a spare Linux box instead.</p>
<h2>Firmware</h2>
<p>The original <span class="caps">ESP8266 </span>almost pretended to be a modern <a href="https://en.wikipedia.org/wiki/Modem">modem</a>: it accepted <a href="https://en.wikipedia.org/wiki/Hayes_command_set">Hayes style commands</a> over the serial port, but connected to Wi-Fi rather than the telephone network.</p>
<p>In 2014, Expressif released an <span class="caps">SDK </span>allowing people to replace this, making the <span class="caps">ESP8266 </span><a href="http://hackaday.com/2014/10/25/an-sdk-for-the-esp8266-wifi-chip/">a lot more interesting</a>.</p>
<p>Shortly after this, <a href="https://en.wikipedia.org/wiki/NodeMCU">NodeMCU</a>, an open-source firmware, was released. It allows you to program the <span class="caps">ESP8266 </span>in <a href="https://en.wikipedia.org/wiki/Lua_(programming_language)">Lua</a>, potentially speeding development. You can grab all this from <a href="https://github.com/nodemcu/nodemcu-firmware">GitHub</a>.</p>
<p>These days, support for high-level languages is even better:</p>
<ul>
<li> There’s a <a href="http://docs.micropython.org/en/latest/esp8266/esp8266/tutorial/intro.html">MicroPython port</a>.</li>
<li>The <a href="https://github.com/cesanta/mongoose-os">Mongoose OS</a> project supports a subset of Javascript, and comes with support for the <a href="https://aws.amazon.com/iot/"><span class="caps">AWS</span> IoT</a> service. </li>
</ul>
<h1>Waiting for six</h1>
<p>Martin Oldfield, 2017-06-29</p>
<p>Brief thoughts on a dice-rolling question in David MacKay’s book.</p><h2>Introduction</h2>
<p>Chapter 2 of David MacKay’s excellent book <a href="http://www.inference.org.uk/mackay/itila/book.html"><i>Information Theory, Inference, and Learning Algorithms</i></a> is a general introduction to probability. One of the examples asks about some dice-rolling:</p>
<p class="indented">Fred rolls an unbiased six-sided die once per second, noting the occasions when the outcome is a six.</p>
<ol class="indented alpha">
<li>What is the mean number of rolls from one six to the next six?</li>
<li>Between two rolls, the clock strikes one. What is the mean number of rolls until the next six?</li>
<li>Now think back before the clock struck. What is the mean number of rolls, going back in time, until the most recent six?</li>
<li>What is the mean number of rolls from the six before the clock struck to the next six?</li>
</ol>
<p>When I first encountered this, I found it quite hard to tackle, because it’s one of those problems which is almost trivial if you look at it in the right way, but hard otherwise. The key is to educate your intuition so that you do indeed see it from the right perspective.</p>
<p>I don’t remember how I tackled it in the past, but when I was discussing it recently, it struck me as a nice thing to simulate, and my favourite language for such things is <a href="https://en.wikipedia.org/wiki/Haskell_(programming_language)">Haskell</a>.</p>
<h2>Random Numbers in Haskell</h2>
<p>In many languages, you can generate a random number by calling a function e.g. in <a href="https://docs.python.org/2/library/random.html">python</a> we might simulate a couple of dice-rolls thus:</p>
<pre><code>>>> random.randint(1, 6)
2
>>> random.randint(1, 6)
6</code></pre>
<p>Pedantically this generates a <a href="https://en.wikipedia.org/wiki/Pseudorandomness">pseudo-random</a> number, and behind the function call there’s some hidden state so that repeated calls return different results.</p>
<p>Haskell’s idea of a function is much closer to the mathematical one: in particular functions are <a href="https://en.wikipedia.org/wiki/Pure_function">pure</a> which means that if we call a function again with the same arguments we’ll get the same result. We could proceed by explicitly passing the state of the random number generator around. However, Haskell is <a href="https://en.wikipedia.org/wiki/Lazy_evaluation">lazy</a> and so copes well with infinite lists. So, we can define an infinite list of random rolls happy in the knowledge that the samples will only be generated as they’re needed.</p>
<p>Once defined we can pass the infinite list of rolls around just like any other list. The code which analyses the sequence knows nothing about randomness: it just sees the numbers.</p>
<p>Happily the <a href="https://hackage.haskell.org/package/random/docs/System-Random.html">System.Random</a> package contains all the code we need to generate the list:</p>
<pre><code>ghci> import System.Random
ghci> let rolls = randomRs (1,6) . mkStdGen
ghci> take 10 $ rolls 42
[6,4,2,5,3,2,1,6,1,4]
ghci> sum . take 100000 $ rolls 42
350050</code></pre>
<p>The key remaining difference is that we have to specify an explicit seed (here 42). In other languages this often defaults to some external source of entropy e.g. the system clock.</p>
<p>If you’re unfamiliar with Haskell and find the <code>.</code> and <code>$</code> confusing, this <a href="https://stackoverflow.com/questions/3030675/haskell-function-composition-and-function-application-idioms-correct-us">Stack Overflow</a> article might help.</p>
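<p>If it helps, here is a tiny example of my own (not from the article): <code>.</code> composes functions and <code>$</code> is just low-precedence application, so all three expressions below are equivalent.</p>
<pre><code>total :: [Int] -> Int
total = sum . take 100            -- point-free: build the pipeline now, apply later

main :: IO ()
main = do
  print (sum (take 100 [1 ..]))   -- explicit parentheses
  print (sum . take 100 $ [1 ..]) -- compose with ., then apply with $
  print (total [1 ..])            -- each prints 5050</code></pre>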
<h2>Some helpful utility functions</h2>
<p>Haskell has many list handling functions in the standard <a href="https://hackage.haskell.org/package/base/docs/Data-List.html">Data.List</a> package. However, it will be useful to define several new functions, all of which are simple wrappers around <a href="https://hackage.haskell.org/package/base/docs/Data-List-Split.html">Data.List.Split</a>:</p>
<pre><code>import qualified Data.List.Split as Sp

splitAfter :: (a -> Bool) -> [a] -> [[a]]
splitAfter p = Sp.split (Sp.keepDelimsR $ Sp.whenElt p)

splitBefore :: (a -> Bool) -> [a] -> [[a]]
splitBefore p = Sp.split (Sp.keepDelimsL $ Sp.whenElt p)

takeUntil :: (a -> Bool) -> [a] -> [a]
takeUntil p = head . splitAfter p</code></pre>
<p>Hopefully the names and function signatures are enough to explain what these do, but here are some examples:</p>
<pre><code>ghci> splitAfter (== 'c') "abcdefabcdef"
["abc","defabc","def"]
ghci> splitBefore (== 'c') "abcdefabcdef"
["ab","cdefab","cdef"]
ghci> takeUntil (== 'c') "abcdefabcdef"
"abc"</code></pre>
<p>As is perhaps clear now, our general plan for simulating the dice rolls will be to take the infinite list of rolls, then cut it into sections whose lengths we’ll average.</p>
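<p>As a toy end-to-end illustration of that plan (my sketch, using a dependency-free stand-in for the <code>splitAfter</code> defined above), we can slice the first ten rolls from earlier at each six and measure the pieces:</p>
<pre><code>-- A hand-rolled equivalent of splitAfter, avoiding the Data.List.Split dependency
splitAfter :: (a -> Bool) -> [a] -> [[a]]
splitAfter _ [] = []
splitAfter p xs = case break p xs of
                    (pre, [])       -> [pre]
                    (pre, d : rest) -> (pre ++ [d]) : splitAfter p rest

main :: IO ()
main = do
  let rolls = [6,4,2,5,3,2,1,6,1,4] :: [Int]   -- the first ten values of `rolls 42` above
  print (map length (splitAfter (== 6) rolls)) -- prints [1,7,2]</code></pre>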
<p>With this in mind, we’ll find a few other functions helpful too:</p>
<pre><code>averageOver :: Integral a => Int -> [a] -> Double
averageOver n xs = fromIntegral sigma / fromIntegral n
  where sigma = sum $ take n xs

averageLength :: Int -> [[a]] -> Double
averageLength n = averageOver n . map length</code></pre>
<p>We can easily calculate the mean dice-roll:</p>
<pre><code>ghci> averageOver 10000 $ rolls 42
3.4912</code></pre>
<p>However, we’re usually interested in the average length of a sequence, so here’s a (very artificial) example:</p>
<pre><code>ghci> averageLength 10000 . map (\n -> replicate n 'a') $ rolls 42
3.4912</code></pre>
<p>To see why this works consider what the <code>map</code> does to the first five rolls:</p>
<pre><code>ghci> take 5 $ rolls 42
[6,4,2,5,3]
ghci> take 5 . map (\n -> replicate n 'a') $ rolls 42
["aaaaaa","aaaa","aa","aaaaa","aaa"]</code></pre>
<h2>The simulations</h2>
<p>Having prepared our tools, we can now actually tackle the questions.</p>
<h3>Part A</h3>
<p>The question asks:</p>
<p class="indented">What is the mean number of rolls from one six to the next six?</p>
<p>A reasonable approach is to split the list of rolls whenever we see a six, then measure the lengths of the sublists:</p>
<pre><code>a_seqs :: [Roll] -> [[Roll]]
a_seqs = splitAfter (== 6)</code></pre>
<pre><code>ghci> take 4 . a_seqs $ rolls 42
[[6],[4,2,5,3,2,1,6],[1,4,4,4,1,3,3,2,6],[2,4,1,3,1,1,5,5,5,1,3,6]]
ghci> averageLength 1000 . a_seqs $ rolls 42
6.137
ghci> averageLength 100000 . a_seqs $ rolls 42
5.98306</code></pre>
<p>Unsurprisingly, this answer appears to tend to six as the number of samples tends to infinity. This is easy to show analytically too!</p>
<p>Given that the die is fair, there is a one-sixth chance of rolling a six, and a five-sixths chance of not. So, the chance of having to wait n rolls for a six is:</p>
\[
p(n) = \frac{1}{6} \times \left(\frac{5}{6}\right)^{n-1}.
\]
<p>and thus the mean number of rolls is given by,</p>
\[
\mu_A = \sum_{i = 1}^{\infty} i \ \times \theta \, \left(1 - \theta\right)^{i-1},
\]
<p>where \(\theta = 1/6\).</p>
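<p>Alternatively, the sum can be evaluated by hand, assuming the standard geometric-series identity \(\sum_{i=1}^{\infty} i\,x^{i-1} = 1/(1-x)^2\): with \(x = 1 - \theta\),</p>
\[
\mu_A = \theta \sum_{i = 1}^{\infty} i \, \left(1 - \theta\right)^{i-1}
      = \frac{\theta}{\left(1 - \left(1 - \theta\right)\right)^{2}}
      = \frac{1}{\theta}.
\]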
<p><a href="https://www.wolframalpha.com/input/?i=Sum%5B++i+%2A1%2F6+%2A+%285%2F6%29%5E%28i-1%29%2C+%7Bi%2C+1%2C+Infinity%7D%5D">Evaluating this</a> does indeed give,</p>
\[
\mu_A = \frac{1}{\theta} = 6.
\]
<h3>Parts B & C</h3>
<p>The question asks:</p>
<p class="indented">Between two rolls, the clock strikes one. What is the mean number of rolls until the next six?</p>
<p>The first problem here is that we have no notion of the time in our simulation, so let’s add one:</p>
<pre><code>addTimes :: [a] -> [(Time,a)]
addTimes = zip times
  where times = cycle [0..longPeriod - 1]

ghci> take 5 $ rolls 42
[6,4,2,5,3]
ghci> take 5 . addTimes $ rolls 42
[(0,6),(1,4),(2,2),(3,5),(4,3)]</code></pre>
<p>We’ve replaced the list of rolls with a list of (time,roll) pairs. We need some convention about time: let’s say that “one o’clock” corresponds to t = 0, and the roll happens after the tick. Thus (0,6) means that the roll immediately after the clock struck was a 6.</p>
<p>Simulating this part is a little bit more complicated but it’s not too bad. Begin by splitting the list when the clock chimes, then discard the sequence after the first 6. In code:</p>
<pre><code>b_seqs :: [Roll] -> [[(Time,Roll)]]
b_seqs = map (takeUntil (\(t,r) -> r == 6))
       . splitAfter (\(t,r) -> t == 0)
       . addTimes

ghci> take 3 . b_seqs $ rolls 42
[[(0,6)]
,[(1,4),(2,2),(3,5),(4,3),(5,2),(6,1),(7,6)]
,[(1,3),(2,1),(3,2),(4,6)]]
ghci> averageLength 1000 . b_seqs $ rolls 42
6.21
ghci> averageLength 100000 . b_seqs $ rolls 42
6.01542</code></pre>
<p>The calculation is noticeably slower, but the answer seems to be the same. That seems reasonable: we are just picking random points in the sequence of rolls and starting our count there.</p>
<p>Nothing in the analytic result above cares where we start, so the analytic result is also unchanged!</p>
<p>Although I’ve not done it explicitly here, I think it’s clear that part C is just the same but with time running backwards.</p>
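<p>For concreteness, here is a sketch of how part C might look. This is my code rather than anything from the book: <code>Roll</code>, <code>Time</code> and <code>longPeriod</code> are assumed to match the article’s definitions, and <code>splitBefore</code> and <code>takeUntil</code> are reimplemented without Data.List.Split so that the snippet stands alone.</p>

```haskell
-- A self-contained sketch of part C.  Time runs backwards, so we
-- reverse each chunk that starts at a chime, which lets us count from
-- the roll just before a chime back to the most recent six.
type Roll = Int
type Time = Int

longPeriod :: Time
longPeriod = 100

addTimes :: [a] -> [(Time,a)]
addTimes = zip (cycle [0 .. longPeriod - 1])

-- Break the list immediately before each element satisfying p.
splitBefore :: (a -> Bool) -> [a] -> [[a]]
splitBefore p xs = case break p xs of
                     (pre, [])     -> [pre]
                     (pre, y : ys) -> pre : go y ys
  where go y ys = case break p ys of
                    (pre, [])     -> [y : pre]
                    (pre, z : zs) -> (y : pre) : go z zs

-- Keep elements up to and including the first one satisfying p.
takeUntil :: (a -> Bool) -> [a] -> [a]
takeUntil p xs = case span (not . p) xs of
                   (pre, [])    -> pre
                   (pre, y : _) -> pre ++ [y]

c_seqs :: [Roll] -> [[(Time,Roll)]]
c_seqs = map (takeUntil (\(_,r) -> r == 6) . reverse)
       . filter (not . null)
       . splitBefore (\(t,_) -> t == 0)
       . addTimes
```

<p>Averaging the lengths of these chunks over random rolls should again tend to six, in line with the argument above.</p>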
<h3>Part D</h3>
<p>The question asks:</p>
<p class="indented">What is the mean number of rolls from the six before the clock struck to the next six?</p>
<p>Today, it seems sensible to me to tackle this question by splitting the list of rolls every time we see a six, then throwing out those sequences in which the clock doesn’t chime. The code is straightforward and gives the right answer:</p>
<pre><code>d_seqs :: [Roll] -> [[(Time,Roll)]]
d_seqs = filter (any (\(t,r) -> t == 0))
       . splitAfter (\(t,r) -> r == 6)
       . addTimes

ghci> averageLength 100000 . d_seqs $ rolls 42
11.1093</code></pre>
<p>However, I worry somewhat that this code is only easy to write because I know how to think about the question. So, here’s a messier approach which relates more closely to the words in MacKay’s book.</p>
<p>We begin by adding another component to the list of rolls, which counts how long it’s been since we rolled a six. A couple of caveats: it fudges the start of the sequence, and it resets the count in the tuple <i>after</i> the six is rolled:</p>
<pre><code>addTimeSince6 :: [Roll] -> [(Roll,Int)]
addTimeSince6 = tail . addT
  where addT xs = (0,0) : zipWith f xs (addT xs)
        f r (r',i) = (r, if r' == 6 then 1 else i + 1)

ghci> addTimeSince6 [1,2,6,1,2,6,1,2]
[(1,1),(2,2),(6,3),(1,1),(2,2),(6,3),(1,1),(2,2)]
ghci> addTimes . addTimeSince6 $ [1,2,6,1,2,6,1,2]
[(0,(1,1)),(1,(2,2)),(2,(6,3)),(3,(1,1)),(4,(2,2)),(5,(6,3)),(6,(1,1)),(7,(2,2))]</code></pre>
<p>If we annotate this sequence with the time, and split it where the clock chimes, we can read off the length directly by looking for the first six in the break:</p>
<pre><code>d_lengths' :: [Roll] -> [Int]
d_lengths' = tail
           . map len
           . splitBefore (\(t,(r,q)) -> t == 0)
           . addTimes
           . addTimeSince6
  where len [] = 0
        len xs = head [ q | (t,(r,q)) <- xs, r == 6]

ghci> averageOver 3000 . d_lengths' $ rolls 42
11.08</code></pre>
<p>The code is slow (which is why we only consider 3,000 samples) but it appears to get the same result! If we wanted to be sure, we can compare the lengths directly:</p>
<pre><code>*Main> take 20 . map length . d_seqs $ rolls 42
[1,14,43,7,13,8,6,9,5,7,27,41,1,3,22,14,8,9,18,9]
*Main> take 20 . d_lengths' $ rolls 42
[1,14,43,7,13,8,6,9,5,7,27,41,1,3,22,14,8,9,18,9]</code></pre>
<p>Happily these sequences agree, at least up to the first twenty terms. Less happily, I think the code is rather messy, and probably reasonably opaque if you’re not familiar with Haskell.</p>
<p>Finally, let’s derive this analytically. The key insight is that the chance of the clock chiming during a particular run between sixes depends on the length of that run: the chime is equally likely to fall on any roll, so longer sequences contain more rolls and are more likely to contain the chime. Thus the probability of being in a sequence of length \(i\) when the clock chimes is,</p>
\[
q_i \propto i \times p_i,
\]
<p>Now,</p>
\[
p_i \propto \left(1 - \theta\right)^{i - 1},
\]
<p>So,</p>
\[
q_i \propto i \times \left(1 - \theta\right)^{i-1}.
\]
<p>Thus, multiplying all terms by \(1 - \theta\), the mean number is given by</p>
\[
\mu_D = \frac{\sum_{i = 1}^{\infty}{i^2\, \left(1 - \theta\right)^i}}{\sum_{i = 1}^{\infty}{i\, \left(1 - \theta\right)^i}}.
\]
<p><a href="https://www.wolframalpha.com/input/?i=Sum%5B++i%5E2+%2A+%285%2F6%29%5Ei%2C+%7Bi%2C+1%2C+Infinity%7D%5D+%2F+Sum%5B++i+%2A+%285%2F6%29%5Ei%2C+%7Bi%2C+1%2C+Infinity%7D%5D">Evaluating this</a> does indeed show that</p>
\[
\mu_D = \frac{2}{\theta} - 1 = 11.
\]
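<p>For a by-hand check, assuming the standard power-series sums \(\sum_{i \ge 1} i\,x^i = x/(1-x)^2\) and \(\sum_{i \ge 1} i^2\,x^i = x(1+x)/(1-x)^3\), writing \(x = 1 - \theta\) gives</p>
\[
\mu_D = \frac{x\left(1+x\right)/\left(1-x\right)^3}{x/\left(1-x\right)^2}
      = \frac{1+x}{1-x}
      = \frac{2-\theta}{\theta}
      = \frac{2}{\theta} - 1.
\]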
<h2>Other ways to (roll a) die</h2>
<p>As we mentioned above, because we hide all the randomness behind the infinite list of rolls, it is easy to consider other situations without changing the analysis code.</p>
<h3>A dodgy die</h3>
<p>Suppose we have a dodgy die which <a href="https://www.google.co.uk/search?q=monty+python+string+%22due+to+bad+planning%22&oq=monty+python+string+%22due+to+bad+planning%22">due to bad planning</a> has six dots painted where there should be five. In other words, the chance of rolling a six is now one-third.</p>
<p>How does this affect our answers?</p>
<pre><code>addBias :: [Roll] -> [Roll]
addBias = map (\i -> if i == 5 then 6 else i)
ghci> take 10 $ rolls 42
[6,4,2,5,3,2,1,6,1,4]
ghci> take 10 . addBias $ rolls 42
[6,4,2,6,3,2,1,6,1,4]
ghci> averageLength 10000 . a_seqs . addBias $ rolls 42
3.0129
ghci> averageLength 10000 . d_seqs . addBias $ rolls 42
4.958</code></pre>
<p>So, it seems that the means change to 3 and 5.</p>
<p>Sure enough, if we put \(\theta = 1/3\) into the expressions we derived above we find that:</p>
\[
\mu_A = 3, \mu_D = 5.
\]
<h3>A magic die</h3>
<p>Now suppose that instead of an incompetent manufacturer, our die came from a magician. In particular, suppose that the die always rolls the sequence 1,2,3,4,5,6,1,2,... We will not worry about how such a die might be made, because it’s easy to simulate:</p>
<pre><code>magicRolls :: [Roll]
magicRolls = cycle [1..6]
ghci> take 10 $ magicRolls
[1,2,3,4,5,6,1,2,3,4]
*Main> averageLength 10000 . a_seqs $ magicRolls
6.0
*Main> averageLength 10000 . d_seqs $ magicRolls
6.0</code></pre>
<p>\(\mu_A\) stays the same, but \(\mu_D\) is now six! To understand why, notice that the magic die rolls a six on every sixth roll, and so the sequences between sixes are <i>all</i> precisely six rolls long.</p>
<p>This means that the chime is equally likely to fall into any of the sequences, and whichever one it picks is bound to be six rolls long.</p>
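<p>We can check this directly: every inter-six run of the magic die has length six. Here is a small self-contained sketch (my code, with <code>splitAfter</code> reimplemented without Data.List.Split so it runs on its own):</p>

```haskell
-- Break the list immediately after each element satisfying p.
splitAfter :: (a -> Bool) -> [a] -> [[a]]
splitAfter p xs = case break p xs of
                    (pre, [])     -> [pre]
                    (pre, y : ys) -> (pre ++ [y]) : splitAfter p ys

magicRolls :: [Int]
magicRolls = cycle [1..6]

-- True: the first thousand inter-six runs all have length six
allSix :: Bool
allSix = all ((== 6) . length) . take 1000 . splitAfter (== 6) $ magicRolls
```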
<h2>Implementation notes</h2>
<h3>Performance issues</h3>
<p>Although Haskell is a very pleasant environment for doing this sort of work, in practice you can hit performance issues. There are a couple of problems:</p>
<ol>
<li>The ghci <span class="caps">REPL </span>doesn’t compile the code efficiently, so in practice you’re better off compiling the code as a library, then loading the object into ghci.</li>
<li>There are space leaks in the code, so it doesn’t scale well. For the simple sequence stuff, this isn’t a problem, but for the second part D calculation it is a nuisance.</li>
</ol>
<h3>Day length</h3>
<p>Although the question talks about a daily chime, in practice all we need are events sufficiently widely spaced that they won’t interact. This translates to saying that we can be confident that a six will occur between two sets of chimes.</p>
<p>Days are 86,400 seconds long, so one o’clock chimes occur at least 43,200 seconds apart. However, given that</p>
\[
\left(\frac{5}{6}\right)^{100} \approx 1.2 \times 10^{-8}
\]
<p>it seems enough to work in a world where the chimes happen every 100 seconds. The code above does this, and runs much more quickly as a result. </p>9F868D5C-2B5E-11E7-841B-D5B734E6D3122017-04-27T15:31:20:20Z2017-04-27T21:28:59:59ZApplicative IOMartin Oldfield<p>Some benefits to using the Applicative instance of <span class="caps">IO, </span>which I find particularly useful for casual use in ghci. </p><h2>Introduction</h2>
<p>Haskell famously has ‘monadic IO’, which allows us to compose IO actions with bind. For example, here’s a crude version of <a href="https://en.wikipedia.org/wiki/Cp_%28Unix%29">cp:</a></p>
<pre><code>cp ifn ofn = readFile ifn >>= writeFile ofn</code></pre>
<p>In practice we would normally use do-notation:</p>
<pre><code>cp ifn ofn = do
  txt <- readFile ifn
  writeFile ofn txt</code></pre>
<p>Unless of course we’re working in ghci, where one-line expressions seem much more convenient to me—perhaps because I wrote a lot of Perl in the past.</p>
<h2>Messing around in ghci</h2>
<p>In the example above, bind cleanly chains the IO actions. As a reminder it has type:</p>
<pre><code>(>>=) :: Monad m => m a -> (a -> m b) -> m b</code></pre>
<p>Sometimes though we don’t want to put the data into another IO action, but rather analyse it, hopefully with a pure function. Bind looks less helpful here because our results will typically not be in IO i.e. we have (a -> b) not (a -> m b).</p>
<p>Staying with bind for now though, we’ll need a way to make an IO action to display our results. Enter print:</p>
<pre><code>print :: Show a => a -> IO ()</code></pre>
<p>So we could make a <a href="https://en.wikipedia.org/wiki/Wc_%28Unix%29">wc</a> clone thus:</p>
<pre><code>ghci> readFile "/usr/share/dict/words" >>= print . length . words
235886</code></pre>
<p>Gosh that’s ugly! Remembering that the ghci prompt is a bit like a do-block, perhaps a multi-line approach would help:</p>
<pre><code>ghci> dict <- readFile "/usr/share/dict/words"
ghci> print . length . words $ dict
235886</code></pre>
<p>Or, because dict is just a String to the ghci prompt:</p>
<pre><code>ghci> length . words $ dict
235886</code></pre>
<p>It is perhaps worth pointing out that ghci gives us considerable latitude to provide a result for it to print: these all look the same but have different types:</p>
<pre><code>print . length . words $ dict :: IO ()
length . words $ dict :: Int
return . length . words $ dict :: Monad m => m Int</code></pre>
<p>Anyway, all of these multi-line approaches are a bit fiddly in ghci because you have to re-execute multiple lines if you reload files.</p>
<p>It’s worth noting that using bind forces us to make an action, and so requires the print. However, we can improve things by using liftM from Control.Monad:</p>
<pre><code>ghci> liftM (length . words) $ readFile "/usr/share/dict/words"
235886</code></pre>
<p>ghci is happy to print this IO Int. I think this is better than the bind approaches, but it’s still quite noisy.</p>
<h2>Applicative IO</h2>
<p>Now, where there’s a monad, there’s also an <a href="https://hackage.haskell.org/package/base-4.9.1.0/docs/Control-Applicative.html">applicative,</a> and it struck me the other day that this applies to <span class="caps">IO.</span> So, we can actually write wc as:</p>
<pre><code>ghci> length . words <$> readFile "/usr/share/dict/words"
235886</code></pre>
<p>which strikes me as nice and simple.</p>
<p>We can’t use applicative for all the IO tasks in ghci: we will need the full power of the monad to collapse the IO (IO a) we get from chaining IO actions directly. For example, the result above is in <span class="caps">IO, </span>so even if we convert it into a String we can’t chain it to writeFile:</p>
<pre><code>ghci> :t length . words <$> readFile "/usr/share/dict/words"
length . words <$> readFile "/usr/share/dict/words" :: IO Int
ghci> writeFile "foo" $ show . length . words
<$> readFile "/usr/share/dict/words"
<interactive>:19:20: error:
• Couldn't match type ‘IO’ with ‘[]’
Expected type: String
Actual type: IO String
... </code></pre>
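<p>For completeness, here is one way bind rescues the failing example: run the counting action first, then feed its result to writeFile. This is a sketch with made-up file names, not code from the post:</p>

```haskell
-- Collapse the nested IO with (=<<): the count is computed in IO,
-- then handed to writeFile as a pure String.
countWords :: FilePath -> FilePath -> IO ()
countWords inf outf = writeFile outf . show =<< length . words <$> readFile inf
```

<p>This reads almost as cleanly as the applicative version, but it can write its answer out rather than just printing it.</p>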
<p>However, for a class of tasks, I think using applicative is a nice simplification. We are not just limited to a single action. Here’s a very crude riff on <a href="https://en.wikipedia.org/wiki/Diff_utility">diff:</a></p>
<pre><code>ghci> (==) <$> readFile "/usr/share/dict/words"
<*> readFile "/usr/share/dict/words"
True
ghci> (==) <$> readFile "/usr/share/dict/words"
<*> readFile "/usr/share/dict/propernames"
False</code></pre>
<p>It is even easy to diff against a fixed string:</p>
<pre><code>ghci> (==) <$> readFile "/usr/share/dict/words" <*> pure "Banana"
False</code></pre>
<h2>Functor IO</h2>
<p>For the simple case of one argument, we could also replace the applicative <$> with fmap:</p>
<pre><code>ghci> fmap (length . words) $ readFile "/usr/share/dict/words"
235886</code></pre>
<p>but I think that’s less clear.</p>
<h2>Conclusions</h2>
<p>I think it’s often pretty useful and productive to explore code from the ghci prompt. However, things got messy if part of the exploration needed data from files, which discouraged me from doing the right thing.</p>
<p>I think using IO’s applicative instance solves a lot of the problems, and am somewhat annoyed I didn’t think of it before.</p>

<h1>Haskell Toys</h1>
<p><i>Martin Oldfield, 2016-05-08</i></p>
<p>An attempt to abstract away the boilerplate when writing little command-line toys in Haskell.</p>
<h2>Rationale</h2>
<p>I find Haskell a fine tool for writing little utility programs. It’s succinct and elegant, and programs tend to just work surprisingly often. Often I’ll just write a single .hs file and the job’s done.</p>
<p>However there are a couple of issues:</p>
<ul>
<li>Some of the cleverer libraries e.g. JuicyPixels use the type system to great effect, letting you work with images at a very abstract level. I find this makes it a bit harder to use them from e.g. the ghci prompt without peppering things with more concrete type annotations.</li>
<li>I can be quite lazy when writing such utilities. Suppose I have three images which I want to munge in a particular way, and save to new files. I think a good approach would be to generate a suitable name for each output file automatically, then iterate over the three input names. In practice, I’ve tended to just invoke a command three times, manually specifying the output names.</li>
</ul>
<p>This simple library is an attempt to solve these. It is deliberately <em>opinionated</em> in that it provides functions which I find useful, whilst making it harder to access things which I use less often. You might think of it as a compression scheme tailored to give short expressions for things I personally use often.</p>
<h2>Practical matters</h2>
<p>You can get the code from <a href="https://github.com/mjoldfield/haskell-toys">GitHub</a>.</p>
<p>Haddock <a href="https://s3.amazonaws.com/mjoldfield-stack-docs/haskell-toys/index.html">documentation</a> is also available.</p>
<p>If the two resources are in sync, that’s purely coincidental: if you actually care about this, grab the code and generate the docs locally.</p>
<h2>Examples</h2>
<p>Most of the functions are short enough to read quickly, and are documented in the source. So, I present here some (somewhat contrived) examples.</p>
<h3><a href="https://s3.amazonaws.com/mjoldfield-stack-docs/haskell-toys/Toy-Generic.html">Toy.Generic</a></h3>
<h4>processGeneric</h4>
<p>If you’re iterating over some input files, generating a separate output file for each input, it can be fiddly to generate good names for the outputs. This tries to help:</p>
<ul>
<li>it programmatically changes the basename;</li>
<li>it assigns a fixed suffix, because we expect output to have a fixed format.</li>
</ul>
<p>Note: it only processes one file, so you’ll typically call it repeatedly.</p>
<h4>processArgs</h4>
<p>This seems hardly worthwhile for now, but I have some vague plans to add functionality. Then again, <span class="caps">YAGNI </span>usually wins!</p>
<h4>An example</h4>
<p>Here’s a simple utility to cast the contents of text files into upper-case:</p>
<pre><code>import Toy.Generic
import Data.Char

main = processArgs $ processGeneric ucContents (++ "-u") "txt"

ucContents :: FilePath -> FilePath -> IO ()
ucContents = transformText (map toUpper)

transformText :: (String -> String) -> FilePath -> FilePath -> IO ()
transformText tx inf outf = do
  orig <- readFile inf
  let new = tx orig
  writeFile outf new
</code></pre>
<p><code>transformText</code> should probably be included in a (presently non-existent) Toy.Text module.</p>
<p>You might use it like this:</p>
<pre><code>$ echo abc > abc.txt
$ echo def > def.text
$ ls *.txt
abc.txt def.text
$ stack exec ht-uc-text *.txt *.text
$ ls *.txt
abc-u.txt abc.txt def-u.txt def.text
$ cat abc-u.txt
ABC </code></pre>
<h3><a href="https://s3.amazonaws.com/mjoldfield-stack-docs/haskell-toys/Toy-JuicyPixels.html">Toy.JuicyPixels</a></h3>
<p><a href="http://hackage.haskell.org/package/JuicyPixels">JuicyPixels</a> is a great package for loading and saving images, and I often use it to munge images in command line tools.</p>
<p>Toy.JuicyPixels deals exclusively with PixelRGB8 images because most of the images I encounter are in that format.</p>
<h4>loadImage, loadImageThen</h4>
<p>As their names suggest, these load images but, unlike readImage in the JuicyPixels library, force the loaded data into PixelRGB8 format.</p>
<p>loadImageThen also handles error conditions by printing an error message: this simplifies life for the user.</p>
<p>Here’s a complete <span class="caps">GHCI </span>session showing code to extract the width of an image:</p>
<pre><code>$ stack ghci
Run from outside a project, using implicit global project config
...
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
Ok, modules loaded: none.
ghci> import Codec.Picture
ghci> import Toy.JuicyPixels
ghci> loadImageThen (putStrLn . show . imageWidth) "avatar.png"
420 </code></pre>
<h4>transformImagePNG, transformImagePNG'</h4>
<p>These load an image, transform its contents, then save the result as a <span class="caps">PNG </span>file. The <span class="caps">PNG</span>' version uses a fixed basename transformation: adding -x.</p>
<p>Again here’s an example in ghci: swapping red and blue channels:</p>
<pre><code>$ stack ghci
Run from outside a project, using implicit global project config
...
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
Ok, modules loaded: none.
ghci> import Codec.Picture
ghci> import Toy.JuicyPixels
ghci> let px = pixelMap (\(PixelRGB8 r g b) -> PixelRGB8 b g r)
ghci> transformImagePNG' px "bird.jpg"</code></pre>
<h4>transformImagesInArgsPNG</h4>
<p>As above, but iterate over the command line arguments, this time in an application.</p>
<pre><code>import Toy.JuicyPixels
import Codec.Picture
main = transformImagesInArgsPNG px (++ "-f")
px :: ImageRGB8 -> ImageRGB8
px = pixelMap (\(PixelRGB8 r g b) -> PixelRGB8 b g r) </code></pre>
<h4>transformImage, transformImagesInArgs</h4>
<p>These are more general versions which can write any file format. Accordingly the caller must provide code to save the data, and an appropriate suffix. Unsurprisingly the <span class="caps">PNG </span>versions are implemented in terms of these. For example:</p>
<pre><code>transformImagePNG = transformImage writePng "png"</code></pre>
<h4>describeImage, describeImagesInArgs</h4>
<p>Sometimes, it’s enough to extract data from the image and print it out. The extracted value need not be a String: any Show instance will do.</p>
<p>Here’s a reimplementation of our image width displayer:</p>
<pre><code>ghci> describeImage imageWidth "avatar.png"
avatar.png:
420</code></pre>
<p>Note that as well as the output, it also prints the filename.</p>
<h4>iPixelList, pixelList</h4>
<p>Return a list of pixels in the image. This isn’t very efficient, but it can be convenient.</p>
<p>ht-pixel-freqs, one of the toy applications in the distribution uses this:</p>
<pre><code>import Toy.JuicyPixels
import Codec.Picture
import qualified Data.List as L
import Text.Printf
main = describeImagesInArgs countPixels
countPixels :: ImageRGB8 -> String
countPixels = concatMap pp . freqs . pixelList
freqs :: (Ord a, Eq a) => [a] -> [(a,Int)]
freqs = map (\ps -> (head ps, length ps)) . L.group . L.sort
pp (PixelRGB8 r g b, n) = printf "%3d %3d %3d: %8d\n" r g b n </code></pre>
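<p>To make the behaviour of <code>freqs</code> concrete, here is a tiny example (the function is copied from the listing above so the snippet stands alone):</p>

```haskell
import qualified Data.List as L

-- freqs as in the listing: count occurrences of each distinct element
freqs :: (Ord a, Eq a) => [a] -> [(a,Int)]
freqs = map (\ps -> (head ps, length ps)) . L.group . L.sort

-- e.g. freqs "abbccc" == [('a',1),('b',2),('c',3)]
```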
<h4>liftRGB, memoizeWord8</h4>
<p>A couple of combinators to help with efficient, colour-blind transforms. The haskell-toys distribution includes a program which flips the bits in each pixel, which I find helpful to reveal hidden low-level structure.</p>
<pre><code>import Toy.JuicyPixels
import Codec.Picture
import Data.Word
import Data.Bits
import qualified Data.List as L
main = transformImagesInArgsPNG flipImage (++ "-f")
flipImage :: ImageRGB8 -> ImageRGB8
flipImage = pixelMap (liftRGB flipByte)
flipByte :: Word8 -> Word8
flipByte = memoizeWord8 flipByte'
flipByte' :: Word8 -> Word8
flipByte' x = L.foldl' setBit 0 $ [ 7 - i | i <- [0..7], testBit x i ]</code></pre>

<h1>Extra-project Stack</h1>
<p><i>Martin Oldfield, 2016-05-08</i></p>
<p>Brief notes on using the Haskell Tool Stack with local repositories, but without a specific project.</p>
<h2>Introduction</h2>
<p>In my experience <a href="http://docs.haskellstack.org/en/stable/README/">stack</a> works pretty well out-of-the-box when you use it to compile projects whose dependencies can be found in Stackage.</p>
<p>In fact, stack works so well that I could use it without understanding how it worked. All fine, until I wanted to use stack for standalone .hs files, or use a local library. All the information below is in stack’s excellent <a href="http://docs.haskellstack.org/en/stable/README/">official documentation</a>, but I found it hard to put together.</p>
<h2>Normal operation</h2>
<p>It seems wise to document how I understand stack works, at least the bits which are relevant to the discussions below. The details of this might well be wrong: regard it less as an accurate description of stack than an approximation which might be helpful later.</p>
<p>Suppose we have some project which we want to compile with stack. To zeroth order it looks just like a normal cabal project. In particular, there’s a foo.cabal file which lists the targets, the dependencies and so on. None of that changes under stack. We do however, get a couple of new things in the project directory:</p>
<ul>
<li>stack.yaml which lets us specify extra stuff;</li>
<li>a .stack-work directory in which everything is built.</li>
</ul>
<p>When we execute e.g. <code>stack build</code> stack compiles everything, dependencies and all, in the .stack-work directory. I think it’s worth reiterating that stack knows what to build, and which dependencies are needed, by looking in the project’s .cabal file.</p>
<h2>Project-free use</h2>
<p>Of course there’s nothing to stop us invoking stack outside a project, but that raises the question of what stack will do about the .cabal and stack.yaml files, and the .stack-work directory.</p>
<p>Let’s proceed by experimentation, and launch ghci in my home directory:</p>
<pre><code>$ stack ghci
Run from outside a project, using implicit global project config
Using resolver: lts-5.15 from implicit global project's config file:
/Users/mjo/.stack/global-project/stack.yaml
Error parsing targets: The specified targets matched no packages.
Perhaps you need to run 'stack init'?
Warning: build failed, but optimistically launching GHCi anyway
Configuring GHCi with the following packages:
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
Ok, modules loaded: none.
Prelude> </code></pre>
<p>Well that explains what’s happening about stack.yaml: stack has set up a global-project directory under ~/.stack, and is using the stack.yaml file within it. Further experiments would show that this directory is used every time we invoke stack outside a project. Effectively, there’s a single ‘not-in-a-real-project’ project, though perhaps it should have been called global-noproject!</p>
<p>You might guess that this is where we’d find .stack-work too, and you’d be right:</p>
<pre><code>$ ls -la ~/.stack/global-project/
total 20
drwxr-xr-x 6 mjo staff 204 8 May 13:24 .
drwxr-xr-x 12 mjo staff 408 8 May 07:37 ..
drwxr-xr-x 5 mjo staff 170 8 May 07:36 .stack-work
-rw-r--r-- 1 mjo staff 103 8 May 01:16 README.txt
-rw-r--r-- 1 mjo staff 572 8 May 13:24 stack.yaml </code></pre>
<p>Finally we come to the missing .cabal file. This shouldn’t really be a surprise, because it’s precisely the situation we had in the pre-stack era. This does mean though that there’s nowhere to specify dependencies, so we’ll have to manage them manually as we used to do with cabal. Whether this means there’s potential for stack global-project hell isn’t clear to me!</p>
<p>As an example, suppose we want to play with <a href="https://hackage.haskell.org/package/digits-0.2/docs/Data-Digits.html">Data.Digits</a> in ghci. This module is part of the digits package, which is included in stackage. However, just trying to import it fails:</p>
<pre><code>mjo$ stack ghci
Run from outside a project, using implicit global project config
...
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
Ok, modules loaded: none.
Prelude> import Data.Digits
<no location info>:
Could not find module ‘Data.Digits’
Perhaps you meant Data.Bits (from base-4.8.2.0)
Prelude>
Leaving GHCi. </code></pre>
<p>The solution, just as it was with cabal is to install the package first (in stack’s global-project/.stack-work directory):</p>
<pre><code>mjo$ stack install digits
Run from outside a project, using implicit global project config
Using resolver: lts-5.15 from implicit global project's config file:
/Users/mjo/.stack/global-project/stack.yaml
tf-random-0.5: download
tf-random-0.5: configure
tf-random-0.5: build
tf-random-0.5: copy/register
QuickCheck-2.8.1: download
QuickCheck-2.8.1: configure
QuickCheck-2.8.1: build
QuickCheck-2.8.1: copy/register
digits-0.2: download
digits-0.2: configure
digits-0.2: build
digits-0.2: copy/register
Completed 3 action(s).</code></pre>
<p>You can see that stack handled digits’ dependencies for us. Now, it works:</p>
<pre><code>mjo$ stack ghci
Run from outside a project, using implicit global project config
...
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
Ok, modules loaded: none.
Prelude> import Data.Digits
Prelude Data.Digits> </code></pre>
<h2>Local repositories</h2>
<p>Sometimes we might want to use a repository outside stackage, and happily stack supports this. All the information you need is in the <a href="http://docs.haskellstack.org/en/stable/yaml_configuration/">documentation</a>, though I found it fiddly to get right.</p>
<p>As an example, I want to use a repository on GitHub. At first assume that I’m using it from another package.</p>
<p>Firstly, we need to add the name of the distribution to the .cabal file:</p>
<pre><code>...
build-depends: base >= 4.7 && < 5
, haskell-toys
...</code></pre>
<p>Then we need to tell stack how to find it. Since this is information stack needs, we put it in the project’s stack.yaml file:</p>
<pre><code>... # Local packages, usually specified by relative directory name
packages:
- location: '.'
- location:
git: https://github.com/mjoldfield/haskell-toys.git
commit: ebeb6d6ea490db92b5ec2f68e8075887c6a6994f
extra-dep: true
...</code></pre>
<p>At first I found the meaning of this rather confusing. Now I think of it as telling stack where to find .cabal files for packages outside of stackage:</p>
<ul>
<li>in the project directory;</li>
<li>in the specified commit in the GitHub repo.</li>
</ul>
<p>Specifying a particular commit in the repo seems to work better with respect to upgrades than just specifying the master tarball.</p>
<p>There’s also the <code>extra-dep: true</code> line, which means that stack should treat the GitHub location as a dependency i.e. more like the things we get from stackage rather than as an addition to the local project files.</p>
<p>Having added those lines to stack.yaml, it all just works.</p>
<h3>The extra-project case</h3>
<p>As discussed above, stack handles project-free files by effectively putting them into a special global-project. So, you might think that all you need to do is add the location to the global-project/stack.yaml file. You do!</p>
<p>What you must <em>not</em> do is also add the <code>location: .</code> line as well. If you do, stack seems to think that global-project is just a normal project, and so tries to read a .cabal file. This fails and stack tells you, but I didn’t quite understand <a href="https://github.com/commercialhaskell/stack/issues/2115">the issue</a>.</p>
<p>Thanks to Michael Sloan for explaining my mistake so quickly.</p>
<p>Explicitly then, here are the steps I needed to take:</p>
<ul>
<li>add the location to global-project/stack.yaml;</li>
<li>run <code>stack update</code>;</li>
<li>run e.g. <code>stack ghci</code> and enjoy my haskell-toys. </li>
</ul>

<h1>OrUnit</h1>
<p><i>Martin Oldfield, 2016-05-01</i></p>
<p>Simple experiments with polymorphic return types in Haskell.</p>
<h2>Motivation</h2>
<p>I have some code which is presently run for its side effects, but I’d like it to return some data too. It’s analogous to extending <code>writeFile</code> to return the size of the contents:</p>
<pre><code>writeFileAndCount :: FilePath -> String -> IO Int
writeFileAndCount path contents = do
  writeFile path contents
  return $ length contents </code></pre>
<p>All well and good! Now we could just have two functions, one which returns <code>IO ()</code> and the other which returns <code>IO Int</code>, but it seems a shame to pollute the namespace. Instead it would be nice if we could say e.g.:</p>
<pre><code>ghci> writeFile' "foo.txt" "Hello" :: IO ()
ghci> writeFile' "foo.txt" "Hello" :: IO Int
5</code></pre>
<p>Happily we can!</p>
<h2>A mathematical analogy</h2>
<p>You’ve probably already seen code whose return type changes to match the context. For example in Haskell’s maths libraries many functions will return either <code>Float</code> or <code>Double</code>. Here’s <code>sqrt</code>:</p>
<pre><code>ghci> (sqrt 2) :: Float
1.4142135
ghci> (sqrt 2) :: Double
1.4142135623730951</code></pre>
<p>To see how this works, look at the type of <code>sqrt</code>:</p>
<pre><code>ghci> :t sqrt
sqrt :: Floating a => a -> a</code></pre>
<p>There’s no mention of <code>Double</code> or <code>Float</code> there. Instead, we see that <code>sqrt</code> will work with any instance of the <code>Floating</code> typeclass.</p>
<p>Under the covers we’d expect different instances of <code>sqrt</code>: one <code>Double -> Double</code>, another <code>Float -> Float</code>. Having inferred the relevant type, the compiler will then pick the particular instance we need.</p>
<p>Conceptually we might have:</p>
<pre><code>sqrtD :: Double -> Double
sqrtF :: Float -> Float
sqrt :: (Floating a) => a -> a</code></pre>
<p>Note that none of these functions changes the type: we can’t implicitly convert e.g. a <code>Float</code> to a <code>Double</code>:</p>
<pre><code>ghci> (sqrt (2 :: Float)) :: Double
<interactive>:5:8:
Couldn't match expected type ‘Double’ with actual type ‘Float’
In the first argument of ‘sqrt’, namely ‘(2 :: Float)’
In the expression: (sqrt (2 :: Float)) :: Double
In an equation for ‘it’: it = (sqrt (2 :: Float)) :: Double </code></pre>
<p>This is because the signature has just one degree-of-freedom:</p>
<pre><code>sqrt :: a -> a</code></pre>
<p>rather than</p>
<pre><code>sqrt :: a -> b</code></pre>
<h2>A polymorphic wrapper</h2>
<p>Having seen that <code>sqrt</code> can choose different code in different contexts, let’s try to write a combinator which either passes a value unchanged, or converts it to <code>()</code>.</p>
<p>By analogy with <code>sqrt</code> consider combining:</p>
<pre><code>toId :: a -> a
toUnit :: a -> ()</code></pre>
<p> Although the first signature accepts and returns the same type, the second doesn’t. So it makes sense to invent a type class with two parameters. On a technical level, this means we’ll need the <a href="https://wiki.haskell.org/Multi-parameter_type_class"><code>MultiParamTypeClasses</code></a> <span class="caps">GHC </span>extension.</p>
<p>Here’s a suitable type class:</p>
<pre><code>class OrUnit b a where
  orUnit :: a -> b</code></pre>
<p>We also need a couple of instances:</p>
<pre><code>instance OrUnit () a where
  orUnit a = ()

instance OrUnit a a where
  orUnit a = a</code></pre>
<p>In the first instance above, the <code>()</code> is a concrete type rather than a variable, so we’ll also need the <a href="http://connectionrequired.com/blog/2009/07/my-first-introduction-to-haskell-extensions-flexibleinstances/"><code>FlexibleInstances</code></a> extension.</p>
<p>Given this, we can write things like this:</p>
<pre><code>ghci> orUnit 'a' :: Char
'a'
ghci> orUnit 'a' :: ()
()</code></pre>
<p>or indeed our original goal of <code>writeFile'</code>:</p>
<pre><code>writeFile' :: (OrUnit a Int) => FilePath -> String -> IO a
writeFile' path contents = liftM orUnit
                           $ writeFileAndCount path contents</code></pre>
<p>We need <code>liftM</code> to lift <code>orUnit</code> into the IO Monad.</p>
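<p>For reference, here is everything assembled into one compilable sketch: the class, the two instances, and <code>writeFile'</code>, together with the extension pragmas discussed in this article and the <code>Control.Monad</code> import for <code>liftM</code>:</p>
<pre><code>{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad (liftM)

-- A value of type a can be delivered either as itself or as ().
class OrUnit b a where
  orUnit :: a -> b

-- Discard the value ...
instance OrUnit () a where
  orUnit _ = ()

-- ... or pass it through unchanged.
instance OrUnit a a where
  orUnit a = a

writeFileAndCount :: FilePath -> String -> IO Int
writeFileAndCount path contents = do
  writeFile path contents
  return $ length contents

-- The caller's type annotation selects () or Int.
writeFile' :: (OrUnit a Int) => FilePath -> String -> IO a
writeFile' path contents = liftM orUnit $ writeFileAndCount path contents</code></pre>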
<h2>Problems</h2>
<p>Although we’ve met our original goal of adding optional return data from a function whilst keeping compatibility with old code, it isn’t perfect.</p>
<p>We saw above that we need <span class="caps">GHC </span>extensions to compile the module. Sadly we also need to enable extensions when using it:</p>
<pre><code>{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}</code></pre>
<p>So, it doesn’t give a completely backwards-compatible way to extend the original <span class="caps">API.</span></p>
<p>Also the extra flexibility we enjoy seems to force us to specify explicit types more often. Perhaps other <span class="caps">GHC </span>extensions would help here.</p>
<h2>The code</h2>
<p>You can grab the code from <a href="https://github.com/mjoldfield/or-unit">GitHub</a>. It’s just for fun, and consequently I’ve not uploaded it to hackage. </p>E0C4D26C-0FC9-11E6-A2BE-F8D3F269BD1C2016-05-01T18:23:37:37Z2016-05-03T19:05:41:41ZTranspose and sequenceAMartin Oldfield<p>Notes towards an intuitive understanding of transpose in terms of <code>sequenceA</code>. </p><h2>Introduction</h2>
<p>Many resources discuss the formal definitions of <a href="https://hackage.haskell.org/package/base/docs/Data-Foldable.html"><code>Foldable</code></a> and <a href="https://hackage.haskell.org/package/base/docs/Data-Traversable.html"><code>Traversable</code></a>.</p>
<p>For example:</p>
<ul>
<li><a href="https://wiki.haskell.org/Foldable_and_Traversable">Foldable and Traversable</a> on the Haskell Wiki.</li>
<li>Sections <a href="https://wiki.haskell.org/Typeclassopedia#Foldable">10</a> and <a href="https://wiki.haskell.org/Typeclassopedia#Traversable">11</a> of the Typeclassopedia.</li>
<li><a href="http://dev.stephendiehl.com/hask/#foldable-traversable">Foldable/Traversable</a> in Stephen Diehl's <em>magnum opus</em>.</li>
</ul>
<p>Recently though, I was struck by the succinct clarity of Michael Burge’s <a href="https://mail.haskell.org/pipermail/haskell-cafe/2016-April/123754.html">comment in the Haskell Cafe</a>:</p>
<blockquote><p>There’s Foldable for ‘things that can be converted to lists’, or Traversable for ‘things that can be converted to lists underneath a Functor’.</p></blockquote>
<p>With this in mind, let’s think about transposing a matrix represented as a list-of-lists.</p>
<h2>A list of lists</h2>
<p>Although it’s not what you’d choose if you care about performance, you can represent a matrix as a list-of-lists. If the element type is <code>a</code> then the list has type <code>[a]</code>, and the matrix has type <code>[[a]]</code>.</p>
<p>From Michael Burge’s comment, it’s clear that the matrix must be an instance of <code>Traversable</code> if we choose the ‘Functor above the list’ to be another list.</p>
<p>For clarity let’s invent a second kind of list, represented by <code>⟦a⟧</code> rather than <code>[a]</code>.</p>
<p>Then our matrix might have type <code>⟦[a]⟧</code> where <code>⟦⟧</code> represents rows and <code>[]</code> represents columns.</p>
<h2>sequenceA</h2>
<p>One of the key functions in <code>Traversable</code> is <code>sequenceA</code>:</p>
<pre><code>ghci> :t sequenceA
sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a)</code></pre>
<p>If we specialize this for our list-of-lists:</p>
<pre><code>sequenceA :: ⟦[a]⟧ -> [⟦a⟧]</code></pre>
<p>That certainly looks like a transpose, but is it?</p>
<pre><code>ghci> sequenceA [[1,2,3],[4,5,6]]
[[1,4],[1,5],[1,6],[2,4],[2,5],[2,6],[3,4],[3,5],[3,6]]</code></pre>
<p>Sadly not! The snag is that we’re forming the Cartesian product of the lists, rather than forming new lists by matching up the n<sup>th</sup> elements of the old ones.</p>
<h2>ZipList to the rescue</h2>
<p>Recall that the Cartesian product comes from the monad instance for lists, but you can make a different but perfectly good Applicative instance called <code>ZipList</code>. Here’s an illustration of the difference:</p>
<pre><code>ghci> (,) <$> ZipList [1,2] <*> ZipList [3,4]
ZipList {getZipList = [(1,3),(2,4)]}
ghci> (,) <$> [1,2] <*> [3,4]
[(1,3),(1,4),(2,3),(2,4)]</code></pre>
<p>This looks much better, and indeed is the solution to our problem:</p>
<pre><code>ghci> getZipList $ sequenceA $ map ZipList [[1,2,3],[4,5,6]]
[[1,4],[2,5],[3,6]]</code></pre>
<p>or, more generally:</p>
<pre><code>transpose = getZipList . sequenceA . map ZipList</code></pre>
<p>Intuitively:</p>
<ul>
<li>turn the list of columns into a list of <code>ZipLists</code>;</li>
<li>make a new <code>ZipList</code> by making a list of all the 1<sup>st</sup> elements, then a list of all the 2<sup>nd</sup> elements and so on;</li>
<li>turn that <code>ZipList</code> back into a normal list.</li>
</ul>
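<p>This recipe is easy to check against the standard library. A quick sketch: for non-empty rectangular inputs the two agree, though note that <code>Data.List.transpose</code> also handles ragged lists, where the <code>ZipList</code> version truncates to the shortest row.</p>
<pre><code>import Control.Applicative (ZipList(..))
import qualified Data.List as L

-- transpose via Traversable/Applicative, as derived above
transpose' :: [[a]] -> [[a]]
transpose' = getZipList . sequenceA . map ZipList

main :: IO ()
main = do
  let m = [[1,2,3],[4,5,6]] :: [[Int]]
  print (transpose' m)                  -- [[1,4],[2,5],[3,6]]
  print (transpose' m == L.transpose m) -- True</code></pre>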
<p>All this is in McBride and Paterson’s <a href="http://www.staff.city.ac.uk/~ross/papers/Applicative.pdf">original paper</a>, so you should read that if you’re interested. </p>886F19E8-F916-11E5-81BF-AE1DC80C62072016-04-02T21:04:23:23Z2016-04-03T16:02:44:44Z2D line intersectionMartin Oldfield<p>A brief note showing the way I derive the intersection of two lines, each defined by two points. </p><h2>Introduction</h2>
<p>In two-dimensions, given two lines each defined by two points you can usually find a point where the lines intersect. Wikipedia gives the <a href="https://en.wikipedia.org/wiki/Line&ndash;line_intersection">coordinates</a> in terms of determinants, but not a derivation.</p>
<h2>Derivation</h2>
<p><img src="2dint.svg" alt="" class="img_border_small" /></p>
<p>Suppose \(\textbf{a}\) and \(\textbf{b}\) are on the first line, and \(\textbf{c}\) and \(\textbf{d}\) are on the second. Then, assuming that the lines aren’t parallel, we can write a general point as</p>
\[
\textbf{x} = \mu \, (\textbf{b} - \textbf{a}) + \nu \, (\textbf{d} - \textbf{c}).
\]
<p>So to find \(\textbf{x}\) we just need to find \(\mu\) and \(\nu\). We can get rid of \(\nu\) by taking the dot-product with \(\textbf{d} - \textbf{c}\) rotated by \(\pi / 2\), but the notation gets a bit messy. Equivalently we can embed the 2D vectors in the \(z = 0\) plane of a 3D space, then look at the cross-product with \(\textbf{d} - \textbf{c}\):</p>
\[
(\textbf{d} - \textbf{c}) \times \textbf{x} = \mu \, (\textbf{d} - \textbf{c}) \times (\textbf{b} - \textbf{a}).
\]
<p>Now, \(\textbf{x}\) lies on the line through \(\textbf{c}\) and \(\textbf{d}\), so</p>
\[
\begin{align} \left(\textbf{d} - \textbf{c}\right) \times \left(\textbf{x} - \textbf{d}\right) &= 0,\\ \left(\textbf{d} - \textbf{c}\right) \times \textbf{x} - \left(\textbf{d} - \textbf{c}\right) \times \textbf{d} &= 0,\\ \left(\textbf{d} - \textbf{c}\right) \times \textbf{x} &= \left(\textbf{d} - \textbf{c}\right) \times \textbf{d},\\ \left(\textbf{d} - \textbf{c}\right) \times \textbf{x} &= \textbf{d} \times \textbf{c}. \end{align}
\]
<p>Which we can substitute to give</p>
\[
\mu \, (\textbf{d} - \textbf{c}) \times (\textbf{b} - \textbf{a}) = \textbf{d} \times \textbf{c}.
\]
<p>The results of both cross-products lie along \(\textbf{z}\), so we divide them with the understanding that we’re just looking at the \(z\)-components,</p>
\[
\mu = \frac{\textbf{d} \times \textbf{c}}{(\textbf{d} - \textbf{c}) \times (\textbf{b} - \textbf{a})}.
\]
<p>By a similar analysis we also have,</p>
\[
\nu = \frac{\textbf{b} \times \textbf{a}}{(\textbf{b} - \textbf{a}) \times (\textbf{d} - \textbf{c})}.
\]
<p>And thus,</p>
\[
\textbf{x} = \frac{\textbf{d} \times \textbf{c}}{(\textbf{d} - \textbf{c}) \times (\textbf{b} - \textbf{a})} \, (\textbf{b} - \textbf{a})- \frac{\textbf{b} \times \textbf{a}}{(\textbf{d} - \textbf{c}) \times (\textbf{b} - \textbf{a})} \, (\textbf{d} - \textbf{c}).
\]
<h2>Assumptions</h2>
<p>The main assumption is that the cross-product in the denominator doesn’t vanish, i.e.,</p>
\[
(\textbf{b} - \textbf{a}) \times (\textbf{d} - \textbf{c}) \neq 0,
\]
<p>which is equivalent to saying that the lines aren’t parallel. With finite precision arithmetic though, one needs also to beware the case where the lines are almost parallel.</p>
<h2>Implementation</h2>
<p>Here’s a trivial implementation in stand-alone Haskell:</p>
<pre><code>-- Our vector
type V2 = (Double,Double)
-- Subtraction
(.-.) :: V2 -> V2 -> V2
(xa,ya) .-. (xb,yb) = (xa - xb, ya - yb)
-- Scalar multiplication
(*.) :: Double -> V2 -> V2
a *. (x,y) = (a * x, a * y)
-- z-component of cross-product
(.^.) :: V2 -> V2 -> Double
(xa,ya) .^. (xb,yb) = xa * yb - xb * ya
intersect :: V2 -> V2 -> V2 -> V2 -> V2
intersect :: V2 -> V2 -> V2 -> V2 -> V2
intersect a b c d = (s *. ba) .-. (t *. dc)
  where ba = b .-. a
        dc = d .-. c
        z  = dc .^. ba
        s  = d .^. c / z
        t  = b .^. a / z</code></pre>
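<p>Following the caveat about nearly parallel lines, a guarded variant is easy to sketch. This is self-contained rather than the code above verbatim: <code>crossZ</code> is just the <code>.^.</code> operator under another name, and the <code>1e-9</code> threshold is an arbitrary illustrative choice (a production version would scale it by the magnitudes of the direction vectors).</p>
<pre><code>type V2 = (Double, Double)

-- z-component of the embedded 3D cross-product
crossZ :: V2 -> V2 -> Double
crossZ (xa,ya) (xb,yb) = xa * yb - xb * ya

-- Intersection of the line through a and b with the line through
-- c and d; Nothing when the lines are (nearly) parallel.
intersectMaybe :: V2 -> V2 -> V2 -> V2 -> Maybe V2
intersectMaybe a@(xa,ya) b@(xb,yb) c@(xc,yc) d@(xd,yd)
  | abs z < 1e-9 = Nothing
  | otherwise    = Just (mu * bx - nu * dx, mu * by - nu * dy)
  where (bx,by) = (xb - xa, yb - ya)
        (dx,dy) = (xd - xc, yd - yc)
        z  = crossZ (dx,dy) (bx,by)
        mu = crossZ d c / z
        nu = crossZ b a / z

main :: IO ()
main = do
  print $ intersectMaybe (0,1) (1,1) (2,0) (2,3)   -- Just (2.0,1.0)
  print $ intersectMaybe (0,0) (1,0) (0,1) (1,1)   -- Nothing
</code></pre>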
66834B58-1194-11E4-A475-CC312A626FD02014-07-22T11:35:54:54Z2016-03-30T14:01:28:28ZMonads in Haskell: ((->) r)Martin Oldfield<p>Brief notes on the ((->) r) monad in Haskell. </p><h2>Introduction</h2>
<p>Some very brief notes summarizing Haskell’s <code>((->) r)</code> monad. It’s my crib sheet, written partly to straighten matters in my own mind and partly for future reference.</p>
<p>Most of the information here comes from the usual places, notably the <a href="http://www.haskell.org/haskellwiki/Typeclassopedia">Typeclassopedia.</a> I’m also indebted to Dominic Prior for many helpful discussions. Dominic is collecting <a href="https://docs.google.com/document/d/1DvbcQTibeUEOVmoLO14vvRa27kf6y29sObUmQpyFn9g/pub">useful and interesting monad examples</a> on Google Docs.</p>
<h2>The <code>((->) r)</code> monad</h2>
<p>If you use <span class="caps">GHC, </span>the <code>((->) r)</code> instance is defined in <a href="http://hackage.haskell.org/package/base-4.6.0.1/docs/src/GHC-Base.html"><span class="caps">GHC</span>-Base</a> these days:</p>
<pre><code>instance Monad ((->) r) where
  return  = const
  x >>= f = \r -> f (x r) r</code></pre>
<p>or,</p>
<pre><code> (x >>= f) r = f (x r) r</code></pre>
<p>To seasoned Haskell programmers, I expect the <code>((->) r)</code> instance is obvious, but it took me a while to get a feel for it. In particular we’re talking about functions <em>from</em> <code>r</code>, the type parameter of the monad, to some other type.</p>
<p>In the definition of <code>x >>= f</code> I often think of <code>x</code> as akin to a value, and <code>f</code> to a function. Let’s see what types they have in this instance:</p>
<pre><code>x :: m a
f :: a -> m b
x :: r -> a
f :: a -> r -> b</code></pre>
<p>So both <code>x</code> and <code>f</code> are functions, with an argument of type <code>r</code> which permeates everything.</p>
<p>As is often the case, the Kleisli arrow elucidates matters:</p>
<pre><code>(f >=> g) = \x -> f x >>= g
          = \x -> (\r -> g (f x r) r)</code></pre>
<p>or,</p>
<pre><code>(f >=> g) x r = g (f x r) r</code></pre>
<p> Or in words: every time you evaluate a function, pass an extra <code>r</code> argument. Within a set of functions the value of this argument stays the same, so it provides a constant evaluation environment.</p>
<p>One could use this as a more civilized version of global variables.</p>
<p>If we now revisit <code>>>=</code> it makes more sense:</p>
<pre><code>(x >>= f) r = f (x r) r</code></pre>
<p> To evaluate the right-hand side, first evaluate <code>x</code> in the context of <code>r</code> to give <code>x'</code>, then evaluate <code>f</code> of <code>x'</code> again in the context of <code>r</code>.</p>
<p>Let’s turn to <code>return</code>. We know this should be the most direct translation of a non-monadic value into the monad: here that translates to being a value which doesn’t depend on the environment.</p>
<pre><code>return x e = x</code></pre>
<p>This is a standard function though: it’s just <code>const</code>.</p>
<p>If we wanted to be rigorous, we would show that these definitions for <code>>>=</code> and <code>return</code> do indeed form a monad i.e. that they satisfy the monad laws.</p>
<h3><code>fmap</code></h3>
<p>Every monad is a functor, and so given <code>>>=</code> we can deduce <code>fmap</code>:</p>
<pre><code>fmap f x = x >>= return . f</code></pre>
<p>Specialize to <code>((->) r)</code>:</p>
<pre><code>fmap f x r = (x >>= (const . f)) r
           = (const . f) (x r) r
           = const (f (x r)) r
           = f (x r)
           = (f.x) r</code></pre>
<p>Here though, the types are enough to suggest the solution:</p>
<pre><code>fmap :: Functor f => (a -> b) -> f a -> f b
fmap :: (a -> b) -> (r -> a) -> (r -> b)
fmap = (.)</code></pre>
<h3><code>join</code></h3>
<p>Given our ‘just add an extra argument’ intuition, <code>join</code> is obvious:</p>
<pre><code>join x r = x r r</code></pre>
<p>here <code>x</code> is doubly-wrapped, has type <code>m (m a)</code>, and so needs two copies of the extra argument.</p>
<p>If one wants to be rigorous:</p>
<pre><code>join x r = (x >>= id) r
         = (id (x r) r)
         = (x r) r
         = x r r</code></pre>
<p>Going the other way is perhaps more useful because it derives <code>>>=</code> from the more intuitive <code>join</code> (assuming you accept the definition of <code>fmap</code>):</p>
<pre><code>(x >>= f) r = join (fmap f x) r
            = join (f.x) r
            = (f.x) r r
            = f (x r) r</code></pre>
<h3>An example</h3>
<p>Consider this code:</p>
<pre><code>incN :: Enum a => a -> Int -> a
incN c n = toEnum $ n + fromEnum c
decN c n = incN c (-n)
inc2N c n = incN c (2 * n)</code></pre>
<p><code>incN</code> just increments an enumerable thing <code>n</code> times, <code>decN</code> decrements it, and <code>inc2N</code> increments it <code>2n</code> times:</p>
<pre><code>*Main> incN 'a' 3
'd'
*Main> inc2N 'a' 3
'g'
*Main> decN 'm' 3
'j'</code></pre>
<p>We can compose these functions easily with the Kleisli arrow:</p>
<pre><code>*Main> (incN >=> inc2N >=> decN) 'a' 0
'a'
*Main> (incN >=> inc2N >=> decN) 'a' 1
'c'
*Main> (incN >=> inc2N >=> decN) 'a' 2
'e'</code></pre>
<p>Here the character value gets threaded through the functions, whilst the same integer environment is seen throughout. I say seen, but actually it’s all invisible: the monad implicitly supplies the context variable, so we don’t need to worry about it.</p>
<p>Alternatively, in do-notation, given:</p>
<pre><code>munge c = do
  x <- incN c
  y <- inc2N x
  z <- decN y
  return z</code></pre>
<p>We can say:</p>
<pre><code>*Main> munge 'a' 2
'e'
</code></pre>
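<p>The snippets above lean on <code>Control.Monad</code> for the Kleisli arrow; gathered into one runnable file they need no extensions at all, since the <code>((->) r)</code> instances ship with base:</p>
<pre><code>import Control.Monad ((>=>))

-- Increment an enumerable value n times ...
incN :: Enum a => a -> Int -> a
incN c n = toEnum $ n + fromEnum c

-- ... decrement it n times, or increment it 2n times.
decN, inc2N :: Enum a => a -> Int -> a
decN  c n = incN c (-n)
inc2N c n = incN c (2 * n)

-- do-notation in the ((->) Int) monad: the final Int argument
-- is the shared environment.
munge :: Enum a => a -> Int -> a
munge c = do
  x <- incN c
  y <- inc2N x
  z <- decN y
  return z

main :: IO ()
main = do
  print $ (incN >=> inc2N >=> decN) 'a' 2   -- 'e'
  print $ munge 'a' 2                       -- 'e'</code></pre>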
<h3>An arithmetic amusement</h3>
<p>It’s quite nice to ponder this:</p>
<pre><code>join (+) 2</code></pre>
<p>From the discussion above:</p>
<pre><code>join (+) 2 = (+) 2 2 = 2 + 2 = 4</code></pre>
<p>Perhaps more interestingly, if we <code>join (-)</code> we can calculate the additive identity.</p>
<pre><code>join (-) x = (-) x x = 0</code></pre>
<p>Here <code>x</code> doesn’t matter so in a loose sense (ignoring types)</p>
<pre><code>join (-) ≈ const 0</code></pre>
<p>The type caveat is just that</p>
<pre><code>Prelude Control.Monad> :t join (-)
join (-) :: Num a => a -> a</code></pre>
<p>but</p>
<pre><code>Prelude Control.Monad> :t const 0
const 0 :: Num a => b -> a</code></pre>
<p>In other words the true <code>const</code> takes anything to an arbitrary zero, but the <code>join</code> version takes any number to a zero of the same type. Surely this is still useful for some sort of obfuscation though?</p>
<p>Perhaps the multiplicative identity is more useful:</p>
<pre><code>join div 42 = 1</code></pre>
<h2>The Applicative <code>((->) r)</code></h2>
<p>Given a monad instance we can always make an Applicative too. Here:</p>
<pre><code>instance Applicative ((->) a) where
  pure = const
  (f <*> x) e = (f e) (x e)</code></pre>
<p>Again we can see the pattern: inside the applicative we first evaluate things in a fixed context, and then operate with them as normal. However, here it makes better sense to think of everything as a function which takes a single parameter: the environment.</p>
<p>As usual if we make everything <code>pure</code> it just works as normal:</p>
<pre><code>*Main> (pure (,) <*> pure 'a' <*> pure 'b') 2
('a','b')</code></pre>
<p>Or more idiomatically with <code><$></code>:</p>
<pre><code>*Main> ((,) <$> pure 'a' <*> pure 'b') 2
('a','b')</code></pre>
<p>Now it’s easy to let any of the terms access the context, and we can use the <code>incN</code> function from above:</p>
<pre><code>*Main> ((,) <$> incN 'a' <*> pure 'b') 2
('c','b')
*Main> ((,) <$> pure 'a' <*> incN 'b') 2
('a','d')</code></pre>
<p>We can also see the context from the ‘function’:</p>
<pre><code>*Main> ((\x y z -> (x,y,z)) <*> incN 'a' <*> pure 'b') 2
(2,'c','b')</code></pre>
<h2>The Reader Monad</h2>
<p>In practice, people don’t use the naked <code>((->) r)</code> monad when they want to provide a constant context. Instead, they use <a href="https://hackage.haskell.org/package/mtl-1.1.0.2/docs/Control-Monad-Reader.html">Reader.</a></p>
<p>Under the covers, this is basically the same animal, but it’s a bit nicer to use. A full discussion of Reader needs a new article though.</p>
<p> </p>365E30AA-C2F0-11E5-82D8-93A48738EEFC2016-01-24T23:13:53:53Z2016-02-03T22:44:04:04ZLow-power LiPo Light controllerMartin Oldfield<p>A toy project to get some experience of low-power design: LiPo powered shed lights with a battery protector. </p><h2>Abstract</h2>
<p>My shed lacks mains electricity and during the winter it lacks light as well. To fix this I built some battery powered lights, powered by a 4-cell LiPo pack and controlled by a <span class="caps">PIC</span> 16F690.</p>
<p>I used the project as an excuse to play with low-power design. The final version has an average current consumption of about 35µA but I think this could be halved without much difficulty.</p>
<p>All the design files are available on <a href="https://github.com/mjoldfield/lopo-lipo-lico">GitHub.</a></p>
<p><img src="lll-2.jpg" alt="" class="img_border" /></p>
<h2>Basic desiderata</h2>
<p>I wanted a controller to:</p>
<ul>
<li>Switch on some lights;</li>
<li>protect the battery from over-discharge;</li>
<li>not waste power.</li>
</ul>
<p>There are a couple of potential power sinks: the light might be left on accidentally, or the controller might consume lots of power itself. Neither of these would be acceptable.</p>
<p>I usually go to the shed to get something, and don’t spend long periods of time there. Accordingly it seems sensible to provide a button to turn the lights on, then turn them off automatically after a few minutes.</p>
<p>It’s helpful to put some numbers on this. Suppose that when the light is triggered it stays on for 2 minutes, and this happens five times per week. Assume too that the light draws about 1A. Thus, the average current draw will be about</p>
\[
\begin{align} I_{ave} &= \frac{1\textrm{A} \times 2 \times 5}{60 \times 24 \times 7}, \\
&\approx 1\textrm{mA}. \end{align}
\]
<p>The battery claims a capacity of 2500mAh so we might see about one hundred days between charges: about three months. This seems reasonable to me. As a ballpark target, it would be nice to get the average current consumption of the controller down to about 50µA, so that 95% of the current drives the light.</p>
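<p>Explicitly, taking the claimed capacity at face value,</p>
\[
t \approx \frac{2500\textrm{mAh}}{1\textrm{mA}} \approx 2500\textrm{h} \approx 104\textrm{ days}.
\]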
<p>I think the most sensible implementation would involve a handful of <span class="caps">FET</span>s, diodes, and passive components. The <em>Art of Electronics</em> has a suitable two-FET switch circuit which draws essentially no power when off, but lacks over-discharge protection for the battery. I suspect it could be added though.</p>
<h2>A spurious desideratum</h2>
<p>The main problem with the <span class="caps">FET </span>solution isn’t that it lacks battery protection: rather I wanted to do this project to get some experience of low-power microcontroller design, and the <span class="caps">FET </span>circuit lacks any kind of <span class="caps">CPU</span>!</p>
<p>I had some Microchip <span class="caps">PIC16F690</span>s lying around, and their datasheet has long boasted of nanoWatt performance so I added this to my desiderata:</p>
<ul>
<li>base the controller around the <a href="http://www.microchip.com/wwwproducts/Devices.aspx?product=PIC16F690"><span class="caps">PIC16F690.</span></a></li>
</ul>
<p>Now, it should be said that Microchip now make <a href="http://www.microchip.com/pagehandler/en-us/technology/xlp/"><span class="caps">XLP</span></a> devices which promise to be even more frugal. You might well be able to do as well, or better, with an <span class="caps">AVR </span>chip. More interestingly, perhaps you could do it with an <span class="caps">ARM </span>as well.</p>
<p>Further, although I am keen to do a reasonable job with the controller, my main goal is to gain insight into the sorts of trade offs involved, rather than heroically optimizing this particular case.</p>
<h2>LiPo notes</h2>
<p>Internally, the LiPo pack consists of four cells in series. You can get the full voltage via the high-current leads, but you can often get the inter-cell voltages as well on the <a href="http://www.tjinguytech.com/charging-how-tos/balance-connectors">balance connector.</a> High and low current here are relative: over 40A is available on the main output leads, while the balance connector will easily provide the amp or so we need.</p>
<p>Voltage-wise, a fully charged LiPo cell generates about 4.2V which falls slowly to a ‘nominal’ 3.7V, and then 3.5V. Below 3.5V the voltage drops rapidly: discharging beyond 3.0V is generally regarded as dangerous.</p>
<p>We want the controller to flag voltages below 3.5V or above 4.2V as an error, and voltages between 3.5V and 3.7V as a warning that the battery will need to be charged soon.</p>
<p>This means that the useful full-pack voltage ranges from 14V to 16.8V, and the single-cell voltage from 3.5V to 4.2V. This latter range is ideal for powering the <span class="caps">PIC </span>directly.</p>
<p>One caveat: by drawing the <span class="caps">PIC </span>current from just one cell, we will slightly unbalance the battery. Hopefully this won’t be significant though.</p>
<h2>Basic design</h2>
<p>The main constraint relates to power, so it’s likely that the broad features of our design will be determined by power issues. Let’s get quantitative!</p>
<p>The <span class="caps">PIC16F690 </span>data sheet quotes the following as typical current consumption in selected different <span class="caps">CPU </span>modes:</p>
<table class="cspaced" style="font-size: 0.8em; margin-left: 2%; width: 96%; " cellspacing="0"><tr class="toprowborder"><th class="leftborder" colspan="2">Clock</th><th class="leftborder" colspan="3">Typical current / µA @ Vdd</th><th class="lrborder">Charge per</th></tr><tr class="bottomrowborder"><th class="leftborder">Source</th><th>Freq., f</th><th class="leftborder">3V</th><th>5V</th><th>Mean, I</th><th class="lrborder">tick / nC</th></tr><tr><td class="leftborder">LP</td><td>32kHz</td><td class="leftborder">22</td><td>33</td><td>28</td><td class="lrborder">0.86</td></tr><tr><td class="leftborder"><span class="caps">LFINTOSC</span></td><td>31kHz</td><td class="leftborder">16</td><td>31</td><td>24</td><td class="lrborder">0.77</td></tr><tr><td class="leftborder" rowspan="2"><span class="caps">HFINTOSC</span></td><td>4MHz</td><td class="leftborder">500</td><td>800</td><td>650</td><td class="lrborder">0.16</td></tr><tr><td>8MHz</td><td class="leftborder">700</td><td>1300</td><td>1000</td><td class="lrborder">0.13</td></tr><tr class="toprowborder"><td class="leftborder">Power Down</td><td>-</td><td class="leftborder">0.15</td><td>0.35</td><td>0.25</td><td class="lrborder">-</td></tr><tr class="bottomrowborder"><td class="leftborder">32kHz <span class="caps">T1OSC</span></td><td>-</td><td class="leftborder">2.5</td><td>3.0</td><td>2.75</td><td class="lrborder">-</td></tr></table>
<p>Given that we’d like to see an average current consumption of about 50µA, just using a slow clock seems unlikely to be enough: the <span class="caps">CPU </span>alone will use over half the current budget.</p>
<p>On the other hand, if the chip is fully asleep it draws about 0.4µA which is wonderful if unrealistic. In practice we’ll probably want some sort of slow timer to wake us from sleep to check the battery voltages. A 32kHz watch crystal would be a good solution: this would raise our sleep current to about 3µA or 6% of the total budget.</p>
<p>Having considered sleep, let’s think about being awake. Although higher clock-speeds draw more current, they also do things faster, and as the table shows the speed increases faster than the current. So, for a fixed amount of work it is better to run fast then sleep: the hare wins. In rough terms, we’d expect the system to draw about 1mA when busy, so we’ll need to sleep for about 95% of the time to get our average consumption down to 50µA.</p>
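<p>To put a rough number on the duty cycle \(d\), using the round figures above (1mA awake, 3µA asleep, a 50µA target):</p>
\[
I_{ave} = d \, I_{awake} + (1 - d) \, I_{sleep}
\quad\Rightarrow\quad
d \approx \frac{50 - 3}{1000 - 3} \approx 5\%,
\]
<p>consistent with sleeping for about 95% of the time.</p>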
<h3>Clock choice</h3>
<p>Given the discussion above this seems reasonably clear:</p>
<ul>
<li>spend most of the time asleep, relying on a slow T1 clock to wake up periodically.</li>
<li>when awake use one of the fast <span class="caps">HFINTOSC </span>modes to get things done efficiently.</li>
</ul>
<h3>Light control</h3>
<p>Before we go any further, we should consider the primary task: letting me control some lights. The most obvious approach is:</p>
<ul>
<li>a push button to turn the lights on;</li>
<li>automatically turn them off after a few minutes.</li>
</ul>
<p>The push button is connected to <span class="caps">RB6 </span>(pin 11). Changes to this pin are configured to trigger an interrupt, which wakes the device from sleep.</p>
<p>Instead of a fixed delay it might be better to use a <span class="caps">PIR </span>detector to turn off the lights after a period of inactivity. I left this refinement for a future version.</p>
<p>Actually controlling the lights is easy: just use a chunky power <span class="caps">MOSFET.</span> The most important parameter is the threshold voltage: the device must be fully turned-on at 3V. I used a <a href="http://www.farnell.com/datasheets/27920.pdf"><span class="caps">STP80NF03L</span>-04</a> from <span class="caps">ST, </span>driven by <span class="caps">RC0 </span>(pin 16).</p>
<h3>Battery voltage sense</h3>
<p>Conceptually sensing the four voltages from the battery is simple: after all the <span class="caps">PIC </span>has <span class="caps">ADC</span>s. In practice though this is where most of the work was needed.</p>
<p>The voltage across the whole battery ranges from about 12–17V, so a full-scale range of about 20V seems appropriate. The <span class="caps">ADC </span>has 10-bit resolution, so the voltage quantum is about 20mV. Each cell has a useful voltage range of about 700mV so our effective resolution is about 5-bits: that seems reasonable.</p>
<p>Using the same input range for all four inputs makes the software easier because we can simply subtract the voltages without scaling them.</p>
<p>So far so good. However, the <span class="caps">PIC</span>’s <span class="caps">ADC </span>needs a source impedance of less than about 10kΩ which implies a current draw of about 500µA at 5V. That’s much too large for our power budget if it’s drawn continuously, so we’ll need a bit more than a simple potential divider. There are two obvious topologies:</p>
<ul>
<li>A high-impedance potential divider followed by a buffer amplifier which we can disable when not in use.</li>
<li>A switch which isolates the battery from the low-impedance divider.</li>
</ul>
<p>I adopted the latter approach, because it reduces the sense current to almost zero when not in use. The switches were made from pairs of <span class="caps">MOSFET</span>s, which also multiplex the four voltages onto a single analogue input (specifically <span class="caps">AN2 </span>on pin 17).</p>
<p><img src="llc2.svg" alt="" class="img_noborder" /></p>
<p>Some points of note:</p>
<ul>
<li>There are two <span class="caps">MOSFET</span>s per input: an upper P-type <a href="http://www.farnell.com/datasheets/908241.pdf">(SI2377EDS-T1-GE3)</a> which switches the analogue input, and a lower N-type <a href="http://www.farnell.com/datasheets/1915678.pdf">(SI2336DS-T1-GE3)</a> which controls the gate of the P-type. Protection diodes in the <span class="caps">PIC </span>hold the outputs below (roughly) Vdd, so without the open-drain N-types, the upper transistors would always be on.</li>
<li>The voltages being switched might be as high as 20V, but the voltage swings switching them are much smaller: only 2V or so. <span class="caps">MOSFET</span>s which saturate at a low gate-drain voltage tend to not like large source-drain voltages, which explains the careful choice of bias resistors on the upper P-type transistors.</li>
<li>It seems prudent to allow some dead-time between different channels to avoid the need for fast switching.</li>
<li>The <span class="caps">ADC </span>must be clocked from its dedicated, internal RC oscillator so that conversions continue whilst the <span class="caps">CPU </span>sleeps.</li>
<li>The source impedance seen by the <span class="caps">PIC </span>is about 3kΩ.</li>
</ul>
<h4><span class="caps">ADC </span>voltage reference</h4>
<p>In the <span class="caps">PIC, </span>the <span class="caps">ADC</span>’s voltage reference is Vdd, which makes it hard to measure the battery voltage in absolute terms. There is an internal 0.6V reference which can be measured, and thus Vdd inferred, but it seemed simpler to use an external 2.5V reference.</p>
<p>I wasn’t entirely sure how much current the <span class="caps">ADC </span>needs: the datasheet talks about an initial 10–1000µA, followed by a maximum of 50µA during conversions. These are large enough that the source must be turned off when not needed: this is accomplished by driving it from <span class="caps">RC3 </span>(pin 7).</p>
<p>I used a <a href="http://www.farnell.com/datasheets/2001484.pdf"><span class="caps">LM385</span>-2.5</a> reference with a 3k3Ω current-limiting resistor which will deliver 300-500µA depending on battery voltage. Given that this is below the 1mA figure in the datasheet, I introduced a delay between applying power to the reference and trusting the <span class="caps">ADC </span>readings.</p>
<p>Irritatingly the external reference must be supplied on pin 18 (RA1 et al.) which is also needed for the <span class="caps">ICSP </span>port. Provision must be made to disconnect the reference during programming.</p>
<p>With the 22k/3k3 potential divider, the 2.5V reference gives a full-scale reading of about 19.2V.</p>
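<p>As a sanity check, the full-scale figure follows directly from the component values quoted above (a quick calculation, in Python for convenience):</p>

```python
# Full-scale voltage seen at the ADC input behind the 22k/3k3 divider:
# full scale is reached when the divided-down input equals the 2.5V reference.
V_REF = 2.5        # LM385-2.5 external reference, volts
R_UPPER = 22_000   # upper divider resistor, ohms
R_LOWER = 3_300    # lower divider resistor, ohms

full_scale = V_REF * (R_UPPER + R_LOWER) / R_LOWER
print(f"{full_scale:.1f} V")   # 19.2 V
```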
<h4>An empirical interlude</h4>
<p>The yellow scope trace below shows the voltage on <span class="caps">AN2 </span>during the monitoring process, whilst the green trace shows when the voltage reference is powered.</p>
<p><img src="adc-vin.png" alt="" class="img_noborder" /></p>
<h3>Debug support</h3>
<p>There’s enough complexity in the analogue side to warrant a proper diagnostic channel. Happily the <span class="caps">PIC </span>has a full <span class="caps">UART </span>which can dump data to a serial port on the host computer. These days, that’s usually via a <span class="caps">FTDI </span>serial to <span class="caps">USB </span>converter.</p>
<p>Serial data are sent to pin 10 (RB7 et al.).</p>
<h3>Status <span class="caps">LED</span></h3>
<p>Although the <span class="caps">UART </span>is a great way to send data during development, when deployed in the shed a simpler indication is needed. I added a high-efficiency <span class="caps">RGB LED </span>to show battery status. In comparison to the 20mA that old-fashioned <span class="caps">LED</span>s typically consume, experiment shows that 300-400µA is enough to make the <span class="caps">LED </span>shine brightly, but we’ll still have to flash it to keep within the power budget.</p>
<h3>Unused pins</h3>
<p>All unused pins were configured as digital outputs and driven high.</p>
<p>Incidentally, the bare <span class="caps">PIC </span>with digital pins left floating draws 100-140µA when asleep. Over time the current falls, but rises again at the slightest disturbance.</p>
<h2>Construction</h2>
<p>The prototype was constructed on a Microchip low-pin count demo board in a mixture of through-hole and <span class="caps">SMD </span>construction. Good <span class="caps">MOSFET</span>s seemed only available in <span class="caps">SMD </span>packages.</p>
<p>A combination of laziness and lack of space led to the omission of any input protection circuitry.</p>
<p><img src="lll-1.jpg" alt="" class="img_border" /></p>
<p>You can see a full schematic below, but you might prefer a <a href="../01/llc.pdf"><span class="caps">PDF.</span></a></p>
<p><img src="llc.svg" alt="" class="img_noborder" /></p>
<p>The source for the firmware and the KiCad schematic are available from <a href="https://github.com/mjoldfield/lopo-lipo-lico">GitHub.</a></p>
<h2>Software</h2>
<p>The firmware was simple enough to write easily in assembler (the final code has about 260 instructions), and the task was made still easier by using Charles McManis’ <a href="https://raw.githubusercontent.com/ChuckM/PIC-Software/master/16bits.inc">16-bit arithmetic library.</a> I suspect using assembler did bias me against more numerically fiddly solutions e.g. those which scaled the different voltage readings. More positively, using assembler made it easy to see exactly what was happening at every stage.</p>
<p>The main program loop is an infinite sleep loop: all the interesting behaviour is interrupt driven.</p>
<h3>Input change interrupt</h3>
<p><span class="caps">RB6 </span>is connected to a push button. If it’s pressed an interrupt is generated which:</p>
<ul>
<li>turns on the light;</li>
<li>sets the time to extinguish the light.</li>
</ul>
<p>No effort is made to debounce the switch, though it would be easy to add it.</p>
<h3>Timer 1 interrupt</h3>
<p>Timer 1 fires at 2Hz, maintaining a clock which is used to extinguish the light at an appropriate time.</p>
<p>Every second tick i.e. at one second intervals, it also:</p>
<ul>
<li>turns on the status <span class="caps">LED </span>to show the previous status;</li>
<li>turns off the main light if the battery voltage is out-of-range;</li>
<li>kicks off a new round of battery voltage measurements.</li>
</ul>
<p>It would be better if the timer fired at 1Hz and did the same thing every tick. That it doesn’t is an accident of history.</p>
<h3><span class="caps">ADC </span>interrupt</h3>
<p>Most of the interesting code is driven from the <span class="caps">ADC </span>interrupt.</p>
<p>We cycle through the four channels calculating the voltage across each cell and warning on voltages below 3.5V or over 4.2V. All the arithmetic is unsigned, so if a cell voltage is negative e.g. because a connection is broken, it will appear as a (very) large positive voltage and be flagged as a problem.</p>
<p>When all the readings have been taken the <span class="caps">ADC </span>is disabled until Timer 1 restarts it.</p>
<h2>Performance</h2>
<p>All the key performance measures are current related.</p>
<p>The plot below shows the current consumption over a four-second period when Vdd was 3.85V. The current is measured in the ground lead of the battery using a <a href="http://www.eevblog.com/projects/ucurrent/">µCurrent</a> powered by 3 x <span class="caps">AAA </span>batteries to give a maximum reading of about 2.1mA. The vertical scale on the scope plots is 500µA per division.</p>
<p>The current bumps are both narrow and (foolishly) aligned with the graticule. Sadly this makes them hard to see!</p>
<p><img src="current-all.png" alt="" class="img_noborder" /></p>
<p>Recall that most of the time the <span class="caps">CPU </span>is sleeping, and the current consumption during these periods is lost in the noise. However, more careful measurements show that it’s about 3µA.</p>
<p>The <span class="caps">CPU </span>is woken every half-second by Timer 1, and these brief wakeful moments correspond to the bumps on the graph.</p>
<h3>The small bump</h3>
<p>The small bump is about 45µs wide and 530µA high: this corresponds to the interrupt handler incrementing the clock but doing little else.</p>
<p>We can conclude:</p>
<ul>
<li>The 4MHz clock implies 1MIPS, so this path through the interrupt handler has about 42 instructions in it (the interrupt latency is about 3 instruction cycles long).</li>
<li>When running, the <span class="caps">PIC</span>’s <span class="caps">CPU </span>draws about 530µA—slightly better than the datasheet value.</li>
</ul>
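<p>Under the stated assumptions (4 clocks per instruction cycle on a mid-range <span class="caps">PIC, </span>roughly 3 cycles of interrupt latency), the instruction count can be checked directly:</p>

```python
# Instruction count on the short interrupt path, inferred from the bump width.
BUMP_WIDTH_US = 45      # measured width of the small current bump, µs
CLOCK_MHZ = 4           # PIC oscillator frequency
CYCLES_PER_INSTR = 4    # mid-range PICs use 4 clocks per instruction cycle
LATENCY_CYCLES = 3      # approximate interrupt latency, instruction cycles

mips = CLOCK_MHZ / CYCLES_PER_INSTR             # 1.0 MIPS
instructions = BUMP_WIDTH_US * mips - LATENCY_CYCLES
print(int(instructions))   # 42
```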
<h3>The big bump</h3>
<p>During the big bump the <span class="caps">PIC </span>checks the battery voltages and flashes the status <span class="caps">LED.</span> In other words, this is where the fun and interesting things happen, and if we zoom in on the bump we can see an interesting current trace.</p>
<p><img src="c-current-gnd-awake.png" alt="" class="img_noborder" /></p>
<p>There are eight regions in the graph:</p>
<ul>
<li>the four higher current regions are when one of the battery cell’s voltage is being measured: the extra current flows when the relevant <span class="caps">MOSFET </span>is turned on.</li>
<li>the four lower current regions are gaps between the sense periods, when the <span class="caps">ADC </span>continues to run and <span class="caps">LED </span>continues to shine.</li>
</ul>
<p>You’ll see that these regions match the graph of voltage on <span class="caps">AN2 </span>shown above.</p>
<h4>Idling</h4>
<p>We can zoom in on the first gap:</p>
<p><img src="c-current-hf-base.png" alt="" class="img_noborder" /></p>
<p>During this time the <span class="caps">PIC </span>is making repeated <span class="caps">ADC </span>measurements but discarding the results. More accurately the <span class="caps">PIC</span>:</p>
<ul>
<li>starts a conversion;</li>
<li>sleeps;</li>
<li>gets woken by the <span class="caps">ADC </span>interrupt;</li>
<li>runs the interrupt handler;</li>
<li>repeats.</li>
</ul>
<p>The <span class="caps">ADC </span>is driven by the dedicated internal RC oscillator which has a period of about 4µs and conversions take 11 ticks: 44µs. The interrupt handler takes about 15 instructions, plus a delay equivalent to three more, so that’s 18µs.</p>
<p>Adding these together implies a period of 62µs, which corresponds to a frequency of 16.1kHz: close enough to the measured 16.95kHz that we’re confident we’re seeing the processor’s sleep-wake cycle.</p>
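<p>These timing figures can be reproduced numerically (a quick check; all values are those quoted above):</p>

```python
# One ADC sleep-wake cycle: a conversion plus the interrupt handler.
ADC_TICK_US = 4          # dedicated internal RC oscillator period, µs
CONVERSION_TICKS = 11    # ADC clock ticks per conversion
HANDLER_US = 15 + 3      # handler instructions plus latency, at 1 MIPS

period_us = ADC_TICK_US * CONVERSION_TICKS + HANDLER_US
freq_khz = 1000 / period_us
print(period_us, f"{freq_khz:.1f}")   # 62 16.1
```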
<p>We can now interpret the amplitude of the oscillation as the current consumed by the <span class="caps">CPU </span>when awake (taking the sleep current as zero): 524µA. Happily this is consistent with the value inferred from the small bump.</p>
<p>Further we can interpret the current whilst sleeping, 762µA, as the consumption from the non-CPU components:</p>
<ul>
<li>the status <span class="caps">LED </span>which I measured independently as about 330µA;</li>
<li>the voltage reference which I measured independently as about 390µA;</li>
<li>the rest of the <span class="caps">PIC </span>which we can infer to be roughly 42µA which is presumably mainly the current taken by the <span class="caps">ADC.</span></li>
</ul>
<p>Note that the average current drawn during this phase is 1.102mA. Ignoring the 720µA used by the <span class="caps">LED </span>and voltage reference, this leaves 382µA as the average current drawn by the <span class="caps">PIC</span>: about 72% of the peak value.</p>
<h4>Sensing</h4>
<p>Let’s turn now to the four sense periods. To a good approximation the current drawn during these periods follows the same pattern as above, but is higher because extra current flows through the battery voltage sense circuitry.</p>
<p>Here’s the current during the first sense period:</p>
<p><img src="c-current-hf-v0.png" alt="" class="img_noborder" /></p>
<p>The only complication is that the current to voltage converter saturates during the last period:</p>
<p><img src="c-current-hf-v3.png" alt="" class="img_noborder" /></p>
<p>Happily we can correct for this because the ‘Base’ voltage on the plot is measured correctly, so we can assume the average <span class="caps">CPU </span>current and calculate the average total draw.</p>
<table class="spaced" style="font-size: 0.8em; margin-left: 2%; width: 96%; " cellspacing="0"><tr class="toprowborder"><th class="lrborder" rowspan="2">Cells</th><th class="rightborder" colspan="5">Current / µA</th></tr><tr><th>Base</th><th>Sense</th><th>Amplitude</th><th>Ave. total</th><th class="rightborder">Ave. <span class="caps">CPU</span></th></tr><tr class="toprowborder"><td class="lrborder alignc">0</td><td align="right">762</td><td align="right">0</td><td align="right">524</td><td align="right">1102</td><td class="rightborder alignr">340</td></tr><tr><td class="lrborder alignc">1</td><td align="right">1118</td><td align="right">356</td><td align="right">510</td><td align="right">1447</td><td class="rightborder alignr">329</td></tr><tr><td class="lrborder alignc">2</td><td align="right">1408</td><td align="right">646</td><td align="right">510</td><td align="right">1734</td><td class="rightborder alignr">326</td></tr><tr><td class="lrborder alignc">3</td><td align="right">1559</td><td align="right">797</td><td align="right">550</td><td align="right">1890</td><td class="rightborder alignr">331</td></tr><tr><td class="lrborder alignc">4</td><td align="right">1703</td><td align="right">941</td><td align="right">410</td><td align="right">1977</td><td class="rightborder alignr">274</td></tr><tr class="bottomrowborder"><td class="lrborder alignc">4 (corrected)</td><td align="right">1703</td><td align="right">941</td><td align="right">-</td><td align="right">2032</td><td class="rightborder alignr">329</td></tr></table>
<p>At an earlier stage of development (and battery charge), I measured the current drawn by the sense circuitry directly, and the results broadly agree.</p>
<table class="spaced" style="font-size: 0.8em; margin-left: 2%; width: 96%; " cellspacing="0"><tr class="toprowborder"><th class="lrborder" rowspan="2">Cells</th><th class="rightborder" colspan="4">Current / µA</th></tr><tr><th>From above</th><th>Total</th><th>Bias</th><th class="rightborder">True sense</th></tr><tr class="toprowborder"><td class="lrborder alignc">1</td><td align="right">356</td><td align="right">345</td><td align="right">189</td><td class="rightborder alignr">156</td></tr><tr><td class="lrborder alignc">2</td><td align="right">646</td><td align="right">645</td><td align="right">329</td><td class="rightborder alignr">316</td></tr><tr><td class="lrborder alignc">3</td><td align="right">797</td><td align="right">817</td><td align="right">337</td><td class="rightborder alignr">480</td></tr><tr class="bottomrowborder"><td class="lrborder alignc">4</td><td align="right">941</td><td align="right">955</td><td align="right">339</td><td class="rightborder alignr">616</td></tr></table>
<p>We can go further and divide the current into that used to bias the p-channel <span class="caps">MOSFET </span>and that which is fed to the <span class="caps">ADC.</span> By apportioning the currents measured on the scope in the ratios above, we can estimate the average currents: 297µA of bias and 388µA of sense.</p>
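<p>The apportioning can be reproduced from the two tables (a sketch; the figures below are copied from the tables above):</p>

```python
# Split the scope-measured sense currents into MOSFET bias and true sense
# components, using the ratios from the earlier direct measurements.
scope_sense  = [356, 646, 797, 941]   # scope measurement, µA, for 1-4 cells
direct_total = [345, 645, 817, 955]   # direct measurement: total, µA
direct_bias  = [189, 329, 337, 339]   # direct measurement: bias component, µA

bias  = [s * b / t for s, b, t in zip(scope_sense, direct_bias, direct_total)]
sense = [s - b for s, b in zip(scope_sense, bias)]

avg_bias  = sum(bias) / len(bias)
avg_sense = sum(sense) / len(sense)
print(round(avg_bias), round(avg_sense))   # 297 388
```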
<h4>Overall</h4>
<p>It seems helpful to collect the results above by functional block rather than time:</p>
<table class="spaced" style="font-size: 0.8em; margin-left: 2%; width: 96%; " cellspacing="0"><tr><th>Consumer</th><th>Current / µA</th><th>Time active / ms</th><th>Average current / µA</th></tr><tr><td><span class="caps">PIC </span>sleeping</td><td align="right">3</td><td align="right">982</td><td align="right">2.9</td></tr><tr><td><span class="caps">PIC </span>running</td><td align="right">382</td><td align="right">18</td><td align="right">6.9</td></tr><tr><td>Voltage reference</td><td align="right">390</td><td align="right">18</td><td align="right">7.0</td></tr><tr><td>Status <span class="caps">LED</span></td><td align="right">330</td><td align="right">18</td><td align="right">5.9</td></tr><tr><td>Monitor sense</td><td align="right">388</td><td align="right">12.8</td><td align="right">5.0</td></tr><tr><td>Monitor bias</td><td align="right">297</td><td align="right">12.8</td><td align="right">3.8</td></tr><tr><td class="alignr" colspan="3">Total</td><td class="topborder bottomborder alignr">31.5</td></tr></table>
<p>As a final check, asking the scope to average the current gives 32.8µA after about 600 cycles. However, the current consumption is very spiky, so I am not sure whether this measurement is reliable.</p>
<h2>Conclusions</h2>
<p>I wanted to build this to solve a couple of problems: a dark shed, and ignorance of low-power gadgets. The design above gives an average current consumption of about 32µA which easily meets the 50µA goal.</p>
<p>I have avoided putting error bounds on the figures above, mainly because most of the figures are correlated. However, the average consumption is dominated by the draw during the main current bump, which is about 1.8mA with an error surely lower than 0.2mA. So the error in the average consumption is likely to be better than ±3.2µA.</p>
<p>Although I did not try to reduce the current consumption any further, I think 15µA is within easy reach:</p>
<ul>
<li>Sampling the voltages and flashing the status every two seconds would be fine.</li>
<li>The voltage reference draws about 400µA: I think 100µA would do.</li>
<li>The sense currents—particularly the bias—are too high and could be reduced.</li>
</ul>
<p>In both analogue cases above, I think there’s a trade-off: higher currents stabilize more quickly, but I suspect that it’s better to reduce the current and wait a bit.</p>
<p>Such improvements are for a future release though.</p>
<p>Although I think it is quite surprising how much you can infer from measuring the total current draw (because of the variation over time), were I building the device now, I’d also include more jumpers so I could measure the current at various places directly.</p>
<p>Finally, besides gaining some insight into low-power microcontroller design, I can now get things from my shed at night without needing a torch. Very handy: especially now that the nights are getting lighter! </p>9362D336-830E-11E5-89A4-B03C06F700552015-11-04T16:10:19:19Z2016-01-18T00:00:43:43ZPrinting ProblemsMartin Oldfield<p>Brief notes on problems with printers. </p><p>In the past I’ve been pretty lucky writing documents in LaTeX and printing them. Usually I render the document to <span class="caps">PDF </span>using pdflatex, then print it from the Mac’s Preview app. Mostly this has worked without incident!</p>
<h2>Options from <code>lpr</code></h2>
<p>If you move outside the cozy convenience of Preview’s printer configuration dialog, into the ascetic austerity of the command line, you sometimes still need to control some of the printers’ options. Runes like this work:</p>
<pre><code>lpr -o media=A4 -o PageSize=A4 -o sides=two-sided-long-edge \
    -o InputSlot=Tray1 foo.pdf
</code></pre>
<h2>Stapling</h2>
<p>Some fancy printers can staple documents after they’re printed. Often you can enable this from the printer driver, but this is not an option if you’re printing from a <span class="caps">USB </span>stick plugged into the printer. This sort of thing, which encompasses stapling, folding, and sometimes even stitching is referred to as <em>finishing</em>.</p>
<p>In such circumstances, you can enable the stapler by embedding <a href="https://en.wikipedia.org/wiki/Printer_Job_Language"><span class="caps">PJL</span></a> commands into the file. I tried this with <a href="https://en.wikipedia.org/wiki/Printer_Command_Language"><span class="caps">PCL</span></a> files, but I think similar tricks work in PostScript.</p>
<p>The key runes are:</p>
<pre><code>@PJL SET FINISH=STAPLE</code></pre>
<p>Which you can interpolate with this Perl snippet:</p>
<pre><code>$pcl =~ s/(\@PJL)/\@PJL SET FINISH=STAPLE\n$1/;</code></pre>
<p>Printers will often let you pick one of several options for placing the staples, and the <code>SET STAPLEOPTION</code> <span class="caps">PJL </span>command can configure this. I’ve not explored this though.</p>
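<p>For completeness, here is a sketch of the same substitution in Python; the job fragment and the <span class="caps">CRLF </span>line ending are illustrative:</p>

```python
# A Python version of the Perl one-liner above: splice a PJL stapling
# command in front of the first @PJL line of an existing print job.
def enable_stapling(pcl: bytes) -> bytes:
    # Replace only the first occurrence, as the Perl s/// (no /g) does.
    return pcl.replace(b"@PJL", b"@PJL SET FINISH=STAPLE\r\n@PJL", 1)

# Illustrative fragment of a PCL job: UEL sequence then a PJL command.
job = b"\x1b%-12345X@PJL ENTER LANGUAGE=PCL\r\n"
print(enable_stapling(job))
```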
<h3>Ghostscript support</h3>
<p>It is possible that you’ll be able to enable this via a command line switch on gs in the near future. For more details see the <a href="http://bugs.ghostscript.com/show_bug.cgi?id=696314">enhancement request.</a></p>
<h2>Fixing questionable <span class="caps">PDF</span>s</h2>
<p>I have a couple of HP printers: a monochrome <span class="caps">P3010 </span>series, and a colour <span class="caps">M476</span>dw. The former has very few problems, but the latter has trouble printing files rendered with pdflatex: all the fonts get replaced by Courier.</p>
<p>Doubtless there is some misconfiguration or misunderstanding somewhere, but I couldn’t find it. Instead, running the <span class="caps">PDF </span>file through Ghostscript fixes the problem:</p>
<pre><code>$ gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=ok.pdf bad.pdf</code></pre>
<h2>Printing from <span class="caps">USB </span>on <span class="caps">SHARP </span>printers.</h2>
<p>There is a fine gotcha here: if the printer doesn’t recognize the file, it simply omits it when showing the directory listing. So, if you put PostScript files (or perhaps more accurately files with a .ps suffix) onto a <span class="caps">USB </span>stick, you won’t see them if the printer in question only understands .PCL files.</p>
<p>Incidentally, Ghostscript’s pxlcolor device is a good way to get <span class="caps">PCL </span>files:</p>
<pre><code>$ gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=pxlcolor -sOutputFile=foo.pcl foo.pdf</code></pre>
<h2>Tweaking the printer drivers in OS X</h2>
<p>I have a couple of HP printers and when I tried to print a <span class="caps">PDF </span>generated from LaTeX it worked perfectly on one, and failed with mangled fonts on the other. Printing the same file to the buggy printer from Windows worked, so I suspected the printer driver was at fault.</p>
<p>I <em>think</em> the problem stems from the Mac auto-detecting the wrong driver. Messing around on the Internet suggested that I should be using the driver in:</p>
<pre><code>/Library/Printers/PPDs/Contents/Resources/HP Color LaserJet Pro MFP M476.gz</code></pre>
<p>Happily it’s fairly easy to specify the driver manually with the <span class="caps">CUPS </span>web-interface.</p>
<p>To enable this:</p>
<pre><code>sudo cupsctl WebInterface=yes</code></pre>
<p>Then visit:</p>
<pre><code>http://localhost:631</code></pre>
<p>and follow the instructions. </p>06D92410-8C1B-11DE-AD1D-BD8092C74DB12009-08-18T17:17:19:19Z2015-11-26T08:32:51:51ZPlaying with PICs on MacOS XMartin Oldfield<p>The tools I use when playing with <span class="caps">PIC </span>microcontrollers </p><h2>Update</h2>
<p>I updated this in November 2015.</p>
<h2>Introduction</h2>
<p>Every once in a while I like playing with Microchip <span class="caps">PIC </span>microcontrollers. Partly this is historical: the first microcontrollers I used were <span class="caps">PIC</span>s, but I still enjoy writing code for them and they’re cheap-as-chips (plus I’ve already got a goodly number just lying around).</p>
<p>However, because this is a pleasure I enjoy quite infrequently, I sometimes forget how everything fits together, so it seemed sensible to document it.</p>
<p>Most of this information is available elsewhere online, both from <a href="http://www.microchip.com">Microchip’s website</a>, and from <a href="http://www.paintyourdragon.com/wordpress/?p=45">other places.</a></p>
<p>Almost all my <span class="caps">PIC </span>experience uses the relatively modern 16Fxxx series of chips, which Microchip refer to as Mid-Range Core devices.</p>
<p>I always program in <span class="caps">PIC </span>assembler using the <span class="caps">GNU </span>toolchain, and use <a href="http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en023805">Microchip’s <span class="caps">PIC</span>kit 2</a> programmer. There are probably other good choices, but this was the way I went.</p>
<h2>Hardware</h2>
<p>The <span class="caps">PIC</span>kit 2 described below is now obsolete, but mine still works. The <a href="http://www.microchip.com/Developmenttools/ProductDetails.aspx?PartNO=PG164130"><span class="caps">PIC</span>kit 3</a> is its spiritual successor.</p>
<p>The <a href="http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en023805"><span class="caps">PIC</span>kit 2</a> is a handy little gadget which sits between the <span class="caps">USB </span>bus and a <span class="caps">PIC.</span> It uses the <span class="caps">USB HID </span>protocols, which seems to make it ‘just work’ as far as the Mac’s concerned. Besides programming the <span class="caps">PIC, </span>the <span class="caps">PIC</span>kit 2 will also supply power to the target board assuming that the current is within <span class="caps">USB </span>limits.</p>
<p>To talk to the <span class="caps">PIC</span>kit 2, Microchip have made available a command line tool: <span class="caps">PK2CMD.</span> This is available in both source and binary formats on <a href="http://www.microchip.com/DevelopmentTools/ProductDetails.aspx?PartNO=PG164120">the Microchip site.</a></p>
<p>Microchip also produce a number of demo boards which are basically just a <span class="caps">PIC, </span>a few handy peripherals, and a small prototyping area. I’ve played with these so far:</p>
<ul>
<li>The <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/Low%20Pin%20Count%20User%20Guide%2051556a.pdf">Low Pin Count Demo Board,</a> which includes a <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/41262E.pdf"><span class="caps">PIC </span> 16F690.</a></li>
<li>The <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/41301A.pdf">28-pin Demo Board,</a> which includes a <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/41291F.pdf"><span class="caps">PIC </span> 16F886.</a></li>
</ul>
<h2>Software</h2>
<h3><span class="caps">PK2CMD</span></h3>
<p>This program talks to the <span class="caps">PIC</span>kit2. I just copy both the pk2cmd executable and the <span class="caps">PIC </span>data file to somewhere on my $PATH.</p>
<p>Key pk2cmd commands include:</p>
<ul>
<li>Identify a device:</li>
</ul>
<pre><code>pk2cmd -P</code></pre>
<ul>
<li>Upload a hex file:</li>
</ul>
<pre><code>pk2cmd -P pic16f690 -M -F foo.hex</code></pre>
<ul>
<li>Power up the target:</li>
</ul>
<pre><code>pk2cmd -P pic16f690 -T</code></pre>
<ul>
<li>Power down the target:</li>
</ul>
<pre><code>pk2cmd -P pic16f690 -W</code></pre>
<ul>
<li>Update the <span class="caps">PIC</span>kit 2 firmware:</li>
</ul>
<pre><code>pk2cmd -D PK2V023200.hex</code></pre>
<h3><span class="caps">GNU </span>toolchain</h3>
<p>Happily this is now available in homebrew:</p>
<pre><code>$ brew install gputils</code></pre>
<h3>A sample Makefile</h3>
<p>Once everything’s installed it’s just like normal. Here’s a sample Makefile:</p>
<pre><code>ASM_FLAGS = -p P16F690
PK2_FLAGS = -P PIC16F690
PK2_CMD = pk2cmd $(PK2_FLAGS)
TARGET = flash.hex
%.hex: %.asm
gpasm $(ASM_FLAGS) $<
all: $(TARGET)
test: all upload
clean:
rm -f *.o *.cod *.hex *.lst *.err
upload:
$(PK2_CMD) -M -F $(TARGET)
power_up:
$(PK2_CMD) -T
power_down:
$(PK2_CMD) -W
.PHONY: all clean upload power_up power_down</code></pre>
<h2>Missing bits</h2>
<p>Microchip provide more software for Windows users. Part of this is the <span class="caps">MPLAB IDE </span>which I’m happy to replace with emacs et al. On the other hand Microchip provide in-circuit debug support which has no equivalent on the Mac. </p>50C6D360-9303-11E5-B297-B48B17AC4B332015-11-24T23:29:58:58Z2015-11-25T22:35:43:43ZCheap XY plotting with ArduinosMartin Oldfield<p>Brief notes on making an XY display with a cheap Arduino clone and <span class="caps">ILI9340 </span>display. </p><p><img src="liss-pic.jpg" alt="" class="img_border" /></p>
<h2>The basic idea</h2>
<p>A while ago I <a href="../08/xy-arduino.html">experimented with plotting XY signals with an Arduino.</a> The key idea was to digitize a couple of analogue signals with the Arduino, then send the samples to a PC where they could be plotted as XY pairs. Recently, I’ve revisited this, but displaying the results on a 320×240 <span class="caps">LCD </span>screen connected via <span class="caps">SPI.</span></p>
<p>You can buy the <a href="http://www.adafruit.com/products/50">Arduino</a> and <a href="https://www.adafruit.com/products/1480">screen</a> from Adafruit for about $50 (£32), or from Chinese suppliers on eBay for about $9 (£6). I wanted to buy half-a-dozen, so took the latter route.</p>
<p>However, there is a problem with the Chinese displays: they need 3.3V logic levels, but the Arduino has 5V rails. The Adafruit display includes level-shifters to accommodate this.</p>
<p>Happily there is an easier hack: just replace the 5V regulator on the Arduino with a 3.3V equivalent e.g. the <a href="http://www.ti.com/product/tlv1117-33"><span class="caps">TLV1117</span>-33</a> from Texas Instruments. I am not sure if all Arduino clones use regulators with the same pinout, so check before doing this.</p>
<p>One other caveat: the regulator only affects power supplied by the jack socket, and not the <span class="caps">USB </span>port. I was going to power the Arduinos from batteries in production so that was fine, but for code development I used an Adafruit display board.</p>
<h2>Software</h2>
<p>The software is trivial: a simple marriage of the <span class="caps">TFT </span>example from Adafruit and some fast <span class="caps">ADC </span>code from Guy van den Berg. You can grab it from <a href="https://github.com/mjoldfield/xy-arduino-toy">GitHub</a> though.</p>
<p>No attempt is made to expire old points, nor is any reset provided beyond the master Arduino reset button. So, it is only useful if the signal is stationary.</p>
<h2>Wiring notes</h2>
<p>Three sets of wiring are needed:</p>
<ul>
<li>Signal. Attach the X-signal to A0 and the Y-signal to <span class="caps">A1.</span></li>
<li>The display. This is the only tricky part, so see the helpful diagram below.</li>
</ul>
<p><img src="xy-ard-wiring.png" alt="" class="img_border_small" /></p>
<ul>
<li>Power. Attach 6–9V to the power jack.</li>
</ul>
<h2>Crude specification</h2>
<p>As you can see, there is no input processing, so the signals are DC-coupled with a range from ground to Arduino’s supply voltage. If you’ve changed the regulator that will be 0–3.3V, otherwise it will be 0–5V.</p>
<p>The lack of low-pass filters means that aliasing artefacts are quite possible. Characterizing the useful frequency range is not trivial though because there are three different time scales:</p>
<ul>
<li>The time between samples: roughly 40μs;</li>
<li>The skew between X- and Y-samples: roughly 20μs;</li>
<li>The sample time: ‘short’ according to the Atmel datasheet.</li>
</ul>
<p>For periodic signals the middle term often dominates. See <a href="../08/xy-arduino.html">my earlier experiments</a> for a fuller discussion of this.</p>
<h2>Application</h2>
<p>This whole project forms part of my <a href="https://www.geocaching.com/seek/cache_details.aspx?wp=GC5WKF3">Cartesian Dualism geocache.</a> If you live near Cambridge in the <span class="caps">UK,</span> I hope you try to find it. </p>A8DCC082-3426-11E5-B580-8387CCDAEC282015-07-27T06:13:42:42Z2015-10-23T16:58:54:54ZAudi MMI WaypointsMartin Oldfield<p>Brief notes on exporting waypoints to the Audi <span class="caps">MMI </span>in a 2014 <span class="caps">A3. </span></p><h2>Similar work</h2>
<p>Mike Caddy wrote to me to tell me about a <a href="http://mcaddy.github.io/audipoi/">similar project.</a> I’ve not looked at the code, but it might be useful.</p>
<h2>Introduction</h2>
<p>As a keen, but somewhat optimistic, geocacher, I want to upload thousands of waypoints to my car’s satnav in the hope that this will make it easier to find caches.</p>
<p>In the past, I used a TomTom <span class="caps">GPS</span>r which understood the .ov2 format, but my new car, a 2014 Audi <span class="caps">A3, </span>has an inbuilt navigation system. In principle you can upload waypoints to this too: the myAudi website allows you to upload <span class="caps">GPX, KML, </span>&c. files, then download them as ‘Special destinations’ to an SD card. You then ask the car to read the SD card.</p>
<p>Here’s a typical file structure for the SD card:</p>
<pre><code class="small">total 8
drwxr-xr-x 4 mjo staff 136 19 Jul 20:45 PersonalPOI
-rw-r--r-- 1 mjo staff 1618 19 Jul 23:07 metainfo2.txt
PersonalPOI:
total 0
drwxr-xr-x 3 mjo staff 102 19 Jul 20:45 InfoFile
drwxr-xr-x 3 mjo staff 102 19 Jul 20:45 Package
PersonalPOI/InfoFile:
total 0
drwxr-xr-x 3 mjo staff 102 19 Jul 20:45 0
PersonalPOI/InfoFile/0:
total 0
drwxr-xr-x 3 mjo staff 102 19 Jul 20:45 default
PersonalPOI/InfoFile/0/default:
total 8
-rw-r--r-- 1 mjo staff 1448 19 Jul 23:07 Update.txt
PersonalPOI/Package:
total 0
drwxr-xr-x 3 mjo staff 102 19 Jul 20:45 0
PersonalPOI/Package/0:
total 0
drwxr-xr-x 12 mjo staff 408 19 Jul 23:07 default
PersonalPOI/Package/0/default:
total 88
-rw-r--r-- 1 mjo staff 28 19 Jul 23:07 PPOIversion.txt
drwxr-xr-x 4 mjo staff 136 19 Jul 21:45 bitmaps
-rw-r--r-- 1 mjo staff 221 19 Jul 23:07 bitmaps.xml
-rw-r--r-- 1 mjo staff 1302 19 Jul 23:07 categories.pc
-rw-r--r-- 1 mjo staff 1247 19 Jul 23:07 hashes.txt
-rw-r--r-- 1 mjo staff 269 19 Jul 23:07 lang_map.xml
-rw-r--r-- 1 mjo staff 11264 19 Jul 23:07 poidata.db
-rw-r--r-- 1 mjo staff 150 19 Jul 23:07 strings_de-DE.xml
-rw-r--r-- 1 mjo staff 150 19 Jul 23:07 strings_en-GB.xml
-rw-r--r-- 1 mjo staff 613 19 Jul 23:07 versions.xml
PersonalPOI/Package/0/default/bitmaps:
total 48
-rw-r--r-- 1 mjo staff 2342 27 Jul 07:12 image_1010.png
-rw-r--r-- 1 mjo staff 1862 27 Jul 07:12 image_1011.png
-rw-r--r-- 1 mjo staff 10161 19 Jul 23:07 stacking_2.png
-rw-r--r-- 1 mjo staff 10979 19 Jul 23:07 stacking_3.png</code></pre>
<h3>The reality</h3>
<p>However, in reality, using this is a <em>pain</em>. In principle Audi could teach the car about <span class="caps">GPX </span>files, avoiding the cloud entirely, or allow you to upload and download multiple types of data at once. In reality, you must:</p>
<ul>
<li>upload a separate file for each type of waypoint;</li>
<li>download a file which turns out to be a small Java program;</li>
<li>run that ignoring all the warnings about unsigned binaries;</li>
<li>let this program download yet more data;</li>
<li>save the data using the awful Java file-chooser.</li>
</ul>
<p>Having banished Java from my Mac, it was galling to have to reinstall it for this, particularly given all the nonsense Oracle try to foist on you when installing software.</p>
<h2>Standing back</h2>
<p>Although the implementation seems foolish to me, in essence the task is simple: given some coordinates convert them into a set of files compatible with the Audi’s <span class="caps">MMI.</span></p>
<p>If you look at the files from myAudi, you’ll see that it’s all fairly straightforward. After a bit of digging, it was obvious that:</p>
<ul>
<li>All the coordinate data are stored in a <span class="caps">SQL</span>ite3 database with <a href="http://sqlite.org/rtree.html">R*Tree</a> and <a href="http://sqlite.org/fts3.html">full-text search</a> extensions.</li>
<li>Most of the important files have checksums stored too: the algorithm is <span class="caps">SHA1.</span></li>
<li>There is no crypto involved, so we don’t have to worry about keys.</li>
</ul>
<h3><span class="caps">SQL</span>ite schema</h3>
<p>Here is the basic database schema:</p>
<pre><code class="small">CREATE VIRTUAL TABLE poicoord USING rtree(poiid INTEGER,
latmin REAL,
latmax REAL,
lonmin REAL,
lonmax REAL);
CREATE TABLE poidata(poiid INTEGER,
type INTEGER,
namephon TEXT,
ccode INTEGER,
zipcode TEXT,
city TEXT,
street TEXT,
housenr TEXT,
phone TEXT,
ntlimportance INTEGER,
exttype TEXT,
extcont TEXT,
warning TEXT,
warnphon TEXT,
CONSTRAINT PK_poidata PRIMARY KEY (poiid));
CREATE VIRTUAL TABLE poiname USING fts3 (name TEXT);</code></pre>
<p>Astute observers will note the lack of a <code>poiid</code> column in the <code>poiname</code> table, which breaks normal form! There appears to be an implicit assumption that the first row inserted into <code>poiname</code> corresponds to <code>poiid == 1</code>, and so on.</p>
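<p>The convention can be sketched with Python’s built-in <code>sqlite3</code> module (assuming, as most builds do, that its SQLite includes the FTS3 extension). The table and column names follow the schema above; the data are invented:</p>

```python
import sqlite3

# Sketch of the implicit poiid/rowid pairing described above.
# Table and column names follow the schema; the data are made up.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE poidata(poiid INTEGER, type INTEGER, city TEXT, "
           "CONSTRAINT PK_poidata PRIMARY KEY (poiid))")
db.execute("CREATE VIRTUAL TABLE poiname USING fts3 (name TEXT)")

# Insert rows in lock-step: the Nth row of poiname is assumed to
# describe poiid == N.
pois = [("Cache A", "Cambridge"), ("Cache B", "Oxford")]
for poiid, (name, city) in enumerate(pois, start=1):
    db.execute("INSERT INTO poidata(poiid, type, city) VALUES (?, 0, ?)",
               (poiid, city))
    db.execute("INSERT INTO poiname(name) VALUES (?)", (name,))

# Recover the pairing via FTS3's hidden rowid column.
row = db.execute("SELECT d.city FROM poiname n "
                 "JOIN poidata d ON d.poiid = n.rowid "
                 "WHERE n.name MATCH 'Cache B'").fetchone()
result = row[0]
```

Relying on insertion order like this is fragile, which is presumably why the myAudi tooling always regenerates the whole database rather than editing it.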
<h3><span class="caps">SHA</span>-1</h3>
<p>Most of the files are small, and their checksums are computed using normal <a href="https://en.wikipedia.org/wiki/SHA-1"><span class="caps">SHA</span>-1.</a> Larger files are first cut into 512kB chunks, and each chunk processed individually. The chunk size is specified in the files, so perhaps you could change it: I have not explored this.</p>
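<p>As a sketch, the chunking scheme might look like this in Python (the exact way each chunk’s digest is recorded in <code>hashes.txt</code> isn’t reproduced here):</p>

```python
import hashlib

CHUNK = 512 * 1024  # 512kB, the chunk size quoted in the myAudi files

def chunked_sha1(data: bytes, chunk_size: int = CHUNK):
    """Hash small files whole; cut larger ones into chunks and hash
    each chunk separately, as the MMI files appear to do."""
    if len(data) <= chunk_size:
        return [hashlib.sha1(data).hexdigest()]
    return [hashlib.sha1(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]
```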
<p>In one case a file contains its own checksum: what this really means is that the file contains the checksum of the file with that line removed.</p>
<h3>File sizes</h3>
<p>Reference is also made to file sizes. Confusingly, the size quoted for the <code>hashes.txt</code> file (which stores information about other files) is actually the total size of the files to which <code>hashes.txt</code> refers.</p>
<h3>Icons</h3>
<p>Each class of waypoints is associated with an icon. I used 33×33 pixel icons in my tests, and those numbers are implicitly embedded into magic constants in the code.</p>
<h3>Metadata</h3>
<p>I suspect you can have fun messing around with the metadata to promote your waypoints higher up the waypoint hierarchy, and tweaking their display on the map. I have not explored this.</p>
<h3>File names and version numbers</h3>
<p>I made guesses about the formats these should have, and hoped that here be <strong>no</strong> magick. Thus far it seems to be <span class="caps">OK.</span></p>
<h2>Proof of concept</h2>
<p>I wrote a quick proof-of-concept library in Perl. It takes the form of a single file, which you can <a href="https://github.com/mjoldfield/audi-waypoint-upload">grab from GitHub.</a></p>
<p>The file is quite large, in part because it contains uncompressed versions of some icons which appear to be included in every download.</p>
<p>It is <em>not</em> production quality code, it might not work, and it might break your car. Use it at your own risk. </p>3024F654-4D7C-11E5-B682-05CA1CD6F7302015-08-28T11:58:09:09Z2015-08-30T21:39:31:31ZXY plotting with ArduinosMartin Oldfield<p>Brief notes on plotting y(t) against x(t) using an Arduino Uno. </p><p> Having tried XY plotting signals with both <a href="./xy-scope.html">oscilloscopes</a> and <a href="./xy-sound.html">sound cards,</a> I wondered about a bare-bones approach using the <span class="caps">ADC</span>s in a microcontroller. To make things easy, I reached for an <a href="https://www.arduino.cc">Arduino.</a> In particular, an <a href="https://www.arduino.cc/en/Main/ArduinoBoardUno">Arduino Uno</a>, which is essentially an <a href="https://en.wikipedia.org/wiki/ATmega328">ATmega328</a> microcontroller plus a link to a host computer. We can use this link to display the results on a computer rather than going to the hassle of driving a screen from the Arduino itself.</p>
<p>On the face of it, using an Arduino seems doomed. Although it has 8 analogue inputs, these channels are multiplexed onto a single <span class="caps">ADC.</span> Furthermore, the <span class="caps">ADC </span>is slow: the standard Arduino software only manages about 10k samples per second: about a quarter the frequency of the <span class="caps">ADC </span>in a sound card.</p>
<p>However, this is a bad way to think about the problem. In a normal sampled data system, like the sound interface, the sampling rate is critical. The <a href="https://en.wikipedia.org/wiki/Nyquist&ndash;Shannon_sampling_theorem">Nyquist Theorem</a> tells us that if we sample at a frequency \(2 f\) we can’t distinguish signals with frequencies \(f \pm \Delta\). So, in a sound card the input signal is usually low-pass filtered to remove frequencies above \(f\). However, this is not the case with the Arduino.</p>
<p>For now, pretend that the Arduino can capture two channels simultaneously, and that we set it up to make a pair of readings roughly every second. Would this be able to generate an XY plot of a stationary signal?</p>
<p>Happily it could, modulo a couple of important caveats:</p>
<ul>
<li>To get reasonable results, experiment suggests we need a few thousand coordinates, so we’d need to wait a long time: perhaps half-an-hour for a simple figure and many hours for something more complicated. In other words we care about the sampling rate not because it governs what we can see, but how long it takes to collect the data.</li>
<li>Each reading needs to be accurate. Suppose we fed the <span class="caps">ADC </span>a very fast signal: this would be doomed to failure, not because we took samples infrequently, but because none of the samples would be representative of the signal at that point in time. The front-end of the <span class="caps">ADC </span>has a sample-and-hold circuit which is basically a switch and a capacitor: when a particular channel is chosen it is connected to the capacitor by the switch and current flows until the voltage across the capacitor matches the input voltage. We need to ensure that the switch is closed only briefly, and that enough current can flow to charge the capacitor in that time. The ATmega data sheet isn’t explicit about the values, beyond noting that it’s not likely to be a problem. For now, let’s not argue with their optimism.</li>
</ul>
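<p>To make the settling argument concrete, here is a rough estimate using the usual RC charging model. The resistance and capacitance below are illustrative figures of the sort quoted in AVR application notes, not datasheet guarantees:</p>

```python
import math

# Illustrative settling-time estimate for the ADC's sample-and-hold.
# R and C are assumptions (typical AVR-class figures), not guarantees.
R = 10e3    # assumed source impedance, ohms
C = 14e-12  # assumed sample-and-hold capacitance, farads

# The capacitor charges as V(t) = Vin * (1 - exp(-t/RC)). For a 10-bit
# ADC we want the residual error below half an LSB, i.e.
# exp(-t/RC) < 1/2048, so t > RC * ln(2^11).
t_settle = R * C * math.log(2 ** 11)
print(f"settling time ~ {t_settle * 1e6:.2f} us")
```

With these numbers the settling time comes out around a microsecond, comfortably inside even the sped-up 20µs conversion window, which is consistent with the datasheet’s optimism.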
<p>The one concern is the issue of simultaneous samples, which we explicitly ignored above. On real Arduinos the sampling rate <em>does</em> matter because it sets a lower bound on the time between samples on the \(x\) and \(y\) channels.</p>
<p>As we mentioned above, the standard Arduino software manages about 9,600 conversions per second on the Uno, which means that each one takes about 100µs, and thus the \(x\) and \(y\) channels will be skewed by at least this amount. Will this matter?</p>
<p>Let’s consider a specific example: will it be able to plot the 70Hz frame rate text we displayed with scopes and sound cards? Setting the frame rate to 70Hz was close to the ideal speed for the sound card, so we know that the important information in the signals was contained in frequencies up to about 20kHz.</p>
<p>In rough terms, if the 20kHz components are important, we expect that the signal changes in important ways on a time scale of about 50µs. Further, it suggests that points on the signal 100µs apart will often be unconnected. This does not sound promising!</p>
<p>Let’s simulate the effect. The plots below show the effects of delaying the \(y\) channel by multiples of about 17µs. Although our analysis above was crude, we see that things do indeed begin to degrade significantly at about 50µs.</p>
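<p>The simulation amounts to sampling \(y\) slightly later than \(x\). A minimal Python sketch, using a pair of quadrature sines rather than the real text signal:</p>

```python
import math

def skewed_xy(signal_x, signal_y, t_step, delay, n):
    """Sample x(t) and y(t + delay): the skew an XY display sees
    when the two channels share one multiplexed ADC."""
    return [(signal_x(i * t_step), signal_y(i * t_step + delay))
            for i in range(n)]

# Example: a circle drawn by two quadrature 1kHz sines. With zero
# delay the points satisfy x^2 + y^2 = 1; a delay distorts the figure.
f = 1000.0
x = lambda t: math.cos(2 * math.pi * f * t)
y = lambda t: math.sin(2 * math.pi * f * t)

clean  = skewed_xy(x, y, 1e-6, 0.0,    1000)
skewed = skewed_xy(x, y, 1e-6, 100e-6, 1000)
```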
<h3>Original signal</h3>
<p><img src="shft00000.png" alt="" class="img_noborder" /></p>
<h3>17µs delay</h3>
<p><img src="shft00005.png" alt="" class="img_noborder" /></p>
<h3>35µs delay</h3>
<p><img src="shft00010.png" alt="" class="img_noborder" /></p>
<h3>52µs delay</h3>
<p><img src="shft00015.png" alt="" class="img_noborder" /></p>
<h3>70µs delay</h3>
<p><img src="shft00020.png" alt="" class="img_noborder" /></p>
<h3>105µs delay</h3>
<p><img src="shft00030.png" alt="" class="img_noborder" /></p>
<h3>Real data</h3>
<p>So much for theory, let’s look at the actual data:</p>
<p><img src="ard-bad.gif" alt="" class="img_noborder" /></p>
<p>As we expected, it looks pretty grotty, but the similarity to the 100µs prediction is pleasing. Not only is it nice to see theory and experiment agree, it gives us some hope that if we could speed up the Arduino’s <span class="caps">ADC, </span>we might be successful.</p>
<h3>Speeding it up a notch</h3>
<p>Other people have bemoaned the Arduino’s slow <span class="caps">ADC</span>s, and noted that you can speed them up without many side-effects. Of particular note is <a href="http://www.microsmart.co.za/technical/2014/03/01/advanced-arduino-adc/">a fine article by Guy van den Berg</a> in which he describes how to reduce the <span class="caps">ADC </span>sample time to about 20µs with only minor loss in accuracy.</p>
<p>Happily this change is easy to implement, and works perfectly!</p>
<p><img src="ard-good.gif" alt="" class="img_noborder" /></p>
<h2>Practical details</h2>
<p>Having shown the results, let’s look at the recipe. The hardware is simple: <a href="https://www.arduino.cc/en/Main/ArduinoBoardUno">an Arduino Uno.</a> Connect the \(x\) and \(y\) signals to the A0 and A1 analogue inputs. Most of the time you will need some signal conditioning too: the Arduino wants 0–5V. I was generating 0–3.3V, so I didn’t bother.</p>
<p>On top of this, we will need:</p>
<ul>
<li>Firmware for the Arduino, to digitize the signals, and send them to an attached <span class="caps">PC.</span></li>
<li>Software for the PC to display the data. In keeping with the Arduino style, this is written in <a href="https://processing.org">Processing,</a> and runs on Linux, OS X and Windows.</li>
</ul>
<h3>Arduino Firmware</h3>
<p>The code is almost trivial:</p>
<pre><code>// Crank up ADC speed as per
// http://www.microsmart.co.za/technical/2014/03/01/advanced-arduino-adc/
const unsigned char PS_16  = (1 &lt;&lt; ADPS2);
const unsigned char PS_32  = (1 &lt;&lt; ADPS2) | (1 &lt;&lt; ADPS0);
const unsigned char PS_64  = (1 &lt;&lt; ADPS2) | (1 &lt;&lt; ADPS1);
const unsigned char PS_128 = (1 &lt;&lt; ADPS2) | (1 &lt;&lt; ADPS1) | (1 &lt;&lt; ADPS0);

void setup()
{
  // Set serial port to the maximum commonly used speed
  Serial.begin(115200);

  // ADC prescaler to 16 =&gt; 1MHz clock with 16MHz part =&gt; ~20us per sample
  ADCSRA &amp;= ~PS_128;
  ADCSRA |=  PS_16;
}

void loop()
{
  int x = analogRead(A0);
  int y = analogRead(A1);

  Serial.print(x);
  Serial.print(" ");
  Serial.println(y);

  delayMicroseconds(250 + random(250));
}</code></pre>
<p>As you can see, the key idea is to sample channels A0 and <span class="caps">A1, </span>then send them over the serial port in <span class="caps">ASCII </span>decimal. It would be more efficient to send the data in binary, but using text is easy to debug and trace. It also makes it easy to plot the data with other software: for example you could just capture the data then plot them with Gnuplot.</p>
<p>The only subtle point is the delay: we will discuss this below.</p>
<h3>Display software</h3>
<p>Happily this is pretty simple too:</p>
<pre><code>import processing.serial.*;

color fg = color(0,255,0);
color bg = color(0,0,0);

String portName = "/dev/cu.usbmodem14621";
Serial myPort;

void setup()
{
  size(1024, 1024);
  background(bg);
  noSmooth();

  myPort = new Serial(this, portName, 115200);
}

void draw()
{
  while(myPort.available() &gt; 0)
  {
    int lf = 10;
    String s = myPort.readStringUntil(lf);
    if (s != null)
    {
      String[] ss = split(trim(s), " ");
      if (ss.length == 2)
      {
        int x = int(ss[0]) + int(randomGaussian());
        int y = height - int(ss[1]) + int(randomGaussian());
        set(x,y,fg);
      }
    }
  }
}</code></pre>
<p>Rather irritatingly, you will need to modify the definition of <code>portName</code> to reflect your Arduino. Once that’s done, you need only start the program and watch the XY-plot appear. Typically this takes a few seconds.</p>
<p>You will notice that the coordinates are plotted with a bit of extra noise: this serves to enlarge points which occur frequently, making them more prominent.</p>
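<p>The trick is easy to demonstrate: without jitter, repeated coordinates all land on one pixel, whereas with roughly a pixel of Gaussian noise they spread into a small blob whose density reflects how often the point occurs. A Python sketch mirroring the <code>int(randomGaussian())</code> rounding above:</p>

```python
import random

def jittered_pixels(points, seed=42):
    """Spread repeated coordinates over neighbouring pixels, much as
    the Processing sketch does with int(randomGaussian())."""
    rng = random.Random(seed)
    return [(x + int(rng.gauss(0, 1)), y + int(rng.gauss(0, 1)))
            for x, y in points]

# A point hit 500 times occupies one pixel without jitter,
# but a small, dense blob with it.
hits  = [(512, 512)] * 500
plain = set(hits)
blob  = set(jittered_pixels(hits))
```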
<h3>The wisdom of pausing</h3>
<p>Perhaps the only subtlety in the whole project is the need to include a random delay in the Arduino firmware. When we analyzed the system, we assumed that the sampling points would be randomly distributed in time. If we include a long random delay between the samples it is clear that the assumption is justified.</p>
<p>However, without it, the samples tend to occur in clumps when there is space in the serial port’s buffer. Between those gaps, no data are sampled, which leads to gaps in the XY-plot. Eventually, these gaps will fill, but it can take a surprisingly long time. Conversely with the delay the dots are plotted randomly, and the text appears slowly as though coming through the fog.</p>
<p>The form of the delay isn’t critical: the precise form shown distributes the samples well and doesn’t reduce the net sample rate very much.</p>
<h2>Alternatives</h2>
<p>Although this article focusses on the Arduino Uno and the ATmega328, the basic principles are more generally applicable. Many newer microcontrollers have better <span class="caps">ADC</span>s though, which makes things easier.</p>
<p>For example, had we used a <a href="http://www.pjrc.com/teensy/teensy31.html">Teensy 3.1</a> instead of the Arduino, we would enjoy an <span class="caps">ARM</span> Cortex-M4 chip which boasts two independent <span class="caps">ADC</span>s which would allow simultaneous reading. Much easier, but where’s the fun ? </p>B1127CA0-4917-11E5-8198-FE901CD6F7302015-08-22T21:48:53:53Z2015-08-26T10:36:24:24ZXY plotting with sound cardsMartin Oldfield<p>Brief notes on plotting y(t) against x(t) using sound cards. </p><p>Although the natural way to plot two signals against each other is with a dual-channel oscilloscope, regrettably not everyone has such an animal. However, many people do have a dual channel analogue input device for their computer in the form of a sound interface. So it’s natural to ask if we could use that instead.</p>
<p>It is worth saying that this is not a new idea: just ask <a href="https://www.google.co.uk/search?q=sound+card+oscilloscope">Google,</a> or follow <a href="http://www.instructables.com/id/Use-Your-Laptop-as-Oscilloscope/#step0">this Instructable.</a></p>
<p>Even if this works, it is important to realize:</p>
<ul>
<li>to get two input channels, we will need a stereo input;</li>
<li>most sound cards will only digitize signals in the 20Hz–20kHz audio band;</li>
<li>for most applications you’ll need some sort of input protection or processing.</li>
</ul>
<p>Computers are expensive, and so I was wary of using the internal sound hardware. Instead, I tried some cheap <span class="caps">USB </span>sound interfaces: happily these worked tolerably well.</p>
<p>Although using an external sound interface connected by <span class="caps">USB </span>reduces the risk of destroying the computer, it doesn’t completely protect it. Proceed at your own risk!</p>
<h2>Cheap <span class="caps">USB </span>sound interfaces</h2>
<p>I tried three different sound cards, and perhaps unsurprisingly found that cost mattered. I made some <a href="./sound-card-data.html">more extensive notes</a> elsewhere.</p>
<h3>El-cheapo dongle</h3>
<p><img src="dongle.jpg" alt="" class="img_noborder_small" /></p>
<p>At about £1 this was the cheapest device. It was also a complete waste of money, because it has only one input channel. It’s probably fine for other jobs though.</p>
<h3>C-Media <span class="caps">CM6206 </span>cards</h3>
<p><img src="cm6206.jpg" alt="" class="img_noborder_small" /></p>
<p>Lots of people seem to be making interfaces based around the <a href="http://www.cmedia.com.tw/ProductsDetail/page-p/C1Serno-25/C2Serno-26/C3Serno-0/PSerno-23.html">C-Media <span class="caps">CM6206</span></a> (and cousins). They are cheap (about £7), and full-featured: 5.1 analogue outputs, stereo inputs, and optical input and output. Sadly though, the frequency response isn’t great. I measured the -3dB points at about 42Hz and 19.6kHz (44.1kHz sampling rate).</p>
<h3>The Behringer <span class="caps">UCA202</span></h3>
<p><img src="uca202.jpg" alt="" class="img_noborder_small" /></p>
<p><a href="http://www.behringer.com/EN/Products/UCA202.aspx">This</a> was the cheapest well-regarded <span class="caps">USB </span>interface on Amazon, costing about £23.</p>
<p>The bandwidth was better than the generic <span class="caps">CM6206 </span>units: the -3dB points were at about 4Hz and 23.6kHz (48kHz sampling rate).</p>
<p>There is one oddity with the card: it inverts the audio signal. You can either deal with this in software, or rotate the image by 180° afterwards.</p>
<h2>Software</h2>
<p>Although there may be good solutions on Windows or Linux, I couldn’t find a good way to do this on OS X:</p>
<ul>
<li><a href="http://dogparksoftware.com/iSpectrum.html">iSpectrum</a> does not have an XY-mode.</li>
<li><a href="http://www.faberacoustical.com/products/signalscope_pro/">SignalScope Pro</a> will do the job but costs $150!</li>
</ul>
<p>However, if we are looking at static displays, we can sidestep the problem by dividing it into two parts:</p>
<ul>
<li>First, record a short section of the signal—any program which can record audio will do this.</li>
<li>Second, plot the data in the <span class="caps">WAV </span>file. I wrote a program to do this: you might know of better ways.</li>
</ul>
<h3>Saving the data to a <span class="caps">WAV </span>file</h3>
<p>There are a couple of good, free, programs for grabbing the data. If you like graphical applications, then <a href="https://en.wikipedia.org/wiki/Audacity_%28audio_editor%29">Audacity</a> fits the bill. It runs on OS X, Windows and Linux, and it’s easy to select the <span class="caps">USB </span>sound interface, capture the data, then export them as a <span class="caps">WAV </span>file.</p>
<p>Alternatively if you prefer command-line tools, you’ll like <a href="https://en.wikipedia.org/wiki/SoX">SoX.</a> Again it’s cross-platform, and on OS X you can install it with homebrew:</p>
<pre><code>$ brew install sox</code></pre>
<p>The only problem with SoX is that I couldn’t get it to select the correct input device. However, you can do that with the ‘Sound’ control in System Preferences. Once installed, these runes will save 5 seconds of data to foo.wav:</p>
<pre><code>$ sox -d -e signed-integer -b 16 foo.wav trim 0 5</code></pre>
<p>If you’re using the Behringer card, SoX can even invert the signals for you on the fly:</p>
<pre><code>$ sox -d -e signed-integer -b 16 foo.wav trim 0 5 remix 1i 2i</code></pre>
<h3>Plotting the <span class="caps">WAV </span>file data</h3>
<p>I wrote some (bad) python to do this, which you can see on <a href="https://github.com/mjoldfield/xy-plot-wav">GitHub</a> or just <a href="https://raw.githubusercontent.com/mjoldfield/xy-plot-wav/master/xy-plot-wav.py">download.</a></p>
<p>I was lazy and so my code assumes that you’ll give it a 16-bit signed <span class="caps">PCM WAV </span>file. It crashes unceremoniously if you use other formats.</p>
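<p>The parsing itself needs nothing beyond the standard library. A sketch of reading a 16-bit stereo PCM WAV file, with a round-trip check on a tiny synthetic file:</p>

```python
import struct
import tempfile
import wave

def read_stereo_wav(path):
    """Read a 16-bit signed PCM stereo WAV and return (left, right)
    sample lists. Other formats are rejected rather than guessed at."""
    with wave.open(path, "rb") as w:
        if w.getsampwidth() != 2 or w.getnchannels() != 2:
            raise ValueError("expected 16-bit stereo PCM")
        frames = w.readframes(w.getnframes())
    # Frames interleave left and right as little-endian signed shorts.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return samples[0::2], samples[1::2]

# Round-trip check: write three stereo frames, then read them back.
left_in, right_in = [0, 1000, -1000], [5, -5, 32767]
frames = struct.pack("<6h", *[s for pair in zip(left_in, right_in)
                                for s in pair])
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
    path = f.name
with wave.open(path, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(frames)
left, right = read_stereo_wav(path)
```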
<p>All the clever bits came from others:</p>
<ul>
<li><a href="http://wiki.scipy.org/Cookbook/EyeDiagram">Plotting Eye Diagrams from the SciPy cookbook;</a></li>
<li>the inevitable helpful article on <a href="http://stackoverflow.com/questions/23154400/read-the-data-of-a-single-channel-from-a-stereo-wave-file-in-python">Stackoverflow.</a></li>
</ul>
<p>You will need to have the numpy, scipy and matplotlib packages installed.</p>
<p>If you do have the right libraries and data to hand, it’s easy to use:</p>
<pre><code>$ python xy-plot-wav.py foo.wav bar.wav ...</code></pre>
<h2>Results</h2>
<p>I essentially repeated the experiments I tried with <a href="./xy-scope.html">XY plotting on scopes.</a></p>
<p>In all of the plots below, you can see the results for both the Behringer (on the left) and the <span class="caps">CM6206 </span>(on the right) interfaces. The frame-rate increases down the page: the somewhat odd numbers simply reflect choosing round numbers for delay constants and speeds in the code generating the signals.</p>
<p>Alert readers might notice that sometimes the Behringer signals appear to be larger despite this unit having lower sensitivity. The explanation is simple: to prevent clipping the inputs were sometimes scaled.</p>
<h3>Drawing a square</h3>
<p>I began by drawing squares. The Behringer was sampling at 48kHz, the <span class="caps">CM6206 </span>at 44.1kHz.</p>
<p>We can characterize three regimes:</p>
<ul>
<li>At low frequencies the shape is rotated, and, in the case of the <span class="caps">CM6206, </span>reduced in size. Physically, the rotation is a consequence of the phase shift in the interfaces’ low-frequency filter, and the shrinking a consequence of its attenuation. The relatively poor low-frequency performance of the <span class="caps">CM6206</span>-based interface is clear.</li>
<li>There is a surprisingly narrow regime at intermediate frequencies where the square is drawn faithfully.</li>
<li>At high frequencies the shape degrades. On the <span class="caps">CM6206, </span>the degradation is broadly consistent with the intuitions gained from the scope experiments, but something strange happens on the Behringer.</li>
</ul>
<p>My hardware for generating the signals was not able to go any faster, which was a bit of a shame. It would be nice to know if the Behringer plot gets yet crazier at higher frequencies!</p>
<h4>11Hz frame rate</h4>
<p><img src="behr-0011-square.png" alt="" class="img_noborder_2up" /> <img src="cm-0011-square.png" alt="" class="img_noborder_2up" /></p>
<h4>22Hz frame rate</h4>
<p><img src="behr-0022-square.png" alt="" class="img_noborder_2up" /> <img src="cm-0022-square.png" alt="" class="img_noborder_2up" /></p>
<h4>90Hz frame rate</h4>
<p><img src="behr-0090-square.png" alt="" class="img_noborder_2up" /> <img src="cm-0090-square.png" alt="" class="img_noborder_2up" /></p>
<h4>350Hz frame rate</h4>
<p><img src="behr-0350-square.png" alt="" class="img_noborder_2up" /> <img src="cm-0350-square.png" alt="" class="img_noborder_2up" /></p>
<h4>1.3kHz frame rate</h4>
<p><img src="behr-1300-square.png" alt="" class="img_noborder_2up" /> <img src="cm-1300-square.png" alt="" class="img_noborder_2up" /></p>
<h4>2.5kHz frame rate</h4>
<p><img src="behr-2500-square.png" alt="" class="img_noborder_2up" /> <img src="cm-2500-square.png" alt="" class="img_noborder_2up" /></p>
<h4>4.5kHz frame rate</h4>
<p><img src="behr-4500-square.png" alt="" class="img_noborder_2up" /> <img src="cm-4500-square.png" alt="" class="img_noborder_2up" /></p>
<h4>7.2kHz frame rate</h4>
<p><img src="behr-7200-square.png" alt="" class="img_noborder_2up" /> <img src="cm-7200-square.png" alt="" class="img_noborder_2up" /></p>
<h3>Drawing text</h3>
<p>When drawing text, we can easily identify three temporal scales:</p>
<ul>
<li>The roughly piecewise constant signal in the y-direction that corresponds to the line of text.</li>
<li>The slow sawtooth signal in the x-direction which corresponds to moving our pen along the line.</li>
<li>The high frequency squiggles corresponding to each letter and the gaps between them.</li>
</ul>
<p>The challenge in rendering text is to make the lowest frequencies high enough to defeat the low-frequency filters, whilst not pushing too many of the highest frequencies beyond the sound interface’s reach.</p>
<p>In broad terms, the Behringer manages this, but the <span class="caps">CM6206 </span>fails. Even with the Behringer though, the frame rate needs to be chosen carefully to get good results: the ratio of the fastest to slowest acceptable frame rate is only about three.</p>
<p>Even in the acceptable range, there is a clear compromise: if the frequency scale is low enough to preserve the high-frequency structure in the letters, the y-component decays noticeably along the line and both lines of text slope towards the origin.</p>
<p>Note: The Behringer was sampling at 48kHz, the <span class="caps">CM6206 </span>at 44.1kHz.</p>
<h4>11Hz frame rate</h4>
<p><img src="behr-0011-text.png" alt="" class="img_noborder_2up" /> <img src="cm-0011-text.png" alt="" class="img_noborder_2up" /></p>
<h4>39Hz frame rate</h4>
<p><img src="behr-0039-text.png" alt="" class="img_noborder_2up" /> <img src="cm-0039-text.png" alt="" class="img_noborder_2up" /></p>
<h4>70Hz frame rate</h4>
<p><img src="behr-0070-text.png" alt="" class="img_noborder_2up" /> <img src="cm-0070-text.png" alt="" class="img_noborder_2up" /></p>
<h4>161Hz frame rate</h4>
<p><img src="behr-0161-text.png" alt="" class="img_noborder_2up" /> <img src="cm-0161-text.png" alt="" class="img_noborder_2up" /></p>
<h4>206Hz frame rate</h4>
<p><img src="behr-0206-text.png" alt="" class="img_noborder_2up" /> <img src="cm-0206-text.png" alt="" class="img_noborder_2up" /></p>
<h3>Drawing dots</h3>
<p>In this final experiment we feed quadrature square waves into the sound interface. Both interfaces were sampling at 44.1kHz.</p>
<p>The low frequency plots look strange at first, but are relatively easy to understand. The ‘true’ vertices of the square lie not on the corners of the square seen on the pictures, but rather further out on the far ends of the yellow diagonal traces.</p>
<p>Imagine that one of the square waves has just changed state, and we are sitting on one of those dots. If the wave’s frequency is sufficiently low the signal will begin to decay towards zero along the yellow trace. When the next transition occurs we’ll dash along the thin green line to the next vertex. Add a dash of overshoot and ringing for the usual high-frequency issues, and we have explained the plots.</p>
<p>This regime persists until about 2kHz, though the Behringer only shows the problem significantly up to about 50Hz. Once we hit 2kHz, the square degrades in much the manner predicted by the theory (for more details see the <a href="./xy-scope.html">scope article</a>):</p>
<p><img src="q-f.svg" alt="" class="img_noborder_small" /></p>
<p>Note that this happens at about the same frequency for both cards because the underlying issue is that harmonics of the square waves are being pushed above the cards’ upper frequency limit, which is roughly 20kHz for both cards.</p>
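<p>The harmonic argument is easy to make concrete: a square wave of frequency \(f\) contains only the odd harmonics \(f, 3f, 5f, \ldots\), so an idealized 20kHz brick-wall cutoff keeps just those below it. A small sketch:</p>

```python
def surviving_harmonics(fundamental, cutoff=20e3):
    """Odd harmonics of a square wave that pass an idealized
    brick-wall low-pass filter at `cutoff` (a simplification of the
    cards' real anti-aliasing filters)."""
    harmonics = []
    n = 1
    while n * fundamental <= cutoff:
        harmonics.append(n * fundamental)
        n += 2
    return harmonics
```

At a 2kHz fundamental five harmonics survive; by around 7kHz only the fundamental is left, which is why the square collapses towards a circle; above 20kHz nothing passes at all.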
<p>On the <span class="caps">CM6206 </span>this process continues until only the fundamental frequency is passed by the card’s filtering, and we see a circle.</p>
<p>On the Behringer though, things are rather different. The 8kHz plot shows significant anti-correlation of the x- and y-axes. Indeed, at 10.5kHz, a reasonable model is \(x + y = 0\)! This doesn’t persist though: the 15.8kHz plot looks much like its 7.8kHz cousin. Presumably the sophisticated anti-aliasing filters on the interface have rather complicated failure modes too. It is tempting to explore this more, but that’s a job for another day.</p>
<p>Finally, above 20kHz, both cards, as expected, reject the incoming signals. We do see something, but it is hard to explain. Again, a task for another day!</p>
<h4>24Hz frame rate</h4>
<p><img src="behr-00024-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-00024-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>50Hz frame rate</h4>
<p><img src="behr-00050-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-00050-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>190Hz frame rate</h4>
<p><img src="behr-00190-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-00190-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>390Hz frame rate</h4>
<p><img src="behr-00390-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-00390-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>1950Hz frame rate</h4>
<p><img src="behr-01950-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-01950-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>2600Hz frame rate</h4>
<p><img src="behr-02600-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-02600-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>3900Hz frame rate</h4>
<p><img src="behr-03900-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-03900-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>7800Hz frame rate</h4>
<p><img src="behr-07800-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-07800-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>10.5kHz frame rate</h4>
<p><img src="behr-10500-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-10500-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>15.8kHz frame rate</h4>
<p><img src="behr-15800-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-15800-sq.png" alt="" class="img_noborder_2up" /></p>
<h4>25.4kHz frame rate</h4>
<p><img src="behr-25400-sq.png" alt="" class="img_noborder_2up" /> <img src="cm-25400-sq.png" alt="" class="img_noborder_2up" /></p>
<h2>A mobile solution</h2>
<p>Although OS X lacks a good XY-scope program, on iOS we can use the aptly named <a href="https://itunes.apple.com/gb/app/oscilloscope/id388636804?mt=8">Oscilloscope app.</a> Although not free, it is only £4.</p>
<p>We will still need a <span class="caps">USB </span>sound interface though, which you can connect to the iPhone/iPad with a <a href="http://www.apple.com/uk/shop/product/MD821ZM/A/lightning-to-usb-camera-adapter">Lightning to <span class="caps">USB </span>camera adapter.</a> Sadly this costs £25!</p>
<p>Of the <span class="caps">USB </span>sound interfaces I tried, only the <a href="http://www.behringer.com/EN/Products/UCA202.aspx">Behringer</a> worked: the others drew too much current.</p>
<p>Most of the plots are effectively identical to those generated above, so I include just one: text at the 70Hz frame rate, sampled at 44.1kHz:</p>
<p><img src="ios-0070-text.png" alt="" class="img_noborder" /></p>
<h2>Conclusions</h2>
<p>It seems clear that you can use a sound card to plot a couple of signals in XY mode. However, I found it harder to get good results than I had anticipated, and wouldn’t really recommend it.</p>
<p>Condensing the whole thing into three lessons:</p>
<ul>
<li>The DC/low-frequency performance of the <span class="caps">CM6206 </span>interface is poor.</li>
<li>The high-frequency performance of the Behringer is complicated.</li>
<li>If you are drawing text in the obvious way, the audio spectrum is surprisingly cramped. </li>
</ul>15B2C09C-4B08-11E5-8BF1-22531DD6F7302015-08-25T09:02:42:42Z2015-08-26T10:35:56:56ZSound card dataMartin Oldfield<p>Crude characterization of the Behringer <span class="caps">UCA202 </span>and a generic <span class="caps">CM6206 </span>based interface. </p><p>I was interested in using <a href="./xy-sound.html">sound cards to plot pairs of signals in XY mode,</a> and so measured the performance of a couple of popular <span class="caps">USB </span>sound interfaces.</p>
<h2>Basic set up</h2>
<p>I fed a 1V peak-to-peak sine wave into the input of the units, then grabbed a 10s sample with <a href="https://en.wikipedia.org/wiki/SoX">SoX,</a> and looked at the statistics:</p>
<pre><code>$ sox -d foo.wav trim 0 10 stat
Input File : 'default' (coreaudio)
Channels : 2
Sample Rate : 48000
Precision : 32-bit
Sample Encoding: 32-bit Signed Integer PCM
In:0.00% 00:00:10.01 [00:00:00.00] Out:480k [ | ] Clip:0
Samples read: 960000
Length (seconds): 10.000000
Scaled by: 2147483647.0
Maximum amplitude: 0.383392
Minimum amplitude: -0.383575
Midline amplitude: -0.000092
Mean norm: 0.243714
Mean amplitude: -0.000053
RMS amplitude: 0.270708
Maximum delta: 0.001953
Minimum delta: 0.000000
Mean delta: 0.000744
RMS delta: 0.000835
Rough frequency: 23
Volume adjustment: 2.607 </code></pre>
<p>From this I extracted the ‘RMS amplitude’ at various frequencies.</p>
<p>I also used the sum of the ‘Maximum amplitude’ and ‘Minimum amplitude’ to get some measure of the interface’s DC-offset. This isn’t perfect, and is likely to be noisy because we are using extremal values.</p>
<p>Also, our logging window is not synchronized to the incoming signal so we are unlikely to see an integral number of cycles. The incomplete cycle will have a non-zero DC component, and at low frequencies this might be a significant contribution.</p>
<p>If there are \( n \) cycles the worst-case fractional contribution will be \( 1 / (\pi n) \). At 20Hz, \( n = 200 \), which implies a fractional noise level of about 0.16%.</p>
<p>I made no attempt to measure the phase changes at different frequencies.</p>
<h2>Results</h2>
<p>It is simplest to just see the frequency responses plotted on a log-log scale:</p>
<p><img src="sound-cards.svg" alt="" class="img_noborder" /></p>
<h3>Behringer <span class="caps">UCA202</span></h3>
<p><img src="uca202.jpg" alt="" class="img_noborder_small" /></p>
<p>This sells for about £23 and seems well-regarded by Amazon reviewers. It offers a stereo input on a couple of phono plugs, and stereo outputs on phono plugs, headphones and optical.</p>
<p>Internally it is based around the Burr-Brown/TI <a href="http://www.ti.com/lit/ds/symlink/pcm2900.pdf"><span class="caps">PCM2902.</span></a></p>
<p>The data above were collected at a sampling rate of 48kHz.</p>
<p>Key results:</p>
<ul>
<li>Excellent low-frequency response: -3dB point at about 4Hz.</li>
<li>Reasonably flat pass-band.</li>
<li>Very little DC offset. The measured fractional offset is about \(\pm 6 \times 10^{-5}\) in the mid-band, and this might just be noise coming into the interface.</li>
</ul>
<p>Other experiments show the <span class="caps">UCA</span>-202 inverts the signal. They also suggest that things behave oddly at the high-frequency end—I don’t understand this and so can’t explain it succinctly.</p>
<h3>Generic <span class="caps">CM6206 </span>interface</h3>
<p><img src="cm6206.jpg" alt="" class="img_noborder_small" /></p>
<p>These sell for about £7 and offer 5.1 analogue outputs and stereo inputs on 3.5mm jack sockets, and optical input and outputs.</p>
<p>Internally, the main chip is a <a href="http://www.cmedia.com.tw/ProductsDetail/page-p/C1Serno-25/C2Serno-26/C3Serno-0/PSerno-23.html">C-Media <span class="caps">CM6206</span></a>.</p>
<p>The data above were collected at a sampling rate of 44.1kHz.</p>
<p>Key results:</p>
<ul>
<li>Rather poor low-frequency response: the -3dB point is about 42Hz.</li>
<li>Visible pass-band-ripple.</li>
<li>Significant DC-offset: roughly 2% across the entire frequency range.</li>
</ul>
<p>Other experiments show that this <span class="caps">CM6206 </span>interface is not compatible with an iPhone or iPad using Apple’s Lightning to <span class="caps">USB </span>cable because it draws too much power. </p>413D8CDC-166B-11E5-8831-C8E195A5DA662015-06-19T10:09:07:07Z2015-08-25T10:23:25:25ZGenerating Lissajous CurvesMartin Oldfield<p>Using a couple of <span class="caps">AD9850</span>s and an Arduino to plot Lissajous curves on an oscilloscope. </p><p>It is a truth universally acknowledged, that a person in possession of an oscilloscope must draw Lissajous curves.</p>
<p><img src="lissajous-1-2.gif" alt="" class="img_border_2up" /> <img src="lissajous-3-5.gif" alt="" class="img_border_2up" /></p>
<p>As <a href="https://en.wikipedia.org/wiki/Lissajous_curve">the fine Wikipedia article</a> explains, a Lissajous curve is a parametric plot of sinusoids:</p>
\[
\begin{align} x &= \sin(a \, \omega t + \phi), \\\
y &= \sin(b \, \omega t). \end{align}
\]
<p>To draw nice curves we want \(a/b\) to be rational, and for the whole thing to be precise and stable.</p>
<p>In the past, one might have used fancy analogue electronics to generate these sinusoids, but today there are good digital alternatives.</p>
<p>In particular, we will:</p>
<ul>
<li>generate the signals with a couple of <a href="http://www.analog.com/en/products/rf-microwave/direct-digital-synthesis-modulators/ad9850.html"><span class="caps">AD9850</span></a> <a href="https://en.wikipedia.org/wiki/Direct_digital_synthesizer"><span class="caps">DDS</span></a> synthesizers;</li>
<li>control these with an <a href="http://www.arduino.cc">Arduino;</a></li>
<li>buy everything prebuilt from eBay because it’s easier and probably cheaper than getting the parts separately.</li>
</ul>
<p>With all this we can set the sinusoids’ frequencies anywhere up to about 20MHz with a precision of about 0.03Hz.</p>
<h2>Hardware</h2>
<p>The key component is the <span class="caps">AD9850 DDS </span>synthesizer. Conceptually this generates a sine wave whose frequency is given by</p>
\[
f = f_m \times \frac{n}{2^{32}},
\]
<p>where \(n\) is a 32-bit number we choose, and \(f_m\) is the frequency of the master clock supplied to the chip.</p>
<p>Rather than buying the chip directly it’s easier to buy a module containing the <span class="caps">AD9850 </span>from eBay. Most of <a href="http://www.ebay.co.uk/sch/i.html?_nkw=AD9850">these modules</a> consist of the <span class="caps">AD9850 </span>plus a 125MHz oscillator. I think most of the modules are clones of the same design, but <em>caveat emptor</em>. Mine looked like this:</p>
<p><img src="lf-mod-front.jpg" alt="" class="img_border_2up" /> <img src="lf-mod-back.jpg" alt="" class="img_border_2up" /></p>
<p>Note: The <span class="caps">AD9850 </span>can also produce a square-wave output, and the trimmer adjusts its duty-cycle: we can ignore this.</p>
<p>In essence, we just apply power, send the relevant configuration data from the Arduino, and we’re done.</p>
<p>In practice, there is one issue which has us reaching for the soldering iron: we need to synchronize the master oscillators. That’s just a case of removing the oscillator from one module and bodging a connection to the other. It’s easy to do if you remember which pin to connect! I suspect doing this isn’t ideal: the wire’s carrying (and thus presumably radiating) 125MHz.</p>
<p>Before I did this, I found the master frequencies differed by about 6ppm. That’s a bit better than I expected.</p>
<p>I wasn’t able to find a data-sheet for the precise oscillators on my boards, but they’re a bit like <a href="http://txccrystal.com/images/pdf/7c-tight.pdf">the <span class="caps">TXC</span> 7C series.</a> Certainly pin 3 (diagonally opposite the pin 1 spot) is the clock output.</p>
<p>Although sharing the master oscillator ensures that the synthesizers won’t drift with respect to each other, they can still be a constant phase apart. Assuming that the synthesizers lock to the phase of the master clock, they will potentially be an integral number of cycles apart. At 125MHz, one cycle takes 8ns. If we’re synthesizing a 100kHz sine wave, that’s a phase error of about 0.3°. Effectively then, the sinusoids have an arbitrary phase offset from each other.</p>
<p>To some extent we could address the problem by better synchronizing the synthesizers’ reset signals and frequency adjustments, but I’ve not explored that.</p>
<p>Instead it’s easy to adjust the phase to the desired value. There are a couple of different approaches:</p>
<ul>
<li>The <span class="caps">AD9850 </span>has a 5-bit phase register which allows adjustments in increments of 11.25°. This is rather coarse!</li>
<li>We can simply increase the frequency of one of the oscillators by a small amount, wait until the desired phase difference is reached and then restore locked operation.</li>
</ul>
<p>The latter can be improved by making the Arduino perturb the frequency for a fixed time. The software described below does this and gives a tuning resolution of roughly 1.5°. One could easily do better, though I’ve no idea about the accuracy of the method.</p>
<p>The original prototype was built on a breadboard,</p>
<p><img src="lf-breadboard.jpg" alt="" class="img_border" /></p>
<p>but, having proved it worked, was then transferred to matrix board:</p>
<p><img src="lf-soldered.jpg" alt="" class="img_border" /></p>
<h2>Software</h2>
<p>There are many <span class="caps">AD9850 </span>drivers on the Internet. The chip falls into that class of hardware which presents just enough complexity that it’s helpful to start from working code, but is easy enough that there’s nothing much to encapsulate.</p>
<p>Many examples use a static singleton <span class="caps">AD9850 </span>device, which makes it harder for our application. However, Poul-Henning Kamp wrote <a href="https://code.google.com/p/ad9850-arduino/">something more suitable</a> which I proceeded to clone and butcher. You can see the result <a href="https://github.com/mjoldfield/ad9850-arduino">on GitHub.</a></p>
<p>There are three key changes:</p>
<ol>
<li>I added support for a reset line;</li>
<li>I extended the <span class="caps">API </span>to facilitate generating sinusoids with frequencies in a rational ratio;</li>
<li>I added an example to drive a couple of synthesizers over a serial connection to the Arduino.</li>
</ol>
<p>On top of this I changed the <span class="caps">API</span>’s style to suit my own preferences.</p>
<p>The key <span class="caps">API </span>change is to explicitly expose the integer which sets the <span class="caps">AD9850</span>’s frequency. Thus we can easily ensure that e.g. the synthesized frequencies are in the ratio 3:2 without worrying about how they’ll be rounded.</p>
<p>For example, to set the oscillators to 200kHz and 300kHz we might do this:</p>
<pre><code>AD9850 osc1(...);
AD9850 osc2(...);

// Common base word for 100kHz; multiples of it stay in exact ratio.
double f = 100000.0;
uint32_t base = osc1.calc_phase_delta(f);

osc1.set_phase_delta(2 * base);   // 200kHz
osc2.set_phase_delta(3 * base);   // 300kHz </code></pre>
<p>If the 125MHz master clock were exactly correct we would see output frequencies about 0.0095Hz and 0.0142Hz too high. However, regardless of the master clock, their ratio will be exactly 2:3.</p>
<h2>A full recipe</h2>
<p>It’s easy to replicate the example at the top of the page. Begin by building the hardware.</p>
<p>If you want to make stable pictures, you’ll need to modify the <span class="caps">AD9850 </span>modules with a soldering iron to share a common clock, as described above. You could get away without doing this though.</p>
<p>Connect the Arduino to the PC and check that you can program it. The Arduino website has a good <a href="http://www.arduino.cc/en/Guide/HomePage">getting started page</a> if you need help with that.</p>
<p>Now connect:</p>
<table class="spaced" cellspacing="0"><tr><th>Arduino Pin</th><th><span class="caps">AD9850</span> 1</th><th><span class="caps">AD9850</span> 2</th></tr><tr><td>+5V</td><td>Vcc</td><td>Vcc</td></tr><tr><td><span class="caps">GND</span></td><td><span class="caps">GND</span></td><td><span class="caps">GND</span></td></tr><tr><td>D4</td><td colspan="2">W_CLK</td></tr><tr><td>D5</td><td colspan="2">FQ_UD</td></tr><tr><td>D6</td><td colspan="2"><span class="caps">DATA</span></td></tr><tr><td>D7</td><td colspan="2"><span class="caps">RESET</span></td></tr><tr><td colspan="2">D8</td><td>W_CLK</td></tr><tr><td colspan="2">D9</td><td>FQ_UD</td></tr><tr><td colspan="2"><span class="caps">D10</span></td><td><span class="caps">DATA</span></td></tr><tr><td colspan="2"><span class="caps">D11</span></td><td><span class="caps">RESET</span></td></tr></table>
<p>Note that there are two Vcc and <span class="caps">GND </span>pins on each module so you’ll end up running four wires from both the Arduino’s 5V output and its <span class="caps">GND </span>pin.</p>
<p>Finally, connect an oscilloscope to the <span class="caps">ZOUT2 </span>outputs of the two modules. Set the scope to XY mode (often a setting in the horizontal timebase), and set the inputs to AC coupling with a scale of about 200mV per division.</p>
<p>If you just want to see the waveforms plotted against time, the signals are 20–30kHz, so a timebase of about 5µs per division is about right.</p>
<p>That’s the hardware done.</p>
<p>Next the software.</p>
<p>Download the <span class="caps">AD9850 </span>driver from <a href="https://github.com/mjoldfield/ad9850-arduino">GitHub</a> as a <span class="caps">ZIP </span>file and add it to the Arduino <span class="caps">IDE.</span> The key option is in the Sketch > Include Library menu.</p>
<p>You should then be able to compile and upload the Dual example from the File > Examples menu. Once it is running on the Arduino you should see something on the scope.</p>
<p>To change the pattern, open a serial connection to the Arduino. You can use the Arduino <span class="caps">IDE</span>’s own Serial Monitor, but that will only send a command when you hit return which gets boring. On the Mac or Linux, <a href="http://www.gnu.org/software/screen/">gnu screen</a> is a better alternative.</p>
<p>Once connected, <code>h</code> will display help:</p>
<pre><code>$ screen /dev/tty.... 115200
Dual AD9850 controller
M J Oldfield, 17.iv.2015
Controls:
tweak phase: <,>
change frequency multiplier: 1,..,9
select second oscillator: :
so e.g. 2:3 does what you expect
tweak frequency of osc 1: k,l
as above just for a trice: i,o
zero frequency tweaks: z
Current state:
base: 10.000kHz => 343597
osc1: base * 1 + 0
osc2: base * 1 + 0
phase: 0
</code></pre>
<p>To recreate the movie at the top of the article type <code>2</code>, <code>:</code>, <code>3</code>, <code>k</code> which should generate signals of roughly 20kHz and 30kHz, then slightly increase the lower frequency. You can think of this as continuously adjusting the relative phases of the signals which causes the display to ‛rotate’.</p>
<p>Hit <code>z</code> to stop the rotation. Use <code>i</code> and <code>o</code> to step the rotation by changing the frequency of oscillator 1 for a fraction of a second. After about 250 such steps the display will return to the original pattern.</p>
<h2>Addendum</h2>
<p>The <a href="https://www.silabs.com/Support%20Documents/TechnicalDocs/Si5351-B.pdf">Si5351</a> is probably a better choice than two <span class="caps">AD9850</span>s. Development boards are available on eBay for under £10. </p>00EF4164-3FA3-11E5-8C1A-1F393F1ECC0D2015-08-10T21:01:26:26Z2015-08-19T13:56:59:59ZXY plotting with oscilloscopesMartin Oldfield<p>Brief notes on plotting y(t) against x(t) using an oscilloscope. </p><p>I’ve become interested in plotting signals against each other. The obvious way to do this is with a dual-channel oscilloscope, and these notes document some of the things I tried.</p>
<p>Setting up such experiments is easy: connect the \(x\) and \(y\) signals to different channels, and engage XY-mode (which often lurks in the horizontal timebase settings).</p>
<p>Perhaps the classic display is a <a href="https://en.wikipedia.org/wiki/Lissajous_curve">Lissajous curve</a> which you can <a href="../06/ad9850-lissajous.html">easily generate</a> with modern chips. In the example below,</p>
\[
\begin{align} x(t) &= \sin(\omega t), \\\
y(t) &= \sin(3 \omega t + \phi), \end{align}
\]
<p>where \(\phi \approx \pi / 2\).</p>
<p>We can plot this mathematically to see what we expect:</p>
<p><img src="liss.svg" alt="" class="img_noborder" /></p>
<p>and then compare this with reality. The first plot below comes from an Agilent 350MHz digital storage scope, and is <a href="../06/scope-fun.html">downloaded directly from the scope.</a> The second is a photo of an old Trio cathode ray oscilloscope, which boasts a bandwidth of 15MHz.</p>
<p><img src="ag-liss-3-1.png" alt="" class="img_noborder" /></p>
<p><img src="trio-liss-3-1.jpg" alt="" class="img_noborder" /></p>
<h2>More complicated signals</h2>
<p>There is no reason why the signals need to be simple sinusoids. For example, when plotted, the signals below will generate a square:</p>
<p><img src="xy-square.svg" alt="" class="img_noborder" /></p>
<p>Note: for clarity, the signals have been given vertical offsets.</p>
<p>If the period of the signal is \(\tau\) then note that,</p>
\[
y(t) = x(t + \tau / 4).
\]
<p>Or in other words, the signals are in quadrature.</p>
<p>As expected these signals generate a nice square on the scope. We’ll begin with the <span class="caps">DSO </span>which was set to ‘high-resolution’ mode to reduce the noise. As a consequence the trace is almost too thin to see:</p>
<p><img src="ag-good-sq.png" alt="" class="img_noborder" /></p>
<p>It is clearer on the old <span class="caps">CRO</span>:</p>
<p><img src="trio-sq-good.jpg" alt="" class="img_noborder" /></p>
<h2>Practical considerations</h2>
<p>The signals above are simple and easy to reproduce accurately in the real world. In general this will not be true though. We know that some components are better characterized in the frequency domain, so it makes sense to seek insight by looking at our signals in frequency space too.</p>
<p>We are <em>not</em> going to do a full analysis: instead we will ask what would happen if we filtered the signals with a perfect low-pass filter which simply blocks all signals above a certain frequency whilst leaving the others unchanged. This isn’t physically possible, but we hope that we will gain some insight into the ways that real signals will change in the real world. Implicit in this is the idea that the high-frequency components will suffer most.</p>
<p>We adopt a fairly casual approach to the Fourier transforms we need to get the frequency-space representation. In keeping with this, we will ignore constant multipliers and write \( \sim \) instead of \( = \).</p>
<h3>A practical square</h3>
<p>Let’s remind ourselves of the signal we used to draw the square:</p>
<p><img src="x-square.svg" alt="" class="img_noborder" /></p>
<p>Note that we can write it as the convolution of a set of delta functions \(f_{s}(t)\), and a finite motif \(f_{m}(t)\).</p>
<p><img src="f-s-t.svg" alt="" class="img_noborder_2up" /> <img src="f-m-t.svg" alt="" class="img_noborder_2up" /></p>
<p>Note: strictly this convolution is for \(f(t) + 1\). We’ll return to this later.</p>
<p>The transform of an infinite set of delta functions is <a href="https://en.wikipedia.org/wiki/Fourier_transform#Tables_of_important_Fourier_transforms">well known:</a></p>
\[
\begin{align} f_{s}(t) &= \sum_{j \in \mathbb{Z}} \delta(t - j \tau), \\\
\widetilde{f_{s}}(\omega) &\sim \sum_{j \in \mathbb{Z}} \delta(\omega - j \Omega), \end{align}
\]
<p>where \(\Omega = 2\pi / \tau\).</p>
<p>To find the Fourier transform of the motif, \( f_m \), recall that differentiating in real-space is equivalent to multiplying by \( i \omega \) in frequency-space, and thus:</p>
\[
\widetilde{f_{m}}(\omega) \sim \frac{1}{\omega^2} \widetilde{\frac{d^2 f_m}{dt^2}}.
\]
<p>Differentiating the motif twice gives us four delta functions:</p>
<p><img src="df-m-t.svg" alt="" class="img_noborder_2up" /> <img src="ddf-m-t.svg" alt="" class="img_noborder_2up" /></p>
<p>and the Fourier transform of this is easy:</p>
\[
\begin{align} \widetilde{\frac{d^2 f_m}{dt^2}} &= \exp(-\frac{3}{8} i \omega \tau) - \exp(-\frac{1}{8} i \omega \tau) - \exp( \frac{1}{8} i \omega \tau) + \exp( \frac{3}{8} i \omega \tau), \\\
&\sim \left(\cos \frac{\omega \tau}{8} - \cos \frac{3 \omega \tau}{8} \right). \end{align}
\]
<p>Reassembling these parts, and recalling that convolution in real-space is equivalent to multiplication in frequency space, gives the frequency-space representation:</p>
\[
\widetilde{f}(\omega) \sim \frac{1}{\omega^2} \times \left(\sum_{j \in \mathbb{Z}} \delta(\omega - j \Omega) \right) \times \left(\cos \frac{\omega \tau}{8} - \cos \frac{3 \omega \tau}{8} \right).
\]
<p>We only need to evaluate the \( \cos \) terms at the discrete frequencies of the delta-functions, and happily this is easy to do: the pattern repeats after eight terms:</p>
\[
\left( 0, +1, 0, -1, 0, -1 , 0, +1, ... \right) .
\]
<p>Finally we can transform the delta-functions back to the time-domain to get our answer:</p>
\[
f(t) = \frac{8 \sqrt{2}}{\pi^2} \left( \cos \Omega t - \frac{1}{9} \cos 3 \Omega t - \frac{1}{25} \cos 5 \Omega t + \frac{1}{49} \cos 7 \Omega t + \frac{1}{81} \cos 9 \Omega t - ... \right) .
\]
<p>To get an intuitive feel, note that:</p>
<ul>
<li>we see a set of odd harmonics;</li>
<li>the spectrum falls off as \(1/ \omega^2\);</li>
<li>the sign of the harmonics flips every second term.</li>
</ul>
<p>It’s a bit cheeky to include the correct scale term because we explicitly ignored such things above. Had we not ignored them, the number would pop out. Alternatively we can just sum the series for \(t = 0\) and assert \(f(0) = 1\).</p>
<p>In the fast and loose calculation above we ignored the details for the <span class="caps">DC, </span>\(\omega = 0\) case. Care is needed because we shifted the motif function to be continuous at \(t = \pm \tau/2\) to simplify the calculation. Still, it suffices to note that the average value of \(f(t)\) is zero, and thus \(\widetilde{f}(0) = 0\) too. By contrast, a more careful analysis of the situation above would note that as \(\omega \rightarrow 0\), \(\widetilde{f_{m}}(\omega) / \omega^{2}\) does not go to zero: rather it has value \(4\) in the limit. I think this corresponds to the DC offset we applied to simplify the motif.</p>
<p>Having calculated the Fourier transform, let’s truncate it! Remember we hope this will give us some qualitative insight into the way that the signal might get degraded. The plot below shows the effect of keeping only the first three and five terms of the sum. As you can see it is still a reasonable square, but one that has been rounded off and gone a bit wobbly.</p>
<p><img src="sq-f.svg" alt="" class="img_noborder" /></p>
<p>The check marks on the curves are evenly spaced in time, and happily they’ve remained roughly evenly spaced in distance too. This means that the curve will be traversed at roughly constant speed.</p>
<h3>Discontinuities</h3>
<p>Our experiments above might serve as a model for drawing shapes, at least shapes which can be drawn in one continuous stroke. However, sometimes we will have to lift our pen from the paper and jump to a new location. So, we should also explore a simple model with discontinuities.</p>
<p>Perhaps the simplest thing we might examine is a set of dots, which we could conveniently place at the corners of a square. Such a pattern could be generated with a pair of quadrature square waves:</p>
<p><img src="xy-q.svg" alt="" class="img_noborder" /></p>
<p>In an ideal world \(x\) and \(y\) change instantaneously between \(+1\) and \(-1\), so the vertical lines shown below shouldn’t really be there.</p>
<p>Let’s repeat the Fourier analysis above. The structural function is the same, but the motif \( g_m(t) \) is a simple top-hat with edges at \(\pm \tau / 4\). Again it is simpler to differentiate the function to get some delta-functions:</p>
\[
\begin{align} \frac{dg_m}{dt} &\sim \delta(t + \tau/4) - \delta(t - \tau/4), \\\
\widetilde{g_m}(\omega) &\sim \frac{1}{\omega} \sin \frac{\omega \tau}{4}. \end{align}
\]
<p>Evaluating this at the delta-functions:</p>
\[
\widetilde{g_m}(\omega = j \Omega)_{j = 0,1,...} \sim (0,+1,0,-\frac{1}{3},0,+\frac{1}{5},...)
\]
<p>and thus (again fettling the scale and DC component):</p>
\[
g(t) = \frac{4}{\pi} \left( \cos \Omega t - \frac{1}{3} \cos 3 \Omega t + \frac{1}{5} \cos 5 \Omega t - \frac{1}{7} \cos 7 \Omega t + \frac{1}{9} \cos 9 \Omega t - ... \right).
\]
<p>Let’s plot a truncated form of the series:</p>
<p><img src="q-f.svg" alt="" class="img_noborder" /></p>
<p>This time the results are rather different:</p>
<ul>
<li>At the corners of the square the trace now makes bold flourishes.</li>
<li>Although the corners remain bright, the ‘edges’ of the ‘square’ are now drawn, albeit in a somewhat curved way. It might be better to think about limited slew-rates instead.</li>
</ul>
<p>As you can see, it’s a pretty crude approximation of a square. Note that as we increase the number of terms in the series, the swirls also increase in number, and move closer to the vertex: this is the classic <a href="https://en.wikipedia.org/wiki/Gibbs_phenomenon">Gibbs phenomenon.</a></p>
<p>So much for theory, what does reality look like? To find out I generated a couple of square waves with a frequency of about 135kHz with an Arduino and crudely connected them to the scopes. I deliberately took no care about the connections, so we might expect the high-frequency parts of the signals to suffer.</p>
<p>On the Trio, which quotes a 15MHz bandwidth, things weren’t too bad. There is some ringing in the y-direction, but the main effect seems to be that we don’t just see the corner dots: the sides of the square are quite visible too.</p>
<p><img src="trio-sq.jpg" alt="" class="img_noborder" /></p>
<p>On the <span class="caps">DSO </span>though, we have 350MHz of bandwidth. In the time domain, we can see significant ringing on the edges of the square wave, and in XY-mode wild loops appear which seem qualitatively similar to those in the graph above.</p>
<p>Simply moving the leads around the bench is enough to change the pattern a bit: by poorly terminating the signals we see both bad effects and a strong sensitivity to unimportant details.</p>
<p><img src="ag-sqware-t.png" alt="" class="img_noborder_2up" /> <img src="ag-sqware-xy.png" alt="" class="img_noborder_2up" /></p>
<p>However, the <span class="caps">DSO </span>has a handy bandwidth-limit option which cuts the bandwidth down to 20MHz. With that engaged, we see something close to the Trio:</p>
<p><img src="ag-sqware-20mhz-t.png" alt="" class="img_noborder_2up" /> <img src="ag-sqware-20mhz-xy.png" alt="" class="img_noborder_2up" /></p>
<h3>Conclusions</h3>
<p>Although we’ve only considered very simple models, we might now have some intuition about the ways that real-world signal degradation will affect XY plots.</p>
<ul>
<li>Continuous paths won’t be too badly affected: they might wobble a bit.</li>
<li>Small discontinuous jumps will be replaced by thin traces between the end points, which won’t necessarily be straight.</li>
<li>In more severe cases the jumps will lead to florid loops and whorls.</li>
</ul>
<p>The key mathematical distinction is the rate at which the coefficients in the frequency-domain decay: \( 1 / \omega \) vs \( 1 / \omega^2 \).</p>
<h2>Text</h2>
<p>Squares get boring, but there is no need to stop here. Given a microcontroller with a couple of <span class="caps">DAC</span>s, we can trace out arbitrary curves. A particular example of note is to trace out letters, turning the scope into a display device.</p>
<p>For example given these signals (X: blue, Y: purple):</p>
<p><img src="ag-text-t.png" alt="" class="img_noborder" /></p>
<p>We’d expect to see this:</p>
<p><img src="plot-text.svg" alt="" class="img_noborder" /></p>
<p>The green line shows the strokes actually encoded in the signals, whilst the thinner red lines show the gaps between the strokes. The extra green strokes to the left and right of the text are designed to move the paths between the strokes well away from the text so that it’s more legible.</p>
<p>Both lines of text are written left-to-right, whilst the carriage return runs right-to-left along the thin red line. It is possible to see this basic form in the time-domain plot above. The top y-trace clearly shows the two lines of text and the transition between them, whilst the lower x-trace has a quasi-sawtooth shape.</p>
<p>Incidentally, I should say that the stroke arrangement algorithm could be improved: ‘t’ would be clearer if its cross-bar were stroked in the opposite direction, and there are probably better ways to handle ‘d’ and ‘p’ too. A job for another day!</p>
<p>Synthesizing the signal at a frame-rate of about 112Hz and displaying it on the <span class="caps">CRO </span>yields this:</p>
<p><img src="trio-text.jpg" alt="" class="img_noborder" /></p>
<p>The main distortion is on the left-hand side, where it seems the beam doesn’t have enough time to sweep back before moving on to the next line of text.</p>
<p>Switching to the <span class="caps">DSO, </span>we see a better display. It’s striking how close it is to the theoretical picture, with the main issue being the extra lines which join the discontinuous jumps: these mirror closely the red lines in the ideal plot.</p>
<p><img src="ag-text-no-persist.png" alt="" class="img_noborder" /></p>
<p>However, if we focus on the word ‘and’ it is possible to see whorls at the end of the jumps to the ‘a’ and ‘d’.</p>
<p><img src="ag-and-text-xy.png" alt="" class="img_noborder" /></p>
<p>Although our simple square models seemed naive, they do seem to have given us some insight into the display of a real, complex, signal.</p>
<h2>Movies</h2>
<p>All the examples we’ve seen above have been static displays, but there’s no reason why this has to be the case.</p>
<p>A typical example is the ‘slipping’ Lissajous figure, where the frequencies of the two signals aren’t precisely locked. Here’s a movie made from frames downloaded from a digital storage scope, displaying sinusoids almost in the ratio \(3:5\).</p>
<p><img src="lissajous-3-5.gif" alt="" class="img_noborder_small" /></p>
<p>However, many more sophisticated and artistic results are possible, just look on YouTube:</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=rtR63-ecUNo">mushrooms;</a></li>
<li><a href="https://www.youtube.com/watch?v=qnL40CbuodU">a KickStarter;</a></li>
<li><a href="https://www.youtube.com/watch?v=aMli33ornEU">even Quake!</a> </li>
</ul>089D22B8-18FD-11E5-8994-798EDC3BC7FB2015-06-22T16:37:30:30Z2015-08-17T14:02:59:59ZNetworked scopes and MacOSMartin Oldfield<p>Brief notes on using a networked Keysight oscilloscope with MacOS X. </p><h2>Scope</h2>
<p>The discussion below relates specifically to the Keysight <span class="caps">MSO</span>-X 3034A oscilloscope, but I suspect it will apply just as well to other Keysight scopes and perhaps more generally. I’d be interested to hear about other instruments.</p>
<p>On the computer end, although I’ve written this with MacOS in mind, I expect it would work on Linux without modification.</p>
<h2>Introduction</h2>
<p>Lots of modern test instruments have an Ethernet jack on the back, which means we can connect them to the <span class="caps">LAN </span>and control them remotely. Obvious tasks include:</p>
<ul>
<li>remote control;</li>
<li>data logging;</li>
<li>automation.</li>
</ul>
<p>For the MacOS user though, much of the online information seems to be Windows specific, which is a nuisance. Further, although software libraries exist to communicate with these devices they seem a bit unwieldy.</p>
<p>For someone using lots of different devices, or who needs a solution which is robust in the face of errors and bizarre happenings, these formal, well-tested systems make sense.</p>
<p>However, more casual approaches seem to work well on MacOS.</p>
<h2>The programming guide</h2>
<p>Commendably, Keysight publish a <a href="http://www.keysight.com/upload/cmc_upload/All/3000_series_prog_guide.pdf">programming guide to the 3000 X-Series scopes.</a> At over 1200 pages, it is undeniably a thick document, but it seems clear and well-written. I’m working from version 02.38.000 of the document, and all the page numbers below refer to that.</p>
<p>The key insight is that you can control the scope using a command line interface on socket 5024, which is referred to as the ‛Telnet Sockets’ approach. Thus without installing any software at all, you can control the scope from the terminal. Here’s a short example to query the scope’s identification number:</p>
<pre><code>$ telnet <hostname> 5024
*IDN?
AGILENT TECHNOLOGIES,MSO-X 3034A,XXXXXXXXXXXXXXXXXXXXXXXXXXX</code></pre>
<p><code>*IDN?</code> is a <a href="https://en.wikipedia.org/wiki/Standard_Commands_for_Programmable_Instruments"><span class="caps">SCPI</span></a> command which returns the instrument’s identification number. See page 170 of the programmer’s guide for more details.</p>
<p>Given that most of this is text based, Perl seems a convenient choice if we want to automate things. Rather than using port 5024 though, it’s better to use port 5025 which offers an interface without human-helpful prompts.</p>
<h2>Grabbing a screen dump</h2>
<p>As an example task, let’s dump an image of the oscilloscope’s screen. If you just want the code, grab it from <a href="https://raw.githubusercontent.com/mjoldfield/bench-tools/master/scpi-screen-grab">GitHub.</a></p>
<p>The key, and most difficult, task is to find a suitable command. After some searching, I found <code>:DISP:DATA?</code> which is described on page 306.</p>
<p>To get an image of the screen in <span class="caps">PNG </span>format, all we have to send is:</p>
<pre><code>DISP:DATA? PNG</code></pre>
<p>The data returned are in <span class="caps">IEEE</span>-488.2 binary block data format, which is helpfully described on page 70.</p>
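<p>In outline, a definite-length block looks like this: a <code>#</code> character, a single digit saying how many digits are in the length field, the length field itself, and then that many bytes of raw data. So a five-byte payload might arrive as (my illustration, not a real scope response):</p>
<pre><code>#15HELLO</code></pre>
<p>Here the <code>1</code> says the length field is one digit long, the <code>5</code> is the payload length, and <code>HELLO</code> is the payload.</p>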
<p>It seems convenient to derive a new class from <code>IO::Socket::INET</code> to handle this:</p>
<pre><code>package MSO;

use base qw(IO::Socket::INET);

sub get_ieee_binary_block
{
    my $io = shift;

    my $header;
    read($io, $header, 2);

    # length of length
    my ($m) = ($header =~ /^#(\d)$/)
        or die "Unable to parse header A: $header, ";

    # length of data
    read($io, $header, $m);
    my ($n) = ($header =~ /^(\d+)$/)
        or die "Unable to parse header B: $header, ";

    # data
    my $data;
    read($io, $data, $n);

    # end-of-line
    $io->getline;

    return $data;
}</code></pre>
<p>Which we can call thus:</p>
<pre><code>my $addr = 'mso.local';

my $mso = MSO->new(PeerAddr => $addr, PeerPort => 5025)
    or die "Unable to open MSO connection, ";

$mso->print("DISP:DATA? PNG\n");
my $data = $mso->get_ieee_binary_block;

open(my $fh, '>', 'foo.png')
    or die "Unable to open foo.png, ";
binmode($fh);
print {$fh} $data;
close($fh);</code></pre>
<p>Here’s the result:</p>
<p><img src="mso.png" alt="" class="img_border" /></p>
<p>As you see, the code is pretty simple. There might be some sense in abstracting the binary block handling into a library, and perhaps wrapping <code>print</code> and <code>getline</code> to facilitate tracing, but I’ve not yet done that.</p>
<h3>Movie making</h3>
<p>One particular use for this is to take a series of screen dumps, then combine them into an animated <span class="caps">GIF</span>—indeed that’s what led me to explore this.</p>
<p><a href="http://www.graphicsmagick.org">Graphics Magick</a> and <a href="http://imagemagick.org/script/index.php">Image Magick</a> can assemble the frames into an animated GIF, processing them along the way.</p>
<p>For example, I grabbed a series of frames calling them y001.png, y002.png, &c. I wanted to crop out a section of the display, then glue them into a movie. This command did the trick:</p>
<pre><code>convert y*png -crop 301x301+187+92 -repage 301x301+0+0 aa.gif</code></pre>
<p>Note:</p>
<ul>
<li>In Graphics Magick the command is <code>gm convert...</code>.</li>
<li>Add <code>-delay n</code> to change the rate, <code>n</code> = 3 seemed good.</li>
</ul>
<h3>More imagemagick</h3>
<p>To extract the main part of the display:</p>
<pre><code>convert mso.png -crop 601x400+37+43 cropped.png</code></pre>
<p>To convert the image into a traditional green on black:</p>
<pre><code>convert mso.png -negate green-on-black.png</code></pre>
<p>You can combine the flags.</p>
<h2>Bonjour oscilloscope!</h2>
<p>I was delighted to see that the scope supports <a href="https://en.wikipedia.org/wiki/Zero-configuration_networking">zero configuration networking</a> (Bonjour in Apple parlance), so if you change the scope’s name to something sane e.g. <code>mso</code> you can say e.g.:</p>
<pre><code>$ telnet mso.local 5025</code></pre>
<h2>Web interface</h2>
<p>I was also delighted to see that the scope’s web interface works tolerably well in some places without installing Java.</p>
<pre><code>$ open http://mso.local </code></pre>5F2A9F46-3E11-11E5-8839-C3313F1ECC0D2015-08-08T21:06:31:31Z2015-08-08T21:47:27:27ZTI HDC1000 EvaluationMartin Oldfield<p>Brief notes on the <span class="caps">HDC1000, </span>a humidity and temperature sensor from Texas Instruments. </p><p>Temperature sensors are a commonplace now, but I was interested to see that Texas Instruments now make nice I²C humidity sensors too.</p>
<p>I’m particularly interested in the <a href="http://www.ti.com/product/hdc1000"><span class="caps">HDC1000</span></a> which boasts:</p>
<ul>
<li>14-bit precision;</li>
<li>0.2°C temperature accuracy;</li>
<li>±3% relative humidity accuracy.</li>
</ul>
<h2>The <span class="caps">HDC1000EVM </span>evaluation board</h2>
<p>As is often the way these days, Texas make a nice evaluation board, the <a href="http://www.ti.com/tool/hdc1000evm"><span class="caps">HDC1000EVM</span></a>, which is essentially the <span class="caps">HDC1000 </span>sensor chip plus an <a href="http://www.ti.com/product/msp430f5528"><span class="caps">MSP430F5528</span></a> microcontroller which connects the I²C interface on the sensor to <span class="caps">USB.</span></p>
<p>A bit of probing reveals that, by default, the onboard sensor has I²C address 0x40.</p>
<p>Watching the I²C bus also reveals that when power is applied to the board, it sends a software reset, then enables 14-bit conversions of temperature and relative humidity. The precise commands are:</p>
<ul>
<li>0x80, 0x00;</li>
<li>0x10, 0x00.</li>
</ul>
<h3>Terminal interface</h3>
<p>On Windows you can download client software which talks to the evaluation board. However, I use a Mac. Happily the board appears as a serial device in /dev e.g. /dev/tty.usbmodem146151.</p>
<p>Opening the device with screen and prodding keys reveals that most of the numbers do something:</p>
<ul>
<li>1: return the undecoded temperature in hex;</li>
<li>2: return the undecoded relative-humidity in hex;</li>
<li>3: start streaming temperature and RH readings;</li>
<li>4: stop streaming;</li>
<li>5: decrement the time between measurements when streaming;</li>
<li>6: increment the time between measurements when streaming;</li>
<li>7: cycles 0,1,2,3—it’s not clear what this changes;</li>
<li>8: cycle the I²C address used: 0x40, 0x41, 0x42, 0x43.</li>
</ul>
<p>A reasonable strategy for logging over time is to periodically send ‘1’ and ‘2’, logging the results. If precise timing doesn’t matter though, we can just stream the data.</p>
<p>Hit ‘3’ and you get something like this:</p>
<pre><code>stream start
62e0,802c
62e0,802c
62e0,7fec
62e8,7fec
62e8,7fec
62e0,7fec
62ec,7fac
62e8,7fac
62ec,7fac
62ec,7fac
...
62ec,7fec
62e8,7fec
62ec,7fac
62ec,7fac
stream stop</code></pre>
<p>By default the measurements are taken a second apart.</p>
<h3>Decoding the measurements</h3>
<p>The <span class="caps">HDC1000 </span>datasheet explains how to decode the numbers. Given a raw reading \(x\), the temperature is</p>
\[
T / °C = \frac{x}{2^{16}} × 165.0 - 40.0.
\]
<p>Above we see a raw temperature reading of 0x62e0, which corresponds to roughly 23.73°C. The relative humidity is decoded similarly:</p>
\[
RH / \% = \frac{x}{2^{16}} × 100.0.
\]
<p>Above we see a raw reading of 0x802c, which corresponds to a relative humidity of 50.07%.</p>
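<p>As a quick check of the arithmetic, here are the two conversions as code (a sketch in Haskell; the function names are mine):</p>
<pre><code>-- Decode raw 16-bit HDC1000 register values.
decodeT :: Int -> Double      -- degrees Celsius
decodeT x = fromIntegral x / 65536 * 165 - 40

decodeRH :: Int -> Double     -- percent relative humidity
decodeRH x = fromIntegral x / 65536 * 100</code></pre>
<p>With these, <code>decodeT 0x62e0</code> gives roughly 23.73 and <code>decodeRH 0x802c</code> roughly 50.07, matching the worked examples above.</p>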
<h2>Sample results</h2>
<p>I left the device streaming data from my window sill for about a week. Here’s what I found:</p>
<p><img src="t.svg" alt="" class="img_noborder" /></p>
<p><img src="rh.svg" alt="" class="img_noborder" /></p>
<p>You can see the clear diurnal variation.</p>
<p>Finally it is interesting to plot RH against temperature: there is a clear global negative correlation, but such a description hides much local structure.</p>
<p><img src="trh.svg" alt="" class="img_noborder" /> </p>BF9319FA-9AAD-11E0-8E0E-AF58C7E7D2CC2011-06-19T19:52:23:23Z2015-06-22T16:39:45:45ZThe Keysight U1272A and MacOSMartin Oldfield<p>Brief notes on reading data from a Keysight <span class="caps">U1272A </span>multimeter from MacOS. </p><h2>The short version</h2>
<p>I wrote a toy Perl program which talks to the Keysight <span class="caps">U1272A</span>: feel free to play with it!</p>
<ol>
<li>2011-06-20: <a href="http://www.mjoldfield.com/atelier/2011/06/u1272a-0_1.pl">Version 0.1.</a></li>
</ol>
<p><img src="u1272a.jpg" alt="" height="500" width="300" class="right" style="float:right" /></p>
<h2>The Keysight <span class="caps">U1272A</span></h2>
<p>Keysight make a very nice and particularly orange multimeter: <a href="http://www.home.agilent.com/agilent/product.jspx?nid=-34618.956189.00&cc=US&lc=eng">the <span class="caps">U1272A.</span></a> Like many bits of modern test kit, one can connect it to a computer. In this case there's an <a href="http://www.home.agilent.com/agilent/product.jspx?nid=-536906710.536910943&cc=US&lc=eng">infra-red dongle</a> which connects the meter to a <span class="caps">USB </span>port.</p>
<p>Although it's nice to see the current reading, the computer connection is particularly useful in conjunction with the meter's ability to log data autonomously to its internal memory.</p>
<p>For example, one can leave the meter recording a voltage every 10s then come back the next day and see the result. We're limited to 10,000 measurements which is roughly a day's worth of samples if they're 10s apart.</p>
<p>Unsurprisingly Keysight provide <a href="http://www.home.agilent.com/agilent/editorial.jspx?cc=US&lc=eng&ckey=878442&nid=-536906710.536910943&id=878442">pretty Windows software</a> to handle all this, but I use a Mac. Happily kind people have done most of the hard work already. Building on this, I wrote a <a href="http://www.mjoldfield.com/atelier/2011/06/u1272a.pl">toy Perl program</a> which talks to the meter. In the absence of proper documentation it's far from production quality but you might find it useful or at least fun.</p>
<h2>The dongle</h2>
<p>We'll begin with the hardware. It transpires that internally the IR dongle is based around a <a href="http://www.prolific.com.tw/eng/products.asp?id=59">Prolific 2303 <span class="caps">USB </span>to Serial bridge.</a> These are quite common devices, and there's a Mac driver. Actually there are two:</p>
<ol>
<li><a href="http://www.prolific.com.tw/eng/downloads.asp?id=31">The official one;</a></li>
<li><a href="https://github.com/failberg/osx-pl2303/downloads">an open source one from Failberg.</a></li>
</ol>
<p>I used Failberg's driver, but I should say that I can't and don't vouch for the quality of either driver. Install them at your peril!</p>
<p>Assuming that you are feeling lucky and install the driver, you should find that when you plug the cable into your Mac, a device is created:</p>
<pre><code>$ ls /dev/tty.PL*
/dev/tty.PL2303-003312FD</code></pre>
<p>The key part here is the /dev/tty.PL2303: the serial number which follows is presumably some unique device <span class="caps">ID.</span></p>
<h2>The meter</h2>
<p>Once the dongle's installed and you've got a suitable /dev/tty.PL entry, then it's fairly straightforward to talk to it.</p>
<p>Inevitably, there's the usual serial configuration mess to navigate. My meter was set to 9600 baud, 8-bit, no parity and 1 stop-bit, but you should check for yourself in the meter's setup (see section 4 of <a href="http://cp.literature.agilent.com/litweb/pdf/U1271-90010.pdf">the fine manual</a> for details). I tried to set the baud rate to 19200, but failed: I'm not sure why.</p>
<p>One other minor issue: if the meter's in logging mode it seems keen to send each reading over the port as it's made. That might be handy for some applications, but it's not compatible with my code. You could of course use this to test the comms though!</p>
<p>The Mac isn't well blessed with nice serial terminal emulators, but the <a href="http://www.gnu.org/software/screen/">screen</a> program can be press-ganged into the task. A word of warning: it's hardly user friendly!</p>
<p>To try it:</p>
<ul>
<li>Start the screen program:</li>
</ul>
<pre><code>$ screen /dev/tty.PL* 9600</code></pre>
<ul style="margin-top:0.5em;">
<li>Type the following (note that you won't be able to see what you're typing):</li>
</ul>
<pre><code>*IDN?</code></pre>
<ul style="margin-top:0.5em;">
<li>Send the command by hitting <span class="caps">CTRL</span>-J. You should see the meter identify itself:</li>
</ul>
<pre><code>Keysight Technologies,U1272A,MY12345678,V1.30</code></pre>
<ul style="margin-top:0.5em;">
<li>Make sure that the meter's data logging mode is set to <span class="caps">AUTO.</span> Start the meter logging data by holding the 'Hz % ms / Log' button down until the 'LOG' icon is displayed. You should see a message every time a new datum is stored. Frankly I don't understand the format properly, but the meter reading seems to be stored in the third to seventh digits (XXXXX) in this example:</li>
</ul>
<pre><code>"01XXXXX110000"</code></pre>
<ul style="margin-top:0.5em;">
<li>Stop the logging by holding the 'Hz % ms / Log' button down until the 'LOG' icon disappears.</li>
</ul>
<ul style="margin-top:0.5em;">
<li>Quit screen by typing <span class="caps">CTRL</span>-A k.</li>
</ul>
<p>Sadly there's not yet any proper documentation from Keysight for commands the meter understands, but the Internet is full of helpful and knowledgable people. Thanks to `insurgent' on the <span class="caps">EEV</span>blog forums for <a href="http://www.eevblog.com/forum/index.php?topic=3259.msg46838%23msg46838">posting the basic information.</a></p>
<h2>The software</h2>
<p>Obviously it's a pain to keep typing commands into a terminal emulator, so I wrote a toy utility to save my fingers. Perl seemed a good choice because compiling's a bore, and nothing here is remotely performance critical. Given that the command set is based on <a href="http://en.wikipedia.org/wiki/Standard_Commands_for_Programmable_Instruments"><span class="caps">SCPI</span></a> I'd hoped to find some helpful Perl modules lying around on <span class="caps">CPAN </span>too.</p>
<p>A couple of things seemed as though they might help: <a href="http://search.cpan.org/~jeffmock/GPIB_0_30/GPIB.pm"><span class="caps">GPIB</span></a> and <a href="http://search.cpan.org/~schroeer/Lab-Instrument-2.01/lib/Lab/Instrument.pm">Lab::Instrument.</a> Sadly though, neither actually helped. Both packages are large and quite complicated: <span class="caps">GPIB </span>seems to have suffered bitrot and didn't compile, whilst Lab::Instrument wants an underlying C library.</p>
<p>All of this seems rather large and baroque: the <span class="caps">SCPI </span>spec runs to nearly 1000 pages, and both of the Perl libraries bristle with classes. If you’re building a big experiment I can see the sense in this, but I just wanted a simple way to tickle the serial port.</p>
<p>So my <a href="http://www.mjoldfield.com/atelier/2011/06/u1272a.pl">toy program</a> is a thin wrapper around <a href="http://search.cpan.org/~cook/Device-SerialPort-1.04/SerialPort.pm">Device::SerialPort</a>. It’s a noddy program, very much in the `send a line, read a line' style, but knows enough to handle error conditions from the meter, and to iterate through the saved data log.</p>
<p>I should emphasize that it's not proper production quality code though: regard it more as research than solution. For example, the code assumes that there's just a single /dev/tty.PL* device and that it corresponds to the Keysight meter.</p>
<p>A more significant limitation is its failure to do much parsing of the data coming back from the meter.</p>
<h3>Installation</h3>
<p>The only real dependency is Device::SerialPort, which you can get from <span class="caps">CPAN.</span> So, once you've installed the device driver all you'll need to do is:</p>
<pre><code>$ sudo cpan Device::SerialPort
...
$ curl http://www.mjoldfield.com/atelier/2011-06/u1272a.pl -o u1272a
$ chmod a+rx ./u1272a
$ ./u1272a</code></pre>
<h3>Basic operation</h3>
<p>The basic command queries the meter for some basic data. You'll notice the absence of any parsing!</p>
<pre><code>$ ./u1272a
Opened /dev/tty.PL2303-003312FD :)
—
battery: 77%
config: '"V,0,DC"'
identity: 'Keysight Technologies,U1272A,MY12345678,V1.30'
reading: +4.23800000E-02
reading2: +2.09500000E+01 </code></pre>
<h3>Tracing</h3>
<p>If you want to watch what's happening at a low-level add the --trace option:</p>
<pre><code>$ ./u1272a --trace
Opened /dev/tty.PL2303-003312FD :)
0.000 >: *IDN?
0.070 <: Keysight Technologies,U1272A,MY12345678,V1.30
0.070 >: SYST:BATT?
0.092 <: 77%
0.092 >: CONF?
0.110 <: "V,0,DC"
0.110 >: FETC?
0.142 <: +4.23500000E-02
0.142 >: FETC? @2
0.174 <: +2.09500000E+01
—
battery: 77%
config: '"V,0,DC"'
identity: 'Keysight Technologies,U1272A,MY12345678,V1.30'
reading: +4.23500000E-02
reading2: +2.09500000E+01 </code></pre>
<h3>Downloading logs</h3>
<p>Probably the single most useful task for the program is to grab the meter's data logs. The --get_log option does this, but the returned data are parsed very crudely. Please check the output carefully.</p>
<p>Specifically we read the entire log and write it to a text file, one measurement per line. The meter returns a string of digits like "AABBBBBCCCCCC" for each datum, where <span class="caps">BBBBB </span>contains the measurement. This is written to the file as <span class="caps">NNNNN BBBBB</span> AA <span class="caps">CCCCCC, </span>where <span class="caps">NNNNN </span>just counts upwards.</p>
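<p>Splitting such a datum into its fields is just a couple of string splits. A sketch (in Haskell rather than Perl, but nothing here is language specific):</p>
<pre><code>-- Split a raw log record "AABBBBBCCCCCC" into its three fields.
-- The middle field holds the measurement; the meaning of the
-- other two is unknown, so they are kept verbatim.
splitDatum :: String -> (String, String, String)
splitDatum s = (aa, bbbbb, rest)
  where (aa,    s')   = splitAt 2 s
        (bbbbb, rest) = splitAt 5 s'</code></pre>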
<p>So, to plot the readings get the x-coordinate from the first column, and the y-coordinate from the second. In <a href="http://www.gnuplot.info/">gnuplot:</a></p>
<pre><code>gnuplot> plot 'auto.txt' using 1:2 with dots</code></pre>
<p>Here's an example:</p>
<pre><code>$ ./u1272a --get_log=AUTO
Opened /dev/tty.PL2303-003312FD :)
Grabbing AUTO log:
0: .........
1000: .........
2000: .......
Finished
2727 data from AUTO log written to auto.txt </code></pre>
<h2>An example</h2>
<p>The graph below was produced using the code. It shows the voltage across an alkaline AA cell as it's cruelly discharged through a small resistor.</p>
<p>Data were logged every 10s by the meter, then the logs downloaded to the computer. The data were scaled, then plotted with gnuplot. There's a real pleasure in measuring and plotting 1700 points so easily!</p>
<p><img src="batt.svg" alt="Voltage against time as a cell's discharged" /></p>
<h2>Generalizing</h2>
<p>As you'll have seen, very little of this is Mac specific. I expect it would work as-is on a Linux box and there are some suggestions that a suitable <span class="caps">USB </span>driver is already baked into the kernel.</p>
<p>Device::SerialPort is supposed to emulate the <span class="caps">API </span>of the Windows specific <a href="http://search.cpan.org/~bbirth/Win32-SerialPort-0.22/lib/Win32/SerialPort.pm">Win32::SerialPort</a> module, so perhaps a Windows port wouldn't be hard either.</p>
<p>On the device side, Keysight's official software claims to support the <span class="caps">U1230 </span>and <span class="caps">U1250 </span>as well, so perhaps they'd work with this software too. </p>4726089E-896E-11E3-8497-CBDB10218DB22014-01-30T05:19:52:52Z2015-03-25T13:22:20:20ZMonads in Haskell: Lists &c.Martin Oldfield<p>Brief notes on the list and similar monads in Haskell. </p><h2>Introduction</h2>
<p>Some very brief notes summarizing Haskell’s listlike monads. It’s my crib sheet, written partly to straighten matters in my own mind and partly for future reference.</p>
<p>Most of the information here comes from the usual places, notably the <a href="http://www.haskell.org/haskellwiki/Typeclassopedia">Typeclassopedia.</a> I’m also indebted to Dominic Prior for many helpful discussions. Dominic is collecting <a href="https://docs.google.com/document/d/1DvbcQTibeUEOVmoLO14vvRa27kf6y29sObUmQpyFn9g/pub">useful and interesting monad examples</a> on Google Docs.</p>
<h2>The list monad</h2>
<p>If you use <span class="caps">GHC, </span>the list instance is defined in <a href="http://hackage.haskell.org/package/base-4.6.0.1/docs/src/GHC-Base.html"><span class="caps">GHC</span>-Base</a> these days:</p>
<pre><code>instance Monad [] where
    x >>= f  = foldr ((++) . f) [] x
    x >>  f  = foldr ((++) . (\ _ -> f)) [] x
    return x = [x]
    fail _   = []</code></pre>
<p>Perhaps the following equivalent definitions of bind (<code>>>=</code>) are clearer though. I’ve always found list comprehensions intuitive, and I think that’s my favourite:</p>
<pre><code>x >>= f = [ z | y <- x, z <- f y ]
x >>= f = concat (map f x)
(x:xs) >>= f = (f x) ++ (xs >>= f)
[] >>= f = []</code></pre>
<p>It’s probably also good to compare the general monadic and specific list instance types:</p>
<pre><code>(>>=) :: Monad m => m a -> (a -> m b) -> m b
(>>=) :: [a] -> (a -> [b]) -> [b]</code></pre>
<p>With these it’s easy to see how bind unwraps the monad (here by treating the incoming list one element at a time), applies <code>f</code>, then joins the resulting lists together.</p>
<h3><code>join</code></h3>
<p>Speaking of <code>join</code>, recall,</p>
<pre><code>join x = x >>= id</code></pre>
<p>and thus,</p>
<pre><code>join x = [ z | y <- x, z <- y ]
join = concat</code></pre>
<h3><code>>=></code></h3>
<p>Given the general result that</p>
<pre><code>(f >=> g) x = f x >>= g</code></pre>
<p>It’s easy to derive the pleasingly symmetric result that,</p>
<pre><code>(f >=> g) x = [ z | y <- f x, z <- g y ]</code></pre>
<h2>Monad laws</h2>
<p>We should check that these definitions comply with the monad laws. Let’s use the Kleisli set for elegance:</p>
<pre><code>return >=> f = f
f >=> return = f
(f >=> g) >=> h = f >=> (g >=> h)</code></pre>
<p>We’ll begin with the left-identity, recalling <code>return x = [x]</code>:</p>
<pre><code>(return >=> f) x = [ z | y <- [x], z <- f y ]
                 = [ z | z <- f x ]
                 = f x</code></pre>
<p>The right identity is essentially the same, so we’ll look at the associativity law:</p>
<pre><code>((f >=> g) >=> h) x
    = [ z | y <- (f >=> g) x, z <- h y ]
    = [ z | y <- [ w | v <- f x, w <- g v ], z <- h y ]
    = [ z | v <- f x, w <- g v, z <- h w ]
    = [ z | v <- f x, z <- [ t | w <- g v, t <- h w ] ]
    = [ z | v <- f x, z <- (g >=> h) v ]
    = (f >=> (g >=> h)) x</code></pre>
<p>The key observation is that the middle expression is symmetric.</p>
<h2>Intuition</h2>
<p>We’ll look at two ways to think about the list monad: the first is simpler, but corresponds to a special case; the second is more abstract but more faithful.</p>
<h3>The Cartesian Product</h3>
<p>A helpful intuition for the list monad is the <a href="http://en.wikipedia.org/wiki/Cartesian_product">Cartesian product.</a> To see this, think how bind operates:</p>
<ul>
<li>given a list consider the elements one at-a-time;</li>
<li>for each element generate a new list;</li>
<li>join all those lists together.</li>
</ul>
<p>Here’s an example:</p>
<pre><code>p as bs = do
    a <- as
    b <- bs
    return (a,b)

p [1,2] "ab" = [(1,'a'),(1,'b'),(2,'a'),(2,'b')]</code></pre>
<p>one could define <code>p</code> more succinctly:</p>
<pre><code>p = liftM2 (,)</code></pre>
<p>Actually, for the Cartesian product, we don’t need the full power of monads: applicatives are enough. We could also define <code>p</code> thus:</p>
<pre><code>p as bs = (,) <$> as <*> bs</code></pre>
<h4><code>sequence</code></h4>
<p>If we are happy to work with lists (which implies that all the values have the same type), <code>sequence</code> does just what we want:</p>
<pre><code>> sequence [[1,2],[3,4]]
[[1,3],[1,4],[2,3],[2,4]]
</code></pre>
<h3>Nondeterministic calculations</h3>
<p>Suppose we’re exploring a space where at each step there are several possible directions we could take: storing those as a list seems natural enough. The key insight is that the monad instance chains these steps together naturally.</p>
<p>For example, suppose we’re exploring a set of cells { 1,2,3,...,maxCell } a step at a time, and at each step can either stay still, or move to a neighbour. We could model it thus:</p>
<pre><code>import Control.Monad
import qualified Data.List as L

type Cell = Int

maxCell = 3

inBounds :: Cell -> Bool
inBounds x = x >= 1 && x <= maxCell

explore :: Cell -> [Cell]
explore x = filter inBounds [x-1..x+1]</code></pre>
<p>What’s going on here ?</p>
<ul>
<li><code>inBounds x</code> returns True iff <code>x</code> is a valid cell.</li>
<li><code>explore x</code> returns the list of cells to which we could move. Note that it has type <code>Cell -> [Cell]</code> which is of form <code>a -> m a</code> when <code>m</code> is the list monad.</li>
</ul>
<p>Suppose we always start in cell 1, which we’d represent by the singleton list <code>[1]</code>.</p>
<p>Where can we move ? Let’s ask <code>explore</code>:</p>
<pre><code>*Main> [1] >>= explore
[1,2]</code></pre>
<p>That makes sense: we can’t move down (because there’s no cell 0), so we must either stay in cell 1, or move up to cell 2. What about another step ?</p>
<pre><code>[1] >>= explore >>= explore
    = [1,2] >>= explore
    = [1,2,1,2,3]</code></pre>
<p>To understand this step split <code>[1,2,1,2,3]</code> into two parts:</p>
<ul>
<li><code>[1,2]</code> are the cells we could reach if we stayed in cell 1 last time;</li>
<li><code>[1,2,3]</code> are the cells we could reach if we moved to cell 2 last time.</li>
</ul>
<p>To get the final answer, i.e. a list of all the places we might end up, we just concatenate the two lists.</p>
<p>Note that although there are three possible moves at each step, and thus nine paths after two steps, we only consider the five valid paths here. This is not a simple Cartesian product.</p>
<p>It can be tedious to work with long lists, so let’s just count the number of times we end up in each cell.</p>
<pre><code>freq :: Ord a => [a] -> [(a,Int)]
freq = map (\as -> (head as, length as)) . L.group . L.sort

*Main> freq $ [1] >>= explore >>= explore
[(1,2),(2,2),(3,1)]</code></pre>
<p>In other words of the five possible sets of two moves, we end up in cell 1 twice, cell 2 twice, and cell 3 once.</p>
<p>Finally, it gets boring iterating <code>explore</code> by hand, so let’s automate it. We’ll introduce <code>nTimesM n f</code> which composes <code>f</code> with itself <code>n</code> times:</p>
<pre><code>nTimesM n f = foldr (>=>) return (replicate n f)</code></pre>
<p>Then we can do <code>n</code> steps easily (note that this starts from cell 2, the middle, hence the symmetric counts): </p>
<pre><code>stepN n = nTimesM n explore 2

*Main> freq $ stepN 12
[(1,13860),(2,19601),(3,13860)]</code></pre>
<p>It’s easy to show algebraically that as the number of steps increases we’ll end up in cell 2 about √2 times as often as cell 1. Happily:</p>
<pre><code>*Main> (19601 / 13860)^2
2.000000005205633</code></pre>
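<p>A sketch of that algebra: each application of <code>explore</code> multiplies the vector of cell counts by the matrix</p>
\[
M = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix},
\]
<p>whose largest eigenvalue is \(1 + \sqrt{2}\), with eigenvector \((1, \sqrt{2}, 1)\). After many steps the counts are dominated by this eigenvector, so cell 2 occurs \(\sqrt{2}\) times as often as cells 1 and 3.</p>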
<h2>The probability monad</h2>
<p>These nondeterministic calculations have the whiff of probability distributions about them, in particular distributions where each outcome is equally likely.</p>
<p>One could imagine weighting the outcomes by replicating each case a commensurate number of times, but a better approach is to replace the outcome with a tuple of outcome and probability.</p>
<p>Unsurprisingly this is still a monad. Although the idea is older, I originally read about the idea on <a href="http://blog.plover.com/prog/haskell/probmonad.html">The Universe of Discourse,</a> to which <span class="caps">MJD </span>has now added a <a href="http://blog.plover.com/prog/haskell/probmonad-refs.html">bibliography.</a> You could just grab an <a href="http://hackage.haskell.org/package/probability">implementation from Hackage.</a></p>
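<p>To give the flavour, here is a minimal sketch of such a monad (the names are mine, and for brevity it doesn’t collect equal outcomes together as a polished implementation would):</p>
<pre><code>import Data.Ratio ((%))

-- A distribution is a list of (outcome, probability) pairs.
newtype Prob a = Prob { runProb :: [(a, Rational)] }
                 deriving Show

instance Functor Prob where
  fmap f (Prob xs) = Prob [ (f x, p) | (x, p) <- xs ]

instance Applicative Prob where
  pure x              = Prob [(x, 1)]
  Prob fs <*> Prob xs = Prob [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Prob where
  -- Just the list monad, but each branch is weighted by the
  -- probability of reaching it.
  Prob xs >>= f = Prob [ (y, p * q) | (x, p) <- xs, (y, q) <- runProb (f x) ]

coin :: Prob Bool
coin = Prob [(True, 1 % 2), (False, 1 % 2)]</code></pre>
<p>So <code>runProb ((&&) <$> coin <*> coin)</code> is a four-element distribution in which <code>True</code> carries total probability 1/4, as you’d hope.</p>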
<h2>Trivial lists</h2>
<p>Suppose we limit ourselves to lists of a single element: can we make a monad ? We can, but only if we restrict ourselves to functions which return a singleton list.</p>
<p>Recall,</p>
<pre><code>return x = [x]</code></pre>
<p>and so all of our functions must decompose thus:</p>
<pre><code>f :: a -> m b
f' :: a -> b
f = return . f'</code></pre>
<p>We can simplify bind too:</p>
<pre><code>x >>= f   = [ z | y <- x, z <- f y ]

[x] >>= f = f x
          = [ f' x ]</code></pre>
<p>and for completeness (writing <code>g = return . g'</code> in the same way):</p>
<pre><code>join [[x]] = [x]

(f >=> g) x = [ (g' . f') x ]</code></pre>
<p>It all works, but it’s all trivial!</p>
<p>Note that <code>return</code> is a universal constructor for these restricted lists. This isn’t true for the normal list monad, because you can’t construct e.g. <code>[]</code> or <code>[1,2]</code> with <code>return</code>.</p>
<p>Rather than singleton lists, which the type system can’t easily enforce, we might as well define a new monad instance:</p>
<pre><code>data Trivial a = Trivial a
                 deriving (Show, Eq, Ord)

instance Monad Trivial where
    (Trivial x) >>= f = f x
    return x          = Trivial x

join (Trivial (Trivial x)) = Trivial x

(f >=> g) x = Trivial ((g' . f') x)</code></pre>
<p> This is essentially the <a href="http://hackage.haskell.org/package/mtl-1.1.0.2/docs/Control-Monad-Identity.html">Identity monad,</a> which Dan Piponi <a href="http://blog.sigfpe.com/2007/04/trivial-monad.html">has discussed</a> on sigfpe.com.</p>
<h2>Almost trivial lists</h2>
<p>If trivial lists aren’t much fun, let’s consider lists with zero or one elements. Return is easy:</p>
<pre><code>return x = [x]</code></pre>
<p>There are two cases for bind:</p>
<pre><code>[] >>= _ = []
[x] >>= f = f x</code></pre>
<p>and three for <code>join</code>:</p>
<pre><code>join [[x]] = [x]
join [[]] = []
join [] = []</code></pre>
<p> The Kleisli arrow is messy but as long as the values are all singleton lists it will behave as the trivial monad above:</p>
<pre><code>(f >=> g) x = [ (g' . f') x ]</code></pre>
<p>However as soon as a null list appears, the calculation immediately returns <code>[]</code>.</p>
<p>Finally we know that <code>f x</code> must return either <code>[]</code> or <code>[x']</code> for some <code>x'</code>. Immediately we can see there's a richer structure here: unlike the Trivial monad above we can’t always decompose <code>f</code> into a pure function and <code>return</code>.</p>
<p>Seasoned Haskell programmers will recognize all this as the <a href="http://hackage.haskell.org/package/base-4.6.0.1/docs/Data-Maybe.html">Maybe monad.</a> We define two constructors:</p>
<ul>
<li><code>Just x</code> which is analogous to <code>[x]</code>;</li>
<li><code>Nothing</code> which is analogous to <code>[]</code>.</li>
</ul>
<p>We can then say:</p>
<pre><code>data Maybe a = Nothing | Just a
               deriving (Show, Eq, Ord)

instance Monad Maybe where
    Nothing  >>= _ = Nothing
    (Just x) >>= f = f x
    return x       = Just x

join (Just (Just x)) = Just x
join _               = Nothing</code></pre>
<p>The standard intuition for the Maybe monad is that it represents a calculation which might fail: for example a database query. A chain of such calculations should proceed normally until one fails, at which point the whole calculation fails.</p>
<p>In the context of our nondeterministic search example for the full list monad, this is equivalent to saying that at each step at most one solution can be found.</p>
<h2>Other lists</h2>
<p>It’s tempting to ask if other subsets of the list monad exist. For example, simply including lists of length 2 seems doomed to fail, because we can easily make lists of length 4:</p>
<pre><code>Prelude Control.Monad> sequence [[1],[1]]
[[1,1]]
Prelude Control.Monad> sequence [[1,2],[1,2]]
[[1,1],[1,2],[2,1],[2,2]]</code></pre>
<p>We don’t have this problem with the Trivial and Maybe monads because the sets {1} and {0,1} are closed under multiplication.</p>
<p>Job Vranish has implemented a <a href="http://hackage.haskell.org/package/fixed-list-0.1.5">fixed-length list</a> which includes a monad instance, but I think it’s closer in spirit to a monad instance for <a href="http://hackage.haskell.org/package/base-4.6.0.1/docs/Control-Applicative.html#g:3">ZipList</a>.</p>
<p>You can’t make a monad instance from normal ZipLists though. See the following discussions in the Haskell Café:</p>
<ul>
<li><a href="http://www.haskell.org/pipermail/haskell-cafe/2009-April/059079.html">ZipList monad, anyone ?</a> from April 2009;</li>
<li><a href="http://www.haskell.org/pipermail/haskell-cafe/2013-October/111004.html">Applicative but not Monad</a> from October 2013;</li>
<li><a href="http://www.haskell.org/pipermail/haskell-cafe/2013-October/111004.html">[ZipList Monad] Final answer ?</a> from October 2013.</li>
</ul>
<h2>Cookbook</h2>
<h3>Powersets: all subsets from a list of items</h3>
<pre><code>> filterM (const [True,False]) "abc"
["abc","ab","ac","a","bc","b","c",""]</code></pre>
<p>I saw this in a comment Cale Gibbard made on Cristiano Paris’ <a href="http://monadicheadaches.blogspot.co.uk/2007/10/is-haskell-reallt-expressive.html">Monadic headaches blog.</a></p>
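<p>As a sanity check (my addition): the <code>filterM</code> powerset contains exactly the same subsets as <code>Data.List.subsequences</code>, just in a different order.</p>

```haskell
import Control.Monad (filterM)
import Data.List (sort, subsequences)

-- The powerset trick: each element is independently kept or dropped.
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])

main :: IO ()
main = do
  print (powerset "abc")
  -- Same subsets as subsequences, modulo ordering:
  print (sort (powerset "abc") == sort (subsequences "abc"))
```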
<p>To see why it works, first consider a map rather than a filter:</p>
<pre><code>> mapM (const [True,False]) "abc"
[[True,True,True],[True,True,False],[True,False,True],...]</code></pre>
<p>Our nondeterministic list monad maps each element of the <code>"abc"</code> list into a pair of alternatives (<code>[True,False]</code>), then generates a list of all eight possible combinations.</p>
<p>We could use the same idea to make a list of binary numbers split into digits:</p>
<pre><code>> mapM (const [0,1]) [1..2]
[[0,0],[0,1],[1,0],[1,1]]</code></pre>
<p>Although personally I find <code>sequence</code> a more intuitive solution:</p>
<pre><code>> sequence $ replicate 2 [0,1]
[[0,0],[0,1],[1,0],[1,1]]</code></pre>
<p>In <a href="http://stackoverflow.com/questions/4634962/where-can-i-learn-advanced-haskell">an answer</a> on Stack Overflow someone pointed out that you can improve on this:</p>
<pre><code>> replicateM 2 [0,1]
[[0,0],[0,1],[1,0],[1,1]]</code></pre>
<h3>Words</h3>
<p>Suppose we want to extend the powerset by allowing repetition of the elements. The <a href="http://www.haskell.org/haskellwiki/Blow_your_mind">Haskell wiki</a> gives us a helpful recipe:</p>
<pre><code>(inits . repeat) ['a'..'b'] >>= sequence
Prelude Data.List> (inits . repeat) "ab" >>= sequence
["","a","b","aa","ab","ba","bb","aaa","aab","aba","abb",...]</code></pre>
<p>How does this work ?</p>
<p>Let’s work up to it in stages. <code>repeat</code> generates an infinite list of strings, then <code>inits</code> gives us a list of lists of strings of increasing lengths:</p>
<pre><code>Prelude Data.List> repeat "ab"
["ab","ab","ab","ab","ab","ab","ab","ab",...]
Prelude Data.List> (inits . repeat) "ab"
[[],["ab"],["ab","ab"],["ab","ab","ab"],...]</code></pre>
<p>As we saw above, <code>sequence</code> gives the Cartesian product of lists. Note that the empty product is not empty: it contains a single empty list:</p>
<pre><code>Prelude Data.List> sequence ([] :: [String])
[""]
Prelude Data.List> sequence ["ab"]
["a","b"]
Prelude Data.List> sequence ["cd","ef"]
["ce","cf","de","df"]</code></pre>
<p>To process all of these at once, just bind <code>sequence</code> to a list of lists:</p>
<pre><code>Prelude Data.List> [[],["ab"],["cd","ef"]] >>= sequence
["","a","b","ce","cf","de","df"]</code></pre>
<p>which is essentially what we need:</p>
<pre><code>Prelude Data.List> (inits . repeat) "ab" >>= sequence
["","a","b","aa","ab","ba","bb","aaa","aab","aba","abb",...]</code></pre>
<h3>Words II</h3>
<p>In his excellent <a href="http://dev.stephendiehl.com/fun/">Write You a Haskell</a> series, Stephen Diehl gives a neater version:</p>
<pre><code>Prelude Control.Monad> take 10 $ [1..] >>= flip replicateM ['a'..'c']
["a","b","c","aa","ab","ac","ba","bb","bc","ca"]</code></pre>
<p>To unpick this, start with the <code>replicateM</code>:</p>
<pre><code>Prelude Control.Monad> replicateM 2 ['a'..'c']
["aa","ab","ac","ba","bb","bc","ca","cb","cc"]
Prelude Control.Monad> flip replicateM ['a'..'c'] $ 2
["aa","ab","ac","ba","bb","bc","ca","cb","cc"]</code></pre>
<p>So <code>flip replicateM ['a'..'c']</code> maps n to all the n letter words. If we pipe increasing integers to this with <code>>>=</code>, we will get the words in the desired order. </p>D94CDAD6-B6FE-11E4-8DBC-3999BDB1ADEF2015-02-17T23:43:41:41Z2015-02-18T00:04:15:15ZSeabassMartin Oldfield<p>A small, silent, MinnowBoard Max in a box. </p><h2>The basic plan</h2>
<p>For a while now I’ve wanted to build a small, silent Linux box to sit on my <span class="caps">LAN </span>and do random tasks. An obvious choice would be a Raspberry Pi, but I wanted something with a proper drive.</p>
<p>When I discovered the <a href="http://www.minnowboard.org/meet-minnowboard-max/">MinnowBoard Max</a> it seemed just the thing. It was easy enough to put the MinnowBoard Max and a 2.5” <span class="caps">SATA SSD </span>in a little box, and it seems to run reliably. However, I was also quite keen to pimp the box, adding a small <span class="caps">LCD </span>display and <a href="http://en.wikipedia.org/wiki/Blinkenlights">blinkenlights.</a></p>
<p><img src="sb1.jpg" alt="" class="img_border" /></p>
<p>I’ve written <a href="/atelier/index/t_MinnowBoard%20Max.html">other articles</a> describing different aspects of the project, but it seemed sensible to discuss it as a whole here. You can also grab all the <a href="https://github.com/mjoldfield/seabass">design files</a> from GitHub, though be warned that there are issues!</p>
<p><img src="sb2.jpg" alt="" class="img_border" /></p>
<h2>Bugs</h2>
<p>For such a simple project there are numerous bugs which it would be annoying to forget.</p>
<ul>
<li>One of the front-panel <span class="caps">LED</span>s is 0.1” out. Happily I spotted this before ordering the panel, but sadly not before ordering the <span class="caps">PCB.</span></li>
<li>The <span class="caps">PCB </span>connects the <span class="caps">LED</span>s between the <span class="caps">GPIO </span>pin and ground. It transpires that the output can sink a lot more current than it can source, and things work rather better if you connect the <span class="caps">LED</span>s between +3.3V and the <span class="caps">GPIO </span>pin. Happily this was easy enough to bodge. It was also nice to recall playing with 74xx logic back some decades ago, where the same issue applied.</li>
<li>The front panel <span class="caps">LCD </span>window is a bit big. In fact the whole <span class="caps">LCD </span>mounting is a bit dodgy: a fair amount of light bleeds from the side of the <span class="caps">LCD, </span>but attempts to block this by sandwiching funky foam between the display and the front panel seemed to fail. Such schemes seem all too often to crack the display.</li>
<li>The back-panel connector cutouts could be made smaller.</li>
<li>There’s a small ridge running along the inside of the case which fouls the <span class="caps">PCB.</span> Happily just cutting off two of the corners solved the problem.</li>
<li>There is a bit of light-bleed between the <span class="caps">LED</span>s: I think the light pipes should be a bit longer.</li>
</ul>
<h2>Lessons</h2>
<h3>Which OS ?</h3>
<p>Without much thought I installed Linux Mint on seabass. It runs well enough, but there seem to be a lot of unnecessary processes running, and they seem hard to remove. Whether this is a feature of systemd isn’t entirely clear to me. However, at some point I might replace it with Ubuntu Server.</p>
<h3><span class="caps">LCD </span>issues</h3>
<p>The <span class="caps">LCD </span>is surprisingly hard to mount well. It’s fragile and easy to break if anything touches the glass, yet without that light bleeds from the backlight. Black isn’t particularly black either.</p>
<h3><span class="caps">ARM </span>or Intel</h3>
<p>Perhaps it’s just the grass being greener, but I came to the conclusion that life would have been easier were I working on an <span class="caps">ARM </span>board. Much of the online stuff talks about DeviceTree for configuring boards rather than <span class="caps">ACPI, </span>and although people at Intel were helpful it seemed to me that much of the knowledge one would like was locked up inside Intel rather than lurking in the community as a whole.</p>
<h2>Final conclusions</h2>
<p>Despite my moans above I think seabass is a success. It runs reliably and silently, and was much more fun to build than just buying an Intel <span class="caps">NUC </span>or similar. </p>71C808DA-8AF4-11E4-82DE-C9ED29729E012014-12-23T22:29:54:54Z2015-02-17T23:30:09:09ZMinnowBoard Max: DisplayMartin Oldfield<p>A <span class="caps">LCD </span>display for the MinnowBoard Max. </p><h2>The display</h2>
<p>I was keen to have a small <span class="caps">LCD </span>display on my MinnowBoard Max based computer. Adafruit have <a href="http://www.adafruit.com/categories/97">a wide range of <span class="caps">LCD</span>s</a> including <a href="http://www.adafruit.com/products/1480">a fine 320×240 version.</a></p>
<p>The <span class="caps">LCD </span>is controlled by an <a href="http://www.adafruit.com/datasheets/ILI9340.pdf"><span class="caps">ILI9340</span></a> chip, which includes a framebuffer, so the MinnowBoard doesn’t have to send data in real time. Instead it sends commands over <span class="caps">SPI </span>which update the framebuffer.</p>
<p>The display <span class="caps">PCB </span>also contains a microSD card slot, but we can ignore that.</p>
<h3>Wiring</h3>
<p>The display has ten pins, of which we use eight:</p>
<table class="spaced" cellspacing="0"><tr><th><span class="caps">LCD</span> Legend</th><th>Signal</th><th colspan="2">MinnowBoard Max</th></tr><tr><td>BL</td><td>Backlight</td><td>pin 21</td><td><span class="caps">GPIO</span> 338</td></tr><tr><td><span class="caps">SCK</span></td><td><span class="caps">SPI </span>clock</td><td colspan="2">pin 11</td></tr><tr><td><span class="caps">MISO</span></td><td><span class="caps">SPI MISO</span></td><td colspan="2">unused</td></tr><tr><td><span class="caps">MOSI</span></td><td><span class="caps">SPI MOSI</span></td><td colspan="2">pin 9</td></tr><tr><td>CS</td><td>Display select</td><td colspan="2">pin 5</td></tr><tr><td><span class="caps">SDCD</span></td><td>SD card select</td><td colspan="2">unused</td></tr><tr><td><span class="caps">RST</span></td><td>Reset</td><td>pin 23</td><td><span class="caps">GPIO</span> 339</td></tr><tr><td>D/C</td><td>Display mode</td><td>pin 25</td><td><span class="caps">GPIO</span> 340</td></tr><tr><td><span class="caps">VIN</span></td><td>Vcc</td><td>pin 3</td><td>5V</td></tr><tr><td><span class="caps">GND</span></td><td>Ground</td><td>pin 1</td><td>0V</td></tr></table>
<h2>Userspace <span class="caps">ILI9340 </span>drivers</h2>
<p>Adafruit have written <a href="https://github.com/adafruit/Adafruit_Python_ILI9341">a python library for the <span class="caps">ILI9340</span></a> which you can (in principle) use with the MinnowBoard Max with my (crude) <a href="https://github.com/mjoldfield/Adafruit_Python_GPIO">port of the Adafruit Python <span class="caps">GPIO </span>library</a>.</p>
<p>However, the code does not install or compile cleanly. If you are really interested, contact me.</p>
<p>The guts of the python library have <a href="https://github.com/MinnowBoard/max-ILI9341-C-Driver-port">been ported to C</a> which is easier to use:</p>
<pre><code>$ git clone https://github.com/MinnowBoard/max-ILI9341-C-Driver-port.git
$ cd max-ILI9341-C-Driver-port/</code></pre>
<p>Now edit the <span class="caps">GPIO </span>and <span class="caps">SPI </span>device numbers. Here are the changes I made:</p>
<pre><code>$ git diff
diff --git a/testprogram.c b/testprogram.c
index 4be10e8..e2e3e3a 100644
--- a/testprogram.c
+++ b/testprogram.c
@@ -36,9 +36,9 @@ int main (int argc, const char *argv[])
#define INTERLACE 3
#define ROWSPERFRAME (ILI9341_TFTHEIGHT/(2*INTERLACE))
- data_command_select_fd = init_output_gpio (82);
- reset_fd = init_output_gpio (83);
- display_fd = init_spidev (32766, 0);
+ data_command_select_fd = init_output_gpio (340);
+ reset_fd = init_output_gpio (339);
+ display_fd = init_spidev (0, 0);
ili_reset ();
ili_init ();</code></pre>
<p>Now compile and run, but don’t forget to load the module to create the <a href="./mbmx-spi.html"><span class="caps">SPI </span>userspace devices:</a></p>
<pre><code>$ make
cc -std=c99 -g -Ofast -Wall -Wextra -c -o testprogram.o testprogram.c
testprogram.c: In function ‘main’:
testprogram.c:29:15: warning: unused parameter ‘argc’ [-Wunused-parameter]
int main (int argc, const char *argv[])
^
testprogram.c:29:33: warning: unused parameter ‘argv’ [-Wunused-parameter]
int main (int argc, const char *argv[])
^
cc -std=c99 -g -Ofast -Wall -Wextra -c -o ILI9341.o ILI9341.c
cc testprogram.o ILI9341.o -lm -o testprogram
$ sudo insmod .../low-speed-spidev.ko
$ sudo ./testprogram </code></pre>
<h2>The <span class="caps">FBTFT </span>driver</h2>
<p>On the Raspberry Pi, <span class="caps">SPI </span>driven displays are well supported. A chap called notro has written a fine <a href="https://github.com/notro/fbtft">framebuffer device</a> for <a href="https://github.com/notro/fbtft/wiki/LCD-Modules">many different <span class="caps">LCD </span>displays.</a></p>
<p>Compiling this device for the MinnowBoard Max was easy: I just followed the instructions. The only problem was that <span class="caps">DMA </span>didn’t work. I think I ought to be able to disable this from the command line, but I did not manage to get that to work. So I just patched fbtft-core.c:</p>
<pre><code>--- a/fbtft-core.c
+++ b/fbtft-core.c
@@ -52,7 +52,7 @@ static unsigned long debug;
module_param(debug, ulong , 0);
MODULE_PARM_DESC(debug, "override device debug level");
-static bool dma = true;
+static bool dma = false;
module_param(dma, bool, 0);
MODULE_PARM_DESC(dma, "Use DMA buffer");
</code></pre>
<h3>Module arguments</h3>
<p>In principle one could write a board-specific kernel module to describe the <span class="caps">LCD </span>in use, and bind it to a specific <span class="caps">SPI </span>device.</p>
<p>Happily there is an alternative. The <a href="https://github.com/notro/fbtft/wiki/fbtft_device">fbtft_device</a> accommodates many configurations by suitable choices of its (many) module arguments. For my setup the magic runes are:</p>
<pre><code>sudo modprobe fbtft_device name=adafruit22a speed=15000000 \
gpios=reset:339,dc:340,led:338 rotate=270</code></pre>
<p>The fbtft device supports a simple binary backlight: it is either on or off. Given the <span class="caps">PWM </span>possibilities of the MinnowBoard Max it is easy to have quasi-continuous control of the backlight, just not within the fbtft framework.</p>
<h2>A Cairo example</h2>
<p>Having created a suitable /dev/fb1 device (/dev/fb0 being the MinnowBoard Max’s main display) we will need something to drive it.</p>
<p><a href="http://cairographics.org">Cairo</a> is a popular and featureful library for generating 2D images, and happily it’s easy to make it work with the framebuffer.</p>
<p>I found <a href="http://lists.cairographics.org/archives/cairo/2010-July/020378.html">working code from Andrea Rossignoli</a> on the Cairo mailing list which dates back to 2010. The only change I needed to make was to change /dev/fb0 to /dev/fb1.</p>
<p>Thank you Andrea.</p>
<p>Lest it disappear, I’ve put a copy of the code on <a href="https://github.com/mjoldfield/seabass.git">github.</a> </p>E21142F8-5315-11E4-A0A5-AACA08B378822014-10-13T20:16:26:26Z2015-01-30T13:29:11:11ZMinnowBoard Max: BasicsMartin Oldfield<p>Brief notes on booting up a MinnowBoard Max. </p><h2>Introduction</h2>
<p>Roughly speaking, the <a href="http://www.elinux.org/Minnowboard:MinnowMax">MinnowBoard Max</a> is a Raspberry Pi like board based around an Intel Atom. It’s rather more expensive than the Pi, but crucially has a <span class="caps">SATA </span>port which allows one to connect a normal disk-drive.</p>
<h2>Software</h2>
<p>I tried installing the 64-bit versions of both <a href="http://www.linuxmint.com/release.php?id=22">Mint 17 <span class="caps">MATE</span></a> and <a href="http://www.ubuntu.com/download/server">Ubuntu 14.04.01 server</a> edition. Both appeared to work, but I’ve not run either in anger yet.</p>
<p>One needs a magic incantation to convert the OS <span class="caps">ISO </span>image into a format suitable for a <span class="caps">USB </span>stick. On OS X:</p>
<pre><code>hdiutil convert -format UDRW -o out.img in.iso</code></pre>
<h3>A new kernel</h3>
<p>When playing with hardware the Linux kernel has many useful modules which are not enabled in the stock Ubuntu and Mint builds. So I <a href="../12/kernel-cookbook.html">compiled my own kernel.</a></p>
<p>I put a copy of the <a href="https://github.com/mjoldfield/seabass/blob/master/config">kernel config</a> on GitHub. My kernel tree’s been patched to include notro’s <a href="https://github.com/notro/fbtft">tftfb</a> device.</p>
<h2>Ethernet</h2>
<p>The <a href="http://en.wikipedia.org/wiki/MAC_address"><span class="caps">MAC </span>address</a> of the Ethernet adapter was set to 00:00:00:00:00:00 which seemed to break the network.</p>
<p>Happily it’s easy to hack this on a live system, and patch /etc/network/interfaces to solve the problem permanently. Wikibooks tells you <a href="http://en.wikibooks.org/wiki/Changing_Your_MAC_Address">all you need to know.</a></p>
<h3>Patch it in firmware</h3>
<p>In principle, I think <a href="https://uefidk.com/content/minnowboard-max">Intel’s firmware updater</a> should allow you to set the <span class="caps">MAC </span>address. It didn’t appear to work for me though.</p>
<h2><span class="caps">SATA </span>power</h2>
<p>I hooked up an old Crucial <span class="caps">SSD </span>drive to the <span class="caps">SATA </span>port. Experiments suggest that the drive only needs a 5V power rail, and 5V is conveniently available on J2 next to the <span class="caps">SATA </span>port.</p>
<p>However:</p>
<ul>
<li>The <strong>polarity was shown wrongly in some online documentation</strong>, so check it yourself before connecting anything up.</li>
<li>I’ve no idea whether J2 can supply enough current.</li>
<li>I’ve no idea whether running the <span class="caps">SSD </span>without a 12V rail is sensible.</li>
</ul>
<p>In other words, if you do try this it might, as far as I know, break something.</p>
<p>Luis Montoya pointed out to me that this approach is discussed on the <a href="http://minnowboard.57273.x6.nabble.com/MinnowBoard-SATA-Power-Options-td555.html">mailing list.</a></p>
<h2>Online help</h2>
<ul>
<li>The main source is at <a href="http://www.elinux.org/Minnowboard:MinnowMax">eLinux.</a></li>
<li>There is a helpful mailing list which is archived on <a href="http://minnowboard.57273.x6.nabble.com">nabble.</a></li>
<li>There is useful software on <a href="https://github.com/MinnowBoard/minnow-max-extras">GitHub</a></li>
</ul>
<h2><span class="caps">CPU </span>data</h2>
<p>I’ve been playing with the dual-core version, which uses an Intel Atom <a href="http://ark.intel.com/products/78474/Intel-Atom-Processor-E3825-1M-Cache-1_33-GHz"><span class="caps">E3825.</span></a> The <a href="http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/atom-e3800-family-datasheet.pdf">datasheet</a> for the <span class="caps">CPU </span>runs to some 5,000 pages, but can be helpful! </p>53CB6DB2-BFBF-11E0-942E-B5D9707682972011-08-05T16:21:21:21Z2015-01-18T21:28:02:02ZAn IntervalometerMartin Oldfield<p>A simple intervalometer driven by a <span class="caps">PIC </span>microcontroller. </p><h2>Preamble</h2>
<p>If one wants to make time-lapse movies, then one really needs a device which regularly tells the camera to take a picture. Despite its active role, such things are called <a href="http://en.wikipedia.org/wiki/Intervalometer">intervalometers.</a> You can buy them ready made, and the Internet is awash with designs for <span class="caps">DIY </span>versions: just <a href="http://www.google.com/search?q=diy+intervalometer">ask Google.</a> However, I designed and built my own.</p>
<p><img src="intervalometer/intervalometer.jpg" alt="" class="img_border" /></p>
<h2><em>Desiderata</em> and the broad design</h2>
<p>I decided that the following were important to me:</p>
<ul>
<li>long battery life;</li>
<li>easy, repeatable setting;</li>
<li>good accuracy;</li>
</ul>
<p>but that I didn’t worry too much about:</p>
<ul>
<li>precision;</li>
<li>fancy status displays;</li>
<li>non-temporal triggers.</li>
</ul>
<p>Perhaps the key decision was the high-accuracy, low-precision trade-off. By this I mean that I want to be able to easily specify e.g. a 1 minute interval between frames, but I don’t need the ability to specify 1 minute and 3 seconds. However, when I say one minute I really do mean one minute.</p>
<p>This naturally led me to a digital <span class="caps">UI, </span>rather than e.g. a potentiometer and some sort of display. In fact, I used a couple of 4 way switches: one specifies 1,3,10 or 30, the other seconds, minutes, hours or days. One can simply look at the positions of the switches to see how the device is configured: we don’t need a fancy <span class="caps">LCD </span>display.</p>
<p>To actually generate the pulses there are a few choices. One might distinguish between analogue solutions like the ubiquitous 555 timer, and digital ones which are effectively a quartz-oscillator and a programmable divider. The latter seem to offer better long-term stability, so that’s what I used.</p>
<p>For no better reason than I had some to hand, I based the device on a <a href="http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en023112"><span class="caps">PIC</span> 16F690.</a> To keep the power consumption down, it’s clocked with a 32kHz quartz crystal. The whole thing is easy to power from a couple of AA batteries, but this isn’t critical. For example a 5V supply could be used instead if that were more convenient.</p>
<p>Finally one has to decide what to do when the time comes to take a photo. I want the intervalometer to drive a Canon 400D <span class="caps">DSLR, </span>which has separate focus and trigger inputs. It seems sensible to drive both of them independently, giving the camera a few seconds to focus before taking the photo. That way any variation in the time the camera takes to focus won’t affect the time at which the photo is taken.</p>
<p>On top of this, I wanted an extra output so that, for example, I could break the power to the camera when it wasn’t needed (the 400D draws about 40mA when ‘resting’).</p>
<p>In total, then, there are three outputs, all of which are opto-isolated. I’m not sure whether this was necessary, or indeed desirable, for the camera, but still.</p>
<p>Finally, as you’d expect, there’s also a status <span class="caps">LED </span>on the front panel. Normally this is off, but it flashes when the camera’s being asked to focus then glows steadily when the shutter’s triggered.</p>
<p>Abstracting, our device takes a 5-bit input configuration (4 numbers × 4 units × 2 power modes) and generates a sequence of 4-bit outputs. Simples!</p>
<h2>Hardware design</h2>
<p>The basic schematic is shown below, exported from <a href="http://www.diptrace.com/index.php">DipTrace.</a> Feel free to play with the <a href="http://www.mjoldfield.com/atelier/2011/08/intervalometer/prod-0.1.dch">source code.</a></p>
<p><a href="http://www.mjoldfield.com/atelier/2011/08/intervalometer/schematic-big.png"><img src="intervalometer/schematic.png" alt="" class="img_border" /></a></p>
<p>There’s very little here beyond the basic outline above. Switches K1 and K2 set the interval. To save input pins we arrange them in a matrix which we scan in software. Switch <span class="caps">K3</span>, an addition, simply enables or disables the third output, which might be used to control camera power.</p>
<table class="cspaced" cellspacing="0"><tr><td></td><th>K2 / <span class="caps">RC6 </span>/ value</th><th>K1 / <span class="caps">RC7 </span>/ unit</th></tr><tr><th><span class="caps">RB4</span></th><td>1</td><td>second</td></tr><tr><th><span class="caps">RB5</span></th><td>3</td><td>minute</td></tr><tr><th><span class="caps">RB6</span></th><td>10</td><td>hour</td></tr><tr><th><span class="caps">RB7</span></th><td>30</td><td>day</td></tr></table>
<p>We have a couple of spare outputs on <span class="caps">PORTC, </span>and they’re brought out to test pads. Normally, we use <span class="caps">RC4 </span>to generate a calibration signal.</p>
<p>Finally, U2 is a standard Microchip <span class="caps">ICSP </span>header, suitable for a <span class="caps">PICK</span>it2 or similar programmer.</p>
<p>A prototype was built on stripboard,</p>
<p><img src="intervalometer/prototype.jpg" alt="" class="img_border" /></p>
<p>but I designed a simple <span class="caps">PCB </span>too. If you’d like to build your own, then you might find <a href="https://github.com/mjoldfield/into-meter">the gerber files</a> on GitHub. As you can see, it’s a very simple <span class="caps">PCB</span>:</p>
<p><img src="intervalometer/pcb.jpg" alt="" class="img_noborder_small" /></p>
<h3>Testing</h3>
<p>If you do build the board, it is probably wise to set the interval to 10 seconds so that you get some action fairly quickly.</p>
<p>There is also a useful test signal on <span class="caps">RC5.</span> You should see a 16Hz signal with a very low duty-cycle: the on-time is about 490µs.</p>
<p>Incidentally at the time of writing I have some spare <span class="caps">PCB</span>s. If you’d like one please let me know.</p>
<h2>Software</h2>
<p>The software is a simple bit of assembler compatible with <span class="caps">GNU</span>’s gpasm. It’s not particularly elegant or efficient, but seems to work. You can grab <a href="https://github.com/mjoldfield/into-meter">the source and a hex file</a> from GitHub.</p>
<p>To understand the code, it’s helpful to know that Timer1 is configured to generate interrupts at about 16Hz, and most of the code runs inside the Timer1 interrupt. Timer1 is clocked by the system oscillator, prescaled 4:1, and so counts at about 8kHz. Thus, it’s synchronous with the instruction clock—instructions take four ticks on this processor. To generate the 16Hz interrupts, we need 512 counts per interrupt. Incidentally 16Hz is somewhat arbitrary: it needs to be fast enough to scan the switches, but that’s about all.</p>
<p>When controlling the outputs it’s worth remembering that the longest interval between triggers is 30 days, or about 42 million periods of 16Hz. That’s just too much for a 24-bit counter:</p>
\[
\begin{align} 30 \times 86400 \times 16 &= 41,472,000, \\ &\approx 2^{25.3}. \end{align}
\]
<p>However we could use a 24-bit counter if we increment it at 4Hz i.e. on every fourth timer interrupt:</p>
\[
\begin{align} 30 \times 86400 \times 4 &= 10,368,000, \\ &\approx 2^{23.3}. \end{align}
\]
<p>So there’s our basic design. Timer1 generates interrupts at 16Hz which we’ll use to scan the inputs. Every fourth interrupt we’ll increment a 24-bit counter which controls the outputs. All the output transitions happen at small values of the counter, after which there will be a period of inactivity while we wait for things to start again.</p>
<p>Happily all the interesting transitions happen in the first 64s, so the state machine which drives them can deal with purely 8-bit quantities. It’s only the overflow detection which needs the full 24-bit calculation.</p>
<p>The transition state machine can be further simplified because the state of the status <span class="caps">LED </span>can be inferred from the other outputs and the clock phase. Accordingly we don’t need to explicitly list its transitions. This is both simpler and reduces the chance of the <span class="caps">LED </span>failing to reflect the true status.</p>
<p>If you want to understand the details of the output transitions then read the code, but basic idea is that we first ask the camera to focus, then ask it to take a picture. If we’re controlling power, then we need to apply it before focussing and turn it off again some time after taking the photo. The status <span class="caps">LED </span>is off when nothing’s happening then flashes progressively brighter, staying on continuously when the shutter’s triggered.</p>
<p>The traces below illustrate this. We begin with the standard behaviour, used when the interval’s a minute or longer. As the interval increases the featureless area to the right extends. It’s been arbitrarily truncated here.</p>
<p><a href="http://www.mjoldfield.com/atelier/2011/08/intervalometer/unpowered.svg"><img src="intervalometer/unpowered.svg" alt="" class="img_indent200" /></a></p>
<p>For shorter intervals the whole sequence is compressed. Here’s the 10s version:</p>
<p><a href="http://www.mjoldfield.com/atelier/2011/08/intervalometer/unpowered10.svg"><img src="intervalometer/unpowered10.svg" alt="" class="img_indent200" /></a></p>
<p>Finally the <a href="http://www.mjoldfield.com/atelier/2011/08/intervalometer/powered.svg">sequence used when we want to control the power</a> is longer still: too long to sensibly display on the page.</p>
<p>Although the pictures are pretty, if you want to explore this in more detail you’re better off reading the source code.</p>
<h2>Power consumption</h2>
<p>One of the key things I wanted from the design was to keep the power consumption low. By putting the <span class="caps">PIC </span>to sleep one can get ridiculously low power consumption but it’s tricky then to keep the timer going.</p>
<p>In practice, when clocked at 32kHz with a 3V supply from a couple of AA batteries, the power consumption is about 60µA. If the batteries have a capacity of 1Ah they should last for nearly two years, which seems reasonable.</p>
<p>However there’s another significant power drain: the <span class="caps">LED</span>s and opto-couplers draw about 5mA each when on. Each exposure clocks up about 15 <span class="caps">LED </span>seconds, or about 2 × 10⁻⁵ Ah. So a better model for battery life is to say that we’ll be able to take about 50,000 photos.</p>
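<p>Spelling out the arithmetic behind these two battery-life estimates:</p>
\[
\begin{align} \frac{1\,\text{Ah}}{60\,\mu\text{A}} &\approx 16{,}700\,\text{h} \approx 1.9\,\text{years}, \\ \frac{1\,\text{Ah}}{2 \times 10^{-5}\,\text{Ah}/\text{photo}} &= 50{,}000\,\text{photos}. \end{align}
\]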
<p>Although not disastrous, it’s obvious that this could be improved. The opto-isolators driving the camera might work with smaller currents, or could be replaced entirely with transistors: it’s not quite clear that the isolation is needed.</p>
<p>Without the optos, it makes more sense to reduce the power drawn by the status <span class="caps">LED</span>: either lower current, a smaller duty-cycle, or both.</p>
<h2>Accurate timing</h2>
<p>I mentioned above that I was keen to get reasonably good accuracy from the intervalometer. One movie I wanted to make was a clock with the frames exactly one minute apart. Then, the second hand would stay still whilst the other hands moved. If such a movie were to last a day, then we’d need an accuracy of about 1 in 10⁵.</p>
<p>We’ll ignore the issue of frequency drift for now, and pretend that the only problem to overcome is that the crystal’s frequency isn’t exactly 32,768 Hz. I don’t have a data sheet for the specific crystal I used, but an accuracy of ±20ppm seems to be fairly standard. That’s about twice what we’d like to achieve.</p>
<p>One could try to fix this in the analogue realm by changing the oscillator capacitors and thus its frequency. However, it’s more sensible to handle the problem in the digital domain. We’ll need to get some sense of scale though. Recall that each instruction takes about 122µs to execute, and that the time between exposures must be an integral number of instructions.</p>
<p>Now, if we want to change the interval between exposures by 1ppm we’ll need to execute at least a million instructions. That will take about two minutes. In these days where so much software is written without much regard to the instruction count (because <span class="caps">CPU</span>s are so fast), it’s sobering to be in realm where we’re concerned with a single extra instruction every two minutes!</p>
<p>Quite often we’ll want the interval between exposures to be less than two minutes, so it’s clear that to get the right average exposure, we’ll have to vary the interval between exposures. For example, if the clock runs a bit fast so that we’d like (ideally) to have 512.3 ticks between exposures, we’ll choose between 512 and 513.</p>
<p>Now, over how many Timer1 cycles should we do the averaging? We know that to get 1ppm adjustments we’ll need to wait about 2 minutes, which is roughly 2000 complete Timer1 cycles. Given that it’s not going to be a particularly quick process I thought it worth waiting 4096 Timer1 cycles. That should take a pleasing 256s to complete.</p>
<p>Explicitly we’ll have a tuning parameter between 0 and 4095, and implement a counter which counts from 0 to 4095. When the sum of the two is more than 4095 then we’ll load the timer with the higher number.</p>
<p>There’s one minor twist: rather than have a 12-bit counter (2¹² = 4096) which increments one digit at a time, there’s a 16-bit counter which counts up 16 at-a-time. Putting the four unused bits at the least-significant end means that we can specify the instruction count for the 256s cycle in a single 32-bit value.</p>
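<p>In Python, the scheme might be sketched like this (my reconstruction of the logic, not the PIC code):</p>

```python
# A 16-bit counter advances by 16 each Timer1 cycle; the carry out of
# (counter + offset) decides whether this cycle gets an extra tick.
ADJ_INC = 0x10    # counter increment per Timer1 cycle

def step(counter, offset):
    """Advance the counter one Timer1 cycle; return (counter, extra_tick)."""
    counter = (counter + ADJ_INC) & 0xFFFF
    return counter, (counter + offset) > 0xFFFF

# e.g. an offset of 0x1000 should stretch 1/16 of the 4096 cycles:
c, extras = 0, 0
for _ in range(4096):
    c, e = step(c, 0x1000)
    extras += e
print(extras)     # 256
```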
<p>In 256s we’ll execute about two million instructions, so our precision will be about 0.5ppm. Happily, my bench timer claims an accuracy of 0.2ppm which gives us a good way to do the tuning. To drive the timer, we’ll pulse one of the spare <span class="caps">PORTC </span>outputs every 256s.</p>
<p>There are a couple of details to consider. All the counters increment and things happen when they overflow. So, if we want n cycles in our 256s period the configuration datum will be 0x100000000 - n.</p>
<p>Loading a new value into Timer1 costs a couple of ticks, so actually the datum will be 0x100020000 - n.</p>
<p>Finally time, and instructions, will elapse between the interrupt being triggered and us reloading the timer. So, we should <strong>add</strong> a correction to the timer’s <span class="caps">LSB </span>rather than setting it.</p>
<p>In practice this scheme works well enough. Rather lazily though, there’s no convenient way to calibrate the intervalometer: instead one has to edit the constant in the source code, assemble, and upload it. Here’s the relevant code:</p>
<pre><code>;; higher number => shorter period
constant tmr1_dh = 0xfe ; 0xfe00 -> 0x10000 = 512 => 16Hz fast clock
constant tmr1_dl = 0x01 ; these 2 cycles are lost when we reload
;; tweak setting (only the 12 most significant bits matter)
;; this is device/crystal specific
constant tmr1_adj_h = 0xff ;
constant tmr1_adj_l = 0x70 ;
;; increment to adjustment clock (0x80 => 32s cycle, 0x40 = 64s cycle, ..)
constant adj_clk_inc = 0x10 </code></pre>
<p>We’ll see later that the oscillator frequency depends a bit on the supply voltage, so irritatingly if we program the <span class="caps">PIC </span>at 5V but deploy the intervalometer at 3V we’ll have to take this into account.</p>
<h3>A helpful shuffle</h3>
<p>Whilst the scheme above works, there’s a snag. All of the (n+1) cycle periods are clumped together. This makes the deviation from the ideal behaviour worse than it need be.</p>
<p>Happily, it’s easy to make a significant improvement. Recall that the heart of the problem is that the code for picking the Timer1 period is:</p>
<pre><code>inc = (cycle + offset) > 4095 ? n : n + 1;</code></pre>
<p>where cycle counts from 0 to 4095, and offset is fixed.</p>
<p>Suppose we change that to:</p>
<pre><code>inc = (P(cycle) + offset) > 4095 ? n : n + 1;</code></pre>
<p>where P(i) shuffles the numbers [0,4095]. Over a complete cycle the test will be true just as often, but it will be true at different times.</p>
<p>One could imagine all manner of clever definitions for P, but this doesn’t do a bad job:</p>
<pre><code>P(i) = i `xor` ((i & 0xff) << 8)</code></pre>
<p>That is, just <span class="caps">XOR </span>the high byte with the low.</p>
<p>It’s probably obvious that this just shuffles the elements, but if it’s not here’s a demonstration (in Haskell):</p>
<pre><code>> :m Data.Bits Data.List
> let states = [ (h,l) | h <- [0..255], l <- [0,16..255] ]
> take 8 states
[(0,0),(0,16),(0,32),(0,48),(0,64),(0,80),(0,96),(0,112)]
> let states' = map (\(h,l) -> (h `xor` l, l)) states
> take 8 states'
[(0,0),(16,16),(32,32),(48,48),(64,64),(80,80),(96,96),(112,112)]
> sort states' == sort states
True</code></pre>
<p>Or, if you prefer pictures, the plot below shows the shuffle. It might make more sense to think of the plot as a bit map in which every row and column has precisely one cell filled. To see which cycles will enjoy the extra timer tick, mentally draw a horizontal line at the relevant level. Then regard the x-axis as time: if there’s a dot below the line at that time, then we’ll get an extra tick.</p>
<p>By contrast, the unpermuted code would simply have a ‘y = x’ line here: if you play the same game with the horizontal line, all the extra-tick times will be clumped at the left-side of the graph.</p>
<p><img src="intervalometer/munge.svg" alt="" class="img_noborder" /></p>
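<p>The same check as the Haskell demonstration, but in Python, and extended to show that the shuffle leaves the <em>number</em> of long cycles unchanged (again just an illustration of the scheme, not the device code):</p>

```python
def P(i):
    return i ^ ((i & 0xFF) << 8)    # XOR the high byte with the low

states = list(range(0, 0x10000, 16))           # the 16-bit counter states
assert sorted(P(i) for i in states) == states  # P merely shuffles them

offset = 0xF000
naive_long = sum(i + offset > 0xFFFF for i in states)
xor_long = sum(P(i) + offset > 0xFFFF for i in states)
print(naive_long, xor_long)                    # same count, different times
```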
<p>Ultimately of course we care about the effect on the interval between shutter triggers. The plots below show these, but some interpretation is needed.</p>
<p>Suppose we just plotted the interval over time. The interval’s nominally 10s, and we’re looking for changes on the order of 100µs: the time to execute an instruction. Clearly we’re looking for a small effect!</p>
<p>If there were no oscillator drift, we could simply subtract a suitable constant, but sadly that’s not the case. The oscillator does drift over time, so we pre-process the signal to remove this. Explicitly we plot the difference between the measured time and the local (±3 samples) minimum. This should remove the drift in both the intervalometer’s oscillator and the meter’s timebase (the latter might be significant because I took the measurements with an Arduino).</p>
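<p>The de-trending step is simple enough to sketch (my code, illustrating the idea rather than reproducing the original analysis):</p>

```python
# Subtract the local (+/-3 samples) minimum from each measurement to
# remove slow drift, leaving only the cycle-to-cycle variation.
def detrend(ts, k=3):
    return [t - min(ts[max(0, i - k):i + k + 1]) for i, t in enumerate(ts)]

raw = [100, 101, 100, 102, 101, 103, 102, 104]   # ticks, with slight drift
print(detrend(raw))    # [0, 1, 0, 2, 1, 3, 1, 3]
```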
<p>Rather than plot the difference in seconds, we’ll show it in clock ticks i.e. 2⁻¹³s. Despite appearances to the contrary, there’s no rounding to the nearest integer: if you look at the data you’ll see variation at the ±0.03 instruction level.</p>
<p>It’s immediately obvious that the <span class="caps">XOR </span>shuffle improves the distribution: there are only two different intervals and they vary by a single instruction. By contrast the naive code sometimes generates an interval some nine ticks longer. Before we lose perspective though, that’s about one millisecond!</p>
<p><img src="intervalometer/dith2000.svg" alt="" class="img_noborder" /></p>
<p>The <span class="caps">XOR </span>code isn’t perfect though. The plot below shows a small section of fifty intervals, and it’s obvious that the longer intervals aren’t quite evenly distributed over time. There are better solutions, but most need significantly more than a single instruction.</p>
<p><img src="intervalometer/dith50.svg" alt="" class="img_noborder" /></p>
<h2>Oscillator drift</h2>
<p>Although it would be nice to ignore it, in practice the oscillator frequency does depend on the environment.</p>
<h3>Voltage dependence</h3>
<p>It’s easy to verify that the frequency depends on the intervalometer’s supply voltage, and that higher voltages correspond to higher frequencies.</p>
<p>The graph below shows some experimental data covering the range 3–5 volts. You’ll see that the interval was measured twice at each voltage, in an attempt to isolate the voltage dependence from e.g. coincidentally correlated changes in temperature. Given that the difference between the two measurements is much smaller than the change between successive voltages, it seems fair to claim that the voltage is driving this.</p>
<p><img src="intervalometer/freq-v.svg" alt="" class="img_noborder" /></p>
<p>It’s clear that the relationship is roughly linear, and a simple least-squares fit gives:</p>
\[
\tau(V) = 255.99786 \left(1 - 1.12 \times 10^{-6} (V - 4.0) \right).
\]
<p>The basic story though is that over this range of voltages, there’s an approximately-linear fractional-change of about -1.12 × 10⁻⁶ per volt.</p>
<p>Thus as the battery discharges we’ll see a drop of less than 0.5V, which translates to a fractional-change of about 5 × 10⁻⁷, so we don’t have to worry about it.</p>
<p>On the other hand, if we program the <span class="caps">PIC </span>at 5V but deploy at 3V we’ll expect the period to rise by about 0.6ms. Accordingly we should tune for 255.9994s under 5V to see 256.0000s at 3V.</p>
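<p>The correction follows directly from the fit:</p>

```python
# Fractional period change per volt, from the least-squares fit above.
slope = -1.12e-6
shift = 256.0 * slope * (3.0 - 5.0)   # going from 5 V down to 3 V
print(shift)                          # about 5.7e-4 s, i.e. roughly 0.6 ms
```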
<h3>Temperature dependence</h3>
<p>Quartz oscillators rely on the piezo-electric property of quartz to connect its electrical and mechanical properties: it turns a physical resonance into an electrical one. Accordingly we’d expect temperature, which changes the physical characteristics, to change the electrical ones too.</p>
<p>Typically data sheets say that the resonant frequency change with temperature is well-modelled by,</p>
\[
\frac{f_{res}(T)}{f_0} = 1 - \alpha (T - T_0)^2,
\]
<p>where \( \alpha \approx 0.04 \times {10}^{-6} \textrm{C}^{-2} \), \( T_0 = 25^{\circ}\textrm{C} \).</p>
<p>Given that the effect is small, it’s easy to convert this into an expression for the period:</p>
\[
\tau(T) = \tau_0 \left(1 + \alpha (T - T_0)^2\right).
\]
<p>Sadly I don’t have any sort of temperature controlled chamber to hand, so I just left the intervalometer on the bench for a while, logging the period and temperature automatically. Here’s what I found:</p>
<p><img src="intervalometer/freq-t.svg" alt="" class="img_noborder" /></p>
<p>The solid line is a parabola fitted to the data by eye. It has equation:</p>
\[
\tau(T) = 255.99929 \left(1 + 3.5 \times 10^{-8} (T - 19.75)^2\right).
\]
<p>That seems broadly consistent with what we’d expect though the temperature of the extremum seems lower than I’d expected.</p>
<p>It seems foolish to infer too much of a quantitative nature from these data: they’re just not good enough:</p>
<ol>
<li>The temperature measurements are only precise to the nearest 0.5°C, and could easily have a few degrees of systematic inaccuracy.</li>
<li>The time measurements appear to have an interesting likelihood structure which probably comes from the algorithm used by the counter (a TTi <span class="caps">TF930</span>). For example, one sees a gap of about 50µs between the clusters of points at given temperature. That’s roughly the period of a 16kHz clock: two ticks of the intervalometer’s master oscillator or about half an instruction. Neither of those seem a particularly good explanation.</li>
</ol>
<p>Overall though, it seems reasonable to say that if the temperature remains within 20±5°C the clock won’t vary by more than about 1ppm. In other words, providing we’re not working outside, we can forget about the problem.</p>
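<p>Plugging the fitted coefficient back in confirms the claim:</p>

```python
# Worst-case fractional change over a 20 +/- 5 degC window, using the fit.
alpha = 3.5e-8              # 1/degC^2
frac = alpha * 5.0 ** 2
print(frac)                 # 8.75e-07, i.e. just under 1 ppm
```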
<h2>Useful files</h2>
<p>The <a href="https://github.com/mjoldfield/into-meter">project files</a> are now on GitHub.</p>
<p>The design is available under the <a href="http://creativecommons.org/licenses/by-sa/3.0/"><span class="caps">CCSA</span> 3.0 license</a>.</p>
<h2>A gratuitous movie</h2>
<iframe src="http://player.vimeo.com/video/27400494?byline=0&title=0&portrait=0" width="400" height="300" frameborder="0"></iframe>
<h2>Acknowledgements</h2>
<p>I am grateful to Peter Mann who built one of these, despite a lack of clear instructions. The article has now been improved by Peter’s feedback. </p>1ACF5724-870A-11E4-BCF2-DED076FE6E4C2014-12-14T16:49:52:52Z2015-01-18T17:11:00:00ZMinnowBoard Max: GPIOMartin Oldfield<p>Controlling <span class="caps">GPIO </span>lines on the MinnowBoard Max. </p><p>Although the MinnowBoard Max has a <span class="caps">SATA </span>port like a normal <span class="caps">PC, </span>it also has nice IO lines like a Raspberry Pi. The easiest pins to use are on a 26-pin 0.1" DIL header, whose <a href="http://www.elinux.org/MinnowBoard:MinnowMax#Low_Speed_Expansion_.28Top.29">pinout</a> is shown on the eLinux website.</p>
<h2>sysfs</h2>
<p>If the proper kernel modules are installed, you can access the <span class="caps">GPIO </span>lines from userspace through devices in /sys.</p>
<p>Happily this is well <a href="https://www.kernel.org/doc/Documentation/gpio/sysfs.txt">documented in the kernel sources,</a> but there is <a href="http://elinux.org/GPIO">less abstract documentation</a> on the eLinux website.</p>
<p>Sadly though the stock Ubuntu and Mint kernels <em>do not</em> include the relevant devices, so you will probably end up <a href="./kernel-cookbook.html">compiling your own.</a> As <a href="http://minnowboard.57273.x6.nabble.com/MinnowBoard-MinnowBoard-MAX-getting-started-with-GPIO-tp736p743.html">Peter Ogden notes,</a> the key configuration settings are:</p>
<pre><code>CONFIG_PINCTRL_BAYTRAIL=y
CONFIG_GPIOLIB=y
CONFIG_GPIO_SYSFS=y</code></pre>
<p>Thanks to Peter for pointing this out.</p>
<p>You can also grab <a href="https://github.com/mjoldfield/seabass/blob/master/config">the config I used</a> from GitHub.</p>
<h2><span class="caps">GPIO </span>numbering</h2>
<p><em><span class="caps">GPIO </span>numbering changed between versions 3.17 and 3.18 of the kernel. The discussions below assume the latter, but you can adjust them to the old world order by subtracting 256.</em></p>
<p>Having compiled a new kernel, and rebooted into it, you can see the <span class="caps">GPIO </span>entries in /sys:</p>
<pre><code># ls /sys/class/gpio/
export gpiochip338/ gpiochip382/ gpiochip410/ unexport</code></pre>
<p>Each gpiochip entry corresponds to a bank of <span class="caps">GPIO </span>pins.</p>
<p>We can find out more about the <span class="caps">GPIO </span>banks from sysfs:</p>
<pre><code># cat /sys/class/gpio/gpiochip*/base
338
382
410
# cat /sys/class/gpio/gpiochip*/ngpio
44
28
102</code></pre>
<p> So, for example, we can see that there is a block of 44 <span class="caps">GPIO </span>pins starting at <span class="caps">GPIO</span> 338.</p>
<h3>Hello Blinky</h3>
<p>Pin 25 on the <span class="caps">DIL </span>header is easy to identify, so let’s connect an <span class="caps">LED </span>between there and Ground (with a suitable series resistor). Pin 25 is labelled (on the <a href="http://www.elinux.org/images/f/fd/MinnowMax_RevA1_sch.pdf">schematic</a>) as <span class="caps">GPIO</span>_S5_2, which can be identified as the third pin in the block of 44. For me, that block starts at <span class="caps">GPIO</span> 338, so my pin 25 corresponds to <span class="caps">GPIO</span> 340.</p>
<p>Having sorted out the hardware, we can flash the <span class="caps">LED </span>thus:</p>
<pre><code># echo 340 > /sys/class/gpio/export
# echo out > /sys/class/gpio/gpio340/direction
# echo 0 > /sys/class/gpio/gpio340/value
# echo 1 > /sys/class/gpio/gpio340/value
# echo 0 > /sys/class/gpio/gpio340/value
# echo 1 > /sys/class/gpio/gpio340/value
# echo 0 > /sys/class/gpio/gpio340/value
# echo 1 > /sys/class/gpio/gpio340/value
...</code></pre>
<p>Similarly, we can work out that pin 26 of the header corresponds to an offset of 54 within the block of 102 <span class="caps">GPIO </span>lines. Given the base of 410, this makes it <span class="caps">GPIO</span> 464.</p>
<p>Thus to interpret the pinout on the eLinux site:</p>
<ul>
<li><span class="caps">GPIO</span> 8x have an implied base of 82;</li>
<li><span class="caps">GPIO</span> 2xx have an implied base of 154.</li>
</ul>
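<p>Putting the numbering together, a tiny helper (the bases are those from my board under kernel 3.18; check /sys/class/gpio/gpiochip*/base on yours):</p>

```python
# Sysfs GPIO number = base of the pin's bank + offset within the bank.
BASES = {"S5": 338, "SC": 410}    # S5 block (44 pins), South Core (102 pins)

def gpio_number(block, offset):
    return BASES[block] + offset

print(gpio_number("S5", 2))       # header pin 25 -> GPIO 340
print(gpio_number("SC", 54))      # header pin 26 -> GPIO 464
```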
<h3>The gory details</h3>
<p>All the <span class="caps">GPIO </span>numbers jumped by 256 when the kernel reached version 3.18, which caused me some confusion. The notes below might be helpful if it happens again.</p>
<ul>
<li>On the <a href="http://www.elinux.org/images/f/fd/MinnowMax_RevA1_sch.pdf">schematic</a> trace the hardware pin to the <span class="caps">CPU, </span>and note the ball number. For example pin 26 on the <span class="caps">DIL </span>header goes to <span class="caps">ILB</span>_8254_SPKR on ball <span class="caps">BH12.</span></li>
<li>Look up the ball in table 142 of the <span class="caps">E38</span>xx <a href="http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/atom-e3800-family-datasheet.pdf">datasheet</a>. For example, ball <span class="caps">BH12 </span>is associated with GPIO_S0_SC[054].</li>
<li>SC stands for South Core, which is the block of 102 <span class="caps">GPIO </span>lines. On my board these start at <span class="caps">GPIO</span> 410. 54 is the offset, so pin 26 on the <span class="caps">DIL </span>header is <span class="caps">GPIO</span> 464.</li>
</ul>
<p>Let’s try another. Header pin 25 connects to ball <span class="caps">C18 </span>which is GPIO_S5[02]. S5 is the block of 44 <span class="caps">GPIO </span>lines, which starts at <span class="caps">GPIO</span> 338 on my board. Thus, pin 25 is <span class="caps">GPIO</span> 340.</p>
<h2>Adafruit Python <span class="caps">GPIO </span>library</h2>
<p>I have hacked support for sysfs driven <span class="caps">GPIO </span>into the <a href="https://github.com/adafruit/Adafruit_Python_GPIO">Adafruit Python <span class="caps">GPIO</span> Library.</a> This lets you flash <span class="caps">LED</span>s thus:</p>
<pre><code>import Adafruit_GPIO as GPIO
import time
gpio = GPIO.get_platform_gpio()
pin = 338
gpio.setup(pin, GPIO.OUT)
gpio.output(pin, 1)
time.sleep(0.5)
gpio.output(pin, 0)
time.sleep(0.5) </code></pre>
<p>You can get the library from <a href="https://github.com/mjoldfield/Adafruit_Python_GPIO">GitHub</a> but be warned: it’s only a proof of concept and not production quality! </p>3E1D7AB6-8247-11E4-8CFB-F8733AB76E7F2014-12-12T11:03:51:51Z2014-12-30T22:14:17:17ZKernel Munging CookbookMartin Oldfield<p>Some recipes I find useful when mucking around with the Linux kernel. </p><h2>Compiling Debian Modules</h2>
<p>Today, the kernel knows how to build Debian packages:</p>
<pre><code>$ make deb-pkg LOCALVERSION=-XXXX KDEB_PKGVERSION=YYYY</code></pre>
<h2>netconsole</h2>
<p>If you're trying to debug a crashing kernel, or generally manipulate the logs, it's handy to have those logs on a different system. <a href="https://www.kernel.org/doc/Documentation/networking/netconsole.txt">netconsole</a> to the rescue!</p>
<p>On the system you're debugging:</p>
<pre><code>$ modprobe netconsole netconsole=@/,9876@10.0.0.2/</code></pre>
<p>On your stable machine (here a Mac):</p>
<pre><code>$ nc -u -l 9876</code></pre>
<h3>dmesg level</h3>
<p>It's often helpful to crank up the debugging level:</p>
<pre><code>$ sudo dmesg -n debug</code></pre>
<h2>Documentation</h2>
<p>Free Electrons have a fine <a href="http://free-electrons.com/doc/training/linux-kernel/linux-kernel-slides.pdf">presentation</a> on kernel hacking. </p>52BA38DA-87D1-11E4-9F7A-01FB77FE6E4C2014-12-19T22:49:07:07Z2014-12-30T19:47:03:03ZMinnowBoard Max: SPIMartin Oldfield<p>Controlling the <span class="caps">SPI </span>bus on the MinnowBoard Max. </p><p>Besides a collection of <span class="caps">GPIO </span>pins, the MinnowBoard Max also sports a <span class="caps">SPI </span>interface on its <a href="http://www.elinux.org/Minnowboard:MinnowMax#Low_Speed_Expansion_.28Top.29">26-pin <span class="caps">DIL </span>header.</a></p>
<p>The <span class="caps">SPI </span>controller is integrated into the <span class="caps">CPU, </span>and has a maximum clock rate of 15MHz.</p>
<h2>The userspace <span class="caps">API</span></h2>
<p>There is a standard Linux <a href="https://www.kernel.org/doc/Documentation/spi/spidev"><span class="caps">SPI </span>userspace <span class="caps">API</span></a> but the <a href="https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/spi/spi-pxa2xx.c"><span class="caps">PXA2</span>xx <span class="caps">SPI </span>driver for the <span class="caps">E38</span>xx</a> does not enable it directly.</p>
<p>In other words no /dev/spi* devices are created:</p>
<pre><code>$ ls /dev/spi*
ls: cannot access /dev/spi*: No such file or directory</code></pre>
<p>The simplest solution I found involved two steps. Firstly compile a suitable kernel: you can grab <a href="https://github.com/mjoldfield/seabass/blob/master/config">the config I used</a> from GitHub.</p>
<p>Secondly, I compiled <a href="https://github.com/MinnowBoard/minnow-max-extras/tree/master/modules/low-speed-spidev">low-speed-spidev</a> module outside the kernel tree, then loaded it.</p>
<pre><code>$ git clone https://github.com/MinnowBoard/minnow-max-extras.git
Cloning into 'minnow-max-extras'...
remote: Counting objects: 40, done.
remote: Total 40 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (40/40), done.
Checking connectivity... done.
$ cd minnow-max-extras/modules/low-speed-spidev/
$ make KERNEL_SRC=~/local-kernel/linux-3.18
...
$ sudo insmod low-speed-spidev.ko </code></pre>
<p>You can check that the driver loaded successfully with dmesg:</p>
<pre><code>$ dmesg
...
[79497.388732] low-speed-spidev: module init
[79497.388757] low-speed-spidev: master=ffff8800797a1000
[79497.390751] low-speed-spidev: dev=ffff88003774f400
[79497.390768] low-speed-spidev: spidev registered </code></pre>
<p>Or just look in /dev:</p>
<pre><code>$ ls -l /dev/spi*
crw------- 1 root root 153, 0 Dec 19 22:45 /dev/spidev0.0 </code></pre>
<h3>Other approaches</h3>
<p><em>I’m not sure about the information in this section: caveat lector!</em></p>
<p>On <span class="caps">ARM </span>based systems, I think one would use <a href="http://www.devicetree.org/Main_Page">Device Tree</a> to connect the <span class="caps">PXA2</span>xx low-level driver to the spidev driver. This avoids any need to compile code.</p>
<p>The MinnowBoard Max has an Intel <span class="caps">CPU </span>though, so rather than Device Tree, the relevant technology is <a href="https://www.kernel.org/doc/Documentation/acpi/namespace.txt"><span class="caps">ACPI</span></a>.</p>
<p>I didn’t explore either of these, because in practice I didn’t want a userspace <span class="caps">API </span>at all. Instead I compiled a kernel driver for the <span class="caps">SPI LCD </span>display I wanted to drive, and then exported a framebuffer <span class="caps">API </span>to userspace.</p>
<h2>A bug</h2>
<p>The <span class="caps">PXA2</span>xx driver in version 3.18 of the linux kernel has a <a href="http://minnowboard.57273.x6.nabble.com/Crash-with-SPI-and-general-load-td786.html">bug which makes it crash.</a></p>
<p>Mika Westerberg diagnosed the problem, and provided a patch:</p>
<pre><code>diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
index d8a105f76837..3720d84f266b 100644
--- a/drivers/spi/spi-pxa2xx.c
+++ b/drivers/spi/spi-pxa2xx.c
@@ -402,8 +402,8 @@ static void giveback(struct driver_data *drv_data)
cs_deassert(drv_data);
}
- spi_finalize_current_message(drv_data->master);
drv_data->cur_chip = NULL;
+ spi_finalize_current_message(drv_data->master);
}
static void reset_sccr1(struct driver_data *drv_data)</code></pre>
<p>Thanks Mika!</p>
<h2>Adafruit Python <span class="caps">GPIO </span>library</h2>
<p>I started adding support for the spidev <span class="caps">SPI API </span>to the <a href="https://github.com/adafruit/Adafruit_Python_GPIO">Adafruit Python <span class="caps">GPIO</span> Library.</a></p>
<p>You can get the code from <a href="https://github.com/mjoldfield/Adafruit_Python_GPIO">GitHub</a> but be warned: it’s only a proof of concept and not production quality. I’m not even sure that the <span class="caps">API </span>is consistent with the other platforms the library supports. </p>36A532AA-87E8-11E4-A45C-290778FE6E4C2014-12-20T01:33:28:28Z2014-12-30T19:32:13:13ZMinnowBoard Max: LEDsMartin Oldfield<p>Using the Linux <span class="caps">LED </span>subsystem on the MinnowBoard Max. </p><h2>The Linux <span class="caps">LED </span>subsystem</h2>
<p>Although it is perfectly possible to drive a <span class="caps">LED </span>from a standard <span class="caps">GPIO </span>port, Linux provides a <a href="https://www.kernel.org/doc/Documentation/leds/leds-class.txt">dedicated <span class="caps">LED API</span></a> too.</p>
<p>Using the <span class="caps">LED API </span>does make it clearer which pins are associated with <span class="caps">LED</span>s, but that’s not the main benefit. Rather, it allows the kernel to manage the output in more complicated ways.</p>
<p>For example, userspace can ask the kernel to turn the <span class="caps">LED </span>on for a fixed period of time without having to wait around to turn the <span class="caps">LED </span>off again afterwards. As an aside, we can also be confident that the <span class="caps">LED </span>won’t stay on for ever, even if the userspace program crashes.</p>
<p>Moving such logic inside the kernel, also makes it easier to drive <span class="caps">LED</span>s from internal kernel information. For example, we might want to indicate disk or network activity.</p>
<p>Kernel modules which implement such behaviour are called <span class="caps">LED</span> Triggers, and you can find them in the <a href="https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/drivers/leds/trigger">drivers/leds/trigger directory</a> of the kernel.</p>
<h2>An <span class="caps">LED </span>driver for the MinnowBoard Max</h2>
<p>It’s probably obvious that we need to have some way to tell the kernel to associate a particular <span class="caps">GPIO </span>pin with the <span class="caps">LED </span>before it can control it. In fact the situation is more general: although we’re using a <span class="caps">GPIO </span>pin to control the <span class="caps">LED, </span>other people might have more complicated hardware.</p>
<p>So our real task will be to identify a driver for ‘LEDs attached to a <span class="caps">GPIO </span>pin’, then associate a Linux <span class="caps">LED </span>device with that.</p>
<p>The module is easy: it’s helpfully called <a href="https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/drivers/leds/leds-gpio.c">leds-gpio.c</a>. You will need a kernel which includes this module: you can grab <a href="https://github.com/mjoldfield/seabass/blob/master/config">the config I used</a> from GitHub.</p>
<p>I found the simplest way to make the <span class="caps">LED </span>device was with a simple kernel module of my own. You can get this from <a href="https://github.com/mjoldfield/seabass">github.</a></p>
<p>Just as with the <a href="./mbmx-spi.html"><span class="caps">SPI</span></a> device, I suspect you could make the connections by suitably tweaking <a href="http://www.devicetree.org/Main_Page">Device Tree</a> or the <a href="https://www.kernel.org/doc/Documentation/acpi/namespace.txt"><span class="caps">ACPI</span></a> tables.</p>
<p>Much of the module code is just boilerplate. The key definition is small enough to contemplate though:</p>
<pre><code>static struct gpio_led seabass_led[] = {
{
.name = "seabass::user",
.default_trigger = "heartbeat",
.gpio = 474,
.active_low = 0,
},
};</code></pre>
<p>The name should match the devicename:colour:function pattern, but is otherwise arbitrary. More importantly:</p>
<ul>
<li>We are using <span class="caps">GPIO</span> 474, which is pin 20 of the MinnowBoard Max’s <span class="caps">DIL </span>header.</li>
<li>The pin is active high, so the <span class="caps">LED </span>should be connected to ground via a suitable resistor.</li>
<li>The default trigger will make the <span class="caps">LED </span>beat with a familiar thump-thump-pause... pattern. As the machine load increases the <span class="caps">LED </span>will beat faster.</li>
</ul>
<p>Note: <span class="caps">GPIO</span> 474 assumes that the 102-entry <span class="caps">GPIO </span>block starts at <span class="caps">GPIO</span> 410.</p>
<h2>Instructions</h2>
<h3>Hardware</h3>
<p>Set up the <span class="caps">LED </span>on pin 20.</p>
<h3>Build the module</h3>
<pre><code>$ git clone https://github.com/mjoldfield/seabass.git
Cloning into 'seabass'...
remote: Counting objects: 21, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 21 (delta 4), reused 14 (delta 1)
Unpacking objects: 100% (21/21), done.
Checking connectivity... done.
$ cd seabass/seabass-leds/
$ make
...</code></pre>
<h3>Load the modules </h3>
<pre><code>$ sudo modprobe ledtrig_heartbeat leds_gpio
$ sudo insmod seabass-leds.ko </code></pre>
<p>The <span class="caps">LED </span>should now start beating.</p>
<h2>The sysfs <span class="caps">API</span></h2>
<p>You can configure the <span class="caps">LED </span>system via a sysfs <span class="caps">API.</span></p>
<p>Query the current trigger:</p>
<pre><code># cat /sys/class/leds/seabass::user/trigger
none mmc0 mmc1 mmc2 gpio [heartbeat]</code></pre>
<p>Disable the heartbeat:</p>
<pre><code># echo none > /sys/class/leds/seabass::user/trigger
# cat /sys/class/leds/seabass::user/trigger
[none] mmc0 mmc1 mmc2 gpio heartbeat</code></pre>
<p>Load the single shot trigger, and select it:</p>
<pre><code># modprobe ledtrig_oneshot
# echo oneshot > /sys/class/leds/seabass::user/trigger
# cat /sys/class/leds/seabass::user/trigger
none mmc0 mmc1 mmc2 gpio heartbeat [oneshot]</code></pre>
<p>Fire!</p>
<pre><code># echo 1 > /sys/class/leds/seabass::user/shot</code></pre>
<p>Make the pulse last longer:</p>
<pre><code># cat /sys/class/leds/seabass::user/delay_on
100
# echo 1000 > /sys/class/leds/seabass::user/delay_on
# echo 1 > /sys/class/leds/seabass::user/shot</code></pre>
<p>Restore the heartbeat:</p>
<pre><code># echo heartbeat > /sys/class/leds/seabass::user/trigger</code></pre>
<p>If you now stress the machine, you will see the heart rate rise. </p>6B0944F4-87E4-11E4-A200-540478FE6E4C2014-12-20T01:05:27:27Z2014-12-30T17:14:14:14ZMinnowBoard Max: PWMMartin Oldfield<p>Controlling the <span class="caps">PWM </span>drivers on the MinnowBoard Max. </p><p>Besides a collection of <span class="caps">GPIO </span>pins, the MinnowBoard Max also sports two <span class="caps">PWM </span>drivers on its <a href="http://www.elinux.org/Minnowboard:MinnowMax#Low_Speed_Expansion_.28Top.29">26-pin <span class="caps">DIL </span>header.</a></p>
<h2>The userspace <span class="caps">API</span></h2>
<p>The <span class="caps">PWM </span>outputs can be controlled by a standard <a href="https://www.kernel.org/doc/Documentation/pwm.txt">userspace <span class="caps">API </span>in sysfs.</a></p>
<p>This assumes that the kernel has been compiled with the relevant modules: I forget the details, but you can grab <a href="https://github.com/mjoldfield/seabass/blob/master/config">the config I used</a> from GitHub.</p>
<p>On the MinnowBoard Max the two <span class="caps">PWM </span>drivers appear as separate chips:</p>
<pre><code># ls /sys/class/pwm/
pwmchip0 pwmchip1
# ls /sys/class/pwm/pwmchip0/
device export npwm power pwm0 subsystem uevent unexport
# ls /sys/class/pwm/pwmchip1/
device export npwm power subsystem uevent unexport </code></pre>
<p>In the example above, channel 0 of pwmchip0 has been exported for use.</p>
<p>The only oddity I encountered was that the duty_cycle parameter is the active <em>time</em> of the signal (measured in ns), and not the active <em>fraction.</em></p>
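<p>In other words, to get a given frequency and fractional duty cycle you first convert both to nanoseconds; a small helper of my own:</p>

```python
# sysfs PWM wants period and duty_cycle in ns, not a fraction.
def pwm_ns(freq_hz, duty_fraction):
    period_ns = round(1e9 / freq_hz)
    return period_ns, round(period_ns * duty_fraction)

period, duty = pwm_ns(1000, 0.25)  # 1 kHz at 25%
print(period, duty)                # 1000000 250000
```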
<h2>Adafruit Python <span class="caps">GPIO </span>library</h2>
<p>I hacked support for sysfs-driven <span class="caps">PWM </span>into the <a href="https://github.com/adafruit/Adafruit_Python_GPIO">Adafruit Python <span class="caps">GPIO</span> Library.</a></p>
<p>You can get the library from <a href="https://github.com/mjoldfield/Adafruit_Python_GPIO">GitHub</a> but be warned: it’s only a proof of concept and not production quality!</p>
<p>Once installed you can fade <span class="caps">LED</span>s thus:</p>
<pre><code>import Adafruit_GPIO.PWM as PWM
import time
pwm = PWM.get_platform_pwm()
pin = 0
pwm.start(pin, 50.0, 10000.0)
while True:
    for x in range(0,2):
        for f in range(0,100):
            if x == 0:
                g = f
            else:
                g = 99 - f
            pwm.set_duty_cycle(pin, g)
            time.sleep(0.01)
</code></pre>
<h3>Wiring</h3>
<p><span class="caps">PWM0 </span>is on pin 22 of the <span class="caps">DIL </span>header, so connect an <span class="caps">LED </span>(plus series resistor) between there and ground on pin 2. </p>92C9DE10-DC52-11DE-8AD6-EBABBFEA12782009-11-28T19:12:05:05Z2014-12-14T16:01:45:45ZSons and DaughtersMartin Oldfield<p>Exploring why various forms of family management don’t lead to more boys. </p><h2>Abstract</h2>
<p>One of the Internet’s recurring memes is a world where families have children until they have a son, then stop. There seems to be a fairly common misapprehension that this will lead to an asymmetry of the sexes in the next generation.</p>
<p>This short note explores the question: we quickly see that symmetry is preserved, then spend some time exploring what’s going on in more detail. Finally we introduce a new element to the model which, on its own, favours neither boys nor girls, but allows social engineering to skew the sex distribution.</p>
<h2>Introduction</h2>
<p>As is usually the case with this sort of probability puzzle, the main difficulty is finding the right way of looking at the problem. Once you’ve found that, the answer’s obvious.</p>
<p>In this problem we’re asked to imagine families who have children until they have a son, then stop. As posed it’s natural to think about what happens on a family-by-family basis, but that’s a mistake!</p>
<p>Rather, enlightenment comes when you think about each birth individually: although families might be indulging in social engineering, nothing changes the statistics of the birth itself. Every baby is equally likely to be a boy or a girl—we’ll ignore the observed fact that actually about 51% of babies are male.</p>
<p>The key insight is that the next generation is just the total of all the births: if half of the births are male, then the next generation will be half-male too. Similarly even at the family level, we’d expect to see the same number of boys and girls in each family.</p>
<p>Moreover, we know that every family has exactly one son in it and so <em>on average</em> it must have one daughter too. However, there’s no symmetry which relates the <em>distributions</em> of sons and daughters, so the number of daughters will vary.</p>
<h3>Simulation</h3>
<p>Given how easy it is to mislead oneself with this sort of problem, I think it’s usually sensible to simulate it. Often it’s easy to mechanically generate a single sample of what’s happening, and the computer is good at doing this many times then averaging the results. Happily modern computers are fast enough to simulate simple situations in a fraction of a second, even when the program is written in a slow language like Perl.</p>
<p>I think there are three advantages to doing the simulation:</p>
<ul>
<li>By forcing us to explicitly model what’s going on, it exposes gaps or contradictions in our assumptions.</li>
<li>By mechanically calculating the consequences for one particular random choice at a time, it makes it less likely we’ll make a mistake when thinking about correlations.</li>
<li>In general writing the program is a different sort of thinking to doing the mathematical analysis, so the chances of making the same error are quite small.</li>
</ul>
<p>Of course, there are potential problems too. One is that it may not be clear how many samples one needs to take. A simple check is to run the simulation three times and check that the outcomes are roughly the same: if they’re not, something’s wrong. However the converse sadly doesn’t apply: if we were simulating the lottery (say, a million-to-one shot) with three 100,000-sample runs there’s a reasonable chance we’d conclude that nobody ever won and you shouldn’t play. Perhaps that error would be a good thing!</p>
<p>Here, for each family we simply simulate the family by picking sexes at random—each equally likely—until we produce a son. Then, we forget about that family and do another one. All we have to do is keep track of the number of sons and daughters, and perhaps how often we see a particular shape of family.</p>
<p>Explicitly, the heart of the program looks something like this:</p>
<pre><code>n_boys  = 0;
n_girls = 0;

do {
    if (rand() < 0.5) { n_boys++;  }
    else              { n_girls++; }
} while (n_boys == 0);</code></pre>
<p>I hope that by making it quite explicit that every birth has an equal chance of being a son or a daughter, it’s clear that the expected numbers of sons and daughters in the next generation are the same. In fact, once you’ve written this bit of code, it’s debatable whether you actually need to run it!</p>
<p>On the other hand, it’s but a single command to run it, so I simulated 100,000 random families (which took all of about 0.3s on my laptop). Here are the results after running the program three times. I think it’s clear that there’s no asymmetry between the sexes here!</p>
<table class="spaced" cellspacing="0"><tr><th> </th><th>Run 1</th><th>Run 2</th><th>Run 3</th></tr><tr><td>Mean no. of sons</td><td>1.000</td><td>1.000</td><td>1.000</td></tr><tr><td>Mean no. of daughters</td><td>0.994</td><td>1.004</td><td>1.003</td></tr><tr><td>Mean no. of children</td><td>1.994</td><td>2.004</td><td>2.003</td></tr><tr><td>Fraction of sons</td><td>50.2%</td><td>49.9%</td><td>49.9%</td></tr></table>
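<p>If you’d like to repeat the experiment without the original Perl, here is a minimal sketch in Python (the function name and structure are mine, not the author’s):</p>

```python
import random

def family(p_son=0.5, rng=random.random):
    """Simulate one family which has children until its first son.
    Returns (sons, daughters); sons is always 1 by construction."""
    daughters = 0
    while rng() >= p_son:   # each birth is an independent 50:50 event
        daughters += 1
    return 1, daughters

random.seed(1)
n = 100_000
results = [family() for _ in range(n)]
sons = sum(s for s, _ in results)
daughters = sum(d for _, d in results)
# Both averages come out very close to 1, as in the table above.
```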
<h3>Family imbalances</h3>
<p>As we said above, although the total number of sons in the next generation will be about the same as the total number of daughters, this doesn’t mean that the distribution of boys will match the distribution of girls. For example, about half the families will have no daughters (because the first child was a son); on the other hand <em>all</em> the families will have exactly one son.</p>
<p><img src="sd-fig1.svg" alt="" class="img_noborder_small" /></p>
<p>The figure above shows how families grow. We assume that every family starts out with no children and represent this by the \([0,0]\) node at the top of the diagram.</p>
<p>Every time a new child is born, we move down a row in the diagram. If the child’s male then we move down and left along the S arrow; if the child’s female we move down and right along the D arrow.</p>
<p>Each node is labelled with a pair of numbers \([s,d]\) which shows the numbers of sons and daughters. Given our rule that families stop growing when the first son is born, none of the \([1,d]\) nodes have arrows leading from them: these nodes correspond to final states of the family. These are shown as rectangular boxes. Conversely, all the \([0,d]\) states have two arrows leading from them which correspond to the birth of a new child.</p>
<p>Returning to our one-son-per-family model, we can immediately see several things:</p>
<ul>
<li>All families have one son.</li>
<li>½ of the families have no daughters.</li>
<li>¼ of the families have one daughter.</li>
<li>¼ of the families have more than one daughter.</li>
</ul>
<p>It is easy to check these conclusions by simulating 100,000 families:</p>
<table class="centered" cellspacing="0"><tr style="background: #ccc"><th>Number of daughters</th><th>Run 1</th><th>Run 2</th><th>Run 3</th></tr><tr><th>0</th><td align="right">49.9%</td><td align="right">49.8%</td><td align="right">50.1%</td></tr><tr><th>1</th><td align="right">25.1%</td><td align="right">25.1%</td><td align="right">25.0%</td></tr><tr><th>2</th><td align="right">12.5%</td><td align="right">12.4%</td><td align="right">12.5%</td></tr><tr><th>3</th><td align="right">6.3%</td><td align="right">6.3%</td><td align="right">6.2%</td></tr><tr><th>4</th><td align="right">3.1%</td><td align="right">3.1%</td><td align="right">3.1%</td></tr><tr><th>5</th><td align="right">1.6%</td><td align="right">1.6%</td><td align="right">1.6%</td></tr><tr><th>6 or more</th><td align="right">1.5%</td><td align="right">1.7%</td><td align="right">1.5%</td></tr></table>
<p>More generally we can use the diagram to help us calculate the chances of any particular pattern of sons and daughters. To do this, note that every time we make a choice we choose the S or D arrow with equal chance. So, to find out the chance of getting to a particular state we just count the number of arrows we need to follow to get from the start to that node. Each step has a 50% chance of being taken, so we just raise ½ to the relevant power to get the probability.</p>
<p>For example, we have to follow three arrows to get from [0,0] to [1,2], so the chance of getting one son and two daughters is ½ × ½ × ½ i.e. ⅛.</p>
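<p>The arrow-counting argument is easy to mechanise; here is a short check of my own in Python, using exact fractions:</p>

```python
from fractions import Fraction

def p_family(daughters):
    """Chance of ending with exactly `daughters` daughters (and one son):
    follow `daughters` D-arrows then one S-arrow, each with chance 1/2."""
    return Fraction(1, 2) ** (daughters + 1)

# The worked example: three arrows from [0,0] to [1,2], so p = 1/8.
print(p_family(2))

# Summing d * p over (nearly) all final states recovers one daughter.
expected = sum(d * p_family(d) for d in range(200))
```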
<p>Although we’ve already reasoned that the expected number of daughters in each family must be one, we can verify this in three different ways. These verifications aren’t really meant to convince you that the answer really is one; rather they illustrate how the apparently asymmetric situation cunningly manages to be consistent with the symmetric result.</p>
<h4>By recursion</h4>
<p>Suppose \(d\) is the expected number of daughters. After the first child is born, we know there are two possibilities:</p>
<ul>
<li>A son was born and so we stop without any daughters.</li>
<li>A daughter was born. This basically leaves us where we started but with one daughter already in the family. To see this, cover up the top node in the picture and observe that the picture hasn’t changed much (except that all the \(d\) numbers have gone up by one).</li>
</ul>
<p>Hence,</p>
\[
\begin{align} d &= ½ \times 0 + ½ \times (d + 1), \\
d &= 1. \end{align}
\]
<h4>By summation over final states</h4>
<p>Consider all the nodes where a son has just been born: these are the rectangular nodes which correspond to the final state of the family.</p>
<p>If we sum over all these states then,</p>
\[
\begin{align} d &= ½ \times 0 + ¼ \times 1 + ⅛ \times 2 + \ldots,\\
d &= ½ \sum_{i=0}^{\infty}i\,\left(½\right)^i,\\
d &= 1. \end{align}
\]
<p>The sum is a standard one you can find in tables or ask Mathematica about. However, ignoring issues like convergence, there’s a cute trick to sum it:</p>
\[
\begin{align} \sum_{i=0}^{\infty} i \theta^i &= \sum_{i=0}^{\infty} \theta \frac{d}{d\theta} \theta^i, \\
&= \theta \frac{d}{d\theta} \sum_{i=0}^{\infty} \theta^i, \\
&= \theta \frac{d}{d\theta} \left(\frac{1}{1 - \theta}\right), \\
&= \frac{\theta}{\left(1 - \theta\right)^2}. \end{align}
\]
<p>This is just like the old trick of ‘differentiating-under-the-integral-sign’ which Feynman talks about in ‘Surely You’re Joking, Mr Feynman!’.</p>
<h4>By summation over births</h4>
<p>If we consider each row of the tree, then we can ask how much it contributes to the expected number of daughters.</p>
<p>The probability of getting to row \(i\) is just \( (½)^i \), and half the time when we move down a row we’ll welcome another daughter to the family. So, moving from row 0 to row 1 adds half a daughter to the expected number, from row 1 to 2 adds a quarter, and so on:</p>
\[
\begin{align} d &= ½ + ¼ + ⅛ + \ldots,\\
d &= 1. \end{align}
\]
<p>One could calculate the expected number of sons in exactly the same way, and get the same result.</p>
<p>Incidentally for a quick way to do this sum, just write the number in binary: \(0.11111\dot{1}\).</p>
<h3>Finite families</h3>
<p>The diagram above also helps us understand what happens if we limit the total number of children. The tree no longer goes on forever, but stops after a fixed number of rows:</p>
<p><img src="sd-fig2.svg" alt="" class="img_noborder_small" /></p>
<p>I hope it’s clear that there’s still no asymmetry in the expected number of sons and daughters but if you’re not sure, let’s enumerate the three possibilities.</p>
<table class="centered" cellspacing="0"><tr style="background: #ccc"><th>Final State</th><th>Probability</th><th>Sons</th><th>Daughters</th></tr><tr><th>[1,0]</th><td>½</td><td>1</td><td>0</td></tr><tr><th>[1,1]</th><td>¼</td><td>1</td><td>1</td></tr><tr><th>[0,2]</th><td>¼</td><td>0</td><td>2</td></tr><tr><th style="border-top: 2px solid black">Expected value</th><th style="border-top: 2px solid black"> </th><th style="border-top: 2px solid black">¾</th><th style="border-top: 2px solid black">¾</th></tr></table>
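<p>The table is small enough to check mechanically; a sketch in Python with exact arithmetic (my own, not the article’s original code):</p>

```python
from fractions import Fraction

half = Fraction(1, 2)
# Final states for families capped at two children: {(sons, daughters): p}.
states = {(1, 0): half, (1, 1): half ** 2, (0, 2): half ** 2}

exp_sons      = sum(p * s for (s, d), p in states.items())
exp_daughters = sum(p * d for (s, d), p in states.items())
print(exp_sons, exp_daughters)
```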
<p>If we wanted to extend our simulation program to handle this case, it would be easy:</p>
<pre><code>n_boys  = 0;
n_girls = 0;

do {
    if (rand() < 0.5) { n_boys++;  }
    else              { n_girls++; }
} while (n_boys == 0 && (n_boys + n_girls < 2));</code></pre>
<p>Again it’s easy to see that the central assumption—on average we produce as many sons as daughters—hasn’t changed.</p>
<h2>Breaking the symmetry</h2>
<p>Having shown that the basic model won’t produce a sex-asymmetry, it’s natural to ask what will. Obviously we could just change the proportion of sons born, but that’s rather crude and not really in the spirit of the original model. It would be nice if we could find a model which by itself produces equal numbers of sons and daughters, but which generates an asymmetry when coupled with the ‘one-son-per-family’ rule.</p>
<p>Happily we can do this! Simply suppose that some families are predisposed to have daughters and others sons—I should say that this idea is wholly hypothetical and any agreement with real biology is entirely accidental.</p>
<p>In particular assume that half the families have a probability \(½ + \Delta\) of having sons, while the other half have probability \(½ - \Delta\). In other words \(\Delta\) just sets the scale of the effect: \(\Delta = 0\) makes the effect vanish, \(\Delta = ½\) makes families either have <em>all</em> sons, or <em>all</em> daughters. However, the two biases cancel each other out, and we’ll be back to the 50:50 split in the next generation.</p>
<p>It’s important to see that the effects will only balance if size of the boy-biased families is the same as the size of the girl-biased families. This requirement is broken when social engineering rears its ugly head. Since families stop growing as soon as they have a son, families which are biased in favour of having sons will typically be smaller than those biased in favour of daughters. So the next generation will have more girl-biased babies in it, which implies that overall the fraction of sons will fall below 50%.</p>
<h3>Simulation</h3>
<p>It’s straightforward to extend our simulation to handle this model—and one of the advantages of doing the simulation is that this change is so small that we can be fairly confident of making it correctly.</p>
<pre><code>x = (rand() < 0.5) ? 0.5 + delta : 0.5 - delta;

n_boys  = 0;
n_girls = 0;

do {
    if (rand() < x) { n_boys++;  }
    else            { n_girls++; }
} while (n_boys == 0);</code></pre>
<p>The following table shows some typical results from these simulations. We consider three different cases which correspond to different values of \(\Delta\). Happily when \(\Delta = 0\) we recover the results from the last section, and as \(\Delta\) grows the fraction of males in the next generation falls.</p>
<table class="centered" cellspacing="0"><tr style="background: #ccc"><th>Δ</th><th>0</th><th>⅛</th><th>¼</th></tr><tr><th>{ \(\textrm{p}_S\), \(\textrm{p}_D\) }</th><td>{ ½, ½ }</td><td>{⅜, ⅝}</td><td>{¼, ¾}</td></tr><tr><th>Run 1</th><td>0.500</td><td>0.469</td><td>0.377</td></tr><tr><th>Run 2</th><td>0.497</td><td>0.469</td><td>0.376</td></tr><tr><th>Run 3</th><td>0.498</td><td>0.470</td><td>0.375</td></tr><tr><th>Average</th><td>0.499</td><td>0.469</td><td>0.376</td></tr></table>
<h3>The analytic result</h3>
<p>Having worked out what the ‘right’ answer is, it would be nice to find an analytic result.</p>
<p>For a moment, suppose the bias were fixed and calculate how many daughters we’d expect. Formally suppose the probability of getting a daughter is \(\textrm{p}_D\). Then the chance of getting exactly \(i\) daughters (and one son) is just,</p>
\[
\textrm{p}(d = i, s = 1) = \textrm{p}_D^i \, (1-\textrm{p}_D),
\]
<p>and so the expected number of daughters \(\mathbb{E}(d\,|\,\textrm{p}_D)\) is just the expected value of \(d\) <em>given</em> a particular value for \(\textrm{p}_D\):</p>
\[
\begin{align} \mathbb{E}(d\,|\,\textrm{p}_D) &= \sum_{i = 0}^\infty i \times \textrm{p}(d = i, s = 1),\\
&= \sum_{i = 0}^\infty i \, \textrm{p}_D^i(1-\textrm{p}_D),\\
&= (1 - \textrm{p}_D) \frac{\textrm{p}_D}{(1-\textrm{p}_D)^2},\\
&= \frac{\textrm{p}_D}{1-\textrm{p}_D}. \end{align}
\]
<p>Having solved for a particular bias, now we have to average over the two possible biases:</p>
\[
\begin{align} \mathbb{E}(d) &= ½ \left( \mathbb{E}(d\,|\,\textrm{p}_D = (½ + \Delta)) + \mathbb{E}(d\,|\,\textrm{p}_D = (½ - \Delta)) \right),\\
&= ½\left( \frac{½ + \Delta}{½ - \Delta} + \frac{½ - \Delta}{½ + \Delta}\right),\\
&= \frac{1 + 4 \Delta^2}{1 - 4 \Delta^2}. \end{align}
\]
<p>Or, in terms of the fraction of boys \(\beta\),</p>
\[
\beta = ½\left(1 - 4\Delta^2\right).
\]
<p>It’s easy to verify that these results are consistent with our simulations:</p>
<table class="centered" cellspacing="0"><tr style="background: #ccc"><th>Δ</th><th>Simulation</th><th>Analytic Result</th></tr><tr><th>0</th><td>0.499</td><td>1/2 = 0.5000</td></tr><tr><th>⅛</th><td>0.469</td><td>15/32 ≅ 0.4688</td></tr><tr><th>¼</th><td>0.376</td><td>3/8 = 0.375</td></tr></table>
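<p>Both columns are easy to regenerate. Here is a rough sketch in Python of the whole comparison (the names and structure are mine, not the original program’s):</p>

```python
import random

def beta(delta):
    """Analytic fraction of boys: (1 - 4*delta**2) / 2."""
    return (1 - 4 * delta ** 2) / 2

def simulate(delta, families=100_000, seed=42):
    """Monte-Carlo estimate of the fraction of boys."""
    rng = random.Random(seed)
    boys = girls = 0
    for _ in range(families):
        # Half the families are boy-biased, half girl-biased.
        p_son = 0.5 + delta if rng.random() < 0.5 else 0.5 - delta
        while True:                 # stop at the first son
            if rng.random() < p_son:
                boys += 1
                break
            girls += 1
    return boys / (boys + girls)

# The simulated and analytic fractions agree closely.
for d in (0.0, 0.125, 0.25):
    assert abs(simulate(d) - beta(d)) < 0.01
```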
<h3>Other effects</h3>
<p>We claimed before that without the one-son-per-family rule, there wouldn’t be any overall effect of this model. That’s true if you look at the fraction of sons in the next generation, but there is an effect on the distribution of sons and daughters in a particular family.</p>
<p>Let’s calculate the probabilities of different families, given a particular \(\Delta\). In every case, we’ll need to average over two cases:</p>
\[
\begin{align} \textrm{p}_S &= \left\{ (½ - \Delta), (½ + \Delta) \right\}, \\
\textrm{p}_D &= \left\{ (½ + \Delta), (½ - \Delta) \right\} \end{align}
\]
<p>Consider first families with an only child: there are only two possibilities, a son or a daughter. Quantitatively:</p>
\[
\begin{align} \textrm{p}_S &= ½\left( (½ - \Delta) + (½ + \Delta)\right), \\
&= ½, \\
\textrm{p}_D &= ½\left( (½ + \Delta) + (½ - \Delta)\right), \\
&= ½. \end{align}
\]
<p>As expected perhaps, there’s nothing to see there. However, now consider two children. This time we have four cases:</p>
\[
\begin{align} \textrm{p}_{SS} &= ½\left( (½ - \Delta)^2 + (½ + \Delta)^2\right), \\
&= ¼ + \Delta^2, \\
\textrm{p}_{SD} &= ½\left( (½ - \Delta)(½ + \Delta) + (½ + \Delta)(½ - \Delta)\right), \\
&= ¼ - \Delta^2,\\
\textrm{p}_{DS} &= ½\left( (½ + \Delta)(½ - \Delta) + (½ - \Delta)(½ + \Delta)\right), \\
&= ¼ - \Delta^2,\\
\textrm{p}_{DD} &= ½\left( (½ + \Delta)^2 + (½ - \Delta)^2\right), \\
&= ¼ + \Delta^2. \end{align}
\]
<p>Now there is an effect! Not in the difference between sons and daughters though: observe that swapping \(S\) and \(D\) in the equations above doesn’t change the probability. Rather in this model as \(\Delta\) increases single-sex families become more probable—this really isn’t a surprise.</p>
<p>Although we won’t do it here, if you considered bigger families then the effect would become more pronounced.</p>
<p>In principle these results provide a signature which you could test empirically. However, to match these results against real-world data you’d need to model the non-uniform distribution of sexes, and think about twins and other multiple-births. For example, about one-in-a-thousand deliveries are identical twins, and they’re almost always of the same sex.</p>
<p>One would either have to control for that, or perhaps regard twins as a special case of the general phenomenon this model captures. It’s not quite the same though, because it’s hard to have only one child when twins are born!</p>
<h2>Conclusions</h2>
<p>I hope it’s clear that choosing to stop having children as soon as you’ve had a son won’t affect the fraction of men in the next generation. Indeed I hope it’s clear that no choice which only affects how many children people have will change the sex ratio assuming that we can treat each birth as an independent random event.</p>
<p>However, logically that assumption might not be true. We considered a simple way in which nature might be different, and saw that it couples the choice of when to stop having children to the sex of the children produced.</p>
<p>Although there’s nothing very deep in any of this, I still think it’s a nice problem.</p>
<p>Finally, you can download the code and source for the figures from <a href="https://github.com/mjoldfield/Sons-and-Daughters">GitHub</a>.</p><h1>Monads in Haskell: Algebra</h1><p>Martin Oldfield, 2013-12-29</p><p>Brief notes on the algebraic side of monads in Haskell.</p><h2>Introduction</h2>
<p>Some very brief notes summarizing the abstract side of Haskell monads. It’s my crib sheet, written partly to straighten matters in my own mind and partly for future reference.</p>
<p>Most of the information here comes from the usual places, notably the <a href="http://www.haskell.org/haskellwiki/Typeclassopedia">Typeclassopedia.</a> I’m also indebted to Dominic Prior for many helpful discussions. Dominic is collecting <a href="https://docs.google.com/document/d/1DvbcQTibeUEOVmoLO14vvRa27kf6y29sObUmQpyFn9g/pub">useful and interesting monad examples</a> on Google Docs.</p>
<h2>Basic definitions</h2>
<p>There are (at least) four sensible ways to define monads, but they’re all equivalent: you get the same monad in every case.</p>
<ul>
<li><code>>>=</code> is called ‘bind’.</li>
<li><code>return</code> isn’t like <code>return</code> in other languages.</li>
<li>Monads also define <code>>></code> and <code>fail</code>, but we’ll ignore them for now.</li>
</ul>
<h3>The standard Haskell formulation</h3>
<p>The standard Prelude defines monads thus:</p>
<pre><code>class Monad m where
(>>=) :: m a -> (a -> m b) -> m b
return :: a -> m a</code></pre>
<p>with the (unenforced) rules that:</p>
<pre><code>return a >>= f = f a
a >>= return = a
(a >>= f) >>= g = a >>= (\x -> f x >>= g)</code></pre>
<p>Intuitively <code>return x</code> ‘puts’ x into the monad in a ‘natural’ way. Continuing the intuition, <code>x >>= f</code> applies function <code>f</code> to value <code>x</code>.</p>
<p>It’s worth noting the signature for <code>f</code>,</p>
<pre><code>f :: Monad m => a -> m b</code></pre>
<p> which implies that it’s the function’s responsibility to put its result into the monad. Conversely <code>>>=</code> gets the value from the monad, then applies the function to it. From the outside, everything stays inside the monad. ‘Get’ and ‘put’ are deliberately vague because they mean different things in each monad.</p>
<p>We can make a stronger statement: there are no generic monad functions to take things out of the monad. Put another way, all the function types end in <code>m x</code>, never just <code>x</code>.</p>
<p>Our intuitive view of <code>>>=</code> and <code>return</code> make the first two monad laws easy to understand.</p>
<ul>
<li>The first says that the unwrapping bit of <code>>>=</code> exactly cancels out the wrapping done by <code>return</code>, leaving only the function applying bit.</li>
<li>The second says that you get the same cancellation if you do the unwrapping then the wrapping.</li>
</ul>
<p>The third law tells us how to compose two monadic functions. On the left we apply first <code>f</code> then <code>g</code> to <code>a</code>, whilst on the right we apply the lambda expression to <code>a</code>. So, that lambda expression must encode applying first <code>f</code> then <code>g</code>.</p>
<p>Note that the monad laws are exhaustive in the sense that they cover all the non-trivial binary combinations of <code>return</code> and <code>>>=</code>:</p>
<ul>
<li><code>return</code> … <code>>>=</code></li>
<li><code>>>=</code> … <code>return</code></li>
<li><code>>>=</code> … <code>>>=</code></li>
</ul>
<h3>Building on functors</h3>
<p>Instead of building monads from scratch, we can build them from some of Haskell’s simpler abstract type classes: functor and applicative. In future Haskell might well make this the default.</p>
<p>Let’s look at the declarations:</p>
<pre><code>class Functor f where
fmap :: (a -> b) -> f a -> f b
class Functor f => Applicative f where
(<*>) :: f (a -> b) -> f a -> f b
pure :: a -> f a
class Applicative m => Monad m where
(>>=) :: m a -> (a -> m b) -> m b
return :: a -> m a</code></pre>
<p>and the laws which instances of these classes should obey:</p>
<pre><code>fmap id = id
fmap (g . h) = (fmap g) . (fmap h)
pure id <*> v = v
pure f <*> pure x = pure (f x)
u <*> pure y = pure ($ y) <*> u
u <*> (v <*> w) = pure (.) <*> u <*> v <*> w
return a >>= f = f a
a >>= return = a
(a >>= f) >>= g = a >>= (\x -> f x >>= g)</code></pre>
<p>Finally, because every monad is an applicative, and every applicative is a functor, we can write the characteristic functions of the simpler classes in terms of the more complicated ones:</p>
<pre><code>fmap f x = pure f <*> x
fmap f x = x >>= return . f
pure = return
f <*> a = f >>= \x ->
          a >>= \y ->
          return $ x y</code></pre>
<p>Clearly <code>pure</code> and <code>return</code> are very similar animals, but let’s look instead at the function-applying functions:</p>
<pre><code>fmap :: Functor f => (u -> v) -> f u -> f v
(<*>) :: Applicative a => a (u -> v) -> a u -> a v
(=<<) :: Monad m => (u -> m v) -> m u -> m v
(=<<) = flip (>>=)</code></pre>
<p>We can regard all three functions as tweaking a function so that it applies to a wrapped value. However the function being transformed is different in each case:</p>
<ul>
<li><code>fmap</code> takes a pure function: <code>(u -> v)</code>.</li>
<li><code><*></code> takes a function already in the applicative: <code>a (u -> v)</code>.</li>
<li><code>=<<</code> takes a function which puts its result into the monad: <code>u -> m v</code>.</li>
</ul>
<h3>Doing it with join</h3>
<p>Consider the implementation of <code>fmap</code> with <code>>>=</code>:</p>
<pre><code>fmap f x = x >>= return . f</code></pre>
<p>It’s clear that to some extent <code>>>=</code> duplicates the functionality in <code>fmap</code>, which raises the question of whether we could distil the unique part of <code>>>=</code> into a different function. Happily we can: it’s called <code>join</code>, and gives us a third way to define a monad:</p>
<pre><code>class Applicative m => Monad m where
join :: m (m a) -> m a
return :: a -> m a</code></pre>
<p>Note that <code>join</code> is almost the inverse to <code>return</code>, but <code>join</code> will only collapse two lots of wrapping into one: it won’t return a pure value from the monad. More poetically (ex <a href="http://blog.plover.com/prog/burritos.html">The Universe of Discourse</a> ):</p>
<blockquote><p> …a monad must possess a join function that takes a ridiculous burrito of burritos and turns them into a regular burrito.</p></blockquote>
<p>We can implement <code>join</code> in terms of <code>>>=</code>, but we need both <code>join</code> and <code>fmap</code> to implement <code>>>=</code>:</p>
<pre><code>join x = x >>= id
x >>= f = join (fmap f x)</code></pre>
<p>Finally we need different, but equivalent laws for this definition of monads:</p>
<pre><code>return . f = fmap f . return
join . return = id
join . fmap return = id
join . fmap join = join . join
join . fmap (fmap f) = fmap f . join</code></pre>
<h3>Kleisli composition</h3>
<p>Recall that in the third monad law for <code>>>=</code> we discussed how to compose monadic functions:</p>
<pre><code>(a >>= f) >>= g = a >>= (\x -> f x >>= g)</code></pre>
<p> where the lambda expression on the left hand side applies <code>f</code> then <code>g</code>. The lambda looks a bit unwieldy but happily there is a standard name for this, the Kleisli composition arrow:</p>
<pre><code>(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
f >=> g = \x -> f x >>= g</code></pre>
<p>This gives us our fourth and final definition:</p>
<pre><code>class Monad m where
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
return :: a -> m a</code></pre>
<p>It transpires that if we rewrite the laws to use <code>>=></code> instead of <code>>>=</code> they take on a much more elegant form:</p>
<pre><code>return >=> f = f
f >=> return = f
(f >=> g) >=> h = f >=> (g >=> h)</code></pre>
<p>In other words <code>return</code> is the left and right identity for <code>>=></code>, and <code>>=></code> is associative.</p>
<p>We can also express <code>fmap</code>, <code>join</code>, and <code>>>=</code> succinctly:</p>
<pre><code>fmap f = id >=> return . f
join = id >=> id
(>>= f) = id >=> f</code></pre>
<p>There’s a fun game to play with the types in the expression for <code>join</code>. Recall:</p>
<pre><code>(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
id :: a -> a</code></pre>
<p>and so in <code>id >=> id</code> we must have:</p>
<pre><code>a ≣ m b
b ≣ m c</code></pre>
<p>and thus:</p>
<pre><code>a ≣ m (m c)
(id >=> id) :: Monad m => m (m c) -> m c</code></pre>
<p>Finally note that the Kleisli arrow is the monadic take on <code>flip (.)</code>, not <code>.</code>:</p>
<pre><code>(.) :: (b -> c) -> (a -> b) -> a -> c
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c</code></pre>
<h2>Standard Monad Functions</h2>
<p>Having defined the monad, one gets a whole variety of fun little functions to play with it. Many of these are listed in <a href="http://hackage.haskell.org/package/base/docs/Control-Monad.html">Control.Monad</a> and you should consult that for full documentation. The notes below, are my notes on top of that.</p>
<h3><code>>></code></h3>
<p><code>>></code> is a specialized version of <code>>>=</code>, which is defined for every monad. We omitted it above because it doesn’t add anything conceptual to the picture:</p>
<pre><code>(>>) :: Monad m => m a -> m b -> m b
f >> g = f >>= const g</code></pre>
<h3><code>fail</code></h3>
<p>Although it’s included in every monad, <code>fail</code> is a mistake, born of <code>do</code>-notation.</p>
<h3>liftM, liftM2, …, liftM5</h3>
<p>These lift functions of n-arguments into a monadic form:</p>
<pre><code>liftM :: Monad m => (a -> r) -> m a -> m r
liftM2 :: Monad m => (a -> a1 -> r) -> m a -> m a1 -> m r
…</code></pre>
<p>They can be expressed as a chain of <code>>>=</code>. For example:</p>
<pre><code>liftM2 f x y = x >>= \u ->
               y >>= \v ->
               return (f u v)</code></pre>
<p>though perhaps <a href="http://www.haskell.org/haskellwiki/Typeclassopedia#do_notation"><code>do</code>-notation</a> is nicer:</p>
<pre><code>liftM2 f x y = do u <- x
                  v <- y
                  return (f u v)</code></pre>
<p> Finally,</p>
<pre><code>liftM ≣ fmap</code></pre>
<h3><code>ap</code></h3>
<p><code>ap</code> provides a more scalable way to lift functions into the monad:</p>
<pre><code>liftMn f x1 x2 … xn ≣ return f `ap` x1 `ap` … `ap` xn</code></pre>
<p>The right-hand-side might remind you of applicative:</p>
<pre><code>(pure f) <*> x1 <*> x2 <*> … <*> xn</code></pre>
<p>and indeed we find:</p>
<pre><code>pure ≣ return
<*> ≣ `ap`</code></pre>
<p>It’s easy to implement <code>ap</code> directly:</p>
<pre><code>f `ap` x = f >>= \g ->
           x >>= \y ->
           return (g y)</code></pre>
<p>But there’s also an elegant relation to <code>liftM2</code>:</p>
<pre><code>ap = liftM2 id</code></pre>
<p>This is obviously true from the expression for <code>liftM2</code> above, but I think there is merit in pondering the result until it is obvious without seeing the innards of the lift.</p>
<h3><code>sequence</code></h3>
<p><code>sequence</code> interchanges the monad and the list:</p>
<pre><code>sequence :: Monad m => [m a] -> m [a]</code></pre>
<p>It can be implemented with <code>foldr</code>, where it bears a striking resemblance to the identity fold:</p>
<pre><code>sequence = foldr (liftM2 (:)) (return [])
idFold = foldr (:) []</code></pre>
<p>Given the fold, we just convert both the step and base-case to their monadic equivalents and get <code>sequence</code>.</p><h1>Monads in Haskell: Reader</h1><p>Martin Oldfield, 2014-08-24</p><p>Brief notes on the reader monad in Haskell.</p><h2>Introduction</h2>
<p>Some very brief notes summarizing Haskell’s reader monad. It’s my crib sheet, written partly to straighten matters in my own mind and partly for future reference.</p>
<p>Most of the information here comes from the usual places, notably the <a href="http://www.haskell.org/haskellwiki/Typeclassopedia">Typeclassopedia.</a> I’m also indebted to Dominic Prior for many helpful discussions. Dominic is collecting <a href="https://docs.google.com/document/d/1DvbcQTibeUEOVmoLO14vvRa27kf6y29sObUmQpyFn9g/pub">useful and interesting monad examples</a> on Google Docs.</p>
<h2>The Reader monad</h2>
<p>In Haskell the <a href="http://hackage.haskell.org/package/mtl-2.2.1/docs/Control-Monad-Reader.html#1">Reader Monad</a> is the standard way to handle functions which need to read some sort of immutable environment.</p>
<p>As we’ve discussed before, the basic idea is to use the <a href="../07/monads-fn.html">function monad</a> <code>((->) e)</code> wrapped in a type to make things more readable. However, if you look at the source the reality seems more complicated.</p>
<p>For me there are three distinct areas of confusion:</p>
<ul>
<li>The Reader monad is defined as a <a href="http://en.wikipedia.org/wiki/Monad_transformer">monad transformer</a>, namely <code>ReaderT Identity</code>.</li>
<li>There are many ways to implement a Reader Monad, and so we work in terms of a MonadReader class.</li>
<li>The Reader constructor is defined using record syntax, which I always find confusing in non-trivial cases. I think this is a rather subjective failing though.</li>
</ul>
<p>However, given that we have the source, we can easily make our own standalone version:</p>
<pre><code>import Control.Monad
import Control.Applicative

data R e a = R (e -> a)

instance Functor (R e) where
  fmap f (R x) = R $ \e -> (f . x) e

instance Applicative (R e) where
  pure x          = R $ \e -> x
  (R f) <*> (R x) = R $ \e -> (f e) (x e)

instance Monad (R e) where
  return x = R $ \e -> x
  x >>= f  = R $ \e -> runR (f (runR x e)) e

runR :: R e a -> e -> a
runR (R f) e = f e

ask :: R a a
ask = R $ \e -> e</code></pre>
<p>Here we’ve called the monad <code>R</code> rather than <code>Reader</code> to avoid conflicts with the real thing.</p>
<p>Notice that the functions we must implement to make the functor, applicative and monad instances all begin <code>R $ \e -> ...</code> on the right-hand side. This means that when we’re thinking about such values we can write them as <code>R x</code> without loss of generality.</p>
<p>The definitions above are written so as to emphasize the <code>R $ \e -> ...</code> part. You might prefer to say:</p>
<pre><code>return = R . const
ask = R id</code></pre>
<p>I think the key intuition is that Readers are functions from a particular environment (wrapped in <code>R</code>).</p>
<p>Beside the core functions above, our new Reader also provides <code>runR</code> and <code>ask</code>. We’ll need these to use the monad in practice.</p>
<h3><code>runR</code></h3>
<p>There’s no standard way to extract a value from a monad, which means that for the Reader instance to be useful we will need a function to actually run the Reader in a given environment, and return the result. <code>runR</code> is that function!</p>
<p>In many ways <code>runR</code> does the opposite of <code>R</code>:</p>
<pre><code>*Main> :t R
R :: (e -> a) -> R e a
*Main> :t runR
runR :: R e a -> e -> a
*Main> :t runR . R
runR . R :: (e -> a) -> e -> a
*Main> (runR . R) (+1) 1234
1235 </code></pre>
<p>So</p>
<pre><code>runR . R = ($)</code></pre>
<h3><code>ask</code></h3>
<p>The other extra function, <code>ask</code>, provides a way to easily access the environment. Typically we’ll use it in a string of actions expressed in do-notation, but for now let’s try something simpler. <code>ask</code> has type <code>R a a</code>, so given an environment of type <code>a</code> we can run it:</p>
<pre><code>*Main> runR ask 1234
1234</code></pre>
<p>It’s straightforward to see this:</p>
<pre><code>runR ask = runR (R $ \e -> e)
         = runR (R id)
         = (runR . R) id
         = id</code></pre>
<h3><code>return</code> and <code>>>=</code></h3>
<p>Being a monad we will need <code>return</code> and <code>>>=</code>. Happily these are just translations of the definitions for <code>((->) e)</code> sprinkled with <code>R</code> and <code>runR</code>:</p>
<pre><code>instance Monad ((->) e) where
  return x = \e -> x
  x >>= f  = \e -> f (x e) e

instance Monad (R e) where
  return x = R $ \e -> x
  x >>= f  = R $ \e -> runR (f (runR x e)) e</code></pre>
<p>A simple <code>return</code> Reader doesn’t depend on the environment at all:</p>
<pre><code>*Main> runR (return "Banana") 1234
"Banana"</code></pre>
<p>To see why consider:</p>
<pre><code>runR (return x) e = runR (R $ const x) e
                  = (runR . R) (const x) e
                  = (const x) e
                  = x</code></pre>
<h2>Subsidiary functions</h2>
<p>Besides the functions above, we also define a couple more to make life easier. I <em>think</em> the functions above are a sufficient set, in the sense that you can define everything in terms of them, but I’m not sure that they’re the set actually used by the <code>MonadReader</code> class.</p>
<h3><code>asks</code></h3>
<p><code>ask</code> lets us read the environment and then play with it. <code>asks</code> takes a complementary approach: given a function it returns a Reader which evaluates that function and returns the result.</p>
<pre><code>asks :: (e -> a) -> R e a
asks f = do
  e <- ask
  return $ f e</code></pre>
<p>Here’s an example:</p>
<pre><code>*Main> runR (asks length) "Banana"
6</code></pre>
<p><code>asks</code> can be very elegantly implemented in terms of <code>fmap</code>:</p>
<pre><code>asks f = fmap f ask</code></pre>
<p>This simplicity hints at a deeper observation: <code>asks</code> is effectively the constructor <code>R</code>. Just look at the types:</p>
<pre><code>*Main> :t R
R :: (e -> a) -> R e a
*Main> :t asks
asks :: (e -> a) -> R e a</code></pre>
<p>I’m still slightly unsure whether this is necessarily true for all <code>MonadReaders</code> though. Caveat lector!</p>
<h3><code>local</code></h3>
<p><code>local</code> transforms the environment a Reader sees:</p>
<pre><code>local :: (e -> t) -> R t a -> R e a
local f r = do
  e <- ask
  return $ runR r (f e)</code></pre>
<p>Again I prefer the desugared version:</p>
<pre><code>local f r = fmap (\e -> runR r (f e)) ask</code></pre>
<p>In the example below we’ll use <code>ask</code> as the Reader, which will just show us the environment:</p>
<pre><code>*Main> runR ask "Chocolate"
"Chocolate"
*Main> runR (local (++ " sauce") ask) "Chocolate"
"Chocolate sauce" </code></pre>
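<p>Note that <code>local</code> only changes the environment seen by the Reader it wraps: later actions in the same <code>do</code> block still see the original environment. A small sketch using the definitions above:</p>
<pre><code>inner :: R String (String, String)
inner = do
  a <- local (++ " sauce") ask
  b <- ask
  return (a, b)</code></pre>
<p>so that <code>runR inner "Chocolate"</code> gives <code>("Chocolate sauce","Chocolate")</code>.</p>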
<h2>Examples</h2>
<p>Here’s a simple, contrived example:</p>
<pre><code>f :: a -> R e (a, e)
f x = do
  e <- ask
  return $ (x, e)</code></pre>
<p><code>f</code> takes one explicit parameter and uses <code>ask</code> to read the environment. It returns both in a tuple.</p>
<p>To run the function, just use <code>runR</code>:</p>
<pre><code>*Main> runR (f 10) 20
(10,20)</code></pre>
<p>There are more sensible examples in the <a href="http://hackage.haskell.org/package/mtl-2.2.1/docs/Control-Monad-Reader.html#1">Control.Monad.Reader</a> documentation.</p>
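<p>As a slightly less contrived sketch (the <code>Config</code> record here is my own invention, not from any library), we can thread a configuration record through a computation with <code>asks</code>:</p>
<pre><code>data Config = Config { verbosity :: Int, prefix :: String }

logMsg :: String -> R Config String
logMsg s = do
  p <- asks prefix
  v <- asks verbosity
  return $ if v > 0 then p ++ s else s</code></pre>
<p>The configuration is supplied exactly once, when we run the Reader:</p>
<pre><code>*Main> runR (logMsg "hello") (Config 1 ">> ")
">> hello"</code></pre>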
<h2><code>R</code> golf</h2>
<p>One can play the usual game of hunting for cute ways to compose things.</p>
<p>If we desugar our toy <code>f</code> above it becomes,</p>
<pre><code>f x = ask >>= \e -> return $ (x,e)
    = ask >>= return . (\e -> (x,e))
    = fmap (\e -> (x,e)) ask</code></pre>
<p>or,</p>
<pre><code>f x = asks (\e -> (x,e))</code></pre>
<p>Given this:</p>
<pre><code>*Main> runR (return 10 >>= f) 20
(10,20)
*Main> runR ((f >=> f) 10) 20
((10,20),20)
*Main> runR (ask >>= f) 20
(20,20) </code></pre>
<p>Or perhaps a spot of lifting:</p>
<pre><code>*Main> runR (liftM (*10) ask) 20
200
*Main> runR (liftM2 (,) ask ask) 20
(20,20)
*Main> runR (liftM2 (,) (f 100) ask) 20
((100,20),20)</code></pre>
<p>Finally we could get Applicative:</p>
<pre><code>*Main> runR ((,) <$> (f 10) <*> (f 100)) 20
((10,20),(100,20))</code></pre>
0A775B7E-EDE1-11DD-8189-8800540FD1A72009-01-29T08:42:56:56Z2014-08-15T20:54:20:20ZPlaces to drink in ParisMartin Oldfield<p>Some brief notes on places to drink in Paris. </p><h2>Websites</h2>
<p>Recently I’ve tried a few places from <a href="http://parisbymouth.com/our-guide-to-paris-restaurants/">Paris by Mouth</a> and they’ve been good.</p>
<h2>Bars</h2>
<h3>Le Fumoir, 6 Rue de l’Amiral de Coligny, 75001</h3>
<p>One of my very favourite places to drink, particularly in the early evening. It’s handy for the Louvre (just head east from the Pavillon de l’Horloge) and the Louvre-Rivoli metro station.</p>
<p>The bar’s quite dark inside which lends it a slightly subdued elegance, though photophiles would probably prefer somewhere brighter. I particularly like the tables next to the south-facing windows through which warm light flows if you’re lucky with the time and weather.</p>
<p>One can dine here too, but I usually stick to their dry martinis.</p>
<p>November 2010 update: I fear their dry martinis have got a bit too wet, but damp is often a problem in winter.</p>
<p>October 2011 update: They now appear to offer some amazing Happy Hour deals where the marginal cost of a second drink is trivially small!</p>
<p>October 2012 update: The overly wet martinis persist, but thankfully the problem can be mitigated by ‘very, very dry’.</p>
<p>August 2014 update: The martinis have improved, the atmosphere remains wonderful, and I had dinner here too. The food was great, so I had lunch a few days later too!</p>
<p>For more details see <a href="http://www.lefumoir.com/">their website,</a> or <a href="http://maps.google.com/maps?q=N+48+51.626+E+2+20.443">Google Maps.</a></p>
<p><small><em>Last visited August 2014.</em></small></p>
<h3>Ô Chateau, 68, rue Jean-Jacques Rousseau, 75001</h3>
<p>A fine wine bar with a varied cellar and many wines by the glass. There’s a limited, but good, range of tapas and more substantial food too.</p>
<p>For more details see <a href="http://www.o-chateau.com/">their website,</a> or <a href="http://maps.google.com/maps?q=N+48+51.864+E+2+20.654">Google Maps.</a></p>
<p><small><em>Last visited October 2012.</em></small></p>
<h3>Bar 8, Mandarin Oriental Hotel, 251 rue Saint-Honoré, 75001</h3>
<p>Just perfect!</p>
<p>A lovely atmosphere if you’re in the mood for that elegant, high-end hotel chic, and quite magnificent cocktails. I tried the ‘Laphroaig smash’, which manages to seamlessly interpolate blackberries and mint into the smokey complexity of the whisky. Amazing!</p>
<p>For more details see <a href="http://www.mandarinoriental.com/paris/fine-dining/bar-eight/">their website,</a> or <a href="http://maps.google.com/maps?q=N+48+52.024+E+2+19.640">Google Maps.</a></p>
<p><small><em>Last visited August 2014.</em></small></p>
<h3>Môm, 4–6, rue Pierre Demours, 75017</h3>
<p>A stylish and somewhat trendy place, which nevertheless made a perfectly good martini.</p>
<p>For more details see <a href="http://www.momparis.fr/">their website,</a> or <a href="http://maps.google.com/maps?q=N+48+52.797+E+2+17.507">Google Maps.</a></p>
<p><small><em>Last visited October 2012.</em></small></p>
<h1>Leaving Arduino</h1>
<p><em>Martin Oldfield, 2014-05-23; updated 2014-05-24.</em></p>
<p>Although the Arduino continues to provide good, cheap development boards, I increasingly dislike the <span class="caps">IDE </span>and software framework. Here are some notes on moving away from it.</p>
<h2>Introduction</h2>
<p>The Arduino project has been a great success. Lots of people are building hardware who wouldn’t have done were it not to exist. It’s also given us <a href="http://arduino.cc/en/Main/Products">numerous cheap development boards.</a></p>
<p>However, I don’t like the software. The <span class="caps">IDE </span>feels clunky, and increasingly I don’t like using the Arduino runtime because it hides so much from the programmer.</p>
<p>Here are some brief notes on my attempts to move away from the Arduino software to a standard avr-gcc and avr-glibc environment. Happily it’s quite a painless process.</p>
<h2>Toolchain, library, and programmer</h2>
<p>Besides the <span class="caps">IDE, </span>the Arduino application also provides all the <a href="http://www.nongnu.org/avr-libc/"><span class="caps">GNU AVR </span>tools,</a> so we’ll need to install those ourselves.</p>
<p><span class="caps">GNU </span>provide:</p>
<ul>
<li>the <a href="http://www.nongnu.org/avr-libc/">avr-gcc</a> toolchain;</li>
<li>the <a href="http://www.nongnu.org/avr-libc/user-manual/index.html">avr-libc</a> library;</li>
<li>the <a href="http://savannah.nongnu.org/projects/avrdude">avrdude</a> programmer.</li>
</ul>
<p>I found avrdude a bit complicated at first, and wrote <a href="../../2009/02/avrdude-cookbook.html">some notes</a> about using it.</p>
<p>All the tools are easy to install!</p>
<h3>MacOS installation</h3>
<p>Avrdude is in <a href="http://brew.sh">homebrew,</a> and Lars Immisch has <a href="https://github.com/larsimmisch/homebrew-avr">packaged the rest.</a></p>
<pre><code>$ brew tap larsimmisch/avr
$ brew install avr-libc avrdude</code></pre>
<h3>Debian/Ubuntu installation</h3>
<p>Everything is already packaged:</p>
<pre><code>$ sudo apt-get install avr-libc avrdude</code></pre>
<h3>Windows installation</h3>
<p>No idea!</p>
<h2>Benefits</h2>
<h3>Better control of tools</h3>
<p>Ages ago I wrote a <a href="../../2009/02/arduino-cli.html">Makefile to compile Arduino sketches from the command line.</a> Other people have <a href="https://github.com/sudar/Arduino-Makefile/">developed it further</a> and it now goes to heroic lengths to compile sketches without bothering the user with pesky details.</p>
<p>However, I find these Makefiles are complicated and difficult to adjust: they’re designed to hide the details of the compilation process just as the Arduino <span class="caps">IDE </span>does, but that makes it harder to take control of things yourself.</p>
<p>For example, I think <a href="http://www.nongnu.org/avr-libc/user-manual/group__demo__project.html">a more traditional development environment</a> makes it easier to experiment with <a href="http://www.tty1.net/blog/2008/avr-gcc-optimisations_en.html"><span class="caps">GCC </span>optimizations.</a></p>
<p>Replacing the Arduino runtime with avr-libc also makes it more explicit which hardware resources are being used, and thus makes it easier to reason about potential races between e.g. user code and interrupt handlers.</p>
<p>Finally, having a smaller runtime and simpler Makefiles appears to make compilation cycles significantly shorter, though I’ve not timed this.</p>
<h3>Fewer impedance mismatches</h3>
<p>It’s hard to be objective about this, but I find myself much more receptive to interesting but slightly subtle issues when I’m dealing with the hardware directly rather than through the Arduino code.</p>
<p>For example:</p>
<ul>
<li><a href="https://gcc.gnu.org/onlinedocs/gcc/Inline.html#Inline">thoughts about inline functions</a> seem irrelevant when calls to digitalWrite() are such resource hogs;</li>
<li><a href="http://www.nongnu.org/avr-libc/user-manual/group__util__atomic.html">concerns about interrupts and atomicity</a> seem more pertinent when you know that <em>all</em> the code running on the microcontroller has a consistent policy towards them;</li>
<li><a href="http://www.nongnu.org/avr-libc/user-manual/modules.html">other functions provided by avr-libc</a> appear more accessible when you’re always targeting that <span class="caps">API.</span></li>
</ul>
<h3>Freedom from C++</h3>
<p>The Arduino runtime is written in C++, so if you use it you’re committed to including the C++ runtime in your code too. Parts of this seem quite large.</p>
<p>Of course, there’s nothing stopping you writing your own application in C++, but I think that choice should be up to you.</p>
<h2>Gotchas</h2>
<h3>Arduino Leonardo</h3>
<p>Although I’ve been dismissive of the Arduino runtime above, it’s not without its merits. It’s particularly important on e.g. the Leonardo, where the runtime handles comms with the <span class="caps">PC.</span></p>
<p>Older Arduino boards use the microcontroller’s <span class="caps">UART </span>to talk to a <span class="caps">FTDI USB</span>-serial chip. However some newer boards including the <a href="http://arduino.cc/en/Main/arduinoBoardLeonardo">Leonardo</a> sport a <span class="caps">USB</span>-aware microcontroller which provides a virtual serial port.</p>
<p>This means that if you want the program on the microcontroller to talk to the <span class="caps">PC, </span>you’ll need to implement a <span class="caps">USB </span>handler in your code. You’ll also need this if you want to support the Leonardo’s reset scheme.</p>
<p>You can read about the microcontroller, a <a href="http://www.atmel.com/devices/atmega32u4.aspx">ATmega32U4</a> on Atmel’s site. You might also find the <a href="http://www.usb.org/developers/devclass_docs/usbcdc11.pdf"><span class="caps">USB CDC </span>specification</a> helpful.</p>
<p>Alternatively, you can wrench suitable code from the Arduino runtime (see .../hardware/arduino/cores/arduino/ in the Arduino tree), but I found it hard to disentangle the necessary code from the rest of the Arduino framework.</p>
<p>A better approach might be to look at the <a href="http://www.fourwalledcubicle.com/LUFA.php"><span class="caps">LUFA </span>project,</a> or perhaps <a href="http://www.atmel.com/images/doc8360.pdf">Atmel’s own <span class="caps">USB </span>stack.</a> I’ve not tried these though.</p>
<p>You’ll need to pay due attention to the licenses when doing this.</p>
<h3>Libraries</h3>
<p>Besides the boards, one of the main strengths of the platform is that libraries exist for all sorts of devices. Most of these target the Arduino <span class="caps">API, </span>and so won’t work without it.</p>
<p>A reasonable strategy seems to be:</p>
<ul>
<li>start by writing a simple sketch in the Arduino <span class="caps">IDE</span>;</li>
<li>refactor the library so that it doesn’t use the Arduino library;</li>
<li>move the new library into the Arduino-free environment.</li>
</ul>
<p>Again, you’ll need to pay due attention to the licenses when doing this.</p>
<h2>Is it worth it ?</h2>
<p>The short answer is that I don’t know. Moving away from the Arduino software whilst continuing to use their development boards certainly feels like taking the path less travelled, and it’s not yet clear if that will make ‛all the difference’.</p>
<p>Most of the process has been easy and felt like the right thing to do, but getting the <span class="caps">USB </span>connection working to the PC proved complicated, and I’m still not sure that it works properly.</p>
<p>Having got this far I suspect that most of my future projects won’t use the Arduino software, but ultimately even if I do go back, I’m glad I’ve done the experiment.</p>
<h1>Avrdude Cookbook</h1>
<p><em>Martin Oldfield, 2009-02-18; updated 2014-05-23.</em></p>
<p>I find it hard to remember the options I need when calling avrdude, but I don’t know enough to work them out reliably every time. So, here’s my handy crib-sheet.</p>
<h2>Avrdude</h2>
<p><a href="http://www.nongnu.org/avrdude/">Avrdude</a> seems to be the standard way that people program <span class="caps">AVR </span>microcontrollers (unless they’re using Atmel’s own software).</p>
<p>Obviously the first step is to find the avrdude software. You could compile it yourself, but I tend to leave that to someone else. Here are a couple of ways which work for me.</p>
<h3>Avrdude on Linux</h3>
<p>Debian and Ubuntu (at least) have avrdude packages so just install it. Everything’s in the right place so just run it:</p>
<pre><code>$ avrdude
Usage: avrdude [options]
Options:
-p <partno> Required. Specify AVR device.
...</code></pre>
<h3>Avrdude on the Mac</h3>
<p>These days, May 2014, avrdude is available through homebrew, so just:</p>
<pre><code>$ brew install avrdude
$ avrdude
Usage: avrdude [options]
...
</code></pre>
<h3>Avrdude in the Arduino software</h3>
<p>A little more effort is needed here, because you have to tell the avrdude where to find its configuration file too. Suppose the root of the Arduino software is stored in $A, then:</p>
<pre><code>$ $A/hardware/tools/avr/bin/avrdude \
    -C $A/hardware/tools/avr/etc/avrdude.conf
avrdude: no programmer has been specified
on the command line or the config file
Specify a programmer using the -c option
and try again</code></pre>
<p>Actually it seems that the Linux Arduino package doesn’t contain avrdude.</p>
<h2>Software format</h2>
<p>You’ll also need a file to send to the board. Avrdude likes ihex files, which you can make with objcopy in the <span class="caps">GNU </span>binutils software.</p>
<h2>Talking to the Arduino bootloader</h2>
<p>Normally the <a href="http://www.arduino.cc/">Arduino</a> is programmed with a small <a href="http://arduino.cc/en/Tutorial/Bootloader">bootloader</a> which accepts the programs we write. Under the hood, the Arduino <span class="caps">IDE </span>calls avrdude to do the task. Happily the Arduino appears to be an <span class="caps">STK500 </span>programmer, which avrdude has supported for ages.</p>
<p>The examples below all work on the Diecimila and Duemilanove. They might work on other types of board too, but I’ve not tested them.</p>
<p>We’ll assume that $AVRDUDE has been set to a suitable incantation to invoke avrdude. Then to send foo.hex to the Arduino just do this:</p>
<pre><code>$AVRDUDE -q -V -F -c stk500v1 -b 19200 -P /dev/cu.usb* \
    -p atmega168 -U flash:w:foo.hex</code></pre>
<p>What does all this mean ?</p>
<ul>
<li><strong>-q</strong><br />
Suppress some messages (nicer if you’re running the command inside emacs).</li>
<li><strong>-V</strong><br />
Don’t verify.</li>
<li><strong>-F</strong><br />
Ignore the signature check.</li>
<li><strong>-c stk500v1 -b 19200</strong><br />
Talk <span class="caps">STK500 </span>to the Arduino at 19200 baud.</li>
<li><strong>-p atmega168</strong><br />
We’re programming an ATmega168.</li>
<li><strong>-U flash:w:foo.hex</strong><br />
Write foo.hex to the flash.</li>
</ul>
<h2>Writing a bootloader to the Arduino</h2>
<p>Another fairly common task is to write a bootloader to a new ATmega chip: that turns an ATmega168 chip as produced by Atmel into a chip which works in the Arduino.</p>
<p>The information here I cribbed from the web, <em>inter alia</em>:</p>
<ul>
<li><a href="http://wolfpaulus.com/journal/embedded/arduino2.html">Wolf Paulus’ Web Journal</a></li>
<li><a href="http://www.ladyada.net/forums/viewtopic.php?t=3558">LadyAda’s tea party forums</a></li>
</ul>
<h3>Musing on fusing</h3>
<p>The recipes in the articles above almost work, but there seems to be a problem with the value of the extended fuse. Happily, <a href="http://tinker.it/now/2007/02/24/the-tale-of-avrdude-atmega168-and-extended-bits-fuses/">an article at tinker.it</a> explains what’s going on.</p>
<p>In essence: the ATmega chip has a bunch of configuration settings called fuses which control various aspects of memory management, what sort of clock’s being used and so on. When the chip is programmed these have to be set correctly or the thing just won’t work.</p>
<p>Although most configuration settings need only a few bits of data, they’re arranged into three fuse bytes: low, high, and extended. On the ATmega168’s extended fuse it seems that only the three least-significant bits actually matter, and avrdude knows to mask out the value sent. However, when the fuse is read back, those masked bits are read as 0, which results in an error like this:</p>
<pre><code>avrdude: safemode: lfuse reads as FF
avrdude: safemode: hfuse reads as DF
avrdude: safemode: efuse reads as 0
avrdude: safemode: efuse changed! Was f8, and is now 0
Would you like this fuse to be changed back? [y/n] n
avrdude: safemode: Fuses OK</code></pre>
<p>That’s all a bit tedious! Incidentally, if you answer ‛y’ at the prompt then avrdude just hangs.</p>
<p>Obviously the solution is just to set the extended fuse to 0x00, which gives us this prescription:</p>
<pre><code>AVRDUDE_ALL="$AVRDUDE -q -V -c stk500v2 \
    -P /dev/cu.usbmodem* -p atmega168"
$AVRDUDE_ALL -e -U lock:w:0x3F:m -U hfuse:w:0xDF:m \
    -U lfuse:w:0xFF:m -U efuse:w:0x00:m
$AVRDUDE_ALL -D -U flash:w:bootloader.hex:i
$AVRDUDE_ALL -U lock:w:0xCF:m</code></pre>
<h3>Hardware</h3>
<p>You’ll need some extra hardware to do this. I’ve got an <a href="http://www.olimex.com/dev/avr-isp500.html">Olimex <span class="caps">AVR</span>-ISP500</a> which does the job nicely. It is a stk500v2 compatible device, and on my Mac it appears at /dev/cu.usbmodem*. So the basic incantation is straightforward:</p>
<pre><code>$AVRDUDE -q -V -c stk500v2 -P /dev/cu.usbmodem* -p atmega168</code></pre>
<p>Other programmers will work too. For example:</p>
<ul>
<li>LadyAda’s <a href="http://www.ladyada.net/make/usbtinyisp/download.html"><span class="caps">USB</span>tiny based programmer.</a></li>
<li>Use a spare <a href="http://tinker.it/now/2006/12/04/turn-arduino-into-an-avr-isp-programmer/">Arduino.</a></li>
<li>Use the same <a href="http://www.geocities.jp/arduino_diecimila/bootloader/index_en.html">Arduino.</a></li>
</ul>
<h3>Firmware</h3>
<p>The only other thing you’ll need is the bootloader itself. If $A points to the Arduino root directory, then you can find the file in $A/hardware/bootloaders/atmega168/ATmegaBOOT_168_diecimila.hex.</p>
<h2>Writing software to the Arduino without a bootloader.</h2>
<p>This is all somewhat speculative. I <em>think</em> that we basically follow the recipe above for writing a bootloader but upload our own hex file instead. I <em>think</em> that the extended fuse bits should be set to 0x01 though, so that on reset we execute from 0x0000 and not the start of the bootloader area.</p>
<p>If anyone can confirm or correct this, I’d be delighted.</p>
<h1>Atmel pin macros</h1>
<p><em>Martin Oldfield, 2014-05-21; updated 2014-05-22.</em></p>
<p>Brief notes on experimental macros for manipulating pins on <span class="caps">AVR </span>mega microcontrollers.</p>
<h2>Introduction</h2>
<p>When writing simple digital I/O code on Atmel microcontrollers, the standard approach is to access the <code>PIN</code>, <code>PORT</code> and <code>DDR</code> registers directly.</p>
<p>Here is an <a href="http://stackoverflow.com/questions/20993661/c-avr-simple-portb-ddrb-pinb-explanation">example on Stack Overflow.</a> As a concrete example, we might consider code like this:</p>
<pre><code>#include <avr/io.h>
...
DDRB |= _BV(PORTB2);
...
PORTB |= _BV(PORTB2);
...
PORTB &= ~(_BV(PORTB2));</code></pre>
<p>The <code>_BV</code> macro here is provided by <a href="http://www.nongnu.org/avr-libc/user-manual/group__avr__sfr.html">avr-libc</a> and just wraps the necessary shifts.</p>
<p>That code works, but it’s a pain to manage: typically in application code we want to refer to the pins in the language of the application, abstracting away from the pin which happens to perform the role.</p>
<p>Having chosen a pin, on the <span class="caps">AVR </span>mega we need to keep track of both which 8-bit port is used, and which bit within that port. It would be nice if we could say things like:</p>
<pre><code>#define LIGHT_DDR  DDRB
#define LIGHT_PORT PORTB
#define LIGHT_BIT  PORTB2
...
LIGHT_DDR  |= _BV(LIGHT_BIT);
LIGHT_PORT |= _BV(LIGHT_BIT);</code></pre>
<p>Nicer still might be to hide the bitwise operations entirely:</p>
<pre><code>#include "mjo-pin.h"
...
SINGLE_PIN(light, B, 2)
...
light_init_write();
light_set_high();</code></pre>
<p>Behind the scenes, <a href="./mjo-pin.h">mjo-pin.h</a> defines the <code>SINGLE_PIN</code> macro which expands the name and definition into a series of inline functions. For example, <code>light_set_high()</code>:</p>
<pre><code>static inline void light_set_high(void) __attribute__((always_inline));
static inline void light_set_high(void) { PORTB |= _BV(PORTB2); }</code></pre>
<p>The functions compile down to compact machine code, so inlining them is both faster <em>and</em> smaller. As a bonus, if a function isn’t used, it won’t be included.</p>
<h3>No composability</h3>
<p>There are disadvantages to this approach though: because there’s a 1:1 correspondence between pins and functions, you can’t easily parameterize things, or clone them.</p>
<p>For example, in Arduino land, all the I/O pins have a single number and the I/O routines accept that number at runtime. So you can write code like this:</p>
<pre><code>uint8_t pins[] = { 1, 2, 3, 5, 6, 7, 0 };
for(int i = 0; pins[i] != 0; i++)
{
    digitalWrite(pins[i], HIGH);
}</code></pre>
<p>Most of the controller libraries also accept pin numbers in their constructors. Here’s a <a href="http://arduino.cc/en/Tutorial/MotorKnob">stepper</a> example:</p>
<pre><code>#define STEPS 100
...
Stepper stepper(STEPS, 8, 9, 10, 11);</code></pre>
<p>The natural way to write a <code>SINGLE_PIN</code> stepper controller would be to define pins outside of the object, generating e.g. <code>stepA_set_high()</code> functions, and then call those functions explicitly in the controller code. This makes it hard to instantiate two controllers in the same code.</p>
<h3>No fusability</h3>
<p>Another disadvantage is that there’s no scope for fusing separate operations. For example, consider:</p>
<pre><code>SINGLE_PIN(front_led, B, 2);
SINGLE_PIN(back_led, B, 3);
...
front_led_init_write();
back_led_init_write();</code></pre>
<p>which will compile down to:</p>
<pre><code>DDRB |= _BV(PORTB2);
DDRB |= _BV(PORTB3);</code></pre>
<p>In an ideal world, we’d replace this with:</p>
<pre><code>DDRB |= (_BV(PORTB2) | _BV(PORTB3));</code></pre>
<p>Sometimes this is just a matter of efficiency, but in some cases it’s important that the two bits are updated simultaneously.</p>
<h3>Portability</h3>
<p>On the plus side though, this scheme does hide all the <span class="caps">AVR </span>mega specific stuff in the macro definition. You could imagine writing analogous macros for the <span class="caps">ARM </span>say, and the only change to the application code would be to define the pins differently.</p>
<p>Or, if you were really wedded to the Arduino <span class="caps">API </span>you could target that. Given:</p>
<pre><code>SINGLE_PIN(light, 13);</code></pre>
<p>the macro would give us:</p>
<pre><code>static inline void light_set_high(void) __attribute__((always_inline));
static inline void light_set_high(void) { digitalWrite(13, HIGH); }</code></pre>
<h3>Open issues</h3>
<p>I don’t yet have a good feel for how well defining the pins this way will interact with the need to access hardware registers directly for more complicated tasks. Equally, I’m not sure how much I’ll miss the ability to parameterize and compose things.</p>
<p>Writing the macros is somewhat of an experiment, and writing this note is more to clarify my own thinking than to suggest you embrace the idea. Still, for now, on the toy things I’ve used them for, I’m happy with the results.</p>
<h1>SSR in a box</h1>
<p><em>Martin Oldfield, 2014-04-29.</em></p>
<p>Brief notes on putting a solid-state relay in a box.</p>
<h2>Introduction</h2>
<p>I wanted to control an oven from a microcontroller, which inevitably means switching reasonably high-power mains electricity. The scope for lethal mistakes here seems all too real, and worrying about those is a fine way to get nothing done. Reductionism seemed a good antidote, so I built a box with a solid-state relay in it, tested and debugged it before worrying about the oven. Unsurprisingly, it turned out to be quite easy!</p>
<p><img src="ssr.jpg" alt="" class="img_border" /></p>
<h2>Crude specifications</h2>
<p>The main thing to decide is how much current the box should handle. The oven I wanted to control is rated at 1.4kW or about 6A. <span class="caps">IEC </span>mains sockets are good for 10A, so I chose that.</p>
<h2>Bill of materials</h2>
<h3><span class="caps">SSR</span></h3>
<p> In a fairly unscientific way, I ended up looking at the <a href="http://www.crydom.com/en/Products/Catalog/s_1.pdf">Crydom Series 1 <span class="caps">SSR</span>s.</a> I wasn’t sure that the 10A model would suffice without an enormous heatsink, the 25A model was out of stock at Farnell, so I ended up with the 50A rated <a href="http://uk.farnell.com/jsp/search/productdetail.jsp?SKU=1200245"><span class="caps">D2450PG.</span></a></p>
<p>This <span class="caps">SSR </span>accepts a control voltage of 3–32V and will tolerate a reverse polarity input in this range. So I just wired the input of the <span class="caps">SSR </span>straight to the front panel. Low-voltage logic will need a boost to drive it, but such is life.</p>
<p>Assuming the worst-case voltage drop of 1.15V across the relay whilst delivering 10A leads to 11.5W of power being dissipated. The 50A model has a thermal resistance between the junction and case of 0.45°C/W, so we’d expect the junction to be less than about 5°C above the <span class="caps">SSR </span>case.</p>
<p>The <span class="caps">SSR </span>was mounted on a <a href="http://www.crydom.com/en/Products/Catalog/h_sp_1.pdf"><span class="caps">HSP</span>-1 thermal pad.</a></p>
<h3>Case</h3>
<p>Somewhat inevitably the case came from <a href="http://www.hammondmfg.com/sinkbox.htm">Hammond:</a> a natty black anodized-aluminium affair with heatsink-like ridges. Case <a href="http://www.hammondmfg.com/pdf/531621.pdf">431621</a> is wide enough to bolt the <span class="caps">SSR </span>to the base.</p>
<p>I ordered front and back panels from <a href="http://www.schaeffer-ag.de/en/">Schaeffer <span class="caps">AG.</span></a> You’re welcome to the <a href="http://mjoldfield.com/atelier/2014/04/ssr/panels.tar.gz">design files,</a> but note:</p>
<ul>
<li>The corner radius is too small: you can either make it larger or just slightly trim the plastic end plates.</li>
<li>It’s probably sensible to use an isolated connector for the DC control voltage. This didn’t occur to me until I’d got the panels and I ended up using the two signal lines in a 3.5mm stereo jack plug. You could probably do better.</li>
</ul>
<h3><span class="caps">IEC </span>connectors</h3>
<p>The case cutouts match connectors from Schurter:</p>
<ul>
<li>the <a href="http://www.schurter.co.uk/var/schurter/storage/ilcatalogue/files/document/datasheet/en/pdf/typ_6200.pdf">6200.4115</a> fused input,</li>
<li>and the <a href="http://www.schurter.co.uk/var/schurter/storage/ilcatalogue/files/document/datasheet/en/pdf/typ_6600-4.pdf">6600.4115</a> output.</li>
</ul>
<h3>Front panel</h3>
<p>None of this is critical:</p>
<ul>
<li>A 20A <span class="caps">SPST </span>toggle switch, 12.7mm cutout, e.g. <a href="http://www.arcolectric.co.uk/PDFS/catalogue/Pages/P036-037|LeverSwitches_171.pdf"><span class="caps">C1700HOAAC </span>from Arcolectric</a></li>
<li>Two 250V neons, 6.4mm cutout, e.g. <a href="http://www.camdenboss.com/indicators/neon/6-4mm-cutout/6-4mm-cutout-threaded-240v-stripped-wire"><span class="caps">IND515205</span>-240-T/RD from CamdenBoss</a></li>
<li>Control socket, 6.3mm cutout.</li>
</ul>
<h2>In operation</h2>
<h3>Heating</h3>
<p>I’ve not tried the relay at 10A, but I powered the 1.4kW oven for about twenty minutes quite happily. Indeed I couldn’t discern any appreciable heating by feeling the bottom.</p>
<h3>Zero-crossing</h3>
<p>To reduce noise, the <span class="caps">SSR </span>only switches when there’s no current flowing, i.e. at the AC’s next zero-crossing. This implies that it would make a lousy dimmer switch, but for the oven it’s fine.</p>
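<p>The switching rule is simple enough to sketch as a toy model: the output changes state only at the first zero-crossing at or after the control edge (assuming 50Hz mains, so a crossing every 10ms):</p>

```python
import math

def next_zero_crossing(t, mains_hz=50.0):
    """First zero-crossing at or after time t (seconds).

    A sinusoid crosses zero twice per cycle: every 10 ms at 50 Hz.
    """
    half_period = 1.0 / (2.0 * mains_hz)
    return math.ceil(t / half_period) * half_period

# A control edge at 3 ms only takes effect at the 10 ms crossing.
print(next_zero_crossing(0.003))
```

<p>One consequence is that the on-time is quantized in 10ms lumps, which is harmless for an oven with a time constant of minutes.</p>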
<p>The plot below shows the current flow during switching (measured in a very crude and noisy fashion) through a light-bulb when the <span class="caps">SSR </span>was driven with a 1Hz, 80% duty signal. Red points correspond to the time when the control signal was off; green to on.</p>
<p><img src="start.svg" alt="Turning on" class="img_noborder_2up" /> <img src="end.svg" alt="Turning off" class="img_noborder_2up" /></p>
<p>Notice that the <span class="caps">SSR </span>only turns the current on or off at the first zero-crossing after the control signal has changed state. </p>73B9246E-FD18-11DD-8C2F-A521292444C42009-02-17T17:28:23:23Z2013-12-29T08:09:09:09ZArduino from the command lineMartin Oldfield<p>How to compile Arduino code from the command line. </p><h2>Retirement</h2>
<p>2013-05-19 : I've let this languish for too long, mainly because I don't do much with Arduinos now. Happily Sudar Muthu has taken over maintenance of the code. His github repository is <a href="https://github.com/sudar/Arduino-Makefile/">https://github.com/sudar/Arduino-Makefile/</a> though by the magic of github, the old <span class="caps">URL </span>continues to work.</p>
<p>Thanks to Sudar and everyone else who has helped for keeping this alive.</p>
<h2>Update News</h2>
<p>2012-09-17 : After letting this languish for many months there’s now <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.10.tar.gz">version 0.10.</a> This:</p>
<ul>
<li>Supports the Leonardo board.</li>
<li>Moves the board reset code to Perl (this means you’ll need the Device::SerialPort module).</li>
<li>Has new path handling code.</li>
</ul>
<p>There are other small changes, for more details see the <a href="https://github.com/mjoldfield/Arduino-Makefile/commits/master">commit history.</a></p>
<p>The new path calculations mean that:</p>
<ol>
<li>Few, if any, paths need to be specified in project-specific Makefiles.</li>
<li>Paths can be grabbed from the environment e.g. from .bashrc.</li>
<li>It should be easier to move projects between e.g. Mac & Linux.</li>
</ol>
<p>However, you’ll need to set up some new variables to make this work:</p>
<dl>
<dt><span class="caps">ARDMK</span>_DIR</dt>
<dd>Things which are included in this distribution e.g. ard-parse-boards</dd>
<dt><span class="caps">ARDUINO</span>_DIR</dt>
<dd>Things which are always in the Arduino distribution e.g. boards.txt, libraries, &c.</dd>
<dt><span class="caps">AVR</span>_TOOLS_DIR</dt>
<dd>Things which might be bundled with the Arduino distribution, but might come from the system. Most of the toolchain is like this: on Linux it’s supplied by the system.</dd>
</dl>
<p>Thanks to Dan Villiom Podlaski Christiansen, Tom Hall, Scott Howard, Kalin Kozhuharov, Rickard Lindberg, Christopher Peplin, Marc Plano-Lesay, Jared Szechy, and Matthias Urlichs for patches and comments.</p>
<h3>Github</h3>
<p>You can now grab the source for this from <a href="https://github.com/mjoldfield/Arduino-Makefile">github.</a> Besides my version there are several others under development which offer better integration with the <span class="caps">IDE, </span>support for Microchip’s ChipKIT boards, and other delights.</p>
<h2>Introduction</h2>
<p>The <a href="http://www.arduino.cc/">Arduino</a> has done much to popularize microcontrollers for the casual tinkerer. Its success suggests that there’s considerable value in combining a standard microcontroller (the ATmega) and a <span class="caps">GCC </span>based toolchain into an easily digestible package. For myself, it’s certainly easier to just install the latest release of the Arduino software than worry about building my own cross-compilers, particularly when it’s all new to me and consequently somewhat confusing.</p>
<p>After working through the toy tutorials though, I found myself wishing that writing code for the Arduino were more like writing other C programs. In my case, that means editing it with emacs then building it with make. I must emphasize that I’m not criticizing the Arduino <span class="caps">IDE</span>: there’s nothing wrong with it beyond it not being emacs...</p>
<p>It turns out that others have been along this path before: in the past the Arduino website had a hopeful sounding ‘Arduino from the Command Line’ article, but it’s gone now. There is still <a href="http://arduino.cc/en/Hacking/HomePage">some information</a> though it’s more limited.</p>
<p>Without an official Makefile, I wrote my own. You might wonder why I should embark on such a task. Well:</p>
<ul>
<li>I was keen that all of my objects and random other files were completely separate from the main Arduino stuff in the applet directory.</li>
<li>Although I wanted to be able to build Arduino sketches, I also wanted a suitable jumping-off point for code which didn’t use wiring. In other words, to regard the Arduino software as a convenient way to get the <span class="caps">AVR GCC </span>toolchain.</li>
<li>Rather than dumping a big Makefile in each sketch directory, I wanted to have a few definitions in the directory which then included a large project-independent file from elsewhere.</li>
</ul>
<p>Finally, one of the things I enjoy about writing code for microcontrollers is the sense of continuity between the hardware datasheets published by the chip manufacturer and the code I write (by contrast if you’re writing code on Linux there’s a vast gulf between the code executing printf and stuff appearing on the screen). Writing my own Makefile seemed a good way to make sure I understood what was going on.</p>
<p>So to the Makefile. Obviously it owes a great debt to the people who wrote the Makefile shipped with the Arduino <span class="caps">IDE </span>and here’s the credit list from that file:</p>
<pre><code># Arduino 0011 Makefile
# Arduino adaptation by mellis, eighthave, oli.keller</code></pre>
<p>Thanks then to mellis, eighthave, and oli.keller.</p>
<h2>Installation instructions</h2>
<p>If you’re using Debian or Ubuntu, then just grab the arduino-mk package.</p>
<p>You should then set up environment variables thus:</p>
<pre><code>ARDUINO_DIR = /usr/share/arduino
ARDMK_DIR = /usr
AVR_TOOLS_DIR = /usr</code></pre>
<p>Otherwise, you’ll need to download <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.10.tar.gz">the tarball containing the Makefile,</a> unpack it, and then copy the files somewhere sensible:</p>
<pre><code>$ wget http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.10.tar.gz
$ tar xzvf arduino-mk_0.10.tar.gz
$ cp arduino-mk-0.10/arduino-mk/Arduino.mk /usr/local/arduino/Arduino.mk
$ cp arduino-mk-0.10/bin/* /usr/local/bin</code></pre>
<p>The next step is to set up environment variables which point to the different files.</p>
<p>On the Mac you might want to set:</p>
<pre><code>ARDUINO_DIR = /Applications/Arduino.app/Contents/Resources/Java
ARDMK_DIR = /usr/local</code></pre>
<p>On Linux, where the toolchain is installed in /usr, you might prefer:</p>
<pre><code>ARDUINO_DIR = /usr/share/arduino
ARDMK_DIR = /usr/local
AVR_TOOLS_DIR = /usr</code></pre>
<p> The final step is to create a small Makefile for the sketch you actually want to build. Let’s build the <a href="http://arduino.cc/en/Tutorial/WebServer">WebServer example</a> from the Arduino distribution: it’s a good example because software-wise it’s as complicated as the standard examples get, but you can just plug the hardware together.</p>
<p>Create a new directory and copy the WebServer.ino file into it.</p>
<p><strong>Note: If you’re using version 1.0 of the Arduino software, you’ll need to make sure that the sketch’s name ends in .ino and not .pde.</strong></p>
<p>Now we’ll add a Makefile:</p>
<pre><code>BOARD_TAG = uno
ARDUINO_PORT = /dev/cu.usb*
ARDUINO_LIBS = Ethernet Ethernet/utility SPI
include /usr/local/arduino/Arduino.mk</code></pre>
<p>Hopefully these will be self-explanatory but in case they’re not:</p>
<dl>
<dt><span class="caps">BOARD</span>_TAG</dt>
<dd>A tag identifying which type of Arduino you’re using. This only works in version 0.6 and later.</dd>
<dt><span class="caps">ARDUINO</span>_PORT</dt>
<dd>The port where the Arduino can be found (only needed when uploading). If this expands to several ports, the first will be used.</dd>
<dt><span class="caps">ARDUINO</span>_LIBS</dt>
<dd>A list of any libraries used by the sketch—we assume these are in $(ARDUINO_DIR)/hardware/libraries.</dd>
</dl>
<p>Until version 0.8 you had to specify a <span class="caps">TARGET </span>name which set the basename for the executables. You still <em>can</em> do this, but it’s not necessary: thanks to a patch from Daniele Vergini it now defaults to the name of the current directory.</p>
<p>In the past, the following options were used, and indeed you can still use them. However it’s probably better to set <span class="caps">BOARD</span>_TAG and let the Makefile look up the values in boards.txt:</p>
<dl>
<dt><span class="caps">MCU</span></dt>
<dd>The target processor (atmega168 for the Duemilanove).</dd>
<dt>F_CPU</dt>
<dd>The target’s clock speed (16000000 for the Duemilanove).</dd>
<dt><span class="caps">AVRDUDE</span>_ARD_PROGRAMMER</dt>
<dd>The protocol avrdude speaks—defaults to stk500v1.</dd>
<dt><span class="caps">AVRDUDE</span>_ARD_BAUDRATE</dt>
<dd>The rate at which we talk to the board—defaults to 19,200.</dd>
</dl>
<h3><span class="caps">BOARD</span>_TAG</h3>
<p>Makefiles before version 0.5 had to specify which processor and speed the target used. For standard boards, this information can be found in the boards.txt file, so it seemed sensible to use that instead.</p>
<p>Now, one need only define <span class="caps">BOARD</span>_TAG to match the target hardware and it should work. Internally the Makefile invokes ard-parse-boards—a small Perl utility included with the software—which parses boards.txt.</p>
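<p>boards.txt is just key=value lines, one per setting, so pulling out the tag-to-name table is straightforward. A rough sketch of the idea (the real ard-parse-boards is a Perl script; this Python version and the sample lines are purely illustrative):</p>

```python
# Illustrative sketch of extracting the tag -> board-name table from
# boards.txt. The real ard-parse-boards utility is written in Perl.
def parse_boards(lines):
    boards = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # skip blanks and comments
        key, _, value = line.partition("=")
        # Lines like "uno.name=Arduino Uno" carry the human-readable name;
        # other fields (build.mcu, build.f_cpu, ...) are ignored here.
        tag, _, field = key.partition(".")
        if field == "name":
            boards[tag] = value
    return boards

sample = [
    "# a comment",
    "uno.name=Arduino Uno",
    "uno.build.mcu=atmega328p",
    "mega.name=Arduino Mega (ATmega1280)",
]
print(parse_boards(sample))
```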
<p>If you’re not sure which board tag you need, ard-parse-boards will dump a full list:</p>
<pre><code>$ ard-parse-boards --boards
Tag Board Name
atmega168 Arduino NG or older w/ ATmega168
atmega328 Arduino Duemilanove or Nano w/ ATmega328
atmega8 Arduino NG or older w/ ATmega8
bt Arduino BT w/ ATmega168
bt328 Arduino BT w/ ATmega328
diecimila Arduino Diecimila, Duemilanove, or Nano w/ ATmega168
fio Arduino Fio
lilypad LilyPad Arduino w/ ATmega168
lilypad328 LilyPad Arduino w/ ATmega328
mega Arduino Mega (ATmega1280)
mega2560 Arduino Mega 2560
mini Arduino Mini
pro Arduino Pro or Pro Mini (3.3V, 8 MHz) w/ ATmega168
pro328 Arduino Pro or Pro Mini (3.3V, 8 MHz) w/ ATmega328
pro5v Arduino Pro or Pro Mini (5V, 16 MHz) w/ ATmega168
pro5v328 Arduino Pro or Pro Mini (5V, 16 MHz) w/ ATmega328
uno Arduino Uno </code></pre>
<p>If you don’t set it, <span class="caps">BOARD</span>_TAG defaults to uno.</p>
<p>You can, of course, continue to set F_CPU and <span class="caps">MCU </span>directly should you prefer that.</p>
<h3><span class="caps">ARDUINO</span>_LIBS</h3>
<p>Early versions of this Makefile (up to and including 0.4) didn’t really support this (despite claims to the contrary). Happily various kind people sorted out the problem, one of whom patched the Debian and Ubuntu version.</p>
<p>In the official <span class="caps">IDE, </span>it’s enough to select the library from a menu: this puts the relevant #include into the sketch and adds the necessary linker tweaks too.</p>
<p>In this Makefile, you’ll need to both add the #include yourself and append the directories which contain the library to the <span class="caps">ARDUINO</span>_LIBS variable. Often these will both have the same name, though it’s worth noting that the #include refers to a single file, but the <span class="caps">ARDUINO</span>_LIBS entry refers to an entire directory of source files.</p>
<p>However, care is needed if the library’s source files aren’t in a single directory. For example, the webserver example uses the <a href="http://www.arduino.cc/en/Reference/Ethernet">Ethernet library</a> and we needed to include both Ethernet and Ethernet/utility in <span class="caps">ARDUINO</span>_LIBS.</p>
<p>If you omit the .../utility library, you’ll get messy looking link errors from the bowels of the Ethernet library. The <span class="caps">SPI </span>and Wire libraries are like this too!</p>
<h2>Building</h2>
<p>If you’re used to Unix then this is easy:</p>
<pre><code>$ make
...</code></pre>
<p>The output is pretty verbose, but I think it should be obvious if it worked. After building you’ll see a new directory has been created which contains all the object files: build-uno. Since version 0.10, if you rebuild the software with a different <span class="caps">BOARD</span>_TAG, you’ll get a different directory name.</p>
<pre><code>$ ls -lR
total 16
-rw-r--r-- 1 mjo staff 263 12 Feb 11:06 Makefile
-rw-r--r-- 1 mjo staff 2308 12 Feb 10:57 WebServer.ino
drwxr-xr-x 28 mjo staff 952 12 Feb 11:07 build-uno</code></pre>
<h3>build-uno</h3>
<p>Let’s peek inside the build-uno directory:</p>
<pre><code>$ ls -l build-uno
total 2136
-rw-r--r-- 1 mjo staff 2292 12 Feb 11:07 CDC.o
-rw-r--r-- 1 mjo staff 2292 12 Feb 11:07 HID.o
-rw-r--r-- 1 mjo staff 23452 12 Feb 11:07 HardwareSerial.o
-rw-r--r-- 1 mjo staff 16008 12 Feb 11:07 IPAddress.o
-rw-r--r-- 1 mjo staff 40012 12 Feb 11:07 Print.o
-rw-r--r-- 1 mjo staff 21068 12 Feb 11:07 Stream.o
-rw-r--r-- 1 mjo staff 16580 12 Feb 11:07 Tone.o
-rw-r--r-- 1 mjo staff 2300 12 Feb 11:07 USBCore.o
-rw-r--r-- 1 mjo staff 6048 12 Feb 11:06 WInterrupts.o
-rw-r--r-- 1 mjo staff 7068 12 Feb 11:07 WMath.o
-rw-r--r-- 1 mjo staff 79196 12 Feb 11:07 WString.o
-rw-r--r-- 1 mjo staff 2329 12 Feb 10:57 WebServer.cpp
-rw-r--r-- 1 mjo staff 1920 12 Feb 11:06 WebServer.d
-rw-r--r-- 1 mjo staff 11324 12 Feb 11:06 WebServer.o
-rwxr-xr-x 1 mjo staff 193852 12 Feb 11:07 WebServer.elf
-rw-r--r-- 1 mjo staff 28572 12 Feb 11:07 WebServer.hex
-rw-r--r-- 1 mjo staff 1920 12 Feb 11:08 depends.mk
-rw-r--r-- 1 mjo staff 541002 12 Feb 11:07 libcore.a
drwxr-xr-x 4 mjo staff 136 12 Feb 10:57 libs
-rw-r--r-- 1 mjo staff 3616 12 Feb 11:07 main.o
-rw-r--r-- 1 mjo staff 5544 12 Feb 11:07 new.o
-rw-r--r-- 1 mjo staff 9780 12 Feb 11:06 wiring.o
-rw-r--r-- 1 mjo staff 7024 12 Feb 11:06 wiring_analog.o
-rw-r--r-- 1 mjo staff 9704 12 Feb 11:06 wiring_digital.o
-rw-r--r-- 1 mjo staff 7056 12 Feb 11:06 wiring_pulse.o
-rw-r--r-- 1 mjo staff 5736 12 Feb 11:06 wiring_shift.o
./build-uno/libs:
total 0
drwxr-xr-x 9 mjo staff 306 12 Feb 11:07 Ethernet
drwxr-xr-x 3 mjo staff 102 12 Feb 11:07 SPI
./build-uno/libs/Ethernet:
total 392
-rw-r--r-- 1 mjo staff 24836 12 Feb 11:07 Dhcp.o
-rw-r--r-- 1 mjo staff 23112 12 Feb 11:07 Dns.o
-rw-r--r-- 1 mjo staff 33008 12 Feb 11:07 Ethernet.o
-rw-r--r-- 1 mjo staff 42000 12 Feb 11:07 EthernetClient.o
-rw-r--r-- 1 mjo staff 19420 12 Feb 11:07 EthernetServer.o
-rw-r--r-- 1 mjo staff 41244 12 Feb 11:07 EthernetUdp.o
drwxr-xr-x 4 mjo staff 136 12 Feb 11:07 utility
./build-uno/libs/Ethernet/utility:
total 152
-rw-r--r-- 1 mjo staff 40480 12 Feb 11:07 socket.o
-rw-r--r-- 1 mjo staff 34840 12 Feb 11:07 w5100.o
./build-uno/libs/SPI:
total 16
-rw-r--r-- 1 mjo staff 6812 12 Feb 11:07 SPI.o </code></pre>
<p>Most of the files in here are object files for the wiring library. What about the others?</p>
<dl>
<dt>WebServer.cpp</dt>
<dd>This is the .ino sketch file with a small main program and a suitable #include prepended.</dd>
<dt>WebServer.d</dt>
<dd>This tracks the dependencies of WebServer.ino.</dd>
<dt>WebServer.elf</dt>
<dd>This is the executable produced by the linker.</dd>
<dt>WebServer.hex</dt>
<dd>This is a hex dump of the code part of the executable, in a format understood by the Arduino’s bootloader.</dd>
<dt>WebServer.o</dt>
<dd>The object file we got by compiling WebServer.cpp.</dd>
<dt>depends.mk</dt>
<dd>A single file containing all the dependency relations (it’s the concatenation of all the .d files).</dd>
<dt>libcore.a</dt>
<dd>Rather than link all the system supplied objects directly, we build them into this library first, then link against it.</dd>
</dl>
<h2>Uploading code</h2>
<p>This is easy:</p>
<pre><code>$ make upload</code></pre>
<h2>Uploading via <span class="caps">ISP</span></h2>
<p>If you’re using target hardware which doesn’t have a bootloader then you might want to use <span class="caps">ISP </span>to upload the code, though you’ll obviously need some extra hardware to do this.</p>
<p>Assuming that avrdude supports your programmer though, you’ll only need to make a few changes to the Makefile to tell avrdude where it can find the programmer and how to talk to it:</p>
<pre><code>ISP_PORT = /dev/ttyACM0
ISP_PROG = -c stk500v2</code></pre>
<p>Then to upload:</p>
<pre><code>$ make ispload</code></pre>
<h3>Fuses</h3>
<p>You might need to change the fuse settings when programming, though some care needs to be taken here or you might irreversibly damage the chip.</p>
<p>Normally the fuse settings are chosen from the boards.txt file to match the value of <span class="caps">BOARD</span>_TAG (assuming you’re running version 0.6 or higher), but you can set them yourself:</p>
<pre><code>ISP_LOCK_FUSE_PRE = 0x3f
ISP_LOCK_FUSE_POST = 0xcf
ISP_HIGH_FUSE = 0xdf
ISP_LOW_FUSE = 0xff
ISP_EXT_FUSE = 0x01
</code></pre>
<h2>Growing the project</h2>
<p>There are a couple of obvious things to do now. You might want to edit the sketch. That’s easy: just edit the .ino file and run make again.</p>
<p>Alternatively you might want to add some more source files to the project. That’s easy too: the Makefile understands C, C++ and assembler files in the source directory (with .c, .cpp, and .s extensions). Everything <strong>should</strong> just work.</p>
<h2>Wiring-less development</h2>
<p>Finally you might want to develop code which isn’t linked against the Wiring library. There’s some scope for this: just set NO_CORE in the Makefile e.g.</p>
<pre><code>NO_CORE = 1</code></pre>
<h2>Bugs and problems</h2>
<ul>
<li>The Makefile isn’t very elegant.</li>
<li>When compiling the sketch file, the compiler actually sees the .cpp file derived from it. Accordingly the line numbers of any errors will be wrong (but not by that much).</li>
<li>The Makefile doesn’t do some of the things that the Makefile distributed with the Arduino software does e.g. generating <span class="caps">COFF </span>files. I worry that some of these might be important.</li>
<li>This hasn’t been used very much yet, even by me. I’m writing this now as much for my benefit as anyone else’s, though I’d be delighted to know if anyone else finds it useful.</li>
</ul>
<h2>Changelog</h2>
<h3>2010-05-21, <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.3.tar.gz">version 0.3</a></h3>
<ul>
<li>Tidied up the licensing, making it clear that it’s released under <span class="caps">LGPL</span> 2.1.</li>
<li><a href="http://hands.com/~phil/">Philip Hands</a> sent me some code to reset the Arduino by dropping <span class="caps">DTR </span>for 100ms, and I added it.</li>
<li>Tweaked the Makefile to handle version 0018 of the Arduino software which now includes main.cpp. Accordingly we don’t need to—and indeed must not—add main.cxx to the .pde sketch file. The paths seem to have changed a bit too.</li>
</ul>
<h3>2010-05-24, <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.4.tar.gz">version 0.4</a></h3>
<ul>
<li>Tweaked rules for the reset target on Philip Hands’ advice.</li>
</ul>
<h3>2011-06-23, <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.5.tar.gz">version 0.5</a></h3>
<ul>
<li>Imported changes from Debian/Ubuntu, which incorporate a patch from Stefan Tomanek so that libraries would be compiled too.</li>
</ul>
<p>Note: Many other people sent me similar patches, but I didn’t get around to using them. In the end, I took the patch from Debian and Ubuntu: there seems merit in not forking the code and using a tested version. So, thanks and apologies to Nick Andrew, Leandro Coletto Biazon, Thibaud Chupin, Craig Hollabaugh, Johannes H. Jensen, Fabien Le Lez, Craig Leres, and Mark Sproul.</p>
<h3>2011-06-23, <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.6.tar.gz">version 0.6</a></h3>
<ul>
<li>Added ard-parse-boards. Mark Sproul suggested doing something like this ages ago, but I’ve only recently looked at it in detail.</li>
<li>Fabien Le Lez reported that one needs to link with -lc to avoid <a href="http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1290294587">linker errors.</a></li>
</ul>
<h3>Unreleased, <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.7.tar.gz">version 0.7</a></h3>
<ul>
<li>Added -lm to the linker options, and -F to stty.</li>
</ul>
<h3>2012-02-12, <a href="http://www.mjoldfield.com/atelier/2009/02/acli/arduino-mk_0.8.tar.gz">version 0.8</a></h3>
<ul>
<li>Patches for version 1.0 of the Arduino <span class="caps">IDE.</span> Older versions might still work, but I’ve not tested it.</li>
<li>A change to the build process: rather than link all the system objects directly into the executable, bundle them in a library first. This should make the final executable smaller.</li>
<li>If <span class="caps">TARGET </span>isn’t explicitly set, default to the current directory name. Thanks to Daniele Vergini for this patch.</li>
<li>Add support for .c files in system libraries: Dirk-Willem van Gulik and Evan Goldenberg both reported this and provided patches in the same spirit.</li>
<li>Added a size target as suggested by Alex Satrapa.</li>
</ul>
<h3>Later versions</h3>
<p>Please consult the <a href="https://github.com/mjoldfield/Arduino-Makefile/commits/master">commit history</a> on github.</p>
<h2>Similar work</h2>
<p>It’s not a derivative of this, but Alan Burlison has written <a href="http://bleaklow.com/2010/06/04/a_makefile_for_arduino_sketches.html">a similar thing.</a></p>
<p>Alan’s Makefile was used in <a href="http://pragprog.com/magazines/2011-04/advanced-arduino-hacking">a Pragmatic Programmer’s article.</a></p>
<p>Rei Vilo wrote to tell me that he’s using the Makefile in an Xcode 4 template called <a href="http://embedxcode.weebly.com">embedXcode.</a> Apparently it supports many platforms and boards, including <span class="caps">AVR</span>-based Arduino, <span class="caps">AVR</span>-based Wiring, <span class="caps">PIC32</span>-based chipKIT, <span class="caps">MSP430</span>-based LaunchPad and <span class="caps">ARM3</span>-based Maple. </p>B223D758-E9D2-11E1-B821-8B86F3EA51C92012-08-18T23:03:17:17Z2013-12-29T08:08:33:33ZA NTP driven Nixie ClockMartin Oldfield<p>A Nixie clock which gets its time from a Raspberry Pi pretending to be a <span class="caps">GPS </span>receiver. </p><p>Over the last few years, clocks based on <a href="http://en.wikipedia.org/wiki/Nixie_tube">Nixie Tubes</a> have become popular and attractive kits are now widely available.</p>
<p>I thought it would be fun to build a Nixie Clock, but I wanted one which got the time from the Internet using <a href="http://en.wikipedia.org/wiki/Network_Time_Protocol"><span class="caps">NTP.</span></a> That way, it would always be accurate, even jumping forwards and back to accommodate summer time.</p>
<p>My initial plan was to design my own clock, but it seemed unlikely that I’d make anything which looked as good as the kits, even were I to spend a lot of time on the process. Perhaps I could adapt a kit: most of the clocks have a microcontroller inside, so my next idea was to take an existing clock kit and hack the firmware.</p>
<p>That seemed easier, but then a better idea struck me: some of the clock designs accept <a href="http://en.wikipedia.org/wiki/NMEA_0183"><span class="caps">NMEA</span></a> data from a <span class="caps">GPS </span>receiver. I was sure I could replace the <span class="caps">GPS</span>r with a computer and fake the signal.</p>
<h2>The ИН-18 Blue Dream</h2>
<p><img src="nixie-clock-2.jpg" alt="" class="img_border" /></p>
<p>The ИН-18 <a href="http://www.nocrotec.com/shop/product_info.php/language/en/info/p127_IN-18-Blue-Dream-Nixie-Uhr.html">Blue Dream</a> was the nicest looking clock kit I could find, so I bought one. The kit is almost too easy to build, because most of the components are <span class="caps">SMD </span>parts which are pre-soldered to the <span class="caps">PCB.</span></p>
<p>Although the documentation includes the pin-out for the <span class="caps">GPS, </span>it is silent about which signals are expected. Happily Dieter at Nocrotec quickly supplied the answers. The Blue Dream:</p>
<ul>
<li>Expects a normal 4800 baud, 8N1 <span class="caps">NMEA </span>serial signal.</li>
<li>Uses <a href="http://en.wikipedia.org/wiki/RS-232">RS-232</a> levels. This implies that a ‘1’ is represented by a negative voltage and a ‘0’ by a positive one.</li>
<li>Extracts the time from the <a href="http://www.gpsinformation.org/dale/nmea.htm#RMC"><span class="caps">GPRMC </span>sentence.</a></li>
</ul>
<h2>Raspberry Pi</h2>
<p>To synthesize the fake <span class="caps">GPS </span>signal, a <a href="http://www.raspberrypi.org/faqs">Raspberry Pi</a> seemed the obvious choice: it’s cheap and small, yet runs a full Linux distribution. To avoid running an Ethernet cable, I installed one of those dinky <span class="caps">USB</span> WiFi adapters.</p>
<p>The software is pretty trivial. In fact, the most difficult part wasn’t making the serial port say the right things, but telling the kernel to keep its hands off! Happily Clayton Smith provided <a href="http://www.irrational.net/2012/04/19/using-the-raspberry-pis-serial-port/">an excellent recipe</a> to do this.</p>
<p>By contrast, the software itself is just a few lines of Perl. You can grab the code from <a href="https://github.com/mjoldfield/nmea-time-daemon">github</a></p>
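<p>The core of the idea is easy to sketch: build a <span class="caps">GPRMC </span>sentence carrying the current time, append the <span class="caps">NMEA </span>checksum (the XOR of every character between the ‘$’ and the ‘*’), and squirt it down the serial port. The real daemon is Perl and lives on github; this Python sketch of the sentence-building step uses dummy position, speed and course fields, since the clock only reads the time and date:</p>

```python
from datetime import datetime, timezone

def nmea_checksum(body):
    """XOR of all characters between '$' and '*', as two hex digits."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

def gprmc(now):
    """Build a GPRMC sentence carrying just the time and date.

    The clock only reads the time fields, so the position, speed and
    course fields here are dummy values.
    """
    body = (f"GPRMC,{now:%H%M%S},A,0000.00,N,00000.00,E,"
            f"0.0,0.0,{now:%d%m%y},,")
    return f"${body}*{nmea_checksum(body)}\r\n"

print(gprmc(datetime(2012, 8, 18, 23, 3, 17, tzinfo=timezone.utc)))
```

<p>In the real daemon the sentence is written to the serial port at 4800 baud once a second.</p>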
<h2>Hardware</h2>
<p><img src="nixie-clock-3.jpg" alt="" class="img_border" /></p>
<p>Interfacing the Raspberry Pi’s serial output to the Nixie clock proved equally easy. A simple <span class="caps">NPN</span>-transistor inverter converts the levels into something the clock will accept:</p>
<table class="spaced" cellspacing="0"><tr><th rowspan="2">Logic Level</th><th>RS-232</th><th>Raspberry Pi</th><th rowspan="2">‘Inverter’ Output</th></tr><tr><th>(typical)</th><th><span class="caps">GPIO </span>port</th></tr><tr><td align="center">0</td><td align="center">+5V</td><td align="center">0V</td><td align="center">+5V</td></tr><tr><td align="center">1</td><td align="center">-5V</td><td align="center">3.3V</td><td align="center">0V</td></tr></table>
<p>As you’ll see the conversion isn’t faithful, but it works.</p>
<h3>Power</h3>
<p>Obviously it would be nice to minimize the number of power supplies and cables we need. The clock needs a 12V supply, and draws about 500mA in normal operation. The power supply that came with the clock is rated at 1.5A though, so the best solution is to power the Raspberry Pi from that too.</p>
<p>The Pi wants 5V, so I built a simple switching regulator around a <a href="http://www.ti.com/product/lmz12002&lpos=Middle_Container&lid=Alternative_Devices"><span class="caps">LMZ12002</span></a> to drop the voltage. It’s true that something like a 7805 would work, but you’d have to dissipate about 3W of heat which is a bit of a bore. Needless to say the regulator was the most significant part of the electronics!</p>
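<p>The ‘about 3W’ is just the voltage headroom times the current: a linear regulator like the 7805 burns the whole 12V-to-5V drop as heat, whereas a switcher only loses its inefficiency. A rough comparison (the ~450mA Pi current and 90% switcher efficiency are assumptions, not measured figures):</p>

```python
v_in, v_out = 12.0, 5.0
i_load = 0.45   # assumed Raspberry Pi supply current (A)
eff = 0.90      # assumed switching-regulator efficiency

# Linear regulator: the full voltage headroom appears as heat.
p_linear = (v_in - v_out) * i_load

# Switcher: only the inefficiency is lost, regardless of headroom.
p_out = v_out * i_load
p_switch = p_out * (1.0 - eff) / eff

print(f"linear ~{p_linear:.2f} W, switcher ~{p_switch:.2f} W")
```

<p>A watt or four is the difference between a heatsink and a bare bit of stripboard.</p>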
<p>Both the inverter and the regulator fitted easily on a small piece of stripboard.</p>
<p>Finally, we obviously want to run only one cable to the clock, so I hacked the clock to accept the +12V from one of the spare pins on the <span class="caps">GPS </span>input connection (pin 6 in case I forget).</p>
<h2>Observations</h2>
<p>I’ve wanted to build an <span class="caps">NTP </span>driven clock for ages, but using an Arduino or a <span class="caps">PIC </span>seemed to be quite a lot of hassle and fairly expensive. By contrast the Raspberry Pi is both cheaper and easier to use. In fact, its biggest downside is the complete lack of any mounting holes on the <span class="caps">PCB</span>!</p>
<p>The cost difference seems significant. As of August 2012, here are some representative prices:</p>
<table class="spaced" cellspacing="0"><tr><th colspan="2">Arduino Ethernet</th></tr><tr><td>Arduino Uno</td><td>€20.00</td></tr><tr><td>Ethernet Shield</td><td>€29.00</td></tr><tr><td> </td><td style="border-top: solid black 1px; border-bottom: solid black 1px;">€49.00</td></tr><tr><td colspan="2"> </td></tr><tr><th colspan="2">Raspberry Pi Ethernet</th></tr><tr><td>Raspberry Pi</td><td>€30.00</td></tr><tr><td> </td><td style="border-top: solid black 1px; border-bottom: solid black 1px;">€30.00</td></tr><tr><td colspan="2"> </td></tr><tr><th colspan="2">Arduino WiFi</th></tr><tr><td>Arduino Uno</td><td>€20.00</td></tr><tr><td>WiFi Shield</td><td>€69.00</td></tr><tr><td> </td><td style="border-top: solid black 1px; border-bottom: solid black 1px;">€89.00</td></tr><tr><td colspan="2"> </td></tr><tr><th colspan="2">Raspberry Pi WiFi</th></tr><tr><td>Raspberry Pi</td><td>€30.00</td></tr><tr><td><span class="caps">USB</span> WiFi adapter</td><td>€15.00</td></tr><tr><td> </td><td style="border-top: solid black 1px; border-bottom: solid black 1px;">€45.00</td></tr></table>
<p>Despite being cheaper, software development is easier on the Raspberry Pi. Given that it’s a full Linux environment, we get a full networking stack as standard which extends all the way up to clients for <span class="caps">NTP </span>and any other protocol you want.</p>
<p>For the clock the only software we actually needed to write was a noddy little loop which just formats a message and sends it to the serial port. Neither performance nor memory matter much, so it’s an obvious task for something like Perl or Python.</p>
<p>I’m sure that we’ll see many projects, particularly those with a network connection, migrate from the Arduino and its ilk to the Raspberry Pi.</p>
<p><img src="nixie-clock-4.jpg" alt="" class="img_border" /> </p>7E5ACA08-D785-11E1-BE69-C647634E34322012-07-27T00:53:20:20Z2013-12-29T08:05:43:43ZUseful Raspberry Pi LinksMartin Oldfield<p>Links I found useful when starting to play with the Raspberry Pi. </p><h2>Abstract</h2>
<p>The Raspberry Pi is a nice cheap Linux board, but inevitably with anything so new useful information is scattered around the Internet. Here are some articles I found useful, but I’ve made no attempt to be exhaustive or definitive!</p>
<h2>Images, kernels, and firmware</h2>
<p>The canonical source for SD card images is <a href="http://www.raspberrypi.org/downloads">the RPi download page.</a></p>
<p>When I originally wrote this in July 2012, things were a bit complicated but now, in April 2013, ‘just use Raspbian’ seems to be the universal choice.</p>
<p>At the time of editing, 2013-04-01, the <a href="http://downloads.raspberrypi.org/images/raspbian/2013-02-09-wheezy-raspbian/2013-02-09-wheezy-raspbian.zip">2013-02-09 release</a> of <a href="http://www.raspbian.org">Raspbian</a> is recommended.</p>
<h3>The past is a different country</h3>
<p>Progress is a wonderful thing, and I no longer worry about:</p>
<ul>
<li>The distinction between ‘hard-float’ and ‘soft-float’. For more details, read <a href="http://www.raspbian.org/RaspbianFAQ">Raspbian’s <span class="caps">FAQ.</span></a></li>
<li>Special kernels from <a href="http://www.bootc.net/about/">Chris Boot</a>, or <a href="http://www.ctrl-alt-del.cc/2012/05/raspberry-pi-meets-edimax-ew-7811un-wireless-ada.html">special WiFi drivers</a>.</li>
<li>Firmware updating, though I suspect Hexxah’s <a href="https://github.com/Hexxeh/rpi-update">rpi-update script</a> is still the best solution here.</li>
</ul>
<h3>Writing the card on the Mac</h3>
<p>Happily the key instructions <a href="http://elinux.org/RPi_Easy_SD_Card_Setup">have been documented.</a></p>
<p>In essence (change disk3 to suit):</p>
<pre><code>$ diskutil unmountDisk /dev/disk3s1
$ sudo dd bs=1m if=foo.img of=/dev/rdisk3
$ diskutil eject /dev/rdisk3</code></pre>
<p>At least on my MacBook Pro, an external <span class="caps">USB</span> SD card reader supports a wider range of cards than the laptop’s own SD slot.</p>
<h3>Initial configuration</h3>
<p>The initial Raspbian boot leads to a menu which allows for some crude configuration.</p>
<ul>
<li><span class="caps">RAM </span>allocation. The Raspberry Pi has 256MB of <span class="caps">RAM </span>split between main system <span class="caps">RAM </span>and graphics. If you’re primarily running the machine headless, it makes sense to minimize the memory devoted to graphics.</li>
<li>Partition resizing. The Raspbian image is small: only about 1.9GB. If you install it on an 8GB SD card, that leaves about 6GB unused. Happily the root partition can be resized to fill the card. Magic!</li>
</ul>
<h2><span class="caps">GPIO </span>and other animals</h2>
<p>One of the nice things about the Raspberry is the <a href="http://elinux.org/RPi_Low-level_peripherals#General_Purpose_Input.2FOutput_.28GPIO.29"><span class="caps">GPIO </span>interface:</a> a set of pins you can control at will.</p>
<p>The kernel exposes most of the functionality in various /dev devices, but Mike McCauley has written a <a href="http://www.open.com.au/mikem/bcm2835/">nice library</a> to make the process smoother.</p>
<p>Finally there’s an official <a href="http://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf">datasheet</a> from Broadcom. </p>56479038-E472-11E1-9114-89630E016B332012-08-12T11:38:58:58Z2013-12-29T08:04:28:28ZThe TM1638 & the Raspberry PiMartin Oldfield<p>A tiny library to interface cheap <span class="caps">TM1638 </span>displays to the Raspberry Pi. </p><p><img src="pi-tm1638-1.jpg" alt="" class="img_border" /></p>
<p>One of the Raspberry Pi’s many features is that it’s easy to connect to displays: it has both a modern <span class="caps">HDMI </span>connector and a phono socket dispensing old-school composite video. However, there are times when one doesn’t want a large monitor around but would rather have a few seven-segment displays instead.</p>
<p>Whilst one could build that from scratch, one can also buy small modules with eight seven-segment displays, eight red-green <span class="caps">LED</span>s and eight push buttons for about $7 (as of August 2012). I bought mine from <a href="http://www.dealextreme.com/p/8x-digital-tube-8x-key-8x-double-color-led-module-81873?item=8">dealextreme</a> but there might be other sources.</p>
<p>The boards are basically just the <span class="caps">LED</span>s and switches, plus a <span class="caps">TM1638 </span>driver chip. The chip sits on a two-wire serial bus which makes it fairly easy to connect the boards to a computer/microcontroller of your choice. Of course, one needs a little bit of software, so I wrote some.</p>
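<p>To give a flavour of how simple the bus is, here’s a sketch (in Python, and not the library’s actual code) of how a single byte goes out. As I read the datasheet, the chip samples the data line on the rising clock edge, least-significant bit first:</p>

```python
def send_byte(byte, set_dio, set_clk):
    """Shift one byte out to the TM1638, LSB first.

    set_dio and set_clk are callables which drive the DIO and CLK
    lines; in the real library these would be GPIO writes.
    """
    for i in range(8):
        set_clk(0)                    # change data while the clock is low
        set_dio((byte >> i) & 1)
        set_clk(1)                    # chip samples DIO on the rising edge

# Record the bit order instead of driving real pins:
bits = []
send_byte(0x8f, bits.append, lambda level: None)
```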
<h2>Arduino woz ’ere.</h2>
<p>People have already worked out how to do all this on the Arduino:</p>
<ul>
<li>John Boxall wrote <a href="http://tronixstuff.wordpress.com/2012/03/11/arduino-and-tm1638-led-display-modules/">a blog about it</a> which describes the boards in great detail.</li>
<li>Ricardo Batista wrote <a href="http://code.google.com/p/tm1638-library/">a library to handle the comms.</a></li>
<li><a href="http://www.freetronics.com">Marc</a> (via John above) <a href="http://dl.dropbox.com/u/8663580/TM1638English%20version.pdf">found a datasheet.</a></li>
</ul>
<h2>On the Raspberry Pi</h2>
<h3>Hardware issues</h3>
<p>The only important difference between the Arduino and the Raspberry Pi in this case is that the Arduino’s a 5V beast but the Pi prefers 3.3V. Happily though the dealextreme board appears to cope perfectly well with the lower supply voltage.</p>
<h3>Software</h3>
<p>My code isn’t really a port of Ricardo’s Arduino library: I wanted a different <span class="caps">API. </span> However, I did copy his nice 7-segment font, and his code was very helpful when it came to understanding the data-sheet.</p>
<h2>Installation</h2>
<p>Before you start, you’ll need Mike McCauley’s <a href="http://www.open.com.au/mikem/bcm2835/">nice bcm2835 library.</a></p>
<p>You can then grab my <span class="caps">TM1638 </span>library from <a href="https://github.com/mjoldfield/pi-tm1638">github</a> in a couple of ways.</p>
<p>If you’ve got autotools then you can just clone the repository:</p>
<pre><code>$ git clone https://github.com/mjoldfield/pi-tm1638.git
$ cd pi-tm1638
$ autoreconf -vfi
$ ./configure
$ make
$ sudo make install </code></pre>
<p>You might find this easier and faster though:</p>
<pre><code>$ wget https://github.com/downloads/mjoldfield/pi-tm1638/pi-tm1638-1.0.tar.gz
$ tar xzvf pi-tm1638-1.0.tar.gz
$ cd pi-tm1638-1.0
$ ./configure
$ make
$ sudo make install</code></pre>
<h3>Generic <span class="caps">AVR </span>support</h3>
<p>In a pleasingly symmetric way, Filipe Moraes has ported this back to generic <span class="caps">AVR </span>chips, and added a scroll feature. You can read about it in Brazilian Portuguese on <a href="http://devpix.net/blog/?p=323">his blog,</a> or grab the code from <a href="http://devpix.net/blog/wp-content/uploads/2013/05/tm1638pjt.zip">http://devpix.net/blog/wp-content/uploads/2013/05/tm1638pjt.zip.</a></p>
<h3>Documentation</h3>
<p>If you’ve got doxygen installed the compilation process leaves <span class="caps">HTML </span>files in doc/html. Otherwise feel free to <a href="http://mjoldfield.github.com/pi-tm1638/tm1638_8h.html">browse them on github.</a></p>
<h2>Examples</h2>
<p>Three example programs are included with the software, and you’ll find them all in the examples directory:</p>
<ul>
<li>tm1638-hello: The canonical ‘Hello World’ program.</li>
<li>tm1638-buttons: A simple demonstration which reads the buttons.</li>
<li>tm1638-clock: A digital clock.</li>
</ul>
<p><strong><span class="caps">N.B.</span> All three examples hard code the pin numbers into the executable.</strong> So to run the examples you’ll need to make the following connections:</p>
<table class="spaced" cellspacing="0"><tr><th colspan="2">Raspberry Pi</th><th colspan="2"><span class="caps">TM1638</span> Board</th></tr><tr><th>Name</th><th>Pin</th><th>Name</th><th>Pin</th></tr><tr><td>3.3V</td><td>P1-01</td><td>Vcc</td><td>Pin 1</td></tr><tr><td><span class="caps">GROUND</span></td><td>P1-06</td><td><span class="caps">GND</span></td><td>Pin 2</td></tr><tr><td><span class="caps">GPIO</span> 17</td><td>P1-11</td><td><span class="caps">DIO</span></td><td>Pin 4</td></tr><tr><td><span class="caps">GPIO</span> 21</td><td>P1-13</td><td><span class="caps">CLK</span></td><td>Pin 3</td></tr><tr><td><span class="caps">GPIO</span> 22</td><td>P1-15</td><td><span class="caps">STB1</span></td><td>Pin 5</td></tr></table>
<h3>Raspberry Pi Revision 2</h3>
<p> Dominik Eschenmoser pointed out that if you’re using revision 2 of the Pi hardware, you have to change the clock pin from <span class="caps">GPIO21 </span>to <span class="caps">GPIO27.</span></p>
<h3>Software</h3>
<p>Once you’ve sorted out the hardware, doing simple things with it is easy.</p>
<p>Let’s look at one of the example programs, which turns your Raspberry Pi into a digital clock. Stripping out the comments and error checks, here’s the code:</p>
<pre><code>#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>
#include <bcm2835.h>
#include <tm1638.h>

int main(int argc, char *argv[])
{
    bcm2835_init();

    tm1638_p t = tm1638_alloc(17, 21, 22);

    while(t)
    {
        time_t now = time(NULL);
        struct tm *tm = localtime(&now);

        char text[10];
        snprintf(text, 9, "%02d %02d %02d",
                 tm->tm_hour, tm->tm_min, tm->tm_sec);

        tm1638_set_7seg_text(t, text, 0x00);

        delay(100);
    }

    return 0;
}</code></pre>
<p> If you’ve installed the tm1638 library, and saved the code above as clk.c, to compile and run it:</p>
<pre><code>$ gcc -std=c99 clk.c -o clk -lbcm2835 -ltm1638
$ sudo ./clk</code></pre>
<p>You need to run the program as root (which is what the sudo does above), so that the code can talk to the <span class="caps">GPIO </span>hardware.</p>
<p>Having done that little lot, you should get something like this: an automatic wireless clock:</p>
<p><img src="pi-tm1638-2.jpg" alt="" class="img_border" /> </p>DD000138-F9A2-11E1-8DE7-D5709A4BEC852012-09-08T08:27:11:11Z2013-12-29T08:02:45:45ZPlaces to eat in CaliforniaMartin Oldfield<p>Some brief notes on places to eat in California. </p><p>In the past, I've found some great food in California. Sadly on my most recent visit, in 2012, most of the places were pretty disappointing. Here are a few exceptions:</p>
<h2>Sears Fine Food</h2>
<p>‘Fine Food’ is pushing it, but Sears makes a fine breakfast pancake!</p>
<p>For more information visit <a href="http://www.searsfinefood.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+37+47.331+W+122+24.517">Google Maps.</a></p>
<p><small><em>Last visited August 2012.</em></small></p>
<h2>Drake's Beach Cafe</h2>
<p>Great, very simple food in the <a href="http://www.nps.gov/pore/index.htm">Point Reyes National Seashore.</a> I had the fried chicken with garlic fries and it was simply perfect.</p>
<p>For more information visit <a href="http://drakescafe.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+38+1.672+W+122+57.688">Google Maps.</a></p>
<p><small><em>Last visited August 2012.</em></small></p>
<h2>The House of Beef</h2>
<p>A steak house in Oakdale which more than lives up to its wonderful name! Yosemite seems to be a gastronomic desert (and I include the Ahwahnee in this) so ignore this place at your peril!</p>
<p>For more information visit <a href="http://www.houseofbeef.com/">their website,</a> or see <a href="http://maps.google.com/maps?q=N+37+46.063+W+120+50.904">Google Maps.</a></p>
<p><small><em>Last visited September 2012.</em></small> </p>CFB18CD4-9AC1-11E2-AC03-1E1708EBF0BD2013-04-01T11:46:27:27Z2013-12-29T07:20:21:21ZRaspberry Pi IdentificationMartin Oldfield<p>Being rather cheap little animals, Raspberry Pis tend to proliferate, and before long it’s hard to keep track of them all. Naming them helps, and software can make that more efficient. </p><h2>Names</h2>
<p>Most of the time, I set up a new Raspberry Pi by downloading a copy of <a href="http://www.raspberrypi.org/downloads">Raspbian,</a> and writing it to a SD card. Inevitably that leads to machines all called “raspberrypi” which is far from helpful when it comes to distinguishing them from each other.</p>
<p>So, I give each machine a name. This is easy to do: just edit /etc/hostname and /etc/hosts:</p>
<pre><code>pi@breadpi ~ $ cat /etc/hostname
breadpi
pi@breadpi ~ $ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 breadpi </code></pre>
<p>Editing /etc/hostname changes the machine’s name, but if you don’t have a matching entry in /etc/hosts things like sudo complain:</p>
<pre><code>$ sudo emacs /etc/hosts
sudo: unable to resolve host breadpi</code></pre>
<p>You’ll probably need to reboot to make sure the hostname propagates properly. There’s also a risk that the old hostname has become explicitly embedded in other places.</p>
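<p>If you want to check that the two files agree before sudo starts complaining, the test is simple enough to script. This is just my own illustrative helper, not anything shipped with Raspbian:</p>

```python
def hosts_has(hostname, hosts_text):
    """True if some non-comment line of an /etc/hosts file maps an
    address to the given hostname."""
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) >= 2 and hostname in fields[1:]:
            return True
    return False

# e.g. hosts_has("breadpi", open("/etc/hosts").read())
```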
<h2>Zeroconf and avahi</h2>
<p>Having given something a name, it’s helpful if others can use it! In olden days we’d give the computer a fixed IP number, and then use the <a href="http://en.wikipedia.org/wiki/Domain_Name_System"><span class="caps">DNS</span></a> to associate the computer’s name with that number. Today though it’s much more convenient to have the IP addresses assigned by <a href="http://en.wikipedia.org/wiki/Dhcp"><span class="caps">DHCP.</span></a> Obviously this makes it hard to put the machine into a <a href="http://en.wikipedia.org/wiki/Zone_file">zone file!</a></p>
<p>Today we can use <a href="http://en.wikipedia.org/wiki/Zeroconf">zeroconf</a> techniques to solve the problem. Apple have used this for ages, under the <a href="https://developer.apple.com/bonjour/">Bonjour</a> moniker, but happily it’s also available on Linux. In particular the avahi-daemon package lets us broadcast our name to the world.</p>
<p>Installing and configuring the package is easy; in fact, it’s a one-liner:</p>
<pre><code>$ sudo apt-get install avahi-daemon</code></pre>
<p>Then, from a suitably zeroconf enabled machine, you can find the name in the .local domain:</p>
<pre><code>$ ssh pi@breadpi.local
...
$ curl http://breadpi.local/~pi/
...</code></pre>
<p>Normally I connect from a Mac which understands .local: if you’re using a Linux box then I think you’ll need to install avahi on it too.</p>
<p>Either way, after a few trivial configuration changes it’s now easy to access Raspberry Pis remotely without digging in <span class="caps">DHCP</span>d logs to get their IP addresses.</p>
<h3>An unexpected bonus</h3>
<p>If you use ssh to connect to a number of different hosts whose IP addresses keep changing, then connecting by name is a particularly good policy.</p>
<p>As a security measure, ssh keeps track of the identity of servers to which it connects, and warns you if they change. However, if you store those identities by IP address, and the IP addresses are different today, ssh will rightly complain that, say, server 192.168.1.34 isn’t the machine we spoke to yesterday.</p>
<p>This problem just disappears if you have a unique name for each box. No more spurious messages like this:</p>
<pre><code>@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! \ ...</code></pre>
<h2>The mechanics</h2>
<p><em>Caveat: I’m a bit hazy about how this works, so you should read the following with more skepticism than normal.</em></p>
<p>Although zeroconf will do more, we’re only using <a href="http://en.wikipedia.org/wiki/MDNS" title="multicast DNS">mDNS</a> here. This runs alongside the normal <span class="caps">DNS, </span>and so <span class="caps">DNS</span>-only tools won’t work. For instance:</p>
<pre><code>$ host breadpi.local
Host breadpi.local not found: 3(NXDOMAIN)
$ ssh pi@breadpi.local
Linux breadpi 3.6.11+ #371 PREEMPT Thu Feb 7 16:31:35 GMT 2013 armv6l
...</code></pre>
<p> The only minor issue is that resolving the name in .local takes noticeably longer than other names: several seconds. Presumably the name resolution waits for normal <span class="caps">DNS </span>to fail before looking in .local.</p>
<p>On the Mac you can look at the <span class="caps">DNS </span>setup with scutil, and it seems vaguely consistent with this idea:</p>
<pre><code>$ scutil --dns
DNS configuration
resolver #1
nameserver[0] : 8.8.8.8
nameserver[1] : 192.168.1.19
if_index : 8 (en2)
reach : Reachable
resolver #2
domain : local
options : mdns
timeout : 5
order : 300000
resolver #3
... </code></pre>4203D228-6593-11E3-8610-C8F7B106D3EB2013-12-15T14:14:22:22Z2013-12-15T23:00:35:35ZRaspberry Pi Streaming videoMartin Oldfield<p>Notes on streaming video from a Raspberry Pi camera to MacOS X and iOS devices. </p><h2>Introduction</h2>
<p><a href="http://www.raspberrypi.org">Raspberry Pis</a> and their <a href="http://www.raspberrypi.org/camera">camera modules</a> are a popular and cheap way to stream video on the Internet. Anything video-related always seems to be messy though, and although there are lots of articles discussing it online, I found it far from trivial to set up. Perhaps the most useful article <a href="http://raspberrypi.stackexchange.com/questions/7446/how-can-i-stream-h264-video-from-raspberry-camera-module-via-apache-nginx-for-re">was on StackExchange</a>.</p>
<p>One reason that the task is difficult is simply that there isn’t an approach which is best for everyone. So let’s begin by deciding what we want, and what we don’t care about.</p>
<ul>
<li>I want to stream video from the Raspberry Pi’s camera to iOS and MacOS clients.</li>
<li>I don’t care too much about the latency as long as it’s less than about 30s.</li>
<li>I don’t want to install any software on the clients, and use as much ‘standard’ software on the Pi as possible.</li>
</ul>
<h2>Grabbing video with <code>raspivid</code></h2>
<p>Although there are many recipes for online video streaming, they all use <code>raspivid</code> to get data from the camera and save it in <a href="http://en.wikipedia.org/wiki/H.264"><span class="caps">H.264</span></a> format. <code>Raspivid</code> is a fine choice because it knows how to use the <a href="http://en.wikipedia.org/wiki/VideoCore">VideoCore 4</a> hardware in the Pi’s <span class="caps">GPU </span>to encode the video efficiently.</p>
<p>Getting the video into <span class="caps">H.264 </span>bodes well for displaying it on Apple hardware: both iOS and MacOS know how to decode it without any extra software.</p>
<p>Unsurprisingly <code>raspivid</code> boasts many options, and you’ll probably want to consult <a href="https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/raspicam/RaspiCamDocs.odt">the documentation</a> or even <a href="https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/raspicam/RaspiVid.c">the source</a> on github.</p>
<p>Although I don’t claim that it’s optimal, here’s the command I used:</p>
<pre><code>raspivid -n -ih -t 0 -ISO 800 -ex night -w 720 -h 405 -fps 25 -b 20000000 -o -</code></pre>
<p>Here’s a brief explanation:</p>
<dl>
<dt><code>-n</code></dt>
<dd>Disable preview.</dd>
</dl>
<dl>
<dt><code>-ih</code></dt>
<dd>Discussed below. For now, note that omitting this makes for frustrating debugging!</dd>
</dl>
<dl>
<dt><code>-ISO 800 -ex night</code></dt>
<dd>Try to get the best image in the dark.</dd>
</dl>
<dl>
<dt><code>-w 720 -h 405 -fps 25 -b 20000000</code></dt>
<dd>Specify the video parameters.</dd>
</dl>
<dl>
<dt><code>-o -</code></dt>
<dd>Send the video to <span class="caps">STDOUT.</span></dd>
</dl>
<h2>Preparing video with <code>ffmpeg</code></h2>
<p>Having got a source of <span class="caps">H.264 </span>video, we need to get it to the clients. There are any number of streaming solutions, which use special protocols to send a continuous stream of data to the viewer.</p>
<p>However, in the interests of simplicity, we’ll use <a href="http://tools.ietf.org/html/draft-pantos-http-live-streaming-12"><span class="caps">HTTP</span> Live Streaming.</a> Although this sounds grandiose, it’s about the simplest thing which might work:</p>
<ul>
<li>Cut the video stream into short clips.</li>
<li>Maintain a separate file which lists which clips to play.</li>
<li>Expose both the clips and the playlist via a normal <span class="caps">HTTP </span>server.</li>
</ul>
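<p>To make the playlist idea concrete, here’s a Python sketch of the sort of live .m3u8 file involved. The tag values are only illustrative (ffmpeg chooses its own), and the wrap-after-20 behaviour mirrors ffmpeg’s -segment_wrap option:</p>

```python
def live_playlist(first, count, wrap=20, prefix="/cam/segments/"):
    """Build a minimal live HLS playlist listing `count` one-second
    clips starting at sequence number `first`; segment file names
    wrap after `wrap` clips, as ffmpeg's -segment_wrap option does."""
    lines = ["#EXTM3U",
             "#EXT-X-VERSION:3",
             "#EXT-X-TARGETDURATION:1",
             "#EXT-X-MEDIA-SEQUENCE:%d" % first]
    for i in range(first, first + count):
        lines += ["#EXTINF:1.0,", "%s%03d.ts" % (prefix, i % wrap)]
    return "\n".join(lines) + "\n"
```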
<p>Happily <a href="http://ffmpeg.org"><code>ffmpeg</code></a> can do this for us:</p>
<pre><code>ffmpeg -y \
-loglevel panic \
-i - \
-c:v copy \
-map 0 \
-f ssegment \
-segment_time 1 \
-segment_format mpegts \
-segment_list "$base/stream.m3u8" \
-segment_list_size 10 \
-segment_wrap 20 \
-segment_list_flags +live \
-segment_list_type m3u8 \
-segment_list_entry_prefix /cam/segments/ \
"$base/segments/%03d.ts" </code></pre>
<p>In essence this takes the <span class="caps">H.264 </span>stream on <span class="caps">STDIN </span>and saves it in one-second clips to files in $base/segments. It also keeps a playlist for those segments in $base/stream.m3u8. For more details, see the <a href="http://ffmpeg.org/ffmpeg-formats.html#segment_002c-stream_005fsegment_002c-ssegment">documentation for <code>ffmpeg</code>.</a></p>
<p>Note that <code>ffmpeg</code> is clever enough to reuse files for the clips, so we won’t gradually fill the disk.</p>
<p>The biggest problem with this approach is latency. By working in discrete blocks each a second long, we might expect a few seconds of latency, but in practice 10–20s seems common. Perhaps different parameters would improve this.</p>
<p>Finally we’ll need some <span class="caps">HTML </span>to wrap things up:</p>
<pre><code><html>
<head>
<title>PiVid</title>
</head>
<body>
<video controls="controls" width="720" height="405" autoplay="autoplay" >
<source src="stream.m3u8" type="application/x-mpegURL" />
</video>
</body>
</html> </code></pre>
<h3><code>raspivid -ih</code></h3>
<p>You may recall the <code>-ih</code> option to <code>raspivid</code> which inserts <span class="caps">PPS </span>and <span class="caps">SPS </span>headers (picture and sequence parameter sets) on every I-frame. I don’t understand in detail what this means, but if you don’t specify it you’ll find that clients which connect to the webcam quickly work, whilst laggards won’t.</p>
<p>Debugging this problem can be fun, because it manifests itself as a system which works well in testing, but then fails when you reload the stream a bit later. Given that the window when it works depends on the clip length, it is also easy to think the problem lies there.</p>
<h3><code>ffmpeg</code> versions</h3>
<p>At time of writing, December 2013, it’s claimed that Raspbian ship an old version of <code>ffmpeg</code> which doesn’t support segmenting, so you’ll need to compile your own. This takes many hours, and runs out of memory on Pis with only 256MB of <span class="caps">RAM </span>(model A and version 1 model B).</p>
<h3><code>raspivid -segment</code></h3>
<p>Although <code>ffmpeg</code> can do all manner of video conversions, it’s clear that here it’s not doing very much. Perhaps <code>raspivid</code> will learn how to segment the video itself.</p>
<p>In fact, it’s well on the way! The <a href="https://github.com/raspberrypi/userland/commit/d49a9a537dd6a60fc0902732a26e3550e1f79d76">most recent commit</a> to the software appears to allow just that. As yet, though, there’s no support for generating the .m3u8 playlist. Still, if you’re reading this in 2014 or later, it might be worth checking before you spend ages compiling <code>ffmpeg</code>.</p>
<h2>Serving video</h2>
<p>The recipe above generates a handful of files:</p>
<pre><code>$ ls -lR
.:
total 68
-rw-r--r-- 1 pi pi 233 Dec 12 23:04 index.html
drwxr-xr-x 2 pi pi 57344 Dec 14 09:30 segments
-rw-r--r-- 1 pi pi 525 Dec 15 15:17 stream.m3u8
./segments:
total 12832
-rw-r--r-- 1 pi pi 410028 Dec 15 15:17 000.ts
-rw-r--r-- 1 pi pi 663264 Dec 15 15:16 001.ts
-rw-r--r-- 1 pi pi 664204 Dec 15 15:16 002.ts
...
-rw-r--r-- 1 pi pi 673040 Dec 15 15:17 019.ts</code></pre>
<p>To serve the video to clients, just let any old webserver see them. I used nginx, but I’d expect apache and lighttpd would work too. </p>09239804-3518-11E3-BAF5-7E163029149D2013-10-14T21:31:36:36Z2013-10-14T21:36:37:37ZPaste dispensingMartin Oldfield<p>Notes on dispensing solder paste </p><h2>Shopping</h2>
<p>A while ago I decided that if I wanted to build things with <span class="caps">SMD </span>components I should probably start using solder paste. I know that in production people get stencils made for their boards, but they don’t seem to be as readily available as cheap <span class="caps">PCB</span>s. So, I looked instead at paste dispensers: things which squirt a bit of paste onto the board.</p>
<h3>Solder dispenser</h3>
<p>Having read lots of random Internet postings about this, there seemed to be fairly clear consensus that a compressed-air driven dispenser was a good thing, particularly if it was controlled by a foot pedal. The basic idea is that when you push the pedal, a measured bit of paste is extruded.</p>
<p>The most common model seems to be the <a href="http://www.ebay.co.uk/itm/Solder-Paste-Glue-Dropper-Liquid-Auto-Dispenser-Controller-KLT-982A-/251278904396"><span class="caps">KLT</span>-982A,</a> which is available from many sources: I bought mine from a Chinese eBay shop.</p>
<h3>Compressor</h3>
<p>The dispenser needs a supply of compressed air to operate, and following random Internet comments I decided to buy a compressor designed for airbrushes. The <a href="http://www.amazon.co.uk/Airbrush-Compressor-Double-Action-Airbrushes/dp/B004XP7K9W"><span class="caps">AS186</span></a> got a good writeup, so I bought one, and it seems to work well. It’s quite noisy when compressing, but the compressor has a large storage tank so it’s usually silent even when you’re dispensing.</p>
<h3>Solder Paste</h3>
<p>I bought the paste in a 30cc syringe which holds 100g of paste. There’s a standard fastening on the end of the paste tube, to which the tube from the dispenser attaches.</p>
<p>At the other end of the syringe, you’ll need to attach a <a href="http://www.somersetsolders.com/product.php/488/222/stainless_steel_dispensing_needle___23_gauge__kds2312p_">dispensing needle</a> with a <a href="http://en.wikipedia.org/wiki/Luer_taper">Luer-Lock</a> fitting. eBay sells them very cheaply. Somewhat randomly, I picked <a href="http://en.wikipedia.org/wiki/Needle_gauge_comparison_chart">23 gauge.</a></p>
<h3>Connections and tubing</h3>
<p>I seem to remember that I needed to buy some tubing to connect the compressor to the dispenser, and at least one connector, but I’ve forgotten the details. </p>191AE100-3512-11E3-9678-A04D2F29149D2013-10-14T20:49:01:01Z2013-10-14T21:03:24:24ZA blast of hot airMartin Oldfield<p>Notes on soldering <span class="caps">SMD </span>parts with hot air. </p><h2>A simple recipe</h2>
<p>A while ago I bought an <a href="http://www.atten.eu/atten-858d-smd-rework-reflow-station.html">Atten 858D+</a> rework station from a seller on Amazon.</p>
<p>Here are a few notes on using it:</p>
<ul>
<li>I applied <a href="http://www.somersetsolders.com/product.php/392/218/leaded_solder_paste_syringe_qualitek_619d">Qualitek 619D paste</a> to the board with a compressed-air dispenser though a 23 gauge needle. This is a Sn62/Pb36/Ag2 no-clean (ROL0) paste, with 86% metal content.</li>
<li>I didn’t apply any extra flux: the only flux I’ve got on hand is in a pen and applying that before the paste stopped the paste sticking to the pads.</li>
<li>For 0805 parts I put a dab on each pad, for a <a href="http://www.analog.com/static/imported-files/packages/PKG_PDF/LFCSP(CP)/CP_16_17.pdf">16-pin <span class="caps">LFCSP</span></a> package, I put a sausage along the rows of pads, and a couple more in the centre tab.</li>
<li>I set the temperature on the 858D+ to 275°C, the air-flow to a medium setting and used the medium nozzle. In most cases the solder melted quite quickly and surface tension pulled the parts into place.</li>
<li>The surface tension pulled the <span class="caps">LFCSP </span>chip close to the board, squeezing out some of the solder into blobs on the side of the chip: they were easy to remove with braid.</li>
</ul>
<p>I’m not saying that this is the best way to proceed, merely that it worked for me.</p>
<h3>Failures</h3>
<p>I had a couple of problems:</p>
<ul>
<li>If the air was too fierce or too close to the part, or the nozzle too small, the part tended to blow away from the right location. Small deviations are fine: surface tension pulled the parts back; larger movements become a problem if the part moves to the next pad.</li>
<li>On occasion, the paste ‛popped’, sometimes blowing the part off the board. I think it might indicate that the paste has absorbed water, which then boiled, but I’m really not sure. </li>
</ul>7454FBF2-DF3C-11DF-B86B-1F9DF7D5620A2010-10-24T07:00:32:32Z2013-08-31T11:05:22:22ZImporting tasks into OmniFocusMartin Oldfield<p>A simple command line program for importing tasks described by a <span class="caps">YAML </span>document into OmniFocus. </p><h2>Motivation</h2>
<p><a href="http://www.omnigroup.com/products/omnifocus/">OmniFocus</a> is a wonderful task management application. It runs on the Mac, iPad and iPhone, and I simply wouldn't be without it.</p>
<p>However, somewhat inexplicably, it's hard to import a list of tasks generated elsewhere into OmniFocus. Happily, the Mac application has a rich AppleScript <span class="caps">API, </span>so in principle one could use this to solve the task. In practice though I know very little AppleScript, and when I've tried to write command line scripts with it before, it's always been a pain.</p>
<p>So, I'd rather use a mainstream language with AppleScript bindings. Sadly the Perl interface seems a bit flakey and unloved, perhaps because the cool guys seem to use Ruby's <a href="http://appscript.sourceforge.net/rb-appscript/index.html">rb_appscript</a> for this sort of thing. I've not written much Ruby, but how hard can it be?</p>
<h2>Discussion</h2>
<p>Happily there are a number of OmniFocus AppleScript examples floating around the web. <a href="http://andy.theschotts.net/omnifocus-applescript-integration/">Andy Schott's examples</a> were helpful, as was browsing the OmniFocus dictionary with the AppleScript Editor application (File | Open Dictionary). The dictionary is particularly handy for details about the properties supported by each task.</p>
<p>The program tries to do as little work as possible. It takes data from a <a href="http://www.yaml.org/"><span class="caps">YAML</span></a> file, treats them as simple task properties, and calls the relevant <span class="caps">API.</span> In most cases it's just a case of passing strings around, but there are three exceptions:</p>
<ul>
<li>Dates need special handling because AppleScript wants a date object not a string.</li>
<li>The project name should be passed as an object reference, not a string. To find the relevant project I do a global name search: this means that the project name should probably be unique!</li>
<li>Ditto for the context: it's found by a global name search in the same way.</li>
</ul>
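<p>The date conversion, at least, is just a parsing step. In Python terms (the tool itself is Ruby, so this is only a sketch of the idea) it amounts to:</p>

```python
from datetime import datetime

def parse_due(s):
    """AppleScript wants a real date object, not the ISO-style string
    which arrives in the YAML, so convert before handing it over."""
    return datetime.strptime(s, "%Y-%m-%d")
```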
<p>There's one other twist. Incoming fields beginning with an underscore aren't passed to the AppleScript <span class="caps">API, </span>rather they're understood as being 'internal' to the import process. Some of these are interpreted by the program:</p>
<ul>
<li>_no_dupes : If true, don't import the task if it already exists. Exists here just means that there's already a task with the same name in that project.</li>
</ul>
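<p>The underscore filtering is easy to sketch. Something like this Python fragment captures the idea (the program itself is Ruby, and the names here are purely illustrative):</p>

```python
# Sketch of the underscore convention: split incoming fields into
# task properties (sent on to OmniFocus) and '_internal' options
# which only control the import process itself.

def split_fields(raw):
    """Partition a task dict into API properties and _options."""
    props = {k: v for k, v in raw.items() if not k.startswith("_")}
    opts = {k: v for k, v in raw.items() if k.startswith("_")}
    return props, opts

task = {"name": "Blossom 3", "project": "Auto", "_no_dupes": 1}
props, opts = split_fields(task)
# props -> {"name": "Blossom 3", "project": "Auto"}
# opts  -> {"_no_dupes": 1}
```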
<p>Finally you can ask the program to dump a list of all the tasks it actually added.</p>
<h2>Sample data</h2>
<p>You'll probably generate the <span class="caps">YAML </span>with another program, but here's the sort of thing you should get:</p>
<pre><code>---
- context: Administriva
due date: 2010-12-24
name: Blossom 2
note: Cherry 2
project: Auto
- context: Administriva
due date: 2010-12-26
name: Blossom 3
note: Cherry 3
project: Auto
_no_dupes: 1</code></pre>
<h2>Practical Matters</h2>
<p>You can <a href="http://www.mjoldfield.com/atelier/2010/10/of-import-yaml-0.1.tar.gz">download the program,</a> but you'll then need to copy it to somewhere on your <span class="caps">PATH.</span></p>
<p>To run it, you probably want something like this:</p>
<pre><code>% of-import-yaml --help
...
% of-import-yaml tasks.yml
...
% of-import-yaml --output YAML tasks.yml</code></pre>
<h2>Internals</h2>
<p>Internally the code is pretty simple. Simplified slightly in the interests of clarity, the basic core is shown below. Feel free to steal it for your own applications.</p>
<pre><code>...
of = app('OmniFocus')
dd = of.default_document
...
add_task(dd, task)
...

def add_task(dd, props)
  proj_name = props["project"]
  proj = dd.flattened_tasks[proj_name]

  ctx_name = props["context"]
  ctx = dd.flattened_contexts[ctx_name]

  tprops = props.inject({}) do |h, (k, v)|
    h[:"#{k}"] = v
    h
  end

  tprops.delete(:project)
  tprops[:context] = ctx

  t = dd.make(:new => :inbox_task, :with_properties => tprops)
  t.assigned_container.set(proj)

  return true
end</code></pre>
<h2>Disclaimer</h2>
<p>I've used this program myself and it appears to work, but you should be aware that I know very little Ruby and even less AppleScript.</p>
<p>Ultimately, the program sends OmniFocus a series of commands asking it to modify your task lists, so it's perfectly possible that things will get trashed!</p>
<p>You should probably backup your OmniFocus data before playing with this. </p>576380E8-DC52-11E2-BA4F-C5FE03F208E02013-06-23T22:10:15:15Z2013-06-23T23:23:54:54ZA DIY Vacuum Pickup ToolMartin Oldfield<p>A cheap <span class="caps">DIY </span>vacuum pickup tool, helpful for <span class="caps">SMD </span>assembly. </p><h2>Rationale</h2>
<p><span class="caps">SMD </span>components are small and can be tricky to manipulate. Apparently a small vacuum pump attached to a nozzle is helpful, so I thought I’d build one. This isn’t a new idea: just <a href="https://www.google.com/search?client=safari&rls=en&q=diy+vacuum+pickup">look on the Internet.</a></p>
<p>The basic idea is to make something a bit like a small vacuum cleaner, but one where the nozzle is so narrow that small parts get stuck on the end of it rather than disappearing into it.</p>
<h3>Vacuum Pen</h3>
<p>You can buy cheap (~£2) manual vacuum pickups on eBay, for example the <a href="http://www.aoyue.com/en/ArticleShow.asp?ArticleID=351">Aoyue 939</a> and clones. They have the requisite nozzle, but instead of a vacuum pump they have a small rubber balloon inside. To pick up a part, squeeze the balloon with the button to expel the air, place the part over the end of the nozzle, then release the button. The balloon will expand, sucking the component to the tip.</p>
<p>It’s all a bit hit-and-miss, so I intended to remove the balloon, attach a vacuum pump, and control the whole thing by leaving an air hole in the side. To get vacuum I’d just cover the hole.</p>
<p>In practice though it was hard to get the hole in a comfortable place, so I used the nozzle but discarded both the balloon and barrel. Instead I used a short length of black polyurethane tube (10mm outside diameter, 6.5mm inside) which was a snug, airtight, friction fit on the nozzle.</p>
<p>I melted a ~3mm hole in the side, abusing a soldering iron, and glued the tube to the vacuum pump in the end with hot melt glue.</p>
<p><img src="vp-pen.jpg" alt="" class="img_border" /></p>
<p>You could probably make your own nozzle by bending a dispensing needle, and that might be better: 0603 parts are small enough to fit easily in the nozzle from the 939. It’s a <a href="http://en.wikipedia.org/wiki/Needle_gauge">16-gauge</a> nozzle, so perhaps you could get a smaller one.</p>
<h3>Vacuum Pump</h3>
<p>I bought a cheap (~£10) aquarium pump from eBay, the <a href="http://www.hidom-china.com/ga/en/products/products_detail.asp?id=76">HiDOM HD-603.</a> It’s designed to blow air, but by reversing the non-return valves inside it becomes a vacuum pump. This is a <a href="https://www.google.com/search?client=safari&rls=en&q=aquarium+pump+vacuum+conversion">well-trodden path.</a></p>
<p>Reversing the valves is easy and basically follows these <a href="http://fillwithcoolblogname.blogspot.co.uk/2011/07/turn-aquarium-pump-into-vacuum-cheapie.html">notes from The Danger Zone.</a> The key steps are:</p>
<ol>
<li>Unscrew the base, and take the bellows apart.</li>
<li>Rotate the discs holding the non-return valves through 180°, cutting away a locating pin so they fit.</li>
<li>Put it back together.</li>
</ol>
<p><img src="vp-pump.jpg" alt="" class="img_border" /></p>
<p>The HD-603 boasts a couple of vacuum ports: I plugged one of them with a spare bit of tube filled with hot-melt glue.</p>
<p>Although not silent, the pump is really rather quiet.</p>
<h3>Tube</h3>
<p>It’s obviously good if the tubing is light and flexible. Clear <span class="caps">PVC </span>tube with an inner diameter of 4mm and an outer diameter of 6mm seems to fit the bill, and eBay supplied a metre of it for less than two pounds.</p>
<p>The tube is a good friction fit on the pump, and doesn’t leave too big a gap in the pen barrel.</p>
<h2>In conclusion</h2>
<p>It’s easy and cheap to build one of these! I spent about £16 and you could probably reduce that a bit by tweaking the pen. A smaller pump might work too: I just don't know. </p>C4C541F4-9B2E-11E2-9DF3-811D08EBF0BD2013-04-02T00:46:28:28Z2013-06-09T23:26:13:13ZADT74x0 temperature sensingMartin Oldfield<p>Analog Devices make some nice I²C temperature sensors, including the <span class="caps">ADT7420 </span>which is accurate to 0.25°C. Here’s a user space client for them. </p><p>Analog Devices make a couple of I²C temperature sensors with a precision of about 0.008°C, though rather worse accuracy:</p>
<ul>
<li>the <a href="http://www.analog.com/en/mems-sensors/digital-temperature-sensors/adt7410/products/product.html"><span class="caps">ADT7410</span></a> accurate to ±0.5°C and available in an 8-pin <span class="caps">SOIC</span>;</li>
<li>the newer <a href="http://www.analog.com/en/mems-sensors/digital-temperature-sensors/adt7420/products/product.html"><span class="caps">ADT7420</span></a> accurate to ±0.25°C and available in a 16-pin <span class="caps">LFCSP.</span></li>
</ul>
<p>The latter’s obviously a nicer part, but the former’s easier to buy and play with. The <span class="caps">LFCSP </span>package is a pain to solder by hand, but you can mount it dead-bug style.</p>
<p><img src="bug-adt7410.jpg" alt="ADT7410" class="img_border_2up" /> <img src="bug-adt7420.jpg" alt="ADT7420" class="img_border_2up" /></p>
<h2>Hardware</h2>
<p>Happily the hardware here is trivially simple: we simply need to connect the chip to an I²C bus. We don’t even need to worry too much about voltage levels: both chips are happy with 2.7–5.5V supplies.</p>
<p><img src="adt-schem.svg" alt="" class="img_noborder_small" /></p>
<p>Besides the four wires required for power and the I²C bus, the only other pins we have to consider are A0 and <span class="caps">A1, </span>which set the sensor’s I²C address: 0x48 to 0x4b.</p>
<p>There are also a couple of outputs which signal outlandish temperatures: I’m ignoring those here.</p>
<p>Actually I am skating over important details. The I²C bus was originally designed for short runs (I²C is a contraction of Inter IC), so if you want to put the sensor on the end of a long wire you really ought to think carefully about it.</p>
<p>If you want to do the job properly, it’s worth reading the <a href="http://www.nxp.com/documents/user_manual/UM10204.pdf">specification</a> from Philips (now <span class="caps">NXP</span>). Both the power and signal connections need thought, and if you just plonk the sensors on a bit of ribbon cable, it’s effectively an <a href="http://www.ni.com/white-paper/3854/en">unterminated transmission line</a> which implies we’ll see reflections.</p>
<p>However, in practice, I’ve found that a metre of ribbon cable works reliably without taking any precautions. In fact, I think the main problem with my <em>laissez-faire</em> attitude is that there’s noise on the supply rails, which presumably adds noise to the readings. Of course if the application were more important, I’d take more care.</p>
<h2>Software</h2>
<p>I wanted some software to read the temperatures from a Raspberry Pi. In principle, there’s a module for the Linux kernel which talks to the <span class="caps">ADT74</span>x0, but it’s not included in the stock Raspbian distribution and I think life’s too short to keep compiling kernel modules.</p>
<p>Instead I wrote a trivial little user space program which talks to the I²C device in /dev. You can grab the code from the <a href="https://github.com/mjoldfield/adt74x0">adt74x0 repository on GitHub</a>.</p>
<p>The code is easy to compile:</p>
<pre><code>$ gcc -O9 -std=c99 adt74x0.c -o adt74x0</code></pre>
<p>It’s also easy to run:</p>
<pre><code>$ ./adt74x0
# Scanning /dev/i2c-0 for ADT74x0...
0x4b 20.71094C</code></pre>
<p> The code simply returns the address (here 0x4b) and temperature of all the <span class="caps">ADT74</span>x0 devices it finds on the bus.</p>
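<p>For reference, the register-to-temperature conversion is straightforward. According to the datasheets the chips power up in 13-bit mode with an LSB of 0.0625°C (16-bit mode has an LSB of 1/128°C), and the value is two's complement. A sketch in Python rather than the client's C, with <code>adt74x0_temp</code> an illustrative name of my own:</p>

```python
def adt74x0_temp(msb, lsb, sixteen_bit=False):
    """Convert the two ADT74x0 temperature-register bytes to Celsius.

    Default is the power-on 13-bit mode (0.0625 degC per LSB);
    pass sixteen_bit=True for 16-bit mode (1/128 degC per LSB).
    Both modes use two's-complement values.
    """
    raw = (msb << 8) | lsb
    if sixteen_bit:
        if raw >= 0x8000:          # sign-extend 16 bits
            raw -= 0x10000
        return raw / 128.0
    raw >>= 3                      # the 13-bit value sits in the top bits
    if raw >= 0x1000:              # sign-extend 13 bits
        raw -= 0x2000
    return raw / 16.0

print(adt74x0_temp(0x0C, 0x80))    # 0x0C80 -> 25.0 degC
```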
<p>The client assumes that you’re using /dev/i2c-0. If not, you’ll have to tell it where to look e.g.:</p>
<pre><code>$ ./adt74x0 /dev/i2c-1
# Scanning /dev/i2c-1 for ADT74x0...
0x4b 20.71094C</code></pre>
<p>You’ll need permission to access the device: popular ways to get this include using sudo or adding the user to the i2c group.</p>
<h2>I²C buses I have known</h2>
<p>It transpires that almost all PCs have an external I²C bus on their monitor port. Most monitors send configuration data to the PC via I²C, and accordingly almost all <span class="caps">VGA </span>and <span class="caps">DVI </span>sockets sport an I²C bus. So, you could connect a sensor to this port, run the software above, and measure the temperature.</p>
<p><img src="adt-eee.jpg" alt="" class="img_border" /></p>
<p>Obviously this is easier if there’s no monitor attached to the PC: ironically when I’ve wanted to do this in the past, it was to monitor the state of headless servers.</p>
<h3>The Raspberry Pi</h3>
<p>Sadly there’s a problem with the Raspberry Pi’s I²C device: it doesn’t like talking to <span class="caps">ADT74</span>x0s. I don’t think it’s just me, because it appears to have <a href="http://www.raspberrypi.org/phpBB3/viewtopic.php?f=44&t=15840">affected other people too.</a></p>
<p>In practice I found that I could read the temperature reliably from a single sensor on a short cable, but failed to read anything else e.g. the device's ID code. Even the temperature reading failed when more devices were added and the bus got longer.</p>
<p>Happily there’s an easy solution: instead of talking to the Linux I²C device, we can use Mike McCauley’s nice <a href="http://www.airspayce.com/mikem/bcm2835/">bcm2835 library.</a></p>
<p>Note: if you’re using the I²C devices in /dev you need to stop blacklisting the i2c-dev module by editing /etc/modules. Conversely if you want to use Mike’s library, you’ll need to blacklist i2c-dev so that the ports are still free.</p>
<p>There’s a different client program, but it’s on <a href="https://github.com/mjoldfield/adt74x0">github</a> and easy to compile:</p>
<pre><code>$ gcc -O9 -std=c99 adt74x0b.c -lbcm2835 -o adt74x0b</code></pre>
<p>It’s a drop-in replacement for the other version, but needs access to mmap the device:</p>
<pre><code>$ sudo ./adt74x0b
0x4a 18.69531C
0x4b 20.59375C</code></pre>
<p>The program also slows the I²C clock down to 10kHz (from 100kHz) to accommodate long cables and dodgy terminators.</p>
<p><img src="adt-rpi.jpg" alt="" class="img_border" /></p>
<h3>Connections</h3>
<table class="spaced" cellspacing="0"><tr><th>Signal</th><th>Raspberry Pi <a href="http://elinux.org/RPi_Low-level_peripherals#General_Purpose_Input.2FOutput_.28GPIO.29"><span class="caps">GPIO </span>pin</a></th><th><a href="http://en.wikipedia.org/wiki/VGA_connector"><span class="caps">VGA </span>pin</a></th></tr><tr><td align="center"><span class="caps">VDD</span></td><td align="center">1 (3.3V)</td><td align="center">9 (5V)</td></tr><tr><td align="center"><span class="caps">GND</span></td><td align="center">6</td><td align="center">10</td></tr><tr><td align="center"><span class="caps">SDA</span></td><td align="center">3</td><td align="center">12</td></tr><tr><td align="center"><span class="caps">SCL</span></td><td align="center">5</td><td align="center">15</td></tr></table>
<h2>Results</h2>
<p>I’ve had a <span class="caps">ADT7410 </span>logging the temperature at home every minute for a while now. Here’s a typical result:</p>
<p><img src="stemp.svg" alt="" class="img_noborder" /> </p>41227434-9AB1-11E2-89DE-882B07EBF0BD2013-04-01T09:48:02:02Z2013-06-05T18:13:43:43ZLinux WiFi configurationMartin Oldfield<p><em>Aides-mémoire</em> for configuring WiFi on a Raspberry Pi. </p><h2>Software</h2>
<p>On most computers with a <span class="caps">GUI, </span>configuring WiFi is easy. However, on a headless Linux box—often a Raspberry Pi in my case—it often strikes me as a painfully baroque task.</p>
<p>Part of the problem is simply that Linux’s WiFi stack supports a very wide range of different configurations, but I just want to connect to a particular WiFi network with a given password. As is often the case, it’s actually very easy to do this, once you’ve worked out what you don’t need.</p>
<p>Only two files matter: /etc/network/interfaces and /etc/wpa_supplicant/wpa_supplicant.conf.</p>
<h3>/etc/network/interfaces</h3>
<p>Add the definition for wlan0 so the file looks like this:</p>
<pre><code>auto lo
iface lo inet loopback
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp </code></pre>
<h3>/etc/wpa_supplicant/wpa_supplicant.conf</h3>
<p>This splendidly named file contains the WiFi configuration data. I’m happy to rely on the defaults, but obviously I need to specify the name (SSID) and password (PSK) for the WiFi network I want to use. My file looks like this:</p>
<pre><code>network={
ssid="NETWORK NAME"
#psk="Banana"
psk=757474b54c7f338f5cfb2db98735a36bcf893c0ec2193f117d53884d84510bdc
} </code></pre>
<p>You might reasonably guess that I’m supplying the <span class="caps">PSK </span>(pre-shared key) twice, and you’d be right. This is simply because the documentation claims it makes things a bit faster.</p>
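<p>Incidentally, the hex string is not an encrypted password: WPA derives the 256-bit key from the passphrase and the SSID using PBKDF2 (HMAC-SHA1, 4096 iterations, 32 bytes, SSID as the salt). If you're curious, you can reproduce what wpa_passphrase computes with a few lines of Python; the values below are illustrative rather than my real network's:</p>

```python
import hashlib

def wpa_psk(ssid, passphrase):
    """Derive the 256-bit WPA pre-shared key from an SSID and a
    passphrase, as wpa_passphrase does: PBKDF2-HMAC-SHA1 with
    4096 iterations and a 32-byte output, salted with the SSID."""
    raw = hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                              ssid.encode(), 4096, 32)
    return raw.hex()

print(wpa_psk("NETWORK NAME", "BananaBanana"))  # 64 hex digits
```

<p>Pasting the precomputed hex into wpa_supplicant.conf as psk=… saves the supplicant from repeating this fairly slow derivation every time it connects.</p>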
<p>Generating the file is easy:</p>
<pre><code>$ wpa_passphrase "NETWORK NAME" BananaBanana |
sudo tee /etc/wpa_supplicant/wpa_supplicant.conf</code></pre>
<h2>Raspberry Pi Hardware</h2>
<p>Sadly the Raspberry Pi doesn’t come with Wi-Fi. However it does have <span class="caps">USB </span>ports, and <span class="caps">USB</span> Wi-Fi adaptors are cheap. In the <span class="caps">UK,</span> Amazon sell the <a href="http://www.amazon.co.uk/Edimax-EW-7811UN-Wireless-802-11b-150Mbps/dp/B003MTTJOY/">Edimax EW-7811UN</a> for less than ten pounds. It seems to work reliably if you just plug it into the Raspberry Pi’s <span class="caps">USB </span>socket: no hub required.</p>
<p>Six months ago, we had to fettle the kernel drivers to make all this work, but in the 2013-02-09 release of Raspbian it all just works. </p>E2528FC4-757A-11DF-95C7-8A0CE410ABC52010-06-11T17:00:45:45Z2013-06-05T18:13:43:43ZFinding Geocaches in EnglishMartin Oldfield<p>A short program to find caches which are described in English. Very handy when caching abroad! </p><h2>Motivation</h2>
<p><a href="http://en.wikipedia.org/wiki/Geocaching">Geocaching</a> is all about finding things. Sometimes they are hard to find because they've been hidden cleverly whilst at other times one has to solve a puzzle before starting to look. I'm a big fan of both sorts of caches, but sometimes there's a third problem which I find less enjoyable to tackle: the geocache description might be in a language I don't understand.</p>
<h2>Resolution</h2>
<p>Groundspeak offer a Pocket Query service to premium subscribers which lets them specify a search e.g. caches near the centre of Prague. A list of caches meeting these criteria is then encoded in <span class="caps">GPX </span>(an <span class="caps">XML </span>application). This file can be viewed with e.g. Google Earth, and a caching extravaganza planned.</p>
<p>However it's a bit tricky to plan the trip if many of the caches are unintelligible to me. It would be much more convenient if I could process the <span class="caps">GPX </span>file automatically, selecting only those caches which have a reasonable amount of English in their description.</p>
<p>Happily, and not entirely coincidentally, I recently wrote about a <a href="http://www.mjoldfield.com/atelier/2010/06/toy-langmod.html">toy language model</a> which is fine for solving this sort of problem.</p>
<h2>Software</h2>
<p>You can <a href="http://www.mjoldfield.com/atelier/2010/06/glf/gc-lang-filter_0.1.tar.gz">download some Perl</a> which does all this, at least if the only languages in play are English, French, German, Spanish, Italian, and Czech.</p>
<p>For example, here's a demo:</p>
<pre><code>$ wget http://www.mjoldfield.com/atelier/2010/06/glf/gc-lang-filter_0.1.tar.gz
$ tar xzvf gc-lang-filter_0.1.tar.gz
$ cd gc-lang-filter-0.1
$ ./gc-lang-filter Prague.gpx
$ # only on Mac OS
$ open -a 'Google Earth' Prague-en.gpx</code></pre>
<h2>Results</h2>
<p>Starting from 1,000 caches in Prague and selecting those with more than a quarter of the description in English produced a list of 261—changing the threshold changes this a bit but not too much.</p>
<p>All 261 caches seem to be worth investigating, though of course whether I actually find any remains to be seen! </p>BC0ECFEE-7412-11DF-A429-B479E310ABC52010-06-09T22:02:19:19Z2013-06-05T18:13:43:43ZA Toy Language ModelMartin Oldfield<p>I wanted a toy language model so that I could write software which would identify the language of text.</p>
<p>The motivation for this is simple: given a list of geocaches, which ones have descriptions which I can understand! </p><h2>General Theory</h2>
<p>It would be nice to have some software which recognized languages. For example, given some greetings, I want to be able to produce something like this:</p>
<table class="std center" style="margin-left:auto;margin-right:auto" cellspacing="0"><tr><th align="center" rowspan="2">Greeting</th><th align="center" colspan="4">Language</th></tr><tr><th align="center">English</th><th align="center">German</th><th>Czech</th><th>French</th></tr><tr><td align="left">Good morning</td><td align="right" style="color:green">99.8%</td><td align="right">0.1%</td><td align="right">0.1%</td><td align="right">0.0%</td></tr><tr><td align="left">Guten Morgen</td><td align="right">0.4%</td><td align="right" style="color:green">98.2%</td><td align="right">0.2%</td><td align="right">1.2%</td></tr><tr><td align="left">Dobre jitro</td><td align="right">0.3%</td><td align="right">0.0%</td><td align="right" style="color:green">99.5%</td><td align="right">0.2%</td></tr><tr><td align="left">Bonjour</td><td align="right">11.8%</td><td align="right">0.0%</td><td align="right">7.4%</td><td align="right" style="color:green">80.7%</td></tr></table>
<p>The actual motivation for this is not to parse greetings, but rather to see which geocaches in places like Prague have English descriptions. Accordingly I can't just compile a list of common greetings and map them onto the relevant languages.</p>
<p>In fact I'd like to have a model of language: that is, something which, given a string of characters, will return the probability that the string comes from that language.</p>
<p>Moreover, I'd like to have multiple models, each corresponding to a different language: then I can ask 'How likely are these data, given that they're in French?', or '... in English?', and so on.</p>
<p>Formally, the language model tells us the likelihood of the data given our assumptions i.e. p(D|L). From this Bayes' theorem lets us calculate the probability of each language given the data:</p>
<p class="center" style="text-align:center">p(L|D) = p(D|L) p(L) / p(D)</p>
<ul>
<li>p(D) is effectively a normalizing constant here.</li>
<li>p(L) encodes our prior expectation of each language: if we think all languages are equally probable <i>a priori</i>, then it's another constant.</li>
</ul>
<p>Accordingly,</p>
<p class="center" style="text-align:center">p(L|D) ∝ p(D|L).</p>
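<p>In other words, to fill in a table like the one at the top of this article we just compute p(D|L) for each language and normalize. A small Python sketch, with made-up log-likelihoods and a uniform prior:</p>

```python
import math

def posteriors(loglikes):
    """Turn per-language log-likelihoods log p(D|L) into posterior
    probabilities p(L|D), assuming a uniform prior over languages."""
    m = max(loglikes.values())                       # shift for stability
    w = {l: math.exp(v - m) for l, v in loglikes.items()}
    z = sum(w.values())                              # the p(D) normalizer
    return {l: v / z for l, v in w.items()}

print(posteriors({"en": -20.1, "fr": -26.3, "de": -31.8}))
```

<p>Working with logs here anticipates a practical point discussed below: the raw likelihoods of long strings are tiny, so it is safer to exponentiate only at the very end.</p>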
<h2>A toy language model</h2>
<p>Recall that the language model must be able to award a number to each and every string saying how likely it is. For example, in English we'd expect 'the cat sat on the mat' to get a higher score than 'dfkdsfnksd'.</p>
<p>Happily it isn't necessary to actually understand a language to write a model for it. Rather we can just look at lots of English text and infer a model from them: we can think of this is teaching the model about English by getting it to study a corpus of English sentences.</p>
<p>It's important to realize that there isn't one 'right' model, and that some models might be very sophisticated. For example a clever model might award a higher score to 'the cat sat on the mat' than 'the cat sat on the dog'.</p>
<p>Our resources are much more modest, so we'll do something very much simpler. In fact I'm interested in the simplest model which has a reasonable chance of working.</p>
<p>Two notational points:</p>
<ul>
<li>In the interests of brevity I'll drop the 'given L' bit from the probability distributions in this section i.e. when I write p(D|X) I mean p(D|L,X).</li>
<li>D are our data which correspond to a string of characters. We'll write</li>
</ul>
<p class="center" style="text-align:center">D = { d<sub>1</sub>, d<sub>2</sub>, d<sub>3</sub>, ... }.</p>
<p>We can always expand joint probability distributions: p(A,B) = p(A) p(B|A). Applying this twice:</p>
<p class="center" style="text-align:center">p(d<sub>1</sub>, d<sub>2</sub>, d<sub>3</sub>, ...) = p(d<sub>1</sub>) p(d<sub>2</sub>|d<sub>1</sub>) p(d<sub>3</sub>, ...|d<sub>1</sub>,d<sub>2</sub>)</p>
<p>What does this mean? Well, the first term p(d<sub>1</sub>) is just the probability of the first letter. We might build our model by looking at the start of lots of English texts and counting how often we saw each letter.</p>
<p>The next term p(d<sub>2</sub>|d<sub>1</sub>) is more complicated: it's the probability of the second letter given a particular first letter. For example, if the first letter is 'T' then it's quite likely to be followed by 'H'. Again this is something we could estimate by looking at lots of texts.</p>
<p>We could go on like this, but at some point we'd find ourselves asking what we'd expect to see after <span class="caps">T,H,E,C,A,T,S,A,T,O,N,T,H,E,M,A.</span> This approach just isn't feasible: quite apart from the computational burden it would be impossible to find data to get reasonably representative samples for strings this long.</p>
<p>Rather than regarding the string of characters blindly, we have to start breaking it up.</p>
<p>The simplest way to do this is to just ignore the conditioning on previous letters:</p>
<p class="center" style="text-align:center">p(d<sub>2</sub>|d<sub>1</sub>) ≅ p(d<sub>2</sub>).</p>
<p>This is equivalent to making the assumption that all letters are independent: the probability of the string is just the product of the probabilities of each letter in the string.</p>
<p class="center" style="text-align:center">p(D) = <big>∏</big><sub>i</sub> p(d<sub>i</sub>).</p>
<p>Now our training process is much easier: once we've seen enough text to estimate the probabilities of each character there's little point in doing much more.</p>
<p>In practice, this model doesn't do too badly, but it relies rather heavily on different languages having different letter distributions. In some very unscientific tests, this failed to discriminate between short strings of English and French.</p>
<p>Perhaps, we've just gone a bit too far: instead of completely ignoring the context of each character, let's keep the single previous character.</p>
<p>Formally, expand the joint probability distribution again but this time go a step further:</p>
<p class="center" style="text-align:center">p(d<sub>1</sub>, d<sub>2</sub>, d<sub>3</sub>, d<sub>4</sub>, ...) = p(d<sub>1</sub>) p(d<sub>2</sub>|d<sub>1</sub>) p(d<sub>3</sub>|d<sub>1</sub>,d<sub>2</sub>) p(d<sub>4</sub>, ...|d<sub>1</sub>,d<sub>2</sub>,d<sub>3</sub>)</p>
<p>Now approximate:</p>
<p class="center" style="text-align:center">p(d<sub>3</sub>|d<sub>2</sub>, d<sub>1</sub>) ≅ p(d<sub>3</sub>|d<sub>2</sub>),<br />
p(d<sub>4</sub>|d<sub>1</sub>,d<sub>2</sub>,d<sub>3</sub>) ≅ p(d<sub>4</sub>|d<sub>3</sub>),<br />
...</p>
<p>and thus,</p>
<p class="center" style="text-align:center">p(D) = p(d<sub>1</sub>) <big>∏</big><sub>i > 1</sub> p(d<sub>i</sub>|d<sub>i-1</sub>).</p>
<p>This is a simple model for language which uses the probability of seeing each pair of letters. Normally we'd call this a bi-gram model. We'll obviously need a lot more data to train it properly than we needed for the uni-gram version above, but it should still be tractable.</p>
<h2>Technical details</h2>
<h3>Spaces</h3>
<p>It's obvious that spaces in prose are helpful and contain information: justtrytoreadthisforexample! However, deciding how to encode this information is more bother than I want to think about so I'll simply throw away any non-alphabetic characters from my strings.</p>
<p>Similarly I'll ignore case of characters too.</p>
<p>Obviously it's important to do this both when estimating the probabilities to build the model, and when scoring a particular string with the model.</p>
<h3>Unseen tokens</h3>
<p>Key to the model are the probabilities p(d<sub>i</sub>|d<sub>i-1</sub>), which we plan to estimate by just looking at some training corpus. Of course it's perfectly possible that when we're working with real data we'll encounter a situation which wasn't in the training data. For example, we might train it on English prose then present it with a bit of Chinese.</p>
<p>One possibility is to simply assign a zero probability to unseen letter pairs, but that seems a trifle harsh. Instead it's probably better to just assign some low probability, so the possibility is deprecated rather than ruled-out completely.</p>
<p>To do the job properly we should assign a probability to all possible bigrams, but there are an awful lot of those: Unicode defines about 100,000 characters so there are about 10<sup>10</sup> bigrams. If we assumed each unseen bigram was equally likely then it would thus have a probability of somewhat less than 10<sup>-10</sup>.</p>
<p>By contrast in my tiny English training set of about 3 million letter pairs, only 642 of the possible 676 bigrams were seen. Dealing with unseen combinations properly should surely give these a significantly higher probability than 10<sup>-10</sup>.</p>
<p>In practice, if we've seen N letter pairs then assigning a probability of (1/N) seems to work well enough. In principle we should renormalize the distribution after we've extended it i.e. rescale so that the probabilities sum to one again, but in practice I don't bother—after all expanding the distribution on the fly is a little bit dodgy.</p>
<p>Another approach we might use is to synthesize the bigram probabilities from the unigram statistics: again I'm not pursuing that here.</p>
<h3>logprob</h3>
<p>Multiplying together lots of numbers, some of which might be close to zero, is always fraught with error. In practice then, it's much easier to work with logs, and just add them up.</p>
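<p>Putting these pieces together (bigram counts, a floor of roughly 1/N for unseen pairs, and sums of log probabilities), the whole model fits in a few lines. Here is a sketch in Python rather than the Perl of the distributed code; for simplicity it keeps only the letters a-z, where the real model keeps all alphabetic characters:</p>

```python
import math
import re
from collections import Counter

def clean(text):
    # throw away non-alphabetic characters and ignore case,
    # as described above (here crudely restricted to a-z)
    return re.sub(r"[^a-z]", "", text.lower())

class BigramLM:
    def __init__(self, corpus):
        s = clean(corpus)
        self.pairs = Counter(s[i:i + 2] for i in range(len(s) - 1))
        self.n = sum(self.pairs.values())

    def logprob(self, text):
        s = clean(text)
        lp = 0.0
        for i in range(len(s) - 1):
            c = self.pairs.get(s[i:i + 2], 1)   # unseen pairs get ~1/N
            lp += math.log(c / self.n)
        return lp

# toy 'corpora'; real training data would be whole books
en = BigramLM("the cat sat on the mat and then the man ran to the tin can")
de = BigramLM("ich bin sich nicht durch auch noch schon mich dich doch")

# the model awarding the higher log-probability wins
print(en.logprob("the man and the cat") > de.logprob("the man and the cat"))
```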
<h2>Sample data</h2>
<p>Happily the Internet is full of good prose in different languages. <a href="http://www.gutenberg.org/catalog/">Project Gutenberg</a> is a fine source.</p>
<p>Although there aren't many texts on Gutenberg in, say, Czech, these less common languages often have a significantly different alphabet which compensates.</p>
<h2>Software</h2>
<p>It's easy to write some software to implement this, but I've done it for you. <a href="http://www.mjoldfield.com/atelier/2010/06/toy2l/toy2l_0.1.tar.gz">This tarball</a> contains both a pre-trained model (MultiLM.pm) and the software to generate a new model from your own training data (compile).</p>
<p>Further documentation is available in those files:</p>
<pre><code>$ wget http://www.mjoldfield.com/atelier/2010/06/toy2l/toy2l_0.1.tar.gz
$ tar xzvf toy2l_0.1.tar.gz
$ cd toy2l-0.1
$ ./hello
$ perldoc MultiLM.pm
$ perldoc compile</code></pre>
<p>Here's an example which generated the data used in the table at the top of this article:</p>
<pre><code>#! /usr/bin/perl

use strict;
use warnings;

use MultiLM;
use YAML;

my $lm = MultiLM->new;

foreach my $txt ('Good morning', 'Bonjour', 'Guten Morgen', 'Dobre jitro')
{
    my $h = $lm->prob_language($txt);
    print "$txt\n", Dump($h), "\n";
}</code></pre>
<h3>Training data</h3>
<p>Given that we're only interested in the frequencies of letter pairs, I don't expect the training data to matter much, but if you're interested I used:</p>
<ul>
<li>Czech<ul>
<li><span class="caps">R.U.R., </span>by Karel Čapek.</li>
<li>Úplná učebnice mezinárodní řeči.</li>
<li>Cvičení maličkých ve svatém náboženství.</li>
<li>Hore dedinú, by F. Omelka.</li>
<li>Vlci proti mustangum, by F. Omelka.</li>
<li>Pasáček Ali, by F. Omelka.</li>
<li>Stafeta, by F. Omelka.</li>
</ul>
</li>
<li>German<ul>
<li>Faust, by Johann Wolfgang von Goethe.</li>
</ul>
</li>
<li>English<ul>
<li>Adventures of Huckleberry Finn, by Mark Twain.</li>
<li>The Adventures of Sherlock Holmes, by Arthur Conan Doyle.</li>
<li>Alice's Adventures in Wonderland, by Lewis Carroll.</li>
<li>Pride and Prejudice, by Jane Austen.</li>
<li>Ulysses, by James Joyce.</li>
</ul>
</li>
<li>Spanish<ul>
<li>El ingenioso hidalgo don Quijote de la Mancha, by Miguel de Cervantes Saavedra.</li>
</ul>
</li>
<li>French<ul>
<li>A l'ombre des jeunes filles en fleurs, by Marcel Proust.</li>
<li>Du Côté de Chez Swann, by Marcel Proust.</li>
</ul>
</li>
<li>Italian<ul>
<li>I manifesti del futurismo, by Filippo Tommaso Marinetti. </li>
</ul></li>
</ul>5F0F6FEA-DB00-11E1-BD61-BC7CFF10EFB52012-07-30T12:10:48:48Z2013-06-05T18:13:43:43ZRaspberry Pi GPIO cableMartin Oldfield<p>A useful cable to connect the <span class="caps">GPIO </span>port to a breadboard. </p><p> <img src="gpio-cable.jpg" alt="" class="img_border" /></p>
<h2>The <span class="caps">GPIO </span>port</h2>
<p>If you want to connect fun hardware to the Raspberry Pi, then the chances are that you'll be using the <a href="http://elinux.org/RPi_Low-level_peripherals#General_Purpose_Input.2FOutput_.28GPIO.29"><span class="caps">GPIO </span>port.</a> This is a 26-way <span class="caps">DIL </span>header, much like the printer port on the <span class="caps">BBC</span> Micro. Purists might have preferred a 20-pin connector to match the User Port, but no matter.</p>
<p>Even though decades have elapsed since those days, the best way to connect to the header is still probably an <span class="caps">IDC </span>connector. You simply take one of these and <a href="http://www.raspberrypi.org/archives/1404">clamp it on the end of a piece of ribbon cable with a vice.</a> No fiddly wires to strip, no solder required.</p>
<h2>The other end</h2>
<p>Of course it's all very well to connect the cable to the Raspberry Pi, but it's not much use if the other end is just flapping in the breeze. One could simply use another <span class="caps">IDC </span>header, and that would be quite sensible if you were connecting the Pi to another <span class="caps">PCB.</span></p>
<p>However, if you'd like to connect to some components on breadboard or stripboard, the 0.1" gap between the rows of connectors is a nuisance. Happily there's a better alternative: use an <span class="caps">IDC DIP </span>header. I couldn't find any 26-way versions, but Farnell will sell you a <a href="http://uk.farnell.com/jsp/search/productdetail.jsp?sku=1106722">28-pin header</a> from Harting for about £3.50 ($5.50) plus <span class="caps">VAT.</span> Doubtless there are other sources.</p>
<p>It's easy to plug this into a breadboard, or a suitable IC socket. The main disadvantage is that the connector is physically quite large. Another potential issue is that the exposed pins carry the Raspberry Pi's supply rails, and shorting them out is bad. So don't do that!</p>
<p><img src="dil-label.png" alt="" height="387" width="231" class="right" style="float:right;padding-left:1em" /></p>
<h2>Labelling</h2>
<p>In one respect though, a large connector is an advantage: we can easily label the pins.</p>
<p>If you want to make one of these for yourself, download the <a href="http://www.mjoldfield.com/atelier/2012/07/rpi-gpio-label.tar.gz">tarball</a> and print the <span class="caps">PDF </span>file. If you want to make changes, feel free to edit the noddy PostScript file, and then print that.</p>
<p>In the best Blue Peter tradition, cut out the label, then stick it to the header with double-sided sticky tape or super-glue. </p>030AF94A-9234-11DC-99C8-D2A71E87E0CE2007-11-13T22:01:21:21Z2013-06-05T18:13:43:43ZApple CakeMartin Oldfield<p>A recipe for apple cake. </p><p>I'm lucky enough to have apple trees in the garden and this year they've produced loads of cooking apples. Here's one of the ways I've been eating them.</p>
<h2>Ingredients</h2>
<ul>
<li>750g unprepared cooking apples</li>
<li>225g self-raising flour</li>
<li>1½ teaspoons baking powder</li>
<li>225g castor sugar</li>
<li>150g butter</li>
<li>2 large eggs</li>
<li>1 tsp vanilla essence</li>
</ul>
<h2>Method</h2>
<ol>
<li>Preheat the oven to 180°C.</li>
<li>Grease a 24cm (or thereabouts) cake tin with a removable base.</li>
<li>Peel, core, and slice the apples into pieces about 3mm thick.</li>
<li>Melt the butter. A microwave is good for this.</li>
<li>Beat the eggs with the vanilla essence.</li>
<li>Put the flour, sugar, and baking powder into a food processor and mix. If you're using unsalted butter then add a pinch of salt as well.</li>
<li>Add the eggs and vanilla essence to the food processor and process briefly.</li>
<li>Pour in the melted butter, and process briefly. You should end up with a fairly sludgy consistency.</li>
<li>Put half of the sludge into the baking tin, cover with the apple pieces, then add the rest of the sludge. Roughly even out the mixture with a spatula and allow to stand for a few minutes.</li>
<li>Bake the cake until it's done i.e. a skewer inserted into the middle of the cake comes out clean. This should take about 45 minutes. If the cake starts to burn, e.g. because you set the oven too high, cover it loosely with a piece of baking parchment.</li>
<li>Remove the cake from the tin, then cool it on a wire rack. It's good eaten warm with cream or ice-cream, but is also good cold with tea. Assuming you can resist eating it all, the cake keeps well for a few days.</li>
</ol>
<h2>Some variations</h2>
<ul>
<li>Almond goes well with apple, so you could try swapping the vanilla essence for almond essence, or replacing some of the flour with ground almonds.</li>
<li>If you want an even more moist cake with the same basic idea, then Chocolate and Zucchini <a href="http://chocolateandzucchini.com/archives/2003/10/my_grandmothers_pear_cake.php">have a recipe</a>.</li>
<li>I'm tempted to try some sort of caramel variation too. Perhaps just using darker sugar with molasses would work, but I'm tempted to preprocess the sugar by partially caramelizing it and then blitzing it back to a powder with the food processor. If you try this, please let me know how it came out. </li>
</ul>58597FAA-5472-11DE-8919-7D5C446D55572009-03-22T13:23:41:41Z2013-06-05T18:13:43:43ZStarch Free TiramisuMartin Oldfield<p>At a recent dinner party, one of my guests couldn't eat starch. This didn't seem compatible with my desire to serve tiramisu, but happily the lure of creamy coffee proved enough of an inspiration to find a solution: instead of coffee soaked sponge use coffee jelly. </p><h2>Ingredients (serves about 8)</h2>
<h3>For the white gloop</h3>
<ul>
<li>5 egg yolks</li>
<li>150g sugar</li>
<li>vanilla essence to taste (I use about 1/2 teaspoon but I like vanilla)</li>
<li>750g mascarpone</li>
<li>150g double cream</li>
</ul>
<h3>For the coffee jelly</h3>
<ul>
<li>200ml boiling water</li>
<li>4 heaped teaspoons of instant coffee</li>
<li>2 sheets of gelatine</li>
</ul>
<h2>Method</h2>
<ol>
<li>Soak the gelatine sheets in cold water for a few minutes until they become spongy.</li>
<li>Dissolve the coffee in the boiling water. You could obviously just use espresso here, but I've read that Michel Roux uses instant coffee in his tiramisu and I see no reason to argue with him. Don't add any sugar though: you want the jelly to be almost bitter.</li>
<li>Remove the gelatine from the water, and squeeze to remove any excess water. Add the sheets to the coffee while it's still hot, then stir to dissolve.</li>
<li>Pour the liquid onto a <strong>non-stick</strong> baking tray (with raised edges) and leave somewhere cool, e.g. a fridge, to set. You should find that the sheet of jelly is about 0.5mm thick. This thin sheet has a large surface-area to volume ratio, which gives a strong hit of coffee in the mouth.</li>
<li>Whisk the egg yolks, vanilla and sugar until they reach a mousse-like consistency.</li>
<li>Thin the mascarpone by mixing the cream with it, then gently fold in the egg and sugar mixture.</li>
<li>Cut the sheet of jelly into small squares (it's not necessary to take much care over this), then gently mix them into the mascarpone mixture. You might not need all the jelly, so check the balance of flavours as you proceed.</li>
<li>Put the mixture into a serving dish and chill for at least a few hours. </li>
</ol>1829A932-2BCB-11E0-B925-8BBEB98621932011-01-29T17:13:17:17Z2013-06-05T18:13:43:43ZChurch NumeralsMartin Oldfield<p>Fun and games with Church numerals in Haskell. </p><h2>The basic idea</h2>
<p>Church numerals are representations of the natural numbers as functions. The web is blessed with many articles about this, including <a href="http://en.wikipedia.org/wiki/Church_numeral">a fine one at Wikipedia,</a> but I wanted to write something to clarify matters in my own mind.</p>
<p><em>Caveat:</em> Most of the stuff I found on the web is couched in the language of lambda calculus which I find less readable than <a href="http://en.wikipedia.org/wiki/Haskell_%28programming_language%29">Haskell</a>. It's possible that I'll stray a bit from what's normally accepted as Church Numerals as a consequence.</p>
<h2>Counting</h2>
<p>Most people probably first encounter the natural numbers when learning to count: “no blocks, one block, two blocks” and so on. The central idea of Church Numerals is to count how many times a function is applied. More specifically, given some arbitrary function, f, and a value z, the Church Numeral for two is a function which will apply f twice to z.</p>
<p>For example:</p>
<pre><code>two f z = f ( f z )</code></pre>
<p>Now, it might not be immediately obvious that this is a reasonable representation of 2, but that's really because we don't have any good way to visualize what's happening. As is often the case, things become clearer when they get less abstract. Instead of arbitrary f and z, let's choose some specific examples. After some thought the following are useful:</p>
<pre><code>f = (1 +)
z = 0</code></pre>
<p> The odd looking definition for f is just a <a href="http://www.haskell.org/tutorial/functions.html">section</a>. It's the same as saying</p>
<pre><code>f x = (1 + x)</code></pre>
<p>To understand why we choose f and z thus, recall that the Church Numeral for two is supposed to apply f (which we're defining to just increment) to z (here 0) twice. Explicitly:</p>
<pre><code>two f z = f (f z)   -- definition of two
        = f (f 0)   -- definition of z
        = f 1       -- applying f
        = 2         -- applying f</code></pre>
<p>In other words these choices give us a pretty printer:</p>
<pre><code>pp ch = ch f z
  where f x = x + 1
        z   = 0

pp two
> 2</code></pre>
<p>There's nothing particularly special about these f and z though: they're just ways of visualizing the Church Numeral. For example:</p>
<pre><code>two (1 +) 0
> 2
two ('*':) ""
> "**"</code></pre>
<h2>Small integers</h2>
<p>It's not really conventional to start the integers with two, so let's go back to zero and one.</p>
<p>Recall that our guiding principle is that the Church Numeral for n should apply some other function n times. Now it's not too hard to find these:</p>
<pre><code>zero f z = z
one  f z = f z
two  f z = f (f z)</code></pre>
<p>And if you substitute f and z from above, you'll see they work as expected.</p>
<p>Most of the time it will be neater to omit z:</p>
<pre><code>zero f z = z
one f = f
two f = f . f
three f = f . f . f</code></pre>
<p> which emphasizes that function composition is at the heart of our new method of counting. With our favourite choice for f:</p>
<pre><code>three (1 +) = (1 +) . (1 +) . (1 +)
            = (3 +)</code></pre>
<p>Obviously it will get a bit boring if we have to keep on generating the numerals by hand. However we could recursively define the numeral for any n:</p>
<pre><code>church 0 f z = z
church n f z = f ( church (n - 1) f z)
twelve = church 12
pp twelve
> 12</code></pre>
<h2>Arithmetic</h2>
<p>Having mastered counting, most people move on to arithmetic. So, it's natural to ask if, given a couple of Church Numerals, can we make a new Church Numeral which corresponds to their sum.</p>
<p>It's probably worth reminding ourselves at this stage that the Church Numerals are functions: so the result of the addition will be another function. All things considered, it's a good thing we're using a functional language!</p>
<p>In the following examples, we'll use ci to denote the Church Numeral for i. Often we'll check that things work with the particular choice of f = (1 +). This isn't necessary of course, and you could prove all the results for general f if you so wanted.</p>
<p>These results will be handy and hopefully intuitively obvious:</p>
<pre><code>ci (1 +) = (i +)
ci (j +) = (i * j +)</code></pre>
<p>We'll also define operators like <+> to represent the addition of Church Numerals. Haskell's happy to let us do that.</p>
<h3>Addition</h3>
<p>Let's begin with the answer:</p>
<pre><code>(<+>) ci cj f = (ci f) . (cj f)</code></pre>
<p>The key idea here is that to add ci and cj, we need to apply f j times, then apply it i times (or vice versa):</p>
<p>With f = (1 +):</p>
<pre><code>(ci <+> cj) (1 +) = (ci (1 +)) . (cj (1 +))
                  = (i +) . (j +)
                  = (i + j +)
</code></pre>
<p>Or if you're skeptical:</p>
<pre><code>pp $ three <+> two
> 5</code></pre>
<h3>Multiplication</h3>
<p>Again we'll begin with the answer:</p>
<pre><code>(<*>) ci cj = ci . cj</code></pre>
<p>This time, instead of applying f i times then j times, we want to apply f j times, then do that whole operation i times.</p>
<pre><code>(ci <*> cj) f = (ci . cj) f
              = ci (cj f)</code></pre>
<p>or with f = (1 +)</p>
<pre><code>(ci <*> cj) (1 +) = ci (cj (1 + ))
                  = ci (j + )
                  = (i * j +)
</code></pre>
<p>Again for the skeptics:</p>
<pre><code>pp $ three <*> two
> 6</code></pre>
<h3>Exponentiation</h3>
<p>Following the usual pattern:</p>
<pre><code>(<^>) ci cj = cj ci</code></pre>
<p>For addition we applied each Church Numeral to its own copy of f. When multiplying we had one f which we applied both Church Numerals to in succession. Finally when exponentiating we apply one Church Numeral to the other.</p>
<p>To see why this works recall that (cj g) applies g j times. So,</p>
<pre><code>(ci <^> cj) = cj ci
            = ci . ((c(j-1)) ci)
            = ci . ci ((c(j-2)) ci)
            = ci . ci ... ci -- j times</code></pre>
<p>But ci applies f i times, so in total f gets applied i<sup>j</sup> times.</p>
<p>Or to give an example:</p>
<pre><code>pp $ three <^> two
> 9</code></pre>
<h3>Subtraction</h3>
<p>Subtraction of Church Numerals is hard. There's no really good way to do it, but there is a hack.</p>
<p>Recall that adding one to a Church Numeral involves applying a function: accordingly subtracting one would involve unapplying the function, or finding the function's inverse. In general that's a tad tricky!</p>
<p>There is a hack though, which goes roughly as follows. Given a Church Numeral we can always construct the next one. So, we could construct an infinite list of all Church Numerals: then to work out the result of subtracting one from N we could look in the list until we find N, then return the preceding Church Numeral.</p>
<p>Having worked out how to implement the inverse function, we can handle subtraction by just applying it the relevant number of times.</p>
<p>In one final twist, it's worth remembering that our numbers 'start' at zero, i.e. we're dealing with the natural numbers. That's not a problem for addition, but we'll need to handle (0 - 1) as a special case. Canonically setting 0 - 1 to 0 seems popular.</p>
<p>One could do something analogous for division, but we'd need to store all possible products which would be even less efficient. We'd also have to decide how to handle e.g. 4 / 3 which doesn't have a solution in the naturals.</p>
<p>If you're interested in this, there's lots more about it on the Internet.</p>
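<p>As a concrete sketch of the hack described above (the names predC and subC are mine, not from the article's downloadable code), one can walk the stream of numerals until a numeral's successor matches the target, pinning 0 - 1 to 0 as suggested:</p>

```haskell
-- Church numerals, as in the article
zero f z = z
succC c f z = f (c f z)

-- the pretty printer from earlier
pp ch = ch (1 +) 0

-- Predecessor: walk zero, one, two, ... until we find the numeral
-- whose successor prints the same as n. Canonically predC zero = zero.
predC n = go zero
  where target = pp n
        go c | target == 0            = zero
             | pp (succC c) == target = c
             | otherwise              = go (succC c)

-- Subtraction ci - cj: apply predC to ci, cj times.
subC ci cj = cj predC ci
```

<p>It's spectacularly inefficient, of course: each predC costs a linear search, and subtraction applies it repeatedly.</p>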
<h3>Operator precedence</h3>
<p>In principle one should define the precedence of these operators to respect the normal rules of arithmetic, but I've not bothered here. However, it's worth noting that one can use parentheses to specify the evaluation order.</p>
<h2>Example code</h2>
<p>If you want to play with this you may find <a href="http://www.mjoldfield.com/atelier/2011/01/church/basic.hs">this toy example</a> helpful. It defines operators for addition, multiplication and exponentiation, and zero, one, and two.</p>
<p>There's also a small demonstration which constructs Church Numerals for 0 to 9, and checks they evaluate properly:</p>
<pre><code>demo_data = zip [zero, one, two, three, four,
                 five, six, seven, eight, nine ]
                [0 .. ]
  where four  = two <*> two
        five  = two <+> three
        six   = two <*> three
        seven = six <+> one
        eight = two <^> three
        nine  = three <^> two
demo = concatMap test demo_data

test (ch, n) = "N: " ++ (show n )
               ++ " PP: " ++ (show n')
               ++ ok (n == n')
  where n' = pp ch
        ok True  = " OK\n"
        ok False = " FAIL\n"</code></pre>
<p>You can check it thus:</p>
<pre><code>$ ghci basic.hs
GHCi, version 6.12.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
[1 of 1] Compiling Main ( basic.hs, interpreted )
Ok, modules loaded: Main.
*Main> putStr demo
N: 0 PP: 0 OK
N: 1 PP: 1 OK
N: 2 PP: 2 OK
N: 3 PP: 3 OK
N: 4 PP: 4 OK
N: 5 PP: 5 OK
N: 6 PP: 6 OK
N: 7 PP: 7 OK
N: 8 PP: 8 OK
N: 9 PP: 9 OK</code></pre>
<h2>A different view</h2>
<p>Up until now, we've been playing with expressions and using a pretty printer which turns the Church Numeral into an integer. Perhaps though, it would be nice to actually see the source code of the function itself.</p>
<p>In principle I suppose one could ask the Haskell compiler to generate this, but I don't know how. It's probably also the case that even if one could get <em>a</em> function definition, it might not be the one we want. For example, the compiler might have optimized the expression.</p>
<p>If we want to have control, I think it's better that we assemble the code ourselves. In essence we'll work with pairs of a Church Numeral and a string representation of it.</p>
<p>When we make a Church Numeral we'll have to make a string version of it as well, and we'll need to modify our arithmetic operators to combine the string representations too.</p>
<p>In type terms:</p>
<pre><code>basic_church :: (t -> t) -> t -> t
paired_church :: ( (t -> t) -> t -> t, String )</code></pre>
<p>Although conceptually straightforward, it's a bit fiddly to get right. Here are a couple of examples:</p>
<pre><code>three = (\f -> f . f . f, "\\s z -> (s . s . s) z")

(<^>) (ff,fs) (gf,gs) = (gf ff,
    "\\s z -> ((" ++ gs ++ ") (" ++ fs ++ ")) s z")</code></pre>
<p>You'll see that for consistency's sake all the string versions are lambda functions of s and z.</p>
<p>If you're keen to play along at home, you can download <a href="http://www.mjoldfield.com/atelier/2011/01/church/paired.hs">code which implements this.</a> The only new function is pq which returns the stringy version of the Numeral:</p>
<pre><code>$ ghci paired.hs
GHCi, version 6.12.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
[1 of 1] Compiling Main ( paired.hs, interpreted )
Ok, modules loaded: Main.
*Main> pp three
3
*Main> pq three
"\\s z -> (s . s . s) z"</code></pre>
<p>The demo function also shows the expressions:</p>
<pre><code>*Main> putStr demo
N: 0 PP: 0 OK PQ: \s z -> z
N: 1 PP: 1 OK PQ: \s z -> s z
N: 2 PP: 2 OK PQ: \s z -> (s . s) z
N: 3 PP: 3 OK PQ: \s z -> (s . s . s) z
N: 4 PP: 4 OK PQ: \s z -> ((\s z -> (s . s) z) . (\s z -> (s . s) z)) s z
N: 5 PP: 5 OK PQ: \s z -> (((\s z -> (s . s) z) s) . ((\s z -> (s . s . s) z) s)) z
N: 6 PP: 6 OK PQ: \s z -> ((\s z -> (s . s) z) . (\s z -> (s . s . s) z)) s z
N: 7 PP: 7 OK PQ: \s z -> (((\s z -> ((\s z -> (s . s) z) .
(\s z -> (s . s . s) z)) s z) s) . ((\s z -> s z) s)) z
N: 8 PP: 8 OK PQ: \s z -> ((\s z -> (s . s . s) z) (\s z -> (s . s) z)) s z
N: 9 PP: 9 OK PQ: \s z -> ((\s z -> (s . s) z) (\s z -> (s . s . s) z)) s z</code></pre>
<p>So if one felt like obfuscating constants, one might replace</p>
<pre><code>i = 7</code></pre>
<p>with</p>
<pre><code>i = (\s z -> (((\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s . s)
z)) s z) s) . ((\s z -> s z) s)) z) (1+) 0</code></pre>
<p>There's no reason to limit ourselves to small numbers either (well unless we care about execution speed!). For example:</p>
<pre><code>x = seven <*> thirteen <*> nineteen <*> twentythree <*> seventynine
  where seven       = two <*> three <+> one
        thirteen    = two <*> two <*> three <+> one
        sixteen     = (two <*> two) <^> two
        nineteen    = sixteen <+> three
        twentythree = sixteen <+> seven
        seventynine = two <*> three <*> thirteen <+> one</code></pre>
<p> lets us represent 3141593 as:</p>
<p style="font-family: monospace; margin-left: 3em; margin-right: 3em; ">\s z -> ((\s z -> ((\s z -> ((\s z -> ((\s z -> (((\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s . s) z)) s z) s) . ((\s z -> s z) s)) z) . (\s z -> (((\s z -> ((\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s) z)) s z) . (\s z -> (s . s . s) z)) s z) s) . ((\s z -> s z) s)) z)) s z) . (\s z -> (((\s z -> ((\s z -> (s . s) z) (\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s) z)) s z)) s z) s) . ((\s z -> (s . s . s) z) s)) z)) s z) . (\s z -> (((\s z -> ((\s z -> (s . s) z) (\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s) z)) s z)) s z) s) . ((\s z -> (((\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s . s) z)) s z) s) . ((\s z -> s z) s)) z) s)) z)) s z) . (\s z -> (((\s z -> ((\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s . s) z)) s z) . (\s z -> (((\s z -> ((\s z -> ((\s z -> (s . s) z) . (\s z -> (s . s) z)) s z) . (\s z -> (s . s . s) z)) s z) s) . ((\s z -> s z) s)) z)) s z) s) . ((\s z -> s z) s)) z)) s z</p>
<p> Try it for yourself in ghci:</p>
<ol>
<li>let x = monstrous function from above (cut and paste it).</li>
<li>x (1+) 0</li>
<li>Be glad that all your arithmetic isn't this slow!</li>
</ol>
<h2>Conclusions</h2>
<p>I can't say that this is particularly useful, though people who think about the basis of computation might disagree. On the other hand, I think it's indisputably fun, which is justification enough for me.</p>
<p>Although I'm sure I've encountered this before in the past, I was reminded of it by a fine geocaching puzzle. I'd like to thank the setter of the cache for this, but sadly can't really link to the relevant work without spoiling it for other people. </p>AFED5ED2-C7AE-11DC-B56A-BA07C31599DF2008-01-20T23:22:56:56Z2013-06-05T18:13:43:43ZTCP-IP v4.16 and Olimex flashMartin Oldfield<p>An easy way to patch recent versions (e.g. 4.16) of the Microchip <span class="caps">PIC TCP</span>-IP stack to work with the Atmel flash <span class="caps">EEPROM</span>s on <a href="http://www.olimex.com/">Olimex</a> boards. </p><p><a href="http://www.olimex.com/">Olimex</a> make splendid <span class="caps">PIC </span>based ethernet boards, but supply them with an older version of the Microchip <span class="caps">TCP</span>-IP stack. It's relatively easy to compile more recent versions of the stack for Olimex hardware, but some work is needed to access the on board Atmel flash memory.</p>
<p>All the hard work of writing the Atmel flash interface has already been done by Olimex in the stack they supply with the board, but that's version 4.02 of the stack.</p>
<p>Generating the patch from scratch is relatively easy: just compare the <span class="caps">TCP</span>-IP stack supplied by Olimex with the same stack from Microchip, then tweak the resulting diff a bit. Alternatively, just download the patch from here.</p>
<ul>
<li><a href="http://www.mjoldfield.com/pic/olimex-flash-patch-0.1">Olimex Flash memory patch version 0.1</a></li>
</ul>
<p>The patch has been tested on version 4.16 of the <span class="caps">TCP</span>-IP stack, running on the Olimex <a href="http://www.olimex.com/dev/pic-maxi-web.html"><span class="caps">MAXI</span>-WEB</a>. It's worth pointing out that this stack is probably too big to use on the <span class="caps">PIC18F25J10 </span>based Olimex <span class="caps">MINI</span>-WEB, which has only 32kB of program memory. </p>D6A29F8A-68A4-11DF-9CAE-B1DD50C74A892010-05-26T08:57:44:44Z2013-06-05T18:13:43:43ZThe Bus Pirate on MacOSMartin Oldfield<p>Self-help notes on using the bus pirate on MacOS—mainly links to other documentation. </p><p>The <a href="http://dangerousprototypes.com/bus-pirate-manual/">Bus Pirate</a> seems to be a fun device, but as with anything which is being developed rapidly sometimes the documentation on the web is patchy or relates to an older version.</p>
<p>These notes are to remind me what's what when using the Bus Pirate on MacOS.</p>
<h2>Talking to the device</h2>
<p>Oddly screen is your friend:</p>
<pre><code>screen /dev/tty.usbXXXX 115200</code></pre>
<p>Note: I found that unplugging the Bus Pirate without first quitting screen crashed the Mac, so it's best to avoid that!</p>
<h2>Firmware Upgrades</h2>
<h3>Old (version 2) firmware</h3>
<p>As of April 2010 Bus Pirates came with quite old firmware (though I think this may have changed now). Accordingly updating the firmware is a two-step process:</p>
<ol>
<li>The basic process is <a href="http://dangerousprototypes.com/2009/08/06/bus-pirate-firmware-upgrades-on-linux-osx/">documented</a>.</li>
<li>The Python script which does the upgrade needs a serial library which isn't installed by default. The no-brainer solution is to use MacPorts' py26-serial module.</li>
</ol>
<h3>New firmware</h3>
<p>Again the <a href="http://dangerousprototypes.com/2010/01/22/how-to-firmware-upgrades-with-the-linux-mac-windows-console/">process is documented,</a> but here's my crib sheet:</p>
<ul>
<li>Download the software and unpack it.</li>
<li>Put a link between <span class="caps">PGC </span>and <span class="caps">PGD </span>on the five-pin header, then connect the Bus Pirate.</li>
<li>Run the uploader:</li>
</ul>
<pre><code>./pirate-loader_mac --dev=/dev/tty.usbXXX --hex=BPv3\&v2go/BPv3-Firmware-v4.2.hex</code></pre>
<ul>
<li>Unplug the Bus Pirate, remove the link, plug it in again.</li>
<li>Test it:</li>
</ul>
<pre><code>screen /dev/tty.usbXXXX 115200</code></pre>
<h2>Nice device names</h2>
<p>You can tweak the device name which appears in /dev by tweaking the <span class="caps">EEPROM </span>in the Bus Pirate's <span class="caps">FTDI </span>chip: <a href="http://dangerousprototypes.com/2010/01/27/pirate-rename-get-a-nicely-named-serial-device/">here's how.</a> </p>4ACE76EA-BF75-11E0-84A2-FDCB707682972011-08-05T14:59:38:38Z2013-06-05T18:13:43:43ZTime-lapse photographyMartin Oldfield<p>Brief notes on making time-lapse movies with a <span class="caps">DSLR </span>camera. </p><h2>Preamble</h2>
<p>It's obvious that some things happen too quickly for us to follow by eye. <a href="http://en.wikipedia.org/wiki/High_speed_photography">High-speed photography</a> can help us understand them by slowing the action down to a human scale. It has helped us resolve whether horses' feet all leave the ground during a gallop, see what happens when water drips into a glass, and study the explosions of everything from fruit to atomic bombs. Of course, to capture such videos we'll typically need very specialized and expensive equipment.</p>
<p>However, the opposite process can help us too: sometimes we perceive more by speeding up the action. Although it's clear that we're not getting more data, our brains somehow see more if the motion is fast enough.</p>
<p>Happily this sort of movie, a time-lapse video, is much easier to make. All we need is a camera, the means to trigger it on a regular basis, and some software to turn the photos into a movie.</p>
<p>The notes below document my own experiences in making such movies using a Canon <span class="caps">EOS400D DSLR </span>camera. There's very little novel here, but I wanted to note things down rather than have to reinvent them next time I want to do this.</p>
<h2>Camera settings</h2>
<p>Broadly speaking the camera lets us choose between automatic and manual. On automatic the camera tries to optimize the settings for that particular image: as a consequence successive frames in the movie might have different exposures and thus the film flickers. Thus we're naturally led to set the camera in full manual mode. Full here means aperture, exposure, <span class="caps">ISO, </span>and focus.</p>
<p>Normally I'm a fan of capturing full-resolution <span class="caps">RAW </span>images. In practice though with thousands of images this makes the post-processing unduly onerous. Instead then, when making movies I capture small (1936 × 1288) <span class="caps">JPEG</span>s. As a bonus these images are small enough (~1MB) that a single CF card is sufficient for most movies.</p>
<h2>Control</h2>
<p>Obviously a critical task is to trigger the camera on a regular basis, and pressing the shutter manually rapidly gets boring! Some cameras can be programmed to do this themselves, otherwise you'll need an <a href="http://en.wikipedia.org/wiki/Intervalometer">intervalometer.</a></p>
<h3>The <span class="caps">DIY </span>route</h3>
<p>It's easy enough to build an intervalometer, and I think this is the most enjoyable solution. You can see <a href="http://www.mjoldfield.com/atelier/2011/08/intervalometer.html">my design,</a> or <a href="http://www.google.com/search?q=diy+intervalometer">consult Google</a> for many others.</p>
<h3>The commercial route</h3>
<p>Unsurprisingly you can simply throw money at this problem, and buy an intervalometer.</p>
<h3>The Swiss-army-knife approach</h3>
<p>It's quite likely that you've already got a programmable device lying around which could do the job. There are countless <a href="http://www.google.com/search?q=arduino+intervalometer">Arduino intervalometer hacks,</a> or you could <a href="http://www.google.com/search?q=PC+intervalometer">use a <span class="caps">PC.</span></a></p>
<p>Perhaps more fun are those based around a mobile phone. I've played with <a href="http://www.dslrbot.com/"><span class="caps">DSLR</span>bot</a> on my iPhone. Rather than a wired connection, one attaches a couple of IR <span class="caps">LED</span>s to the phone's headphone socket which signal to the camera's IR remote sensor.</p>
<p>As shown below, one can even enjoy the <span class="caps">DIY </span>thrill with this by making your own IR emitter.</p>
<p><img src="IRemitter.jpg" alt="Infra-red emitter for DSLRbot/iPhone" class="img_border" /></p>
<h2>Power</h2>
<p>Taking many hundreds or thousands of images takes a lot of energy; enough to run down the small battery in my <span class="caps">EOS400D </span>anyway. Happily Canon make a battery grip which increases the capacity, but sadly this still isn't really enough. However, if you leave the batteries out of the battery grip, there's loads of space to fit a voltage regulator and a socket for a large external battery.</p>
<p>More specifically, the BG-E3 battery grip can take six AA cells. I built a linear regulator around an <span class="caps">LM317 </span>on a bit of stripboard, which is then attached to the holder with a bit of gaffer tape. It's hardly the most robust design but it appears to work well enough.</p>
<p><img src="regulator.jpg" alt="The regulator installed in the AA cell carrier." class="img_border" /></p>
<p>When not taking a picture the camera draws about 130mA which falls to about 40mA if you disable the <span class="caps">LCD </span>display. The peak current when taking a (non-flash) photo is about 1.6A but I don't have a feel for the average value.</p>
<p>Canon rate the NB-2LH as a 7.2V 720mAh battery; the somewhat larger lead-acid battery I use now claims 10Ah!</p>
<p><img src="battery.jpg" alt="Now that's what I call a camera battery." class="img_border" /></p>
<h2>Data processing</h2>
<p>Finally, when you've taken lots of images, you need to convert them into a movie. I use a Mac, so iMovie is the natural choice. That works, but it seems quite slow.</p>
<p>In practice I think it's quicker and easier to combine the stills with <a href="http://www.ffmpeg.org/">ffmpeg</a> and then use iMovie to add titles or otherwise edit the video.</p>
<p>Charles Martin Reid has written <a href="http://wiki.charlesmartinreid.com:8888/wiki/Ffmpeg">an article</a> covering this, so I won't duplicate his notes here.</p>
<p>My only addition is a Perl program <a href="http://www.mjoldfield.com/atelier/timelapse/make-movie.pl">to arrange the files for ffmpeg.</a> Rather than renaming things, my program sets up a directory of links to any <span class="caps">JPEG </span>files it finds, then invokes ffmpeg.</p>
<p>Explicitly I do this:</p>
<ol>
<li>Copy images from the camera's flash card to a new directory on the Mac.</li>
<li>Run make-movie.pl.</li>
<li>Look at a.mp4</li>
</ol>
<h2>Results</h2>
<h3>Clock Time Lapse</h3>
<iframe src="http://player.vimeo.com/video/27400773?byline=0&title=0&portrait=0" width="400" height="225" frameborder="0"></iframe>
<p>Each frame is a minute apart, so the second hand appears to stay still. See the <a href="http://vimeo.com/27400773">full video</a> on <a href="http://vimeo.com">Vimeo.</a></p>
<h3>Clouds</h3>
<iframe src="http://player.vimeo.com/video/27400494?byline=0&title=0&portrait=0" width="400" height="300" frameborder="0"></iframe>
<p>The view from my back window on June day in Cambridge. See the <a href="http://vimeo.com/27400494">full video</a> on <a href="http://vimeo.com">Vimeo.</a> </p>70DB97EA-4952-11DF-AD46-F107043A60DF2010-04-16T12:19:55:55Z2013-06-05T18:13:43:43ZUnicode games with perl, MySQL, and XMLMartin Oldfield<p>Not for the first time I wasted a few hours getting Unicode things to 'just work' in a Perl project which mixed <span class="caps">XML </span>and MySQL. Hopefully these notes will prevent another repetition. </p><h2>Assumptions</h2>
<ol>
<li>Generally one would like to use <span class="caps">UTF</span>-8 and forget about the details.</li>
<li><span class="caps">XML</span>::LibXML and <span class="caps">XML</span>::LibXSLT is the sane way to munge <span class="caps">XML </span>in Perl.</li>
<li>MySQL will be involved somewhere. The cool kids probably use PostgreSQL but still!</li>
</ol>
<h2>Tricks</h2>
<h3>The database</h3>
<p>MySQL is fairly happy with <span class="caps">UTF</span>-8 if you set the relevant options. It seems to be enough to set</p>
<pre><code>character-set-server = utf8</code></pre>
<h3><span class="caps">DBI</span></h3>
<p>You need to tell <span class="caps">DBD</span>::mysql to use <span class="caps">UTF</span>-8. It's easiest to do this at connect time:</p>
<pre><code>$dbh = DBI->connect(..., { RaiseError => 1, mysql_enable_utf8 => 1, });</code></pre>
<h3><span class="caps">XML</span>::LibXML</h3>
<p>Sadly <span class="caps">XML</span>::LibXML's serialization code will sometimes encode non-ASCII characters as numeric entities and not <span class="caps">UTF</span>-8. For example, you might see <i>p&#xe2;te sucr&#xe9;e</i> instead of <i>pâte sucrée</i>. Obviously this is valid <span class="caps">XML </span>but it plays havoc with e.g. MySQL's sorting!</p>
<p>Most perniciously this transformation can be invisible if you look at the data in web browser, and of course it depends on both the data being serialized and (apparently) the version of the underlying libxml2.</p>
<p>The solution is, of course, easy:</p>
<pre><code>my $txt = $node->toString;

# Replace hex entities e.g. &#xe9; with the corresponding character...
$txt =~ s/&#x([0-9a-f]+);/chr(hex($1))/gie;

# ...and likewise decimal entities e.g. &#233;
$txt =~ s/&#([0-9]+);/chr($1)/ge;</code></pre>
<h2>Handy debugging hints</h2>
<p>I think this stuff can be quite a pain to debug. There seems to be ample scope for confusion because transformations sometimes happen automatically or invisibly. Ultimately some sort of hex dump (or the like) gives us unambiguous data.</p>
<p>It also seems to be handy to memorize the byte sequences you might see. <a href="http://www.fileformat.info/info/unicode/">fileformat.info</a> has good Unicode pages.</p>
<p>For example, consider <a href="http://www.fileformat.info/info/unicode/char/00e9/index.htm">é</a>, which has codepoint 0xe9. I managed to create these sequences:</p>
<ul>
<li>The correct <span class="caps">UTF</span>-8 encoding: 0xc3, 0xa9.</li>
<li>A broken 'double' <span class="caps">UTF</span>-8 encoding: 0xc3, 0x83, 0xc2, 0xa9. To get this I managed to run the data through the <span class="caps">UTF</span>-8 encoder twice.</li>
</ul>
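<p>The 'double' encoding is easy to reproduce: take the correct <span class="caps">UTF</span>-8 bytes, misinterpret them as Latin-1 characters, and encode the result as <span class="caps">UTF</span>-8 again. A quick illustration (Python is just a convenient calculator here):</p>

```python
s = "\u00e9"                        # é, codepoint 0xe9

once = s.encode("utf-8")
assert once == b"\xc3\xa9"          # the correct UTF-8 encoding

# Misread those bytes as Latin-1 (giving the two characters Ã and ©),
# then encode again: the broken 'double' encoding.
twice = once.decode("latin-1").encode("utf-8")
assert twice == b"\xc3\x83\xc2\xa9"
```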
<h3>Look in the mysql .MYD file</h3>
<p>For applications which load data into a database, then display it, the mysql data files are a convenient place to bisect the problem.</p>
<p>If you use MySQL's default MyISAM storage engine, then the data in a table are stored in a .MYD file called $datadir/$database/$table.MYD. It's a binary file, but you can always dump it. On MacOS X:</p>
<pre><code>$ sudo od -c /usr/local/mysql/data/foo/bar.MYD
...
00024000 65 74 20 49 6e 74 65 72 6e c3 a9 65 73 20 64 65
^^^^^^</code></pre>
<p> From which we can probably conclude that the data are being correctly inserted into the database. </p>8E357004-7B86-11DD-8D09-E96EE055CCF52008-09-05T20:08:45:45Z2013-06-05T18:13:43:43ZPlaces to eat in the Lake DistrictMartin Oldfield<p>Some brief notes on places to eat in the Lake District taken on a trip in September 2008. </p><h2>The Punch Bowl, Crossthwaite</h2>
<p>Quite delightful and well worth a detour. We ate here twice, and had great food on both occasions. The highlight was probably a starter: an oxtail raviolo served on a potato cake, garnished with a bacon foam. Other dishes were less dressy, but almost as good.</p>
<p>For more details see <a href="http://www.the-punchbowl.co.uk/">their website,</a> or <a href="http://maps.google.com/maps?q=N+54+18.791+W+2+51.161">Google Maps.</a></p>
<h2>The Langstrath Country Inn, Stonethwaite, Borrowdale</h2>
<p>Rather a wonderful place in Borrowdale, ideal for walkers, but also worth a detour for dinner. The inn is hidden at the end of a very narrow lane off the Rosthwaite--Seatoller road.</p>
<p>We ate in the bar which has a warm, cosy, family feel, making both walkers and posher diners welcome. At lunch the food is good but quite basic: we had soup, welsh rarebit, and some very good local cheese.</p>
<p>In the evening meals get more elaborate without losing quality. We had boeuf bourguignonne, followed by a splendid ginger and rhubarb creme brulee. The menu spells boeuf `beef' which summarizes the place perfectly: no-nonsense English cooking with no qualms about borrowing from abroad where appropriate.</p>
<p>For more information <a href="http://www.thelangstrath.com/">see their website,</a> or <a href="http://maps.google.com/maps?q=N+54+30.775+W+3+8.400">Google Maps.</a></p>
<h2>The Tower Bank Arms, Sawrey</h2>
<p>A nice pub with better than average food. I had a very good pork-loin and black-pudding affair: a nice idea, decent meat, perfectly cooked. Sadly my girlfriend fared less well: her fish was both over-fussy in design and over-cooked.</p>
<p>For more information <a href="http://www.towerbankarms.co.uk/">see their website,</a> or <a href="http://maps.google.com/maps?q=N+54+21.107+W+2+58.204">Google Maps.</a></p>
<h2>The Brown Horse Inn, Winster</h2>
<p>A perfectly reasonable place to eat if you happen to be nearby. I had some very decent sausages, which you can also buy from the farm shop in the pub's car park.</p>
<p>For more information <a href="http://www.thebrownhorseinn.co.uk/">see their website,</a> or <a href="http://maps.google.com/maps?q=N+54+20.040+W+2+20.040">Google Maps.</a> </p>2D170FBA-F8C3-11DE-8591-BB14D3DC675B2010-01-03T23:52:59:59Z2013-06-05T18:13:43:43ZFiltering GPS tracksMartin Oldfield<p>In practice I find that the tracklogs stored by my Garmin <span class="caps">GPS </span>receiver are quite noisy. <a href="http://en.wikipedia.org/wiki/Gpsbabel">gpsbabel</a> easily filters the data, but having forgotten the relevant rules once, I thought I'd document them here. </p><h2>Introduction</h2>
<p>I find that my new <span class="caps">GPS</span>r makes a reasonably good job of saving tracklogs even when it's carried in a pocket or bag. Thus it's handy for things like <a href="http://en.wikipedia.org/wiki/Geotagging">geotagging</a>. However, if you view the raw tracklogs in Google Earth you get spurious lines between, say, the last point taken on one trip and the first point on the next trip, or noisy data when the <span class="caps">GPS</span>r hasn't quite worked out where it is.</p>
<p>Happily the wonderful <a href="http://en.wikipedia.org/wiki/Gpsbabel">gpsbabel</a> program will filter the data, removing these infelicities.</p>
<h2>The runes</h2>
<p>Given tracklogs a.gpx and b.gpx from the <span class="caps">GPS</span>r, this merges them, removes spurious edges, then saves the data to out.gpx.</p>
<pre><code>gpsbabel -i gpx -f a.gpx -i gpx -f b.gpx \
-x track,merge,sdistance=0.3k \
-o gpx -F out.gpx </code></pre>2D14A178-06E4-11E0-ADA8-BAA00661E8FC2010-12-13T18:09:20:20Z2013-06-05T18:13:43:43ZPlaces to eat and drink in CopenhagenMartin Oldfield<p>Some brief notes on places to eat and drink in Copenhagen. </p><h2>nimb, Bernstorffsgade 5, 1577 Copenhagen V.</h2>
<p>A wonderfully airy hotel bar. It's in the same block as the Tivoli gardens, and ideally placed next to the central station for an arrival or departure drink.</p>
<p>Given encouragement, they make a fine dry martini.</p>
<p>For more details see <a href="http://www.nimb.dk/">their website,</a> or <a href="http://maps.google.com/maps?q=N+55+40.415+E+12+33.964">Google Maps.</a></p>
<p><small><em>Last visited December 2010.</em></small></p>
<h2>Aamanns - Øster Farimagsgade 10 - 2100 - Copenhagen Ø</h2>
<p>Smørrebrød seems pretty ubiquitous in Copenhagen, but from limited experience of it, too often it's just an excuse for an open sandwich. Functional to be sure, but hardly worth celebrating.</p>
<p>Aamanns seems much better: the ingredients radiate quality, and they are mixed with flair and panache. The decor chimes well with the food's understated elegance, which makes for an enjoyable meal.</p>
<p>Don't miss the wonderful aquavits!</p>
<p>For more details see <a href="http://www.aamanns.dk/">their website,</a> or <a href="http://maps.google.com/maps?q=N+55+41.406+E+12+34.511">Google Maps.</a> </p>E851A54C-62D3-11DD-82CA-F22FCDA2E3E22008-08-05T09:50:01:01Z2013-06-05T18:13:43:43ZReplacing a Logitech Mouse ButtonMartin Oldfield<p>How to fix a dodgy button on a Logitech mouse: the key was finding a replacement for the Omron <span class="caps">D2FC</span>-F-7N. </p><p> Recently my Logitech MX Revolution mouse got sick: the mouse button started sending multiple clicks instead of just one. Fixing it was conceptually simple: just replace a microswitch. The practice was mildly more difficult.</p>
<p>This isn't a step-by-step guide. It's just a note of the things I'd have liked to have known before I started. Proceed at your own risk!</p>
<h2>Opening the beast.</h2>
<p>You'll need to peel away the low-friction pads on the base. This reveals some small screws which, when undone, allow the mouse to come apart. I was moderately careful when removing the pads, and they've gone back on the mouse without any problems. You only need to remove the big pad at the top of the mouse, and the two small pads at the bottom.</p>
<p>Once apart there's the usual routine of undoing some more screws, disconnecting cables and so on. To get the <span class="caps">PCB </span>away from the base (which you'll need to do if you want to replace the microswitch), you'll need to desolder the connections to the charging points.</p>
<h2>A replacement microswitch.</h2>
<p>The switches for the two main buttons have part number <span class="caps">D2FC</span>-F-7N, which a little Googling suggests is made by Omron, and used in quite a number of Logitech and Microsoft mice.</p>
<p>I couldn't find a supplier of these, but if you're in the UK Rapid Electronics part <a href="http://www.rapidonline.com/productinfo.aspx?tier1=Electronic+Components&tier2=Switches&tier3=Microswitches&tier4=Microswitches&moduleno=74549">78-067</a> works perfectly well and costs a mere 13 pence (plus <span class="caps">VAT</span>). Outside the UK you might find the <a href="http://www.rapidonline.com/netalogue/specs/78-0867.pdf">data sheet</a> more useful.</p>
<h2>Update</h2>
<p>Jehan emailed me about this saying:</p>
<blockquote><p>I had the same problem with a sony vaio mouse (manuf. by Logitech). I fixed mine by actually opening the tiny switch with a very tiny screwdriver and cleaned the contacts inside the housing!</p></blockquote>
<p>He also found <a href="http://www.overclockers.com/forums/showthread.php?t=594646">someone else</a> who'd worked this out.</p>47D80142-9097-11DC-9368-69A33F6188BE2007-11-11T20:47:05:05Z2013-06-05T18:13:43:43ZADSL modem rebootingMartin Oldfield<p>Controlling things over Ethernet, here a hung <span class="caps">ADSL </span>modem, with the <span class="caps">PIC</span> Mini Web from Olimex. This is more of a brief sketch, to test the blogging software, than anything too deep or insightful. </p><h2>Disclaimer</h2>
<p>To begin, I should say that this isn't the most interesting article. Although it's real in the sense that I've given some thought to the content, I'm publishing this to get a feel for the process rather than because I think others will find it useful or because I think it describes a finished project.</p>
<h2>My <span class="caps">ADSL </span>connection</h2>
<p>At home I have an <span class="caps">ADSL</span> Internet connection, which works perfectly well most of the time. However, sometimes, the connection dies and the modem reconnects automatically. Occasionally, perhaps once or twice a month, the connection dies and the modem doesn't reconnect. I tried replacing the <span class="caps">ADSL </span>modem without success, so presumably there's a problem at the exchange or with the line. However, trying to get that fixed sounds like a lot of effort given the sporadic nature of the problem. After all, power-cycling the modem fixes the problem.</p>
<p>On the other hand, it's pretty irritating to reset things manually, especially if I'm not at home when the connection dies. Inspired by a <a href="http://conferences.oreillynet.com/cs/os2007/view/e_sess/13050">recent tutorial at <span class="caps">OSCON</span></a>, I realized that it would be sensible to automate the process. Essentially it's simple: if the link has hung then just cycle the power to the modem.</p>
<h2>Detecting a hung connection</h2>
<p>A simple Perl program is enough to test the link. Rather than checking that the link is alive directly, we'll just try to fetch the Google homepage. After all, people sometimes say that if something's not on Google then it might as well not exist. We're going to assume that if we can't reach Google then the link might as well not exist.</p>
<pre><code>use strict;
use warnings;

use LWP::Simple qw(get);

reset_modem()
    unless link_is_up();

# Poll Google for up to three minutes: if we never manage to
# fetch the homepage, assume the link is down.
sub link_is_up
{
    my $t0 = time;
    while (time - $t0 < 180)
    {
        my $page = get("http://www.google.com");
        return 1
            if $page && $page =~ /google/i;
        sleep 1;
    }
    return;
}

sub reset_modem
{
    print "Please reset the modem!\n";
}</code></pre>
<p>Obviously we'll shortly replace the reset_modem() subroutine with something which talks to the hardware.</p>
<h2>The <span class="caps">PIC</span> Mini Web</h2>
<p>Cycling the power is easy, the only tricky part is that the modem is some distance away from the computers. One could run extra cables, but there's a perfectly good network in place which ought to be able to carry an extra bit of data: though to do this one would need a network enabled microcontroller. Happily <a href="http://www.microchip.com/">Microchip</a> make it relatively easy to connect a <a href="http://en.wikipedia.org/wiki/PIC_microcontroller"><span class="caps">PIC </span>microcontroller</a> to ethernet, and a company called <a href="http://www.olimex.com/">Olimex</a> sell a <a href="http://www.olimex.com/dev/pic-mini-web.html"><span class="caps">PIC</span> Mini Web</a> with all the necessary components on it for about 25 pounds (US $50). You can see a picture of the Mini Web below. To get an idea of the scale, the large box-like connector on the left is an ethernet socket.</p>
<p class="center" style="text-align:center"><img src="mini_web.jpg" alt="" /></p>
<p>The board has three significant components:</p>
<ul>
<li>A <a href="http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1335&dDocName=en024622"><span class="caps">PIC </span> 18F25J10</a> microcontroller.</li>
<li>An <span class="caps">ENC28J60 </span>ethernet controller.</li>
<li>A 1Mbit flash chip which is used to store files for the web and <span class="caps">FTP </span>servers.</li>
</ul>
<p>Obviously we need some extra hardware, but this is easy: a relay to switch the power to the modem, and a transistor to drive the relay from the Mini Web. Happily the <span class="caps">ADSL </span>modem has a 12 volt external power supply, so we're only switching low voltages, and we can steal some current to supply the Olimex board to boot. 12 volts is rather too much for the Olimex board, so we drop the voltage down to 5V with a 7805. The final touch is an <span class="caps">LED </span>which comes on when the power to the modem is switched off: we don't need anything in the other leg of the switch because the modem itself is a veritable Christmas tree of light when it's powered up. Here's the circuit diagram:</p>
<p class="center" style="text-align:center"><img src="relay.png" alt="" /></p>
<p>When <span class="caps">RST </span>goes high, the transistor turns on, so current flows through the relay coil. In turn that causes the switch to move, turning on the <span class="caps">LED </span>and turning off the modem.</p>
<h2>Mini Web software</h2>
<p>The Mini Web comes pre-programmed with Olimex's software, which is derived from the standard library provided by Microchip. When I did this project, version 3 of the library was all the rage, but now it's been superseded by version 4. So, I won't talk much about it here beyond brief notes:</p>
<ul>
<li>The board's <span class="caps">HTTP </span>server can be persuaded to do things by sending it <span class="caps">GET </span>requests. In principle these should probably be <span class="caps">POST</span>s but still! To make the request parsing easy, the <span class="caps">GET </span>requests have cryptic forms e.g. <span class="caps">GET </span>http://192.168.1.100/1?3=0</li>
<li>The board is hard-wired to use 192.168.0.30. In principle this can be changed by sending a <span class="caps">GET </span>request, which tweaks some non-volatile registers. In practice I found it easier to just hack the software to use a hard-coded address.</li>
<li>Although the software has loads of functionality, as supplied it won't let you wiggle random I/O lines up and down. Of course it's simple to add this.</li>
</ul>
<p>The software is written in C, and so needs a compiler. For Windows users Microchip provide a free demo/student edition of their commercial compiler, but this is crippled: after 60 days some of the code size optimizations are disabled and the full application no longer fits in the <span class="caps">PIC</span>'s memory. Happily a simple <span class="caps">HTTP </span>over <span class="caps">TCP </span>server does still seem to fit, but extraneous stuff, e.g. <span class="caps">UDP </span>and <span class="caps">FTP </span>support, needs to be removed.</p>
<p>For this particular project it's highly desirable that we can make the <span class="caps">PIC </span>pulse the relevant output line rather than sending one command to turn the modem off and another to reset it. Actually it's essential because the communication between the PC running the software and the Mini Web actually goes through the <span class="caps">ADSL </span>modem's internal hub.</p>
<p>Some sort of interrupt driven solution is probably the 'right' way to handle a long (~ seconds) delay, but I took the quick-and-dirty approach and just executed a delay loop. The Mini Web might well be unresponsive during the delay, but the modem's dead and so nobody can talk to it anyway.</p>
<p>One minor consequence of the interrupted network is that the Mini Web's reply to the <span class="caps">GET </span>request is never seen by the Perl program. Thus, the reset_modem() subroutine looks like this:</p>
<pre><code>sub reset_modem
{
    # Set an alarm here because when we reboot the modem
    # we won't see the HTTP reply!
    local $SIG{ALRM} = sub { print "Bing!\n" };
    alarm 30;    # long enough for the modem to reboot

    get("http://192.168.1.100/1?3=0");
}</code></pre>
<h2>Programming the Mini Web</h2>
<p>Having generated a new application file, it needs to be programmed into the Mini Web. I used Microchip's <a href="http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en023805"><span class="caps">PIC</span>kit 2</a>, an inexpensive programmer which connects to the PC over <span class="caps">USB.</span> It comes with Windows software, but you can also drive it from Linux or MacOS. You'll need to make a cable to connect the mini-ICSP connector on the Mini Web to the 0.1" header on the <span class="caps">PIC</span>kit2, or you could <a href="http://www.sparkfun.com/commerce/product_info.php?products_id=8108">buy one from SparkFun</a>; at least I think that should work.</p>
<p class="center" style="text-align:center"><img src="pic_kit2.jpg" alt="" /></p>
<h2>Conclusions</h2>
<p>This isn't really a finished project, and it's easy to see ways to improve it:</p>
<ul>
<li>Generalize the firmware so it can be used to control or monitor many things without modification. In other words, make the MiniWeb into a simple IO device, leaving the control logic to run on a distant <span class="caps">PC.</span></li>
<li>Specialize the firmware to include the code which polls the link status, so that the Perl script (and the associated PC) isn't needed.</li>
</ul>
<p>On the other hand, for all its problems the software does actually do a useful job! Perhaps the right message to take away from this, is that devices like the Mini Web make it easy to use Ethernet as a control network. </p>A647926C-5B3E-11E1-8FA1-E9BD9319BE9B2012-02-19T21:12:47:47Z2013-06-05T18:13:43:43ZThe TTi TF930 and MacOSMartin Oldfield<p>Brief notes on reading data from a TTi <span class="caps">TF930</span> 3GHz counter from MacOS. </p><h2>The TTi <span class="caps">TF930</span></h2>
<p>Thurlby Thandar's <a href="http://www.ttid.co.uk/products-tti/rf/frequency-counters.htm"><span class="caps">TF930</span></a> is a fine addition to any workbench. It features a <span class="caps">USB </span>port, and happily TTi document the commands in the <a href="http://tti1.co.uk/downloads/manuals-rf.htm">manual.</a></p>
<p>However, if you connect the <span class="caps">TF930 </span>to a Mac, nothing much happens: the System Information utility sees the device on the <span class="caps">USB </span>bus, but no entries are created in /dev.</p>
<h2><span class="caps">FTDI</span></h2>
<p>Although I'm sure there are alternatives, in practice almost all the <span class="caps">USB</span>-to-serial converters I've encountered use one of <span class="caps">FTDI</span>'s chips. Happily the <span class="caps">TF930 </span>is no different, but sadly the TTi specific vendor and product IDs aren't supported by the stock driver.</p>
<p>The Internet being what it is though, it turns out that all the details of solving this problem have already been sorted out by someone else. Datafusion Systems have a <a href="http://dfusion.com.au/wiki/tiki-index.php?page=Installing+FTDI+USB+Serial+Driver+on+Mac">fine article</a> which tells you all you need to know.</p>
<p>In essence:</p>
<ol>
<li>Download and install the <a href="http://www.ftdichip.com/Drivers/VCP.htm">virtual <span class="caps">COM </span>port drivers</a> from the <span class="caps">FTDI </span>website.</li>
<li>Edit /System/Library/Extensions/FTDIUSBSerialDriver.kext/Contents/Info.plist and add a record for the <span class="caps">TF930.</span> You'll need to know:<ul>
<li>Vendor ID: 0x103e = 4158</li>
<li>Product ID: 0x0442 = 1090</li>
</ul>
</li>
<li>Connect the <span class="caps">TF930 </span>and enjoy!</li>
</ol>
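<p>For reference, the record you add might look something like the fragment below. The key names here are from memory of the general shape of FTDI's plist personalities, so treat them as an assumption: the safest approach is to duplicate an existing <code>&lt;dict&gt;</code> record in the file, rename it, and change only the two ID values.</p>

```xml
<key>TTi TF930</key>
<dict>
    <key>CFBundleIdentifier</key>
    <string>com.FTDI.driver.FTDIUSBSerialDriver</string>
    <key>IOClass</key>
    <string>FTDIUSBSerialDriver</string>
    <key>IOProviderClass</key>
    <string>IOUSBDevice</string>
    <key>idProduct</key>
    <integer>1090</integer>
    <key>idVendor</key>
    <integer>4158</integer>
</dict>
```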
<h2>A toy program</h2>
<p>Here's a little toy program to read data from the device (you'll probably have to change the serial port):</p>
<pre><code>#! /usr/bin/perl

use strict;
use warnings;

use Device::SerialPort;
use Date::Manip;

my $port = "/dev/tty.usbserial-soUC8ESW";

my $dev = Device::SerialPort->new($port)
    or die "Unable to open $port, ";

# 115200 baud, 8N1
$dev->databits(8);
$dev->baudrate(115200);
$dev->parity("none");
$dev->stopbits(1);
$dev->write_settings;

# Ask the counter to identify itself
$dev->write("*IDN?\n\r");
$dev->write_drain;
print "# Identity: ", read_packet($dev);

# Stream measurements: the counter replies roughly once a second
$dev->write("E?\n\r");
while (1)
{
    my $data = read_packet($dev);
    chomp $data;

    my $time = UnixDate("now", "%Y-%m-%d %H:%M:%S");
    print "$time $data\n";
}

# Poll the port until some data arrive
sub read_packet
{
    my $dev = shift;
    while (1)
    {
        my ($n, $string) = $dev->read(255);
        return $string if $n;
        sleep 1;
    }
}</code></pre>
<p>If you run it, you'll see something like this:</p>
<pre><code># Identity: Thurlby-Thandar,TF930,0,V1.20
2012-02-19 21:06:33 0000000000.e+0
2012-02-19 21:06:34 0000000000.e+0
2012-02-19 21:06:35 0000000000.e+0
2012-02-19 21:06:36 0000000000.e+0</code></pre>
<h2>Linux</h2>
<p>Unsurprisingly you can do something similar on Linux. The relevant crib sheet is the <a href="http://ftdi-usb-sio.sourceforge.net/">documentation for the ftdi-usb-sio driver.</a></p>
<p>I think the approved solution is to simply add the device IDs to the driver and recompile, but you can hack things when you load the driver. This worked for me:</p>
<pre><code>sudo modprobe ftdi_sio vendor=0x103e product=0x0442 debug </code></pre>04339478-5DE0-11E0-B96B-D8A57B0AA95E2011-04-03T10:48:55:55Z2013-06-05T18:13:43:43ZA Spinning SphereMartin Oldfield<p>Calculating the moment of inertia of sphere. Overlong in itself, but hopefully a fine preliminary to the related calculation for a tetrahedron. </p><h2>Introduction</h2>
<p>A while ago I wanted to know the moment of inertia of a tetrahedron. I'd forgotten some of the basic stuff, and the calculation was a bit fiddly so I thought I'd write it up on here. There are three related articles:</p>
<ol>
<li><a href="http://www.mjoldfield.com/atelier/2011/03/cartesian-moi.html">Some basic results.</a></li>
<li>A toy problem: <a href="http://www.mjoldfield.com/atelier/2011/03/sphere-moi.html">the sphere</a> (this article).</li>
<li><a href="http://www.mjoldfield.com/atelier/2011/03/tetra-moi.html">The final calculation.</a></li>
</ol>
<h2>A rotating solid sphere</h2>
<p>Consider a solid uniform sphere rotating about an axis through its centre. Explicitly, assume that is has radius \(a\) and density \(\mu\). We'll work with spherical polars \((\rho, \theta, \phi)\) centred on the centre of the sphere.</p>
<p>As we discussed in <a href="http://www.mjoldfield.com/atelier/2011/03/cartesian-moi.html">the general notes,</a> the sphere has sufficient symmetry that we can treat the problem with scalars.</p>
<p>Basically we need to consider an element of the sphere, work out its distance to the rotation axis and thus its moment of inertia. Then we'll just integrate over the sphere.</p>
<p>Hopefully it's clear that an element at \((\rho, \theta, \phi)\) moves in a circle of radius \(\rho \sin \theta\), and has mass \(\mu \rho^2 \sin \theta \; d\rho \; d\theta \; d\phi\). So,</p>
\[
\begin{align} I_{sphere} &= \int r^2 dm,\\ &= \int_0^a \int_0^{\pi} \int_0^{2 \pi} (\rho \sin \theta)^2 \mu \rho^2 \sin \theta \; d\phi \; d\theta \; d\rho,\\ &= 2 \pi \mu \left(\int_0^a \rho^4 d\rho \right) \left( \int_0^\pi \sin^3 \theta \; d\theta \right),\\ &= 2 \pi \mu \frac{a^5}{5} \frac{4}{3},\\ &= \frac{8}{15} \pi \mu a^5. \end{align}
\]
<p>It's more usual to express this in terms of the mass \(M\) rather than the density:</p>
\[
\begin{align} I_{sphere} &= \frac{8}{15} \pi \mu a^5,\\ &= \left( \frac{4}{3} \pi a^3 \mu \right) \frac{2}{5} a^2,\\ &= \frac{2}{5} M a^2. \end{align}
\]
<p>Happily this agrees with <a href="http://en.wikipedia.org/wiki/List_of_moments_of_inertia">Wikipedia!</a></p>
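<p>It's also easy to check numerically. Here's a quick Monte Carlo estimate in Python, with unit radius and density so we expect \(I/M = 2/5\):</p>

```python
import random

random.seed(0)

# Sample points uniformly in the unit ball by rejection from the cube.
n, tot, inside = 300_000, 0.0, 0
for _ in range(n):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if x*x + y*y + z*z <= 1.0:
        inside += 1
        tot += x*x + y*y       # squared distance to the z (rotation) axis

I_over_M = tot / inside        # expect 2/5 = 0.4
```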
<p>If we want the full moment of inertia tensor, the symmetry of the sphere implies that it's diagonal viz.:</p>
\[
\textbf{I} = \frac{2}{5} M a^2 \mathbb{I},
\]
<p>where \(\mathbb{I}\) is the identity matrix.</p>
<h2>A less thoughtful approach</h2>
<p>That was nice and easy, but our job was simplified because the sphere has lots of symmetry. Let's redo the calculation in a slightly more formal way. It will be messier but easier to apply when the symmetry deserts us.</p>
<p>We'll use <a href="http://www.mjoldfield.com/atelier/2011/03/cartesian-moi.html">the computation trick</a> we met earlier: in other words, we begin by integrating \(\mu \; \textbf{r} \textbf{r}^T\) over the volume.</p>
\[
\textbf{C} = \int \mu \; \left(\begin{array}{ccc} x^2 & xy & zx \\ xy & y^2 & yz \\ zx & yz & z^2 \end{array} \right) \;dV
\]
<p>The sphere is highly symmetric, and so we only have to do two integrals. Firstly, the diagonal terms:</p>
\[
\begin{align} C_{xx} = C_{yy} = C_{zz} &= \int \; z^2 \; \mu \; dV,\\ &= \int_0^a \int_0^{\pi} \int_0^{2 \pi} (\rho \cos \theta)^2 \mu \; \rho^2 \sin \theta \; d\phi \; d\theta \; d\rho,\\ &= 2 \pi \mu \left(\int_0^a \rho^4 d\rho \right) \left( \int_0^\pi \cos^2 \theta \sin \theta \; d\theta \right),\\ &= \frac{4}{15} \pi \mu a^5,\\ &= \frac{1}{5} M a^2, \end{align}
\]
<p>And now the off-diagonals—which are all zero because the integrand's odd over the volume:</p>
\[
\begin{align} C_{xy} = C_{yz} = C_{zx} &= \int \; xy \; \mu \; dV, \\ &= 0. \end{align}
\]
<p>Thus,</p>
\[
\begin{align} \textbf{C} &= \frac{1}{5} M a^2 \mathbb{I}, \\ \textrm{Tr} \textbf{C} &= \frac{3}{5} M a^2. \end{align}
\]
<p>and so (as expected),</p>
\[
\begin{align} \textbf{I} &= \left( \textrm{Tr} \, \textbf{C} \right) \; \mathbb{I} - \textbf{C}, \\ &= \frac{2}{5} M a^2 \mathbb{I}. \end{align}
\] 6402D61C-5ABA-11E0-AEE2-AA8A7B0AA95E2011-03-30T10:42:14:14Z2013-06-05T18:13:43:43ZA Turning TetrahedronMartin Oldfield<p>The tetrahedron's a simple solid, but its moment of inertia isn't in the usual tables, so I thought I'd calculate it. There are two calculations here: one's straightforward but long, the other's short and cunning. </p><h2>Introduction</h2>
<p>A while ago I wanted to know the moment of inertia of a tetrahedron. I'd forgotten some of the basic stuff, and the calculation was a bit fiddly so I thought I'd write it up on here. There are three related articles:</p>
<ol>
<li><a href="http://www.mjoldfield.com/atelier/2011/03/cartesian-moi.html">Some basic results.</a></li>
<li>A toy problem: <a href="http://www.mjoldfield.com/atelier/2011/03/sphere-moi.html">the sphere.</a></li>
<li><a href="http://www.mjoldfield.com/atelier/2011/03/tetra-moi.html">The final calculation</a> (this article).</li>
</ol>
<h2>A first attack</h2>
<p>Following the line we sketched when thinking about the sphere, we'll proceed by integrating \(\textbf{r}\textbf{r}^T\) over the body:</p>
\[
\textbf{C} = \int \textbf{r}\, \textbf{r}^T \; \mu \; dV.
\]
<p>This looks simple enough, but it's messy. In particular, if we treat it as three nested integrals over Cartesian axes, the limits of the integration will be fiddly to get right.</p>
<p>So, let's consider a simpler case.</p>
<h2>A special case</h2>
<p>Consider a tetrahedron with vertices \((0,0,0)\), \((1,0,0)\), \((0,1,0)\), and \((0,0,1)\) in Cartesian coordinates \(\textbf{u} = (u,v,w)\). Suppose further that it has unit uniform density.</p>
<p>A moment of thought will show that the volume inside the tetrahedron is defined by these four inequalities:</p>
\[
\begin{align} u & \ge 0, \\ v & \ge 0, \\ w & \ge 0, \\ u + v + w & \le 1. \end{align}
\]
<p>Accordingly the integral over the tetrahedron is just</p>
\[
\int_0^1 \; \int_0^{1-u} \; \int_0^{1 - u - v} dw \; dv \; du
\]
<p>It's worth noting that despite appearances this is symmetric in \(u\), \(v\), and \(w\).</p>
<p>In the following calculations we'll use subscript \(s\) to indicate that we're talking about the special case.</p>
<h3>Volume</h3>
<p>The integral above is useful: it's just the volume of the tetrahedron:</p>
\[
\begin{align} V_s &= \int_0^1 \; \int_0^{1-u} \; \int_0^{1 - u - v} dw \; dv \; du, \\ &= \int_0^1 \; \int_0^{1-u} (1 - u - v) dv \; du, \\ &= \int_0^1 \; \frac{1}{2} (1 - u)^2 \; du, \\ &= \frac{1}{6}. \end{align}
\]
<p>Given that it has unit density, that's also its mass:</p>
\[
M_s = \frac{1}{6}.
\]
<h3>Centre of mass</h3>
<p>Although we don't actually need this, let's calculate it anyway.</p>
<p>Given unit density, the centre of mass \(\textbf{p}_s\) is just:</p>
\[
\textbf{p}_s = \frac{1}{M_s} \; \int \textbf{u} \; du \; dv \; dw.
\]
<p>We'll find that integral useful, so define</p>
\[
\textbf{q}_s = \int \textbf{u} \; du \; dv \; dw,
\]
<p>Although it might seem that we have to do three integrals, one for each component of \(\textbf{u}\), recall that the tetrahedron is symmetric in \(u\), \(v\) and \(w\), so the answers will be the same. Let's just do the integral of \(u\), because it's the easiest:</p>
\[
\begin{align} q_u = q_v = q_w &= \int_0^1 \; \int_0^{1-u} \; \int_0^{1 - u - v} u \; dw \; dv \; du, \\ &= \frac{1}{2} \int_0^1 \; u (1 - u)^2 \; du, \\ &= \frac{1}{24}, \\ \textbf{q}_s &= \frac{1}{24} \left(\begin{array}{c}1 \\ 1 \\ 1\end{array}\right). \end{align}
\]
<p>Accordingly,</p>
\[
\begin{align} \textbf{p}_s &= \frac{1}{M_s} \textbf{q}_s, \\ &= \frac{1}{4} \left(\begin{array}{c}1 \\ 1 \\ 1\end{array}\right). \end{align}
\]
<h3>Moment of Inertia</h3>
<p>Let's start with</p>
\[
\begin{align} \textbf{C}_s &= \int \textbf{u} \textbf{u}^T \; du \; dv \; dw \\ &= \int \left(\begin{array}{ccc} u^2 & uv & wu \\ uv & v^2 & vw \\ wu & vw & w^2 \end{array} \right) \; du \; dv \; dw \end{align}
\]
<p>and again note that symmetry dictates that there are only two distinct terms:</p>
\[
\begin{align} C_{uu} = C_{vv} = C_{ww} &= \int_0^1 \; \int_0^{1-u} \; \int_0^{1 - u - v} u^2 \; dw \; dv \; du, \\ &= \int_0^1 \; \int_0^{1-u} u^2 \; (1 - u - v) \; dv \; du, \\ &= \frac{1}{2} \int_0^1 \; u^2 (1 - u)^2 \; du, \\ &= \frac{1}{60}. \end{align}
\]
<p>and</p>
\[
\begin{align} C_{uv} = C_{vw} = C_{wu} &= \; \int_0^1 \; \int_0^{1-u} \; \int_0^{1 - u - v} u v \; dw \; dv \; du, \\ &= \int_0^1 \; \int_0^{1-u} (1 - u - v) \; u v \; dv \; du, \\ &= \frac{1}{6}\int_0^1 \; u (1 - u)^3 \; du, \\ &= \frac{1}{120}. \end{align}
\]
<p>So,</p>
\[
\begin{align} \textbf{C}_s &= \frac{1}{120} \left(\begin{array}{ccc} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{array} \right), \\ &= \frac{M_s}{20} \left(\begin{array}{ccc} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{array} \right), \end{align}
\]
<p>and</p>
\[
\begin{align} \textbf{I}_s &= \left(\textrm{Tr}\;\textbf{C}_s\right)\, \boldsymbol{\mathbb{I}} - \textbf{C}_s, \\ &= \frac{M_s}{20} \left(\begin{array}{ccc} 4 & -1 & -1 \\ -1 & 4 & -1 \\ -1 & -1 & 4 \end{array} \right). \end{align}
\]
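<p>All of the integrals above are special cases of the standard result for this unit simplex, \(\int u^a v^b w^c \, du\, dv\, dw = a!\,b!\,c!/(a+b+c+3)!\), which offers a quick sanity check. Here's a short Python sketch (not part of the original derivation, purely a check) verifying the values we've just computed:</p>

```python
from fractions import Fraction
from math import factorial

def simplex_integral(a, b, c):
    """Exact integral of u^a v^b w^c over the tetrahedron
    u, v, w >= 0 with u + v + w <= 1."""
    return Fraction(factorial(a) * factorial(b) * factorial(c),
                    factorial(a + b + c + 3))

assert simplex_integral(0, 0, 0) == Fraction(1, 6)    # volume V_s
assert simplex_integral(1, 0, 0) == Fraction(1, 24)   # q_u = q_v = q_w
assert simplex_integral(2, 0, 0) == Fraction(1, 60)   # C_uu
assert simplex_integral(1, 1, 0) == Fraction(1, 120)  # C_uv
```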
<h2>The general tetrahedron</h2>
<p>Consider a general tetrahedron with vertices \(\textbf{a}, \textbf{b}, \textbf{c}, \textbf{d}\). We obviously could proceed as above, but the limits on the integrals will be messy.</p>
<p>Instead, consider this transformation:</p>
\[
\begin{align} \textbf{x} &= \textbf{R} \textbf{u} + \textbf{a}, \\ &= \left(\begin{array}{ccc} \textbf{b} - \textbf{a} & \textbf{c} - \textbf{a} & \textbf{d} - \textbf{a} \end{array} \right) \textbf{u} + \textbf{a}. \end{align}
\]
<p>In other words the columns of \(\textbf{R}\) are just e.g. \(\textbf{b} - \textbf{a}\).</p>
<p>Although we've singled out \(\textbf{a}\) here, it's important to remember that the answer must be symmetric in all the vertices: after all how we label them is entirely arbitrary.</p>
<p>In particular consider these cases:</p>
\[
\textbf{u} = \left(\begin{array}{c}0 \\ 0 \\ 0\end{array}\right), \, \left(\begin{array}{c}1 \\ 0 \\ 0\end{array}\right), \, \left(\begin{array}{c}0 \\ 1 \\ 0\end{array}\right), \, \left(\begin{array}{c}0 \\ 0 \\ 1\end{array}\right).
\]
<p>and convince yourself that these points transform to:</p>
\[
\textbf{x} = \textbf{a}, \textbf{b}, \textbf{c}, \textbf{d}.
\]
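<p>As a quick check, here's a small Python sketch (with made-up vertices, chosen only for illustration) confirming that the four corners of the unit simplex map to \(\textbf{a}, \textbf{b}, \textbf{c}, \textbf{d}\):</p>

```python
def transform(R_cols, a, u):
    """x = R u + a, with R given by its three columns (b-a, c-a, d-a)."""
    return tuple(sum(R_cols[j][i] * u[j] for j in range(3)) + a[i]
                 for i in range(3))

# Hypothetical vertices, purely for the check.
a, b, c, d = (0, 0, 0), (2, 0, 0), (0, 3, 0), (1, 1, 4)
R_cols = [tuple(vi - ai for vi, ai in zip(v, a)) for v in (b, c, d)]

corners = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert [transform(R_cols, a, u) for u in corners] == [a, b, c, d]
```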
<p>So, instead of integrating \(\textbf{x}\) over the general tetrahedron, we can simply integrate \(\textbf{u}\) over the particular case we considered above.</p>
<p>Of course, because there's a change of variable we'll need an appropriate Jacobian:</p>
\[
dx\; dy\; dz = \left| \textbf{R} \right| du \; dv \; dw,
\]
<p>where \(\left| \textbf{R} \right|\) is the magnitude of the determinant of \(\textbf{R}\).</p>
<p>We can now calculate the volume, centre of mass, and moment of inertia for a general tetrahedron with just a bit of matrix algebra. In every case, the transformation above will transform the general integral over the tetrahedron to the special case we've already solved.</p>
<h3>Volume</h3>
\[
\begin{align} V &= \int dx\; dy\; dz, \\ &= \left| \textbf{R} \right| \int du \; dv \; dw, \\ &= \frac{1}{6} \left| \textbf{R} \right|. \end{align}
\]
<p>A perfectly reasonable result when you consider that \(\left| \textbf{R} \right|\) is the triple scalar product of three sides of the tetrahedron.</p>
<p>We also note that the mass is</p>
\[
M = \frac{1}{6} \mu \left| \textbf{R} \right|.
\]
<h3>Centre of mass</h3>
\[
\begin{align} \textbf{p} &= \frac{1}{M} \; \int \;\ \textbf{x}\; \mu \; dx \; dy \; dz, \\ &= \frac{\mu}{M} \left| \textbf{R} \right| \int \; \left( \textbf{R} \; \textbf{u} + \textbf{a} \right) \; du \; dv \; dw, \\ &= \frac{\mu}{M} \left| \textbf{R} \right| \left( \textbf{R} \int \; \textbf{u} \; du \; dv \; dw + \textbf{a} \int \; du \; dv \; dw \right) \\ &= 6 \left( \textbf{R} \textbf{q}_s + \textbf{a} V_s \right), \\ &= 6 \left( \frac{1}{24} \left(\textbf{b} + \textbf{c} + \textbf{d} - 3 \textbf{a}\right) + \frac{1}{6} \textbf{a} \right), \\ &= \frac{1}{4} \left(\textbf{a} + \textbf{b} + \textbf{c} + \textbf{d}\right), \\ &= \frac{1}{4} \sum_{i} \textbf{a}_i. \end{align}
\]
<p>Here the final sum is over the four vertices of the tetrahedron.</p>
<h3>Moment of inertia</h3>
\[
\begin{align} \textbf{C} &= \int \; \textbf{x} \textbf{x}^T \; \mu \; dx \; dy \; dz, \\ &= \mu \left| \textbf{R} \right| \int \; \left( \textbf{R} \; \textbf{u} + \textbf{a} \right) \left( \textbf{R} \; \textbf{u} + \textbf{a} \right)^T \; du \; dv \; dw. \end{align}
\]
<p>If we expand the integrand there are effectively three terms to consider:</p>
<h4>The term in \(\textbf{u} \textbf{u}^T\)</h4>
\[
\begin{align} \int \textbf{R} \textbf{u} \textbf{u}^T \textbf{R}^T \; du \; dv \; dw &= \textbf{R} \left( \int \textbf{u} \textbf{u}^T \; du \; dv \; dw \right) \textbf{R}^T \\ &= \textbf{R} \; \textbf{C}_s \; \textbf{R}^T, \\ &= \frac{1}{120}\left( 12 \textbf{a}\textbf{a}^T - 4\left(\textbf{a} \textbf{e}^T + \textbf{e} \textbf{a}^T\right) + \left(\textbf{b}\textbf{b}^T + \textbf{c}\textbf{c}^T + \textbf{d}\textbf{d}^T\right) + \textbf{e} \textbf{e}^T \right). \end{align}
\]
<p>where \(\textbf{e} = \left(\textbf{b} + \textbf{c} + \textbf{d}\right)\).</p>
<h4>The term in \(\textbf{u}\)</h4>
\[
\begin{align} \int \textbf{R} \textbf{u} \textbf{a}^T \; du \; dv \; dw &= \textbf{R} \left( \int \textbf{u} \; du \; dv \; dw \right) \textbf{a}^T \\ &= \textbf{R} \; \textbf{q}_s \; \textbf{a}^T \\ &= \frac{1}{24} \left(\textbf{b} + \textbf{c} + \textbf{d} - 3 \textbf{a}\right) \textbf{a}^T. \end{align}
\]
<p>Of course there's the transpose of this term too.</p>
<h4>The constant term</h4>
\[
\begin{align} \int \textbf{a} \textbf{a}^T \; du \; dv \; dw &= \textbf{a} \left( \int \; du \; dv \; dw \right) \textbf{a}^T \\ &= \textbf{a} \; V_s \; \textbf{a}^T \\ &= \frac{1}{6} \textbf{a} \textbf{a}^T. \end{align}
\]
<h4>The whole integral</h4>
<p>Putting these all together and collecting terms gives this pleasing result:</p>
\[
\begin{align} \int \; \left( \textbf{R} \; \textbf{u} + \textbf{a} \right) \left( \textbf{R} \; \textbf{u} + \textbf{a} \right)^T \; du \; dv \; dw &= \frac{1}{120} \left(\sum_i \textbf{a}_i \textbf{a}_i^T + \sum_{i} \textbf{a}_i \; \sum_{i} \textbf{a}_i^T \right), \\ \textbf{C} &= \frac{M}{20} \left(\sum_i \textbf{a}_i \textbf{a}_i^T + \sum_{i} \textbf{a}_i \; \sum_{i} \textbf{a}_i^T \right). \end{align}
\]
<p>So the trace is just</p>
\[
\textrm{Tr}\;\textbf{C} = \frac{M}{20} \left(\sum_i \textbf{a}_i^T \textbf{a}_i + \sum_{i} \textbf{a}_i^T \; \sum_{i} \textbf{a}_i \right),
\]
<p>and thus</p>
\[
\textbf{I} = \frac{M}{20} \left( \left(\sum_i \textbf{a}_i^T \textbf{a}_i + \sum_{i} \textbf{a}_i^T \; \sum_{i} \textbf{a}_i \right) \mathbb{I} - \left(\sum_i \textbf{a}_i \textbf{a}_i^T + \sum_{i} \textbf{a}_i \; \sum_{i} \textbf{a}_i^T \right) \right).
\]
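<p>It's easy to check this machinery numerically. The following Python sketch (not from the original post, just a verification) applies the general formula for \(\textbf{C}\) and \(\textbf{I}\) to the special tetrahedron with vertices at the origin and the three unit points, and confirms that it reproduces \(\textbf{C}_s\) and \(\textbf{I}_s\) from above:</p>

```python
from fractions import Fraction

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # the special tetrahedron
M = Fraction(1, 6)                                     # its mass at unit density

def outer(x, y):
    return [[xi * yi for yi in y] for xi in x]

def madd(A, B):
    return [[p + q for p, q in zip(ra, rb)] for ra, rb in zip(A, B)]

S = tuple(sum(v[i] for v in verts) for i in range(3))  # sum of the vertices

C = outer(S, S)
for v in verts:
    C = madd(C, outer(v, v))
C = [[M / 20 * e for e in row] for row in C]           # C = (M/20)(sum aa^T + SS^T)

k = M / 20
assert C == [[2*k, k, k], [k, 2*k, k], [k, k, 2*k]]    # matches C_s

tr = C[0][0] + C[1][1] + C[2][2]
I = [[(tr if i == j else 0) - C[i][j] for j in range(3)] for i in range(3)]
assert I == [[4*k, -k, -k], [-k, 4*k, -k], [-k, -k, 4*k]]  # matches I_s
```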
<h2>A more thoughtful solution</h2>
<p>Although our approach above saves us from messy integrals, we still have to do some messy matrix algebra. We can do better!</p>
<p>It's clear from above that \(\textbf{C}\) is the sum of terms like \(\textbf{a}\textbf{b}^T\).</p>
<p>However we've noted before that the final expression for the moment of inertia is symmetric with respect to permutation of the vertices. That is, if we swap say \(\textbf{a}\) and \(\textbf{c}\), then the answer must not change.</p>
<p>So, the most general expression for \(\textbf{C}\) is:</p>
\[
\textbf{C} = M \left( \alpha \sum_{i} \textbf{a}_i \textbf{a}_i^T + \beta \sum_{i} \textbf{a}_i \sum_{i} \textbf{a}_i^T \right).
\]
<p>where \(\alpha\) and \(\beta\) are unknown constants independent of the vertices.</p>
<p>Now, consider an infinitesimal tetrahedron, where all the vertices are the same, \(\textbf{h}\), say. For such a tetrahedron, it's clear that</p>
\[
\textbf{C} = M \textbf{h} \textbf{h}^T
\]
<p>and so</p>
\[
\begin{align} \textbf{h} \textbf{h}^T &= \left( \alpha \sum_{i} \textbf{h} \textbf{h}^T + \beta \sum_{i} \textbf{h} \sum_{i} \textbf{h}^T \right), \\ &= \left(4 \alpha + 16 \beta \right) \textbf{h} \textbf{h}^T \\ \beta &= \frac{1}{16} \left(1 - 4 \alpha \right). \end{align}
\]
<p>Substituting back into our expression for \(\textbf{C}\):</p>
\[
\begin{align} \textbf{C} &= M \alpha' \left( \frac{1}{4} \sum_{i} \textbf{a}_i \textbf{a}_i^T - \left( \frac{1}{4} \sum_{i} \textbf{a}_i \right)\; \left( \frac{1}{4} \sum_{i} \textbf{a}_i^T \right) \right) + M \left( \frac{1}{4} \sum_{i} \textbf{a}_i \right)\; \left( \frac{1}{4} \sum_{i} \textbf{a}_i^T \right), \\ &= M \alpha' \left( \frac{1}{4} \sum_{i} \textbf{a}_i \textbf{a}_i^T - \textbf{p} \textbf{p}^T \right) + M \textbf{p} \textbf{p}^T. \end{align}
\]
<p>where \(\textbf{p}\) is the centre of mass: \(\frac{1}{4}\sum_i \textbf{a}_i\), and \(\alpha'\) is just a rescaled \(\alpha\).</p>
<p>It's clear that the two terms correspond to the covariance of the mass distribution, and the centre of mass itself—the latter has no \(\alpha'\) dependence.</p>
<p>However, this expression applies to any object with four equivalent vertices. It might be a solid tetrahedron, four isolated point masses, a hollow tetrahedral shell, and so on. Each different type will have its own value of \(\alpha'\), but for a particular type we need only one scalar to calculate its moment of inertia.</p>
<p>For the solid tetrahedron, consider the special case from above, with vertices \((0,0,0)\), \((1,0,0)\), \((0,1,0)\), and \((0,0,1)\):</p>
\[
\begin{align} \sum_{i} \textbf{a}_i \textbf{a}_i^T &= \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \\ \textbf{p} \textbf{p}^T &= \frac{1}{16} \left(\begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right). \end{align}
\]
<p>But we've already calculated \(\textbf{C}\) for this tetrahedron:</p>
\[
\textbf{C}_s = \frac{M}{20} \left(\begin{array}{ccc} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{array} \right).
\]
<p>Thus for the solid tetrahedron</p>
\[
\alpha' = \frac{1}{5},
\]
<p>and so</p>
\[
\begin{align} \textbf{C} &= \frac{M}{5} \left( \frac{1}{4} \sum_{i} \textbf{a}_i \textbf{a}_i^T - \textbf{p} \textbf{p}^T \right) + M \textbf{p} \textbf{p}^T, \\ &= \frac{M}{20} \left(\sum_i \textbf{a}_i \textbf{a}_i^T + \sum_{i} \textbf{a}_i \; \sum_{i} \textbf{a}_i^T \right). \end{align}
\]
<p>This cunning approach doesn't completely eliminate the integration, but we'd only have to integrate one component of \(\textbf{I}\) over the carefully chosen tetrahedron: the rest is really just symmetry. </p>47DAA35C-9097-11DC-9368-69A33F6188BE2007-11-04T21:14:03:03Z2013-06-05T18:13:43:43ZPost the FirstMartin Oldfield<p>The first article </p><h2>Why write this ?</h2>
<p>The last thing the world needs is another blog, so here it is! Actually I'm writing this for two reasons:</p>
<ol>
<li>Sometimes I do things which I think other people might find interesting or useful. Putting them on the web seems like a reasonable thing to do.</li>
<li>Often I find that writing down what I've done is a good discipline. It forces me to think clearly about matters, and encourages me to do them properly.</li>
</ol>
<h2>Why not use existing software ?</h2>
<p>There's loads of software out there which makes it easy to publish stuff on the web, but since my needs are so modest most of them seem complete overkill. All of this is rendered into flat files using a simple Perl program, which can then be served by Apache. </p>0FD466FA-AE7E-11DC-B26E-D34C9E938C6F2007-12-19T21:47:38:38Z2013-06-05T18:13:43:43ZPerl tools for MPFS2Martin Oldfield<p>Recent versions of Microchip's <span class="caps">TCP</span>/IP stack (after version 4.11) use a new filing system and web server: <span class="caps">MPFS2 </span>and <span class="caps">HTTP2.</span> Microchip supply a Windows program, <span class="caps">MPFS2.</span>exe to manage things, but this is quite inconvenient for people on Linux or OS X. These Perl programs try to help. </p><h2>Quickstart</h2>
<ol>
<li>Download the <a href="http://www.mjoldfield.com/mpfs2-util/mpfs2-util-0.1.1.tar.gz">tar file</a>, unpack it, and copy the five files to somewhere on the <span class="caps">PATH.</span></li>
<li>If you want to:<ul>
<li>make a <span class="caps">MPFS2 </span>file system use <a href="#make-image">mpfs2-make-image</a>;</li>
<li>generate code for the firmware use <a href="#make-code">mpfs2-make-code</a>;</li>
<li>play with an existing <span class="caps">MPFS2 </span>image use <a href="#fsutil">mpfs2-fsutil</a>.</li>
</ul></li>
</ol>
<h2>The Software</h2>
<p>There are five Perl programs which can be downloaded in a tar file. To install the software, unpack the tar file and copy the files to somewhere on your <span class="caps">PATH.</span> For example:</p>
<pre><code>% curl -O http://www.mjoldfield.com/mpfs2-util/mpfs2-util-0.1.1.tar.gz
% tar xzvf mpfs2-util-0.1.1.tar.gz
% sudo mv mpfs2-* /usr/local/bin</code></pre>
<p>In the fullness of time I'll package these a bit better. None of the programs need any <span class="caps">MPFS2 </span>specific modules, but mpfs2-make-image expects to find the other programs on the <span class="caps">PATH.</span></p>
<h3>Tarballs</h3>
<ul>
<li>2007-12-20: <a href="http://www.mjoldfield.com/mpfs2-util/mpfs2-util-0.1.0.tar.gz">Version 0.1.0</a><ul>
<li>Original release.</li>
</ul>
</li>
<li>2008-02-23: <a href="http://www.mjoldfield.com/mpfs2-util/mpfs2-util-0.1.1.tar.gz">Version 0.1.1</a><ul>
<li>Inspired by patches from Emilio Frusciante.</li>
<li>Better support for systems which distinguish text and binary files e.g. <span class="caps">DOS</span>/Windows.</li>
<li>An option to generate the file system as a C array which can be linked into the firmware.</li>
<li>Support for ~foo(1)~ style tags (0.1.0 only handled ~foo~).</li>
<li>Miscellaneous bug fixes.</li>
</ul></li>
</ul>
<h2><span class="caps">MPFS2 </span>and <span class="caps">HTTP2</span></h2>
<p>There's quite a close connection between Microchip's <span class="caps">HTTP2 </span>web server and <span class="caps">MPFS2 </span>file system, or more accurately the <span class="caps">MPFS2.</span>exe program. Explicitly the web server expands tags like ~foo~ in files when it transmits them, but for this to work the files have to be pre-indexed. <span class="caps">MPFS2.</span>exe does this, and so does the mpfs2-indexer program (which will usually be run by mpfs2-make-image).</p>
<p>Tags are defined in the <span class="caps">HTTPP</span>rint.idx file, which is just an ordered list of terms separated by |. The indexing process notes the offset of each tag and its index, and saves this information in a separate file. For example, the indexer might look at foo.htm, note that tag 3 is found at byte 3241 in the file and tag 1 at byte 4523, then save that in foo.ht#. When the <span class="caps">MPFS2 </span>file is built, files which have been indexed are tagged thus, so the webserver doesn't have to search the directory.</p>
<p>When <span class="caps">HTTP2 </span>serves the foo.htm file, it looks at foo.ht# too. So, it knows that when it gets to byte 3241 it should stop sending the file to the socket and call subroutine number 3. This subroutine sends some other data to the socket e.g. the value of the <span class="caps">AN0 </span>input, then returns control back to <span class="caps">HTTP2. HTTP2 </span>skips over the rest of the tag, then sends data as normal again. When it gets to byte 4523 the process repeats again, this time to subroutine 1.</p>
<p>When we talk about subroutine 1, the index refers to the position in a dispatch table. Obviously the dispatch table and the index files must be consistent, and so both are derived from the same <span class="caps">HTTPP</span>rint.idx file. This means that whilst you don't need the whole firmware source to generate new content for the server, you will need the .idx file if you're using any ~foo~ style tags.</p>
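<p>To make the indexing step concrete, here's a small Python sketch of the idea. It is hypothetical—the real <span class="caps">MPFS2 </span>index is a binary format, and the tag names and page below are invented—but it shows the essential trick: for each ~foo~ tag, record its position in the dispatch table and its byte offset in the file:</p>

```python
import re

def index_tags(data, tag_table):
    """Return (dispatch_index, byte_offset) pairs for each ~tag~ in data.

    tag_table is the ordered list of tag names from HTTPPrint.idx;
    the dispatch index is just the tag's position in that list."""
    entries = []
    for m in re.finditer(rb'~(\w+)~', data):
        name = m.group(1).decode()
        if name in tag_table:
            entries.append((tag_table.index(name), m.start()))
    return entries

# Invented tag table and page, purely for illustration.
tags = ['led', 'an0']
page = b'<p>Input: ~an0~</p> <p>LED: ~led~</p>'
assert index_tags(page, tags) == [(1, 10), (0, 28)]
```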
<p>There's one other connection between <span class="caps">MPFS2 </span>and <span class="caps">HTTP2</span>: a file can be marked as compressed. When <span class="caps">HTTP2 </span>serves such files it adds the appropriate <span class="caps">HTTP </span>header so the client will decompress it. Making compressed files is easy: just use gzip. In general this is a good thing to do, but obviously it's not compatible with the ~foo~ style tags.</p>
<h2>The programs</h2>
<h3 id="make-image"> mpfs2-make-image</h3>
<p>This is the simplest way to make a <span class="caps">MPFS2 </span>image file. If you have some files you want to upload to the <span class="caps">HTTP2 </span>based webserver, put them in a directory, e.g. called imagedir, then run</p>
<pre><code>% mpfs2-make-image imagedir</code></pre>
<p>which will create an imagedir.bin file which can be uploaded. Any .gz files will be tagged as compressed, though the .gz suffix will be removed.</p>
<p>If the files include ~foo~ tags, then you'll need the --idx option: see the documentation.</p>
<p>Finally, if you just want to test that all the ~foo~ tags work, then use the --test option. This uses <a href="#make-code">mpfs2-make-code</a> to make an <span class="caps">HTML </span>file containing all the tags: it's by no means a perfect test, but it's an easy way to start.</p>
<p>Here are some examples of how the command is used:</p>
<pre><code># Normal operation: turn the image directory into image.bin
mpfs2-make-image image

# Fancier operation: index files
mpfs2-make-image --idx=HTTPPrint.idx image

# Fancier still: include an HTML test file
mpfs2-make-image --idx=HTTPPrint.idx --test=image/test.htm image

# Get documentation
mpfs2-make-image --help</code></pre>
<h3 id="make-code">mpfs2-make-code</h3>
<p>This turns the .idx file into code. Normally it produces a C source file containing the dispatch table and a suitable header file: this is slightly different than <span class="caps">MPFS2.</span>exe, so you'll need to read the documentation and modify your project.</p>
<p>It can also produce a scaffold file: a C file containing toy definitions for each function in the dispatch table. Some of these may conflict with existing functions, but it's useful when starting a new project.</p>
<p>Finally it can produce an <span class="caps">HTML </span>file which contains all of the ~foo~ style tags which can be helpful to see if things are working properly.</p>
<p>Here are some examples of how the command is used:</p>
<pre><code># Normal operation
mpfs2-make-code foo.idx

# Generate scaffolding C file
mpfs2-make-code foo.idx --scaffold=stubs.c

# Generate HTML file which exercises all the tags
mpfs2-make-code foo.idx --html=test.htm

# Get documentation
mpfs2-make-code --help</code></pre>
<h3 id="fsutil">mpfs2-fsutil</h3>
<p>This actually manipulates the <span class="caps">MPFS2 </span>image. If you use mpfs2-make-image you won't use it directly to make images, but it's handy to examine existing images, or extract files from them.</p>
<p>Here are some examples of how the command is used:</p>
<pre><code># Create a filesystem
mpfs2-fsutil --create foo.mpfs2 a.htm b.css a.ht#

# Unpack a filesystem
mpfs2-fsutil --extract foo.mpfs2

# See what's in a filesystem
mpfs2-fsutil --list foo.mpfs2

# Get documentation
mpfs2-fsutil --help</code></pre>
<h3 id="mpfs2-indexer">mpfs2-indexer</h3>
<p>This indexes files. It's highly likely that you won't run it yourself, but instead will let mpfs2-make-image run it on your behalf.</p>
<p>Here are some examples of how the command is used:</p>
<pre><code># Normal operation
mpfs2-indexer HTTPPrint.idx foo.htm bar.htm ...

# Dump the things we're indexing to stdout
mpfs2-indexer --dump HTTPPrint.idx foo.htm bar.htm ...

# Check that we agree with the existing index file
mpfs2-indexer --check HTTPPrint.idx foo.htm bar.htm ...

# Get documentation
mpfs2-indexer --help</code></pre>
<h3 id="img-to-c"> mpfs2-img-to-c</h3>
<p>Given a file system image, or indeed any other file, this generates a C file which expresses the data as an array declaration, with an extra zero byte at the end.</p>
<p>Here are some examples of how the command is used:</p>
<pre><code># Normal operation: turn the image file into C file
mpfs2-img-to-c foo.bin

# Change output filename and variable name
mpfs2-img-to-c --output=bar.c --c_variable=bar foo.bin

# Get documentation
mpfs2-img-to-c --help</code></pre>
<p>and here's an example of the resulting C file:</p>
<pre><code>/***************************************************************
* foo.c
*
* DO NOT EDIT BY HAND : ALL MODIFICATIONS WILL BE LOST
*
* Generated from foo.bin at Sat Feb 23 14:14:01 2008.
*
***************************************************************/
#define __MPFSIMG2_C
#include "TCPIP Stack/TCPIP.h"
#if defined(STACK_USE_MPFS2) && !defined(MPFS_USE_EEPROM)
ROM BYTE MPFS_Start[] =
{
'a', 'b', 'c', '1', '2', '3',0x0a, /* 0000 */
0x00
};
#endif // #if defined(STACK_USE_MPFS2) && !defined(MPFS_USE_EEPROM)</code></pre>
<h2>Feedback</h2>
<p>I'd be delighted to know if anyone finds these things useful. </p>584E56B6-5472-11DE-8AA9-D8A5436D55572009-06-08T21:21:16:16Z2013-06-05T18:13:43:43ZThe AArduinoMartin Oldfield<p>Although the Arduino is very convenient, I wanted to build my own from parts you can buy from RS or Farnell. Rather than cloning the Arduino, I took the opportunity to make some changes: I removed the <span class="caps">USB</span>/Serial interface because I've got an <span class="caps">ICSP </span>programmer, but added crude support for running from dual AA batteries. The AA cells suggest a good name: the AArduino! </p><p><img src="aarduino/int.jpg" alt="" class="img_border" /></p>
<h2>Introduction</h2>
<p>The Arduino makes it very easy to play with microcontrollers. However when I've played with <span class="caps">PIC</span>-based hardware in the past, one of the pleasures was the feeling of working on bare-hardware. You could order all the parts from RS or Farnell, and the only software you needed to understand was the processor's instruction set.</p>
<p>To some extent the very things that make the Arduino such a success detract from this simplicity. On the Arduino the <span class="caps">ATM</span>ega runs a bootloader which takes care of initialization, and the Arduino's libraries often hide details of the hardware behind a cleaner <span class="caps">API.</span></p>
<p>On the hardware front a good fraction of the board's area and cost is devoted to the <span class="caps">USB</span>/Serial interface and the voltage regulator which make the device easy to use, but they aren't so useful if you plan to embed the device and run it from a couple of AA batteries.</p>
<p>That's not to say that I dislike the Arduino, far from it, but I just wanted to jump off in a slightly different direction.</p>
<h2>Desiderata</h2>
<ol>
<li>All the parts should be sold by standard component suppliers.</li>
<li>The board should be powered moderately efficiently from AA batteries.</li>
<li>Things should be simple enough to build on stripboard.</li>
</ol>
<h2>Hardware design</h2>
<p>In essence the design is simple: start with the Arduino schematic, but remove the power supply and serial interface. This makes the schematic easy to draw:</p>
<p><img src="aarduino/schematic.jpg" alt="" class="img_border" /></p>
<p>Actually of course we need to add some hardware too:</p>
<ul>
<li>a new power supply;</li>
<li>a programming interface;</li>
<li>some application specific stuff.</li>
</ul>
<h3>Power Supply</h3>
<p>For the power supply we want something to convert the 2.5--3V from a couple of AA cells into 5V. It's clear that to do this one wants some sort of switch-mode converter, for which there are many one-chip solutions from e.g. Maxim and Linear Technology.</p>
<p>Howver, rather than design one myself, I took the lazy route and simply copied Lady Ada's <a href="http://www.ladyada.net/make/mintyboost/">Minty Boost</a> converter, which is based around the <a href="http://www.linear.com/pc/productDetail.jsp?navId=H0%2CC1%2CC1003%2CC1042%2CC1031%2CC1060%2CP1029"><span class="caps">LT1302.</span></a></p>
<p>Her project is designed to deliver rather more current than we need, and in this low-current application the efficiency is likely to fall significantly. However, I'm happy to trade my design time for this when making a prototype. The <span class="caps">LT1302 </span>has a special <em>Burst Mode</em> designed for more efficiency in low-power operations, and LadyAda uses that. However, I found it added quite a bit of extra noise to the supply rails, so I've disabled it.</p>
<p><img src="aarduino/minty_schematic.jpg" alt="" class="img_border" /></p>
<h3>Programming interface</h3>
<p>When it comes to programming, we simply omit all of the serial/USB interface and rely instead on the <span class="caps">ATME</span>ga's <span class="caps">ICSP </span>(in-circuit serial programming) interface. Although the serial/USB interface is convenient, so is the direct <span class="caps">ICSP </span>route, and I idly wonder why the Arduino doesn't adopt it. For people wanting a simple plug-and-play solution with a standard <span class="caps">USB </span>cable, one could presumably embed a <span class="caps">USB ICSP </span>programmer on the board.</p>
<p>The <span class="caps">ICSP </span>lines are brought out to a standard 6-pin header, to which any number of programmers can be attached. I use the <span class="caps">AVR</span>-ISP500 from <a href="http://www.olimex.com/">Olimex.</a></p>
<h3>Interface hardware</h3>
<p>Although I built my AArduino for a specific purpose (controlling a camera and flash), I'm not going to discuss the application specific hardware here. Suffice to say it's a very routine mixture of analogue sensor inputs, a few digital inputs which monitor push-buttons, digital outputs controlling a <span class="caps">LCD </span>text display, and finally a couple of opto-isolated digital outputs to trigger the camera and flash.</p>
<h2>Construction</h2>
<p>The whole thing was built on standard 0.1in stripboard. The layout was done manually with pencil, rubber, and <a href="../02/draw-sboard.html">layout sheets.</a></p>
<p>Sadly, I didn't record the Minty Boost part of the layout, but here's the microcontroller part:</p>
<p><img src="aarduino/layout.jpg" alt="" class="img_border" /></p>
<h2>In practice</h2>
<p>In practice the prototype works well most of the time, but inevitably there are some caveats.</p>
<p><img src="aarduino/board.jpg" alt="" class="img_noborder" /></p>
<h3>Noise</h3>
<p>I'm not sure whether it's noise on the supply rails, some sort of RF coupling, or what, but the analogue inputs on the <span class="caps">ATM</span>ega have significantly higher noise-levels when powered by battery than when, for example, an external 5V supply is connected to the board. The unsophisticated approach of adding extra low <span class="caps">ESR </span>capacitors across the supply rails helped a bit, but the difference remained.</p>
<p>The <span class="caps">LT1302 </span>has a special low-power mode which cycles the converter between burst and current modes. I found this added to the supply rail noise, so disabled it.</p>
<h3><span class="caps">LCD </span>power up.</h3>
<p>Sometimes the <span class="caps">LCD </span>display doesn't start up cleanly. Cycling usually fixes this, but it's not ideal. Presumably the switched mode supply has some sort of nasty power-up transient which confuses the display. Letting the supply stabilize before providing power to the rest of the circuit would probably fix that, or at least I guess it would.</p>
<h3>Power supply efficiency</h3>
<p>I think a good switch-mode design ought to yield an efficiency close to 90%. In practice, by shoe-horning the Minty Boost into my AArduino I see about 75% with <em>Burst Mode</em> disabled on the <span class="caps">LT1302.</span> By comparison, using a linear regulator to drop 9V from a <span class="caps">PP3 </span>battery to 5V for the Arduino has a <strong>maximum</strong> efficiency of about 56%.</p>
<h2>The final project</h2>
<p><img src="aarduino/ext.jpg" alt="" class="img_border" /> </p>AACD2A28-E204-11DE-AE1E-B90A27494AEE2009-12-06T01:13:57:57Z2013-06-05T18:13:43:43ZFaking geocaches in Garmin GPX filesMartin Oldfield<p>Garmin <span class="caps">GPS </span>receivers have a special mode to handle geocaches, but it's not been clear to me how the gadget decides which waypoints are geocaches. These brief notes describe something which works for me. </p><h2>Background</h2>
<p>Pocket Queries on the geocaching website send details of the cache locations in a <span class="caps">GPX </span>file. Typically, I augment these data with locally generated <span class="caps">GPX </span>files which contain, for example, the solutions to puzzle caches. Although it's easy to make something which looks like a <span class="caps">GPX </span>file and which loads into the <span class="caps">GPS</span>r or Google Earth, the waypoints aren't recognized as Geocaches.</p>
<p>I found this mildly annoying, because the Oregon's geocaching mode seems more convenient than just selecting particular classes of waypoints. Happily the recipe below seems to fix the problem. Obviously <span class="caps">YMMV</span>!</p>
<h2>The recipe</h2>
<p>The following guidelines seem to help, but it's quite possible that some of these things aren't needed, or that some elements not listed below are needed!</p>
<ul>
<li>In every <code>&lt;wpt&gt;</code> element you'll need a <code>&lt;groundspeak:cache&gt;</code> element, where the groundspeak namespace is bound to http://www.groundspeak.com/cache/1/0.</li>
<li>The <code>&lt;wpt&gt;</code>'s <code>&lt;name&gt;</code> element should be at most seven characters long. It's usually the GC waypoint code, and longer names are silently truncated in the cache-list display.</li>
<li>The <code>&lt;groundspeak:cache&gt;</code> element must have:<ul>
<li>a unique numeric <code>id</code> attribute;</li>
<li>an <code>available</code> attribute set to True;</li>
<li>an <code>archived</code> attribute set to False. </li>
</ul></li>
</ul>A1F5409C-4F3A-11E2-AF33-9B5441CA42B82012-12-26T08:58:00:00Z2013-06-05T18:13:43:43ZWatchdogMartin Oldfield<p>Watchdog: automatically do things when files change. </p><p>For decades, I’ve used programs like LaTeX where the workflow is:</p>
<ul>
<li>edit a file;</li>
<li>run a program e.g. LaTeX;</li>
<li>view the output e.g. in xdvi (in the 90’s), or Preview.app (today).</li>
</ul>
<p>At some point xdvi learnt to watch the output of LaTeX and automatically update itself, when the file changed. That was nice, but one still had to run LaTeX by hand after saving the source file.</p>
<p>Now nice editors make it easy to invoke the command with a keystroke, but these days, I often use <a href="https://github.com/gorakhargosh/watchdog#readme">watchdog</a> instead.</p>
<p>Once running, watchdog watches the TeX files and when one changes it automatically invokes pdflatex. Here’s the relevant command:</p>
<pre><code>% watchmedo shell-command -c "pdflatex top && open top.pdf" -p '*.tex' .</code></pre>
<p>As you’ll see, the command’s a trifle baroque, so I tend to save it as a shell script.</p>
<p>This general scheme applies to more than just LaTeX. I often write software by editing code and then invoking make: why not use watchdog instead ?</p>
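<p>For the curious, the essence of the scheme is easy to sketch. The watchdog package itself uses native file-system notifications (inotify, FSEvents, and so on) rather than polling, so the toy below—plain Python with invented names—is only an illustration of the watch-and-run idea, not how watchdog works internally:</p>

```python
import os
import time

def mtime_ns(path):
    return os.stat(path).st_mtime_ns

def run_on_change(path, action, polls=20, interval=0.05):
    """Crude polling loop: call action() each time path's mtime changes."""
    last = mtime_ns(path)
    for _ in range(polls):
        time.sleep(interval)
        now = mtime_ns(path)
        if now != last:
            last = now
            action()  # e.g. re-run pdflatex, make, or gnuplot
```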
<h2>Avoiding pipes to gnuplot</h2>
<p>Whilst it’s obviously useful if the source file is being edited by a person, watchdog is also a useful replacement for various inter-process communication schemes.</p>
<p>For example, I’m a big fan of <a href="http://www.gnuplot.info">gnuplot</a> which has a perfectly good command line interface. It works well if you’ve got fixed data to plot, and you want to fettle the plotting parameters.</p>
<p>It’s less good if you’re generating new data as you go, and you simply want the plot output to update automatically. Obviously one approach is to get the program generating the data to open a pipe to gnuplot, and then send commands along the pipe as necessary.</p>
<p>In practice though, that can be a bit of a hassle to do well, especially in languages with less whipuptitude than Perl. Since I increasingly use Haskell and ghci for simple calculations, using watchdog seems to be a better approach.</p>
<p>Explicitly, I get the Haskell to write:</p>
<ul>
<li>the data which needs to be plotted to a file or files;</li>
<li>the list of commands gnuplot needs to execute to gpout/script.gp.</li>
</ul>
<p>To generate the plots manually one needs to execute a few simple commands, which I usually put in a script:</p>
<pre><code>#! /bin/sh
echo "Rebuilding"
gnuplot -e "load 'gpout/script.gp'"
open gpout/*.pdf
echo "Done\n"</code></pre>
<p>You’ll spot two implicit dependencies: the Haskell needs to write the script to gpout/script.gp, and the commands in the script must generate <span class="caps">PDF </span>files in gpout.</p>
<p>To automate this, we just need to persuade watchdog to run this script as needed. Happily, this is easy: here’s a suitable script:</p>
<pre><code>#! /bin/sh
DIR=gpout
mkdir -p $DIR
echo "Watching $DIR..."
watchmedo shell-command -c tools/run-gnuplot -p '*.gp' $DIR</code></pre>
<h2>Other benefits</h2>
<p>It’s nice to decouple generating data from plotting it, but in practice you could get a similar effect by wrapping the necessary pipes and process control into a library. In other words, having written the library I could simply call, say, runGnuplot in ghci instead of writeFile.</p>
<p>However, the watchdog solution is completely language agnostic so it’s trivial to change where the data are generated. One case is particularly useful: you can edit the gnuplot script manually and observe the effect. It’s not quite as direct as using the gnuplot command-line directly, but it’s close, and a good way to tweak the plots. Those tweaks can then be folded back into the Haskell which writes the script.</p>
<h2>Downsides</h2>
<p>It’s a rare thing which is entirely positive, and watchdog is no exception. The main downside is that there’s no feedback from the downstream program. This is particularly an issue when editing files by hand: typically a good editor will let you jump to the source of any error directly when you invoke the compiler from within the editor.</p>
<h2>Conclusions</h2>
<p>Overall I think the main reason for writing this article is that watchdog seems a useful utility. The recipes sketched above aren’t optimal, and could easily be improved if you feel like a spot of <a href="http://en.wiktionary.org/wiki/yak_shaving">yak-shaving.</a> For example, on MacOS many applications could be persuaded to update their display with a bit of AppleScript. </p>4A49E5DA-5158-11E2-99C0-BFC241CA42B82012-12-29T01:37:12:12Z2013-06-05T18:13:43:43ZA Coordinate DecoderMartin Oldfield<p>A simple <span class="caps">PIC</span>-based coordinate decoder for geocache puzzles. </p><p> For a while now I’ve wanted to set up a geocache puzzle which the cacher could only solve by building a simple electronic circuit. This is a brief description of the design, which is now deployed near Cambridge, <span class="caps">UK.</span> The cache itself is <a href="http://www.geocaching.com/seek/cache_details.aspx?guid=70d8315e-7b6b-4eb1-b541-a81c656c6c28"><span class="caps">GC40ZBT</span></a> but to tackle it, you’ll have to solve <a href="http://www.geocaching.com/seek/cache_details.aspx?guid=533d0f0b-d247-4a8b-a5fa-b5c56f50fe6b"><span class="caps">GC40ZBM</span></a> first.</p>
<p>Both the hardware designs and software are <a href="#downloads">freely available</a>, but the keys used in the caches above aren’t included. In other words, nothing in this article will help you solve the actual geocache puzzle!</p>
<h2><em>Desiderata</em></h2>
<p>It’s clear what sort of gadget we want:</p>
<ul>
<li>It should have flashing lights.</li>
<li>It should have knobs to tweak, and buttons to push.</li>
<li>It should be simple to build.</li>
</ul>
<p>It’s also clear that there are some constraints:</p>
<ul>
<li>It must not cost the cacher much money.</li>
<li>The cacher <em>should not</em> be able to solve the puzzle other than by building the circuit.</li>
<li>Most cachers should be able to build the gadget without special training.</li>
</ul>
<p>After a bit of thinking I decided that a microcontroller running some sort of cryptography software would fit the bill. I reckoned that it would be unreasonable to ask the punter to enter more than 32-bits of data, which gives us an upper bound of 2<sup>32</sup> ≈ 4 billion possibilities. To get a feel for this, if each test took one second, it would take over 130 years to try them all.</p>
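<p>A quick check of that arithmetic, assuming a hypothetical attacker who tests one code per second:</p>

```python
codes = 2 ** 32                      # about 4.3 billion possibilities
seconds_per_year = 365 * 24 * 3600   # 31,536,000
years = codes / seconds_per_year
print(round(years))                  # 136: comfortably "over 130 years"
```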
<p>My initial thought was to build something a bit like an electronic safe: once built, the cacher could enter a code by turning a rotary encoder, if he got the combination right the coordinates would be displayed.</p>
<p>However, whilst thinking about this, another idea struck me. Geocachers normally use coordinates specified in thousandths of a minute of arc, so there are 60,000 divisions to each degree. That’s a bit less than 2<sup>16</sup>, so in 32-bits we could easily fit a square degree of locations. So, instead of building a digital safe, it might be better to build a digital scrambler which would assign a random looking 32-bit code to each nearby location, or more usefully convert the code into coordinates.</p>
<p>Specifically, the gadget would accept a 32-bit number as input, decrypt it into another 32-bit number, then use that as a pair of 16-bit N/E offsets. All of this design could be public, save for the key to the scrambling step. Given such a device it would be possible to give the cacher a code to any nearby coordinates, so if the cache was moved the hardware wouldn’t need to be changed. It would also be easy to use the same hardware to decode several locations in the puzzle.</p>
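<p>The packing itself is trivial, and worth pinning down. Here’s a Python sketch (hypothetical helper names: the real firmware is C and the host tools Haskell), using the N 51 12.345 / E 0 9.876 origin that appears later in the article:</p>

```python
ORIGIN_N = 51 * 60000 + 12345   # origin in milliminutes: N 51 12.345
ORIGIN_E = 0 * 60000 + 9876     # E 0 9.876

def pack_plaintext(deg_n, mmin_n, deg_e, mmin_e):
    """Pack two 16-bit milliminute offsets from the origin into 32 bits.

    Assumes the target is north and east of the origin, so both
    offsets are non-negative and fit in 16 bits.
    """
    dn = (deg_n * 60000 + mmin_n) - ORIGIN_N
    de = (deg_e * 60000 + mmin_e) - ORIGIN_E
    assert 0 <= dn < 2 ** 16 and 0 <= de < 2 ** 16
    return (dn << 16) | de

def unpack_plaintext(word):
    """Recover absolute N/E positions (in milliminutes) from 32 bits."""
    return ORIGIN_N + (word >> 16), ORIGIN_E + (word & 0xFFFF)
```

<p>For N 52 12.345 E 0 56.789 this yields 0xea60b741, matching the plaintext in the Haskell session below; the secret part of the design is only the scrambling between this word and the code the cacher types in.</p>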
<p>Of course, the problem remains of getting the design to the cacher without him being able to simulate it. In the end, a simple solution presented itself: microcontrollers are very cheap, so I could simply give them to the cacher. After all, it’s not as though puzzle caches are particularly popular in Cambridge! Giving out pre-programmed microcontrollers solved another problem: how cachers would actually program the chips.</p>
<p>Having started along this path, it became awfully attractive to give away a full kit of parts. That made it easy to make sure that people were using the right components, and solved any worry that people would avoid the cache because it was expensive. On the other hand, I didn’t want to bear an open-ended component cost myself. The obvious solution was to loan out kits of components and a breadboard: cachers could build the gadget, use it, then return the kit for someone else. Obviously there’s some risk that people would just walk off with things, but geocachers seem to be a trustworthy tribe.</p>
<h2>Hardware</h2>
<p>Probably the biggest design decision is what sort of display to use. There seemed to be a couple of obvious choices: 7-segment <span class="caps">LED </span>displays or a small <span class="caps">LCD </span>matrix. The former appealed more: they’re available cheaply from China via eBay, and they give the gadget a pleasingly retro feel. I might have felt differently had I lived further west: 7-segment displays are only wide enough to display a single ‘U’, not ‘W’.</p>
<p>The display needs to be big enough to display about 32-bits of input data, and about 15 characters of coordinate output. Eight digits seems sensible: 8 hexadecimal digits are exactly 32 bits; and if we display the final northing and easting separately then they’ll fit into 8 characters. Octal 7-segment displays are rare, but quad displays are widely available, cheap, and don’t require much more wiring.</p>
<p>It would be nice to drive the displays from the microcontroller without any other driver hardware (save current limiting resistors). Counting the decimal-point, each display contains 8 <span class="caps">LED</span>s, and there are 8 such displays. An 8×8 multiplexed design will need 16 output lines on the microcontroller.</p>
<p>Adding a couple of inputs for a rotary-encoder and one input for a button, that makes 19 I/O pins. Given that we’ll need at least two power pins, a 20-pin device won’t be large enough.</p>
<p>In the end, I picked a 28-pin <a href="http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en026562"><span class="caps">PIC16F886</span></a> mainly because I had some to hand. It’s also used by Microchip in one of their <a href="http://www.microchipdirect.com/productsearch.aspx?Keywords=DM164120-3">demo boards</a>. There might be cheaper options, but I didn’t spend time investigating them.</p>
<p>Given that there are some spare I/O pins, it seemed only natural to add a couple of extras:</p>
<ul>
<li>a second push-button, integrated into the rotary encoder;</li>
<li>a <a href="http://en.wikipedia.org/wiki/UNI/O"><span class="caps">UNI</span>/O</a> <a href="http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en535106">flash chip</a> in which e.g. the decoder keys could optionally be stored.</li>
</ul>
<p>Like many <span class="caps">PIC</span>s the 16F886 has an optional internal 8MHz clock which obviates the need for any external clock circuitry: we don’t need particularly high accuracy or vast performance, and power is plentiful.</p>
<p>Happily this still leaves all of the <span class="caps">ICSP </span>pins free to facilitate programming.</p>
<p>Here’s the final design courtesy of <a href="http://www.diptrace.com">DipTrace</a>:</p>
<p><a href="./big-schem.png"><img src="schematic.png" alt="" class="img_border" /></a></p>
<h3><span class="caps">PCB </span>layout</h3>
<p>Although the final puzzle uses a breadboard, I wanted to build the project on a <span class="caps">PCB </span>too. It’s nice to have robust hardware when writing software, and the gadget might be useful elsewhere.</p>
<p>Here’s the layout, again courtesy of DipTrace:</p>
<p><img src="pcb-top.png" alt="" class="img_noborder" /> <img src="pcb-bottom.png" alt="" class="img_noborder" /></p>
<p>When built the board looks like this:</p>
<p><img src="pcb.jpg" alt="" class="img_border" /></p>
<p>Were I ever to redo the board, I’d fix a couple of issues, both rotary-encoder related:</p>
<ul>
<li>the mounting slots aren’t quite right;</li>
<li>the terminal holes are a bit too small, though abusing the pins with a pair of pliers solved this problem.</li>
</ul>
<h3>Breadboard layout</h3>
<p>Happily the project fits nicely on a breadboard, though the displays do overhang the edge a bit. There’s not enough space for the flash <span class="caps">ROM, </span>but in this application there didn’t seem to be any advantage in using the chip over simply putting the key data into the microcontroller’s firmware.</p>
<p>Given that novices would be building the gadget, I wanted to draw clear, step-by-step instructions. Surprisingly I couldn’t find any particularly helpful applications, and so resorted to drawing them by hand with the <a href="http://projects.haskell.org/diagrams/">Haskell diagrams</a> framework. You can assess for yourself whether the result is easy to follow:</p>
<p><img src="bb-insts.gif" alt="" class="img_noborder" /></p>
<p>though you might prefer to see a <a href="./bb-insts.pdf"><span class="caps">PDF</span></a> if you’re actually building one. Either way, you should end up with something like this:</p>
<p><img src="bb.jpg" alt="" class="img_border" /></p>
<h2>Cryptography</h2>
<p>Recall that the basic idea is to build a gadget which takes a 32-bit encrypted number and decrypts it, treating the 32-bit plaintext as two 16-bit milliminute offsets.</p>
<p>It’s rather pretentious to call this cryptography: after all, what we really want is just a function which scrambles 32-bit quantities, parameterized by a secret key. Although it’s probably overkill here, it would be nice if the scrambler had the usual good-crypto features:</p>
<ul>
<li>small changes in the input should give big changes in the output;</li>
<li>it should be hard to infer the key from a small number of plain/crypt pairs.</li>
</ul>
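<p>To see how such a scrambler hangs together, here’s a toy 32-bit balanced Feistel network in Python. The round function is an arbitrary mixer invented for illustration — <em>not</em> skipjack’s G permutation — so it offers no real security, but it shows the two properties we care about: the mapping is a keyed bijection on 32-bit words, and decryption is just the same rounds run in reverse.</p>

```python
def round_fn(half, subkey):
    """Toy 16-bit mixer -- an invented stand-in for skipjack's G box."""
    x = (half ^ subkey) & 0xFFFF
    x = (x * 0x9E37 + 0x79B9) & 0xFFFF      # arbitrary made-up constants
    return ((x << 5) | (x >> 11)) & 0xFFFF  # 16-bit rotate left by 5

def feistel32(word, subkeys, decrypt=False):
    """Balanced Feistel on a 32-bit word split into two 16-bit halves."""
    l, r = word >> 16, word & 0xFFFF
    for k in (reversed(subkeys) if decrypt else list(subkeys)):
        l, r = r, l ^ round_fn(r, k)
    l, r = r, l                              # undo the final swap
    return (l << 16) | r
```

<p>Whatever the round function does, <code>feistel32(feistel32(w, ks), ks, decrypt=True) == w</code> for every 32-bit <code>w</code>, which is exactly the invertibility the decoder relies on.</p>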
<h3>skipjack32</h3>
<p>I’m no cryptography expert so it would be daft to invent a scrambling algorithm from scratch: instead it’s both quicker and more sensible to look online. There aren’t a vast number of choices, presumably because 32-bits is a small enough space to succumb to brute-force attacks. Most online examples appear to use a 32-bit variant of <a href="http://en.wikipedia.org/wiki/Skipjack_(cipher)">skipjack</a>.</p>
<p>Initially I found a <a href="http://search.cpan.org/~esh/Crypt-Skip32-0.17/lib/Crypt/Skip32.pm">Perl implementation</a> which happily includes the original C source written by Greg Rose. Qualcomm’s open-source site appears to host <a href="https://opensource.qualcomm.com/assets/dotc/skip32.c">the original</a>. Despite being written in 1999, it still compiles happily today, and I used it in the microcontroller firmware.</p>
<p>For actually working out which encrypted code corresponds to particular coordinates, I reimplemented the algorithm in Haskell.</p>
<p>The Haskell assumes:</p>
<pre><code>stdKey = Key [ 0x00,0x99,0x88,0x77,0x66,0x55,0x44,0x33,0x22,0x11 ]
-- Origin: N 51 12.345 E 0 9.876
stdOffs = (51 * 60000 + 12345, 9876)</code></pre>
<p>Given these we can encode coordinates thus:</p>
<pre><code>$ ghci skip32.hs
GHCi, version 7.4.2: http://www.haskell.org/ghc/ :? for help
...
*Skip32> putStrLn $ stdEncCheck (52, 12.345) (0, 56.789)
Crypt: 0xf45655de
Plain: 0xea60b741
0x002fcbb9 => N 52 12.345
0x0000ddd5 => E 0 56.789</code></pre>
<p>So N 52 12.345 E 0 56.789 is represented by the hex code 0xf45655de. Being paranoid, the code above then decodes that code and checks we get back what we expected. It’s also useful to see the coordinate offset in hex:</p>
<pre><code>*Skip32> putStrLn $ stdHexOffsets
0x002ee159 => N 51 12.345
0x00002694 => E 0 9.876</code></pre>
<h2>Firmware</h2>
<p>The firmware was written in C, and compiled with Microchip’s free <a href="http://www.microchip.com/pagehandler/en_us/devtools/mplabxc/"><span class="caps">XC8 </span>compiler</a>. It’s a fairly trivial thousand-line affair, and fits easily into the <span class="caps">PIC.</span></p>
<p>Frankly there’s little more worth saying about the code: it’s that straightforward. Most of the I/O is handled by an interrupt routine running at about 2kHz and hung off <span class="caps">TMR2.</span> All the inputs are polled in the same handler, so we don’t need to worry about a flood of interrupts being generated by bouncing switches.</p>
<h3><span class="caps">UNI</span>/O support</h3>
<p>The only tricky bit was the code which reads the <span class="caps">UNI</span>/O memory. In essence it’s straightforward: there’s a simple serial protocol to implement, and all we have to do is read a small amount of data from the flash chip.</p>
<p>In practice it proved tricky to get the timing right, though the following tricks helped:</p>
<ul>
<li>Put the code in macros rather than subroutines.</li>
<li>Use <span class="caps">TMR2 </span>(running at 62.5kHz) to synchronize all the state changes and reads.</li>
</ul>
<p>Essentially the key idea is to make all the <span class="caps">UNI</span>/O changes immediately after a <span class="caps">TMR2 </span>tick. We can then run arbitrary code, provided that we’re finished some time before the next <span class="caps">TMR2 </span>tick.</p>
<p>Things were made easier because the code only has to run once, and for a short time, at startup. So we can use <span class="caps">TMR2 </span>as we like, and disable interrupts too.</p>
<p>It’s fair to say that the <span class="caps">UNI</span>/O code isn’t particularly robust. If I were actually deploying it properly I’d make at least three changes:</p>
<ul>
<li>The <span class="caps">UNI</span>/O state machine doesn’t look very hard for errors.</li>
<li>There’s no checksum on reading data from the flash chip.</li>
<li>I’ve not checked that the jitter on the outgoing bitstream is within tolerance (there’s probably scope to reduce jitter with a bit of assembler, though).</li>
</ul>
<p>For testing I’ve used a <a href="http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en535106">11LC160</a> 16 Kbit chip, of which only the first 208 bits actually matter! You could use any device with address 0xA0 instead, e.g. any of the <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/22067J.pdf">11xxyyy family</a>.</p>
<h3>Configuration</h3>
<p>Configuration data are held in a 26-byte array:</p>
<pre><code>static uint8_t inbuff[26] = {
// Start up message: remember that the LSB is on the right, so
// this is backwards
seg_p, seg_p, seg_p, font_O, font_L, font_L, font_E, font_H,
// North offset: N 51 12.345 => 0x002ee159
0x59, 0xe1, 0x2e, 0x00,
// East offset: E 0 9.876 => 0x00002694
0x94, 0x26, 0x00, 0x00,
// Skipjack32 Key
0x00, 0x99, 0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11,
};</code></pre>
<p>You can see it’s possible to change the message displayed on startup, and the two decoder parameters: the coordinate offset and skip32 key. You’ll see too that these match the values used in the Haskell skip32 implementation.</p>
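<p>The byte order is worth checking. Here’s a hypothetical Python helper mirroring the layout of inbuff above:</p>

```python
import struct

def offset_bytes(degrees, milliminutes):
    """A coordinate offset as the four little-endian bytes in inbuff[]."""
    return struct.pack("<I", degrees * 60000 + milliminutes)

# The values in the array above: N 51 12.345 and E 0 9.876
assert offset_bytes(51, 12345) == bytes([0x59, 0xE1, 0x2E, 0x00])
assert offset_bytes(0, 9876) == bytes([0x94, 0x26, 0x00, 0x00])
```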
<p>At startup, this block might be over-written by data from the <span class="caps">UNI</span>/O flash chip. However for this to work <span class="caps">SCAN</span>_FLASH must be set at compile time:</p>
<pre><code>// Set this to non-zero to scan the flash on startup
#define SCAN_FLASH (1)</code></pre>
<h2 id="downloads">Downloads</h2>
<h3 id="download_hw">Hardware</h3>
<p>The hardware design is available under the <a href="http://creativecommons.org/licenses/by-sa/3.0/"><span class="caps">CCSA</span> 3.0 license</a>. You’ll need DipTrace to manipulate the files, but Gerber files suitable for <a href="http://www.seeedstudio.com/depot/fusion-pcb-service-p-835.html?cPath=185">Seeed Studio’s <span class="caps">PCB </span>service</a> are also included.</p>
<ul>
<li><a href="./decoder-hardware-0.1.tar.gz">Version 0.1</a></li>
</ul>
<h3 id="download_sw">Software</h3>
<p>The software I’ve written is available under the <a href="http://www.gnu.org/licenses/gpl.txt"><span class="caps">GPL</span> 3.0</a>, but you should note that the tarballs below also contain a lightly-modified version of skip32.c which is ‘licensed’ thus:</p>
<pre><code>Not copyright, no rights reserved.</code></pre>
<ul>
<li><a href="./decoder-firmware-0.1.tar.gz">Firmware version, 0.1</a></li>
<li><a href="./skip32-hs-0.1.tar.gz">Haskell Skip32 code, version 0.1</a> </li>
</ul>7AAFC14C-5E91-11E0-B6FF-2AAD7B0AA95E2011-04-04T07:59:17:17Z2013-06-05T18:13:43:43ZMoments of InertiaMartin Oldfield<p>Some basic facts about the moment of inertia </p><h2>Introduction</h2>
<p>A while ago I wanted to know the moment of inertia of a tetrahedron. I'd forgotten some of the basic stuff, and the calculation was a bit fiddly so I thought I'd write it up on here. There are three related articles:</p>
<ol>
<li><a href="http://www.mjoldfield.com/atelier/2011/03/cartesian-moi.html">Some basic results</a> (this article).</li>
<li>A toy problem: <a href="http://www.mjoldfield.com/atelier/2011/03/sphere-moi.html">the sphere.</a></li>
<li><a href="http://www.mjoldfield.com/atelier/2011/03/tetra-moi.html">The final calculation.</a></li>
</ol>
<h2>General principles</h2>
<p>When things are rotating, the obvious thing to measure is the <a href="http://en.wikipedia.org/wiki/Angular_velocity">angular velocity</a> \(\boldsymbol{\omega}\). You can imagine taking two photographs of the rotating object, a fraction of a second apart, then working out the line which is (instantaneously) at rest, and the rate at which the body is rotating about that line.</p>
<p>Another important quantity is the <a href="http://en.wikipedia.org/wiki/Angular_momentum">angular momentum</a> \(\textbf{L}\), which is defined as</p>
\[
\textbf{L} = m\, \textbf{r} \times \left( \boldsymbol{\omega} \times \textbf{r} \right).
\]
<p>It's important because the angular momentum doesn't change unless a torque is applied.</p>
<h2>The moment of inertia</h2>
<p>The angular momentum is linear in \(\boldsymbol{\omega}\), so it makes sense to define the <a href="http://en.wikipedia.org/wiki/Moment_of_inertia">moment of inertia</a> tensor \(\textbf{I}\) which satisfies</p>
\[ \textbf{L} = \textbf{I} \, \boldsymbol{\omega}. \]
<p>Obviously \(\textbf{I}\) is some quadratic function of \(\textbf{r}\), but which one?</p>
<p>Recall this identity for the <a href="http://en.wikipedia.org/wiki/Vector_triple_product#Vector_triple_product">vector triple product:</a></p>
\[
\textbf{a} \times (\textbf{b} \times \textbf{c}) = (\textbf{c} . \textbf{a}) \textbf{b} - (\textbf{b} . \textbf{a}) \textbf{c},
\]
<p>and apply it:</p>
\[
\begin{align} \frac{1}{m} \textbf{L} &= \textbf{r} \times \left( \boldsymbol{\omega} \times \textbf{r} \right), \\ &= (\textbf{r} . \textbf{r}) \; \boldsymbol{\omega} - (\textbf{r} . \boldsymbol{\omega}) \textbf{r}, \\ &= (\textbf{r}^T \textbf{r}) \; \boldsymbol{\mathbb{I}} \; \boldsymbol{\omega} - \textbf{r} \textbf{r}^T \boldsymbol{\omega},\\ &= \left( (\textbf{r}^T \textbf{r}) \; \boldsymbol{\mathbb{I}} - \textbf{r} \textbf{r}^T \right) \boldsymbol{\omega}, \\ \frac{1}{m}\textbf{I} &= (\textbf{r}^T \textbf{r}) \; \boldsymbol{\mathbb{I}} - \textbf{r} \textbf{r}^T. \end{align}
\]
<p>where \(\mathbb{I}\) is the identity matrix.</p>
<h2>Extended bodies</h2>
<p>Although the discussion above relates to a point mass, it's simple to extend to general bodies: just sum (or integrate) the contribution from each elemental mass.</p>
<p>If the body has lots of symmetry this might be easy, but the general case is often fiddly to get right.</p>
<h2>In component form</h2>
<p>When it comes to calculating a particular \(\textbf{I}\), we'll probably need the components:</p>
\[
\begin{align} \frac{1}{m}\textbf{I} &= (\textbf{r}^T \textbf{r}) \; \boldsymbol{\mathbb{I}} - \textbf{r} \textbf{r}^T, \\ &= \left(\begin{array}{ccc} x & y & z \end{array}\right) \left(\begin{array}{c} x \\ y \\ z \end{array}\right) \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) - \left(\begin{array}{c} x \\ y \\ z \end{array}\right) \left(\begin{array}{ccc} x & y & z \end{array}\right), \\ &= (x^2+y^2+z^2) \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) - \left(\begin{array}{ccc} x^2 & xy & zx \\ xy & y^2 & yz \\ zx & yz & z^2 \end{array}\right), \\ &= \left(\begin{array}{ccc} y^2 + z^2 & -xy & -zx \\ -xy & z^2 + x^2 & -yz \\ -zx & -yz & x^2 + y^2 \end{array} \right). \end{align}
\]
<h2>A computational trick</h2>
<p>It's often easier to calculate first the second moment of \(m\):</p>
\[
\begin{align} \textbf{C} &= m \, \textbf{r}\; \textbf{r}^T, \\ &= m\, \left(\begin{array}{ccc} x^2 & xy & zx \\ xy & y^2 & yz \\ zx & yz & z^2 \end{array} \right), \end{align}
\]
<p>then evaluate its trace,</p>
\[
\textrm{Tr} \, \textbf{C} = \sum_i C_{ii} = x^2 + y^2 + z^2,
\]
<p>and so calculate \(\textbf{I}\):</p>
\[
\textbf{I} = \left( \textrm{Tr} \, \textbf{C} \right) \; \mathbb{I} - \textbf{C}.
\]
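<p>A numerical sanity check, in case the matrix algebra looks suspicious: this Python sketch (plain lists, no libraries, invented function names) computes \(\textbf{I}\) for a point mass both ways.</p>

```python
def inertia_direct(x, y, z, m=1.0):
    """The moment-of-inertia matrix written out from the component form."""
    return [[m * (y * y + z * z), -m * x * y,           -m * z * x],
            [-m * x * y,           m * (z * z + x * x), -m * y * z],
            [-m * z * x,          -m * y * z,            m * (x * x + y * y)]]

def inertia_via_trace(x, y, z, m=1.0):
    """The same matrix via I = Tr(C) 1 - C, with C = m r r^T."""
    r = (x, y, z)
    C = [[m * a * b for b in r] for a in r]
    tr = C[0][0] + C[1][1] + C[2][2]
    return [[tr * (i == j) - C[i][j] for j in range(3)]
            for i in range(3)]
```

<p>For a point mass on the \(x\)-axis at \((a, 0, 0)\) both routes give \(\textrm{diag}(0, a^2, a^2)\), as expected.</p>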
<h2>The scalar simplification</h2>
<p>In general \(\textbf{I}\) will be a positive semi-definite matrix (otherwise the kinetic energy might be negative!).</p>
<p>Accordingly we can always find some frame in which \(\textbf{I}\) is diagonal. If the body is rotating about one of the axes of that frame, then the angular velocity and momentum will be collinear, and we can write a simple scalar equation:</p>
\[
L = I \omega.
\]
<p>Usually this will only be useful if the object has been carefully spun about a particular direction, or if the object has lots of symmetry. For example, the sphere's moment of inertia (about its centre) is a multiple of the identity matrix and so diagonal in all frames.</p>
<p>If we consider the \(x\)-component equation then:</p>
\[
\begin{align} L_x &= I_{xx} \omega_x, \\ &= m (y^2 + z^2) \omega_x, \\ &= m r^2 \omega_x, \end{align}
\]
<p>where \(r^2\) is just the radius of rotation about the \(x\)-axis.</p>
<p>For an extended body:</p>
\[
L_x = \left( \int r^2 dm \right) \omega_x.
\] 58525C84-5472-11DE-8919-7D5C446D55572009-03-24T15:36:46:46Z2013-06-05T18:13:43:43ZEtch to lenny upgrades on XenMartin Oldfield<p>I have a bunch of virtual Debian machines on a Xen box. When they were installed I used etch, but I've just moved one domU to lenny: here's my crib sheet. I suspect it won't be generally useful. </p><h2>General notes</h2>
<p>One of my Debian boxes runs something close to stock etch on both dom0 and most of the domUs. I had a minor issue upgrading one of the domUs to lenny, so I thought I'd document it.</p>
<h3>Most stuff works</h3>
<p>I did the upgrade without much thought, and it didn't go entirely smoothly: it took a few iterations of aptitude dist-upgrade to get there.</p>
<h3>exim4 broke</h3>
<p>I have a custom exim configuration, and the template format for the exim4.conf file has changed. I started from the new Debian file and added my own local changes again, that seemed to fix it.</p>
<h3>libc6-xen is a bit odd</h3>
<p>I found it necessary to tweak the configuration, as described in <a href="http://wiki.debian.org/Xen">the Debian Xen Wiki</a> which points to <a href="http://wiki.xensource.com/xenwiki/XenFaq#head-e05786f1e0d6a833bc146a6096cab2d96f2b30ae">the XenFAQ.</a></p>
<p>Before this tweak the logs are full of '4gb seg fixup' errors. </p>F258C7C8-59F2-11E0-BC83-33827B0AA95E2011-03-29T10:46:41:41Z2013-06-05T18:13:43:43ZPerl and Xcode 4Martin Oldfield<p>I just bought a new Mac and installed Xcode 4 on it. However, I couldn't build any Perl modules with XS (bits of C) in them. Here's a solution! </p><h2>The basic problem</h2>
<p>It's fairly well known that many of the Perl modules on <span class="caps">CPAN </span>have little bits of C in them, and so one needs a C tool chain to build them. On MacOS that means installing Xcode, and version 4 of Xcode is now available in the Mac App Store.</p>
<p>In fact, given a new Mac running MacOS 10.6.7, Xcode 4 was one of the first things I installed. Imagine my surprise when it proved impossible to install Text::CSV_XS—not the most popular module I know, but still representative:</p>
<pre><code>...
/usr/libexec/gcc/powerpc-apple-darwin10/4.2.1/as: assembler
(/usr/bin/../libexec/gcc/darwin/ppc/as or /usr/bin/../local/libexec/gcc/darwin/ppc/as)
for architecture ppc not installed
Installed assemblers are:
/usr/bin/../libexec/gcc/darwin/x86_64/as for architecture x86_64
/usr/bin/../libexec/gcc/darwin/i386/as for architecture i386</code></pre>
<p>Although it looks confusing at first, the error message is actually quite explicit: no ppc (PowerPC) assembler could be found. I assume Apple removed it in Xcode 4.</p>
<p>I don't care about <span class="caps">PPC </span>support <em>per se</em>, but it does become a problem if it breaks the other architectures. Happily, a solution is at hand.</p>
<h2>A short-term fix</h2>
<p>The key variable which governs which architectures get built is $archflags, which is defined in /System/Library/Perl/5.10.0/darwin-thread-multi-2level/Config_heavy.pl</p>
<p>Simply edit this file and change</p>
<pre><code>$archflags = exists($ENV{ARCHFLAGS}) ? $ENV{ARCHFLAGS} : '-arch x86_64 -arch i386 -arch ppc';</code></pre>
<p>into</p>
<pre><code>$archflags = exists($ENV{ARCHFLAGS}) ? $ENV{ARCHFLAGS} : '-arch x86_64 -arch i386';</code></pre>
<p>Presumably one could do something similar by manipulating the <span class="caps">ARCHFLAGS </span>environment variable, but that seemed rather fragile to me.</p>
<h2>A proper solution</h2>
<p>At some point I'm sure Apple will either fix Xcode or Perl. </p>DD8E7B04-71BC-11DF-A88E-73F95D8D4C0B2010-06-06T22:39:29:29Z2013-06-05T18:13:43:43ZGetting an ARM toolchain on MacOS 10.6Martin Oldfield<p> I'm starting to play with <span class="caps">ARM </span>microcontrollers, and building a toolchain is a necessary step. Here's a log of my experiences.</p><h2>Compilers &c.</h2>
<h3><span class="caps">YAGARTO</span></h3>
<p>Much of the web talks about <a href="http://www.yagarto.de/">Yet Another <span class="caps">GNU ARM</span> Toolchain,</a> but I couldn't get yagarto-4.4.2 to install on MacOS 10.6.</p>
<h3>CodeSourcery</h3>
<p><a href="http://www.codesourcery.com/">CodeSourcery</a> are an outfit who make commercial <span class="caps">ARM IDE</span>s. However, they make a free <a href="http://www.codesourcery.com/sgpp/lite_edition.html">Lite</a> version of their tools available which you can use from the command-line.</p>
<p>Happily James Snyder has packaged them into a convenient <a href="http://github.com/jsnyder/arm-eabi-toolchain/">Makefile:</a></p>
<pre><code>git clone http://github.com/jsnyder/arm-eabi-toolchain.git
cd arm-eabi-toolchain
sudo make install-deps
<edit Makefile to change install dir, which should be on the PATH>
make cross-install</code></pre>
<h3>Mac Ports</h3>
<p>There are several different <span class="caps">ARM </span>compilers here, though at the time of writing (May 2010) James Snyder’s versions seem to be newer.</p>
<h2>OpenOCD</h2>
<p>You can download <a href="http://developer.berlios.de/project/showfiles.php?group_id=4148">the source</a> but it's easier to get it from Mac Ports:</p>
<pre><code>sudo port install openocd</code></pre>
<p><a href="http://openocd.berlios.de/web/?page_id=54">The documentation</a> is helpful.</p>F258ACFC-59F2-11E0-A6C5-8D637B0AA95E2011-03-24T23:28:46:46Z2013-06-05T18:13:43:43ZTime Machine hasslesMartin Oldfield<p> One of my Macs recently died and I wanted to recover data from a Time Machine. Sadly this wasn't entirely trivial.</p><h2>A sudden death in the unibody.</h2>
<p>A couple of years ago I bought a unibody MacBook Pro, and fairly quickly replaced the hard drive with a fancy flash drive from Crucial. Since then it's been a fine machine: zippy and very quiet.</p>
<p>Recently, however, the flash drive suddenly and completely died. Happily I had a relatively recent disk image, so whatever happened I wouldn't lose too much work. Better still, the machine was hooked up to a second generation <a href="http://www.apple.com/timecapsule/">Time Capsule</a>, so I hoped I'd be able to restore the system to the state it was in just hours before the drive died.</p>
<p>I had a spare drive so put that in the MacBook, booted it from <span class="caps">DVD </span>and asked it to restore. It found the Time Capsule, but variously couldn't see the right sparseimage or thought it was corrupt. Not only was the process stressful and irritating, there was also a real lack of helpful progress or diagnostic information.</p>
<p>Apple installation <span class="caps">DVD</span>s usually have a copy of the Disk Utility on them, which one can use to fix problems like this, but again it was hard to make progress.</p>
<h2>A slightly better approach</h2>
<p>One of the big problems with running things from the installation <span class="caps">DVD </span>is that you're effectively giving up multi-tasking. You can't, for example, fire up a shell to see just what's going on.</p>
<p>Rather than suffer such a limitation, I installed a fairly minimal MacOS system, booted that, then ran the Migration Assistant and Disk Utility from the comfort of MacOS proper.</p>
<h3>Disk Utility and Time Capsules</h3>
<p>In principle, the Disk Utility application can repair Time Machine backups which are a bit damaged. To do this, mount the Time Capsule in the Finder, then drag the relevant sparseimage to the left-hand pane in Disk Utility.</p>
<p>Having persuaded Disk Utility to look at the data, the most important thing to note is that the Time Capsule is <span class="caps">SLOW.</span> I had a 1TB Time Capsule 2 hooked up to a 2009 unibody MacBook Pro over gigabit ethernet. The total time to 'Repair' the disk image was about 15 hours. During this time, the Disk Utility display didn't change at all!</p>
<p>Under the hood the Disk Utility's Repair function appears to call fsck_hfs. This logs information to /var/log/fsck_hfs.log, so if you've got a proper OS you can watch what's going on.</p>
<p>It's hardly the most exciting file to watch: typically you might see a new line every hour or so, but at least you can see that some progress is being made. In practice, I think the drive was fsck'd three times as the image converged to something sensible.</p>
<p>Rather disappointingly the system logs showed that Disk Utility crashed almost immediately after fsck_hfs gave the all clear.</p>
<h3>Migration Assistant and Time Capsules</h3>
<p>Besides the slowness and lack of meaningful feedback, there also seems to be an issue where the Time Capsule effectively hides an image from the Migration Assistant.</p>
<p>Some combination of waiting and rebooting everything seems to help, but I'm never very sure about rebooting the Time Capsule.</p>
<h2>Lessons Learned</h2>
<h3>At the end of the day, it worked (slowly)!</h3>
<p>To be fair, the most important observation is that, given some prodding, Time Machine worked as advertised.</p>
<p>On the other hand, where Time Machine and Time Capsule are concerned you probably need to wait <strong>a day</strong> before concluding that an apparently hung process won't get anywhere.</p>
<h3><span class="caps">CLI </span>beats <span class="caps">GUI</span></h3>
<p>However, I think it was a mistake to use Apple's <span class="caps">GUI </span>tools for the restore: they seem flaky and uninformative.</p>
<p>Next time, I think it would be better to:</p>
<ol>
<li>mount the image manually with <a href="http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/hdiutil.1.html">hdiutil;</a></li>
<li>repair the image with <a href="http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man8/fsck_hfs.8.html">fsck_hfs.</a></li>
</ol>
<p>If possible, I'd do this from some other Mac rather than the one I was trying to fix.</p>
<p>Once I had a good image, I'd probably try to install it again with the install <span class="caps">DVD, </span>before resorting to Migration Assistant. If I did go the latter route though, it would make more sense to create the initial user as some dummy account so that it wouldn't clash.</p>
<h3>Avoiding Time Machine</h3>
<p>I suspect that the best solution is simply to be more disciplined about using e.g. <a href="http://git-scm.com/">git</a> to push stuff off the machine in near-real time.</p>
<p>Although that wouldn't be trivial for things like system settings, in practice they get pushed to cloud anyway by iSync. </p>4CA4DF88-5086-11E2-B58D-19A541CA42B82012-12-28T00:33:54:54Z2013-06-05T18:13:43:43ZHaskell qua calculatorMartin Oldfield<p>Although Haskell is a fine language for hard-core programming, it’s increasingly my tool of choice for trivial arithmetic too. </p><p><a href="http://www.haskell.org/">Haskell</a> is a most unusual language. At one end it’s a hard-core functional language with impeccable CS credentials, but increasingly I’ve come to use it for simple mathematical calculations too.</p>
<p>I’m not entirely sure how the language manages to fill both roles so well, though it surely demonstrates the designers’ exquisitely good taste. However, I think the following are important:</p>
<ul>
<li>Haskell ‘feels’ more like maths than most other languages.</li>
<li>Haskell is basically <a href="http://en.wikipedia.org/wiki/Lazy_evaluation">lazy,</a> which facilitates conveniences like infinite lists.</li>
<li>Haskell has <a href="http://www.haskell.org/tutorial/numbers.html">built-in transparent support</a> for <a href="http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic">bignums.</a></li>
<li>Haskell is blessed with lots of little operators and functions which do useful things. For example, the <code>$</code> operator avoids lots of brackets: <code>f $ x</code> applies the function on its left to everything on its right, and so turns nested expressions into linear ones:<br/> <code>a $ b $ c d = a(b(c d))</code>.</li>
<li>In <a href="http://www.haskell.org/ghc/docs/latest/html/users_guide/ghci.html">ghci</a> Haskell has a splendid <a href="http://en.wikipedia.org/wiki/REPL"><span class="caps">REPL.</span></a></li>
</ul>
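<p>To make the last two points concrete, here is a tiny ghci session of my own (an illustrative sketch, not taken from any particular library) combining an infinite list, laziness and the <code>$</code> operator:</p>
<pre><code>Prelude> let squares = map (^2) [1..]
Prelude> print $ take 5 squares
[1,4,9,16,25]</code></pre>
<p>The infinite list is harmless because laziness means only the five elements we actually demand are ever computed.</p>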
<p>Besides the language itself of course, it’s helpful to have some good libraries. Happily <a href="http://hackage.haskell.org/packages/hackage.html">Hackage</a> has lots of good stuff in it, though it can’t match e.g. the <a href="http://www.cpan.org/"><span class="caps">CPAN.</span></a></p>
<h2>A simple calculation</h2>
<p>As an example, suppose we want to know the sum of the factorials of all the primes less than 50. Here’s one way (assuming you’ve installed the <a href="http://hackage.haskell.org/package/primes">primes package</a>):</p>
<pre><code>$ ghci
GHCi, version 7.4.2: http://www.haskell.org/ghc/ :? for help
Prelude> import Data.Numbers.Primes
Prelude Data.Numbers.Primes> let fac i = product [1..i]
Prelude Data.Numbers.Primes> sum [ fac i | i <- takeWhile (< 50) primes ]
258623301959883784393716899074939573050130131319471976510768</code></pre>
<p style="margin-left:3em"><small><em>Note: I’ve removed some of ghci’s diagnostics to make things clearer.</em></small></p>
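<p>If you would rather not install the package at all, a two-line trial-division sieve (my own quick sketch: far slower than the primes package, but fine below 50) gives the same answer:</p>
<pre><code>Prelude> let sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
Prelude> let primes = sieve [2..]
Prelude> sum [ product [1..i] | i <- takeWhile (< 50) primes ]
258623301959883784393716899074939573050130131319471976510768</code></pre>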
<h2>Defining variables</h2>
<p>Haskell can also play the poor man’s symbolic calculator, here calculating an approximate value of <em>g</em>, the <a href="http://en.wikipedia.org/wiki/Gravity_of_Earth">acceleration due to gravity</a> at the Earth’s surface:</p>
<pre><code>$ ghci
GHCi, version 7.4.2: http://www.haskell.org/ghc/ :? for help
Prelude> let bigG = 6.674e-11
Prelude> let mE = 6e24
Prelude> let rE = 6.4e6
Prelude> bigG * mE / rE^2
9.776367187499998</code></pre>
<p style="margin-left:3em"><small><em>Note: I’ve removed some of ghci’s diagnostics to make things clearer.</em></small></p>
<h2>List Comprehensions</h2>
<p>One of Haskell’s most succinct features is the <a href="http://en.wikipedia.org/wiki/List_comprehension#Haskell">list comprehension.</a> Continuing from above let’s see how <em>g</em> changes as we ascend from the Earth:</p>
<pre><code>Prelude> let g h = bigG * mE / (rE + h)^2
Prelude> [ (h, g h) | h <- [0,10000..100000] ]
[(0.0,9.776367187499998),(10000.0,9.745887495406212),...
</code></pre>
<p>That’s hardly the most readable of output, but Haskell has a <a href="http://www.haskell.org/ghc/docs/latest/html/libraries/base/Text-Printf.html">printf clone</a> which solves the problem:</p>
<pre><code>Prelude> import Text.Printf
Prelude Text.Printf> let hs = [0,10000..100000]
Prelude Text.Printf> putStr $ concat [ printf "%6.0f %.4f\n" h (g h) | h <- hs ]
0 9.7764
10000 9.7459
20000 9.7156
30000 9.6854
40000 9.6553
50000 9.6254
60000 9.5956
70000 9.5660
80000 9.5365
90000 9.5071
100000 9.4779</code></pre>
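<p>Incidentally, printf here returns a String, which is why it can live inside concat. It can equally run directly in IO, so mapM_ gives an arguably tidier spelling of the same loop (continuing the session above):</p>
<pre><code>Prelude Text.Printf> mapM_ (\h -> printf "%6.0f %.4f\n" h (g h)) hs</code></pre>
<p>which prints the same table.</p>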
<p> Of course you can do more than just a simple map. It’s easy to loop over more than one variable:</p>
<pre><code>$ ghci
Prelude> import Data.Numbers.Primes
Prelude Data.Numbers.Primes> [ p^i | p <- take 5 primes, i <- [1..3] ]
[2,4,8,3,9,27,5,25,125,7,49,343,11,121,1331]</code></pre>
<p>or define new variables:</p>
<pre><code>... > [ (p,i,n) | p <- take 5 primes, i <- [1..3], let n = p^i ]
[(2,1,2),(2,2,4),(2,3,8),(3,1,3),(3,2,9),(3,3,27),(5,1,5),(5,2,25)...</code></pre>
<p>or add conditions:</p>
<pre><code>... > [ (p,i,n) | p <- take 5 primes, i <- [1..3], let n = p^i, n `mod` 10 == 7 ]
[(3,3,27),(7,1,7)]</code></pre>
<h2>Purity</h2>
<p>Haskell is a <a href="http://www.haskell.org/haskellwiki/Functional_programming#Purity">pure</a> language which means, amongst other things, that random bits of the program <a href="http://www.haskell.org/haskellwiki/IO_inside">can’t simply do I/O.</a></p>
<p>However, if you just want to write the results of a calculation to a file, it’s usually possible to ignore these issues by replacing <code>putStr</code> with <code>writeFile</code>. One twist you’ll probably need is to convert the result of the calculation into a string first. Happily <code>show</code> does a passable job of that most of the time (and ghci uses it implicitly):</p>
<pre><code>$ ghci
Prelude> import Data.Numbers.Primes
Prelude Data.Numbers.Primes> let ps = takeWhile (< 50) primes
Prelude Data.Numbers.Primes> ps
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]
Prelude Data.Numbers.Primes> putStrLn $ show ps
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]
Prelude Data.Numbers.Primes> writeFile "primes.txt" $ show ps
Prelude Data.Numbers.Primes> ^D
$ cat primes.txt
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]</code></pre>
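<p>Going the other way is just as easy: read is (roughly) the inverse of show, so, continuing with the primes.txt we just wrote, the list can be recovered directly:</p>
<pre><code>$ ghci
Prelude> s <- readFile "primes.txt"
Prelude> read s :: [Integer]
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]</code></pre>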
<h2>Source files</h2>
<p>So far, all of these examples are just typed at the ghci prompt. In practice, for more complicated calculations, or those you want to keep, it’s helpful to save things in a Haskell file which you can then load into ghci.</p>
<p>I use this a lot for solving <a href="http://www.geocaching.com/">geocache</a> puzzles. For example, there’s a cache in Paris where you have to <a href="http://www.geocaching.com/seek/cache_details.aspx?guid=4674ee30-4069-45fd-b2a9-f8f8ab0bbfcf">identify the models of car</a> and then do some simple arithmetic.</p>
<p>Here, in its entirety, is the trivial Haskell program I wrote as I solved it (with fake data):</p>
<pre><code>a = 123
b = 123
c = 123
d = 123
e = 123
f = 123
g = 123
h = 123
i = 123
j = 123
k = 123
l = 123
m = 123
n = 123
o = 123
p = 123
xxxx = c + e + f + l + m + n + o - 96 - 52
yyyy = a + b + d + g + h + i + j + k + p + 3701 - 37</code></pre>
<p> And here’s how I ran it:</p>
<pre><code>$ ghci cars.hs
GHCi, version 7.4.2: http://www.haskell.org/ghc/ :? for help
... [1 of 1] Compiling Main ( cars.hs, interpreted )
Ok, modules loaded: Main.
*Main> xxxx
9999
*Main> yyyy
9999</code></pre>
<p>There’s nothing very sophisticated or elegant about it, but equally the source code is almost exactly what I’d write on paper were I to do it by hand.</p>
<p>However, unlike the paper version, it would be easy to extend this if I wasn’t sure of some of the values and wanted to calculate all the possible coordinates. If I felt particularly keen I could even write them to a file, perhaps in a format that e.g. Google Earth could understand.</p>
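<p>A sketch of that extension, with invented ranges for the sake of illustration: suppose only c and p were in doubt, everything else still being the fake 123. A list comprehension in a variant of the same file enumerates every candidate pair:</p>
<pre><code>-- the certain letters, fixed at the fake value as before
a = 123; b = 123; d = 123; e = 123; f = 123; g = 123; h = 123
i = 123; j = 123; k = 123; l = 123; m = 123; n = 123; o = 123

-- c and p are uncertain, so try every plausible reading
candidates = [ (xxxx, yyyy)
             | c <- [120..125], p <- [120..125]
             , let xxxx = c + e + f + l + m + n + o - 96 - 52
             , let yyyy = a + b + d + g + h + i + j + k + p + 3701 - 37 ]</code></pre>
<p>Loading this into ghci and typing candidates prints all 36 possibilities.</p>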
<h2>Weaknesses</h2>
<h3>Text</h3>
<p>Although it’s not always true, I still find Perl a better tool for quickly munging text. Perhaps it’s familiarity, or perhaps the seamless integration of regexps into the language removes just that bit of friction.</p>
<h3>External data</h3>
<p>One of the consequences of Haskell’s immutable data and purity is that it can be messy to work with e.g. a dictionary stored in a file. The contents of that file might change, so deep in the bowels of a library you can’t simply open it and read the contents without jumping through a hoop or two.</p>
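<p>For example (a hypothetical sketch: the module name and words are invented), a dictionary baked into a Haskell source file looks like this:</p>
<pre><code>-- Words.hs : machine generated, do not edit by hand
module Words (dictionary) where

dictionary :: [String]
dictionary = ["apple", "banana", "cherry"]</code></pre>
<p>Pure code can then just import Words and use dictionary, with no I/O in sight.</p>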
<p>For small–medium sized data sets, simply embedding the data in a Haskell source file seems a reasonable hack. Typically I’d write some trivial program to create that source file, which could then be included like any other library. </p>14FAB5E4-FC61-11DD-8670-EEC6349F32812009-02-16T19:35:55:55Z2013-06-05T18:13:43:43ZMacOS X and the Olimex AVR-ISP500Martin Oldfield<p>How to make the Olimex <span class="caps">AVR</span>-ISP500 work on MacOS 10.5.6: install version 1.005 of the firmware. </p><p>Olimex make a wide range of handy inexpensive products. One of these is the <a href="http://www.olimex.com/dev/avr-isp500.html"><span class="caps">AVR</span>-ISP500,</a> a <span class="caps">USB</span>-hosted in-circuit programmer for <span class="caps">AVR </span>microcontrollers. I was keen to use one of these to program ATmega168 chips to make my own Arduino-like projects.</p>
<p>Whilst the programmer worked perfectly on my Linux box, it sadly failed on my MacBook Pro, which runs OS 10.5.6. I emailed Olimex about this, and they provided a fix <strong>almost immediately</strong>—I emailed them on Saturday and had the fix on Monday. In short, you simply need to update the firmware to version 1.005.</p>
<p>Thanks and much kudos to Olimex for fixing this so quickly.</p>
<h2>Symptoms of the problem</h2>
<p>If things are working properly, then when you connect the device to the <span class="caps">USB </span>port a device should be created in /dev. On my MacBook Pro (after applying the fix) I see</p>
<pre><code> $ ls /dev/cu.usb*
/dev/cu.usbmodemfd1131</code></pre>
<p>However with firmware 1.004 the Mac doesn't create the device. Instead, I find the following entries in the system log:</p>
<pre><code>$ sudo dmesg
...
0 0 AppleUSBCDCACMControl: getFunctionalDescriptors
- Descriptors are incorrect, checking...
AppleUSBCDCACMData: start: InterfaceMappings dictionary not found for
this device. Assume CDC Device...
0 0 AppleUSBCDCACMData: start - Find CDC driver failed
...</code></pre>
<p>If you poke around in the Apple System Profiler then the device is found on the <span class="caps">USB </span>bus, so it's clear that there's some sort of compatibility problem.</p>
<h2>How to fix it</h2>
<p>Simply download the new firmware and apply it. Both the firmware and the manual can be found on <a href="http://www.olimex.com/dev/avr-isp500.html">the Olimex website.</a></p>
<p>The manual describes in detail how to apply the upgrade from Windows, so that's what I did. The hardest part of the job is installing the Windows drivers, and trying not to curse at the horrors of using Windows again.</p>
<p>The whole process took about ten minutes.</p>
<h2>When it's fixed</h2>
<p>Having upgraded the firmware, just plug it back into the Mac. The system log now shows:</p>
<pre><code>$ sudo dmesg
...
AppleUSBCDCACMData: start: InterfaceMappings dictionary not found for
this device. Assume CDC Device...
AppleUSBCDC::createSerialStream NON WAN CDC Device
AppleUSBCDC::createSerialStream using default naming and suffix...
AppleUSBCDCACMData: Version number - 3.2.12,
Input buffers 8, Output buffers 16
...</code></pre>
<p>Furthermore, you really will find an entry in /dev, which is just what e.g. avrdude needs. </p>48C7AFC4-F79A-11DD-981D-BF0DC1244EA92009-02-10T17:42:51:51Z2013-06-05T18:13:43:43ZStripboard TemplatesMartin Oldfield<p>A simple Perl script to generate blank stripboard templates. </p><p>When I was young, dot-matrix printers were the height of sophistication, and I remember writing a program in <span class="caps">BBC</span> Basic to print blank <a href="http://en.wikipedia.org/wiki/Stripboard">stripboard</a> templates to help me lay out small electronic circuits.</p>
<p>This is an update of the same idea, but it's written in Perl and generates PostScript. Here's a sample:</p>
<p><img src="ds/sample.png" alt="" /></p>
<h2>Installation instructions</h2>
<p>The program is a single file Perl executable, so you just need to <a href="/atelier/2009/02/ds/draw-sboard.pl">download it</a> and copy it to a suitable place. For example, to download it and put it in /usr/local/bin, you could do this:</p>
<pre><code>% wget http://www.mjoldfield.com/atelier/2009/02/ds/draw-sboard.pl
% sudo mv draw-sboard.pl /usr/local/bin
% sudo chmod a+rx /usr/local/bin/draw-sboard.pl</code></pre>
<h2>Execution instructions</h2>
<p>If you want to print a template for a piece of board with 10 strips of 20 holes each, then you should do something like this:</p>
<pre><code>% draw-sboard.pl 10 20
% lpr 10x20@100.ps</code></pre>
<p>If you want to get more information, do this:</p>
<pre><code>% draw-sboard.pl --help</code></pre>
<h2>Sample output</h2>
<p>Of course, you could just download some I prepared earlier:</p>
<table><tr><th style="padding-left:1em">Strips</th><th style="padding-left:1em">Holes</th><th style="padding-left:1em">Magnification</th><th style="padding-left:1em">Files</th></tr><tr><td style="padding-left:1em">9</td><td style="padding-left:1em">25</td><td style="padding-left:1em">2x</td><td style="padding-left:1em"><a href="/atelier/2009/02/ds/9x25@200.ps">PostScript</a> <a href="/atelier/2009/02/ds/9x25@200.pdf"><span class="caps">PDF</span></a></td></tr><tr><td style="padding-left:1em">24</td><td style="padding-left:1em">37</td><td style="padding-left:1em">1x</td><td style="padding-left:1em"><a href="/atelier/2009/02/ds/24x37@100.ps">PostScript</a> <a href="/atelier/2009/02/ds/24x37@100.pdf"><span class="caps">PDF</span></a></td></tr><tr><td style="padding-left:1em">36</td><td style="padding-left:1em">50</td><td style="padding-left:1em">1x</td><td style="padding-left:1em"><a href="/atelier/2009/02/ds/36x50@100.ps">PostScript</a> <a href="/atelier/2009/02/ds/36x50@100.pdf"><span class="caps">PDF</span></a></td></tr><tr><td style="padding-left:1em">36</td><td style="padding-left:1em">170</td><td style="padding-left:1em">0.5x</td><td style="padding-left:1em"><a href="/atelier/2009/02/ds/36x170@50.ps">PostScript</a> <a href="/atelier/2009/02/ds/36x170@50.pdf"><span class="caps">PDF</span></a></td></tr></table>