Monthly Archives: June, 2014


A geeky post. You have been warned.

I wanted to make a brief reference to my favourite new technology: Docker. It’s brief because this is far too big a topic to cover in a single post, but if you’re involved in any kind of Linux development activity, then trust me, you want to know about this.

What is Docker?

Docker is a new and friendly packaging of a collection of existing technologies in the Linux kernel. As a crude first approximation, a Docker ‘container’ is like a very lightweight virtual machine. Something between virtualenv and VirtualBox. Or, as somebody very aptly put it, “chroot on steroids”. It makes use of LXC (Linux Containers), cgroups, kernel namespaces and AUFS to give you most of the benefit of running several separate machines, but they are all in fact using the same kernel, and some of the operating system facilities, of the host. The separation is good enough that you can, for example, run an Ubuntu 12.04, an Ubuntu 14.04 and a SUSE environment, all on a CentOS server.

“Fine”, you may well say, “but I can do all this with VirtualBox, or VMware, or Xen – why would I need Docker?”

Well, the difference is that Docker containers typically start up in milliseconds instead of seconds, and more importantly, they are so lightweight that you can run hundreds of them on a typical machine, where using full VMs you would probably grind to a halt after about half a dozen. (This is mostly because the separate worlds are, in fact, groups of processes within the same kernel: you don’t need to set aside a gigabyte of memory for each container, for example.)

Docker has been around for about a year and a half, but it’s getting a lot of attention at present partly because it has just hit version 1.0 and been declared ready for production use, and partly because, at the first DockerCon conference, held just a couple of weeks ago, several large players like Rackspace and Spotify revealed how they were already using and supporting the technology.

Add to this the boot2docker project which makes this all rather nice and efficient to use even if you’re on a Mac or Windows, and you can see why I’m taking time out of my Sunday afternoon to tell you about it!

I’ll write more about this in future, but I think this is going to be rather big. For more information, look at the Docker site, search YouTube for Docker-related videos, and take a look at the Docker blog.

Speaking to the future?

From the “Things I should patent but probably won’t” department…

Predictive Loudspeakers

Yesterday, I was talking to a loudspeaker designer, who was describing the mechanical limitations of speaker cones. One of the key problems is that a loudspeaker cone is basically a mass on a spring. Once you give it an impulse, it rebounds towards its original position with a velocity and momentum of its own, and these affect the sound that comes immediately afterwards. It also, of course, has its own resonant frequencies. And these factors mean that people have spent a lot of time trying to minimise the mass and resonance of loudspeaker cones, or to replace them with complex electrostatic devices, or vibrating ribbons, or whatever.

Now, it occurred to me that as we start to get digital linkups to loudspeakers, we can do something we could never do before: we can predict the future.

At least, we can buffer the digital signals for a while as they come into the loudspeaker. As long as we do this for all of our channels equally, the delay is not usually a problem. (I am thinking primarily about listening to music at home or in a studio; if this were audio for a movie, you’d need to delay the image by the same amount, and it wouldn’t work at all for live PA systems, but bear with me…) We could then look ahead in that buffer and drive the loudspeaker cone based not just on what we want now, but on what we are going to want a few milliseconds down the line. The signal you feed to the speaker would then be the first derivative, or the Laplace transform, or something, of the sound you actually want to come out once delays, masses and spring coefficients are taken into account.

Now, I’m not a control systems expert, and I don’t know how difficult constructing a dynamic PID-feedback system would be, but this is at least a very controlled environment, and the system could, if needed, be self-monitoring and adapt over time.
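To make the idea concrete, here is a minimal Python sketch. All the numbers are invented for illustration, and a real cone is far messier than the ideal mass-spring-damper modelled here; the point is simply that, because buffering gives us the whole future of the signal, we can differentiate the desired cone motion and solve the equation of motion directly for the drive force, rather than waiting for error feedback.

```python
import numpy as np

# Hypothetical cone model: an ideal mass-spring-damper. These numbers are
# made up for illustration, not taken from any real driver.
m, c, k = 0.01, 0.5, 2000.0   # mass (kg), damping (N*s/m), stiffness (N/m)
dt = 2e-5                      # 50 kHz sample interval for the sketch

# The motion we *want* the cone to make. Thanks to buffering, the whole
# waveform is known in advance; a short ramp lets the cone start from rest.
t = np.arange(0.0, 0.04, dt)
ramp = np.minimum(t / 0.01, 1.0)
x_des = 1e-3 * ramp * np.sin(2 * np.pi * 100 * t)   # 1 mm peak, 100 Hz tone

# Feed-forward inversion: differentiate the known *future* signal and solve
# the equation of motion, F = m*x'' + c*x' + k*x, for the required force.
v_des = np.gradient(x_des, dt)
a_des = np.gradient(v_des, dt)
F = m * a_des + c * v_des + k * x_des

# Sanity check: drive a simulated cone with F and see how closely it
# follows the waveform we asked for.
x, v = 0.0, 0.0
x_sim = np.empty_like(x_des)
for i, f in enumerate(F):
    a = (f - c * v - k * x) / m
    v += a * dt                # semi-implicit Euler integration step
    x += v * dt
    x_sim[i] = x
```

In this idealised model the inversion is essentially exact, so the simulated cone tracks the target waveform very closely; a real system would need the self-monitoring adaptation mentioned above, since mass, damping and stiffness all drift with temperature and age.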

Such analysis could, of course, already have been done in other places that have access to the digital stream – like the CD/DVD player – but the earlier stages will not typically have much information about the amps and speakers. Now, however, the trend is towards active speakers which include their own carefully-matched amps, and towards digital, even wireless, links replacing the old analogue cables. So this becomes quite possible, and the market is ready.

What do you think? Anyone want to invest huge amounts of capital and help me make the speakers of the future? Or is this already available?

Music Nerd

Like most people who own Sonos kit, I’m a big fan of my loudspeakers and amps. They work beautifully, and sound great.

However, I suspect most of their users would be less interested than me to discover that by pointing a browser at ip_addr:1400/status I can discover, for example, which version of the Linux kernel each loudspeaker is running, and the fact that they seem to incorporate an accelerometer.
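For the similarly curious, fetching such a page is a one-liner. In this little sketch, port 1400 and the /status path are as described above, while the function name and the IP address are made-up examples – your speaker’s address on your LAN will differ:

```python
from urllib.request import urlopen

def sonos_status_url(ip, page="status"):
    """Build the URL for a Sonos speaker's built-in diagnostic pages.

    Port 1400 and the /status index page are as described above; the
    IP address is whatever your speaker happens to have on your LAN.
    """
    return f"http://{ip}:1400/{page}"

# To query a real speaker (the address here is a made-up example):
#   html = urlopen(sonos_status_url("192.168.1.42")).read().decode()
#   print(html)
```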

I couldn’t do that with my old record deck, now, could I?

Where is everybody?

If we’re right that there are 100,000 or more intelligent civilizations in our galaxy, and even a fraction of them are sending out radio waves or laser beams or other modes of attempting to contact others, shouldn’t SETI’s satellite array pick up all kinds of signals?

But it hasn’t. Not one. Ever.

Where is everybody?

That question (or a variant of it) is known as The Fermi Paradox, and the above quote is taken from this rather nice and very readable article which outlines some of the big questions and possible answers.

Many thanks to Michael Fraser for the link.

Standing guard


To biblically go where no man has gone before

It occurs to me that the Book of Job has a persuasive argument for space travel.

Man is born unto trouble, as the sparks fly upward.

This is surely meant to inspire us to explore the opportunities that zero-gravity has to offer.

Talking of which, why do the crew of the Starship Enterprise always stay on the floor? Even when under attack, when the last photon torpedo has been fired, the shields are out and even the life-support systems are failing, captain, somehow the gravitational field never ceases to function.

I think they have their priorities wrong. Life-support is more important than avoiding weightlessness, people! Especially if, as would seem to be the case, you are born unto trouble.



Lovely cartoon by Mobii, who has since done several variations on the theme. I ‘liked’ this on Facebook recently, but wanted to save it in a place where I could find it again…

Post-processing RAW for Fujifilm cameras

One thing for which the Fujifilm cameras (such as my beloved X-Pro1) are known is their impressive on-board JPEG converter, which can produce sufficiently yummy images that many people who would otherwise shoot RAW just stick to JPEG with these devices.

I, however, want to stick with RAW, and I found that getting the best out of it takes rather more initial tweaking with the Fuji cameras than it did, say, with my Canon. I eventually settled on a small boost to the saturation (+13), and quite a large amount of sharpening (+60), and saved that as a Lightroom preset which I now apply as I import any images coming from the X-Pro1.

However, the biggest improvement came, I think, when Adobe Camera Raw (the engine behind Lightroom & Photoshop imports) was upgraded a couple of months ago. One of the easy-to-miss features was the inclusion of Fujifilm camera profiles which mimic the film emulation modes found in the camera. Even when I had upgraded and knew it was there, it was still a little tricky to find, but it’s under the Camera Calibration section of the Develop module.

I’ve found that experimenting with these profiles, and particularly using the VELVIA emulation while reducing my previous saturation setting a little, can bring much more richness to the colours.

When the law is an ass…

Some patent lawyers sent me a few bits of paper this week.


I reckon there’s well over 1000 pages here, shipped at, I imagine, vast expense all the way from Atlanta to my recycling bin here in Cambridge. The big box in the foreground brought them here. The slim envelope in the background is for returning the half-dozen pages that actually need my signature.

I’m not blaming this particular firm for this foolishness: they are probably obliged to provide me with hard copies by some outdated regulation kept in existence by extensive lobbying from FedEx and Xerox. But you’d think they could find an alternative. Like emailing PDFs. Especially since (a) I don’t need to read them to sign the bits of paper and (b) their client is Google…

Anyway, now you know where the trees have gone.



From 9Gag

The race is to the Swift?

I love my Mac and iOS devices, but writing native apps for them has always been made somewhat less pleasurable by the programming languages available. Objective-C (which is behind the typical app on your iPhone or Mac) has its merits, or at least, had its merits when it was designed 30 years ago, but things have moved on quite a lot since then. And don’t get me started on the abomination that is AppleScript…

That’s why, amongst the panoply of geeky goodies that Apple announced at its developer conference this week, the thing that interested me most is their new programming language, Swift, which looks rather lovely. (You can find excellent introductory talks about it here.) It’s early days yet, but may be good enough that, henceforward, people will flock to Apple’s development environment because of, rather than despite, the language.

It’s not clear whether Swift will be available anywhere other than on Apple platforms, and there may be a certain degree of deliberate lock-in here. But that’s better than the old situation where Objective-C was available elsewhere, but nobody really cared.

All of which may help to explain why the book The Swift Programming Language had been downloaded by more than a third of a million people within the first 24 hours of anyone knowing the language even existed.

What Ike was like

This is a splendid article by Val Lauder, about Eisenhower’s personal feelings on the D-Day landings.

When we passed 45 minutes, and he could no longer ignore his aide’s anything-but-subtle glances at his watch, Ike said he would take three more questions. I do not remember the first two. Nor will I ever forget the last one.

Strongly recommended.

My father-in-law was one of the 101st Airborne paratroopers to drop behind the lines.

© Copyright Quentin Stafford-Fraser