Category Archives: Open Source

Pannellum in Pittenweem

I’ve had fun in the last year or so playing with spherical cameras (often known as 360-degree cameras) and I’ve posted a few on here. But they’ve always had a problem: you really need a plugin to view them, which is untidy. This one of the Sacré-Coeur, for example, relies on a plugin from the Ricoh site.

So I’ve been delighted to discover Matthew Petroff’s Pannellum, a panorama viewer created using just HTML5, CSS3, JavaScript, and WebGL, which means it runs natively in most modern browsers.

Here’s an example from Pittenweem, a favourite spot I discovered on my campervan trip over Christmas, just north of Edinburgh.

You can drag the image around to look in different directions, and you can zoom in and out by scrolling or by using the Shift and Ctrl keys.
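If you want to try the same thing on your own site, Pannellum can be embedded with nothing more than an iframe pointing at its standalone viewer. A minimal sketch (the CDN version number and panorama URL here are placeholders you’d replace with your own):

```html
<iframe width="600" height="400" allowfullscreen
        src="https://cdn.pannellum.org/2.5/pannellum.htm#panorama=https://example.com/my-panorama.jpg">
</iframe>
```

More elaborate configurations (initial view direction, hotspots and so on) can be supplied via a JSON config file instead of URL parameters.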

On my early experiments, it seems to work very well, even on my fairly elderly laptop. It even has a full-screen button…

So you may be seeing a few more of these here in the near future!

Old News

A couple of days ago, I received some suggestions for improvements to a program I had written. This isn’t unusual: I’m writing code all the time, and much of it needs improving. (Thanks, Sam!) No, the surprise in this case was that the suggested changes were to a little script called newslist.py that I wrote in 1994 and haven’t updated since. It wasn’t very important or impressive, but seeing it again made me a bit nostalgic, because it was very much an artefact of its time.

For a couple of years, I had been playing with early versions of a new programming language called Python (still at version 0.9 when first I fell in love with it). In those days, online discussions about things like new languages occurred on forums called Usenet newsgroups. (As an aside, I was also playing with a new operating system called Linux, which Linus Torvalds had announced on the comp.os.minix newsgroup with one of those throwaway phrases that have gone down in history: “I’m doing a (free) operating system — just a hobby, won’t be big and professional…”.)

Anyway, the Usenet group archives are still accessible now through web servers like Google Groups, but the usual way to read them back then was to fire up a news-reading program and point it at a local news server, which could serve up the messages to you using the ‘network news transfer protocol’ NNTP. Since you wouldn’t necessarily have a fast connection to anywhere else in the world from your desktop, organisations such as universities and the more enlightened companies would run their own NNTP servers and arrange with their pals in other organisations to synchronise everything gradually in the background (or, at least, to synchronise the newsgroups they thought would be of local interest). When a user posted a new message, it would trickle out to most of the servers around the world, typically over the next few hours.

But another novelty was catching my attention at that time… This thing called the World Wide Web. Early web browsers generally spoke enough NNTP to be able to display a message (and maybe even the list of messages in a group), so you could include ‘news://‘ URLs in your web pages to refer to Usenet content. But a browser wasn’t much good for more general news perusal because it didn’t have a way to find out which newsgroups were available on your local server. My newslist.py script was designed to fill that gap by connecting to the server, getting the list of its groups, and creating a web page with links to them displayed in a nice hierarchical structure. You could then dispense with your special newsgroup app, at least for the purposes of reading.
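The core of the script is easy to reconstruct in modern Python (a sketch of the idea with my own function names, not the 1994 code): split each group name on its dots to build a tree, then render the tree as nested HTML lists whose leaves are news: links.

```python
def build_tree(groups):
    """Build a nested dict from dotted newsgroup names, so that
    'comp.lang.python' becomes tree['comp']['lang']['python']."""
    tree = {}
    for name in groups:
        node = tree
        for part in name.split("."):
            node = node.setdefault(part, {})
    return tree

def render_html(tree, prefix=""):
    """Render the tree as nested <ul> lists, with 'news:' URLs at
    the leaves -- the kind of page newslist.py generated."""
    items = []
    for part, children in sorted(tree.items()):
        full = f"{prefix}.{part}" if prefix else part
        if children:
            items.append(f"<li>{part}{render_html(children, full)}</li>")
        else:
            items.append(f'<li><a href="news:{full}">{full}</a></li>')
    return "<ul>" + "".join(items) + "</ul>"
```

Feed it the group list fetched from an NNTP server and write the result to a file, and you have the essence of the original.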

When version 1.1 of Python was released, Guido van Rossum added a Demo directory with some examples of things you could do with the language, and newslist.py was included. And there it remained for a couple of decades, until, I discover, it was removed in 2.7.9 because the comment I had casually included at the top about it being free “for non-commercial use” no longer quite fit with the current Python licensing scheme. (I would happily have changed that, had I known, but I wouldn’t have imagined anybody was still using it!) The Demo directory itself was dropped in Python 3, and so newslist.py was consigned to the historical archives.

So you can understand my surprise at discovering that somebody was still experimenting with it now! I didn’t know anybody had an NNTP server any more.

What’s almost more surprising is that one of my two email addresses, mentioned in the code, is still working 23 years later, so he was able to write and tell me!

All of which tells me I should probably pay more attention to what I put in the comments at the top of my code in future…

Tips for using a private Docker registry

This is a geeky post for those Googling for relevant phrases. Sometimes a Docker Registry is referred to as a ‘Docker repository’; technically, they’re different things, but the terms are often used interchangeably.

It can be useful to have a private Docker repository for sharing images within your organisation, and from which to deploy the definitive versions of your containers to production.

At Telemarq, we do this by running:

  • a standard registry:2 container on one of our DigitalOcean servers
  • an nginx container in front of it, with basic HTTP auth enabled
  • a Let’s Encrypt system to provide HTTPS certificates for nginx, so the communications are secure.

The registry container can, itself, handle both authentication and the certificates, but it’s easier for us to deploy it this way as part of our standard infrastructure. It all works very nicely, and we’re just starting to incorporate it into some of our more serious workflows.

So how do you make sure that the right images end up in your repository?

One practice we adopt for any deployment system, with or without Docker, is to require that things pushed to the servers should come directly from the git repository, so that they aren’t influenced by what just happens to be in the directory on some arbitrary machine at some time. Typically we might have a script that creates a temporary directory, checks out a known version of the code, builds and deploys it to the server, and then tidies up after itself. (If you use a continuous delivery system, this may happen automatically on a regular basis.)
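As a sketch of that pattern (function and argument names are mine, not an actual Telemarq script), the ‘deploy only from git’ idea looks like this:

```python
import shutil
import subprocess
import tempfile

def deploy(repo_url, ref, build_and_deploy, run=subprocess.run):
    """Check out a known git ref into a throwaway directory, hand it
    to the build/deploy step, then tidy up afterwards -- so that what
    reaches the server can never depend on whatever happens to be
    lying around in somebody's working directory."""
    workdir = tempfile.mkdtemp()
    try:
        run(["git", "clone", "--branch", ref, "--depth", "1",
             repo_url, workdir], check=True)
        build_and_deploy(workdir)  # e.g. build, test, rsync or docker push
    finally:
        shutil.rmtree(workdir)     # leave nothing behind
```

The injectable `run` and `build_and_deploy` steps are just there to make the skeleton easy to adapt (or test); a continuous delivery system would run something equivalent on every push.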

In the Docker world, you can take advantage of the fact that the docker command itself understands git repositories. So you can build a container from the current master branch of your GitHub project using something like:

docker build -t myproject git@github.com:quentinsf/myproject.git

and docker will do the necessary bits behind the scenes, assuming there’s a Dockerfile in the source. (More details here).

So, suppose you want to build version 1.6 of ‘myapp’ and upload it, appropriately tagged, to your Docker registry. You can do so with a couple of simple commands:

docker build -t dockerregistry.example.com/myapp:1.6 \
             gitrepository.example.com/myapp.git#1.6
docker push dockerregistry.example.com/myapp:1.6

I can run this on my Mac, a Windows machine, or any Linux box, and get a consistent result. Very nice. You could also tag it with the SHA1 fingerprint of the git commit, if wanted.

Listing the containers on your Docker registry

At present, there isn’t a convenient command-line interface for examining what’s actually stored on your registry. If I’m wondering whether one of my colleagues has already created and uploaded a container for one of our services, how would I know? There is, however, an HTTP API which will return the information as JSON, and you can then use the excellent jq utility to extract the bits you need:

curl -s -u user:password https://dockerregistry.example.com/v2/_catalog | jq .repositories[]

If you want to see the version tags available for mycontainer, you can use:

curl -s -u user:password https://dockerregistry.example.com/v2/mycontainer/tags/list | jq .tags[]

And you can of course wrap these in scripts or shell aliases if you use them often.

Hope that’s useful for someone!

Optimising the size of Docker containers

Or ‘Optimizing the size of Docker containers’, in case people from America or from Oxford are Googling for it…

For Docker users, here are a couple of tricks when writing Dockerfiles which can help keep the resulting container images to a more manageable size.

Also available on Vimeo here.

Using nginx as a load-balancing proxy with the Docker service-scaling facilities

There’s a geeky title for you! But it might help anyone Googling for those keywords…

Recent versions of Docker have many nice new facilities. Here’s a demo of how you can use service scaling to run multiple instances of your app back-end, with Nginx as a front-end proxy, keeping track of them using the round-robin DNS facility built in to the Docker engine.

All demonstrated in a few lines of code on my laptop, using the new Docker for Mac.

Also available on YouTube.

With thanks to Jeppe Toustrup for some helpful hints. Have a look at his page for more detailed information. Also see the Docker channel on YouTube for lots of talks from the recent DockerCon.

Update, spring 2017: Do note that if you’re using Docker Swarm, you may want to adopt a more complex approach, perhaps based on Interlock.

“If you’ve got a browser connected to the Internet, I can show you…”

For the sake of posterity, I’ve uploaded the original VNC video that we made back in 1998.

Lots of nostalgia in here – remember the JavaStation? The WebTV? And the days when we made movies in 4:3 ratios?

A great deal has changed in the last 16 years, but VNC goes from strength to strength!

Starring, in order of appearance:

  • Quentin Stafford-Fraser
  • Andy Harter
  • Ken Wood
  • Tristan Richardson
  • Paul Webster
  • Frazer Bennett
  • James Weatherall
  • Daisy Sadleir

Also available on YouTube. Thanks to Andy Fisher for doing the original transfer from VHS to DVD some years ago.

Docker

A geeky post. You have been warned.

I wanted to make a brief reference to my favourite new technology: Docker. It’s brief because this is far too big a topic to cover in a single post, but if you’re involved in any kind of Linux development activity, then trust me, you want to know about this.

What is Docker?

Docker is a new and friendly packaging of a collection of existing technologies in the Linux kernel. As a crude first approximation, a Docker ‘container’ is like a very lightweight virtual machine: something between virtualenv and VirtualBox. Or, as somebody very aptly put it, “chroot on steroids”. It makes use of LXC (Linux Containers), cgroups, kernel namespaces and AUFS to give you most of the benefit of running several separate machines, but they are all in fact using the same kernel, and some of the operating system facilities, of the host. The separation is good enough that you can, for example, run an Ubuntu 12.04 environment, an Ubuntu 14.04 environment, and a SUSE environment, all on a CentOS server.

“Fine”, you may well say, “but I can do all this with VirtualBox, or VMware, or Xen – why would I need Docker?”

Well, the difference is that Docker containers typically start up in milliseconds instead of seconds, and more importantly, they are so lightweight that you can run hundreds of them on a typical machine, where using full VMs you would probably grind to a halt after about half a dozen. (This is mostly because the separate worlds are, in fact, groups of processes within the same kernel: you don’t need to set aside a gigabyte of memory for each container, for example.)

Docker has been around for about a year and a half, but it’s getting a lot of attention at present partly because it has just hit version 1.0 and been declared ready for production use, and partly because, at the first DockerCon conference, held just a couple of weeks ago, several large players like Rackspace and Spotify revealed how they were already using and supporting the technology.

Add to this the boot2docker project which makes this all rather nice and efficient to use even if you’re on a Mac or Windows, and you can see why I’m taking time out of my Sunday afternoon to tell you about it!

I’ll write more about this in future, but I think this is going to be rather big. For more information, look at the Docker site, search YouTube for Docker-related videos, and take a look at the Docker blog.

Alfred 2 support for iTerm 2

iTerm 2 is a terminal program for the Mac with lots of great features beyond the standard OS X Terminal. Alfred is an excellent app launcher, which in its newly-released second version is taking the Mac world by storm.

If you don’t use either of these, I strongly recommend them.

If, on the other hand, you already use both of them, you might like my (very basic) plugin that lets you list your iTerm profiles using an Alfred keyword and fire up a new iTerm window using the selected one.


Django cross-site authentication

Richard wrote a nice bit of code to allow one Django app to authenticate users using another Django app’s database. It saves the users having to get a separate set of credentials.

This assumes that both apps can securely access the original database, but if you have a situation where, say, they both run on EC2 machines in the same Amazon account, this can be very handy.

It’s still fairly basic, so I’m sure he’d welcome contributions.

Feeling old, and honoured…

I’m chuffed to discover, while looking for something else, that a script I wrote – called newslist – is still included as a demo in the Python distribution. I sent it to Guido in 1994.

It was basically a simple interface to Usenet (NNTP) news servers, so I shouldn’t really draw attention to it because it probably hasn’t had a lot of use in the last decade or so!

The included documentation, however, may induce a little nostalgia in those who were involved in the early web. It begins:

                             NEWSLIST
                             ========
            A program to assist HTTP browsing of newsgroups
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

WWW browsers such as NCSA Mosaic allow the user to read newsgroup
articles by specifying the group name in a URL eg 'news:comp.answers'.

To browse through many groups, though, (and there are several thousand
of them) you really need a page or pages containing links to all the
groups. There are some good ones out there, for example,

    http://info.cern.ch/hypertext/DataSources/News/Groups/Overview.html

is the standard one at CERN, but it only shows the groups available there,
which may be rather different from those available on your machine.

Newslist is a program which creates a hierarchy of pages for you based
on the groups available from YOUR server. It is written in python - a
splendid interpreted object-oriented language which I suggest you get
right now from the directory /pub/python at ftp.cwi.nl, if you haven't
already got it.

Note that we hadn’t yet started to call it ‘the Web’.

I was just too late to make it into the Python 1.0 distribution. But for this and a couple of other small early contributions, I find I’ve been in the Python acknowledgements since 1.0.2, nearly 19 years ago.

‘Tis an honour I dreamed not of.

🙂

GRUB, Ubuntu, and failed boots

Another geeky technical post. Ignore if it’s not your thing….

I wrote recently about the GRUB bootloader and how it can sometimes cause a remote or headless server not to come back online after, e.g. a kernel update.

This can happen in other situations too. The configuration used on recent versions of Ubuntu is such that, if the system thinks the last boot attempt failed, it stops the machine automatically booting into the same configuration again: it cancels the countdown timer by setting it to -1, so the machine waits indefinitely at the boot menu until you decide on the best action to take. This is a sensible default, because a machine that goes into an infinite loop of reboots is doing nobody any good and puts a fair bit of strain on its own hardware.

Unfortunately, though, other things can trigger this behaviour. If you have a power fluctuation, for example, such that the machine restarts, gets part-way and then power-cycles again, you may find yourself with a machine that doesn’t come back online of its own accord.

On the most recent Ubuntu versions (12.04 with updates, and later) you can add a setting to /etc/default/grub:

GRUB_RECORDFAIL_TIMEOUT=30

for a 30 second timeout after it has recorded a failure condition. You can use 0 if you don’t want it to pause and show the boot menu at all, but remember that it could then go into fairly fast repeated reboots if something really does go wrong.

On earlier versions, you’ll need to edit /etc/grub.d/00_header and find the line near the bottom, in the make_timeout() function, which sets

set timeout=-1

and change that to your preferred value.

In either case you’ll then need to run:

update-grub

to make your changes take effect.

Old men forget; yet all shall be forgot, but he’ll remember, with advantages…

Richard and I have been playing with flash cards as a way of learning things.

The great thing about an electronic implementation of the old ‘question on one side, answer on the other’ idea, is that it can make smart decisions about when and how frequently you should be presented with a particular card. Things you find easy to remember need only occasional repetition, while those which are new or more challenging need more regular viewing until they stick in your memory. When you see the answer, you just say whether or not you got it right, and how hard you found it.
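That scheduling logic can be sketched with a toy variant of the SM-2 algorithm family from which Anki’s scheduler descends (my simplification for illustration, not Anki’s real code):

```python
def next_interval(interval_days, ease, quality):
    """One step of a toy spaced-repetition scheduler, loosely in the
    SM-2 family that Anki descends from (a simplification, not
    Anki's actual algorithm).  quality: 0 = forgot, 1 = hard, 2 = easy.
    Returns (days until next review, updated ease factor)."""
    if quality == 0:
        return 1, max(1.3, ease - 0.2)   # forgotten: start again tomorrow
    if quality == 1:
        ease = max(1.3, ease - 0.15)     # hard: show it again sooner
    else:
        ease += 0.1                      # easy: push it further out
    return max(1, round(interval_days * ease)), ease
```

Cards you keep marking easy drift out to ever-longer intervals, while anything you forget comes straight back the next day.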

Richard wrote a little while back about using this model to learn a reading he had been asked to give at a wedding. I’ve always liked learning poetry or bits of Shakespeare, but often find that large chunks will flow easily while there are one or two lines I always forget. Could this be the solution?

One of the popular flashcard systems out there is an Open Source one called Anki, created by Damien Elmes. It has Windows, Mac, Linux and Web clients, plus Android and iOS (though these don’t yet work on the latest version). And there are various ways you can get decks of cards in and out. The user interfaces are rather quirky, I find, and even the web sites can be confusing to navigate, but the underlying system works fine.

It’s easy to find plain-text versions online of most things I want to learn, so I wrote a little script called poem2anki which will take a text file containing lines of poetry (or prose!) and convert it into a file suitable for importing into Anki.

(Screenshots here showed a sample question card, and its answer.)

It will create these for all the lines in the poem, but you’ll quickly find you’re only tested on the ones you find difficult to answer.

You can find poem2anki here if wanted.
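For the curious, the heart of such a conversion (a reconstruction of the idea, not poem2anki’s actual code) is to make each line an answer prompted by the line before it, emitted in the tab-separated question/answer form that Anki’s text importer accepts:

```python
def poem_to_cards(text, context=1):
    """Turn a poem into (prompt, answer) pairs: each line is the
    answer, prompted by the preceding line(s); the first line gets
    a generic start prompt."""
    lines = [l for l in text.splitlines() if l.strip()]
    cards = []
    for i, line in enumerate(lines):
        prompt = " / ".join(lines[max(0, i - context):i]) or "[start]"
        cards.append((prompt, line))
    return cards

def to_anki_tsv(cards):
    """Anki's text import accepts one 'question<TAB>answer' per line."""
    return "\n".join(f"{q}\t{a}" for q, a in cards)
```

Raising `context` gives you more of the preceding lines as a prompt, which helps with the ‘large chunks flow easily’ problem described above.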

© Copyright Quentin Stafford-Fraser