Category Archives: Programming

A work of art is never finished

“A work of art”, so the saying goes, “is never finished, merely abandoned.”

This assertion rings true in many artistic spheres, to the extent that I’ve seen variations attributed to people as diverse as Leonardo da Vinci and W. H. Auden.

The site ‘Quote Investigator’ suggests that it actually originated in a 1933 essay by the poet Paul Valéry:

Aux yeux de ces amateurs d’inquiétude et de perfection, un ouvrage n’est jamais achevé, – mot qui pour eux n’a aucun sens, – mais abandonné …

They offer this approximate translation:

In the eyes of those who anxiously seek perfection, a work is never truly completed—a word that for them has no sense—but abandoned …

My knowledge of French idiom falls short of telling me how significant Valéry’s use of the word ‘amateur’ is, though. Is he saying that it’s the professionals who really know when a work is complete?

~

Anyway, the same core assertion is sometimes used when speaking of software: that it’s never finished, only abandoned.

It’s rare that any programmer deems his code to be complete and bug-free, which is why Donald Knuth got such attention and respect when he offered cheques to anyone finding bugs in his TeX typesetting system (released initially in the late 70s, and still widely-used today).  The value of the cheques was not large… they started at $2.56, which is 2^8 cents, but the value would double each year as long as errors were still found. That takes some confidence!  

He was building on the model he’d employed earlier for his books, most notably his epic work, The Art of Computer Programming. Any errors found would be corrected in the next edition. It’s a very good way to get diligent proofreaders.

Being Donald Knuth does give you some advantages when employing such a scheme, though, which others might want to consider before trying it themselves: first, there are likely to be very few errors to begin with.  And second, actually receiving one of these cheques became a badge of honour, to the extent that many recipients framed them and put them on the wall, rather than actually cashing them!

For the rest of us, though, there’s that old distinction between hardware and software:

Hardware eventually fails.  Software eventually works.

~

I was thinking of all this after coming across a short but pleasing article by Jose Gilgado: The Beauty of Finished Software.  He gives the example of WordStar 4, which, for younger readers, was released in the early 80s.  It came before WordPerfect, which came before Microsoft Word.  Older readers like me can still remember some of the keystrokes.  Anyway, the author George R.R. Martin, who apparently wrote the books on which Game of Thrones is based, still uses it.

Excerpt from the article:

Why would someone use such an old piece of software to write over 5,000 pages? I love how he puts it:

“It does everything I want a word processing program to do and it doesn’t do anything else. I don’t want any help. I hate some of these modern systems where you type up a lowercase letter and it becomes a capital. I don’t want a capital, if I’d wanted a capital, I would have typed the capital.”

— George R.R. Martin

This program embodies the concept of finished software — a software you can use forever with no unneeded changes.

Finished software is software that’s not expected to change, and that’s a feature! You can rely on it to do some real work.

Once you get used to the software, once the software works for you, you don’t need to learn anything new; the interface will exactly be the same, and all your files will stay relevant. No migrations, no new payments, no new changes.

 

I’m not sure that WordStar was ever ‘finished’, in the sense that version 4 was followed by several later versions, but these were the days when you bought software in a box that you put on a shelf after installing it from the included floppies. You didn’t expect it to receive any further updates over the air. It had to be good enough to fulfil its purpose at the time of release, and do so for a considerable period.

Publishing an update was an expensive process back then, and we often think that the ease with which we can do so now is a sign of progress.  I wonder…

Do read the rest of the post.

Artificial History?

Here are some lovely examples of what can be achieved with a combination of technological prowess and human patience. Denis Shiryaev takes old, noisy, shaky black and white videos, and adds stabilisation, resolution enhancement, facial feature enhancement and some light colourisation. Then he adds sound. This is far from a fully-automatic process: he takes weeks over each one, but without the help of neural networks, it would take months or years if it were even possible.

Here’s a collection of old Lumière Brothers’ films that have had his treatment. Even though, by modern YouTube standards, almost nothing happens in them, I found them surprisingly compelling, yet also calming.

https://www.youtube.com/watch?v=YZuP41ALx_Q

Oh, and this, though less convincing, is also fun:

More information on his YouTube channel and at his new company site.

Geek wisdom of the day

Some people, when confronted with a problem, think ‘I know, I’ll use multithreading’.
Nothhw tpe yawrve o oblems.

Eiríkr Åsheim

The Modern Lab Notebook

I’ve just uploaded my longest YouTube video yet!

Entitled The Modern Lab Notebook: Scientific computing with Jupyter and Python, it’s a two-and-a-quarter hour blockbuster! But you can think of it as three or four tutorial seminars rolled into one: no need to watch it in one sitting, and no need to watch it all! It starts with the basics, and builds up from there.

It’s intended for people who have some Python programming experience, but know little about the libraries that have become so popular recently in numerical analysis and data science. Or for people who may even have used them — pasted some code into a Jupyter notebook as part of a college exercise, say — but not really understood what was going on behind the scenes.

This is for you. I hope you find it useful!

Watch it full-screen, and turn the resolution up 🙂

Also available on Vimeo.

A healthy glow

Here’s a little video selfie of me breathing. Pretty exciting stuff, eh? The reason I look so strange is that it’s taken with a thermal camera: the white and yellow areas are warm, the blue and black ones colder. I haven’t decided whether or not it’s an improvement on my normal appearance.

One reason for our interest in this at the Lab is that you can clearly see my nostrils getting cooler as I breathe in, and warmer as I breathe out. So a thermal camera is a pretty straightforward way for a computer to measure my breathing rate.

But I had some fun playing with the camera at home, too. Pointing it at my hall floor showed glowing tracks where the hot pipes run under the tiles, allowing me to see how the radiators in different rooms are connected up.

When I was looking around upstairs, I noticed some light patches on the floor and wondered what they were. It took me a moment to realise that Tilly had trotted up behind me to see what I was doing, and had silently departed, leaving only warm paw-prints behind her as evidence.

Roses are red

I came across a thread on Twitter with geeky poems on the ‘Roses are red…’ model. So here’s mine:

Roses are #ff0000
Violets are #0000ff?
I think violets should be
More like #ee82ee
Don’t you?

Mmm.

Retro Space Invaders

I think this is wonderful. Today I got to play with Gareth Bailey’s Space Invaders game – a quick hack, he claims, which he put together yesterday.

This uses an oscilloscope as an X-Y plotter to draw the graphics, harking back to the earliest days of computer displays. But historically, displays like this would usually have been driven by a mainframe, whereas Gareth’s is driven by a Raspberry Pi.

And where do you get a couple of nice analog outputs from a Raspberry Pi? Why, from the left and right channels of the audio, of course….

Apologies for the quality of the video, but I thought this was worth capturing despite the challenging environment!

Old News

A couple of days ago, I received some suggestions for improvements to a program I had written. This isn’t unusual: I’m writing code all the time, and much of it needs improving. (Thanks, Sam!) No, the surprise in this case was that the suggested changes were to a little script called newslist.py that I wrote in 1994 and haven’t updated since. It wasn’t very important or impressive, but seeing it again made me a bit nostalgic, because it was very much an artefact of its time.

For a couple of years, I had been playing with early versions of a new programming language called Python (still at version 0.9 when first I fell in love with it). In those days, online discussions about things like new languages occurred on forums called Usenet newsgroups. (As an aside, I was also playing with a new operating system called Linux, which Linus Torvalds had announced on the comp.os.minix newsgroup with one of those throwaway phrases that have gone down in history: “I’m doing a (free) operating system — just a hobby, won’t be big and professional…”.)

Anyway, the Usenet group archives are still accessible now through web servers like Google Groups, but the usual way to read them back then was to fire up a news-reading program and point it at a local news server, which could serve up the messages to you using the ‘network news transfer protocol’ NNTP. Since you wouldn’t necessarily have a fast connection to anywhere else in the world from your desktop, organisations such as universities and the more enlightened companies would run their own NNTP servers and arrange with their pals in other organisations to synchronise everything gradually in the background (or, at least, to synchronise the newsgroups they thought would be of local interest). When a user posted a new message, it would trickle out to most of the servers around the world, typically over the next few hours.

But another novelty was catching my attention at that time… This thing called the World Wide Web. Early web browsers generally spoke enough NNTP to be able to display a message (and maybe even the list of messages in a group), so you could include ‘news://’ URLs in your web pages to refer to Usenet content. But a browser wasn’t much good for more general news perusal because it didn’t have a way to find out which newsgroups were available on your local server. My newslist.py script was designed to fill that gap by connecting to the server, getting the list of its groups, and creating a web page with links to them displayed in a nice hierarchical structure. You could then dispense with your special newsgroup app, at least for the purposes of reading.

When version 1.1 of Python was released, Guido van Rossum added a Demo directory with some examples of things you could do with the language, and newslist.py was included. And there it remained for a couple of decades, until, I discover, it was removed in 2.7.9 because the comment I had casually included at the top about it being free “for non-commercial use” no longer quite fit with the current Python licensing scheme. (I would happily have changed that, had I known, but I wouldn’t have imagined anybody was still using it!) The Demo directory itself was dropped in Python 3, and so newslist.py was consigned to the historical archives.

So you can understand my surprise at discovering that somebody was still experimenting with it now! I didn’t know anybody had an NNTP server any more.

What’s almost more surprising is that one of my two email addresses, mentioned in the code, is still working 23 years later, so he was able to write and tell me!

All of which tells me I should probably pay more attention to what I put in the comments at the top of my code in future…

Conway’s Law

Somehow, I hadn’t come across Conway’s Law until today, despite the fact that Melvin Conway came up with it when I was still wearing nappies.

Conway’s Law states that:

Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.

Or, as it is often more briefly stated,

Any piece of software reflects the organizational structure that produced it.

If you’ve worked on software of any scale, you will know how true this is! Another nice form is:

If you have four groups working on a compiler, you’ll get a 4-pass compiler.

Brilliant stuff. More information on Conway’s Law and some of its corollaries here.

Tips for using a private Docker registry

This is a geeky post for those Googling for relevant phrases. Sometimes a Docker Registry is referred to as a ‘Docker repository’; technically, they’re different things, but the terms are often used interchangeably.

It can be useful to have a private Docker repository for sharing images within your organisation, and from which to deploy the definitive versions of your containers to production.

At Telemarq, we do this by running:

  • a standard registry:2 container on one of our DigitalOcean servers
  • an nginx container in front of it, with basic HTTP auth enabled
  • a letsencrypt system to provide HTTPS certificates for nginx, so the communications are secure.

The registry container can, itself, handle both authentication and the certificates, but it’s easier for us to deploy it this way as part of our standard infrastructure. It all works very nicely, and we’re just starting to incorporate it into some of our more serious workflows.
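If you want to try something similar, the general shape is easy to sketch with plain docker run commands. The paths and names below are hypothetical, and this isn’t our exact configuration, but it shows the idea:

# The registry itself: not published on the host at all, so the only
# way in from outside is through nginx
docker run -d --name registry \
    -v /srv/registry-data:/var/lib/registry \
    registry:2

# nginx in front of it, with a basic-auth htpasswd file and the
# Let's Encrypt certificates mounted in; its nginx.conf proxies
# requests through to http://registry:5000
docker run -d --name registry-nginx \
    --link registry:registry \
    -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
    -v /srv/nginx/htpasswd:/etc/nginx/htpasswd:ro \
    -v /etc/letsencrypt:/etc/letsencrypt:ro \
    -p 443:443 \
    nginx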

So how do you make sure that the right images end up in your repository?

One practice we adopt for any deployment system, with or without Docker, is to require that things pushed to the servers should come directly from the git repository, so that they aren’t influenced by what just happens to be in the directory on some arbitrary machine at some time. Typically we might have a script that creates a temporary directory, checks out a known version of the code, builds and deploys it to the server, and then tidies up after itself. (If you use a continuous delivery system, this may happen automatically on a regular basis.)
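Such a script needn’t be anything elaborate. Here’s a sketch of the idea (the repository URL and the make targets are hypothetical):

#!/bin/bash
# Build and deploy a known, tagged version, never whatever happens to
# be lying around in somebody's working directory.
set -e

VERSION=$1
BUILD_DIR=$(mktemp -d)

# Check out exactly the version we intend to deploy
git clone --branch "$VERSION" --depth 1 \
    https://gitrepository.example.com/myapp.git "$BUILD_DIR"

# Build and deploy from that clean checkout
(cd "$BUILD_DIR" && make build && make deploy)

# Tidy up afterwards
rm -rf "$BUILD_DIR"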

In the Docker world, you can take advantage of the fact that the docker command itself understands git repositories. So you can build a container from the current master branch of your github project using something like:

docker build -t myproject git@github.com:quentinsf/myproject.git

and docker will do the necessary bits behind the scenes, assuming there’s a Dockerfile in the source. (More details here).

So, suppose you want to build version 1.6 of ‘myapp’ and upload it, appropriately tagged, to your Docker registry. You can do so with a couple of simple commands:

docker build -t dockerregistry.example.com/myapp:1.6 \
             gitrepository.example.com/myapp.git#1.6
docker push dockerregistry.example.com/myapp:1.6

I can run this on my Mac, a Windows machine, or any Linux box, and get a consistent result. Very nice. You could also tag it with the SHA1 fingerprint of the git commit, if wanted.
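For example, something along these lines adds the commit’s fingerprint as an extra tag (just a sketch, assuming you run it from a checkout of the version you’ve just built):

# Tag the image we just built with the short SHA1 of the commit, too
SHA=$(git rev-parse --short HEAD)
docker tag dockerregistry.example.com/myapp:1.6 dockerregistry.example.com/myapp:$SHA
docker push dockerregistry.example.com/myapp:$SHA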

Listing the containers on your Docker registry

At present, there isn’t a convenient command-line interface for examining what’s actually stored on your registry. If I’m wondering whether one of my colleagues has already created and uploaded a container for one of our services, how would I know? There is, however, an HTTP API which will return the information as JSON, and you can then use the excellent jq utility to extract the bits you need:

curl -s -u user:password https://dockerregistry.example.com/v2/_catalog | jq .repositories[]

If you want to see the version tags available for mycontainer, you can use:

curl -s -u user:password https://dockerregistry.example.com/v2/mycontainer/tags/list | jq .tags[]

And you can of course wrap these in scripts or shell aliases if you use them often.
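A couple of small shell functions along these lines would do it (the names and credentials are just placeholders):

# In your .bashrc or equivalent
REGISTRY=https://dockerregistry.example.com
REGISTRY_AUTH=user:password

# List the repositories on the registry
reg_repos() {
    curl -s -u "$REGISTRY_AUTH" "$REGISTRY/v2/_catalog" | jq -r '.repositories[]'
}

# List the tags for a given repository, e.g. 'reg_tags mycontainer'
reg_tags() {
    curl -s -u "$REGISTRY_AUTH" "$REGISTRY/v2/$1/tags/list" | jq -r '.tags[]'
}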

Hope that’s useful for someone!

What’s in a name?

Many years ago, I was helping a local church with a project which involved a database of the participating parishioners. This was stored in the columns of a spreadsheet, and occasionally printed out strange things in the lists of names – like ‘1/6’, or ‘1/5/05’. Most bizarre.

I eventually uncovered the problem: one of the members of the congregation was named ‘June’. Another was called ‘May’. And when they had been imported into the spreadsheet, it was being far too clever for its own good! I found out just before adding a nice lady whose name was ‘April’…

Even my simple double-barrelled surname causes some problems: a surprising number of systems can’t cope with hyphenated names. For a while I seemed to be undergoing a lot of security checks at airports, which one member of staff suggested might be because my passport had a hyphen in my name, but the airline systems invariably did not, so I never matched up as expected. The US Patent Office gets similarly confused, and in some search engines, a ‘minus’ indicates an exclusion, so if you search for ‘Stafford-Fraser’ you are guaranteed never to get me because I have a ‘Fraser’ in my name. Sigh. I don’t envy those who have names with more complicated punctuation…

Unusual initials, while generally handy, also have their downsides. I could never get a good personalised licence plate for my car, for example, because Qs are deemed to be too easily confused with Os or zeroes in the UK and are not allowed (except in a few very specific circumstances). My friend Brian Robinson told me about an occasion when his son Xavier was excluded from something at school, if I remember correctly, because they mistook his initial, ‘X’, for a cross indicating he was crossed off the list!

So it’s appropriate that it was Brian who forwarded a Wired article by someone with a problem I hadn’t previously considered: his surname causes much more confusion for many computer systems than mine, because his name is Christopher Null.

Optimising the size of Docker containers

Or ‘Optimizing the size of Docker containers’, in case people from America or from Oxford are Googling for it…

For Docker users, here are a couple of tricks when writing Dockerfiles which can help keep the resulting container images to a more manageable size.

Also available on Vimeo here.
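To give a flavour of the sort of thing involved (this is just a sketch, and not necessarily what’s covered in the video): one classic trick is to do the installation and the clean-up in a single RUN instruction, so the package caches never get baked into a layer of the image.

FROM debian:stable-slim

# Install what we need and clean up in the same layer, so the apt
# caches and package lists don't add to the size of the final image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*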

© Copyright Quentin Stafford-Fraser