John posted this lovely image from CultOfMac, showing phone designs before and after the iPhone.
As he says, imitation is the sincerest form of flattery…
Here’s my latest Raspberry Pi-based experiment: the CloudSwitch.
I don’t discuss the software in the video, but the fun thing is that the Pi isn’t dependent on some intermediate server – it’s using the boto module for Python to manage the AWS resources directly.
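To give a flavour of the AWS side, here’s a sketch in boto terms – this isn’t the actual CloudSwitch code, and the region and instance ID are invented placeholders:

import boto.ec2

# Credentials are picked up from the environment or ~/.boto
conn = boto.ec2.connect_to_region("us-east-1")

# Start the instance behind the switch...
conn.start_instances(instance_ids=["i-12345678"])

# ...and poll its state so the lights can reflect reality
res = conn.get_all_instances(instance_ids=["i-12345678"])
print(res[0].instances[0].state)   # 'pending', 'running', 'stopped', ...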
I decided to build the app slightly differently from the way I would normally approach a little project like this. I knew that, even for this very simple system, I would have several inputs and outputs of various kinds, some of them with big delays, and I wanted to make sure that timing hiccups or race conditions didn’t ever leave the lights displaying something that didn’t represent reality.
So this is only a single python file, but it runs several threads – one that looks for button presses, one that monitors and controls the Amazon server, and one that handles the lights – including flashing them in various patterns. They interact with the main thread using ZeroMQ messages, which is a lovely way to do inter-thread communications without all that nasty messing about with semaphores and mutexes.
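Here’s a minimal sketch of that pattern using pyzmq – again, not the actual CloudSwitch source, and the thread and socket names are invented for illustration:

import threading
import zmq

ctx = zmq.Context.instance()

def button_watcher():
    # Worker threads PUSH messages to the main thread over the
    # in-process transport; no semaphores or mutexes required
    sock = ctx.socket(zmq.PUSH)
    sock.connect("inproc://events")
    sock.send_string("button_pressed")   # in reality, sent after polling the GPIO pin

# The PULL end must bind before the workers connect
events = ctx.socket(zmq.PULL)
events.bind("inproc://events")
threading.Thread(target=button_watcher).start()
print(events.recv_string())   # the main thread dispatches events as they arrive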
Update: Here’s the very simple circuit diagram. The illuminated buttons I used have LEDs which take a little more power than the Raspberry Pi can really drive, so I put a couple of NPN transistors in there. It really doesn’t matter too much what they are – I used the 2N3904.
Some of my readers will know about plans I started hatching about seven years ago, but worked on again more seriously last year, for a new device, which I called the Telemarq Pen.
The basic idea is that writing and drawing are incredibly important aspects of human communication – as demonstrated by the number of whiteboards we put in our offices – and yet something that is very poorly served by current technology. The nearest we get to a widely-deployed drawing device is the iPad, which isn’t designed for it and reduces most artistic endeavours to the level of crude finger-painting.
I had an idea for a low-cost stylus that could be used with any LCD display – a phone, tablet, laptop, even a TV – and yet would allow for exceedingly high-resolution drawing. The basic idea was to include a small camera – something that now costs very little – pointing at the screen, and use various cunning techniques to recognise the pixels at which it was pointing. I reckoned I could locate the pen on the screen at sub-pixel resolution, so it would be a very accurate drafting device, and yet could be made for a few dollars.
I did some experiments, wrote a draft patent, produced a nice slideshow, and was on the verge of going out and pitching it to investors. If you look carefully at the Telemarq logo you can see that pens were on my mind when I designed it! But at the start of this year, other things – mostly the need to earn some money again – put my plans on hold.
Which, it turns out, was just as well.
In May, an Apple patent was published showing that they had had very much the same ideas.
Today, there’s a report of, guess what, Microsoft’s prototype optical stylus.
So I think my plans are now definitely shelved! (Though if either of them would like to hire me, I have some fun ideas on how to optimise the location process!)
Now, I suppose I could be kicking myself that I didn’t file some patents six or seven years ago, when I first started pointing cameras at screens – but I couldn’t have afforded it at that point, and however good your patents, it’s a brave man who takes on both Apple and Microsoft’s lawyers! It would have been much more likely to result in ruin than riches. A clear illustration of the problems of the current patent system, perhaps?
So, in fact, I’m very grateful that the Fates conspired to make me abandon the project six months ago. But I can still go on at great length to anyone who’d like to buy me a beer about why such a device is vitally important and how it should be done!
I hope they pursue this seriously, beyond simply filing IP, because I, at least, would be an enthusiastic customer. So perhaps I’ll get my pen in the end!
I finally got a chance to play with my RaspberryPi, so I threw together a quick experiment.
Update: A few people have asked me for a little more information. I’m happy to make the source code available, but it’s not very tidy and a bit specific to my situation… however, to answer some of the questions:
The enclosure for the Raspberry Pi comes from SK Pang Electronics, and it’s perfect for my needs. You can buy just the perspex cover, but they do a nice Starter Kit which includes the breadboard, some LEDs, resistors and the pushswitch. Definitely recommended.
For the graphics, I used the PyGame library, which has the advantage of being cross-platform: you can use it with a variety of different graphics systems on a variety of different devices. On most Linux boxes, you’d normally run it under X Windows, but I discovered that it has various drivers that can use the console framebuffer device directly. This makes for a quicker startup and lighter-weight system, though I imagine it probably has less access to hardware acceleration, so it’s probably not the way to go if your graphics need high performance. You can read about how to get a PyGame display ‘surface’ (something you can draw on) from the framebuffer, in a handy post here.
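The gist of that approach looks something like this – just a sketch, and which driver actually works depends on how your SDL was built:

import os
import pygame

# Point SDL at the console framebuffer instead of X
os.environ["SDL_VIDEODRIVER"] = "fbcon"   # 'directfb' and 'svgalib' are alternatives
os.environ["SDL_FBDEV"] = "/dev/fb0"

pygame.display.init()
info = pygame.display.Info()
screen = pygame.display.set_mode((info.current_w, info.current_h))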
To load an image from a file in PyGame is easy: you do something like this:
im_surf = pygame.image.load(f, "cam.jpg")
where ‘f’ is an open file, and the ‘cam.jpg’ is just an invented filename to give the library a hint about the type of file it’s loading.
Now, with a webcam, we need to get the image from a URL, not from a file. It’s easy to read the contents of a URL in Python. You just need something like:
import urllib
img = urllib.urlopen(img_url).read()
but that will give you the bytes of the image as a string. If we want to convert it into a PyGame surface, we need to make it look more like a file. Fortunately, Python has a module called StringIO which does just that: allows you to treat strings as if they were files. So to load a JPEG from img_url and turn it into a PyGame surface which you can blit onto the screen, you can do something like:
import urllib
import StringIO

f = StringIO.StringIO(urllib.urlopen(img_url).read())
im_surf = pygame.image.load(f, "cam.jpg")
I’ll leave the remaining bits as an exercise for the reader!
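(A hint, if you want one: a loop along these lines, reusing the ‘screen’ surface from the framebuffer example above, will keep the display up to date, assuming your webcam serves a fresh JPEG at img_url.)

import time

while True:
    f = StringIO.StringIO(urllib.urlopen(img_url).read())
    im_surf = pygame.image.load(f, "cam.jpg")
    screen.blit(im_surf, (0, 0))
    pygame.display.flip()
    time.sleep(2)   # fetch a new frame every couple of seconds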
If you like this, you might also like my CloudSwitch…
Warning – technical post ahead…
My shiny new RaspberryPi came through the door this week, and the first thing I noticed was that it was the first computer I’d ever received where the postman didn’t need to ring the bell to deliver it.
The next thing I discovered, because it came sooner than expected, was that I was missing some of the key bits needed to play with it. A power supply was no problem – I have a selection of those from old phones and things, and I found a USB-to-micro-USB cable. Nor was a network connection tricky – I have about as many ethernet switches as I have rooms in the house.
A monitor was a bit more challenging – I have lots of them about, but not with HDMI inputs, and I didn’t have an HDMI-DVI adaptor. But then I remembered that I do have an HDMI monitor – a little 7″ Lilliput – which has proved to be handy on all sorts of occasions.
The next hiccup was an SD card. You need a 2GB or larger one on which to load an image of the standard Debian operating system. I had one of those, or at least I thought I did. But it turned out that my no-name generic 2GB card was in fact something like 1.999GB, so the image didn’t quite fit. And a truncated filesystem is not the best place to start. But once replaced with a Kingston one – thanks, Richard – all was well.
Now, what about a keyboard and mouse?
Well, a keyboard wasn’t a problem, but it didn’t seem to like my mouse. Actually, I think the combined power consumption probably just exceeded the capabilities of my old Blackberry power supply, which only delivers 0.5A.
If you’re just doing text-console stuff, then this isn’t an issue, because it’s easy to log into it over the network from another machine. It prints its IP address just above the login prompt, so you can connect to it using SSH, if you’re on a real computer, or using PuTTY if you’re on Windows.
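For example, if it reports 192.168.1.50 (the default user on the Debian image is ‘pi’):

ssh pi@192.168.1.50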
But suppose you’d like to play with the graphical interface, even though you don’t have a spare keyboard and mouse handy?
Well, fortunately, the Pi uses X Windows, and one of the cunning things about X is that it’s a networked display system, so you can run programs on one machine and display them on another. You can also send keyboard and mouse events from one machine to another, so your desktop machine can send its mouse movements and key presses to the Pi. On another Linux box, you can run x2x. On a Mac, there’s Digital Flapjack’s osx2x (if that link is dead, see the note at the end of the post).
These both have the effect of allowing you to move your mouse pointer off the side of the screen and onto your RaspberryPi. If you have a Windows machine, I don’t think there’s a direct equivalent. (Anyone?) So you may need to set up something like Synergy, which should also work fine, but is a different procedure from that listed below. The following requires you to make some changes to the configurations on your RaspberryPi, but not to install any new software on it.
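For reference, the x2x incantation on the other Linux box would be something like this – the IP address is a placeholder for your Pi’s, and -west says the Pi’s screen sits to the left of yours:

x2x -west -to 192.168.1.50:0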
Now, obviously, allowing other machines to interfere with your display over the network is something you normally don’t want to happen! So most machines running X have various permission controls in place, and the RaspberryPi is no exception. I’m assuming here that you’re on a network behind a firewall/router and you can be a bit more relaxed about this for the purposes of experimentation, so we’re going to turn most of them off.
Firstly, when you log in to the Pi, you’re normally at the command prompt, and you fire up the graphical environment by typing startx. Only a user logged in at the console is allowed to do that. If you’d like to be able to start it up when you’re logged in through an ssh connection, you need to edit the file /etc/X11/Xwrapper.config, e.g. using:
sudo nano /etc/X11/Xwrapper.config
and change the line that says:
allowed_users=console
to say:
allowed_users=anybody
Then you can type startx when you’re logged in remotely, or startx & if you’d like it to run in the background and give you your console back.
Secondly, the Pi isn’t normally listening for X events coming over the network, just from the local machine. You can fix that by editing /etc/X11/xinit/xserverrc
and finding the line that says:
exec /usr/bin/X -nolisten tcp "$@"
Take out the ‘-nolisten tcp’ bit, so it says
exec /usr/bin/X "$@"
Now it’s a networked display system.
There are various complicated, sophisticated and secure ways of enabling only very specific users or machines to connect. If you want to use those, then you need to go away and read about xauth.
I’m going to assume that on your home network you’re happy for anyone who can contact your Pi to send it stuff, so we’ll use the simplest case where we allow everything. You need to run the ‘xhost +’ command, and you need to do it from within the X environment once it has started up.
The easiest way to do this is to create a script which is run as part of the X startup process: create a new file called, say:
/etc/X11/Xsession.d/80allow_all_connections
It only needs to contain one line:
/usr/bin/xhost +
Now, when the graphical environment starts up, it will allow X connections across the network from any machine behind your firewall. This lets other computers send keyboard and mouse events, but also do much more. The clock in the photo above, for example, is displayed on my Pi, but actually running on my Mac… however, that’s a different story.
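One simple example: any X program running on another machine can now be pointed at the Pi’s screen by setting its DISPLAY, with your Pi’s actual address substituted:

DISPLAY=192.168.1.50:0 xclock &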
For now, you need to configure x2x, osx2x, or whatever is sending the events to send them to ip_address:0, where ip_address is the address of your Pi. I’m using osx2x, and so I create the connection pointing at that display.
Once that’s done, I can just move the mouse off the west (left-hand) side of my screen and it starts moving the pointer on the RaspberryPi. Keyboard events follow the pointer. (I had to click the mouse once initially to get the connection to wake up.)
Very handy, and it saves on desk space, too!
Update: Michael’s no longer actively maintaining binaries for osx2x, and the older ones you find around may not work on recent versions of OS X. So I’ve compiled a binary for Lion(10.7) and later, which you can find in a ZIP file here. Michael’s source code is on Github, and my fork, from which this is built, is here.
There’s something rather fruity about the fact that my new RaspberryPi is being powered by an old Blackberry power supply.
Which used to be on the Orange network.
And that I log in to it from my Apple.
Hope it doesn’t turn out to be a lemon.
My friend Aideen has been doing fun stuff with the TP-Link micro-routers.
These things are amazingly small – note the relative size of the USB socket – and very cheap.
She’s written it up in a nice blog post here.
A few of my favourite iOS apps at present:
I’ve always had mixed feelings about mind-maps. They’re a great way to capture thoughts and to brainstorm, but a terrible way to communicate with others. The chief responsibility of somebody writing a paper or giving a talk, it seems to me, is to turn such a personal 2D ‘splat’ of ideas into a logically-ordered serial presentation that can be followed by others with different mental processes, and not just to serve up the splat in its unprocessed form.
Still, I do use them for my own notes, and a paper and pen has always been my medium of choice. I’ve tried several highly-regarded pieces of desktop software, but keyboards and mice just don’t seem right for doing this. iThoughts on the iPad is the first environment that feels pretty natural, especially if you use it with a stylus.
I listen to lots of podcasts, every day, while shaving, driving, walking the dog. I always used the facilities built into the iPhone music player and iTunes, which aren’t bad, so I had never really thought about using a separate app for it. And then I tried Instacast and was an Instaconvert.
If you have a recent iPhone with a good camera, then Scanner Pro is a really useful thing to have in your pocket. In essence it’s a photo app designed for capturing documents, or parts of documents, and it makes it easy (a) to crop and de-warp the images so as to get something closer to a proper scan and (b) to capture more than one ‘page’ as a single document and then (c) to email that as a PDF to someone (or upload it to various services). While you’d never confuse the results with the output of a proper scanner, there are times when you might be browsing in a library or perusing a magazine and you don’t happen to have a flat-bed scanner in your pocket…
The dictionaries
OK, here’s where you might need to start spending some real money… but I’ve definitely found it worthwhile when travelling to have the Collins language dictionaries in my pocket. I’ve now bought the expensive versions of the German-English, French-English and Italian-English ones and, even though they’re amongst the most costly apps on my phone, have never regretted it. They cost about the same as a hardback equivalent, but are a lot easier to carry around and I find, surprisingly, that I can look things up more quickly in them than on paper.
Another wonderful treat is to have the Shorter OED on my phone, something which in dead-tree form is hard even to lift off my bookshelf! (The current edition comes in two large hardbacks of around 2000 pages each.) It’s fabulous for all those times when someone at the restaurant table asks, “What is the origin of the word ‘poppycock’?”. Sadly, the iOS app has been discontinued, so if you haven’t already got it, you’re out of luck, but there are a lot of lesser-but-much-cheaper options available, including Chambers.
Yes, you can often find good stuff on web, but not as quickly, especially if the restaurant table is in a basement. And if it’s in a foreign basement, then looking stuff up online may be rather expensive too.
Vous êtes heureux de me voir, ou vous avez une bibliothèque dans votre poche?…
Update 2012-08-14
Since this post, I’ve switched from using Instacast to Downcast. Its interface is a little crowded, on the iPhone at least, but it has a couple of nice features over Instacast.
The first is the ability to skip forward and backwards by a certain number of seconds: useful to skip ads, or to rewind a bit if you were distracted and lost the thread. Instacast has this, but it’s always been very unreliable. With Downcast it’s still a bit hit-and-miss – the buttons often seem to do nothing, or perhaps they’re just too small and so easy to miss – but my success rate is higher.
The second is the ability to sync various things between devices – which podcasts I’m subscribed to, which episodes I’ve already heard, and to some degree, how far through them I am. So I can listen at home on the iPad’s superior speaker and then carry on using my iPhone when I’m on the move.
Very nice.
For many years I’ve been a fan of TextExpander on the Mac, a utility which converts a short sequence of keystrokes into a much longer one. For example, most of my email messages end with
All the best,
Quentin
which appears when I type ‘atb’ and hit space. There are many much more complex things you can do with TextExpander, which is good, because it’s a little pricey for a small utility, but in the end I realised that 35 bucks wasn’t too much for something I use dozens of times every single day.
But typing efficiency is even more important when you have a sub-optimal keyboard, like the iPhone or iPad’s. One of my favourite tips is that you can get an apostrophe or quote mark by pressing the comma or full-stop key briefly and sliding upwards; there’s no need to switch into punctuation mode. (I wrote about this before once, but I think it must have been on Twitter or Facebook, which means I can’t find it now. Note to self: always keep useful stuff on blog.)
Anyway, one of the recent iOS updates added a very handy but somewhat hidden keystroke-expansion feature, and I’ve realised that I’m using that all the time too.
Under Settings > General > Keyboard you can create shortcuts, which will let you do something similar to my ‘All the best’ trick, and can be very handy if you have a silly long name like mine: ‘qqsf’ expands into ‘Quentin Stafford-Fraser’, complete with capitals and punctuation.
But the thing I’ve found most useful is to have abbreviations for my main email addresses, since an increasing number of sites use them as login usernames. I find I’m always having to type, say, ‘quentin@mycompany.com’ on my little iPhone keyboard, and it was a real pain until I replaced it with ‘qmc’ and a space.
One small note: if you use it this particular way, there are some sites that get confused if you leave the space on the end. So I actually tend to type ‘qmc<space><backspace>’, but that’s still a great deal easier than the whole address.
Venice, as you may know, is made up of about 100 islands connected by lots of little bridges. That’s roughly how the little network here in my Venice hotel room works, too.
The hotel charges for a wifi connection – only a one-off charge, but it is per-machine, so I only paid for my Macbook to be connected. With recent versions of OS X you can easily create a PAN (a ‘Personal Area Network’) using Bluetooth, so Rose’s laptop and my iPad could then get access by using my Mac as a Bluetooth <-> Wifi gateway. All very cool.
However, I could not get my iPhone to connect that way. I don’t know whether it should work or not – the general expectation is that you’re more likely to use your phone to provide connectivity for your laptop than the other way around! But I wanted a connection for the phone because I needed to download maps and other reference materials to have in my pocket as we explored, and I didn’t want to pay roaming data charges.
And then I realised that, just as my laptop was sharing its wifi connectivity to Rose’s laptop using a Bluetooth PAN, so her machine could then share that connectivity as a wifi network again! And, hey presto, my phone had a network, so I can now download maps to my heart’s content!
What else would one be doing in Venice, after all?…
🙂
I’ve always been fascinated by the work my friend Peter Robinson and his team have been doing at the University’s Computer Lab, in trying to make computers both understand, and express, emotions.
But I hadn’t seen this very nice little video they made just over a year ago.
In 1994, Knight Ridder’s Information Design Lab produced a video which was their vision of the future of newspapers: The Tablet Newspaper. Have a look at around 2:20, and see if it looks at all familiar!
(I guess my nearest equivalent in gadget prediction is shown here.)