I finally got a chance to play with my Raspberry Pi, so I threw together a quick experiment.
Update: A few people have asked me for a little more information. I’m happy to make the source code available, but it’s not very tidy and a bit specific to my situation… however, to answer some of the questions:
The enclosure for the Raspberry Pi comes from SK Pang Electronics, and it’s perfect for my needs. You can buy just the perspex cover, but they do a nice Starter Kit which includes the breadboard, some LEDs, resistors and the push switch. Definitely recommended.
For the graphics, I used the PyGame library, which has the advantage of being cross-platform: you can use it with a variety of different graphics systems on a variety of different devices. On most Linux boxes, you’d normally run it under X Windows, but I discovered that it has various drivers that can use the console framebuffer device directly. This makes for a quicker startup and a lighter-weight system, though I imagine it probably has less access to hardware acceleration, so it’s probably not the way to go if your graphics need high performance. You can read about how to get a PyGame display ‘surface’ (something you can draw on) from the framebuffer in a handy post here.
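To make that concrete, here is a minimal sketch of the framebuffer approach: you point SDL at a console video driver via the SDL_VIDEODRIVER environment variable before initialising the display. The driver names and the fallback order here are assumptions about a typical setup, not a definitive recipe — which driver actually works varies by system (fbcon often being the most reliable on the Pi), and you generally need to run from a real console, not under X.

```python
# Sketch: initialise PyGame on the console framebuffer, no X needed.
# Driver availability varies; fbcon is usually the safest bet on the Pi.
import os
import pygame

def init_framebuffer_display():
    """Try each SDL console driver in turn and return a display surface."""
    for driver in ("fbcon", "directfb", "svgalib"):
        os.environ["SDL_VIDEODRIVER"] = driver
        try:
            pygame.display.init()
        except pygame.error:
            continue  # this driver isn't available here; try the next one
        info = pygame.display.Info()
        return pygame.display.set_mode((info.current_w, info.current_h),
                                       pygame.FULLSCREEN)
    raise RuntimeError("no usable framebuffer driver found")

if __name__ == "__main__":
    try:
        screen = init_framebuffer_display()
        screen.fill((0, 0, 0))       # clear the screen to black
        pygame.display.update()
    except RuntimeError as e:
        print("Couldn't open the framebuffer:", e)  # e.g. running under X
```

If you run this under X or over SSH without a console, it will simply report that no driver was found rather than crashing.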
Loading an image from a file in PyGame is easy: you do something like this:
im_surf = pygame.image.load(f, "cam.jpg")
where ‘f’ is an open file, and the ‘cam.jpg’ is just an invented filename to give the library a hint about the type of file it’s loading.
Now, with a webcam, we need to get the image from a URL, not from a file. It’s easy to read the contents of a URL in Python. You just need something like:
import urllib
img = urllib.urlopen(img_url).read()
but that will give you the bytes of the image as a string. If we want to convert it into a PyGame surface, we need to make it look more like a file. Fortunately, Python has a module called StringIO which does just that: allows you to treat strings as if they were files. So to load a JPEG from img_url and turn it into a PyGame surface which you can blit onto the screen, you can do something like:
import StringIO

f = StringIO.StringIO(urllib.urlopen(img_url).read())
im_surf = pygame.image.load(f, "cam.jpg")
I’ll leave the remaining bits as an exercise for the reader!
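For anyone who wants a starting point for that exercise, here is a rough sketch of the remaining glue, under some assumptions of my own: img_url is a placeholder for a camera URL that serves a plain JPEG, the window size and refresh interval are arbitrary, and the fetch_frame/run names are mine, not from the original code. The post’s snippets are Python 2; the imports below also cover Python 3, where urllib.urlopen became urllib.request.urlopen and StringIO became io.BytesIO.

```python
# Sketch: repeatedly grab a JPEG from a URL and blit it to the screen.
import time
import pygame

try:                                 # Python 2, as in the post's snippets
    from urllib import urlopen
    from StringIO import StringIO as image_buffer
except ImportError:                  # Python 3 equivalents
    from urllib.request import urlopen
    from io import BytesIO as image_buffer

def fetch_frame(img_url):
    """Fetch one JPEG over HTTP and return it as a PyGame surface."""
    f = image_buffer(urlopen(img_url).read())
    return pygame.image.load(f, "cam.jpg")

def run(img_url, interval=0.5):
    """Show the camera feed until the window is closed."""
    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                return
        frame = fetch_frame(img_url)
        # Scale the frame to fill the window, then draw it.
        screen.blit(pygame.transform.scale(frame, screen.get_size()), (0, 0))
        pygame.display.update()
        time.sleep(interval)    # crude rate limit between HTTP requests
```

Since each frame is a separate HTTP request for a full JPEG, the frame rate is limited mostly by the network round-trip; a video stream or a local camera would be much faster, as discussed in the comments.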
If you like this, you might also like my CloudSwitch…
Great experiment, loved the video of me boarding 🙂 Perhaps I should get a Pi for the apartment to run Chrome….
Nicely done! Can you share some more info how you have done this using the framebuffer?
Also, where did you get your housing for the Raspberry Pi?
Hi eggn1n3 –
Yes, sure – I’ve updated the post with a bit more info. Hope it’s useful!
This is so close to my project 🙂
Please note I’m totally new to this exceptionally tiny board, so please forgive any misunderstanding:
What I’d like to build is an electronic magnifier :
webcam + Raspberry Pi + LCD monitor + some buttons + AC adapter
When my user moves the page he wants to read I’d like the picture to be updated in real-time (above 15 fps)
(I think most webcams have faster framerate than what it seemed in the above video.)
Buttons would be used to adjust magnifying, contrast, saturation, negative, etc.
Did I miss something? Would it be overkill?
As a senior Windows applications programmer, would this be easily doable for me?
Thank you for any answer (including gentle flaming if necessary).
Hi, can you give more information on how you used the framebuffer?
When I run the example code given in the link using a composite display, DirectFB throws an invalid permissions error, even though I am running under root.
I do not have an HDMI display, so I cannot verify whether the same problem occurs there.
Update: I found that composite works fine if you change the driver to fbcon. It’s just DirectFB that doesn’t work.
I’m sure this could be done – it sounds like a nice project – and yes, you’re quite right: most cameras can produce much higher frame rates, including the ones I’m using – speed wasn’t the main issue for me. Remember that I’m capturing from two cameras here, so by just using one I could easily double the frame rate, even without doing anything else.
But, also, I tend to use the word webcam in its literal and original sense: to describe cameras that are accessed over the web. So each frame here is an HTTP request for a single JPEG image. Requesting a video stream, or using a local, USB-connected camera, would improve the speed greatly.
I’m not sure which kernels for the Pi include USB camera support – you may need to research that a bit.
The OpenCV library is quite cool for doing more complex image manipulation and includes camera capture functions. I think someone ported it to the Pi, but it probably depends on kernel support too.
There may be advances on this in the newer kernel version, perhaps.
Thank you very much for these tips.
You’re right, I hadn’t read about the “Web” byte-stream aspect, sorry.
In the meantime, I read that some directly connected camera modules are being previewed at Raspberry Pi labs.
I think this would be a lot faster, since there won’t be a USB bottleneck, and we can hope for some dedicated libraries to come with it.
Thanks a lot for answering me.
Nice demo. It would be great if you published the code for this project, however untidy it is. Also, if it is not too much to ask, could you post the schematic and some information on the hardware connected to the Raspberry Pi?
Hi Roberto –
Yes, very happy to share the code & schematics to both this and the CloudSwitch project – have put it on my to-do list, but I’m afraid my to-do list is not short at present!
Thanks, Quentin. I am excited to learn more about this project.
Nice work. I have created a QR-code scanner that works on Windows XP/7/8. The program involves
a camera taking 20 fps, processing the frames in OpenCV and then decoding them with ZXing. All this takes about
a second if a code is found. I have been looking to replace the Windows machine with a Raspberry Pi.
Is it possible to do this with the Raspberry Pi, or should I just stick to Windows?
Thanks again for some really good work.
Well, a bit of Googling suggests that several people are successfully using OpenCV on a Pi, so that bit should be possible. ZXing, though, is Java-based, so you’d need a JVM on the Pi, in which case I think getting it running might be a bit more challenging, but not, I think, impossible…
I’ve had some fun with barcodes in general, but haven’t really played with decoding libraries, so I’m not sure if there are good alternatives…
I remembered seeing your video on the Raspberry Pi page months ago, and I spent an hour today trying to find it. I really like this project, and was hoping to find some code. I had an idea to use a Pi as an output for a ZoneMinder installation. Currently I use VGA from the server to one TV and webpage access from anywhere else, but a Pi using something similar to your code could display multiple feeds to multiple monitors.
I really hope you get around to sharing the code. I’ve never programmed in Python, but this would be a good reason to start.
Thanks for the inspiration.
+1 for the code. Awesome project! Would love to clean it up for you 🙂
I have been browsing the web for various solutions to achieve something that I consider rather basic, but I still can’t find it; your solution seems to be the closest, though…
I have multiple Xvision X720B IP cameras. They are installed and recording to a Windows-based machine, but I want to put a Pi on the same network simply to view all the feeds: either feeding into my AV setup so I can see the feeds as a channel on the TV, or with the Pi connected directly to the TV. It’s a smart TV, and annoyingly none of its apps seem to allow this either. So I wonder if you can tell me whether your setup would let me use these cameras, since ZoneMinder, for example, won’t support them, due to authentication I believe.
Hope to hear from you soon; this has been driving me slightly crazy…
Hi Duncan –
Well, my system, as you can see, is pretty unsophisticated; it just grabs still images from the cameras.
I don’t know the Xvision cameras, but if you can set them up so you can point a browser at a URL (without any special plugins, authentication or apps) and get a JPEG back, then you should be able to use something like this.
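If it helps, here is a quick way to test that from Python. The function names are mine and the example URL is entirely hypothetical — substitute whatever snapshot URL your camera documents. It just checks that the response begins with the JPEG magic bytes.

```python
# Quick sanity check: does a camera URL return a plain JPEG?
try:
    from urllib import urlopen          # Python 2
except ImportError:
    from urllib.request import urlopen  # Python 3

def is_jpeg(data):
    """All JPEG files begin with the two magic bytes FF D8."""
    return data[:2] == b"\xff\xd8"

def camera_serves_jpeg(url):
    """Fetch the start of the response and check for the JPEG signature."""
    return is_jpeg(urlopen(url).read(2))

# e.g. camera_serves_jpeg("http://192.168.1.20/snapshot.jpg")  # hypothetical URL
```

If that returns True without prompting for credentials, a viewer along the lines of this post should work with your cameras.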
All the best,