Tom Coates gave a talk earlier this morning and mentioned that he didn’t have an entry on Wikipedia. By lunchtime, someone in the room had created one for him.
Tom’s from the BBC, is a very nice guy, and has a good blog here, by the way.
I knew the fundamental idea behind BitTorrent, the file-distribution system: after some initial seeding, every BitTorrent client provides uploads as well as downloads, so you can distribute far more data, far more quickly, without placing a ridiculously heavy load on any one server.
But I’ve just been reading Bram Cohen’s paper and so have a bit more of a grasp of the behind-the-scenes operation, which is really quite clever. Here’s a very simplistic overview:
Files are distributed by creating a .torrent file and putting it on a web server. The .torrent file includes information about the file (its name, length, checksums and so on) and also the URL of a tracker. This is a very simple server which knows about the machines currently downloading the file (henceforth known as peers). When your client starts downloading a file, it connects to the tracker, gets added to the list, and receives back a random selection of other peers also downloading the file. It can then go off and talk to those peers.
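For the curious: the .torrent file itself is ‘bencoded’, a very simple serialisation of dictionaries, lists, integers and byte strings. Here’s a minimal Python sketch of a decoder (my own illustration, not Bram’s code; the field names are the real ones, though the filename is made up):

```python
def bdecode(data, i=0):
    """Decode one bencoded value from `data`, starting at offset i.
    Returns (value, offset just past the value)."""
    c = data[i:i+1]
    if c == b'i':                            # integer: i<digits>e
        end = data.index(b'e', i)
        return int(data[i+1:end]), end + 1
    if c == b'l':                            # list: l<values>e
        i, items = i + 1, []
        while data[i:i+1] != b'e':
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == b'd':                            # dictionary: d<key><value>...e
        i, d = i + 1, {}
        while data[i:i+1] != b'e':
            key, i = bdecode(data, i)
            d[key], i = bdecode(data, i)
        return d, i + 1
    colon = data.index(b':', i)              # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# For a single-file torrent: 'announce' is the tracker's URL, and the
# 'info' dictionary holds the file's name, length and piece checksums.
with open('example.torrent', 'rb') as f:     # filename is just a placeholder
    meta, _ = bdecode(f.read())
print(meta[b'announce'], meta[b'info'][b'name'], meta[b'info'][b'length'])
```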
Files are downloaded in pieces, each typically a quarter of a megabyte in size. Your client connects to several peers and finds out from them which pieces of the file they each currently have. It can then start downloading different pieces from different peers; it doesn’t have to get the pieces of the file in order. Whenever you have a complete piece, it’s added to the list of pieces you can make available to others. Often the traffic will be two-way: you’ll be downloading one piece from a peer while uploading a different piece to them.
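Incidentally, this is why it’s safe to accept pieces from complete strangers: the .torrent file carries a SHA-1 checksum for every piece, and a client verifies each piece before passing it on. A sketch, assuming an `info` dictionary decoded as above:

```python
import hashlib

def verify_piece(info, index, piece_data):
    """Check a downloaded piece against the .torrent file's checksums.
    info[b'pieces'] is one long byte string: the 20-byte SHA-1 hashes
    of all the pieces, concatenated in order."""
    expected = info[b'pieces'][index * 20:(index + 1) * 20]
    return hashlib.sha1(piece_data).digest() == expected
```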
The overall amount of data downloaded across the system must equal the overall amount uploaded – every download has to come from somewhere! So, as a very rough approximation, you can download data as fast as you make it available for upload. This isn’t quite the case, because people often leave their clients running for some time after the download has finished, either because they’re good citizens or because they’re off having a cup of coffee; others can therefore get more download capacity than they’re giving. Also, at any particular time you’ll see the speed of your downloads and uploads fluctuate for a few minutes, though it roughly balances out over time.
A disadvantage of the system as a whole is that if, like many of us, you have a much faster downstream connection than upstream, your BitTorrent download is likely to happen at something closer to your slower upstream speed. The advantage, though, is that if the file you’re downloading is at all popular, you’re likely to get it in a much more reliable way than a regular web download, you won’t be limited by the capacity of the originator’s server link, and they won’t end up paying a fortune in bandwidth charges.
There are lots of clever bits which I haven’t touched on here. For example, if you’re connected to several peers, how does your client decide which piece of the file to download next? Answer: it normally downloads the rarest one, the one which fewest of the others have, thus helping to redress the balance. Cute, eh? There’s also stuff related to starting up, to finishing, to finding new peers, etc., but for more details have a look at the paper linked to above. All in all, a very nice system.
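To make that ‘rarest first’ rule concrete, here’s a toy sketch of the choice (my own simplification, assuming we know each peer’s holdings as a set of piece numbers):

```python
from collections import Counter

def pick_next_piece(have, peer_holdings):
    """Choose the piece we still need that the fewest peers can supply.
    have: set of piece indices we already hold.
    peer_holdings: one set of piece indices per connected peer."""
    availability = Counter()
    for pieces in peer_holdings:
        availability.update(pieces)
    wanted = [(count, index) for index, count in availability.items()
              if index not in have]
    return min(wanted)[1] if wanted else None

# Piece 2 is held by only one of the three peers, so it is fetched first.
print(pick_next_piece({0}, [{0, 1}, {1, 2}, {0, 1}]))   # -> 2
```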
I’m reading a book edited by Joel Spolsky and came across this nice footnote:
This reminds me of my rule: if you can’t understand the spec for a new technology, don’t worry; nobody else will understand it either, and the technology won’t be important.
Where Google leads, Microsoft follows.
Actually, both of these projects must have been in the pipeline for some time, and I bet MS was furious that Google stole their thunder.
I suppose that, having worked on and off with VoIP for a little while, I really should have cottoned on to this earlier, but it’s only having it at home that has made me realise what will really be different for ordinary users in a VoIP world.
It’s not the lower cost, though that will be nice. It’s not that you’ll need fewer wires around your house, or that you’ll be able to make phone calls from your laptop, or that you’ll only need to buy one link to the outside world because your internet connection and your phone connection will have merged. No, I think the big changes will be that:
All of this has been possible in the past, but it won’t be long before this is a standard facility that everybody will have at home if they choose to make use of it.
I’ve been playing with Gizmo. For those who haven’t come across it, the Gizmo Project is like Skype, but uses open, standard VoIP protocols. Why would you want to do this, when the software’s still in beta and there are millions of people using Skype? Well, because Skype users can only connect to other Skype users (unless they pay money to be routed over the standard phone system). Gizmo can connect to things which are not Gizmo, like standard VoIP phones and IP-capable exchanges.
I have both of these. I’ve been experimenting with Asterisk, the open source PBX, and I’ll write more about that soon. But for now, suffice it to say that my office phone line is now connected to a computer, instead of to a phone. I have a motley collection of phones around the house which are also connected to it, either via conventional phone wiring or via the network. And I have complete control over this… but more about that in a later post.
When you get a Gizmo ID, which is just like registering for AIM or Skype, you get something which works just like those systems, but can also be dialled by a standard VoIP system (using a SIP call to <gizmoname>@proxy01.sipphone.com, for those interested). So I now have phones around the house on which I can dial a four-digit extension number and it will call my friend Robert on his Gizmo session, wherever he is in the world. And he can choose which phones in my house to ring, because I’ve assigned them different names, or whether to ring all of them at once, and he can do it when he doesn’t have a phone handy! And get this: it’s all free!
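For those wondering what this looks like behind the scenes, it’s a few lines in Asterisk’s extensions.conf. This fragment is hypothetical (the extension number and peer names are made up), but the SIP address format is the real one:

```
; Hypothetical extensions.conf fragment
[internal]
; Dialling 4001 from any phone in the house rings Robert's Gizmo
; session, wherever in the world he happens to be logged in.
exten => 4001,1,Dial(SIP/robertsgizmoname@proxy01.sipphone.com,30)
exten => 4001,2,Hangup()

; Incoming calls addressed to 'study' ring only the study phone
; ('study-phone' being a peer defined in sip.conf).
exten => study,1,Dial(SIP/study-phone,20)
```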
Now, there are quite a few rough edges here still, and configuring some of this is not for the faint-hearted, but trust me, this is the way of the future. You can now call my phone here either by using a phone number (and paying for the privilege) or by calling, say, my study using study@home.quentin.org. (Actually, I’ve changed the address here, but it’s very similar to that; let me know if you’d like to try it.) The second one is more flexible, easier to remember, and it won’t cost you a penny.
Here’s a quick idea: When you’re next getting your business cards printed, upload your vCard (the standard electronic equivalent) to a web server somewhere, and print the URL of the vCard on your business card. Most people end up copying business cards into an electronic address book – those that are important to them, anyway – and if you do this then they’ll only have to type in one thing.
You don’t have to publish your details for all and sundry to see; you don’t need to link to it from elsewhere on your web site, and you can pick a fun URL that people won’t guess, like www.mycompany.com/007.vcf. But if you’re giving somebody the details in paper form anyway, it’s probably because you want to make it easy for them to contact you, so why not make it even easier?
Creating a vCard file is easy. On a Mac, you can just drag your address from the Address Book to your desktop or some other folder. On Windows, I think you can select a contact in Outlook or Outlook Express and do a Save As… Thunderbird, sadly, doesn’t seem to export vCards yet, though it will import them.
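If you’d rather create one by hand, a vCard is just a small text file. Here’s a made-up example (every detail invented) to match the 007.vcf idea above:

```
BEGIN:VCARD
VERSION:3.0
N:Bond;James;;;
FN:James Bond
ORG:MyCompany Ltd
TEL;TYPE=WORK,VOICE:+44 1234 567890
EMAIL;TYPE=INTERNET:jbond@mycompany.com
URL:http://www.mycompany.com/
END:VCARD
```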
There was a time when almost everyone had a PalmPilot, and you could just beam your details to them. Sometimes technology takes a step backwards…
Rose has been avidly listening to the cricket. Well, à chacun son goût, that’s what I say. However, during much of the day it’s only broadcast on BBC Radio 4 LW (Long Wave), not on FM.
Now, the last time I used Long Wave, we specified things by wavelength and hence everything was in metres, whereas now everything is in kHz, so it took me a while to realise that we no longer have a radio in the house, at least not a portable one, capable of receiving LW.
Ironically, the only way we could allow Rose to listen to the cricket while working in the garden was to put a laptop out there and connect to the BBC web site using our own wireless network. (Which can be found on your dial at about the 12cm mark, by the way.)
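For anyone wanting to convert between the old markings and the new, wavelength is just the speed of light divided by frequency; a quick sketch:

```python
# Wavelength = speed of light / frequency.
C = 299_792_458                        # metres per second

def wavelength_m(freq_hz):
    return C / freq_hz

print(round(wavelength_m(198e3)))      # Radio 4 LW at 198 kHz: ~1514 m
print(round(wavelength_m(2.4e9), 3))   # 2.4 GHz Wi-Fi: ~0.125 m, the 12cm mark
```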
The Baby Name Wizard is a very nice use of Java.
iTunes 4.9 is out and splashed across the front of Apple’s site with the tagline ‘Radio Reborn’. Why? Because it has built-in support for subscribing to podcasts. This is quite big news. More info.
I’ve been surprised how much I’ve used the RSS facilities in the Tiger version of Safari. I had assumed beforehand that the facilities in a general-purpose browser would not match up to those in NetNewsWire Lite, the RSS reader I had previously used. They don’t, but in fact Safari provides all I need – an indication on my bookmarks bar of which pages have new material.
So I expect that iTunes will now replace my copy of iPodderX Lite, though I’d still recommend the full iPodderX for anyone needing more substantial facilities.
I came across this a while ago and then forgot about it. TinyURL.com is a free redirection service which takes your big URLs, like this one:
http://www.amazon.com/gp/browse.html/103-7066182-4716634?node=3435361
and turns them into small ones (of the form http://tinyurl.com/xxxxx) which do the same thing.
Much less messy in your email messages. Much easier to dictate over the phone.
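Under the hood, a service like this needs little more than a table mapping short codes to long URLs and an HTTP redirect. A toy sketch (purely illustrative, and nothing to do with TinyURL’s actual code):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Short code -> long URL. A real service would use a database,
# and mint a new code for each URL submitted.
URLS = {
    '/abc123': 'http://www.amazon.com/gp/browse.html/'
               '103-7066182-4716634?node=3435361',
}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = URLS.get(self.path)
        if target:
            self.send_response(301)              # permanent redirect
            self.send_header('Location', target)
            self.end_headers()
        else:
            self.send_error(404, 'Unknown short URL')

if __name__ == '__main__':
    HTTPServer(('', 8000), Redirector).serve_forever()
```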