Two long-established names in the world of journalism are approaching the challenges of AI in very different ways.
The New York Times is suing OpenAI in an expensive landmark case that the world is watching carefully, because it could have very far-reaching ramifications.
The Atlantic, on the other hand, has just done a deal with them.
This isn’t a subject I normally follow very closely, but in what I found to be an intriguing interview, Nicholas Thompson, the Atlantic’s CEO, explains how and why they made this decision, and explores areas well beyond the simple issues of copyright and accreditation. It’s an episode of the Decoder podcast, hosted by The Verge’s Nilay Patel, who is an excellent and intelligent interviewer.
Recommended listening if you have a car journey, commute, or dog-walk coming up — just search for ‘Decoder’ on your favourite podcast app — or you can get the audio, and/or a transcript, from the link above.
If I were giving advice to somebody considering buying a Tesla at the moment, it would be (a) buy it and (b) don’t believe the ‘full self-driving’ hype… yet.
You’ll be getting a car that is great fun to drive, has amazing range, a splendid safety record, a brilliant charging network, etc… and, in the standard included ‘autopilot’, has a really good cruise control and lane-keeping facility. One thing I’ve noticed when comparing it to the smart cruise control on my previous car, for example, is that it’s much better at handling the situation where somebody overtakes and then pulls into the lane just in front of you. Systems that are primarily concerned with keeping your distance from the car in front have a difficult decision to make at that point: how much, and how suddenly, should they back off to maintain the preferred gap? The Tesla, in contrast, is constantly tracking all the vehicles around you, and has therefore been following that car and its speed relative to yours for some time, so can react much more smoothly.
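Just to illustrate the difference, here’s a toy sketch (not Tesla’s algorithm, and the gains and distances are entirely invented): a controller that reacts only to the measured gap has to brake hard when a car suddenly appears close in front, whereas one that has already been tracking that car’s relative speed can see that it is pulling away and barely needs to slow at all.

```python
# Toy sketch only -- not Tesla's code; gains and gaps are invented for illustration.

def gap_only_decel(gap_m, preferred_gap_m):
    """A controller that only knows the gap: it brakes in proportion to how
    badly the preferred following distance has suddenly been violated."""
    shortfall = max(0.0, preferred_gap_m - gap_m)
    return -0.5 * shortfall                      # deceleration command, m/s^2

def tracked_decel(gap_m, preferred_gap_m, relative_speed_mps):
    """A controller that has been tracking the other car all along: if it is
    already pulling away (positive relative speed), ease off only gently."""
    shortfall = max(0.0, preferred_gap_m - gap_m)
    closing_speed = max(0.0, -relative_speed_mps)
    return -(0.1 * shortfall + 0.4 * closing_speed)

# A car overtakes and pulls in 15 m ahead, still travelling 3 m/s faster than us:
print(gap_only_decel(15, 40))          # -12.5 : harsh, uncomfortable braking
print(tracked_decel(15, 40, +3.0))     # -2.5  : a gentle lift off the accelerator
```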
The dubiously-named ‘Full Self-Driving’ package is an expensive optional extra which you can buy at the time of purchase or add on later with a couple of clicks in the app. At the moment, it doesn’t give you very much more: the extra functionality (especially outside the US) hasn’t been worth the money. If you purchase it now, you’re primarily buying into the promise of what it will offer in the future, and the hope that this will provide you with significant benefits in the time between now and when you sell the car!
But at some point in the not-too-distant future, the new version (currently known as the ‘FSD Beta’) will be released more widely to the general public. ‘Full Self-Driving’ will then still be a misnomer, but will be quite a bit closer to the truth. YouTube is awash with videos of the FSD Beta doing some amazing things: people with a 45-minute California commute essentially being driven door-to-door, for example, while just resting their hands lightly on the steering wheel… and also with a few examples of it doing some pretty scary things. It seems clear, though, that it’s improving very fast, and will be genuinely valuable on highways, especially American highways, before too long, but also that it’s likely to be useless on the typical British country road or high street for a very long time!
What Tesla has, to a much greater degree than other companies, is the ability to gather data from its existing vehicles out on the road in order to improve the training of its neural nets. The more cars there are running the software, the better it should become. But the back-at-base process of training the machine learning models on vast amounts of video data (to produce the parameters which are then sent out to all the cars) is computationally very expensive, and the speed of an organisation’s innovation, and how fast it can distribute the results to the world, depends significantly on how fast it can do this.
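In outline, the loop looks something like this. It’s a deliberately crude sketch with made-up names, and has nothing to do with Tesla’s real pipeline, but it captures the shape of the cycle: fleet gathers data, base trains on the pooled data, new parameters go back out to every car.

```python
# Crude sketch of a fleet-learning loop; all names and steps are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Car:
    model_version: int = 0
    clips: list = field(default_factory=list)

    def drive(self):
        # 1. Each car records video clips as it drives on the current model.
        self.clips.append(f"clip_recorded_on_model_v{self.model_version}")

def train_centrally(clips, current_version):
    # 2. The expensive step back at base: train on the pooled fleet data.
    #    Here it just bumps a version number; in reality this is the bottleneck.
    print(f"training model v{current_version + 1} on {len(clips)} clips...")
    return current_version + 1

fleet = [Car() for _ in range(3)]
version = 0
for _ in range(2):
    for car in fleet:
        car.drive()
    all_clips = [clip for car in fleet for clip in car.clips]
    version = train_centrally(all_clips, version)
    for car in fleet:
        car.model_version = version      # 3. push the new parameters to every car
```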
Last week, Tesla held their ‘AI Day’, where Elon Musk got up on stage and, in his usual way, mumbled a few disjointed sentences. Did nobody ever tell the man that it’s worth actually preparing before you get up on a stage, especially the world stage?
However, between these slightly embarrassing moments are some amazing talks by the Tesla team, going into enormous detail about how they architect their neural nets, the challenges of the driving task, the incredible chips they are designing and rolling out to create what may be the fastest ML-training installation in the world, and the systems they’re building around all this new stuff.
For most people, this will be too much technical detail and will make little sense. Those with a smattering of knowledge about machine learning can sit back and enjoy the ride: there are lots of pictures and video clips amidst the details! And for those with a deeper interest in AI/ML systems, I would say it is well worth watching.
There are two key things that struck me during the talks.
First, as my friend Pilgrim pointed out, it’s amazing how open they’re being. Perhaps, he suggested, they can safely assume that the competition is so far behind that they’re not a threat!
Secondly, it suddenly occurred to me — halfway through the discussions of petaflop-speed calculations — that I was watching a video from a motor manufacturer! An automobile company! If you’re considering buying a Tesla, this is part of what you’re buying into, and it’s astonishingly different from anything you’d ever see from any other car-maker. Full self-driving is a very difficult problem. But this kind of thing goes a long way to convincing me that if anybody is going to get there, it will be Tesla.
You may or may not ever pay for the full FSD package, but it’s safe to assume much of the output of these endeavours will be incorporated into other parts of the system. So, at the very least, you should eventually get one hell of a cruise control!
The livestream is here, and the interesting stuff actually starts about 46 minutes in.
Google Street View is, I think, one of the most amazing achievements in recent times, and it’s one of the things that keeps me using Google Maps even though many of the alternatives are rather good. If I’m heading to a new destination, I’ll often look in advance at, say, the entrance gate, or the correct exit from the last roundabout, so those final manoeuvres when the traffic is slowing down behind you are less stressful: you’re in familiar surroundings. Street View is, in that sense, a déjà-vu-generator.
But there are interesting questions to be asked about Street View as well. For someone who enjoys window-shopping on Rightmove for a possible next home, it’s a very valuable tool, and I’ve often wondered how much the market appeal of your property is affected by whether the Google car drove by on a sunny or a cloudy day!
And this morning, I saw debates on Twitter about research that used images of your house on Street View to estimate how likely you were to have a car accident, something which could be used against you by insurance companies (or, of course, in your favour, but that doesn’t make such good headlines).
The paper’s here, and I was most surprised by just how poor the insurance company’s existing model was; information about your age, gender, postcode, etc. apparently doesn’t give them as much insight as you might expect, and knowing whether you live in a well-maintained detached house in a nice neighbourhood gives them just a little bit more. Some see this as very sinister, but you need to remember that this wasn’t some automated image-analysis system; the researchers had to spend a lot of time looking at Street View pictures of houses and annotating them by hand with their assessment of the condition, type of house, and so forth. Some of this could be performed by machines in future, but there are lots of other factors to consider as well: is the issue that you are more likely to crash into somebody in certain neighbourhoods, or that they are more likely to crash into you? What’s the speed limit on the surrounding streets? How close is the pub? And so on…
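If you wanted to reproduce the flavour of the experiment, it would look roughly like this. This is a toy sketch with synthetic data, not the paper’s actual model or features: the idea is simply to fit a risk model on the standard underwriting features, then add the hand-annotated Street View features and see whether the predictions improve a little.

```python
# Toy illustration of "baseline features vs baseline + Street View features".
# All data is synthetic and the feature names are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Standard underwriting features: age, gender, postcode risk band (synthetic).
baseline = rng.normal(size=(n, 3))

# Hand-annotated Street View features: house condition, house type, neighbourhood.
street_view = rng.normal(size=(n, 3))

# Synthetic ground truth in which the Street View features carry a little signal.
logits = 0.3 * baseline[:, 0] + 0.5 * street_view[:, 0] - 1.0
claims = rng.binomial(1, 1 / (1 + np.exp(-logits)))

for name, X in [("baseline only", baseline),
                ("baseline + street view", np.hstack([baseline, street_view]))]:
    model = LogisticRegression().fit(X[:1500], claims[:1500])
    auc = roc_auc_score(claims[1500:], model.predict_proba(X[1500:])[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```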
So I sat down thinking I would write about this, but one thing I failed to notice was the date of the research. I assumed that because people were talking about it on Twitter today, it must be new — a fatal mistake. What’s more, just a little bit of further research showed me that my friend John Naughton had written a good piece about it in the Observer two years ago.
So it’s perhaps not surprising that I like technologies that can give me a sense of déjà vu. My own abilities in that area are clearly lacking!
Stephen Pulman gave the Wheeler Lecture in our department this afternoon: an excellent discussion of whether current machine-learning techniques would ever allow us to build a machine that passes the Turing Test.
It made me wonder about the value of a variation on the theme, which I propose to call the Meta-Turing-Test.
It would work like this:
Can we build a machine which, given a Turing Test scenario, can work out whether the responses are from a human or a machine, even when a human can’t?
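One way to frame it in machine-learning terms is as a discriminator trained on labelled transcripts; something like this toy sketch, with invented examples, purely to pin the idea down:

```python
# Toy framing of the Meta-Turing-Test: can a trained discriminator tell human
# from machine responses more reliably than human judges can? All data invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "Honestly, I can't remember -- probably some time last spring?",        # human
    "As an AI language model, I do not have personal experiences.",         # machine
    "Ha! Good question. Ask my sister, she never lets me forget it.",       # human
    "The answer to your question is as follows: there are three reasons.",  # machine
]
labels = [1, 0, 1, 0]   # 1 = human, 0 = machine

discriminator = make_pipeline(TfidfVectorizer(), LogisticRegression())
discriminator.fit(transcripts, labels)

# The Meta-Turing-Test: compare this accuracy on unseen transcripts with the
# accuracy of a panel of human judges on the same transcripts.
print(discriminator.predict(["I genuinely have no idea, sorry!"]))
```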