As is generally well known now, ChatGPT and similar LLM systems are basically just parrots. If they hear people saying ‘Pieces of eight’ often enough, they know it’s a valid phrase, without knowing anything about the Spanish dollar. They may also know that ‘eight’ is often used in the same context as ‘seven’ and ‘nine’, and so guess that ‘Pieces of nine’ would be a valid phrase too… but they’ve never actually heard people say it, so are less likely to use it. A bit like a parrot. Or a human.
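(For anyone who'd like to see the parrot in code: here's a minimal sketch of that idea as a toy bigram model with add-one smoothing, written in Python. The corpus and the numbers are entirely made up for illustration; real LLMs are neural networks trained on vast amounts of text, not counting tables, but the "often heard, so plausible; never heard, so less likely" behaviour is the same in spirit.)

```python
from collections import Counter

# Toy corpus, invented for illustration: "pieces of eight" appears often,
# "pieces of nine" never does.
corpus = (
    "pieces of eight . pieces of eight . pieces of eight . "
    "pieces of seven . seven and eight and nine ."
).split()

# Count bigrams: how often does each word follow "of"?
bigrams = Counter(zip(corpus, corpus[1:]))
followers_of = Counter({b: n for (a, b), n in bigrams.items() if a == "of"})

total = sum(followers_of.values())
vocab = set(corpus)

# Add-one (Laplace) smoothing: unseen continuations like "nine" get a small
# but non-zero probability -- plausible, just less likely. Like the parrot.
for word in ["eight", "seven", "nine"]:
    p = (followers_of[word] + 1) / (total + len(vocab))
    print(f"P({word!r} | 'of') = {p:.2f}")
```

Run it and "eight" comes out well ahead: roughly 0.36, against 0.18 for "seven" and 0.09 for "nine". "Nine" isn't impossible; it has simply never been heard.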
And when I say they know nothing about the phrase actually referring to Spanish currency… that’s only true until they read the Wikipedia page about it, and then, if asked, they’ll be able to repeat phrases explaining the connection with silver coins. And if they read Treasure Island, they’ll also associate the phrase with pirates, without ever having seen a silver Spanish coin. Or a pirate.
A bit like most humans.
The AI parrots can probably also tell you, though they’ve never been there or seen the mountain, that the coins were predominantly made with silver from Potosi, in Bolivia.
A bit like… well… rather fewer humans. (Who have also never been there or seen the mountain, but unfortunately are also not as well-read and are considerably more forgetful.)
Since so much human learning and output comes from reading, watching and listening to things and then repeating the bits we remember in different contexts, we are all shaken up when we realise that we’ve built machines that are better than us at reading, watching and listening to things and repeating the bits they remember in different contexts.
And this leads to Quentin’s First Theorem of Artificial Intelligence:
What really worries people about recent developments in AI is not that the machines may become smarter than us.
It’s that we may discover we’re not really much smarter than the machines.
The only thing that says “Pieces of nine” is a parrot-y error.
🙂
Quentin,
I have only briefly tried the various AI products, but I reach the same conclusion now as I did as a speech-recognition researcher in 1979–1982. There is a spectrum from understanding simple, trained speech all the way to sentient, conscious beings. In between, on the spectrum but far towards the latter, conscious end, is creativity. We have certainly improved on “search” when compared to traditional Google search. Searching history (accurate or not) is a step up. Unfortunately, it is either not accurate, or so biased, as to be of little decisional value. Sky News (Murdoch) and the New York Times give vastly different facts. So what value does AI have in that space? None. Creativity and discriminating analysis still fall into the human bucket. Jules Verne and H. G. Wells are safe. But checking your sources… bibliographic memory work will get more difficult.
I asked Google Bard to list all the ex-Presidents of the (UK) Supreme Court. (I was thinking about a particular case, and the name of the judge had slipped my mind, so I was trying to jog my memory.) The answer was quite strange. It gave a timeline which covered the entire period since the court had been set up, but Lady Hale was missing. Eventually I worked out that one of the other judges had been assigned a longer period in office, to fill the gap.
At the time I just thought the technology was imperfect and so on, but then I started wondering if the AI could have picked up sexist attitudes from the material it had ingested. Perhaps it’s more likely to make a mistake like that with a female judge, because there is a certain amount of material out there which suggests that women don’t do that kind of role.
As artificial intelligence climbs the mountain, humans have started the descent.
[…] I like this, because it echoes Quentin’s First Theorem of Artificial Intelligence, which I proposed here about a year ago. […]