As is generally well known now, ChatGPT and similar LLM systems are basically just parrots. If they hear people saying ‘Pieces of eight’ often enough, they know it’s a valid phrase, without knowing anything about the Spanish dollar. They may also know that ‘eight’ is often used in the same context as ‘seven’ and ‘nine’, and so guess that ‘Pieces of nine’ would be a valid phrase too… but they’ve never actually heard people say it, so are less likely to use it. A bit like a parrot. Or a human.
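For the technically inclined, here's a minimal sketch of that idea in Python: a toy bigram model with add-one smoothing, which is nothing like the neural machinery inside a real LLM, but it does show how a model that has heard 'pieces of eight' many times and 'pieces of nine' never will still assign the latter a small, nonzero probability. The corpus and numbers are invented purely for illustration.

```python
from collections import Counter

# A toy corpus: the model has "heard" 'pieces of eight' many
# times, but has never heard 'pieces of nine'.
corpus = ("pieces of eight " * 5 + "seven eight nine ").split()

vocab = set(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))  # counts of word pairs

def prob(word: str, prev: str = "of") -> float:
    """P(word | prev) with add-one smoothing, so an unseen
    continuation like 'nine' gets a small nonzero probability."""
    prev_count = sum(n for (w1, _), n in bigrams.items() if w1 == prev)
    return (bigrams[(prev, word)] + 1) / (prev_count + len(vocab))

for w in ("eight", "nine"):
    print(f"P({w!r} | 'of') = {prob(w):.2f}")
# Prints roughly 0.60 for 'eight' and 0.10 for 'nine':
# the parrot *could* say 'pieces of nine', but probably won't.
```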
And when I say they know nothing about the phrase actually referring to Spanish currency… that's only true until they read the Wikipedia page about it, and then, if asked, they'll be able to repeat phrases explaining the connection with silver coins. And if they read Treasure Island, they'll also associate the phrase with pirates, without ever having seen a silver Spanish coin. Or a pirate.
A bit like most humans.
The AI parrots can probably also tell you, though they've never been there or seen the mountain, that the coins were predominantly made with silver from Potosí, in Bolivia.
A bit like… well… rather fewer humans. (Who have also never been there or seen the mountain, but who, unfortunately, are not as well-read and are considerably more forgetful.)
Since so much human learning and output comes from reading, watching and listening to things and then repeating the bits we remember in different contexts, we are all shaken up when we realise that we’ve built machines that are better than us at reading, watching and listening to things and repeating the bits they remember in different contexts.
And this leads to Quentin's First Theorem of Artificial Intelligence:
What really worries people about recent developments in AI is not that the machines may become smarter than us.
It’s that we may discover we’re not really much smarter than the machines.