Tag Archives: artificial intelligence

The success of Django… and when the machines take over.

The Django web framework is now 20 years old.  Within a few months of its launch, I discovered it, liked it, and we rather daringly decided to bet on it as the basis for the software of my new startup.  (Here’s my post from almost exactly 20 years ago, explaining the decision.)

For those not familiar with them, web frameworks give you a whole lot of the functionality you need when you want to use your favourite programming language to build websites and web services.  They help you receive HTTP requests, decide what to do based on the URLs, look things up in databases, produce web pages from templates, return the resulting pages in a timely fashion, and a whole lot more besides.  You still have to write the code, but you get a lot of Lego bricks of the right shape to make it very much easier, and there are plenty of documented conventions about how to go about it, so you don’t have to learn the hard way the lessons that lots of others had to learn in the past!
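
(To give a flavour of those bricks, here’s a minimal, purely illustrative sketch of a Django view and URL route; it isn’t from my original post, and the names in it are made up.)

# A minimal Django "lego brick": one view function and one URL route.
# Assumes an ordinary Django project is already set up around it.
from django.http import HttpResponse
from django.urls import path

def hello(request, name):
    # Django has already parsed the HTTP request and pulled `name`
    # out of the URL for us; we just return a response.
    return HttpResponse(f"Hello, {name}!")

# Map /hello/<name>/ to the view above.
urlpatterns = [
    path("hello/<str:name>/", hello),
]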

Anyway, I had made a similar lucky bet in 1991 when the first version of the Python programming language was released, and I loved it, and was using it just a few weeks later (and have been ever since).  

Django was a web framework based on Python, and it has gone on to be a huge success partly because it used Python; partly because of the great design and documentation built by its original creators; partly because of the early support it received from their employer, the Kansas newspaper Lawrence Journal-World, which had the foresight to release it as Open Source; and partly because of the non-profit Django Software Foundation which was later created to look after it.

Over the last two decades Django has gone on to power vast numbers of websites around the world, including some big names like Instagram.  And I still enjoy using it after all that time, and have often earned my living by doing so. My thanks go out to all who have contributed to making it the success story that it is!

Anyway, in a podcast this week, a 20th-birthday panel discussion with Django’s creators, there was an amusing and poignant story from Adrian Holovaty, which explains the second part of this post’s title.

Adrian now runs a company called Soundslice (which also looks rather cool, BTW).  And Soundslice recently had a problem: ChatGPT was asserting that their product had a feature which it didn’t in fact have. (No surprises there!)  They were getting lots of users signing up and then being disappointed.  Adrian says:

“And it was happening, like, dozens of times per day. And so we had this inbound set of users who had a wrong expectation. So we ended up just writing the feature to appease the ChatGPT gods, which I think is the first time, at least to my knowledge, of product decisions being influenced by misinformation from LLMs.”

Note this.  Remember this day.   It was quicker for them to implement the world as reported by ChatGPT than it was to fix the misinformation that ChatGPT was propagating.

Oh yes.

Another AI cautionary tale

In one of my YouTube videos, I talk about how I’ve wired up my solar/battery system to ensure the energy in my home battery isn’t ever used to charge my car (which has a much bigger battery, so doing this doesn’t normally make sense), while still allowing the car to be charged using any excess solar power.

I had a query from somebody who was confused about how it worked, so I did my best to answer, and we went to and fro in what became a decent-length conversation.  He has a similar inverter to mine, but had some fundamental misunderstandings about how it works.

At first, I assumed this was because he had different goals: he lives in another part of the world where there’s a lot more sun and a much less reliable electricity supply, for example.  But no, it turned out he wanted to do the same thing as me, but was convinced it wouldn’t work the way I had described it.

It turned out, in the end, that the source of his confusion was that he had asked four different LLMs (ChatGPT, Claude, Perplexity, and Grok) about how to configure the system, and they had all agreed that ‘battery power is never used to power loads on the “Grid” port’, which is actually incorrect.

What persuaded him, in the end, that my description was right, and that all four LLMs were wrong?

He read the manual.

Wisdom of the crowds, or lowest common denominator?

I liked this:

People have too inflated sense of what it means to “ask an AI” about something. The AI are language models trained basically by imitation on data from human labelers. Instead of the mysticism of “asking an AI”, think of it more as “asking the average data labeler” on the internet.

But roughly speaking (and today), you’re not asking some magical AI. You’re asking a human data labeler. Whose average essence was lossily distilled into statistical token tumblers that are LLMs. This can still be super useful of course. Post triggered by someone suggesting we ask an AI how to run the government etc. TLDR you’re not asking an AI, you’re asking some mashup spirit of its average data labeler.

Andrej Karpathy

Thanks to Simon Willison for the link.

AI Whitewashing

Yesterday, I asked:

Here’s a question, O Internet:

If I buy full-fat milk and dilute it 50/50 with water, do I effectively have semi-skimmed milk, or is there something more sophisticated about the skimming process?

And if I then dilute it again, do I get skimmed milk… for one quarter of the price?

Now, the quick answer, as I understand it, is ‘no’: milk contains a variety of nutrients, and several of these are water-soluble.  So the process of ‘skimming’ to reduce the fat content doesn’t dilute these nutrients in the way that just adding water would: you still get them at approximately the same concentration when you buy semi-skimmed or skimmed milk.

But I learned a couple of interesting different things from asking!

The first thing is this wonderful diagram, found and posted in the comments by Spencer:

[Image: ‘Milk products’ flowchart]

It looks like something explaining the petrochemical industry, but much, much more yummy.

His comment: “I wonder how many of these I can have with my breakfast today?”

And the second thing is that, as well as beginning “Here’s a question, O Internet”, I could have begun “Here’s a question, O Artificial Intelligence”.

My friend Keshav did that, submitting my question verbatim to Perplexity, a system I hadn’t previously tried.  Here’s the rather good result (and here as a screenshot in case that live link goes away).

I then went on to ask “Which nutrients in milk are water-soluble?”, and it gave me a well-cited answer, along with the comment that “maintaining adequate levels of these water-soluble vitamins in breast milk is important for the health and development of the breastfed infant”.  So I asked a follow-up question: “Is this different in cows’ milk?”, and again got a useful, detailed response with references for all the facts.

This stuff really is getting better… at least until the Internet is completely overrun by AI spam and the AIs have to start citing themselves.  But for now,  I think Perplexity is worth exploring further.

Thanks to Spencer, Keshav and other respondents!

Some suggested reading: AI and dopamine

Andrew Curry’s thoughtful newsletter ‘Just Two Things’ arrives in my inbox three times a week (which, I confess, is slightly too often for me always to give it the attention it deserves).  The two things he talks about today included some gems, though.

First, he looks at Ted Gioia’s article, The State of the Culture, 2024, which comes with the subtitle ‘Or a glimpse into post-entertainment society (it’s not pretty)’.

Gioia talks about the old dichotomy between Art and Entertainment:

Many creative people think these are the only options—both for them and their audience. Either they give the audience what it wants (the entertainer’s job) or else they put demands on the public (that’s where art begins).

but he then describes how a dopamine-driven world is changing that into something more complex and rather more worrying. This is only the beginning:

[Image from Gioia’s article: the ‘fish picture’ referred to below]

It’s a good and interesting piece, and well worth reading, but if you find it depressing you should also read Curry’s comments, which suggest things may not be as bad as they seem.


In the second of his Two Things, Curry talks about an article by Paul Taylor in the London Review of Books.  (So, yes, you’re reading my comments on Andrew Curry’s comments on Paul Taylor’s comments on other people’s books.  This is starting to resemble that fish picture above!)

The Taylor article is also very good, and I won’t repeat too much of it here.  I will, however, quote a section that Curry also quotes:

We should be genuinely awestruck by what ChatGPT and its competitors are capable of without succumbing to the illusion that this performance means their capacities are similar to ours. Confronted with computers that can produce fluent essays, instead of being astonished at how powerful they are, it’s possible that we should be surprised that the generation of language that is meaningful to us turns out to be something that can be accomplished without real comprehension.

I like this, because it echoes Quentin’s First Theorem of Artificial Intelligence, which I proposed here about a year ago.

What really worries people about recent developments in AI is not that the machines may become smarter than us.

It’s that we may discover we’re not really much smarter than the machines.

Again, the LRB article is well worth your time, if you can get through it before being distracted by things which offer you more dopamine.

Sign of the times: might ChatGPT re-invigorate GPG?

It’s important to keep finding errors in LLM systems like ChatGPT, to remind us that, however eloquent they may be, they actually have very little knowledge of the real world.

A few days ago, I asked ChatGPT to describe the range of blog posts available on Status-Q. As part of the response it told me that ‘the website “statusq.org” was founded in 2017 by journalist and author Ben Hammersley.’ Now, Ben is a splendid fellow, but he’s not me. And this blog has been going a lot longer than that!

I corrected the date and the author, and it apologised. (It seems to be doing that a lot recently.) I asked if it learned when people corrected it, and it said yes. I then asked it my original question again, and it got the author right this time.

Later that afternoon, it told me that StatusQ.org was the personal website of Neil Lawrence.


Neil is also a friend, so I forwarded it to him, complaining of identity theft!

A couple of days later, my friend Nicholas asked a similar question and was informed that “based on publicly available information, I can tell you that Status-Q is the personal blog of Simon Wardley”.  Where is this publicly-available information, I’d like to know!

The moral of the story is not to believe anything you read on the Net, especially if you suspect some kind of AI system may be involved.  Don’t necessarily assume that they’re a tool to make us smarter!

When the web breaks, how will we fix it?

So I was thinking about the whole question of attribution, and ownership of content, when I came across this post, which was written by Fred Wilson way back in the distant AI past (i.e. in December).  An excerpt:

I attended a dinner this past week with USV portfolio founders and one who works in education told us that ChatGPT has effectively ended the essay as a way for teachers to assess student progress. It will be easier for a student to prompt ChatGPT to write the essay than to write it themselves.

It is not just language models that are making huge advances. AIs can produce incredible audio and video as well. I am certain that an AI can produce a podcast or video of me saying something I did not say and would not say. I haven’t seen it yet, but it is inevitable.

So what do we do about this world we are living in where content can be created by machines and ascribed to us?

His solution: we need to sign things cryptographically.

Now this is something that geeks have been able to do for a long time.  You can take a chunk of text (or any data) and produce a signature using a secret key to which only you have access.  If I take the start of this post: the plain text version of everything starting from “It’s important” at the top down to “sign things cryptographically.” in the above paragraph, I can sign it using my GPG private key. This produces a signature which looks like this:

-----BEGIN PGP SIGNATURE-----
iQEzBAEBCgAdFiEENvIIPyk+1P2DhHuDCTKOi/lGS18FAmRJq1oACgkQCTKOi/lG
S1/E8wgAx1LSRLlge7Ymk9Ru5PsEPMUZdH/XLhczSOzsdSrnkDa4nSAdST5Gf7ju
pWKKDNfeEMuiF1nA1nraV7jHU5twUFITSsP2jJm91BllhbBNjjnlCGa9kZxtpqsO
T80Ow/ZEhoLXt6kDD6+2AAqp7eRhVCS4pnDCqayz0r0GPW13X3DprmMpS1bY4FWu
fJZxokpG99kb6J2Ldw6V90Cynufq3evnWpEbZfCkCl8K3xjEwrKqxHQWhxiWyDEv
opHxpV/Q7Vk5VsHZozBdDXSIqawM/HVGPObLCoHMbhIKTUN9qKMYPlP/d8XTTZfi
1nyWI247coxlmKzyq9/3tJkRaCQ/Aw==
=Wmam
-----END PGP SIGNATURE-----
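
(For the curious, here’s a rough sketch of how a signature like that can be produced and checked.  It just drives the standard gpg command-line tool from Python; the filenames are made up for illustration, and it assumes you already have a private key in your keyring.)

# Sketch: sign a text file with GnuPG and then verify the signature.
# Assumes the `gpg` command is installed and a private key exists locally.
import subprocess

TEXT_FILE = "post.txt"       # the plain text to be signed (hypothetical name)
SIG_FILE = "post.txt.asc"    # ASCII-armoured detached signature

# Produce a detached, ASCII-armoured signature of the file.
subprocess.run(
    ["gpg", "--armor", "--detach-sign", "--output", SIG_FILE, TEXT_FILE],
    check=True,
)

# Anyone with the matching public key can check that the signature matches
# exactly this text; change a single character and verification fails.
result = subprocess.run(["gpg", "--verify", SIG_FILE, TEXT_FILE])
print("Signature is valid" if result.returncode == 0 else "Verification FAILED")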

If you were so inclined, you could easily find my corresponding public key online and use it to verify that signature.  What would that tell you?

Well, it would say that I have definitely asserted something about the above text: in this case, I’m asserting that I wrote it.  It wouldn’t tell you whether that was true, but it would tell you two things:

  • It was definitely me making the assertion, because nobody else could produce that signature.  This is partly because nobody else has access to my private key file, and even if they did, using it also requires a password that only I know. So they couldn’t  produce that signature without me. It’s way, way harder than faking my handwritten signature.

  • I definitely had access to that bit of text when I did so, because the signature is generated from it. This is another big improvement on a handwritten signature: if I sign page 6 of a contract and you then go and attach that signature page to a completely new set of pages 1-5, who is to know? Here, the signature is tied to the thing it’s signing.

Now, I could take any bit of text that ChatGPT (or William Shakespeare) had written and sign it too, so this doesn’t actually prove that I wrote it.  

But the key thing is that you can’t do it the other way around: somebody using an AI system could produce a blog post, or a video or audio file which claims to be created by me, but they could never assert that convincingly using a digital signature without my cooperation.  And I wouldn’t sign it. (Unless it was really good, of course.)

Gordon Brander goes into this idea in more detail in a post entitled “LLMs break the internet. Signing everything fixes it.”   The gist is that if I always signed all of my blog posts, then you could at least treat with suspicion anything that claimed to be by me but wasn’t signed.  And that soon, we’ll need to do this in order to separate human-generated content from machine-generated.

A tipping point?

This digital signature technology has been around for decades, and is the behind-the-scenes core of many technologies we all use.  But it’s never been widely, consciously adopted by ordinary computer users.  Enthusiasts have been using it to sign their email messages since the last millennium… but I know few people who do that, outside the confines of security research groups and similar organisations.  For most of us, the tools introduce just a little bit too much friction for the perceived benefits.

But digital identities are quickly becoming more widespread: Estonia has long been way ahead of the curve on this, and other countries are following along.  State-wide public key directories may eventually take us to the point where it becomes a matter of course for us automatically to sign everything we create or approve.

At which point, perhaps I’ll be able to confound those of my friends and colleagues who, according to ChatGPT, keep wanting to pinch the credit for my blog.


In case you were wondering…

I’ve been trying to get ChatGPT to do some recursion, using prompts like “Write a letter explaining how to use ChatGPT to write letters about how to use ChatGPT to write letters”, but the results, though coherent, are not very satisfying.

So instead I tried confusing it with, “Give an example of the kind of question you would be unable to answer, and then answer it.” It did well on the first bit… but ignored the second.

OK, I thought, I’d better stick to something simple, like the question that all hyperintelligent machines should be able to answer.

The ultimate question and its answer are not specified in the information available to me. However, the concept of an "ultimate question" and the search for an "ultimate answer" has appeared in various forms in literature and pop culture. For example, in Douglas Adams' "The Hitchhiker's Guide to the Galaxy," the supercomputer Deep Thought is asked to find the answer to the ultimate question of life, the universe, and everything, and ultimately determines that the answer is 42. This is meant to be humorous and should not be taken literally.

Testing Turing?

Stephen Pulman gave the Wheeler Lecture in our department this afternoon: an excellent discussion about whether current machine-learning techniques would ever allow us to build a machine that passes the Turing Test.

It made me wonder about the value of a variation on the theme, which I propose to call the Meta-Turing-Test.

It would work like this:

Can we build a machine which, given a Turing Test scenario, can work out whether the responses are from a human or a machine, even when a human can’t?

© Copyright Quentin Stafford-Fraser