Tag: blogging

Personal Software

One of the most thought-provoking articles I've read on AI recently was by Oliver Roeder, writing last month in the FT. (Here, but behind a paywall). In it, he talks about his frustration that most text editors and word processors are unsuitable for his daily job of writing articles for newspapers. "For decades", he writes, "I’ve chiselled through the thick accretion of features encrusting mass-market commercial writing software."

And so, with the help of OpenAI's Codex software, he decided to create his own.

Some extracts:

"Over a single weekend, entirely from scratch and heavily “vibe coded”, I created by some distance the best word processor I have ever used. I’ve named it vibedit. I’m writing in it right now. If there is an actually productive task for generative AI, it is as a creator of bespoke tools like this. Given this new, relative ease of app development, it is easy to imagine the atomisation of software into a mist of customised personal projects, droplets as numerous as users.
Aside from the software product, the experience had other benefits. I felt at liberty from corporate design, and could easily amend my own app as I thought of additions and deletions. And far from removing me from my work, this AI experience forced me to think carefully about how I work, and how to craft a fitting tool."

and

"Given its extensive tailoring, it is possible that vibedit would fit no one in the world except for me.
...
The only furniture in the window is a tiny word counter. There is only one typeface in only one size. More telling is what there is not: a window of templates, a title bar, a toolbar, a ruler, draggable margin setters, an option to insert tables or images, spelling and grammar check, AI 'suggestions'..."

There's more, but you get the idea. He created his own app, tailored for the job he does, and intended to be used only by him in just the way he likes.

I think this is very important.

As we enter an era where software development is much cheaper and easier, the number of people able to create their own bespoke apps will increase rapidly. Either you'll do it, or you'll pay your neighbour's son a modest amount to do it for you. Good and/or important software, to be used by large numbers of people, will still require experienced developers, though they may spend more time guiding the AI than actually typing the code. But software that is good enough for you to use yourself? That's becoming a different story.

Rather than a software company having to create one baroque and bloated product which includes every feature that might appeal to any of its customers, I think we will see a flourishing of smaller programs which leave out everything not needed by their small handful of users. To quote Antoine de Saint-Exupéry, "You know you've reached perfection in design, not when there is nothing more to add, but when there is nothing more to take away."

Anyway, I've been doing the same thing as Roeder. You're looking at it.

As mentioned in my last post, I've long wanted to move my blog away from its WordPress roots. To be fair to WordPress, it has served me (and a significant chunk of the world's websites) pretty well, and for a lot longer than most pieces of software. In the past I have had to rescue friends whose WordPress sites were compromised by malware, but in recent years, as long as you don't install too many random plugins and are diligent about keeping both WordPress and any plugins up-to-date, it works fine.

But it also has a lot of code in it that I don't need, in a programming language I have long-since abandoned, and some of the recent design decisions weren't in a direction I would have taken. I have to adopt them, though, because I must keep updating for security reasons. And yet it is trusted with some of my most important data, and it defines the format in which that data is stored.

And so I settled on Wagtail, a Python-and-Django-based Content Management System (CMS). Unlike WordPress, which started as a blogging tool and then evolved into a more general publishing system, Wagtail is a general-purpose CMS, not tailored to blogging at all. It is also a software platform which you can use to define whatever types of content you want to store, if that isn't just simple web pages. If you're creating a site advertising job vacancies, for example, you might create a JobVacancy page type with form fields for metadata like location, salary, company name and description, and a template specifying how such a page should be converted into nice-looking HTML. That does require you (or somebody) to write code, so you need to be happy with that, but not very much code, since most of the hard work of storing, editing and publishing content under particular URLs is done for you.
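To give you a flavour, that JobVacancy page type might look something like this in Wagtail. This is just a sketch, not taken from any real project: the field names and lengths are my own invention, and it's a fragment of a Django app rather than a standalone program.

```python
from django.db import models
from wagtail.models import Page
from wagtail.admin.panels import FieldPanel


class JobVacancyPage(Page):
    # Metadata fields, stored alongside Wagtail's built-in
    # title, slug, publication date and so on.
    company_name = models.CharField(max_length=255)
    location = models.CharField(max_length=255)
    salary = models.CharField(max_length=100, blank=True)
    description = models.TextField()

    # Controls which fields appear in Wagtail's editing interface.
    content_panels = Page.content_panels + [
        FieldPanel("company_name"),
        FieldPanel("location"),
        FieldPanel("salary"),
        FieldPanel("description"),
    ]
```

Wagtail then takes care of the admin forms, revisions and publishing, and renders each page through a template you write (by convention, one named after the page class).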

So I created content types for blog posts and comments, which largely mirrored the fields used by WordPress because I wanted minimal trouble importing my existing data. And at about this point I started enlisting Claude Code's help to do the rest of the work, and I have barely written a single line of code since.

I needed a script to import all of my past content -- posts, comments, categories and tags and any uploaded media -- preserving existing URLs, and doing various conversions as it went along, e.g. from the way WordPress stored embedded images or YouTube videos to the native Wagtail style of doing the same thing. I didn't at this point know what the normal Wagtail approach was for this, but Claude did: it had read a lot more of the documentation than I had! This script was very important to get right, but I could test it repeatedly on my development system before running it on the main site. I could also give Claude prompts along the lines of "Browse the old site, pick 250 random links and check that the same URLs resolve correctly on the new site." Once I was happy with it, this script was only going to run once and then be discarded.
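As a flavour of what one conversion step in such a script might look like, here is a tiny, entirely hypothetical example. I'm assuming, for illustration, that some old posts used WordPress's `[youtube]` shortcode, and that the `<embed>` markup shown is what Wagtail stores inside its rich-text fields; a real importer also has to create image objects, rewrite database IDs, and much else besides.

```python
import re


def convert_youtube_shortcodes(html: str) -> str:
    """Rewrite WordPress [youtube] shortcodes as Wagtail rich-text embeds.

    One tiny piece of a much larger import pipeline; real WordPress
    content embeds video in several other ways too.
    """
    return re.sub(
        # Matches [youtube VIDEOID] or [youtube https://...v=VIDEOID]
        r"\[youtube\s+(?:https?://\S+?v=)?([\w-]+)\]",
        r'<embed embedtype="media" url="https://www.youtube.com/watch?v=\1"/>',
        html,
    )
```

Running it on `"Watch this: [youtube dQw4w9WgXcQ]"` would give you the corresponding `<embed .../>` tag in place of the shortcode, leaving the surrounding text untouched.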

Then it was a case of adding the features I wanted.

  • I liked the 'Possibly related posts' links at the bottom of each article, so I asked Claude to look at the way the old PHP plugin worked and create something similar for this environment.
  • I liked the calendar in the right margin (if you're on a wide screen), allowing you to see when there were posts, and jump to them by date.
  • When I publish a new blog entry, I like to post a link on the social media platforms I use, so there's a one-button facility to put it on Mastodon, BlueSky and LinkedIn, editing the associated post text on each beforehand if wanted.
  • Spam is a real problem with blog comments, so I needed a way to handle incoming comments effectively, which involved passing them through some rules of my own devising, some that Claude suggested, and finally calling the Akismet API (which, for a modest subscription, had been very effective on the old site). At the end of this pipeline, comments will have been either marked as 'accepted', as 'spam', or as 'pending', in which case I get sent an email with a link to review them.
  
  • Lastly, I needed an RSS feed, both for people who read the posts using an RSS reader and because it's the basis for the email feed for those who subscribe that way.
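The comment pipeline described in the fourth bullet can be sketched roughly like this. It's a simplification with made-up rules: the endpoint URL and form parameters are from Akismet's public comment-check API, but `classify_comment` and its thresholds are my own invention, not the actual code.

```python
from urllib import parse, request


def classify_comment(author: str, body: str, ip: str,
                     api_key: str, blog_url: str) -> str:
    """Return 'spam', 'accepted' or 'pending' for an incoming comment."""
    # 1. Cheap local rules first, so obvious junk never reaches the API.
    if body.count("http") > 3:            # link-stuffed comments
        return "spam"
    if len(body.strip()) < 2:             # empty or one-character noise
        return "pending"                  # a human should take a look
    # 2. Otherwise ask Akismet's comment-check endpoint, which
    #    answers with the literal string "true" (spam) or "false".
    data = parse.urlencode({
        "blog": blog_url,
        "user_ip": ip,
        "comment_type": "comment",
        "comment_author": author,
        "comment_content": body,
    }).encode()
    url = f"https://{api_key}.rest.akismet.com/1.1/comment-check"
    with request.urlopen(request.Request(url, data=data)) as resp:
        return "spam" if resp.read().decode() == "true" else "accepted"
```

In the real pipeline, a 'pending' verdict is what triggers the review email.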

And that was it. At present you should see very little difference on the surface, but my own unique blogging environment, 'Status-Q: The Platform', has all the facilities I want, none of the facilities I don't want, and I'm confident I can add or remove features in future much more easily than if I had to burrow into the WordPress source code. (Some of the changes are already planned -- watch this space!)

So far, I'm very happy with the results, and looking forward to tinkering more with my 'personal software' platform in future. Now let me click 'Publish' and see what happens...!

Sign of the times: might ChatGPT re-invigorate GPG?

It's important to keep finding errors in LLM systems like ChatGPT, to remind us that, however eloquent they may be, they actually have very little knowledge of the real world.

A few days ago, I asked ChatGPT to describe the range of blog posts available on Status-Q. As part of the response it told me that 'the website "statusq.org" was founded in 2017 by journalist and author Ben Hammersley.' Now, Ben is a splendid fellow, but he's not me. And this blog has been going a lot longer than that!

I corrected the date and the author, and it apologised. (It seems to be doing that a lot recently.) I asked if it learned when people corrected it, and it said yes. I then asked it my original question again, and it got the author right this time.

Later that afternoon, it told me that StatusQ.org was the personal website of Neil Lawrence.

Neil is also a friend, so I forwarded it to him, complaining of identity theft!

A couple of days later, my friend Nicholas asked a similar question and was informed that "based on publicly available information, I can tell you that Status-Q is the personal blog of Simon Wardley".  Where is this publicly-available information, I'd like to know!

The moral of the story is not to believe anything you read on the Net, especially if you suspect some kind of AI system may be involved.  Don't necessarily assume that they're a tool to make us smarter!

When the web breaks, how will we fix it?

So I was thinking about the whole question of attribution, and ownership of content, when I came across this post, which was written by Fred Wilson way back in the distant AI past (i.e. in December). An excerpt:

I attended a dinner this past week with USV portfolio founders and one who works in education told us that ChatGPT has effectively ended the essay as a way for teachers to assess student progress. It will be easier for a student to prompt ChatGPT to write the essay than to write it themselves.

It is not just language models that are making huge advances. AIs can produce incredible audio and video as well. I am certain that an AI can produce a podcast or video of me saying something I did not say and would not say. I haven't seen it yet, but it is inevitable.

So what do we do about this world we are living in where content can be created by machines and ascribed to us?

His solution: we need to sign things cryptographically.

Now this is something that geeks have been able to do for a long time.  You can take a chunk of text (or any data) and produce a signature using a secret key to which only you have access.  If I take the start of this post: the plain text version of everything starting from "It's important" at the top down to "sign things cryptographically." in the above paragraph, I can sign it using my GPG private key. This produces a signature which looks like this:

-----BEGIN PGP SIGNATURE-----
iQEzBAEBCgAdFiEENvIIPyk+1P2DhHuDCTKOi/lGS18FAmRJq1oACgkQCTKOi/lG
S1/E8wgAx1LSRLlge7Ymk9Ru5PsEPMUZdH/XLhczSOzsdSrnkDa4nSAdST5Gf7ju
pWKKDNfeEMuiF1nA1nraV7jHU5twUFITSsP2jJm91BllhbBNjjnlCGa9kZxtpqsO
T80Ow/ZEhoLXt6kDD6+2AAqp7eRhVCS4pnDCqayz0r0GPW13X3DprmMpS1bY4FWu
fJZxokpG99kb6J2Ldw6V90Cynufq3evnWpEbZfCkCl8K3xjEwrKqxHQWhxiWyDEv
opHxpV/Q7Vk5VsHZozBdDXSIqawM/HVGPObLCoHMbhIKTUN9qKMYPlP/d8XTTZfi
1nyWI247coxlmKzyq9/3tJkRaCQ/Aw==
=Wmam
-----END PGP SIGNATURE-----

If you were so inclined, you could easily find my corresponding public key online and use it to verify that signature.  What would that tell you?
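For anyone who'd like to try this themselves, the command-line workflow is short. These are standard GnuPG commands, and they assume you've already generated a keypair; `post.txt` is just a placeholder filename.

```shell
# Produce a detached, ASCII-armoured signature like the one above:
gpg --armor --detach-sign post.txt     # writes post.txt.asc

# Or wrap the signature around the text itself in one file:
gpg --clearsign post.txt

# Anyone holding the corresponding public key can then check it:
gpg --verify post.txt.asc post.txt
```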

Well, it would say that I have definitely asserted something about the above text: in this case, I'm asserting that I wrote it.  It wouldn't tell you whether that was true, but it would tell you two things:

  • It was definitely me making the assertion, because nobody else could produce that signature.  This is partly because nobody else has access to my private key file, and even if they did, using it also requires a password that only I know. So they couldn't  produce that signature without me. It's way, way harder than faking my handwritten signature.

  • I definitely had access to that bit of text when I did so, because the signature is generated from it. This is another big improvement on a handwritten signature: if I sign page 6 of a contract and you then go and attach that signature page to a completely new set of pages 1-5, who is to know? Here, the signature is tied to the thing it's signing.

Now, I could take any bit of text that ChatGPT (or William Shakespeare) had written and sign it too, so this doesn't actually prove that I wrote it.  

But the key thing is that you can't do it the other way around: somebody using an AI system could produce a blog post, or a video or audio file which claims to be created by me, but they could never assert that convincingly using a digital signature without my cooperation.  And I wouldn't sign it. (Unless it was really good, of course.)

Gordon Brander goes into this idea in more detail in a post entitled "LLMs break the internet. Signing everything fixes it."   The gist is that if I always signed all of my blog posts, then you could at least treat with suspicion anything that claimed to be by me but wasn't signed.  And that soon, we'll need to do this in order to separate human-generated content from machine-generated.

A tipping point?

This digital signature technology has been around for decades, and is the behind-the-scenes core of many technologies we all use.  But it's never been widely, consciously adopted by ordinary computer users.  Enthusiasts have been using it to sign their email messages since the last millennium... but I know few people who do that, outside the confines of security research groups and similar organisations.  For most of us, the tools introduce just a little bit too much friction for the perceived benefits.

But digital identities are quickly becoming more widespread: Estonia has long been way ahead of the curve on this, and other countries are following along.  State-wide public key directories may eventually take us to the point where it becomes a matter of course for us automatically to sign everything we create or approve.

At which point, perhaps I'll be able to confound those of my friends and colleagues who, according to ChatGPT, keep wanting to pinch the credit for my blog.

More thoughts on entering decade three

Actually, I realise that in yesterday's post, I was out by a day: the first blog post I still retain was from the 28th Feb 2001, so it's today that Status-Q is 20 years old. But since quite a few people get Status-Q by email overnight, they won't have read it until this morning anyway!

In the beginning, I was using Dave Winer's 'Radio Userland' software (which pretty much defined the early days of blogging, RSS feeds etc). One thing that wasn't common then was for blog posts to have titles. After all, they were just log entries; what else did they need but the date and time? However, they did need to be given a heading when I moved them to WordPress, so if you look back now at some of my posts from 2001, they're all called '[Untitled]'.

Inspired by Jon Crowcroft's comment yesterday, I went back on the Internet Archive and reminded myself of how Status-Q looked in 2001. See, no titles!

I also, while browsing, came across one post from September 2001:

There are some benefits to having an unusual name. If I type 'quentin' into Google, I'm on the first page! I come a little below Quentin Tarantino and Quentin Crisp, though. I know my place.

It's been a long time since I was so visible. It turns out that quite a lot of other people have discovered this World Wide Web thing in the intervening decades, and quite a few of them are named Quentin, including, for example, Quentin Blake and Quentin Willson. So I long ago gave up the occasional vanity search, and my personal non-blog site quentinsf.com has descended way below the threshold of 'next page' clicks that even I am willing to undertake!

I'll tell you what, though...

I just opened a new private window in my browser, one that I hoped wouldn't personalise my results, and typed 'quentin'. Though quentinsf.com was, as expected, nowhere to be seen, Status-Q, in contrast, was in the middle of page 2! That'll do just fine for now.

So there you go, you youngsters: if you want Googlejuice, all you have to do is write miscellaneous rubbish in the same place every couple of days. And do it for about twenty years...

Facebook as a blogging platform, considered.

Euan Semple and I have been having similar thoughts. In a perceptive post he writes:

...As people have moved into places like Facebook and Twitter the energy has moved away from blogging to some extent. Less comments and less people using RSS to track conversations. I, like many bloggers, used to post links to my blog posts on Facebook or Google+. Then I realised that I was expecting people to move from where they were to where I wanted them to be - always a bad idea.

So I started posting the entire content of my blog posts on Facebook and Google+. The process is the same, I get the same benefit of noticing things that blogging gives me, the same trails left of what caught my eye, but the conversations have kicked off. I love the forty or fifty comment long threads that we are having. I love the energy of the conversations. It's like the old days...

And I have to agree. Much as I dislike the tabloid-style, ad-infested nature of Facebook, it does seem to be where the conversations are happening. Yes, some of the smarter people are on Google Plus and App.net, but just not very many of them, and I'm letting my App.net subscription lapse this year. I am even starting to tire a little of Twitter's 140-character limit and, more so, of the difficulty of having real multi-person conversational threads there. And even though it's now easy to reply to posts here on Status-Q using your Facebook ID, where your thoughts will be preserved for viewing by other readers, many more people prefer to comment on Facebook or Twitter when I post notifications there.

Euan and I have both been blogging for about 13 years. In that time, a variety of other platforms have come and gone. I expect that quality blogs like his and John's will outlive Facebook, too. At the very least, I expect that I'll be able to find good past content on them (see my recent post), long after the social network of the day has changed its ownership, its URL structure, its login requirements or its search engine. So I'm not going to be abandoning Status-Q any time soon: it's not worth putting much effort into anything that you post only on one of these other platforms.

But his idea of cross-posting the whole text of one's articles is an interesting one. Facebook is clear, at least at present, that you still own your content, though they have a non-exclusive right to make extensive use of it -- something those of us who occasionally post photos and videos need to consider carefully.

But I also need to consider the fact that I actually saw his post on Google+, even if I then went to his blog to get a nicely-formatted version to which I could link reliably. Mmm.

A quick retrospective

It’s 12 years today since my first blog post -- the first post, at least, on a publicly-readable system that we’d recognise as a blog now. I had registered this ‘statusq.org’ domain a couple of days before, and started tapping out miscellaneous thoughts with no particular theme, and no expectation of an audience.

I was using Dave Winer’s innovative but decidedly quirky ‘Radio Userland’ software, a package which is long since deceased but was very influential in the early days of blogging and RSS feeds. Over the years I’ve moved the content through a couple of different systems but I think -- I hope -- that all the URLs valid in 2001 still work today! Most of my early posts do not have a title. The convention of giving titles to what we thought of as diary entries wasn’t yet well-established.

Things that caught my attention in the first couple of months included:

  • An appreciation that Windows 2000 was really rather a good operating system. Certainly the best Microsoft had produced so far. (It was also -- though I didn’t know it at the time -- the last version I was to use on a regular basis.) Microsoft were pushing an idea called the ‘Tablet PC’, which was marketing-speak for what had previously been called WebPads, and something called .NET, which was marketing-speak for nobody-knew-what!
  • The importance of this new thing called XML, which was giving the world a standard way to store and transmit structured data. I was at a conference where Steve Ballmer described the major revolutions in computing as the PC, the GUI, the Web, and XML. Well, the brackets have become a bit more curly since then, but it was indeed a major change.
  • Astonishment that, with the upcoming launch of Mac OS X, the world’s largest Unix vendor was about to become, of all people, Apple! I’d been playing with the early beta versions. It’s been my operating system of choice ever since.
  • The bizarre level of press coverage when we announced the impending shutdown of the Trojan Room Coffee Pot.
  • A survey saying that less than half of US college students were taking hi-fi systems to college, because they were now listening to music from their PCs instead! It was still nearly a year before an amazing thing called the iPod was to appear, and surprise us all.

Here’s a snapshot of Status-Q captured by the Internet Archive in early May 2001.