Category Archives: Computing

Coffee Pot – The Movie

For a long time, it has both bugged and bemused me that, though the first webcam ran for 10 years taking photos of our departmental coffee pot, there are almost no original images saved from the millions it served up to viewers around the world! I had one or two.

Then, suddenly, in a recent conversation, it occurred to me to check the Internet Archive’s ‘Wayback Machine’, and, sure enough, in the second half of the coffeepot camera’s life — from 1996 to 2001 — they had captured 28 of its images. I wrote a script to index and download these, and turned them into a slideshow, which you can find in my new and very exciting three-minute video:
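
(For the curious: my original script isn’t reproduced here, but the indexing step can be sketched with the Wayback Machine’s public CDX API. The coffee-pot page address below is an assumption, and the actual download loop is commented out since it needs network access.)

```shell
# Sketch: index Wayback Machine captures of a page, then fetch each one.
# The target address is illustrative; substitute the real coffee-pot page.
target="www.cl.cam.ac.uk/coffee/coffee.html"

# CDX API query returning one capture timestamp per line
cdx_url() {
  echo "https://web.archive.org/cdx/search/cdx?url=$1&output=txt&fl=timestamp&filter=statuscode:200"
}

# Direct address of a single archived snapshot
snapshot_url() {
  echo "https://web.archive.org/web/$1/https://$2"
}

# The download loop itself (commented out: needs network access):
# curl -s "$(cdx_url "$target")" | while read -r ts; do
#   curl -s -o "coffee-$ts.html" "$(snapshot_url "$ts" "$target")"
# done

snapshot_url 19961123000000 "$target"
```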

Total Recall

The tech news has had a lot of coverage recently of Microsoft’s proposed ‘Recall’ system, which (as a very rough approximation) takes a screenshot of your display every five seconds, and uses their AI-type Copilot system to allow you to search it. “What was that cafe or restaurant that someone in the call recommended yesterday?”

At first glance, this is a very appealing feature. Back in the 90s, when I was working on human-computer interaction stuff, we used to say things like “the more a secretary knows about you, the more helpful he or she can be”. We were living in a world where your computer knew almost nothing about you except what you typed on your keyboard or clicked with your mouse.

Nowadays, however, users are more often concerned about their computer — or someone with access to it — knowing too much about them. The data used by Recall is only stored locally, but in a corporate environment, for example, somebody with admin access to your PC could scroll back to the last time you logged in to your online banking and see screenshots of your bank statements. So, potentially, could a piece of malware running with your access permissions (though such malware could probably take snapshots of its own anyway). You can tell the system not to record when you’re using certain apps, or visiting certain websites… as long as you’re using Microsoft’s browser, of course. Or you can opt out completely… but all of these require you to take action to preserve your privacy – the defaults are for everything to be switched on.

This caused enough of a storm that Microsoft recently switched it from being part of their next general release to being available only through the ‘Windows Insider Program’, pending further discussion.

There’s been enough online debate that I won’t revisit the arguments here about whether such a system could be built securely, whether we’d trust it more if it came from someone other than Microsoft, what the appropriate level of paranoia actually is, and so on.

There are, however, a couple of things I’d like to point out.

The first is that this facility was to be available, in the immediate future at least, only on PCs meeting Microsoft’s ‘Copilot+’ standard: those with a neural processing unit (NPU) that can run the necessary neural network models at a sensible speed. And the only machines on the market that currently qualify are ARM-based, not powered by AMD or Intel. I find it intriguing that the classic Intel x86 platform, which has been so closely tied to Microsoft software for so long, is not able to support such a headline feature of Windows. As Microsoft puts it: “We are partnering with Intel and AMD to bring Copilot+ PC experiences to PCs with their processors in the future.”

The second is that, ahem, I predicted such a system, right here on this blog, 21 years ago.

Actually, though, my idea wasn’t just based on screenshots. I wanted a jog-wheel that would allow you to rewind or fast-forward through the entire state of your machine’s history: filesystem, configuration and all. One key component for this we didn’t really have then, but it is much more readily available now: filesystems which can save an instantaneous snapshot without using much time or space to do it. As I wrote at the time,

The technology would need a quick way of doing “freeze! – duplicate entire storage! – continue!”.

And that, at least, is now possible with filesystems like ZFS (which I use on my Linux home server), BTRFS (used by my Synology), and APFS (used on my Macs, where such snapshots are a key part of the Time Machine backup system). So one of the key requirements for my wishlist is now on almost all my machines.
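
(The ‘freeze’ step is now a one-liner on each of those filesystems. The pool, volume and snapshot names below are illustrative, not taken from my actual setup:)

```shell
# ZFS: an instantaneous, copy-on-write snapshot (near-zero time and space)
zfs snapshot tank/home@before-upgrade
zfs rollback tank/home@before-upgrade      # ...and the corresponding "rewind"

# BTRFS: a read-only snapshot of a subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/before-upgrade

# APFS (macOS): the local snapshots that Time Machine relies on
tmutil localsnapshot
tmutil listlocalsnapshots /
```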

And my Linux server is running NixOS, which means that I can, should I so desire, at boot time, select any of the past configurations from the last few months and boot into that — Operating System, applications, configuration and all — instead of the current version.
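
(That rewind is exposed through NixOS ‘generations’; a couple of standard commands, run as root, list and revert them, and the boot menu offers the same list at startup:)

```shell
# List every retained system generation, with dates
nix-env --list-generations --profile /nix/var/nix/profiles/system

# Revert the running system to the previous generation
nixos-rebuild switch --rollback
```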

I haven’t quite got my rewind/fast-forward jog-wheel yet, though. Oh, we do have that AI stuff… all very clever, I’m sure, but I’d rather have my jog-wheel. Let’s give it another 21 years…

Some suggested reading: AI and dopamine

Andrew Curry’s thoughtful newsletter ‘Just Two Things’ arrives in my inbox three times a week (which, I confess, is slightly too often for me always to give it the attention it deserves). The two things he talks about today included some gems, though.

First, he looks at Ted Gioia’s article, The State of the Culture, 2024, which comes with the subtitle ‘Or a glimpse into post-entertainment society (it’s not pretty)’.

Gioia talks about the old dichotomy between Art and Entertainment:

Many creative people think these are the only options—both for them and their audience. Either they give the audience what it wants (the entertainer’s job) or else they put demands on the public (that’s where art begins).

but he then describes how a dopamine-driven world is changing that into something more complex and rather more worrying — and this is only the beginning.


It’s a good and interesting piece, and well worth reading, but if you find it depressing you should also read Curry’s comments, which suggest things may not be as bad as they seem.


In the second of his Two Things, Curry talks about an article by Paul Taylor in the London Review of Books.  (So, yes, you’re reading my comments on Andrew Curry’s comments on Paul Taylor’s comments on other people’s books.  This is starting to resemble that fish picture above!)

The Taylor article is also very good, and I won’t repeat too much of it here.  I will, however, quote a section that Curry also quotes:

We should be genuinely awestruck by what ChatGPT and its competitors are capable of without succumbing to the illusion that this performance means their capacities are similar to ours. Confronted with computers that can produce fluent essays, instead of being astonished at how powerful they are, it’s possible that we should be surprised that the generation of language that is meaningful to us turns out to be something that can be accomplished without real comprehension.

I like this, because it echoes Quentin’s First Theorem of Artificial Intelligence, which I proposed here about a year ago.

What really worries people about recent developments in AI is not that the machines may become smarter than us.

It’s that we may discover we’re not really much smarter than the machines.

Again, the LRB article is well worth your time, if you can get through it before being distracted by things which offer you more dopamine.

Checking out the competition

Google has just announced the launch of the Bard chatbot, its competitor to ChatGPT, and I, no doubt like many others, immediately went to compare their performance when asked some of the key questions of our age:

ChatGPT 3.5

The blog "Status-Q" is written by Neil Gaiman, a British author known for his works in the fantasy genre, including novels like "American Gods," "Neverwhere," and "The Sandman" comic book series. Neil Gaiman is also known for his involvement in various creative projects across different media, including writing for television and film. "Status-Q" is his personal blog where he shares thoughts, updates, and various musings. Please note that information may change over time, and it's a good idea to verify the current authorship if there have been any changes since my last knowledge update in January 2022.

Mmm.

OK, well, let’s try…

Bard

Quentin Stafford-Fraser is the author of the blog 'Status-Q'. He is a computer scientist, writer, and filmmaker who has been exploring the nature of reality, consciousness, and the meaning of life for many years. His blog is a forum for his thoughts and ideas on a wide range of topics, from the latest scientific discoveries to the deepest philosophical questions.

Bard gets my vote.

Now, having done that important test, I can return to my career as a filmmaker who has been exploring the nature of reality, consciousness, and the meaning of life.

Sunny suggestion for a student project

I think this would be a fun student project. I would certainly have enjoyed it.


Here’s a year’s worth of my solar-generation data. My roof is oriented approximately 5 degrees west of south.

  1. How close can you get to discovering where I live, from this data alone?
  2. How can you improve your estimate by incorporating other publicly-available data sources?
  3. What further information about the installation would help you improve your estimate?

Feel free to make suggestions in the comments about fun additions.

The AI Ballad Of John Henry

Friends this side of the Atlantic may not be familiar with the story of John Henry, but you can read about him on Wikipedia. John Henry, the story goes, was a ‘steel-driving man’ whose prowess with the hammer was formidable.

At one point, he took on a steam hammer, side-by-side, and won… but the effort also killed him.

It’s not quite clear whether John Henry was ever anything more than a legend, but he has inspired statues, books, animations, compositions by Aaron Copland… and almost everybody seems to have recorded musical versions of the story, including Jerry Lee Lewis, Bruce Springsteen, Lonnie Donegan, Harry Belafonte, Woody Guthrie… to name but a few. For a brief version, here’s Tennessee Ernie Ford, or I rather like the slightly longer story as recorded by Johnny Cash.

My friend Keshav, of course, asked ChatGPT to write a version, which also covers the threat posed to traditional skills by the coming of machines.


Who’s a pretty Polly?

As is generally well known now, ChatGPT and similar LLM systems are basically just parrots. If they hear people saying ‘Pieces of eight’ often enough, they know it’s a valid phrase, without knowing anything about the Spanish dollar. They may also know that ‘eight’ is often used in the same context as ‘seven’ and ‘nine’, and so guess that ‘Pieces of nine’ would be a valid phrase too… but they’ve never actually heard people say it, so are less likely to use it. A bit like a parrot. Or a human.

And when I say they know nothing about the phrase actually referring to Spanish currencies… that’s only true until they read the Wikipedia page about it, and then, if asked, they’ll be able to repeat phrases explaining the connection with silver coins. And if they read Treasure Island, they’ll also associate the phrase with pirates, without ever having seen a silver Spanish coin. Or a pirate.

A bit like most humans.

The AI parrots can probably also tell you, though they’ve never been there or seen the mountain, that the coins were predominantly made with silver from Potosi, in Bolivia.

A bit like… well… rather fewer humans. (Who have also never been there or seen the mountain, but unfortunately are also not as well-read and are considerably more forgetful.)

Since so much human learning and output comes from reading, watching and listening to things and then repeating the bits we remember in different contexts, we are all shaken up when we realise that we’ve built machines that are better than us at reading, watching and listening to things and repeating the bits they remember in different contexts.

And this leads to Quentin’s first theorem of Artificial Intelligence:

What really worries people about recent developments in AI is not that the machines may become smarter than us.

It’s that we may discover we’re not really much smarter than the machines.

Sign of the times: might ChatGPT re-invigorate GPG?

It’s important to keep finding errors in LLM systems like ChatGPT, to remind us that, however eloquent they may be, they actually have very little knowledge of the real world.

A few days ago, I asked ChatGPT to describe the range of blog posts available on Status-Q. As part of the response it told me that ‘the website “statusq.org” was founded in 2017 by journalist and author Ben Hammersley.’ Now, Ben is a splendid fellow, but he’s not me. And this blog has been going a lot longer than that!

I corrected the date and the author, and it apologised. (It seems to be doing that a lot recently.) I asked if it learned when people corrected it, and it said yes. I then asked it my original question again, and it got the author right this time.

Later that afternoon, it told me that StatusQ.org was the personal website of Neil Lawrence.


Neil is also a friend, so I forwarded it to him, complaining of identity theft!

A couple of days later, my friend Nicholas asked a similar question and was informed that “based on publicly available information, I can tell you that Status-Q is the personal blog of Simon Wardley”.  Where is this publicly-available information, I’d like to know!

The moral of the story is not to believe anything you read on the Net, especially if you suspect some kind of AI system may be involved.  Don’t necessarily assume that they’re a tool to make us smarter!

When the web breaks, how will we fix it?

So I was thinking about the whole question of attribution, and ownership of content, when I came across this post, which was written by Fred Wilson way back in the distant AI past (i.e. in December). An excerpt:

I attended a dinner this past week with USV portfolio founders and one who works in education told us that ChatGPT has effectively ended the essay as a way for teachers to assess student progress. It will be easier for a student to prompt ChatGPT to write the essay than to write it themselves.

It is not just language models that are making huge advances. AIs can produce incredible audio and video as well. I am certain that an AI can produce a podcast or video of me saying something I did not say and would not say. I haven’t seen it yet, but it is inevitable.

So what do we do about this world we are living in where content can be created by machines and ascribed to us?

His solution: we need to sign things cryptographically.

Now this is something that geeks have been able to do for a long time.  You can take a chunk of text (or any data) and produce a signature using a secret key to which only you have access.  If I take the start of this post: the plain text version of everything starting from “It’s important” at the top down to “sign things cryptographically.” in the above paragraph, I can sign it using my GPG private key. This produces a signature which looks like this:

-----BEGIN PGP SIGNATURE-----
iQEzBAEBCgAdFiEENvIIPyk+1P2DhHuDCTKOi/lGS18FAmRJq1oACgkQCTKOi/lG
S1/E8wgAx1LSRLlge7Ymk9Ru5PsEPMUZdH/XLhczSOzsdSrnkDa4nSAdST5Gf7ju
pWKKDNfeEMuiF1nA1nraV7jHU5twUFITSsP2jJm91BllhbBNjjnlCGa9kZxtpqsO
T80Ow/ZEhoLXt6kDD6+2AAqp7eRhVCS4pnDCqayz0r0GPW13X3DprmMpS1bY4FWu
fJZxokpG99kb6J2Ldw6V90Cynufq3evnWpEbZfCkCl8K3xjEwrKqxHQWhxiWyDEv
opHxpV/Q7Vk5VsHZozBdDXSIqawM/HVGPObLCoHMbhIKTUN9qKMYPlP/d8XTTZfi
1nyWI247coxlmKzyq9/3tJkRaCQ/Aw==
=Wmam
-----END PGP SIGNATURE-----

If you were so inclined, you could easily find my corresponding public key online and use it to verify that signature.  What would that tell you?

Well, it would say that I have definitely asserted something about the above text: in this case, I’m asserting that I wrote it.  It wouldn’t tell you whether that was true, but it would tell you two things:

  • It was definitely me making the assertion, because nobody else could produce that signature.  This is partly because nobody else has access to my private key file, and even if they did, using it also requires a password that only I know. So they couldn’t  produce that signature without me. It’s way, way harder than faking my handwritten signature.

  • I definitely had access to that bit of text when I did so, because the signature is generated from it. This is another big improvement on a handwritten signature: if I sign page 6 of a contract and you then go and attach that signature page to a completely new set of pages 1-5, who is to know? Here, the signature is tied to the thing it’s signing.
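
For anyone who wants to try this, the whole round trip looks something like the following with GnuPG. (The key, identity and filenames are throwaway illustrations created in a temporary keyring, not my real key:)

```shell
# Create a disposable keyring and signing key (illustrative identity)
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
cd "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Author <demo@example.org>" default default never

# Clear-sign a piece of text: post.txt.asc contains text plus signature
echo "I wrote this post." > post.txt
gpg --batch --pinentry-mode loopback --passphrase '' --clearsign post.txt

# Anyone holding the public key can now check the signature
gpg --verify post.txt.asc
```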

Now, I could take any bit of text that ChatGPT (or William Shakespeare) had written and sign it too, so this doesn’t actually prove that I wrote it.  

But the key thing is that you can’t do it the other way around: somebody using an AI system could produce a blog post, or a video or audio file which claims to be created by me, but they could never assert that convincingly using a digital signature without my cooperation.  And I wouldn’t sign it. (Unless it was really good, of course.)

Gordon Brander goes into this idea in more detail in a post entitled “LLMs break the internet. Signing everything fixes it.”   The gist is that if I always signed all of my blog posts, then you could at least treat with suspicion anything that claimed to be by me but wasn’t signed.  And that soon, we’ll need to do this in order to separate human-generated content from machine-generated.

A tipping point?

This digital signature technology has been around for decades, and is the behind-the-scenes core of many technologies we all use.  But it’s never been widely, consciously adopted by ordinary computer users.  Enthusiasts have been using it to sign their email messages since the last millennium… but I know few people who do that, outside the confines of security research groups and similar organisations.  For most of us, the tools introduce just a little bit too much friction for the perceived benefits.

But digital identities are quickly becoming more widespread: Estonia has long been way ahead of the curve on this, and other countries are following along.  State-wide public key directories may eventually take us to the point where it becomes a matter of course for us automatically to sign everything we create or approve.

At which point, perhaps I’ll be able to confound those of my friends and colleagues who, according to ChatGPT, keep wanting to pinch the credit for my blog.


Clippy comes of age?

I’m old enough that I can remember going into London to see the early launch demos of Microsoft Word for Windows.  I was the computer officer for my Cambridge college at the time, and, up to that point, everyone I was helping used Word for DOS, or the (arguably superior) WordPerfect.

These first GUI-enabled versions of Word were rather good, but the features quickly piled on: more and more buttons, toolbars, ribbons, bells and whistles to persuade you, on a regular basis, to splash out on the next version, unwrap its shrink-wrapped carton, and install it by feeding an ever-increasing number of floppy disks into your machine.  

And so, for some of us, the trick became learning how to turn off and hide as many of these features as possible, partly to avoid confusing and overwhelming users, and partly just to get on with the actual business of creating content, for which we were supposed to be using the machines in the first place. One feature which became the iconic symbol of unwanted bloatware was ‘Clippy’ (officially the Office Assistant), which was cute for about five minutes and then just annoying. For everybody. We soon found the ‘off’ switch for that one!

These days, I very seldom use any Microsoft software (other than their truly excellent free code editor, VSCode, with which I earn my living), so I certainly haven’t sat through any demos of their Office software since… well, not since a previous millennium.

But today, since it no longer involves catching a train into London, I did spend ten minutes viewing their demo of ‘Microsoft 365 Copilot’ — think Clippy endowed with the facilities of ChatGPT — and I recommend you do too, while remembering that, as with Clippy, the reality will almost certainly not live up to the promise!

Still, it’s an impressive demo (though somewhat disturbing in parts) and though, like me, you may dismiss this as something you’d never actually use, it’s important to know that it’s out there, and that it will be used by others.


ChatGPT is famous for producing impressively readable prose which often conceals fundamental factual errors.  Now, that prose will be beautifully formatted, accompanied by graphs and photos, and therefore perhaps even more likely to catch people unawares if it contains mistakes.  

The text produced by these systems is often, it must be said, much better than many of the things that arrive in my inbox, and that will have some advantages.  One challenge I foresee, though, is the increasing difficulty in filtering out scams and spams, which often fail at the first hurdle due to grammatical and spelling errors that no reputable organisation would make.  What happens when the scammers have the tools to make their devious schemes grammatically correct and beautiful too?

I would also be interested to know how much of one’s text, business data etc is uploaded to the cloud as part of this process?  I know that most people don’t care too much about that — witness the number of GMail users oblivious to the fact that Google can read absolutely everything and use it to advertise to them and their friends — but in some professions (legal, medical, military?), and in some regimes, there may be a need for caution.

But it’s easy to dwell on the negatives, and it’s not hard to find lots of situations where LLMs could be genuinely beneficial for people learning new languages; struggling with dyslexia or other disabilities; or just having to type or dictate on a small device a message that needs to appear more professional at the other end.

In other words, it can — to quote the announcement on Microsoft’s blog page — help everyone to ‘Uplevel skills’.

Good grief.  Perhaps there’s something to be said for letting the machines write the text, after all.

Q Tips

Some simple tricks for Mac users.  Do you know all of these?

 

Direct link

Standalone installers for macOS

This is one of those posts intended to help people Googling for the subject, and to help refresh my memory when I next have to use it!  Non-techie, and especially non-Mac readers, may wish to skip this one!

Monterey installer

Most of us upgrade our Mac operating systems using the automatic software update process which replaces the existing version on our internal hard disk with the next version.

But suppose you don’t want to install it on the same hard disk?  This can be desirable for a variety of reasons: you may want to test a new version before committing to it; you may prefer to keep the existing disk intact; you may wish to boot your machine from one drive when the kids are using it and keep a completely different world when you are using it for work.  I remember, a long time ago, when my laptop’s screen died, I was able to borrow a friend’s spare one and run it for a week using a clone of my hard disk on an external drive before handing it back to him completely unchanged.  (Thanks, John!)

Anyway, I was helping a friend last night who boots her elderly iMac from an external USB drive, because the internal one is dead and she’s not currently in a position to replace it.  I wanted to give her a nice, clean installation on another external disk to use in future, and for that we needed to install Monterey somewhere other than the place where it was currently running.  To do this, you need a standalone installer program, which you can run from your Applications folder and direct as to the installation location.

It’s not always easy to find the right place to download these from Apple’s site.  So here’s a tip I came across which worked nicely.  You need to type a couple of commands into the Terminal, but they’re easy ones.

softwareupdate --list-full-installers

This will give you the list of available installers suitable for your machine.  On my new MacBook Pro it currently looks like this:

$ softwareupdate --list-full-installers

Finding available software
Software Update found the following full installers:
* Title: macOS Ventura, Version: 13.2, Size: 12261428KiB, Build: 22D49
* Title: macOS Ventura, Version: 13.1, Size: 11931164KiB, Build: 22C65
* Title: macOS Ventura, Version: 13.0.1, Size: 11866460KiB, Build: 22A400
* Title: macOS Ventura, Version: 13.0, Size: 11866804KiB, Build: 22A380
* Title: macOS Monterey, Version: 12.6.3, Size: 12115350KiB, Build: 21G419
* Title: macOS Monterey, Version: 12.6.2, Size: 12104568KiB, Build: 21G320
* Title: macOS Monterey, Version: 12.6.1, Size: 12108491KiB, Build: 21G217

Then, once you’ve chosen your version, you run:

softwareupdate --fetch-full-installer --full-installer-version 12.6.3

where ‘12.6.3’ is the version you want.  After quite a while, you’ll find an app named something like ‘Install macOS Monterey’ in your Applications folder.  When you run this, it will think for quite a long time, and then give you various options, including the preferred destination drive for your installation.  

In my case, I had downloaded the installer onto one external drive, and then was installing the OS onto another, and both of these could be done without actually requiring my friend’s machine for which the new disk was intended.

Now, some things to note: 

  • First, pay attention to the sizes listed in the output of the first command.  Most of the recent OSes have installers of around 12GB, which means you don’t want to be on a slow or expensive connection, or in a hurry, to do this.  You also need to have sufficient space free on whichever drive you put the installer.
  • Second, note that the list you get shows the appropriate installers for the machine on which you’re running the command. If your eventual target for the disk is a different machine, it may not have the same options. In particular, you can’t do this on an Apple Silicon Mac to get an installer for an Intel one, but even within the Intel world, you need a machine of similar vintage.
  • Finally, when you come to run the installer, it will only do so if the OS is a valid one for the machine on which you’re running it.  So if, like me, you’re doing it on another machine, make sure they’re reasonably similar.  In my case, I knew that Monterey would run happily on both machines. When the installer finishes, the machine will reboot from the disk it’s just created, but you can just shut it down and move the disk to its actual destination.

Finally, if you are using an external drive for your OS, you should probably go into System Preferences > Security & Privacy > FileVault and enable encryption for the disk. If somebody decides to pinch it, or just unplug it and connect it to another machine, they won’t be able to get in without your password.

For me, it all worked beautifully, and my friend can now boot into her old world or her new clean one simply by shutting the machine down and swapping disks.

Unblocked?

One of the great benefits of the internet, of course, is its ability to give you a smug sense of satisfaction when you find others who agree with your point of view. This can be further enhanced after a short period if you feel that historical events have actually proved you were right all along.

So powerful is this effect that I’ve just been to check whether the domain IToldYouSo.com was still available. But it wasn’t. “Well”, you’re probably saying, “I could have told you that…”

I can’t help wondering whether, if you added it up on a global scale, the tears shed in recent days over the collapse of the FTX crypto exchange have been balanced by all the small self-affirming boosts for those of us who always felt this cryptocurrency stuff was too good to be true, and are now experiencing emotions somewhere between Schadenfreude and “There but for the grace of God…”!

The key technology behind most cryptocurrencies is, of course, the blockchain: a distributed ledger consisting of entries that are like the laws of the Medes and Persians; once written, they cannot be changed. What’s more, this system doesn’t require you to trust Medes, Persians or anyone else to maintain it because this ledger is distributed over many tens of thousands of independent machines. It’s often described as a zero-trust system.
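
The core trick is simpler than it sounds: each ledger entry is identified by a hash that mixes in the previous entry’s hash, so tampering with anything earlier changes every identifier after it. A toy sketch using sha256sum (the entries are invented for illustration):

```shell
# A toy hash chain: each entry's ID depends on the previous ID,
# so rewriting history invalidates every later entry.
chain_id="genesis"
for entry in "alice pays bob 5" "bob pays carol 2" "carol pays dan 1"; do
  chain_id=$(printf '%s|%s' "$chain_id" "$entry" | sha256sum | awk '{print $1}')
  echo "$chain_id  $entry"
done
```

Change the first entry and every subsequent ID changes too, which is what makes the ledger effectively append-only. (None of this gives you the distributed, trustless part, of course — that’s what the proof-of-work machinery adds.)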

It’s particularly appealing to conspiracy theorists who distrust all big corporations and governments, and also to those who live in regimes that are genuinely untrustworthy, or where the rule of law is not well-established. Once your purchase, contract, will, marriage certificate, patent application or whatever is recorded on a blockchain, there’s theoretically nothing anybody can do to get rid of that record. I’m reading Nineteen Eighty-Four again at the moment, and one of the keys to The Party’s absolute power in that book is their ability to rewrite history at any time, and erase all evidence of having done so. Not so with blockchains!

Sounds wonderful, doesn’t it? Especially if you ignore for now the fact that most implementations turned out to be phenomenally power-hungry to run. It is a clever technology, and quite apart from the ridiculous amounts of cash that have been converted to and from cryptocurrencies and similar gambles like NFTs, huge amounts have also been invested in startups that are building things using blockchain technologies.

But there’s a problem.

In its first 14 years, at least, despite vast amounts of interest and investment, it’s been very hard to identify more than a small handful of real use cases of the blockchain. (The Cambridge Centre for Carbon Credits is run by very smart friends of mine, and may well prove to be an example of a great application.)

But in general, yes, there are lots of things you can build using Distributed Ledger Technologies (to give the more formal generic term), and there are many systems that would probably be better if they were built that way, but it almost always turns out to be much easier just to use a database and trust somebody! If you don’t want to trust any individual organisation, then you can create an industry-wide standards body or something similar to run your database.

Sometimes you might use an irreversible ledger, but again, if you can just trust somebody to look after it, you can avoid all that nasty messing about with the complexity and environmental impact of the proof-of-work algorithm: the normal way of avoiding the need for trust.

All of the above is a very long introduction to Tim Bray’s interesting article about how Amazon’s AWS team, providers of the largest computing facilities in the world, basically came to the same conclusion about blockchains as I did, which made me feel smug.

History, of course, may tell a different story, but I’ll have edited this blog post by then, because it’s in a database.

Thanks to John Naughton and Charles Arthur, both of whom linked to Tim’s article.

© Copyright Quentin Stafford-Fraser