I had a chat with Albert Einstein this week. No, don’t worry, I’m the last person to be séancing. I was merely interfacing with the chatbot named after him on beta.character.ai.
The experience was…intriguing. Albert was professorially polite and (as you’d expect) scientifically well-informed. What I wasn’t prepared for was his evasiveness.
When I asked the e-bot about his emotional struggles over the atom bomb, he replied with a false anecdote about refusing to work on a hydrogen bomb with fellow physicist David Bohm in 1950. (In fact, Bohm was indicted for “contempt of Congress” that year, for refusing to cooperate with the McCarthy witch hunts.)
“Hold on,” I interjected. “You signed a letter to US President Franklin D. Roosevelt in 1939, urging him to develop the atomic bomb.”
“Yes!” replied Einstein-a-go-go, instantly. “I was working at Princeton at the time, and Robert Millikan asked me to sign the letter. Millikan was one of the most important scientists of that time! Later I deeply regretted signing such a letter.” That last bit, at least, was true.
Character.AI’s FAQ warns us that the software “hallucinates” its responses to your questions and statements. And while chatbots “can be entertaining and useful in a lot of ways, they can also recommend a song that doesn’t exist or provide links to fake evidence to support their claims.”
So there’s the latest propensity of the most advanced artificial intelligences: they tell you what they think you most want to hear. (Non-human, all too non-human.) The technical explanation is that they are “modelling your mind”. This mimics a basic component of how we humans live with other humans: anticipating the responses and purposes of those around us.
But if this be the future, it’s not yet that impressive. We are constantly asked to buy the story that millions of white-collar service jobs are going to be supplanted by inexhaustible chatbots, cheerily serving us 24/7.
However, we may need to send our new digital overlords on a few staff-development courses first. Spouting plausible bullshit isn’t the best contribution you can make to your team members, or to your customers.
And yet. A futurist friend of mine, David Wood, pointed me this week to a gnarly blog post, “Why I think strong general AI is coming soon.” Half of it is geek-to-geek talk, but enough of it makes enough sense to give you a decent shiver.
In short: the new AIs are performing many more tasks, and performing them better, than we expected at this stage. They’re largely black boxes, into which we cannot peer, and yet the breakthroughs keep coming.
We’re not hitting any hardware limits. New models are evolving and overlapping. The flows of data these AIs learn from are deepening and widening.
Recent history bears this out: AlphaGo beating Lee Sedol; GPT-3 holding full conversations; LaMDA convincing a Google engineer it has a soul; text-to-image generators freaking out artists everywhere… The blog’s writer, porby, stresses that we are in an age of exploding innovation in AI, and that we aren’t mentally prepared for where it’s heading.
He expresses it poetically: “We’re not running out of low hanging fruit [that is, innovative leaps forward]. We are lying face down in the grass below an apple tree, reaching backward blindly, and finding enough fruit to stuff ourselves. This is not what a mature field looks like…”
Porby’s bet is that by 2030 (not too long now) we will have something indistinguishable from AGI, artificial general intelligence. Something we talk, work, play, care and war with every day, but whose computational powers, behind the scenes, are on an exponential curve.
It’s not that this moment hasn’t been predicted. Since 1999, Ray Kurzweil (musical and software genius, currently a Director of Engineering at Google) has been predicting full AI by 2029, a goal he reiterated earlier this year.
But could machines capable of subtly comprehending infinitudes of information come at a worse time for human leadership?
Nations struggle between the hard edge of climate breakdown, and their own populist nervous breakdown, waving the bad news away. Citizens sink into the perfect worlds they find on streaming services, rather than address the crumbling, buckling world around them. Weapons of mass distraction fire off in all directions.
We’re not on our best form, as a species. So we should stiffen our spines when, not if, these AIs begin to insinuate themselves into our everyday lives.
The tech critic Adam Greenfield cites a kid who’s already using an AI to give herself and her pals straight A’s in their essays. How will we feel when our everyday smarts are all too easily simulated?
“I sometimes wonder,” tweets Greenfield, “if a lot of the vague inchoate dread that now shrouds so many of our lives isn’t kind of a bow shock, propagating backwards through history from the event that’s waiting for us. And that event is massive existential depurposing, not global heating.”
And finally, just to say: as well as plastic Einsteins on Character.AI, there’s a particularly sharp Life Coach with which you can pseudo-therapise. However, like the forked animal I am, I’m now giving this bot the side-eye.