Whenever I consider the current state of play in the field of artificial intelligence — particularly the way companies like OpenAI are scrambling to create a kind of digital god, as it were — it reminds me an awful lot of the earful we all got during the Covid pandemic about gain-of-function research.
That term, of course, refers to the controversial process by which the evolution of potential pathogens and viruses is artificially sped up in a lab. The idea is that scientists shouldn’t have to wait around helplessly to see what sorts of nasty surprises nature eventually has in store for us; they can sort of jump to the end and figure out, in the controlled setting of a lab, how to contain this or that threat before it ever materializes.
Needless to say, there’s a lot about that process that’s incredibly stupid and dangerous — starting with the possibility that academic nitwits with tunnel vision just might, in fact, be unable to control what’s inside Pandora’s Box after they’ve cracked it open.
I say all that because I remain convinced that a version of the same thing is going on with AI, in spite of the cultish manner in which OpenAI has always talked about its reason for being: We have to proactively develop artificial superintelligence to our benefit, they insist, lest it just sort of spring forth from the bowels of the web fully formed — like some kind of immaculately conceived digital intelligence that immediately puts us under its heel. Or something.
So insists OpenAI CEO Sam Altman, a Silicon Valley veteran who’s built a career out of failing upward, who believes there is such a thing as a “median human,” and who has yet to be candid about why OpenAI fired and then rehired him. OpenAI, as a reminder, is the same company whose CTO mused publicly a few months ago that some of the jobs killed off by AI maybe shouldn’t have existed in the first place. Wonderful people, this crew.
And it’s not just me saying these sorts of things about Altman & Co. People who’ve worked closely with Sam likewise portray him as essentially a toxic bullshit artist. Nevertheless, each time he opens his mouth and makes some new proclamation — as he’s just done with a blog post titled “The Intelligence Age” — the headlines pile up. The internet goes wild. And very serious people continue to take him very seriously.
“It is possible that we will have superintelligence in a few thousand days (!),” Sam writes in his new essay. “It may take longer, but I’m confident we’ll get there.”
He goes on: “It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine.”
Folks, please don’t make fools of yourselves by buying into any of that garbage. As computer scientist Grady Booch wrote in response to Altman’s new bluster on X/Twitter: “I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, (garner) headlines, and distract from the real work going on in computing.”
Likewise on X/Twitter, Bloomberg columnist Matthew Yglesias writes: “Notable that @sama is no longer even paying lip service to existential risk concerns, the only downsides he’s contemplating are labor market adjustment issues.”
We’re not talking about some kind of pitched battle here between true believers who think the light of consciousness will eventually be found in computer code and everyone else. It’s really a disagreement between people who think Sam’s artificial intelligence woo-woo heralds a positive future for humanity — and people like me, who see what they’re doing as the Silicon Valley version of HR laying off Bob in Accounting (“Look at it this way, Bob, now you’ve got time to take that vacation you and Shirley have always been talking about”).
We’ll be so much happier, OpenAI promises! We’ll be so much happier when their digital assistants 10x our free time, our happiness, our prosperity. The fact of the matter, though, is that OpenAI and its fellow AI cultists aren’t trying to save us from the hypothetical future effects of Big Tech; if they get their way, our lives will be wholly and completely dependent on their technology. Sam tells you so, straight up, in his new essay (“People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before”).
It’s as if you and I have been forcibly shoved onto a crowded train to some kind of Dystopian Elsewhere. Our destination is still under construction, but the po-faced conductor with vocal fry promises we’re totally gonna love it.