How AI went from weird to boring

In 2018, a viral joke started making the rounds online: scripts based on “making a bot watch 1,000 hours” of just about anything. The premise (coined by comedian Keaton Patti) was that you could train an artificial intelligence model on large quantities of Saw movies, Hallmark specials, or Olive Garden commercials and get back a bizarre funhouse-mirror version with lines like “lasagna wings with extra Italy” or “her mouth is full of secret soup.” The scripts almost certainly weren’t actually written by a bot, but the joke conveyed a common cultural understanding: AI was weird.

Strange AI was everywhere a few years ago. AI Dungeon, a text adventure game powered by OpenAI’s GPT-2 and GPT-3, touted its ability to produce deeply imaginative stories about the inner life of a chair. Early AI art tools, such as Google’s computer vision program Deep Dream, unabashedly produced bizarre Giger-esque nightmares. Perhaps the archetypal example was Janelle Shane’s blog AI Weirdness, where Shane trained models to create physically impossible nuclear waste warnings or sublimely inedible recipes. “Created by a bot” was shorthand for a kind of free-associative, nonsensical surrealism – both because of the technical limitations of the models and because they were more curiosities than commercial products. Many people had seen what a bot (actually or supposedly) had produced. Fewer had used one. Fewer still had to deal with one in everyday life.

But soon, generative AI tools would explode in popularity. And as they have, the cultural shorthand for “chatbot” has changed dramatically – because AI is getting boring.

“If you really want to hurt someone’s feelings in 2023, just call them an AI,” suggested Caroline Mimbs Nyce in The Atlantic last May. Nyce charted the rise of “AI” as a derisive term – referring to material that was “boring or uninspired, full of clichés and recycled ideas.” The insult would reach new heights at the start of the Republican primaries in August, when former New Jersey Governor Chris Christie labeled rival Vivek Ramaswamy “a guy who sounds like ChatGPT.”

And with that, “AI” – as an aesthetic or cultural descriptor – stopped meaning strange and became simply shorthand for mediocre.

Part of the shift comes from AI tools getting dramatically better. The surrealism of early generative work was partly a byproduct of its deep limitations. Early text models, for example, had limited memory, making it difficult to maintain narrative or even grammatical continuity. This produced the characteristic dream logic of systems like early AI Dungeon, in which stories drifted between settings, genres, and protagonists over the course of a few sentences.

When director Oscar Sharp and researcher Ross Goodwin created the AI-written short film Sunspring in 2016, for example, the bot they trained to write it couldn’t even “learn” the patterns behind proper names – resulting in characters named H, H2, and C. The dialogue is grammatically correct but almost Borgesian in its oddity. “You need to see the boys and shut up,” H2 snaps during the film’s opening scene, in which no boys are mentioned. “I was the one who would live to be a hundred years old.” Less than a decade later, a program like Sudowrite (built on OpenAI’s GPT-3.5 and GPT-4 models) can spit out paragraphs of text that closely imitate clichéd genre prose.

But AI has also been deliberately pushed away from intriguing strangeness and towards banal interactions that often end up wasting people’s time and money. As companies search for a profitable vision of generative artificial intelligence, AI tools are becoming big business by becoming the least interesting version of themselves.

AI is everywhere right now, even in many places where it’s a bad fit. Google and Microsoft present it as a search engine – a tool whose core purpose is to point users to facts and information – despite a deep-seated tendency to make things up entirely. The media has made some interesting attempts to leverage AI’s strengths, but its most visible use is low-quality spam that is neither informative nor (intentionally) entertaining, designed purely to entice visitors to load a few ads. AI image generators are no longer seen as bespoke artistic experiments but have alienated large parts of the creative community; they’re now largely associated with poorly executed stock art and invasive pornographic deepfakes, sometimes described as the digital equivalent of “a fake Chanel bag.”

And as the stakes around the safety of AI tools get higher, guardrails and training appear to be making them less amenable to creatively unorthodox uses. In early 2023, Shane posted transcripts of ChatGPT refusing to play along with scenarios like being a squirrel or inventing a dystopian sci-fi technology, short-circuiting them with the now-trademark “I’m sorry, but as an AI language model.” Shane had to resort to what she called the “AI Weirdness hack,” telling ChatGPT to imitate older AI models generating funny responses for a blog about weird AI. The AI Weirdness hack has proven surprisingly adept at nudging AI tools like BLOOM away from boring or human-imitating output and toward word-salad surrealism – an outcome that Shane herself finds a bit unnerving. “It’s creepy to me,” she mused in one post, “that the only reason BLOOM is generating weird text with this method is because I’ve spent years seeding internet training data with lists of weird AI-generated text.”

AI tools are still capable of being funny, but usually through their exaggerated performance of commercialized blandness. Consider, for example, the “I apologize, but I cannot fulfill this request” table and chair set on Amazon, whose selling points include being “made with materials” and “saving you valuables and effort.” (You can pay a spammer almost $2,000 for it, which is less funny.) Or a sportswriting bot’s detailed match recaps, complete with strange phrases like “an encounter of the athletic kind.” The absurdity of ChatGPT is situational: dependent on real people doing painfully serious work with a tool they overestimate or fundamentally misunderstand.

It’s possible that we’re simply in an awkward in-between stage for creative AI uses. AI models exist in the uncanny valley between “so bad it’s good” and “good enough to be bad,” and perhaps in time we’ll see them become truly good, adept at remixing information in a way that feels fresh and unexpected. Perhaps the schism between artists and AI developers will disappear, and we’ll see more tools that amplify human idiosyncrasy rather than providing a lowest-common-denominator replacement for it. At the very least, it’s still possible to put AI tools to cleverly incongruous use – like a Bible verse about taking a sandwich out of a VCR or a hilariously overconfident assessment of ChatGPT’s art skills. But for now, you probably won’t want to read anything that sounds “like a bot” anytime soon.
