You may recall my sci-fi romance, A Heaven For Toasters, set some 100 years in the future. Leo, the android protagonist, exhibits some distinctly human characteristics—including the ability to feel human emotions. But could Leo become a writer or poet?
The Economist recently shared an article cheekily called Don’t Fear the Writernator – a reference to literature’s terminator. What prompted this was the news that researchers have come up with a more powerful version of automated writing.
So, how afraid should we be? Is Leo about to compose a sonnet to woo Mika?
Automated writing, in case you’re unfamiliar with the term, is best exemplified by Gmail’s Smart Reply feature, which offers brief answers to routine emails. So, if someone asks you “shall we meet up for lunch?” Gmail suggests a variety of appropriate responses, for example, “Sure!”
More strikingly, Smart Compose kicks in as you write, suggesting endings to your sentences.
The system makes some sophisticated statistical guesses about which words follow which. Imagine beginning an email with “Happy…” Having looked at millions of other emails, Gmail can plausibly guess that the next word will be “birthday”.
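The idea behind this kind of guessing can be sketched in a few lines of code. The following is a toy illustration only—a bigram model built from a handful of made-up sentences, not Gmail's actual system, which is vastly larger and uses neural networks rather than raw counts:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the millions of emails a real system learns from.
corpus = [
    "happy birthday to you",
    "happy birthday dear friend",
    "happy new year",
    "see you at lunch",
]

# Count which word follows which (a simple bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def suggest(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("happy"))  # "birthday" follows "happy" most often in the corpus
```

Having seen “happy birthday” twice and “happy new” only once, the model suggests “birthday”—exactly the kind of plausible statistical guess described above, just on a microscopic scale.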
Automated Writing, New Yorker-style
Now, The New Yorker’s John Seabrook recently described a more powerful version of this technology, called GPT-2.
GPT-2 has been fine-tuned on 40 gigabytes’ worth of back issues of The New Yorker. This lets it ably mimic the magazine’s style.
How Scared Should You Be?
From testing GPT-2, Seabrook realized that what eludes computers is creativity. By virtue of having been trained on past compositions, they can only be derivative. Furthermore, they cannot conceive a topic or goal on their own, much less plan how to get there with logic and style.
At various points in the online version of his article, readers can see how GPT-2 would have carried on writing Seabrook’s piece for him. The prose gives the impression of being human. But on closer inspection, it is empty—even incoherent.
To truly write, you must first have something to say. Computers do not. They await instructions. Given input, they provide output.
Such systems can be seeded with a topic, or the first few paragraphs, and be told to “write.” While the result may be grammatical English, this should not be confused with the purposeful kind.
As the Economist points out, to compose meaningful essays, the likes of GPT-2 will first have to be integrated with databases of real-world knowledge. This is possible at the moment only on a very limited scale. Ask Apple’s Siri or Amazon’s Alexa for a single fact—say, what year “Top Gun” came out—and you will get the answer. But ask them to present arguments in a debatable case—”Do gun laws reduce gun crime?”—and they will flounder.
An advance in integrating knowledge would then have to be married to another breakthrough: teaching text-generation systems to go beyond sentences to structures.
Seabrook found that the longer the text he solicited from GPT-2, the more obvious it was that the work it produced was gibberish.
Each sentence was fine on its own. Remarkably, three or four back to back could stay on topic, apparently cohering. But machines are eons away from being able to recreate rhetorical and argumentative flow across paragraphs and pages.
Not only can today’s journalists expect to finish their careers without competition from the Writernator—today’s parents can tell their children that they still need to learn to write, too.
Fake News and the Writernator
A more plausible worry is that such systems will be able to flood social media and online comment sections with semi-coherent but angry ramblings that are designed to divide and enrage.
This is already happening, as a recent study on fake news from the University of California, Santa Barbara shows. Artificial intelligence allows bots to simulate Internet users’ behavior (e.g., posting patterns), which helps propagate fake news.
For instance, on Twitter, bots are capable of a number of social interactions that make them appear to be regular people. They respond to postings or questions from others based on scripts they were programmed to use. They look for influential Twitter users—those with lots of followers—and send them questions in order to be noticed and generate trust.
They also generate debate by posting messages about trending topics. They can do this by hunting for, and repeating, information about the topic that they find on other websites.
Bots’ tactics work because the average social media user tends to believe what they see or what’s shared by others without questioning. So bots take advantage of this by broadcasting high volumes of fake news and making it look credible.
But bots apparently aren’t that good at deciding which original comments by other users to retweet. They’re not that smart.
People are smart. But people are emotional. Real people also play a major role in the spread of fake news.
Fighting fake news will, then, get harder with the advent of increasingly sophisticated software. Most of the time, however, the spread of fake news is a consequence of real people, usually acting innocently. All of us, as users, are the biggest part of the problem.
Perhaps a flood of furious auto-babble will force future readers to distinguish between the illusion of coherence and the genuine article. If so, the Writernator, much like the Terminator, would even come to do the world some good.