Fake news may be the issue of the day, but what about “deepfake” news – that is, articles written by an artificial intelligence (AI) machine only pretending to be human? OpenAI, a company backed by several Silicon Valley heavyweights, claims it has developed software that can mimic human writing so convincingly that those who invented it are too scared to divulge the full details. After being fed an initial sentence or question to start the ball rolling, the AI program GPT2 generates text in either fiction or non-fiction genres, matching the style of the initial human-input prompt.
“…it’s possible to generate malicious-esque content quite easily.”
“Deepfakes” are phony content created by “deep learning” (artificial intelligence) computers. This is a quickly developing field, but the researchers behind GPT2 have refused to release the full version of the product, due to concerns about it “being used to generate deceptive, biased, or abusive language at scale.” In other words, fake news and objectionable material can now be composed at speeds beyond the capabilities of human writers and sent out to spam the world.
Breaking News: Scientists Discover Unicorn!
Previous attempts at such technology stumbled when the bots were unable to “remember” details of the story and context; the GPT2 program, however, can compose a narrative without losing track of these elements. After receiving one or two lines as a prompt, the technology went on to write convincing fake news stories about nuclear material being stolen in Cincinnati, scientists discovering a unicorn (albeit one with four horns), performer Miley Cyrus being caught shoplifting, as well as Lord of the Rings fan fiction and an essay on why the American Civil War occurred – of which the first paragraph reads:
It is easy to identify why the Civil War happened, because so many people and so many books and so much television and films tell us that it was the cause, that it has something to do with race or economics or religion. There is much agreement that it was essentially a war of slavery on behalf of capitalism, about a century of slavery. But that’s not what most people think of when they think of the Civil War. Many people think of the war as about states rights. You’re not wrong about that, but it’s kind of misleading to say that the Civil War was a conflict between states’ rights and federalism. So let’s try again. What’s the idea that has inspired so many people to go onto the battlefields of Gettysburg and Antietam and Gettysberg (sic) and Petersburg and Fredericksburg? The American idea of the republic–a notion of limited government–is a great part of the history.
Yes, this technology could prove a boon to lazy high school students who don’t want to do their homework, but are there other major implications for truth and the spread of information? OpenAI computer scientists certainly think so; policy director Jack Clark commented that researchers “quickly discovered it’s possible to generate malicious-esque content quite easily.” Investor Elon Musk has since chosen to publicly distance himself from the company, tweeting, “I’ve not been involved closely with OpenAI for over a year & don’t have mgmt or board oversight.”
Automating the News
It turns out that some news outlets have been releasing computer-generated stories for ages; Digiday reports that Reuters, the Associated Press, and Bloomberg are already doing it for short reports, while The Washington Post’s own “robot reporter” published more than 800 stories in a year – and that was back in 2017! The Guardian recently experimented with the GPT2 program, which staff prompted to write new material based on the first lines of news articles and classic novels. The program picked up on the style of the text and proceeded with that information – replicating tone even down to the individual writer. As columnist Hannah Jane Parkinson wrote:
But, oh my God. Seeing GPT2 “write” one of “my” articles was a stomach-dropping moment: a) it turns out I am not the unique genius we all assumed me to be; an actual machine can replicate my tone to a T; b) does anyone have any job openings?
While AI-powered text may indeed force today’s commentariat into unemployment, that is unlikely to bother many people; there was little public sympathy following the recent layoffs at Buzzfeed, HuffPost, and others. But there are broader possibilities here related to fraud, wrongful convictions and accusations of crimes, propaganda, security, and even the stunting of human creativity and the artistic merit of the written word.
He Said What!?
But forget about text. The simultaneously emerging world of video deepfakes has even greater implications. “I’ll believe it when I see it,” we vision-oriented humans say when faced with the unknown – but what if you can no longer believe what your eyes tell you? A Bloomberg segment demonstrates exactly what deepfake video is and why the technology sparks concerns.
The potential for creating falsely incriminating “evidence” and convincing propaganda appears limitless. Internet users may already be reeling from the onslaught of “fake news,” or at least the insistence of media and government bodies that we are being inundated with the stuff. But it appears the snowball has only started rolling down the hill, and it may gain enough size and speed to affect individual lives more than any “Russian bots” or elections ever have.