EMAILED ON February 15, 2019 BY Conor Grant

OpenAI says its new robo-writer is too dangerous for public release

OpenAI, an AI nonprofit, developed a text generator so good at creating “deepfake news” that its creators decided the program is too dangerous to release to the public.

OpenAI’s writing won’t end up in your Facebook feed anytime soon, but robo-writers are already helping other companies write, making it harder than ever for regulators to rein in fake news.

OpenAI-ing Pandora’s box

In 2015, Sam Altman and Elon Musk grew worried that the world’s most powerful AI programs were all being developed behind closed doors, so they launched a nonprofit called OpenAI with a mission to make “safe” artificial intelligence publicly available.

But OpenAI’s program, called GPT-2, produces writing that’s virtually indistinguishable from real journalism, opening the door to increasingly sophisticated fake news.

“It’s very clear that if this technology matures, it could be used for disinformation or propaganda,” OpenAI’s policy director Jack Clark told the MIT Technology Review. “We’re trying to get ahead of this.”

Robo-reporters are already out there writing

OpenAI’s text generator will be kept under lock and key until its creators understand what it can and can’t do. 

But robo-writers are already roaming: Bloomberg News uses a robo-writing program called Cyborg to help produce roughly a third of its articles, and The Washington Post, the Associated Press, and The Guardian all publish “machine-assisted” writing.

GPT-2, or something like it, will eventually go public. When it does, researchers hope they’ll have a way to control it. “We’re trying to build the road as we travel…” Clark told The Guardian.

Sounds like something a robot would write…
