AI News Spinning & Content Development System Withheld Over Ethical Concerns

OpenAI, the AI research company co-founded by Elon Musk, has decided not to make public its latest AI system, which can produce extremely convincing “fake news”. The company is concerned that individuals and organizations could use the system to create and distribute false information at scale.

Though OpenAI typically publishes its research, its latest model – named GPT-2 – can produce fictitious text that resembles genuine news so closely that the average internet user would find it virtually impossible to tell the two apart.

Unbelievably Credible Unicorns

In a sample provided by OpenAI, GPT-2 completed a fake news piece announcing the discovery of a herd of unicorns in the Andes Mountains.

Based on a single introductory paragraph, the AI was able to generate a convincing news story, complete with quotes and made-up names, and with a logical structure resembling that of an actual news piece.
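GPT-2 was not public when this story broke, but OpenAI later released the model in stages. For readers curious what “completing” an introductory paragraph looks like in practice, here is a minimal sketch using the since-released GPT-2 weights via the Hugging Face transformers library; the library, model name, and sampling settings are illustrative choices, not anything described in the article.

```python
# Sketch: prompting a language model to continue a news-style paragraph.
# Uses the since-released GPT-2 weights through Hugging Face transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A paraphrase of OpenAI's famous "unicorns" prompt.
prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns "
    "living in a remote valley in the Andes Mountains."
)

# Sample a continuation; the model invents names, quotes, and structure.
result = generator(prompt, max_length=120, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```

Because the continuation is sampled, each run produces a different, equally plausible-sounding story from the same opening paragraph.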

The AI system was trained on roughly eight million web pages, gathered by following outbound links shared on Reddit. This training dataset is several times larger than those used by previous AI systems.
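OpenAI's accompanying paper describes this corpus (called WebText) as being assembled from outbound links on Reddit submissions that earned at least 3 karma, using karma as a crude human quality filter. A minimal sketch of that filtering step, with made-up data standing in for a real Reddit archive, might look like this:

```python
# Sketch of WebText-style link filtering: keep outbound links from Reddit
# submissions that earned at least 3 karma, per OpenAI's GPT-2 paper.
MIN_KARMA = 3

# Hypothetical stand-in for a dump of Reddit submissions; real collection
# would parse an actual Reddit data archive.
submissions = [
    {"url": "https://example.com/feature-story", "karma": 12},
    {"url": "https://example.com/low-effort-spam", "karma": 0},
    {"url": "https://example.com/long-read", "karma": 3},
]

# Keep only outbound links whose submission cleared the karma threshold.
training_urls = [s["url"] for s in submissions if s["karma"] >= MIN_KARMA]
print(training_urls)
```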

Creating Infinite Fake News & Reviews

As well as generating highly convincing fake news, a tweaked version of the system can also create a virtually unlimited number of similarly credible product reviews – both positive and negative.
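The “tweaked” system presumably involved fine-tuning the model on review data, which is beyond a short example. A simple prompt prefix on the base model gives the flavor of how the same generator can be steered toward positive or negative review text; the prefix wording here is invented for illustration.

```python
# Sketch: steering generation with a sentiment-tagged prefix. This is an
# illustrative stand-in for fine-tuning, not OpenAI's actual setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

for sentiment in ("positive", "negative"):
    prompt = f"Product review ({sentiment}): I bought this blender and"
    out = generator(prompt, max_length=60, do_sample=True, top_p=0.9)
    print(out[0]["generated_text"], "\n")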

In this light, it is not difficult to see why OpenAI chose to withhold its latest research from the public.

The company also urges internet users to exercise more caution when reading text online.

OpenAI says: “The public at large will need to become more skeptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more skepticism about images.”

Since systems like GPT-2 may become widely available in the future, OpenAI argues that researchers need to experiment with such models to uncover both their beneficial and their harmful potential applications.

Jack Clark, OpenAI’s head of policy, said: “We need to perform experimentation to find out what they can and can’t do. If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

More details can be found at https://www.artificialintelligence-news.com/2019/02/15/openai-latest-research-societal-impact and https://www.techradar.com/news/ai-news-writing-system-deemed-too-dangerous-to-release.