Are disinformation campaigns possible with GPT-3?
27 May 2021
With public and industry interest in high-performance natural language generation growing rapidly, and with GPT-3 the most powerful model yet available, researchers at Georgetown’s Center for Security and Emerging Technology (CSET) have been examining whether GPT-3 could be used to run a large-scale automated disinformation campaign.
In their recently published study, they report some striking results. As one might expect, GPT-3 can significantly reduce the work required to write disinformation while increasing its reach and, possibly, its effectiveness.
Background: the system combines a huge neural network with a powerful machine-learning algorithm, trained on more than a trillion words.
The researchers wanted to know: if GPT-3 can write credible news, is it also capable of producing convincing “fake news”? And if it can write opinion pieces, might it also produce misleading tweets?
Findings: The study suggests that GPT-3 is quite powerful on its own, but reaches new heights when paired with a skilled operator and editor. While the software will not replace humans in disinformation operations, it is undoubtedly a tool for producing moderate- to high-quality disinformation at a scale far beyond anything that has existed before.
GPT-3, they say, is a versatile and effective writer, yet it is limited by the data on which it was trained. The authors also note its far-from-perfect writing style, its lack of narrative focus, and its strong tendency to adopt extreme views.
Should countries like China and Russia decide to automate their disinformation campaigns, software like GPT-3 would almost certainly be the first choice. Both countries have the computing power to train and run such a system, should they wish to do so.