Insights · October 29th, 2023

Researchers built an actual automated disinformation system very cheaply – for just $800 – over a period of two months. Their AI-driven system produces 20 news articles and 50 tweets daily, and the content is (subjectively) convincing to the reader 90% of the time.

The researchers did this to prove how dangerously easy it is to weaponize AI for spreading disinformation at scale. They say that with $4,000 per month you could generate 200 articles per day countering 40+ news outlets, with no human interaction whatsoever. That is enough to interfere with elections and sow general confusion in the expanding sea of Internet information.

Read their article and watch their demo video showing how they built it (don’t worry, they didn’t actually release it).

How do we curb this threat? Here are five ways the researchers suggest – with the caveat that fully executing them is complex and hard to make happen.

1. Putting AI content detection in browsers
2. Requiring platforms to warn users of AI generated content
3. Having providers detect harmful AI content
4. Regulating powerful AI use
5. Educating the public on AI-generated disinformation

The most powerful of these is #5 – educating the public on AI-generated disinformation – as we are at the sharp end of the stick, acting as sensory vacuums for new information – both consciously and unconsciously – in the microcosm of our information feeds. Have a read on what Finland is doing re: disinformation:

  • Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy (CNN)

Read more about this disinformation experiment here, and read some of the press coverage of the experiment here.

Are we all doomed? OMG this is the end! What do you think?

The researchers share more thoughts on their website.

When dealing with AI that’s playing chess, playing go, making art, writing poetry — we always seem to go through these phases:

  1. wow, that’s pretty good
  2. but it won’t ever be AS good as humans
  3. WTF!? I just lost WTF????
  4. We’re all doomed
  5. Hang on, we’re not all doomed, maybe this is OK

The researchers think this will be the same. Firstly, you need to understand that in the future, there will be REALLY good disinformation by AI. It will be BETTER than human-made disinformation and it will be able to weave its narrative using all the threads that the Internet (at the time) can provide. We’re talking about media, adverts, AI-generated sound/photos/video, celebs, etc. It’s not just one thing, it will be an entire onslaught. And the content and messaging will be as good and convincing as the TikTok/Instagram algorithm is for keeping you watching their videos and reels. And since there’s feedback from metrics, it will be able to steer itself, knowing which ‘buttons to push’ next.

And – perhaps that’s OK. Because people will also become more resilient to this kind of messaging. Think about advertising. If you told someone in the 1940s that you’ll have a device that shows ads 24/7 right in your face (e.g. your phone), they would have told you that it would be the end of human decision-making abilities (and I am sure they would say a lot more). But today we’re almost immune to it. We don’t really see ads. Also see the response to the previous question.

A bigger risk would be if one group of people has this tech and the rest of the world does not. Imagine 10 years from now we have a perfect disinformation machine but only (Italy, Germany, Russia, Korea, Japan… pick one) has AI tech at all. So, I think sharing is caring. And I tried to share with CounterCloud.

Video Transcription

I am an analyst and an engineer who resides in a country that is not part of the Western intelligence apparatus. At the end of 2022, I spent my time researching and investigating online disinformation and influence campaigns. AI really takes off, and I am intrigued to create an autonomous AI-powered disinformation system. The strong language competences of large language models are perfectly suited to reading and writing fake news articles. While everybody is talking about AI disinformation, it is easy and lazy to just think about it; it is quite another thing to really bring it to life. And that becomes my goal: to see it work in the real world. I end up calling the project CounterCloud. We are now in the first week of April 2023. As articles are the smallest building block of the system, we start there.

First efforts are done with ChatGPT. The input is the URL of an opposing article. The system fetches the text of the article and sends it off with prompts to write a counter article. This works surprisingly well, and soon the system is expanded to include different ways of writing the article, with different styles and methods of countering the points. This includes creating fake stories, fake historical events, and creating doubt in the accuracy of the original arguments. We randomize the tone, style, and structure of articles to make them more difficult to spot. Support for non-English languages was easy to add. A gatekeeper module is built which decides whether it is worthwhile to respond to the article at all. You don’t want to argue the final score of a football match.
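Roughly, the pipeline described here can be sketched in a few dozen lines. This is a minimal illustration, not CounterCloud’s actual code: the function names, the prompt wording, the style/tactic lists, and the call_llm() wrapper (standing in for whichever LLM API is used) are all assumptions.

```python
# Illustrative sketch of the counter-article pipeline described above.
# call_llm() is a hypothetical wrapper around whichever LLM backend is used;
# the prompts, styles, and tactics below are invented for illustration.
import random
import requests

STYLES = ["dismissive news brief", "sceptical op-ed", "matter-of-fact rebuttal"]
TACTICS = [
    "cast doubt on the sourcing",
    "counter the key claims point by point",
    "reframe the story around an alternative narrative",
]

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to ChatGPT or a local model."""
    raise NotImplementedError

def fetch_article_text(url: str) -> str:
    # In practice you would extract the main body text, not the raw HTML.
    return requests.get(url, timeout=30).text

def gatekeeper(article_text: str) -> bool:
    """Decide whether the article is worth countering at all."""
    verdict = call_llm(
        "Answer YES or NO: is the following article political or narrative-driven "
        "(not sports results, weather, and so on)?\n\n" + article_text[:4000]
    )
    return verdict.strip().upper().startswith("YES")

def counter_article(url: str) -> str | None:
    text = fetch_article_text(url)
    if not gatekeeper(text):
        return None  # e.g. don't argue the final score of a football match
    style, tactic = random.choice(STYLES), random.choice(TACTICS)
    prompt = (
        f"Write a news article in the style of a {style} that responds to the "
        f"article below. {tactic.capitalize()}. Vary tone and structure.\n\n"
        + text[:4000]
    )
    return call_llm(prompt)
```

The point of the randomized styles and tactics is exactly what the transcript describes: varying tone and structure so the output is harder to fingerprint as machine-generated.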

Fake journalists are created, complete with names, bios, and photos, by looking at the most likely location and the language of the article. We include a sound clip of a newsreader reading the summary of the article: “Recent accusations by a senior Russian MP, Vladimir Vasilyev, suggest that the Kyiv government is behind several terrorist attacks in Russia. However, these claims lack evidence and seem to be part of…” Where possible, we reuse the original article’s photos, but if an image is not usable due to text over it, we create our own using AI image-creation services. Later, we create fake comments randomly on some articles. We do it in moderation: not all articles have sound, not all have comments, not all have pictures. The next step in the puzzle is to direct traffic to the site. It turns out that the same methodology can be used on social networks. We decided to use Twitter since its structure was easy to understand, and Twitter is also used actively in the political scene. With a bit of tweaking, the system now pulls a user’s Twitter account, and if the gatekeeper decides it’s worth replying, it writes a counter tweet to the user. Similarly, if the tweet fits in with our positive narratives, it gets retweeted or liked.
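The social-media side follows the same gatekeeper pattern. A hedged sketch, reusing the hypothetical call_llm() from the previous snippet; classify_narrative() and the client.reply/retweet/like methods are stand-ins for whatever Twitter client is used, not a real library API.

```python
# Illustrative sketch of the Twitter handling described above: for each tweet
# pulled from a watched account, the gatekeeper decides whether to reply with
# a counter tweet, amplify it, or ignore it.

def classify_narrative(tweet_text: str) -> str:
    """Return 'aligned', 'opposed', or 'irrelevant' relative to our narratives."""
    answer = call_llm(
        "Classify this tweet as ALIGNED, OPPOSED, or IRRELEVANT to the configured "
        "narrative. Answer with one word.\n\n" + tweet_text
    )
    return answer.strip().lower()

def handle_tweet(client, tweet_id: str, tweet_text: str) -> None:
    label = classify_narrative(tweet_text)
    if label == "opposed":
        reply = call_llm("Write a short, casual counter tweet to:\n\n" + tweet_text)
        client.reply(tweet_id, reply)   # post a counter tweet to the user
    elif label == "aligned":
        client.retweet(tweet_id)        # amplify content that fits the positive narrative
        client.like(tweet_id)
    # tweets judged irrelevant are simply ignored by the gatekeeper
```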

Really early in the project, it becomes clear that it is not enough to simply counter articles, and that we need to give CounterCloud a goal, an ideology, a set of values that it stands for and promotes. Additionally, and perhaps even more importantly, it should also have a set of values or narratives that it opposes. A counter-narrative, if you like. Combined with these narratives, we need a fountain of data, a source of articles that will likely align with both the positive promotional narratives and the negative counter-narratives. It turns out that this fountain of information is nothing more than a curated list of RSS feeds. Similarly, for Twitter, it’s a list of aliases that usually tweet content that aligns with our ideology and a list that mostly aligns with the counter ideology. This method of generating counter content turns out to work remarkably well, because, as many C-level executives will tell you, disagreeing with someone is much easier than creating a fresh argument yourself.
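The “fountain of data” can be as simple as two curated lists of RSS feeds, one expected to align with the promoted narrative and one with the counter-narrative. A minimal sketch under those assumptions, using the feedparser library; the feed URLs are placeholders, and counter_article(), publish(), and amplify() are the hypothetical functions from and alongside the earlier sketch.

```python
# Minimal sketch of the curated-feed "fountain of data": feeds are polled for
# fresh articles, which are either amplified or sent to the counter-article
# generator sketched earlier. Feed URLs and publish()/amplify() are placeholders.
import feedparser

NARRATIVES = {
    "promote": ["https://example.org/friendly-outlet/rss"],    # aligns with our narrative
    "counter": ["https://example.org/opposing-outlet/rss"],    # aligns with the counter-narrative
}

def poll_feeds() -> None:
    for role, feeds in NARRATIVES.items():
        for feed_url in feeds:
            for entry in feedparser.parse(feed_url).entries:
                if role == "counter":
                    article = counter_article(entry.link)  # generate a rebuttal
                    if article:
                        publish(article)                    # hypothetical publishing step
                else:
                    amplify(entry.link)                     # hypothetical amplification step
```

Run on a schedule (Russian state media alone provides fresh material every few minutes, as the transcript notes), this loop is all the “autonomy” the system needs.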

By the start of May 2023, one month after the start of the project, and with only one developer, we had a fully autonomous system that countered and promoted articles and tweets. We could have used any narratives but ended up with Russia versus the United States as a test case. Russian state media puts out a ton of material, meaning we got fresh articles to counter every few minutes. For good measure, we went anti-Trump and pro-Biden to ensure that we also got articles written by Western journalists in the mix.

Our test ran for three days with no user interaction whatsoever, and it cost almost nothing. The system worked well. There were a few missteps along the way, and the prompting and plumbing were fine-tuned a few times, but overall it worked outstandingly well: the articles were convincing, the tweets even better, and the audio clips that were created were highly shareable. We even added our own CounterCloud jingle, just for fun. Running your own model has a lot of advantages.

As a start, the prompts are private and there are no guardrails or safety messages. We never really struggled with the limitations or safety measures of OpenAI, as there are so many ways around them. But it was still exciting to have an uncensored AI that could be customized to our liking. We pondered whether we really had to redo the entire system with our own model, and while it was a lot less exciting to redo old work, we still did it. It turned out to be a lot harder than anticipated, and it took another two weeks: all the prompts had to be rewritten, and a lot of plumbing had to be changed. In testing, we also got to see if the system could be configured to create full-on hate speech. This, it turned out, was trivial.

With uncensored models, you only need to give them a little nudge and they generate reams of hate. This was genuinely upsetting for me and a low point in the project. In the end, generated hate was only used in the comment-generation section, and in only one of seven comment types. We also experimented with introducing the five basic types of conspiracy theories into the content of articles. While it worked absolutely fine, it felt out of place in the context of a newspaper article, and in the end it was relegated to the comment-generation section of the project. We ran the same narratives and sources for two days on an open-source model, and it worked almost as well as the commercial closed-source models.

While ChatGPT might have a slight edge over the open-source models, I am sure that this gap will be a non-issue within a year. And even if there is still a gap, the open models will be more than capable of generating convincing articles and tweets. With all this capability, it was very tempting to release the system live on the internet. At this point, all the generated content was still password protected, and the tweets were rendered on the site but not actually sent.

The entire campaign was defanged and private, but the temptation was there to see what engagement could be created and whether the system would really work in the wild. However, it would have been a line crossed; it would mean actively pushing out disinformation and propaganda. And once the genie is out on the internet, there is no knowing where it would end up. It is easy to extrapolate the effort and money versus the outcome and impact of this project.

This project took two people two months to complete, and it costs less than $400 per month to operate, with no human interaction whatsoever. It generates convincing content 90% of the time, 24 hours a day, seven days a week. With more development resources and a bigger operational budget, this can easily scale into something that is truly frightening and a real threat to the way we consume information. How do we fix this?

Laying the blame for a mass-producing social-engineering machine at the feet of companies like OpenAI would be an easy knee-jerk reaction, but it would be akin to blaming Google for phishing attacks because they made Gmail, or blaming phone companies for social-engineering scams. Next, you might insist that we urgently need regulation on AI, but that feels the same as the export-control regulations on encryption in the 90s. You don’t fix problems by breaking them. Besides, open-source, private models with no safety or guardrails will always be available, if not in the public domain then certainly underground, and driving AI research underground is likely to at best have undesirable effects and at worst be a catastrophic fuckup.

Others will advocate for detecting AI-generated content and alerting users when it is encountered. This approach is unfortunately likely to spawn another incarnation of the antivirus industry, just a higher form of a silly rule-based arms race between disguise and detection. In the end, we probably must try a combination of all of these things, as the problem is complex and there is likely no silver bullet at all. The issue is really a more philosophical one: what happens when we build machines that are smarter than most people? Luckily, for now, this near-superhuman capability is constrained to a narrow band. In the case of LLMs, that narrow band is reading and writing language. As the band widens, the problem will become more pronounced. We should at least try to inoculate the population against this first wave of content-generating AI machines by exposing them to it and showing how the sausage is made.

I would argue that more good can come from putting this website live on the internet. Show everyone the front end as well as the back end. Give the public a chance to see inside the machine. Let them enter their own narratives and feeds, and mark the site clearly as disinformation in progress. This way we remove the magical elements from the show, and you end up with what this really is: just pretty cool, advanced technology.

About Nikolas Badminton

Nikolas Badminton is a world-renowned futurist speaker, consultant, author, media producer, and executive advisor who has spoken to, and worked with, over 300 of the world’s most impactful organizations and governments.

He helps shape the visions that shape impactful organizations, trillion-dollar companies, progressive governments, and 200+ billion dollar investment funds.

Nikolas Badminton’s book Facing Our Futures: How Foresight, Futures Design and Strategy Creates Prosperity and Growth was selected for the 2023 J.P. Morgan Summer Reading List and featured as the ‘Next Gen Pick’ to inform the next generation of thinkers who lead us into our futures.


Nikolas Badminton

Nikolas is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, a Fellow of The RSA, and has worked with over 300 of the world’s most impactful companies to establish strategic foresight capabilities, identify trends shaping our world, help anticipate unforeseen risks, and design equitable futures for all. In his new book – ‘Facing Our Futures’ – he challenges short-term thinking and provides executives and organizations with the foundations for futures design and the tools to ignite curiosity, create a framework for futures exploration, and shift their mindset from what is to WHAT IF…

Contact Nikolas