The Hustle

An AI experiment made a surprisingly pleasant Twitter


Researchers built a fake Twitter populated with bots — not to be confused with actual Twitter, now X, which is only mostly bots.

And it turns out there’s some promise that these kinds of simulacra can actually tell us something about humans, per a fascinating read from Business Insider.

How it worked

Lead scientist Petter Törnberg and his team built 500 chatbots using OpenAI's GPT-3.5. Each had a persona specifying its age, gender, income level, religion, politics, preferences, and more.

The bots were fed news from July 1, 2020, then let loose inside a Twitter-like social media platform to discuss it.

Why? To study how to build a better social network, given the idea that large language models (LLMs) — designed to act like people conversing — would allow researchers to efficiently study human behavior.

So what happened?

The study ran its simulated Twitter under three different models for how the platform functioned, and the design made a difference.

Törnberg told Insider that, when discussing partisan issues, “if 50% of the people you agree with vote for a different party than you do, that reduces polarization. Your partisan identity is not being activated.”

Of course…

… these are just bots in a sandbox and there’s still work to be done when it comes to training methods and ethics.

But Lisa Argyle, a political scientist at Brigham Young University, said LLMs with identity profiles like these often answer survey questions similarly to the humans they were modeled after — so maybe there’s hope for social media yet.
