Large language models can help detect social media bots — but can also make the problem worse
An external study of Twitter in 2022 estimated that between a third and two thirds of accounts on the platform were bots. Many of these automatons are dispatched to sow political polarization, hate, misinformation, propaganda and scams. The ability to sift them out of the online crowds is vital for a safer, more humane (or at least more human) internet.
But the recent proliferation of large language models (LLMs), such as OpenAI's ChatGPT and Meta's Llama, ...