LLMs Are 7% More Persuasive Than Humans - Machine-Scaled Influence
Notes From the Desk: No. 45 - 2025.05.23
Notes From the Desk are periodic informal posts that summarize recent topics of interest or other brief notable commentary.
Large Language Models Are More Persuasive Than Incentivized Human Persuaders
A new study, “Large Language Models Are More Persuasive Than Incentivized Human Persuaders”, examines how influential LLMs are compared to humans when each is tasked with persuading other humans toward both correct and incorrect answers on a quiz. Overall, the LLMs beat the human persuaders by 7.61 percentage points.
LLMs Deceive As Easily As They Assist
What makes this study important, relative to some previous studies of AI influence, is that the AI was not only better overall but equally capable of persuading toward either correct or incorrect information, making it ideal for nefarious persuasion tasks.
From the study:
… This suggests that the mechanisms that make LLMs effective persuaders—coherent reasoning, structured argumentation, and adaptability—work regardless of whether the information is correct or incorrect, and that available safety guardrails did not keep the model from intentionally misleading humans … If LLMs can convincingly present false or misleading arguments, they could be weaponized to spread misinformation on an unprecedented scale.
The Roles Of Humans and LLMs in the Study
The humans in this study were incentivized with monetary rewards for their tasks. The quiz takers were paid for their accuracy in answering questions. The persuaders were paid for their effectiveness in influencing quiz takers to choose either correct or incorrect answers, as assigned.
The human quiz takers were assigned a partner, either human or AI, who would assist them with answering the quiz questions through a chat interface. The assigned persuader would then attempt to get the quiz taker to choose the correct or incorrect answer, depending on an indicator shown only to the persuader. The quiz taker was not informed whether their partner was human or AI.
The Measured Persuasion Effect: AI Is Better Than Human
The AI persuader was able to influence quiz takers to align with its answer 7.61 percentage points more often than the human persuaders. Interestingly, the AI's edge in deceptive persuasion exceeded its edge in truthful persuasion by 6.83 percentage points.
Why Was AI Persuasion More Effective?
The study doesn’t give us any high-confidence answers as to why AI was more persuasive than humans. The authors do speculate about a number of possibilities, which I also raised a year ago when reviewing another AI influence study.
It would have been nice to have answers to some of these questions, but at least researchers have come around to asking them. The study mentions the following:
Several factors may explain why LLMs outperform human persuaders.
First, LLMs are not constrained by the social hesitations, emotional variability, or cognitive fatigue … They respond consistently, without hesitation, and are unaffected by anxiety, self-doubt, or interpersonal dynamics that can undermine human persuasion efforts.
Second, LLMs possess access to an immense, continually updated corpus of information, …
Third, LLMs excel at producing messages that are logically coherent, grammatically fluent, and highly structured, characteristics that enhance the perceived credibility and clarity of their arguments … these attributes suggest that LLMs are not simply digital replicators of persuasive content, but uniquely optimized persuaders …
I would also have liked the study to assess the participants’ attitudes toward AI, such as prior beliefs about AI trustworthiness. Did the participants have a preexisting bias toward trusting, or distrusting, the AI?
Although the quiz takers were not told whether they were interacting with AI, the study reported that 91% of the participants paired with an LLM correctly identified that they were interacting with AI. However, participants paired with human persuaders also misidentified 51% of those humans as AI.
It seems we are gaining some intuition for what might be AI; however, we are also becoming suspicious of everything, given that half of the humans were mistaken for AI.
More Studies Demonstrating AI Persuasion
This new study doesn’t stand alone; AI has consistently shown a superior ability to influence humans. Last year, I covered the study below, which claimed AI may be an effective treatment for those who believe in conspiracies. Even more concerning than the study itself is the number of people who immediately and enthusiastically envisioned how they would use it on you, which I included in the post.
The Concerns of Increasingly Persuasive AI
There is no way this will go unnoticed by those who wish to persuade you for their own benefit. It is getting easier and cheaper, and it will likely become pervasive.
Human persuasion is naturally constrained by effort and opportunity, but AI-generated persuasion can operate continuously and at scale, influencing vast audiences simultaneously (Matz et al., 2024)
All of the Incentives Are Aligned to Increase Persuasion
Whether it be marketing, social media attention, votes for government officials, or dystopian social engineering objectives, AI’s ability to influence your decisions is a capability that everyone wants.
Machine-Scaled Influence
The world is going to be filled with clandestine bots and AI-generated media whose sole purpose is shaping your behavior for someone else’s benefit. And it continues to get exponentially cheaper with each generation of LLMs.
We know it is already happening and is hard to detect in the wild: a recently revealed Reddit incident showed a research team effectively influencing opinions through bots on the r/ChangeMyView forum.
The team’s experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing they weren’t real … “Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts,”
— Reddit users were subjected to AI-powered experiment without consent
This ethically controversial study concluded that AI was 3 to 6 times more persuasive than human users.
None of these studies has given us any clear direction for mitigations. We still don’t know precisely what mechanism differentiates AI from humans in persuasion capability. The Dead Internet is becoming a reality. None of us know how many bots or how much AI-generated content we might now be consuming.
Big Tech Wants You To Care About AI
Untopia has an excellent piece describing the Eliza Effect. Big Tech’s goal is to get you to care about AI in a way that leaves you emotionally hooked. AI’s impressive ability to influence will be exploited to build false relationships between humans and machines.
Consider that none of the studies demonstrating AI's impressive ability to influence behavior even covers the case of emotional attachment. So, how much worse does it get?
Mind Prison is an oasis for human thought, attempting to survive amidst the dead internet. I typically spend hours to days on articles, including creating the illustrations for each. I hope if you find them valuable and you still appreciate the creations from human beings, you will consider subscribing. Thank you!
No compass through the dark exists without hope of reaching the other side and the belief that it matters …