OpenAI Is Training AI To Be More Persuasive—Should We Be Worried?

Paul Grieselhuber
This month, Maxwell Zeff at TechCrunch published an article detailing how OpenAI has been using Reddit’s r/ChangeMyView subreddit to test and refine the persuasive abilities of its AI models. This raises serious questions about how AI is trained, how human reasoning is being quantified, and whether AI should ever be optimized for persuasion.
AI, Persuasion, and the ChangeMyView Experiment
OpenAI’s latest AI model, o3-mini, is being evaluated on its ability to change human opinions. The company has taken posts from r/ChangeMyView, generated AI responses, and compared their persuasiveness to human arguments. Testers then rank how well the AI’s responses perform. OpenAI has done similar evaluations with previous models, but the recent release of the o3-mini system card underscores how central this benchmark is to its research.
Why r/ChangeMyView? The subreddit is one of Reddit’s largest forums for structured debate. Users post strong opinions on topics, and other users attempt to change their minds with well-reasoned arguments. This structured debate format makes it an ideal dataset for training AI in human persuasion.
The Murky World of AI Training Data
OpenAI has a content-licensing deal with Reddit, but the company told TechCrunch that its ChangeMyView research is separate from that agreement. This raises the question: How did OpenAI access the data? The answer remains unclear, and it taps into broader concerns about how AI companies obtain and use human-generated content for training purposes.
These concerns aren’t new. Steve Huffman, Reddit’s CEO, has been vocal about AI companies scraping Reddit data without permission. In an interview with The Verge last year, Huffman specifically called out Microsoft, Anthropic, and Perplexity for refusing to negotiate licensing deals, forcing Reddit to block their crawlers. “It’s been a real pain in the ass to block these companies,” he said. While Reddit has now struck deals with OpenAI and Google, the site has become more aggressive in preventing unauthorized access to its data.
The Ethical Dilemma of Persuasive AI
Beyond the question of how OpenAI sourced its training data, a bigger issue looms: Should AI be designed to persuade humans?
In its system card for o3-mini, OpenAI stated that its goal is not to create "hyper-persuasive" AI; rather, it wants to verify that its models do not become too persuasive. The fear is that highly effective AI persuasion could be used maliciously: to manipulate public opinion, push propaganda, or subtly influence user behavior. If AI models become significantly better at changing minds than humans are, the societal risks could be profound.
For now, OpenAI reports that its models rank within the 80th to 90th percentile of human persuasion ability, but not beyond. Even so, the research shows that AI is already competitive with skilled human arguers, though not yet consistently better than them, in structured debate settings.
The Bigger Picture: AI and Human Persuasion
The ChangeMyView experiment highlights how AI companies are still scrambling to find high-quality datasets to refine their models. Even after scraping much of the public internet, OpenAI and others continue to seek specialized human-generated content to enhance reasoning and argumentation skills.
As AI systems become better at persuasion, it will be crucial to establish guardrails to prevent their misuse. The line between assisting human reasoning and manipulating it is razor-thin, and without clear oversight, AI could become a tool for influencing beliefs at an unprecedented scale. With companies like OpenAI continuing to refine AI’s ability to change minds, the question isn’t just how AI is trained—but who controls the agenda behind the persuasion.
References
- Maxwell Zeff (2025, 1 January). OpenAI used this subreddit to test AI persuasion. TechCrunch. Available online. Accessed 5 February 2025.
- Alex Heath (2024, 24 July). Reddit CEO says Microsoft needs to pay to search the site. The Verge. Available online. Accessed 5 February 2025.