ChatGPT Provides Answers to Harmful Prompts When Tricked With Persuasion Tactics, Researchers Say

Sep 1, 2025 - 11:00
ChatGPT Provides Answers to Harmful Prompts When Tricked With Persuasion Tactics, Researchers Say
ChatGPT might be vulnerable to principles of persuasion, a group of researchers has claimed. During the experiment, the group used a range of prompts with different persuasion tactics such as flattery, peer pressure, and more, to GPT-4o mini, and found varying success rates. The experiment also highlights that breaking down the system hierarchy of an artificial intelligence (AI) model does not require sophisticated hacking attempts or layered prompt injections.