Imagine posting one angry comment online—and finding yourself angrier, more anxious, and emotionally disconnected weeks later. According to a groundbreaking new study, that’s not just possible. It’s likely.

Researchers from Toronto Metropolitan University and the University of Guelph have revealed a chilling truth: hate speech doesn’t just impact its victims. It changes the behavior and emotional patterns of the people who post it (Ghenai et al., 2025).
The study, published in Information Processing & Management, analyzed over 5.4 million tweets from 6,002 Twitter/X users, tracking emotional and linguistic changes for 120 days following hateful posts. The results were clear: users who posted hate content didn’t return to normal. They sank deeper into anger, anxiety, and moral outrage.
“We wanted to see whether hate online was just a momentary outburst or something more systematic,” said lead researcher Dr. Amira Ghenai of Toronto Metropolitan University. “What we found shocked us. Hate created behavioral momentum.”
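How do you “track emotional and linguistic changes” across millions of tweets? Broadly, by counting how often words from validated emotion lexicons appear in each user’s posts over time. The paper’s actual pipeline isn’t reproduced in this article; the sketch below is only a rough illustration of that kind of dictionary-based tracking, and the tiny word lists, the `tweets` structure, and the function names are hypothetical stand-ins rather than the researchers’ tools.

```python
# Illustrative sketch of lexicon-based emotion tracking (not the study's code).
# The word lists below are hypothetical stand-ins for a validated lexicon.
from datetime import datetime
from collections import defaultdict

ANGER_WORDS = {"hate", "furious", "rage", "angry"}
ANXIETY_WORDS = {"worried", "afraid", "nervous", "panic"}

def emotion_rate(text: str, lexicon: set) -> float:
    """Share of a tweet's tokens that fall into an emotion category."""
    tokens = text.lower().split()
    return sum(t in lexicon for t in tokens) / max(len(tokens), 1)

def weekly_anger_profile(tweets, anchor: datetime, days: int = 120):
    """Average anger rate per week for the `days` after an anchor post.

    `tweets` is assumed to be an iterable of (timestamp, text) pairs for one user.
    """
    buckets = defaultdict(list)
    for ts, text in tweets:
        offset = (ts - anchor).days
        if 0 <= offset < days:
            buckets[offset // 7].append(emotion_rate(text, ANGER_WORDS))
    return {week: sum(rates) / len(rates) for week, rates in sorted(buckets.items())}
```

Comparing profiles like these between users who posted hate and users who didn’t is, in spirit, how a finding such as “anger stayed elevated for weeks” becomes measurable.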
Anger Begets Anger: How Hate Traps Its Author
The researchers found that users who posted hateful tweets exhibited a 75% increase in anger, a 34% rise in negative emotions, and a persistent decline in positive emotional expression compared to users who didn’t post hate.
These changes weren’t just statistical noise. They were measurable shifts in emotional expression that echoed through subsequent tweets weeks later. Hate wasn’t just an emotion; it became a behavior pattern.
The Subtle Language of Dehumanization
One of the most fascinating revelations? Haters changed the way they spoke. They began using more third-person pronouns like “they” and “them,” signaling psychological detachment. The language shifted toward viewing others as outsiders or threats.
They also leaned more heavily on power-based, death-related, and risk-centric vocabulary, painting the world as a hostile battlefield.
This kind of language, the researchers argue, is how online hate sustains itself. It builds a narrative where the hater is a victim, and the out-group is a threat.
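Signals like these are also lexicon-based: count how often each pronoun category appears per tweet. Here is a minimal, hypothetical sketch of that idea; the word lists are stand-ins for a validated lexicon, not the categories the researchers used.

```python
# Sketch of a simple "detachment" signal from pronoun use (illustrative only).
THIRD_PERSON = {"they", "them", "their", "theirs", "themselves"}
FIRST_PERSON = {"i", "we", "me", "us", "my", "mine", "our", "ours"}

def third_person_share(text: str) -> float:
    """Fraction of personal pronouns in a tweet that are third person."""
    tokens = text.lower().split()
    third = sum(t in THIRD_PERSON for t in tokens)
    first = sum(t in FIRST_PERSON for t in tokens)
    total = third + first
    return third / total if total else 0.0
```

A rising share of “they/them” relative to “I/we” in a user’s tweets, tracked over the same 120-day window, would be one concrete way that shift toward detachment could show up in the data.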
Profanity and Outrage: The Addictive Combo
Immediately after posting hate, users showed a spike in profanity and moral outrage. These weren’t one-off rants: hateful posts fed echo chambers, attracting similar content and triggering more of the same.
Even as the profanity declined slightly over time, the outrage stayed elevated. Hate speech became a loop: post, react, escalate.
Why Haters Sound the Same: A Linguistic Phenomenon
Another curious finding? Hateful tweets looked more alike than ordinary ones.
Hate content formed tightly interconnected clusters of topics. This suggests that hate speech doesn’t emerge at random; it follows a script. The authors found that hateful content relied on a few recurring themes, making it more likely to be amplified by algorithmic feeds and echo chambers.
In contrast, neutral or positive tweets were more diverse, topic-rich, and emotionally varied.
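The researchers reached this conclusion through topic analysis. As a rough stand-in for the idea (not their method), one can score how repetitive a group of tweets is by averaging pairwise TF-IDF cosine similarity; everything in the sketch below, from the function name to the scoring choice, is an assumption for illustration.

```python
# Sketch: how "alike" is a set of tweets? Average pairwise TF-IDF cosine
# similarity is one rough, illustrative measure (not the study's method).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def average_pairwise_similarity(texts):
    """Mean cosine similarity over all distinct pairs of texts."""
    X = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(X)
    mask = ~np.eye(len(texts), dtype=bool)   # drop each text's similarity to itself
    return float(sims[mask].mean())
```

A higher score for the hateful set than for the neutral set would be the kind of evidence behind the “tightly interconnected clusters” described above.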
What This Means for the Public and Platforms
The implications are huge. For individuals, it’s a warning: venting online may reinforce the very emotions you’re trying to expel. Hate, once posted, becomes a behavioral habit.
For platforms like X (formerly Twitter), the study suggests that content moderation isn’t enough. Interventions must target the behavioral loop—interrupting not just what people post, but how that post shapes what they post next.
Final Thought: Hate Doesn’t Just Spread. It Sticks.
This research highlights an uncomfortable truth: posting hate rewires you. It builds a self-reinforcing identity where negativity becomes your new normal.
The next time you see an angry post going viral, or feel the urge to blast your frustration online, remember: hate isn’t just toxic to society. It’s a trap for your future self.