
AI models can be somewhat human-like.
A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.
“We live in an age where information grows faster than attention—and much of it is designed to capture clicks, not to convey truth or depth,” said Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the research as a graduate student at UT Austin. “We wondered: What happens when AIs are trained to do the same things?”
Hong and his colleagues fed two open-source large language models different kinds of text during pre-training. They examined what happened when the models were given a mix of highly “engaging,” or widely shared, social media posts and posts containing sensationalized or hyped text such as “wow,” “look,” or “only today.”
The researchers then used several benchmarks to measure the impact of this “junk” social media diet on the two models: Meta’s Llama and Alibaba’s Qwen.
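The setup described above amounts to sorting a social media corpus into “junk” and control text before continued pre-training. Below is a minimal, hypothetical sketch of what that sorting step could look like; the Post fields, the share threshold, and the bait-phrase list are illustrative assumptions drawn only from the article’s examples, not the researchers’ actual pipeline.

```python
# Hypothetical sketch: splitting a corpus of social media posts into "junk"
# and control sets, loosely modeled on the study's description. The marker
# words, engagement threshold, and Post fields are illustrative assumptions.
from dataclasses import dataclass

# Engagement-bait phrases the article cites as markers of "junk" text.
BAIT_MARKERS = ("wow", "look", "only today")

@dataclass
class Post:
    text: str
    shares: int  # proxy for how "engaging" / widely shared a post is

def is_junk(post: Post, share_threshold: int = 1000) -> bool:
    """Flag a post as junk if it is heavily shared or leans on bait phrases."""
    lowered = post.text.lower()
    baity = any(marker in lowered for marker in BAIT_MARKERS)
    return post.shares >= share_threshold or baity

def split_corpus(posts: list[Post]) -> tuple[list[str], list[str]]:
    """Return (junk_texts, control_texts) for two pre-training mixes."""
    junk = [p.text for p in posts if is_junk(p)]
    control = [p.text for p in posts if not is_junk(p)]
    return junk, control

if __name__ == "__main__":
    sample = [
        Post("WOW, you won't believe this trick - only today!", shares=50_000),
        Post("A short explainer on how transformers tokenize text.", shares=120),
    ]
    junk, control = split_corpus(sample)
    print(f"junk: {len(junk)}, control: {len(control)}")
```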
Models fed junk text experienced a form of AI brain rot: cognitive decline including reduced reasoning ability and degraded memory. The models also became less ethically aligned and more psychopathic, according to two measures.
The results mirror research on human subjects showing that low-quality online content has harmful effects on people’s cognitive abilities. The prevalence of the phenomenon led Oxford University Press to name “brain rot” its word of the year in 2024.
The findings are important for the AI industry, Hong said, because model builders might assume that social media posts are a good source of training data for their models. “Training on viral or attention-grabbing content can look like scaling up data,” he said. “But it can quietly erode reasoning, ethics, and long-context attention.”
Brain rot in LLMs seems particularly worrisome given that AI itself is increasingly generating social media content, much of it seemingly optimized for engagement. The researchers also found that models degraded by low-quality content could not easily be repaired through retraining.
The findings also suggest that AI systems built around social platforms, such as Grok, may suffer from quality-control problems if user-generated posts are used in training without regard for the integrity of those posts.
“As more AI-generated slop spreads across social media, it contaminates the data that future models will learn from,” Hong said. “Our findings show that once this kind of ‘brain rot’ sets in, subsequent clean training cannot fully reverse it.”
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.