4 min read · culture · ai · attention

Brainrot in the Age of the Algorithm

Short loops, infinite feeds, and a generation outsourcing thought to a model. A note on what that's costing us — and how to claw it back.


The word "brainrot" started as a joke. It isn't one anymore.

A teenager today scrolls through more cuts of video in an afternoon than a cinema-goer in 1985 saw in a year. Each clip is six seconds. Each is engineered, by a model that has watched billions of others scroll, to be exactly as long as it takes to keep them from leaving. The product isn't the video. The product is the next swipe.

Layer an AI assistant on top of that and something quieter happens: the muscle that holds a thought for more than ninety seconds begins to atrophy.

Fig. 01 — Attention, dissolved into frames: a figure silhouetted against the cold light of a phone, dissolving into drifting video frames

A few rough orders of magnitude

Numbers like these don't survive the next quarter, but the shape of them does — and the shape is what matters:

  • Median clip length — ~6s. Short-form video, served by a recommender tuned for retention, not coherence.
  • Daily feed time — ~90m. Average for under-25s. That's roughly one feature film, every day, in fragments.
  • Swipes per minute — 40+. Peak velocity during evening sessions: a decision every 1.5 seconds about whether to keep paying attention.
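The arithmetic behind these figures is trivial, but worth making explicit. A back-of-the-envelope sketch using the rough numbers above — all of them approximations from this essay, none of them measurements:

```javascript
// Rough figures from the list above — illustrative, not measured.
const clipSeconds = 6;       // median clip length
const feedMinutes = 90;      // daily feed time for under-25s
const swipesPerMinute = 40;  // peak evening swipe velocity

// At peak, one keep-or-leave decision every 60 / 40 = 1.5 seconds.
const secondsPerDecision = 60 / swipesPerMinute;

// Upper bound on clips seen per day, if every clip ran its full length.
const clipsPerDay = (feedMinutes * 60) / clipSeconds;

console.log(secondsPerDecision); // 1.5
console.log(clipsPerDay);        // 900
```

Nine hundred clips a day is the ceiling; the real number is higher still, because most clips are abandoned before they finish.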

What's actually rotting

It isn't intelligence. This generation is sharp — sharper, in many ways, than mine. What's eroding is something more specific:

  • Tolerance for boredom. Boredom is the soil that ideas grow in. Feeds pave it over.
  • Working memory under load. Holding three things in your head while you reason about a fourth. Models do this for free now, so we practice it less.
  • The will to finish. Algorithms reward starting; nothing rewards finishing.
  • Unmediated taste. When recommendations decide what's good, "I like this" gets confused with "this was served to me a thousand times."

The danger isn't that AI thinks for us. It's that we forget what it felt like to think for ourselves.

The AI accelerant

Three feedback loops are tightening at once:

  1. Generation is free. Infinite content means feeds never run dry. Scarcity used to enforce attention; abundance dissolves it.
  2. Personalisation is surgical. A 2026 recommender doesn't show you what's popular — it shows you what you, specifically, can't look away from. That's a different thing, and a more dangerous one.
  3. Cognitive offload is frictionless. Asking a model is faster than remembering. Faster than understanding. Faster, sometimes, than caring.

Each loop is rational on its own. Stacked, they produce a generation that can answer anything and recall almost nothing.

Fig. 02 — Three loops, tightening: three concentric loops converging on a single accent point

What I tell the juniors on my team

Not advice — observations from people I've watched do well in the last year:

  • They read long things on purpose. A book a month. Not for productivity. For range.
  • They write before they prompt. The model is downstream of the thought, not upstream.
  • They keep one craft that has no AI in it. Cooking, climbing, an instrument. Somewhere the loop is closed and the feedback comes from physics, not a feed.
  • They use models as interlocutors, not oracles. Argue with them. Reject the first answer. Make them defend it.
// The brainrot pattern
const answer = await ai.ask(question)
ship(answer)
 
// The pattern that compounds
const draft = think(question)
const critique = await ai.criticize(draft)
const final = revise(draft, critique)

The first is faster. The second is the only one that makes you better.

A small bet

I don't think this generation is doomed. I think they'll be the ones to design their way out of it — because they'll be the first to feel, viscerally, what it costs.

The interfaces that matter in the next decade won't be the ones that hold attention longest. They'll be the ones that give it back. Quieter feeds. Fewer notifications. Tools that finish the conversation instead of stretching it.

That's a design problem. It's also, I suspect, the most important one we have.