There is a particular kind of unease that settles in when you realize a machine has figured you out before you figured out yourself. You scroll past a video, barely register it, then find yourself deep in a two-hour spiral on the same topic three days later, and you never told the platform you were interested. You never searched for it. You just paused, for maybe two seconds, and the algorithm took note.
You have probably asked yourself, ‘How is this even happening?’
Welcome to the attention economy, where the currency is not money but the wandering of your eye.
Recommendation algorithms are usually framed as a convenience story. They save you time by connecting you with music you’d never have found on your own. They surface the news story that happens to be exactly relevant to your situation. And sure, sometimes they do all of that. But a growing body of evidence from psychologists, media scholars, and former tech insiders suggests the convenience framing is incomplete. These systems are not just learning what you like. They are shaping who you become.
The Mirror That Moves
The thing about personalization is that it feels like a mirror but behaves more like a funnel. The algorithm reflects your interests back to you, but it is a selective reflection, one that gradually narrows the aperture of what you see. If you engage with fitness content, you will see more fitness content. But the algorithm also decides which fitness content: typically, the extreme content. The before-and-after photos. The transformation reels. The stuff that provokes a strong reaction, because strong reactions mean more time on the platform.
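To make the funnel concrete, here is a toy sketch of engagement-weighted ranking, the general pattern these systems follow. This is not any platform’s actual code; the signal names and weights are invented for illustration, but they capture the bias: content predicted to provoke a strong reaction outranks content predicted to merely satisfy.

```python
# Toy illustration of engagement-weighted ranking. A sketch of the general
# idea only: the signals and weights below are invented, not taken from any
# real recommender system.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_dwell_seconds: float     # how long the model expects you to linger
    predicted_strong_reactions: float  # shares, heated comments, rage-clicks
    predicted_mild_interest: float     # a quiet "that was fine" response

def engagement_score(c: Candidate) -> float:
    # Strong reactions and dwell time dominate; mild interest barely counts.
    return (2.0 * c.predicted_strong_reactions
            + 1.0 * c.predicted_dwell_seconds / 60
            + 0.1 * c.predicted_mild_interest)

feed = [
    Candidate("Balanced beginner workout plan", 45, 0.2, 0.9),
    Candidate("My 90-day SHOCKING transformation", 180, 0.8, 0.3),
]

# Ranking optimizes for time on platform, so the provocative reel wins.
for c in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(c):5.2f}  {c.title}")
```

Run it and the transformation reel wins, not because it is better, but because a scoring function like this cannot tell the difference between fascination and alarm.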
The philosopher Byung-Chul Han has written about what he calls the ‘transparency society’, a world in which everything is made visible, legible, and optimized, and in which this very transparency becomes a form of control. The algorithm is the ultimate transparency machine. It renders your behavior into data, processes that data into patterns, and returns those patterns to you as content that feels uncannily resonant. And because it feels resonant, you trust it. You feel seen. What you do not fully register is that you are being steered.
Identity in the Feed
This is where psychology enters the conversation. Psychologists have long understood that identity is not a fixed thing; it is a project, something we construct and reconstruct continuously through our interactions with the world. Erik Erikson wrote about identity formation as a lifelong process. Narrative psychologists like Dan McAdams have argued that we build our sense of self through the stories we tell about our experiences.
What happens when a significant portion of the stories you encounter is algorithmically curated? When the worldview you are exposed to is shaped, day by day, by a system that cares only about how long you linger?
In Philadelphia, as in cities everywhere, you can find communities built almost entirely around algorithmically surfaced content. Teenagers who discovered their political identities through YouTube rabbit holes. Adults who radicalized in every direction, left and right, through a steady drip of increasingly extreme takes, each served up because it got a little more engagement than the last. The algorithm did not make these people who they are. But it had a hand in the making.
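That drip is a feedback loop, and it is simple enough to simulate. The sketch below is a toy model, not a description of any real system: the probabilities are invented, but the mechanism is the one the paragraph above describes, where engaging with something slightly more extreme resets the baseline for what gets served next.

```python
# Toy simulation of the escalation feedback loop. Every number here is
# invented; the point is the dynamic, not the magnitudes.

import random

random.seed(0)

extremity = 0.1  # user's starting point on a 0-to-1 "intensity" scale
for step in range(1, 11):
    # The system probes slightly beyond the user's current baseline.
    served = min(1.0, extremity + 0.08)
    # Engagement is a noisy function of how provocative the item is.
    engaged = random.random() < 0.5 + 0.4 * served
    if engaged:
        # Engagement is read as preference: the baseline ratchets upward.
        extremity = served
    print(f"step {step:2d}: served={served:.2f} "
          f"engaged={engaged} baseline={extremity:.2f}")
```

Over ten steps the baseline only moves in one direction. Nothing in the loop ever asks whether the user wants to be there; it only asks whether they stayed.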
The Cultural Lag
Technology has always moved faster than culture’s ability to absorb it. The printing press took a century to fully reshape European society. Television reshaped family life and political communication over decades. Social media and personalization algorithms have upended how humans encounter information in less than twenty years, and we are still only beginning to understand the consequences.
What we know so far: filter bubbles are real, though more complex than initially theorized. Eli Pariser coined the term in 2011, warning that personalization would create invisible barriers between different communities and worldviews. Subsequent research complicated this picture. People do encounter cross-cutting content online, but the most recent studies suggest that among heavy social media users, particularly those who rely on algorithmic feeds as their primary news source, the echo chamber effect is pronounced and politically significant.
We also know that algorithmic recommendations are not culturally neutral. The systems are built by particular people, in particular places, with particular assumptions baked into their design. What counts as ‘engaging’ content reflects cultural biases. What gets amplified and what gets buried is not random; it reflects choices, even if those choices are encoded in math.
Reclaiming the Scroll
None of this is an argument for retreating from the internet. That ship has sailed. But it is an argument for a particular kind of cultural literacy, what some researchers are calling ‘algorithmic awareness’: the capacity to recognize, at least in part, how your information environment has been shaped, and to make deliberate choices about it.
This might mean choosing to follow people whose worldviews differ from yours, not because you agree with them, but because you value the friction. It might mean periodically stepping outside the platforms entirely and engaging with ideas the old-fashioned way: a book, a stranger, a newspaper you didn’t choose. It might mean asking, when you encounter something that resonates deeply, whether it resonates because it is true or because it is telling you what you already believe.
The algorithm knows a great deal about you. More, in some respects, than you know about yourself. But knowledge of your behavior is not the same as understanding of your humanity. That gap, between what can be measured and what matters, is exactly where culture, philosophy, and human agency still have something irreplaceable to offer. The machine can optimize the scroll. It cannot tell you who you want to be.