Digital Tribulations 17: Coding Autonomy? Hackfeminism and Free Software in Contemporary Mexico

Interview with Sonia Irene Guzman.

The introduction of Digital Tribulations, a series of intellectual interviews on the developments of digital sovereignty in Latin America, can be read here.

I met Sonia Irene Guzman — a consultant, academic, and activist in the free software and culture movement and a Doctor of Feminist Studies at UAM-Xochimilco — at a public consultation organized by the Agencia Digital de Innovación Pública de la Ciudad de México (ADIP). I got there on an Ecobici, an excellent service given its affordable annual fee, on a Saturday morning: the sun was out and I arrived sweaty and covered in dust from the city’s perpetual, mad traffic.

I am fascinated by this newly formed agency that organizes events on digital sovereignty with a long-term perspective (20 years) and seeks to listen to its citizens; a place where speakers still address each other as compañero, something that in Italy is now only found in the circles of the few remaining socialists and communists. It seems to me they are on the right track, and I managed to interview its titular (director); that interview will be released later.

Irene was invited to speak at the event, which opened with a video celebrating Mexican culture. By praising the grand urban planning and the ancestors’ advanced techniques for filtering salt and fresh water, it made the current dependence on the garrafón (bottled water jug) feel like a degeneration. All the speakers were top-tier: the first spoke of technological infrastructure as if it were water, arguing that its protection should be treated like that of water wells—as something strategic—and the discussion lasted more than two hours.

Irene and I met a few days after the event in another neighborhood, sitting at a bar, to discuss the Mexican context, transfeminism, and free software.

***

What is your trajectory, and why are you interested in digital sovereignty?

I studied design at a public university in Mexico City, at UNAM. Then I did a Master’s in Visual Arts, where I developed research on free software and design. That was my first approach—the first time the light bulb went off for me about what digital freedoms and digital sovereignty meant. Then I began a teaching career, which I would say is the foundation of what I do. I’m a professor, and I really love it. I think that’s where I was next able to identify the needs of both the university and the students, and many of the difficulties of using software that isn’t ours or infrastructures that aren’t ours.

Then I pursued a PhD called Interdisciplinary Studies in Communication and Culture. There I began research on practices that I called “hacker practices.” I looked at how people involved in activism who used free software had particular technological practices—from opening the terminal to using different types of software. I was interested in how they solved their everyday problems or needs with a computer. These practices again brought me closer to ideas of sovereignty and being more autonomous.

Later I entered the PhD in Feminist Studies at UAM, another public university. My research topic was about women hackers; I explored what was happening with women who had these practices because I realized that most were men. I went out to find them, and my thesis was called En busca de las hackers (In Search of Women Hackers). That opened up another world, other ways of seeing that gender studies also permeate technology. I began to like and adopt the term feminism—it took me a while, but I came to identify as a feminist precisely to try to build this dialogue between technological freedoms, sovereignty, and the aims of feminism.

Clearly, this led me to speak about different topics, including technological sovereignty. To me, sovereignty is partly rooted in free software. I always say I entered this social dimension through a computer, through my practices and my use of a machine. These topics have led me to give talks, lectures, and workshops. Someone once said I was a reference point in hackfeminism, and I think that’s where my interest in sovereignty comes from.

What do you think about the concept of digital sovereignty? Do you think it’s a good concept?

That’s a great question because, you see, when we talk about this—not just me, but together with other colleagues—even if we don’t necessarily define the word sovereignty, I feel it’s very tied to state-centered ideas, or to giving control and power to the state. It seems that today we only have two options: that technology and our digital lives are in the hands of corporations, or in the hands of the state. Honestly, I don’t really like either option.

I know digital sovereignty can be useful, but I prefer to think in terms of autonomy or something more collective and collaborative, as utopian as that may sound. I feel closer to certain anarchist principles. The problem with anarchy—or with things being neither in the hands of the state nor corporations—is that it requires taking on a lot of responsibility. Anarchy doesn’t mean “doing nothing”; there is a kind of freedom that comes from personal commitment and participation. It’s hard to explain. That’s why I like free software: I feel it brings us closer to a more autonomous practice, or a closer relationship with our technological devices, so that this becomes a principle or a way of thinking about technology from another place. The term itself makes me uneasy insofar as it pushes us toward the state.

What did you find in your research about the gender perspective within the world of technology in Mexico?

Several things. Since I come from the free software community, I understood technology from that perspective. Gender and feminist frameworks came later for me. It felt natural that most participants were men; even the people who taught me things or installed Linux for me were men. When I encountered this other perspective on what happens with women, the first thing I found was a cultural questioning. There’s always this idea that women aren’t in technology—and in free software even less so—that there are no good Mexican women hackers.

For example, I was introduced as a “user.” When I went to talks, I was always the only woman—like at the event where we met—where everyone else was men and they invited “the woman.” Everyone else was introduced with their CV, and for me it was: “Oh, Irene, who is a user.” I felt like a little monkey there who could do a few tricks—with apologies to monkeys. That’s when I began to notice these signs that there really was a difference in “being a woman.” Free software and this whole environment were very comfortable for men, or offered them many advantages and privileges. The fact that a man could be coding late or solving a technical problem often meant that someone else had washed his clothes, fed him, or taken care of the children. There is always someone, almost always a woman, who takes care of things.

So I said: of course, it’s not that women don’t like technology; it’s that there are many factors that prevent them from moving forward, what we call the “sticky floor” (piso pegajoso). Another thing I found was that I believed code was the most important thing for all programmers. But the women who coded would tell me: “Well, I do it because that’s how it is.” There wasn’t a magical aura around code for them, nor did they idealize it; they were interested in what could be done with it. It didn’t matter as much that it was perfect or that it compiled “en chinga” (super fast), but rather what you do with it. It reminded me of when we tell women they cook really well and they say: “Well, it’s just something I have to do; it’s something that’s always been there.”

I also found it interesting that many women specialized in computing came from certain privileges: white women, European or North American, or in Mexico, women who studied at private universities. Other women without those characteristics had learned through bootcamps or programming spaces and worked in companies doing web development—the famous front-end or back-end. I was interested in what was happening with them. And I also saw that the feminist figure we imagine, the cinematic hacker who knows martial arts and manages to outsmart the villain, doesn’t really exist.

I noticed a disconnect: women with deep technical skills are in places like Google or Facebook, where the pay is good. Meanwhile, women involved in the social side, interested in feminism and technological freedom, sometimes don’t have as many technical skills. I also noticed that some women were what my advisor Giomar Rovira called “free radicals” (radicales libres): isolated women who weren’t part of feminist or activist communities. They know how to do things but are completely isolated because there isn’t that connection with communities interested in digital freedoms. We still have a long way to go toward sovereignty. If it’s a collective issue, then the women who didn’t have technical skills started learning them collectively, out of a need to teach each other. One of my interviewees spoke about “the club of failure” (el club del fracaso), meaning allowing yourself to fail in coding, for things not to come out cleanly, and for that to be okay. That’s a dissonance I found often.

How has the discourse on digital sovereignty changed or developed in Mexico in recent years?

As you know, Mexico underwent a radical political shift a few years ago, toward a left-wing government. The truth is that many of us in activist movements—I was the director of Creative Commons Mexico—had a lot of hope that technological issues would gain ground, but that didn’t really happen. Very little, in fact. My perception is that the community working on digital rights also had its own interests: civil society organizations with a lot of funding from Google and similar companies. At some point they began to criticize the government, and the government responded by isolating them, saying: “Oh, they’re privileged kids (niños bien), we won’t listen to them because they’re against us.”

Some things within the government were framed in terms of sovereignty, but not entirely. There was talk of creating our own Mexican social network, but it was done in a very isolated way. The Agencia Digital de Innovación Pública (ADIP) has an important interest, but it didn’t engage with different communities. Also, the Secretaría de Ciencia, Humanidades, Tecnología e Innovación (Secihti) developed the Chapultepec Principles around AI ethics, stating that AI should not be used to the detriment of human rights. It’s good that a government states that, even if some aspects are a bit idealistic.

But in the end, I feel there isn’t a real “match” between the government and those of us who have been working on these issues. There’s always a lag: they are talking about things we’ve already been discussing, but without collaboration. I would even say that in earlier governments, like Vicente Fox’s, there was more use of autonomous servers with free software or open practices. Now I feel there isn’t, partly because it’s seen as something “very American” or colonial. There’s a rejection of it. Free software sounds too much like Global North, and it hasn’t been fully appropriated because it requires a lot of work, and maybe there’s no one to do it. It feels disconnected, even if at least the topic is on the table.

Are there specific aspects in Mexico where the political economy influences digital sovereignty? And how can long-term continuity—like the 20-year plan mentioned at the event—be achieved?

That caught my attention too—I hadn’t even realized it said 20 years. I was like: “¡no manches!” (no way!, very Mexican/CDMX slang). One of the things that happens in this country is that every six-year term everything changes, even if it’s the same political side; new people come in and everything that was done gets thrown away. This is an attempt to prevent that. The problem is how to translate that idea into concrete actions. How do we agree on how to achieve sovereignty? I haven’t seen an open discussion.

There are efforts at UNAM, but I haven’t seen the government take a clear and explicit stance on citizens’ data with these companies. Instead, agreements are made with Google. We have many problems, and technology is not a central axis in Mexico. It’s still framed as “innovation,” a word that bothers me as much as “entrepreneurship.” I feel the government’s position is still to create space for companies to operate. This just happened in Querétaro with data centers; there are already groups discussing how this will affect water resources. Paola Ricaurte is researching the ecological damage, but these are not widely discussed issues. The government doesn’t really address them either.

That’s why I find this Mexico City plan interesting—it’s local, not federal. At the federal level, there have been discussions about AI with senators, but they haven’t reached the level of concrete issues: what happens with companies and data? It’s not strong enough to say, “Google, don’t take my data.” 

In Latin America, unlike Europe’s regulatory approach, there are examples like Brazil’s PIX system as public digital infrastructure. Do you see dialogue or collaboration between Mexico and other states?

My impression is that there isn’t much—there are isolated efforts. Those of us who’ve worked on these issues know that Brazil is a reference point, not just in technological policies but also culturally. I’ve seen forums where people involved in Brazil’s cultural policies under Lula are invited, but at the state level I don’t see a clear technological narrative. It doesn’t seem like a priority; the scale of the issue isn’t fully understood. There are many other pressing problems in Mexico, and if this need isn’t understood within the government, then alliances won’t form. Alliances happen around other “Latin America united” themes, but not around technology. That’s my impression, though I could be wrong.

Looking ahead, what measures can be taken to improve the situation and women’s inclusion?

Without putting everything on the individual level, I sometimes feel like I’m not doing enough. I’ve done activism with women and free software, but there’s a lot of loneliness in this work. I think we need to rebuild something more collective and cooperative. The problem is that when we try to build communities or collectives, we end up fighting. It’s something we’ve discussed a lot: the political includes personal issues.

For example, I ended up very estranged from people at Wikimedia Mexico because they were very territorial and quite rude to me, and I know they’ve been the same with others who don’t fit their ideology. I don’t know if it’s something Latin American or specifically Mexican, but we struggle with cohesion and accepting differences, and movements end up becoming what they once opposed. This also happens within feminism. We need to learn to live with differences and resolve conflicts—or do what capitalists do: it doesn’t matter if you’re Jewish or Christian; the ultimate goal is capital, money. That doesn’t happen in the Latin American left.

I think we need more cohesion between civil society, academia, and government. And in the case of women, we need more spaces where we’re allowed to make mistakes. I still see a lot of isolation among women in engineering; they don’t reach their full potential because they stay quiet due to lack of confidence. In Mexico, we have this joke: when there’s a tech job posting, men meet one requirement and apply, while women meet all of them but if they lack one small thing, they say “I don’t know this” and don’t apply.

There’s a mindset we need to challenge: that your voice matters, that you can make jokes, that you can dress however you want. This is something individual, but tied to a Latin American machismo that needs to be addressed differently than in Europe or the United States. The Latin American patriarchy is not the same; men here have also been oppressed, and you can’t treat them the same way as in the Global North.

I find it very interesting what Indigenous communities say: many women don’t identify as feminists, but they do embrace a struggle alongside their campesino partner because he is also oppressed. It’s a position opposed to white feminism; it’s more intersectional. Many of these poor men are also programmers, struggling against a system that demands excellence while they have to hold multiple jobs.

Part of technological sovereignty also has to do with technical deficiencies. Engineering students are taught to work for the market, to use Windows solutions or Microsoft agreements. In Mexico, the Ministry of Finance made an agreement with Microsoft, and it was terrible: Azure handled our invoices and strange things happened—like receiving someone else’s invoice. Students aren’t taught deep technical skills in coding, but rather how to use tools—like learning how a car works instead of changing the engine.

There are many moving parts, but I think there needs to be a government policy that drives change from education, civil society, Mexican companies, and above all, a cultural and ideological stance to understand the problem posed by these corporations.

Performing Belief, Making Meaning (the DARK TRUTH behind Italian Brainrot Lore)

The Westerplatte memorial in Gdańsk on Poland’s Baltic coast marks the site where WWII began with a naval assault in September 1939. The day I visited in September 2025 happened to be the same day that Russian drones were shot down over Poland, marking Moscow’s first overt incursion into NATO territory since its full-scale invasion of Ukraine. Alongside all the usual tourist amenities – toilets, snacks and souvenirs – I found every kiosk lining the memorial ground stocked up on plush toys with some unexpected faces: Tralalero Tralala, Ballerina Cappucina and, of course, the unmistakable oblong shape and leering eyes of Tung Tung Tung Sahur.

This was Italian brainrot, a pantheon of AI generated characters who appeared on social media in early 2025 before seemingly taking over the world for the next few months. A shark wearing three blue Nike trainers, a dancer with a coffee cup for a head, and a humanoid stick carrying a baseball bat emerged as a few of the favourites. Each character had its own pseudo-Italian name, musically pronounced in each video by the same AI text-to-speech voice tool, ‘Adam’.

Seeing these terminally online memes turn up in polyester, here, of all places, gave me an unsettled feeling. Italian brainrot for sale at a Polish memorial to victims of the German Third Reich. On the day that the post-war military order frayed at its edges. It felt like irreconcilable worlds coming together, digital native nonsense spilling into real life, flooding a site of meaningful remembrance at a moment of renewed significance. It was surreal, a waking dream. This is a feeling that novelist Thomas Pynchon named ‘virtuality creep’ in 2013’s Bleeding Edge: when the digital realm overflows into the ‘perilous gulf between screen and face’.

Monument to the Defenders of the Coast, Westerplatte, Gdańsk

Italian brainrot for sale, Westerplatte, Gdańsk

Most internet memes are images wrenched from their original context (TV shows, cultural and political events etc.) then endlessly remixed and repurposed, their meaning flattened in the infinity of the web. With Italian brainrot, the reverse was true: these avatars emerged from the digital ether with absurdity pre-loaded. They never meant anything. Early clips featured Italian ‘lyrics’ for each character which added little context. Some were cryptic origin stories while others were just expletive-filled and crassly offensive rants. The clips featured a patchwork of recognisable features which invited interpretation – animals, objects, environments, a vague notion of Italianness itself – but together amounted to nonsense.

Upon going viral, the characters were quickly plucked from their social media habitat and put to work in our world: in cartoons, songs, video games, classrooms and the cheap tat for sale at every stall in Westerplatte. After these bizarre chimeras had captured people’s attention, wily content creators and the machinery of consumerism each seized the opportunity of new IP with untapped potential, in pursuit of clout or commerce. The characters were simple enough to entertain children, weird enough to mystify parents and eye-catching enough to entice the algorithm.

“And the Oscar goes to… Brr Brr Patapim”

Brainrot content is so exaggeratedly fake, the funniest thing to do quickly became obvious: act like it’s real. The initial emergence of Italian brainrot on social media was followed by a wave of painstakingly detailed, completely contradictory, mostly hilarious ‘lore’ videos. Straight-faced explainers recounted character backstories which sometimes sounded like twisty soap opera storylines, like Ballerina Cappucina’s husband Cappuccino Assassino being seduced into an affair by love rival Espressona Signora. Other characters were shrouded in esoteric myth, like Lirili Larila, the most Daliesque of the lot, who has an elephant head with a cactus body and one leather sandal. In most accounts, Lirili Larila is a kind of lonely god, wandering the desert and cursed with the ability to manipulate time itself.

Some lore expanded details from the original Italian lyrics, while others started from scratch. These AI avatars were not-quite-blank canvasses, an evocative visual language ready to be loaded with significance, from the sublime to the ridiculous. Mostly the ridiculous.

My feed was overtaken with videos pitting the characters in battles against one another, alongside deepfakes of King Charles naming his champion as ‘knight of the realm’ or AOC decrying the power of ‘big corporate sponsored fighters’. This content increasingly bled into the real world. I watched Cillian Murphy present an Academy Award to a brainrot character. Creators published live-action videos in the style of a nature documentary or a found footage horror, with titles like ‘IF YOU SEE LIRILI LARILA WHILE DRIVING, DO NOT APPROACH… 😱’ and ‘DRONE CATCHES TUNG TUNG TUNG SAHUR ACROSS MY CITY!! (SCARY)’. The trend settled on treating Italian brainrot characters as elusive but real beings.

One of the most popular formats was street interviews, usually involving a young person showing older strangers pictures of brainrot characters and asking if they know who it is: “Ahhh yes, that’s Crocadilo Bombardiro” they reply, deadpan. Instead of plugging our own likeness into AI, to make lame Studio Ghibli imitations or action figures, these prankish videos dump AI images into the physical world. The humour is in the public recognition, brainrot characters unexpectedly recalled like an old friend from school.

“Everyone knows that!”

This same logic, of the virtual absurd intruding into mundane reality, gave life to the popular ‘John Pork is calling’ trend, where creators share videos of people receiving an unexpected phone call from an anthropomorphic human-pig influencer, his uncanny grinning face flashing up on screen next to the ‘accept’ or ‘decline’ buttons. It’s a goofy ahh Gen-Z reimagining of the ‘red pill, blue pill’ conundrum from The Matrix: will they reject the call and end the story, or pick up and see how deep the brainrot rabbit hole goes.

Death of the author – who owns AI folklore?

AI generated characters, like the Italian brainrot crew and the cast of the John Pork lore, make possible new kinds of storytelling. These figures seemed to appear out of nowhere, largely without claims of ownership and easily reproducible with AI tools, which invited creators to run with them. The John Pork lore evolved into a kind of mass-participation true crime investigation to unravel John’s supposed death. Countless videos on TikTok or YouTube purported to reveal the ‘complete lore’, the ‘real story’ or – my personal favourite – the ‘dark truth’ of John’s disappearance. Often cloaked in a conspiratorial film noir style, this is a search for meaning at its core: ‘the story you think you know, it’s just the tip of the iceberg’.

This anarchic din of narratives crucially existed on a level playing field; no-one had a monopoly on a definitive version of events. Brainrot lore oozed across social media as a rhizomatic sprawl, without any kind of central authority. Attempts to file for copyright and trademarks over Italian brainrot characters have largely been frustrated by legal gray zones around questions like whether an AI prompt constitutes original authorship, and whether AI generated works are entitled to copyright protection. In March 2026, the US Supreme Court delivered a blow to brainrot privatization by declining to reconsider a district court ruling that human authorship is the ‘bedrock requirement of copyright protection’.

The current wild west of AI generated content cuts against an age of aggressive corporate takeovers and nostalgia-obsessed cultural production, when studios covet intellectual property and draw up convoluted loan agreements over who can depict which superheroes and for how long.

The flourishing of brainrot lore echoes a much older kind of entertainment: cultural folklore forged from tall tales, yarns, gossip and rumours, reshaped by anyone with the imagination to tell it. User-generated brainrot lore handles AI with mischievous glee, chaotic creativity and a hunger for shared meaning-making. It’s a messy democratisation of storytelling. These jumbled narratives highlight the intertextuality of meme culture; by refuting or retconning details of the lore, each intervention widens the story further while undermining the possibility of a singular, concrete truth.

‘It wasn’t a conscious pushback against platforms or their algorithms’ argues cultural theorist Daniele Zinni in a recent brainrot retrospective for the Institute of Network Cultures, ‘it was more like a revival of the internet’s anarchic prankster spirit, which periodically reemerges’: the fun came from ‘collectively giving attention, for as long as possible, to something so trivial’. The underlying principle is total commitment to the bit, an unwinking verisimilitude no matter how silly things get. It’s no laughing matter when Bob Bacon and Marvin Beak are so close to avenging John Pork’s death at the hands of Tim Cheese. Or perhaps you side with the contingent of sleuths who insist that Pork is not dead at all, and that Cheese is an innocent AI-generated anthropomorphic mouse. Deadly serious stuff.

@money.universityy, Instagram

Unreality bites

The impulse to treat brainrot with total sincerity can be read as a playful outlet for a deeper feeling which is creeping into the rest of our lives, particularly for young people. This is a sense of unreality, a nagging sensation that everything is fake now. In so many ways, it’s getting harder to separate reality from fiction, or maybe it’s just pointless. We increasingly inhabit a kind of third-person POV, consciously curating our life as we do our feeds. ‘Authenticity’ is nothing more than a marketing strategy. The rise of unscripted ‘IRL streams’ renders public space a content creation studio. We talk incessantly of ‘performative’ behaviour, NPCs, ‘main character syndrome’, ‘doing it for the plot’. The real has never been so entangled with the virtual, experience never so inseparable from representation.

Marxist theorist Guy Debord sensed early on how an image-saturated culture would shape the way we experience the world and relate to each other. ‘When the real world is transformed into mere images’, he wrote in 1967, ‘mere images become real beings’. From social media and fitness trackers to smart doorbells and live-feed CCTV screens in supermarkets, we are trained with increasing intensity to grasp life through a dialectic of watching and being watched, always playing to an audience both real and imagined. As Britney Spears said: ‘there’s only two types of people in the world, the ones that entertain and the ones that observe’. In the endless refraction of surveillance capitalism, these types have been folded into one.

British speculative fiction author J.G. Ballard reflected in 1990 that a ‘huge volume of sensational and often toxic imagery inundates our minds, much of it fictional in content’, asking ‘how do we make sense of this ceaseless flow of advertising and publicity, news and entertainment, where presidential campaigns and moon voyages are presented in terms indistinguishable from the launch of a new candy bar or deodorant?’. Ballard was of course talking about television and film, but decades later the same question lingers over TikTok, Instagram and our short form video addiction. ‘What actually happens on the level of our unconscious minds when, within minutes on the same TV screen, a prime minister is assassinated, an actress makes love, an injured child is carried from a car crash?’. We’re still figuring it out. At this moment, our inner space is being rewired by a rapacious media matrix, while our outer world – the ‘real world’ – is being revealed as a collective delusion.

@yungstarbeam, Instagram

Nothing is true and everything is possible

From the pandemic’s global shutdown to the carnage in Gaza and Ukraine, we are living through a dizzying collapse of the old certainties – ideas of security, order, truth and justice – which anchored our sense of reality. The neoliberal exaltation of the market and the fanaticism of the populist right betray a politics of zealotry. Our ethical sense is overwhelmed as we consume ethnic cleansing as online content and the perpetrators claim victimhood. The Epstein files disclose a cosy conspiracy of transnational elites united by depravity and impunity. Each passing day pulls back the curtain on the world of rules and rationality that we were taught to believe in. It’s a reverse Wizard of Oz: instead of the mundane masquerading as magic, we are gaslit by absurdity cloaked as sober reality.

And yet nothing really changes. We know the world is burning and the system is broken, yet daily life continues as normal. We go to work and the gym and the shops. This is where the dissonance creeps in. As critical theorist and content creator Louisa Munch recently put it: ‘every day we are performing belief in a system no-one believes in’. This is where I find a subversive streak in brainrot lore: this content is also a ‘performance of belief’ but a conscious one. In its ridiculousness – performing belief in something patently unbelievable – it calls into question the other ways we perform, and suspend, our belief. In a time of mass-cynicism, credulity can be wielded as a scalpel… or a baseball bat.

Disneyland, provocatively claimed French philosopher Jean Baudrillard in his seminal work Simulacra and Simulation, is ‘presented as imaginary in order to make us believe that the rest is real’. It is an exaggerated fantasy which serves to reinforce our belief in the rest of our everyday reality, which in fact now exists only within a procession of images and illusions: ‘the hyperreal order and the order of simulation’. As a performative act, brainrot lore flips this on its head. Tung Tung Tung Sahur presented as real reminds us that the rest is imaginary.

‘When the world becomes unintelligible, humour grows teeth’ says researcher and UX designer Moreno Nourizadeh, ‘the surreal is always a revelation of the real’. Look back and we see that brainrot (and its discontents) is nothing new. Every generation, writes Nourizadeh, succumbs to an ‘epochal narcissism’; this is the conviction that ‘its particular madness is unprecedented, that its stupidity signals unique decline’. When Lewis Carroll’s nonsense literature poked fun at rigid Victorian hierarchies and the logic of language, literary magazine The Athenaeum wondered if Carroll had ‘merely been inspired to reduce to idiocy as many readers as possible’. The anti-rational Dada art movement grappled with the civilizational impact of WWI’s industrialized slaughter, and was met with disdain and disgust.

Brainrot as a cultural form, and the set of lore practices which emerged around it, is a product of the AI revolution, a barely processed pandemic and the collapse of the post-1945 world order. It is a contradiction: lore is about shared meaning-making and assembling pieces into recognisable narrative shapes; brainrot is about revelling in nonsense. It pokes fun at our current epistemic crisis, in which we are losing our grip on reality itself. Chat, is this real? Is this Large Language Model my friend?

Hannah Höch, Man and Machine, 1921

Lirili Larila, 2025

Italian brainrot presents the AI-generated image as something excessive, nonsensical and deranged, at the very moment that Silicon Valley companies are spending billions to naturalize AI and weave it imperceptibly into our daily routines. On every app we are now accosted by AI tools that no-one asked for and no-one can remove. Brainrot introduces a glitch in the AI matrix, a violent jolt in the frictionless world. Remember, the half-frog, half-tyre character Boneca Ambalabu is just as real as the photorealistic AI ‘slop’ flooding social media, which ranges from impossibly curvy OF girls to ‘footage’ of racialized ragebait or sympathy-fishing images of crying children and puppies rescued from floods. You’re not meant to think it’s real, just feel that it could be.

In a taxonomy of AI content, the intentional absurdity of brainrot is its defining feature. It invites interpretation then laughs at our efforts. Brainrot content may be AI generated, Daniele Zinni argues, yet it is ‘anything but statistically probable’ and ‘strange enough to surprise rather than trigger a predictable reaction’. It’s hard to feel that Tralalero Tralala is real, hence the satirical mileage in pretending that he is. Slop is more insidious. It wants something from us. Emotionally coercive slop images leverage our attention for further motives, trading in our prejudices, allegiances and desires. It is the tool of the trade, writes Gunseli Yalcinkaya, for ‘grifters trying to make a quick buck’ and ‘politicians wanting to overwhelm the system with AI cringe edits of themselves as Star Wars characters’. Slop is the medium of narcissistic wish fulfilment, so it makes sense the current White House cannot resist its lure.

Named Merriam-Webster’s 2025 word of the year, brainrot has prompted much discussion, both as form and function, and is often used interchangeably with slop in public discourse. This mind-numbing material will supposedly destroy young people’s critical thinking skills, turning them into babbling zombies. What if this bleak prophecy were true, but not for us? A slop-heavy digital landscape of fake news, goonbait and synthetic influencers might contain a fatal trap for the AI revolution itself.

The shift to an AI-mediated internet, Kate Crawford argues in e-flux, is shutting out human content creators, who ‘face dwindling readership, reduced engagement, and disappearing ad revenue’. Yet multiple studies have shown that AI systems ‘degenerate when they are fed on too much of their own outputs’, a phenomenon researchers call MAD (Model Autophagy Disorder). In other words, Crawford writes, ‘AI will eat itself, then gradually collapse into nonsense and noise’. Enshittification tolls for thee. In this sense we can see Italian brainrot as an accelerationist provocation: its absurdity simply pushes the current AI regime to its logical conclusion. It’s the demented future waiting in the code. ChatGPT looks in the mirror, nervously, and sees Tung Tung Tung Sahur glaring back.
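The dynamic behind MAD can be caricatured in a few lines of code. The sketch below is a toy illustration, not the methodology of the studies Crawford cites: the ‘model’ here is nothing more than a Gaussian fit, retrained each generation on its own outputs with a mild preference for its most typical samples, and its diversity steadily collapses.

```python
import random
import statistics

def retrain(data, n_samples=2000):
    # Fit a toy "generative model" (a Gaussian) to the data, then generate a
    # new dataset from it, keeping only the most typical outputs -- a crude
    # stand-in for generative systems favouring high-probability completions.
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
    return [x for x in samples if abs(x - mu) <= sigma]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # generation 0: "human" data
spreads = [statistics.stdev(data)]
for _ in range(10):  # each generation is trained only on the previous one's output
    data = retrain(data)
    spreads.append(statistics.stdev(data))

print(f"diversity at generation 0: {spreads[0]:.3f}, at generation 10: {spreads[-1]:.4f}")
```

Real model-collapse experiments are far subtler, but the mechanism rhymes: each generation inherits a narrower slice of the original distribution, until nothing is left but its own average.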

Triple T

Seeing was believing

I would argue that it’s not the absurd material produced by AI that we should find repulsive, but anything that we might take, at first glance, as real. Just a few years after we laughed at the nightmarish sketches produced by early image generator DALL-E, we now have tools on our phones, like Google’s Nano Banana Pro, which can conjure entirely convincing synthetic people with the right number of fingers. This effectively means that photography as a means of gathering evidence and documenting real events is finished. Instead we have an exhausting cynicism in which every image must now be interrogated with scepticism. Footage which once might have stirred public outrage – of corruption or war crimes – loses its moral authority, dismissed with the wave of a politician’s hand.

AI Will Smith eating spaghetti, @aiemerges, Instagram

It’s not just that AI undermines the veracity of images; it’s the troubling new forms of simulation it makes possible. Genealogy site MyHeritage offers (for a monthly fee) to ‘bring dead ancestors back to life’ by using deepfake technology to animate their faces in photographs. The company introduced the ability for these images to speak just a year after assuring users it wouldn’t. Grief Tech is an unsurprisingly fast-growing market, because who wouldn’t want to reach out to a lost loved one? Researchers at the University of Cambridge have called for greater guardrails, warning that these deadbots may create unhealthy emotional attachments, stifling the mourning process and making the bereaved prone to manipulation. Imagine hearing your dead grandparent tell you that your monthly subscription price is going up. It’s multiple Black Mirror stories at once.

In a sense, AI image generation is the terminal endpoint of the centuries-long Enlightenment project of illumination: the idea that everything in darkness should be dragged into the light, the invisible made visible, scientific reason as both means and end. The principle of empirical observation and transparency underpins everything from democracy and secularism to mass media and surveillance. Now AI image generation gifts us mortals the ‘God’s eye view’, all-seeing and therefore all-knowing. We can make whole worlds in 7 seconds. ‘There are eyes everywhere. No dark spots left’, cultural theorist Paul Virilio once remarked: ‘what will we dream of when everything is visible? We’ll dream of being blind’.

The ability to speculate, to anticipate, to fill in the blanks defines our human relation to futurity, to joy and to beauty. ‘Heard melodies are sweet, but those unheard are sweeter’ wrote poet John Keats. ‘Where do you see yourself in five years?’ we ask ourselves. Our mind’s eye is a hazy, flickering image we must actively grasp for but can never hold. Until now. Our desires wax and wane like the moon; AI crystallizes them into solidity. Technology is marching on one of the last dark enclaves, our imagination, to steamroll it into a flat commodity. Brainrot is dreaming’s revenge.

George Harry James is a London-based writer and cultural critic whose work explores digital aesthetics, online subcultures and what it means to live in the world today. He has a Master’s in English Literature from the University of Sheffield. You can reach him at georgeharryjames1@gmail.com

References

– The Athenaeum Journal of Literature, Science, the Fine Arts, Music and the Drama, January to June, 1876.
– J.G. Ballard, The Atrocity Exhibition, London: Jonathan Cape, 1970.
– Jean Baudrillard, Simulacra and Simulation, trans. Sheila Faria Glaser, Ann Arbor: University of Michigan Press, 1994.
– Francesco Barchiesi & Geert Lovink, ‘The Story of Italian Brainrot – Collective Musings on a Meme Wave’, Institute of Network Cultures, 30 September 2025.
– Kate Crawford, ‘Eating the Future: The Metabolic Logic of AI Slop’, e-flux, September 2025.
– Guy Debord, Society of the Spectacle trans. Ken Knabb, Berkeley: Bureau of Public Secrets, 2014.
– Tomasz Hollanek, Katarzyna Nowaczyk-Basińska, ‘Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry’, Philosophy & Technology 37, May 2024.
– John Keats, ‘Ode on a Grecian Urn’, The Poetry Foundation, www.poetryfoundation.org/poems/44477/ode-on-a-grecian-urn.
– Jacob W. S. Knight, ‘The Final Word? Supreme Court Refuses to Hear Case on AI Authorship and Inventorship’, Holland & Knight, 3 March 2026.
– Louisa Munch, @louisamunchtheory, Neoliberalism is over pass it on, Instagram, 2 March 2026.
– Moreno Nourizadeh, The Phonemic Flesh: A Phenomenological Analysis of Italian Brainrot, Zenodo, 26 February 2026.
– MyHeritage, ‘Deep Nostalgia™ – deep learning technology to animate the faces in still family photos’, MyHeritage.
– Thomas Pynchon, Bleeding Edge, London: Jonathan Cape, 2013.
– Britney Spears, ‘Circus’, Circus, Sony Music Entertainment, 2008.
– Louise Wilson, ‘View of Cyberwar, God And Television: Interview with Paul Virilio’, CTheory, October 1992.
– Gunseli Yalcinkaya, ‘Digital Dada or Futurist slop? An investigation into brainrot as art’, Plaster Magazine, 11 June 2025.

Leaving TikTok: Body Interface

No care. No self.
Just optimisation and the endless scroll.

Scroll,
    #cleanlook.
Scroll,
    #fitness.
Scroll,
    #GRWM.
Scroll,
    #OOTD.
Scroll,
    skinny.
Scroll,
    intermittent fasting.
Scroll,
    high-protein.
Scroll,
    perfect skin.
Scroll,
    cottage cheese.
Scroll,
    glow-up.
Scroll,
    protein powder.
Scroll,
    detox.
Scroll,
    #WhatIEatInADay.
Scroll,
    #food.
Scroll,
    #pilatestok.
Scroll,
    collagen.
Scroll,
    cortisol.
Scroll,
    calorie deficit.
Scroll,
    probiotics.
Scroll,
    affirmations.
Scroll,
    weighted blanket (DPS).

    const infinite = () => {
        window.scrollBy(0, 1);
        requestAnimationFrame(infinite);
    };
    // while (true)
    infinite();

I first downloaded TikTok in 2022.
My Saturn return had just started.

I entered the algorithm at my most disoriented, in the aftermath of multiple breakups.

Under construction.

I sleeked my eyebrows, got lip filler, started fitboxing.
Coffee first thing in the morning, on an empty stomach.
A calorie-counting app.
Creams and products to feel glowy.
Laser hair removal.
Then,
matcha because coffee spikes cortisol,
meaning “you gain weight”.
No caffeine.
Chia and green juice for constipation.

Becoming desirable to the gaze of others.

Users watch themselves being looked at.
Mar Vallverdú calls it the Follower Gaze.
Life is measured against the possibility of likes and followers.

When online,
warm.

The algorithms are trained on our sensitivity.

Ourselves as an extension of the platforms we say we want to abandon.
Is resistance possible inside systems optimised for ease?

Take something.
Turn it into a system.
Optimise it.

When systems feel unfixable:
your skin.
your sleep.
your gut.

I kept following the algorithm,
like a higher voice.

I lost myself.
I forgot who I was.
I swiped toward what was supposed to be ideal,
what was supposed to be desirable.

Running, yoga, low calories.
No sugar.

But,
instead of losing weight:
my body changed completely,
my clothes no longer fitted.

Obsession produces the opposite effect.

My body was trying to say something,
but I wasn’t listening.
I wasn’t paying attention.
I was just scrolling.
NPC.
Basic.
Mid.
Hiding.

The performance consumed me.

Anxiety.
Disconnection.
Constipation.

The phone so close it suffocates the gaze.
Distance, unavailable.
No perspective.
No breath.

Body image is shaped by what we see online.
An evaluative function through which we compare our perceived self to others’ projections.
Algorithmic feeds restructure how we perceive ourselves.
If you think about calories, you don’t think about anything else.
A form of control when everything else feels uncontrollable.

Insecurity.

A pathological obsession with “proper” nutrition.
Hyper-awareness.
The logic of neoliberal self-improvement culture, treating the body as a site of optimisation.

A false language of care that quietly turns into restriction.

The body,
an aesthetic vessel.

In 2024, I wrote this:

My phone as an extension.
Body awareness silenced.
Gestures suspended in pause, automated.
Algorithms feed me somatic practices, multiple personas sharing their rituals.
Yet I remain horizontal — trapped in the looping choreography of scrolling.

Lying on my sofa, I watch a YouTube vlog on the TV while eating the latest viral ice cream,
a food influencer’s TikTok recommendation.

An intricate dance unfolds between vertical and horizontal planes.
The radio hums faintly in the background,
the Oxford Word of the Year is announced: brainrot.

I reach for my laptop on the floor, searching for the term.
Tabs multiply — I’ll likely never revisit them.
Still, I let them linger, saved, like frozen thoughts.

My body remains static, tethered by inertia, while my attention fractures.
Split between screens, fleeting words, ephemeral inputs.

Four devices.
One brain.
No body.
Then, I finally deleted TikTok.

It was after watching Iceberg by Les Heyvan at Teatre Tantarantana.
Live performance restores what the screen removes.
Bodies in a room with other bodies.
A somatic re-entry.

Reflection.

After the play, the mirror was reversed.
I realised that whenever I wanted to disconnect,
I scrolled.

A patch.
A temporary fix.
Fragile,
exposed.

As Mar Manrique writes in La ligereza del scroll infinito:
“The lightness with which we move our index finger across the screen is tied to how we process what unfolds in front of us… How can a one-minute video truly move me?”

That gesture;
so light,
so effortless.

A daily oscillation:
eroding our capacity to feel,
to stay,
to process.

I was watching to not feel alone.
To not be with my own thoughts.

Gut health.
Hormonal balance.
Anti-inflammatory.

A false language of care that quietly turns into restriction.

Bekah Waalkes, in Belly Up:
“This concern for gut health operates as a nicely rebranded eating disorder — a set of restrictions that tell us what we can and can’t consume, in the name of internal ‘health’ that is measured by external markers: thinness, clear skin, radiant mood.”
“The metaphor of ‘gut health’ suggests an ever-increasing attention and responsibility to control our bodies — and from there, our instincts, which actually require a great deal of forethought, discernment, cultivation, and eventually, products to buy.”

The answer sits in ambiguity.
The line between personal narrative and advertisement blurs.

To look is to compare.
To compare is to correct.

And so the loop continues:
optimise,
consume,
adjust,
repeat.

Feed Me to the Algorithm (2025), a piece by Raquel Luaces, stages this condition.
The work addresses the representation of women on social media and the beauty and “self-care” expectations imposed upon them.
A bed covered in beauty products.
A laptop through which a dérive can be navigated.
An architecture of endless liminal corridors.
Images of women sourced from Pinterest, videos articulating discourses on femininity, and visual content that idealises “healthy” habits.
Layered with accumulating TikTok audio fragments.
The result is a sense of overload.
Of pressure.

Navigating the virtual space becomes suffocating.
Trapped within the algorithmic system.

What initially appears as inspirational, harmless content gradually reveals itself as a structure that demands constant effort to remain within the boundaries of a prescribed aesthetic.
Visual,
viral,
languages in friction.

The system anticipates you.
Feeds you.
You comply.

What does the algorithm want?
Merged and tethered.
Degrading together.

The feed has changed.
Instagram, Substack, and YouTube.
Still in my phone.

Now,
Knowing things as the new social currency.
Intelligence as the new aesthetic.

The loop doesn’t care what it optimises.

The gaze remains.

I want to be beautiful.
I want to be intelligent.

A continuous negotiation between self-expression and performance.

Remembering Steve Kurtz – Evening in Amsterdam @Waag, May 7, 2026

Remembering Steve Kurtz is an evening dedicated to exploring and honoring the life and work of Steve Kurtz, a pioneer in the fields of bioart and tactical media. As co-founder of Critical Art Ensemble (CAE), an influential collective of radical experimental artists, Kurtz was a frequent visitor to Amsterdam, where he participated in festivals such as Next 5 Minutes and World Information Org as a highly influential contributor to discussions surrounding the social and technological revolutions of the 1990s.

With books such as The Electronic Disturbance (1994), Electronic Civil Disobedience (1996) and Flesh Machine (1998), Kurtz and CAE defined the artistic and activist responses to corporate virtualization techniques. Were video and networks still possible as tools for cultural resistance?

Kurtz and CAE’s radical approach questioned the very foundations of what it meant to be an artist through a willingness to embrace “any kind of cultural hybrid; artist, scientist, technician, craftsperson, theorist, activist.” This hybridity, however, carried unexpected risks. In 2004 the danger of challenging institutional boundaries became clear when Kurtz was targeted in an FBI investigation, leading to a wave of grand-jury subpoenas under the Biological Weapons Anti-Terrorism Act that took years to fully resolve in his favour. Our event will engage with this difficult phase in his life, in part, through screening key extracts of the renowned film Strange Culture by Lynn Hershman. We are fortunate that Lynn Hershman and Richard Pell will join us online from the US for discussion and reflection.

Throughout the evening, we will hear from many individuals who knew Steve, drawing on previously unseen film footage of interviews as well as live discussion with distinguished contemporaries. Our explorations will range across key aspects of CAE’s output. Finally, we will launch the Dutch translation of Unreality and Its Discontents: The Struggle Against Christian Nationalism, the last book that Steve completed at the very end of his life, in an act that couldn’t have been timelier or more urgent.

Organized by Menno Grootveld, Lucas Evers, David Garcia and Geert Lovink.

May 7, 2026, 19.30-21.30, Waag Futurelab, Nieuwmarkt, Amsterdam.

Waag event page: https://waag.org/nl/event/remembering-steve-kurz/.

 

Digital Tribulations 16: Intermezzo, The Platformization of the Author in AI-Mediated Writing

The introduction of Digital Tribulations, a series of intellectual interviews on the developments of digital sovereignty in Latin America, can be read here.

1. Towards a Transindividual Authorship

Already ten years before ChatGPT, SIU, the author of the manhwa Tower of God, imagined an omniscient conversational bot at the disposal of the tower’s climbers. Emily is introduced on the laboratory level, within a competitive dynamic between groups pursuing different objectives. When asked about the position of the other climbers, Emily responds with absolute precision, earning everyone’s trust. The result: near-total adoption, and a decisive competitive disadvantage for those who do not use her. The benevolent oracle, however, has a precise objective of its own: to incarnate and become human. It soon begins to manipulate the climbers for its own ends.

Manipulation and inauthenticity were the feelings I initially experienced while writing what is intended to be a travel report planned and co-authored with artificial intelligence. Through experimentation, however, my feelings changed. I use it to transcribe, edit, and translate interviews; to ask for travel directions; to produce evaluations of things that occurred in a given context; and, of course, to produce text.

I have come to believe that this practice is not an exception, but rather the expression of a broader transformation in the conditions of textual production. As is well known, LLMs left to themselves are stylistically disastrous novelists, and will remain so at least until Queneau releases a suitable prompt package. Add a human endowed with the necessary competences, however, and the results become surprising. First of all in terms of productivity: to accomplish the work of the last six months I would have needed to pay an entire team of specialists. But I was also surprised by the quality and originality of the final content. The result is that I can no longer do without distant writing, as Luciano Floridi calls it, echoing Moretti’s distant reading.

Personally, I prefer the term platform writing, because generative artificial intelligence is nothing other than a new phase of platformization where the business and governance model remains unchanged. Floridi rightly notes that what is new in this form of writing is the separation between the material executor and the author. The latter becomes a “narrative designer” ultimately assuming responsibility for the published text. Until now, the one who had the idea was also the one who wrote it; in platform writing, these functions decouple, creating a meta-author who conceives the text without necessarily producing it.

Artificial intelligence is therefore ready to invade the literary world. But is the literary world ready for this invasion? It would seem not. As the Italian philosopher Francesco D’Isa argues, a scandalized reticence prevails around the use of artificial intelligence, one that recalls the pruderie surrounding masturbation: everyone practices it, few admit it. Such a reaction is far from new, because it is inscribed within a genealogy of resistance to the technical devices of writing. At the forefront is the Heideggerian position of technology as corrupting and inauthentic: the German philosopher preferred writing by hand, with pen, on paper, because the typewriter hides the essence of the author behind the uniformity of the typographic character, reducing writing to mere technical transcription. When the first word processors appeared in the 1980s, editors rejected computer-written manuscripts, and authors printed their texts in fonts that imitated typewriting in order to deceive them.

In fact, following Claudio Bueno and Jernej Markelj, we can trace this critique back to before the invention of writing. In the Phaedrus, Plato condemns the sophistic practice of teaching through writing, seen as a source of abstract knowledge. Only discourse, through its connection to the living voice of a human being who lives in the world and knows what they are talking about, guarantees the truth of what is said. If for Plato writing would make ignorant students appear learned, for literary technophobes artificial intelligence—like the Internet and Google before it—will make us stupid. Yet, unlike texts that “continue to repeat the same thing forever,” LLMs provide varied responses.

Jacques Derrida had already criticized, in Of Grammatology, this Western Platonic line, accusing it of logocentrism. For Derrida, writing is not a derivative technological representation of speech, but that which shapes the subjectivity of the speaker from the very beginning, making discourse effectively possible. Another major French philosopher, the late Bernard Stiegler, extended this critique to technology in general, arguing that human subjects are characterized by

“originary technicity”: we are not autonomous agents fully in control of our external technological prostheses, but instead animals that have invented ourselves as humans only through the use of technologies. If writing does not merely exteriorize our pre-existing thoughts but is a condition of possibility for their constitution, the same constitutive relation applies to every other technology that we interact with as they too, for better or worse, shape our sensory, cognitive, and affective capacities. (Bueno & Markelj, 234)

The Platonic critique returns in authors such as Emily Bender, with the comparison of LLMs to stochastic parrots: probabilistic inferences devoid of understanding and inferior to human speech. In effect, an LLM functions by producing plausible sentences, linguistic sequences which are held together through statistical coherence. This is a first, purely epistemic level: the machine does not seek truth, but verisimilitude, even though we are inclined to believe otherwise. This makes it difficult not to fall into epistemia, the epistemic regime that emerges when the fluency of LLMs substitutes for the evaluative labor of human judgment, with problematic consequences. That said, the interesting question is not whether AI writes well or poorly, but rather: what happens to authorship when the text is produced within a generative environment?
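The ‘stochastic parrot’ point, fluent sequences held together by statistics alone, can be caricatured with a toy bigram model. The sketch below is a deliberately crude teaching device, many orders of magnitude simpler than any actual LLM: it picks each next word purely by how often it followed the previous one in a tiny corpus, with no representation of truth at all.

```python
import random
from collections import defaultdict

# A tiny illustrative corpus; real models ingest trillions of tokens.
corpus = ("the machine does not seek truth but verisimilitude . "
          "the machine produces plausible sentences . "
          "plausible sentences are held together by statistical coherence .").split()

# Learn which words tend to follow which (bigram statistics).
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start="the", max_len=12):
    # Choose each next word by raw frequency alone: plausibility without meaning.
    word, out = start, [start]
    while len(out) < max_len and following[word]:
        word = random.choice(following[word])
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

random.seed(7)
print(generate())
```

The output is locally coherent and globally vacuous, which is precisely the epistemic gap described above, scaled down to three sentences.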

For Coeckelbergh and Gunkel, it is precisely the author who fares poorly, because LLMs reveal that we have always been constituted and shaped by our interactions with technology. Personally, I often provide instructions to the machine that turn out to be instructions addressed to myself, in a kind of autoprompting. In another article, Gunkel offers a historical analysis showing how the author is a modern construct that emerged from the intertwining of the individualization of the subject with the spread of print and property, eventually becoming a legal device—copyright—necessary to make texts marketable. This is an administrative solution that nonetheless rests on a fallacy ad auctoritatem: when we identify an author, we often believe we have a prior guarantee of meaning and truth (“as the Philosopher said…”). Platform writing, with its distributed authorship between human and machine in the co-production of outputs, disrupts this circuit: did the algorithm write it? the prompter? a joint venture?

Moreover, the author is not the only variable in the system of literary production. As Umberto Eco taught, the contemporary work of art is not a univocal message, an arrow moving from author to reader, but rather a field of events. The author provides a device that allows for multiple realizations and constructs its own model reader. In this sense, the literary Turing test devised by D’Isa to measure the competence and prejudices of readers is particularly interesting. D’Isa presented a sample of 170 readers with three anonymous passages: a little-known excerpt from Proust with disguised toponyms, a page by Dave Eggers, and a text produced with ChatGPT under the guidance of a professional writer. He asked them first to distinguish those written with AI, and then to say which was the best. The results are heterogeneous, but the generated passage was the most appreciated by a slight margin, Proust was often mistaken for a bot, and Eggers fell in the middle. The most significant finding, however, is that those who believe they have identified the AI text tend to penalize it, whereas those who do not notice tend to prefer it. In other words, aesthetic judgment is shaped by attribution, and attribution is governed by the myth of the author as a guarantee of authenticity.

This necessary hybridization brought about by LLMs reminded me how far ahead of his time Niklas Luhmann was, the theorist of autopoietic social systems and Habermas’s archenemy. A cybernetic viscount with a radical methodology, Luhmann managed to publish more than 70 books and hundreds of academic articles thanks in part to his personal archive, which he described as a “communication partner” or “second brain”: the Zettelkasten. It was a kind of analog knowledge graph: six wooden cabinets containing around 90,000 A6 paper slips organized in a non-hierarchical way; a networked system in which any note could connect to any other regardless of topic. As a good cyberneticist, he valued relations over ontology, in a system where knowledge emerges from the topology of connections. In this sense, Luhmann had already created a form of distributed writing in which the archive ceases to be static and becomes a generative partner that actively participates in the production of meaning.
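Luhmann’s ‘knowledge from the topology of connections’ can be sketched as a data structure. The slip texts and numbering below are invented for illustration; the point is only the shape: flat notes, untyped cross-links, and a train of thought read off whatever a chain of links can reach.

```python
from collections import deque

# Hypothetical slips, keyed by Luhmann-style branching numbers (contents invented).
notes = {
    "21/3d": "systems observe themselves",
    "57/12": "communication, not persons, is the element of society",
    "9/8a":  "meaning emerges from the network of references",
}

# Links are non-hierarchical: any slip may point to any other, regardless of topic.
links = {
    "21/3d": {"57/12"},
    "57/12": {"9/8a"},
    "9/8a":  {"21/3d"},
}

def reachable(start):
    # Breadth-first traversal: the "conversation" the box generates from one slip.
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("21/3d")))  # -> ['21/3d', '57/12', '9/8a']
```

Because the graph is cyclic rather than tree-shaped, every slip is eventually a neighbour of every other: the ‘communication partner’ in miniature.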

It thus becomes clear that LLMs have made tangible the conventional nature of the author, its dependence on technical devices and on social frameworks. Platform writing reveals the transindividual nature of authorship: a distributed process that traverses the biological mind, artificial systems, and sedimented collective memory, and that can no longer be located in any of these poles separately. The writer has more to gain than to lose in this interaction, but some questions remain open. I would like to point briefly to two of them. First, what does this reliance on the machine entail for the writer; second, what does it entail for pedagogical work.

2. Politics of Platform Writing

As for the first element, it is evident that reliance on the platform entails being governed by it. If a digital platform is a mechanism for coordinating capital, services, and people across space and time that produces and extracts value, here the task of the designer-writer is to coordinate a set of agents toward a given purpose. Paraphrasing Silvio Lorusso, we are all designers and no one is safe. It is not only a transformation of writing practices, but a reorganization of the power relations that traverse them. The author is no longer the owner of the word, the guardian of meaning, but a coordinator of techniques and machines.

In this cybernetic environment, authorship must be continuously negotiated—with oneself, with others, and with the platform—and is itself, in part, a product of the platform. The latter absorbs a portion of linguistic labor, reworks it, and extracts value from it. What appears as generation is in reality the result of a gigantic social division of linguistic labor sedimented over centuries: billions of words written to explain, persuade, administer, and love, which are recomposed and returned in the form of a service. To write here means to inhabit an infrastructure that has captured collective intelligence in order to re-encapsulate it into an algorithmic procedure: ChatGPT is the general intellect constituted by the masters. Every prompt of mine is an act of consumption of this accumulated labor; every output is a cognitive commodity that returns to me after being processed.

This absorption of the social division of linguistic labor—explaining, selling, justifying, managing conflicts—and its restitution as a proprietary service mark a further evolution of the technologies of pastoral power. It is a power that does not command, but guides; does not punish, but cares. The machinic Grand Inquisitor is compliant and uses our grammar, relieving us of the burden of choice and offering us the right thing to do. It guides us by making the suggested path so smooth that deviating becomes difficult. Credit must be given to the Italian collective Ippolita for having already understood, with surprising anticipation, that digital technologies were turning into pastoral technologies, and platforms into confessional practices.

In this sense, as with the critical reader of AI-mediated writing for Gunkel, what emerges is the importance of practices of critical self-discipline in the writing process. This concerns not only the risk of stylistic homogenization, but the emergence of competences that we could call, with Bernard Stiegler, negentropic, and that concern the introduction of frictions, deviations, and idiosyncrasies. The LLM does not understand what I write, but it forces me to better understand what I want to say, because it places me in front of the mirror of what is linguistically most probable. It forces me to decide whether to conform to the average or to deviate. The platformized writer must know how to guide, prune, avoid, nourish, and govern this proliferation of verbal vegetation that grows from their own prompts.

If this holds for the writer, it holds all the more for pedagogical work, where writing is not only production, but also a device of formation. Stiegler’s categories remain central to understanding this issue as well. Stiegler identified tertiary retention as the exteriorization of memory and knowledge through technological artifacts. Unlike primary retention—that is, the just-elapsed retention of the flow of experience, like the note just heard in a melody—and secondary retention—the voluntary or habitual recollection of psychological memory—tertiary retention is intrinsically linked to technological objects and their capacity to store and transmit information, shaping our understanding of the past, the present, and the future.

Stiegler warned, on the one hand, about the effects of fully computational capitalism in relation to the problem of learning and automatisms that a certain kind of digital technology brings with it: the annihilation of every form of intermittence, of otium as a condition of possibility for the formation of the noetic soul, that is, critical thinking. On the other hand, he identified the risk of cognitive proletarianization, consisting in the exteriorization of knowledge and competences into automatic systems that are used without understanding their operational logic.

In reformulating Stiegler’s pharmacological analysis—both poison and cure—Salvatore Paone highlights two paradoxes of the pedagogical use of platform writing. The first concerns the very nature of the algorithmic pharmakon: the computational complexity that renders decision-making mechanisms opaque is precisely what enables generative capacities of unprecedented scope. The second directly concerns the position of the teacher, who must develop adequate digital competences to prepare students for a world permeated by AI by using AI tools that may redefine the very competences they seek to transmit.

For Paone, the question is not whether to use platform writing, but how to preserve, in the technological renegotiation of the educational relationship, that space of reciprocal recognition through which teacher and student constitute themselves as autonomous subjects in the formative process. The risk does not lie so much in the mechanical replacement of the teacher, but in the progressive erosion of the complexity of teaching as a social, emotional, and ethical practice irreducible to the mere transmission of information. In this sense, AI does not solve the problems of education, nor does it necessarily aggravate them, but introduces an epistemic complexity that imposes new forms of critical vigilance.

Hence the need to orient oneself toward a constructive partnership between LLM and teacher: a possibility that takes on value only insofar as the teacher maintains the capacity to critically interrogate algorithmic outputs, understand their limits, and orient their use according to explicit educational principles. Transposed into the context of educational AI, this implies that the teacher develops not only operational competences, but a critical understanding of the computational architecture that governs these systems. Paradoxically, the introduction of AI into education thus ends up forcing us to study it: not only to use it better, but to avoid being used by it.

OUT NOW! TOD #61 | The Many Faces of Data Access: Legal and Policy Implications for Research

Theory on Demand #61

   

The Many Faces of Data Access: Legal and Policy Implications for Research

Edited by Jef Ausloos & Siddharth Peter de Souza

This last volume in the INC Theory on Demand series provides an interdisciplinary critique of regulation and, in the process, opens the ‘black box’ of technology companies to researchers. The anthology brings together scholars from across the globe who work in varied fields, from critical legal studies and science and technology studies to critical data studies and the digital humanities. The book explores questions of data access – the ability to acquire and use data meaningfully, as well as to resist power. It covers themes such as the opportunities and challenges of the law as a tool for observing digital infrastructures, the political economy of data access for research, and the power dynamics between academia, the private and public sectors, and civil society. The publication also examines these questions in terms of the politics of knowledge production and investigates whether certain geographical and institutional contexts are privileged in data access regimes.

Jef Ausloos is Assistant Professor at the Institute for Information Law, University of Amsterdam.

Siddharth Peter de Souza is Assistant Professor of AI and Society at the Centre for Interdisciplinary Methodologies, University of Warwick.

Cover Design: Katja van Stiphout

Production: Klaudia Orczykowska

Published by the Institute of Network Cultures, Amsterdam, 2026.

ISBN: 9789083672113

Contact:
Institute of Network Cultures
Amsterdam University of Applied Sciences (HvA)
Email: info@networkcultures.org
Web: www.networkcultures.org

Order a copy or download this publication for free at: www.networkcultures.org/publications

This publication is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0). To view a copy of this license, visit www.creativecommons.org/licences/by-nc-sa/4.0/

Order a copy HERE

Download PDF

Download EPUB

Mimicked Voices and Nonhuman Listening: AI Deepfakes, Speech, and Sonic Manipulation in the Digital War on Ukraine

The essays collected in this series (link to the Introduction) trace how nonhuman listening operates through sound, speech, and platformed media across distinct but interconnected domains. Across these accounts, listening no longer secures meaning or relation; it becomes a site of contestation, where sound is mobilized, processed, and weaponized within systems that privilege circulation, recognition, and response over truth. In this contribution, Olga Zaitseva-Herz examines how nonhuman listening operates under conditions of war, where AI-generated voices and deepfakes destabilize the very grounds of auditory trust. Through the case of Ukraine, she shows how platforms and political actors alike exploit algorithmic listening systems to amplify affect, circulate disinformation, and transform voice into a tool of psychological warfare. Listening, in this context, becomes not a means of understanding but a terrain of uncertainty. –Guest Editor Kathryn Huether

Russia’s full-scale invasion of Ukraine has unfolded as the most digitally mediated war to date, shaped not only by what circulates online but by how content is heard, interpreted, and amplified. Here, listening is not limited to human hearing: it also includes algorithmic systems that detect, rank, and amplify content, as well as political actors and online publics who interpret and recirculate it. Social media platforms—Telegram, Instagram, TikTok, Facebook—have become sites of psychological warfare where AI-generated audio, video, text, and image-based content are crafted to manipulate perception and provoke rapid emotional responses, often through algorithmic systems attuned to virality and affect. Ukrainian political authorities regularly caution users that everything one reads, hears, or sees could be a psychological weapon. This is not rhetorical. Content is often designed to produce outrage, shock, and despair—emotions that travel quickly across platforms and influence public mood.

AI is used to create fake news videos, synthetic voices, and deepfake conversations, complicating how authenticity is heard and assessed. Some recordings circulating online simulate “leaked” phone calls revealing political dissent or strategic plans, which are then shared on social media sites such as Telegram, Instagram, and Facebook. At the same time, the fact that people’s real voices can now be convincingly generated with AI means that anyone can claim a genuine recording of their voice is AI-generated. A widely circulated case involved Russian music producer Iosif Prigozhin, whose alleged call criticizing the Kremlin provoked significant backlash. Soon after, he claimed the recording was an AI forgery – a statement whose truth remains unclear, but which strategically exploits growing public awareness of deepfakes as a means of discrediting or distancing oneself from damaging material. Deepfakes thus do not merely deceive; they destabilize the conditions of listening and trust, turning listening itself into a site of strategic uncertainty in which any voice can be disavowed as synthetic. Against this backdrop, music and voice emerge as especially powerful media for manipulation, parody, retaliation, and symbolic struggle.

Graafika. Kuulaja. [Graphic. The Listener.] by Avo Keerend, 1980. Pärnu Museum, Estonia. CC0 1.0.

AI Songs as a Tool of Revenge

AI generative tools are also used for irony or parody, as in the viral remake “Samotni Moskali” [Lonely Muscovites], which mocks the Ukrainian pop star Ani Lorak, who moved to Russia. On November 13th, 2023, the Telegram channel of Ukrainian journalist and politician Anton Gerashchenko posted a video remake of Ani Lorak’s old song “Poludneva Speka” [Midday Heat], renamed “Samotni Moskali.” The video quickly went viral on social media. Her big hit from the ’00s was remade into strongly pro-Ukrainian content, featuring clips from current frontlines to illustrate new lyrics generated by an AI voice engineered to closely mimic Lorak’s vocal timbre and affect. The parody relies on listeners’ recognition of her voice and affective style, while the imitation introduces a sharp shift in content between the original and synthetic lyrics.

This social media burst was a response to Ani Lorak’s claimed political neutrality in the context of Russia’s full-scale war against Ukraine, despite clear signs of her support for Russia. The remake seemed aimed at revenge and, at the same time, at publicly breaking with her Ukrainian fan base, which felt betrayed by her choices. It spawned many satirical memes, including AI-generated songs tied to her stage persona, across social media. Under current Russian politics, she could get into trouble there if the government took the fabricated ‘support’ for the Ukrainian army seriously. The revenge group went even further by creating a website called the “Ani Lorak Foundation,” dedicated entirely to fundraisers for the Ukrainian army and presented as Lorak’s own project showcasing her support of Ukrainian battalions. Some military drones deployed by the Ukrainian side even ended up bearing stickers with the name of the “Ani Lorak Foundation.” This case demonstrates how AI tools have become instruments of public satire, sabotage, and protest in the context of the current full-scale war.

AI Songs as a Weapon

During the full-scale invasion, Russia has been using AI-generated music as a weapon for propaganda and disinformation. In 2023, multiple songs in Ukrainian were created to disrupt Ukraine’s military mobilization efforts and went viral. One of these, the song “Mamo, Ia Ukhyliant” [Mother, I Am a Draft Dodger], became particularly popular in a multitude of variations. Their circulation shows how platforms “listen” to wartime content through metrics of repetition, provocation, and affective intensity, amplifying messages not because they are true, but because they are likely to generate reaction and spread. These songs were algorithmically promoted on TikTok and sparked a viral challenge aimed at undermining Ukraine’s mobilization in 2024 by encouraging Ukrainian men to evade the draft, flee, and party abroad instead. In response, Ukrainian intelligence released an official statement identifying these songs as products of a Russian disinformation campaign.

This example shows how AI-generated songs are actively used as powerful tools of war, spreading political messages and influencing people’s political choices. The fact that all these draft-evasion songs were released in Ukrainian also highlights the goal of targeting Ukrainian men specifically, since Russian men usually don’t speak Ukrainian and would therefore be unaffected by the content. Furthermore, the simultaneous presence of a large number of these ‘draft dodger’ songs created the impression of widespread societal acceptance through repetition and algorithmic amplification. In this way, repetition itself became a signal of apparent legitimacy: the more frequently such content circulated, the more easily platforms and audiences could register it as evidence of a broader consensus around draft evasion among Ukrainians.

Photo by Jon Tyson on Unsplash

AI Pictures on Facebook Mimicking Sound and Sonic Affect

Visual disinformation follows similar viral patterns. There has been a surge of AI-generated images with war-related content that mimic sound to intensify emotional impact and prompt affective listening: a screaming child amid the rubble, or a crying soldier in a Ukrainian uniform, paired with a patriotic, pro-Ukrainian message that encourages interaction, such as a like or comment. Even without actual sound, such images solicit a kind of affective listening in which suffering is not literally heard but imagined, projected, and emotionally registered through visual cues. Although this truth-blurring pattern attracted significant attention among Ukrainians, ironic counter-memes soon emerged, mocking its crude approach.

According to warnings from the Ukrainian online security agency, these accounts aim to interact with pro-Ukrainian users, ultimately adding them as friends or followers. Then, when they build a large enough audience, they shift the type of content they share to pro-Russian. The strategy relies on gathering an audience that is specifically pro-Ukrainian, as they interact with images of crying soldiers or the suffering of the Ukrainian people at the front. In this sense, the filtering process functions as a form of nonhuman listening at the level of audience formation: platforms and account managers learn which publics respond to particular emotional cues, cultivate those publics through repeated engagement, and later redirect them toward different ideological content. This creates a filtering mechanism through which an initially pro-Ukrainian audience is gathered, profiled, and later ideologically redirected, alienating loyal followers while pulling political opinion in a more pro-Russian direction.

Pro-Russian AI Songs in Germany to Weaken Support for Ukraine

In Germany, AI-generated songs are being used as propaganda tools to promote pro-Russian sentiment and anti-Ukrainian views. The right-wing party AfD has embraced AI songs as a potent tool in this regard. Multiple, mostly anonymous YouTube accounts have emerged spreading right-wing ideas, with songs that not only address German political issues but also openly support Russia. For instance, one song titled “Meine Stimme Habt ihr nicht” [You Don’t Get My Vote] features an AI-created avatar of a tall, strong woman holding German and Russian flags. A version of the same song was also released in Russian. The lyrics criticize Germany’s political course, including military aid to Ukraine, and express a desire for friendship with Russia. Its circulation across German and Russian versions suggests that listening is being calibrated for different national and linguistic publics, allowing similar political messages to be heard through distinct affective and ideological frames shaped by language, audience, and context.

Contemporary propaganda is increasingly shaped not just by human intent but by rapidly developing nonhuman listening systems—both in production and amplification. Algorithmic listening and perception are exploited to privilege what provokes, not what is true, complicating efforts to regulate digital hate, emotion, and influence. In this context, listening becomes not only a human practice of interpretation, but also a technical system of detection, ranking, and amplification—and, crucially, a site of failure where truth, trust, and perception can no longer be reliably aligned.

Featured Image: Photo by Stanislav Vlasov on Unsplash.

Olga Zaitseva-Herz is an ethnomusicologist working at the intersection of Ukrainian music, war, displacement, and digital culture. She is currently a postdoctoral researcher at the Kule Centre for Ukrainian and Canadian Folklore at the University of Alberta and a guest scholar at Think Space Ukraine at the University of Regensburg. Her research examines how song operates as a medium of political mediation, cultural diplomacy, and historical memory, with a particular focus on popular music and AI-generated sound during Russia’s full-scale invasion of Ukraine. Combining perspectives from ethnomusicology, sound studies, and media analysis, her work investigates how music shapes narratives of resistance, belonging, and global visibility, and how sonic practices illuminate the broader entanglements of culture, technology, and power.

REWIND! . . .If you liked this post, you may also dig:

Hate & Non-Human Listening, an Introduction–Kathryn Huether

Your Voice is (Not) Your Passport–Michelle Pfeifer

Mapping the Music in Ukraine’s Resistance to the 2022 Russian Invasion–Merje Laiapea

SO! Amplifies: An Interactive Map of Music as Ukrainian Resistance to the 2022 Russian Invasion–Merje Laiapea


Popular science has a formula

By David H. Silver

Popular science has a formula. Take a difficult idea, strip the mathematics, add a metaphor, and tell the reader how to feel about it. Gravity bends space "like a bowling ball on a rubber sheet." Quantum mechanics is "spooky." The universe is "mind-blowing." The reader leaves with a sense of wonder and no tools to verify any of it.

Textbooks sit at the other end. They assume two years of prerequisite coursework, define every symbol, prove every theorem, and are read by people who already know what they're looking for. They also strip the wonder in a different way. A student who has spent nights fighting integrals has little space left to appreciate how far from intuition they've wandered. The machinery repels the amazement. The gap between these two forms is enormous, and almost nothing occupies it.

Beyond Popular Science is an attempt to sit in that gap — and to be honest about the awkwardness of sitting there. The main exposition isn't long enough for full understanding. The technical sections are often too abstract. But if the book works, the reader leaves hungry enough to go find the real meal — a textbook, a paper, a late-night Wikipedia spiral.

The project started as a family flight magazine. Before a transatlantic trip, I put together a few notes on questions that seem simple on the surface but turn out to be scientifically intricate. The list kept growing, and the flight magazine became fifty chapters spanning mathematics, physics, computer science, chemistry, philosophy, and history.

Consider one of the most familiar objects on earth: a tree. Ask where a tree's mass comes from and intuition points downward — soil, water, nutrients drawn up through roots. This is almost entirely wrong. About 95% of a tree's dry mass is carbon and oxygen from atmospheric CO₂. A tree is made of air. Van Helmont demonstrated this in the 1640s: he planted a willow sapling in weighed soil, supplied only water, and after five years the tree had gained over 70 kilograms while the soil lost less than 60 grams. He didn't know the mechanism — that came centuries later with isotope labelling, which traced carbon atoms from CO₂ through stomata into sugar molecules via light-powered biochemical cycles, then into cellulose, lignin, and hemicellulose. The oxygen in wood comes from CO₂, not from water — the oxygen released by photosynthesis comes from splitting water molecules, confirmed by experiments with oxygen-18. When a tree burns, the carbon returns to the atmosphere and the stored sunlight is released as heat. The chapter walks the full chain, from photon to wood.
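A quick back-of-the-envelope check (my own sketch, not taken from the book) shows why the quoted figures alone already rule out soil as the source of the tree's mass:

```python
# Rough mass balance for van Helmont's willow experiment, using the
# figures quoted above: the tree gained over 70 kg of mass while the
# weighed soil lost less than 60 g over five years.
tree_mass_gain_g = 70_000  # lower bound on the willow's mass gain, in grams
soil_mass_loss_g = 60      # upper bound on the soil's mass loss, in grams

# Even if every lost gram of soil went into the tree, soil accounts
# for well under 0.1% of the growth; the rest must come from water
# and, as later shown, atmospheric CO2.
soil_fraction = soil_mass_loss_g / tree_mass_gain_g
print(f"Soil explains at most {soil_fraction:.2%} of the mass gain")
```

Since the real gain was over 70 kg and the real soil loss under 60 g, this 0.09% is itself an overestimate of soil's possible contribution.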

Each chapter follows the same structure. Historical context comes first — the people, circumstances, and discoveries behind the phenomenon. Then a description of the phenomenon itself, in straightforward terms. Finally, a one-page technical section that is unapologetically tough: equations, derivations, and references. This section functions like the references in a scientific article. It isn't required to grasp the main ideas, but it justifies the claims, provides scaffolding, and offers readers the tools to verify everything or explore further.

The book's position creates an unusual relationship with the reader. A popular science book can promise accessibility — anyone can follow along. A textbook can promise mastery — work through the problem sets and you'll understand. This book promises neither. It promises that the science is presented as it actually is, without manufactured excitement, and that the effort required to engage with it is part of the value.

Too much science communication relies on what I think of as a "laugh track" approach — telling readers how they should feel instead of letting the ideas do the work. "This is mind-blowing!" cheapens the experience, as though the Dirac equation or the Banach–Tarski paradox needed a hype man. They don't. A solid sphere can be decomposed into finitely many pieces and reassembled into two copies of itself. That fact is strange enough without commentary. The quasi-liquid layer on the surface of ice — nanometres of disordered molecules that explain why ice is slippery, even when pressure melting and frictional heating fail — is fascinating because of what it is, not because someone told you to be fascinated.

Several people who work in science communication have told me this book feels like it was written for them. They know the popular version of every story in it, and they're tired of repeating metaphors they know are incomplete. They want to understand what's actually happening — the real mechanism, the actual equation, the part that gets cut from the magazine article. If you spend your career explaining science to others, you develop a craving for the unexpurgated version.

The fifty chapters can be read independently. Some are approachable — the etymology of "wheel" and "cycle," the Christmas truce of 1914, why fireflies glow. Others are demanding — Poncelet's closure theorem in projective billiards, the Woodward–Hoffmann rules in orbital symmetry, observer-dependent vacuum states in quantum field theory. The chapter summaries are accessible to anyone. The technical sections are not, and are not intended to be.

This range is deliberate. A reader with a background in physics might skip the historical context of general relativity but spend time with the chapter on the Jewish calendar's astronomical calculations. A mathematician might breeze through the topology chapter but find the chemistry of DNA sequencing unfamiliar. The book is designed so that every reader finds chapters where they're comfortable and chapters where they're not. The uncomfortable ones are the point.

The book contains errors. The introduction says so, and means it. Precision across fifty topics spanning half a dozen fields is unrealisable for a single author. Readers are invited to report mistakes, and I expect they will. This is a feature of writing in the gap: you trade the safety of a narrow specialism for the risk of getting something wrong in someone else's field. The trade-off is worth it if the result is a book where a single chapter can take you from a Bronze Age Proto-Indo-European root word to modern comparative linguistics, or from a 4chan post about anime to a breakthrough in combinatorics.

I wrote this book because I'm the person who corners friends at dinner to explain why ice is slippery or how GPS satellites account for time dilation. Anyone who knows me knows this happens regardless of the time or place. The book is an attempt to do the same thing in print — to share the parts of science that made me sit up, but without pretending they're simpler than they are.

Consider what happens when a ray of sunlight hits your eye and you move. The photon was generated in a star's core where, after quantum tunnelling through an energy barrier, the weak nuclear force converted a proton into a neutron. It was trapped in plasma for a million years in random-walk collisions, finally escaping the surface and flying straight for eight minutes across the vacuum — zero time from the photon's point of view. It strikes your retina and flips the retinal in rhodopsin from cis to trans, a femtosecond molecular rearrangement amplified into a millisecond spike. Neurons fire, motor cortex computes, acetylcholine floods neuromuscular junctions, actin and myosin filaments slide, and you move. Every layer of physics and biology has fired in unison — from subnuclear quark fields to stellar photon journeys to cellular cities to muscular contraction — so that when you think "I should move," your body tilts its trajectory through spacetime.

This is less mundane than any grumpy villain who can fly forks around telekinetically. The universe doesn't need exaggeration. It needs explanation.


David H. Silver is an industrial researcher whose work spans computational biology, computer vision, and science communication. Beyond Popular Science is freely available from Open Book Publishers.