Duty Free Art





Duty Free Art




Hito Steyerl





First published by Verso 2017

© Hito Steyerl 2017

This work is licensed under a Creative Commons

Attribution-Non Commercial 4.0 License

All rights reserved

The moral rights of the author have been asserted

1 3 5 7 9 10 8 6 4 2

Verso

UK: 6 Meard Street, London W1F 0EG

US: 20 Jay Street, Suite 1010, Brooklyn, NY 11201

versobooks.com

Verso is the imprint of New Left Books

ISBN-13: 978-1-78663-243-2

ISBN-13: 978-1-78663-245-6 (UK EBK)

ISBN-13: 978-1-78663-246-3 (US EBK)

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data

A catalog record for this book is available from the Library of Congress

Typeset in Sabon by MJ & N Gavan, Truro, Cornwall

Printed in the UK by CPI Group





Contents


1. A Tank on a Pedestal

2. How to Kill People: A Problem of Design

3. The Terror of Total Dasein: Economies of Presence in the Art Field

4. Proxy Politics: Signal and Noise

5. A Sea of Data: Apophenia and Pattern (Mis-)Recognition

6. Medya: Autonomy of Images

7. Duty Free Art

8. Digital Debris

9. Her Name Was Esperanza

10. International Disco Latin

11. Is the Internet Dead?

12. Why Games, Or, Can Art Workers Think?

13. Let’s Talk about Fascism

14. If You Don’t Have Bread, Eat Art! Contemporary Art and Derivative Fascisms

15. Ripping Reality

Acknowledgments

Notes





1


A Tank on a Pedestal


I love history.

But history doesn’t love me back,

Whenever I call her I get her answering machine.

She says: “Insert logo here.”



A tank on a pedestal. Fumes are rising from the engine. A Soviet battle tank—called IS-3 for Iosif Stalin—is being repurposed by a group of pro-Russian separatists in Konstantinovka, Eastern Ukraine. It is driven off a World War II memorial pedestal and promptly goes to war. According to a local militia, it “attacked a checkpoint in Ulyanovka, Krasnoarmeysk district, resulting in three dead and three wounded on the Ukrainian side, and no losses on our side.”1

One might think that the active historical role of a tank would be over once it became part of a historical display. But this pedestal seems to have acted as temporary storage from which the tank could be redeployed directly into battle. Apparently, the way into the museum—or even into history itself—is not a one-way street. Is the museum a garage? An arsenal? Is a monument pedestal a military base?

But this opens up more general questions. How can one think of art institutions in an age that is defined by planetary civil war, growing inequality, and proprietary digital technology? The boundaries of the institution have become fuzzy. They extend from pumping the audience for tweets, to a future of “neurocurating” in which paintings will surveil their audience via facial recognition and eye tracking to check whether the paintings are popular enough or whether anyone is behaving suspiciously.

Is it possible, in this situation, to update the twentieth-century terminology of institutional critique? Or does one need to look for different models and prototypes? What is a model anyway, under such conditions? How does it link on- and off-screen realities, mathematics and aesthetics, future and past, reason and treason? And what is its role in a global chain of projection as production?

In the example of the kidnapped tank, history invades the hypercontemporary. It is not an account of events post factum. It acts, it feigns, it keeps on changing. History is a shape-shifting player, if not an irregular combatant. It keeps attacking from behind. It blocks off any future. Frankly, this kind of history sucks.

This history is not a noble endeavor, something to be studied in the name of humankind so as to avoid being repeated. On the contrary, this kind of history is partial, partisan, and privatized, a self-interested enterprise, a means to feel entitled, an objective obstacle to coexistence, and a temporal fog detaining people in the stranglehold of imaginary origins.2 The tradition of the oppressed turns into a phalanx of oppressive traditions.3

Does time itself run backwards nowadays? Did someone remove its forward gear and force it to drive around in circles? History seems to have morphed into a loop.

In such a situation, one might be tempted to rehash Marx’s idea of historical repetition as farce. Marx thought that historical repetition—let alone reenactments—produces ludicrous results. However, quoting Marx, or indeed any historical figure, would itself constitute repetition, if not farce.

So let’s turn to Tom Cruise and Emily Blunt instead, which is more helpful. In the blockbuster Edge of Tomorrow, the Earth has been invaded by a savage alien species known as Mimics. While trying to get rid of them, Blunt and Cruise get stuck in a time-looped battle; they get killed over and over again, only to respawn with sunrise. They have to find a way out of the loop. Where does the Mimic-in-chief live? Underneath the Louvre’s pyramid! This is where Blunt and Cruise go to destroy him.

The enemy is inside the museum, or more accurately, underneath it. The Mimics have hijacked the place and turned time into a loop. But what does the form of the loop mean, and how is it linked to warfare? Giorgio Agamben has recently analyzed the Greek term stasis, which means both civil war and immutability: something potentially very dynamic, but also its absolute opposite.4 Today, multiple conflicts seem to be mired in stasis, in both senses of the term. Stasis describes a civil war that is unresolved and drags on. Conflict is not a means to force a resolution of a disputed situation, but a tool to sustain it. A stagnant crisis is the point. It needs to be indefinite because it is an abundant source of profit: instability is a gold mine without bottom.5

Stasis happens as a perpetual transition between the private and public spheres. It is a very useful mechanism for a one-way redistribution of assets. What was public is privatized by violence, while formerly private hatreds become the new public spirit.

The current version of stasis is set in an age of cutting-edge nonconventional warfare. Contemporary conflicts are fought by Uber-militias, bank-sponsored bot armies, and Kickstarter-funded toy drones. Their protagonists wear game gear and extreme sports gadgets, and they coordinate with Vice reporters via WhatsApp. The result is a patchwork form of conflict that uses pipelines and 3G as weapons within widespread proxy stalemates. The present permawar is fought by historical battle reenactors (in the Ukrainian example, on both sides of the conflict), who one could well call real-life Mimics.6 Stasis is the curving back of time into itself, in the context of permanent war and privatization. The museum leaks the past into the present, and history becomes severely corrupted and limited.

Alfonso Cuarón’s brilliant film Children of Men presents another way that art institutions might respond to planetary civil war.7 It depicts a bleak near-future where humanity has become sterile. A planetary civil war has engulfed Britain, dividing the island into segregated zones, one for refugees and undocumented persons—a total dystopia—and another for citizens. Turbine Hall at the Tate Modern has become the home of the Ministry of the Arts; here, precious artworks are given a safe haven: an Ark of the Arts. In one scene set in Turbine Hall, Michelangelo’s David is shown with a broken leg, perhaps damaged during the conflict.

The destruction of antiquities by Daesh (also known as ISIS or Islamic State), which was preceded by major destruction and the looting of cultural objects during the US invasion in Iraq, raises the question: Wouldn’t it be great to have an Ark of the Arts that could rescue the antiquities of Palmyra or Nineveh and safeguard cultural treasures from violence?

However, the Ark of the Arts is a quite ambivalent institution. One is never quite sure what its function really is. In another scene, Picasso’s Guernica is used as a decoration for a private dinner.8 The Ark of the Arts might be an institution that has become so secure that the only people permitted to see the artworks are the Ark’s directors, their children, and their servants. But it could also be an evolution of international freeport art storage, where artworks disappear into the invisibility of tax-free storage cubes.9

Besides the international biennial, duty free art storage is probably the most important contemporary active form for art. It’s like the dystopian backside of the biennial, at a time when liberal dreams of globalization and cosmopolitanism have been realized as a multipolar mess peopled with oligarchs, warlords, too-big-to-fail corporations, dictators, and lots of newly stateless people.10

In the late twentieth century, globalization was described as a formula: the value of civil society multiplied by the internet divided by migration, metropolitan urbanism, the power of NGOs, and other forms of transnational political organization.11 Saskia Sassen characterized those activities as “citizen practices that go beyond the nation.”12 The internet was still full of hope and people believed in it. This was long ago.

The organizational forms pioneered by human rights NGOs and liberal women’s rights campaigns are now deployed by oligarch-funded fascist battalions, GoPro jihadi units, displaced dudes playing Forex exchanges, and internet trolls posing as feng shui Eurasians.13 In their wake, para-statelets and anti-“terrorist” operation zones emerge alongside duty free zones, offshore entities, and corporate proxy concessions.14 At the same time, horizontal networks are turned into global fiber-optic surveillance: the planetary civil war is fought by engaging with the logistic disruptions of planetary computerization. Contemporary cosmopolitans do not fail to promptly engage in civil warfare whenever the chance presents itself. Every digital tool imaginable is put to work: bot armies, Western Union, Telegram,15 PowerPoint presentations, jihadi forum gamification16—whatever works. Stasis acts as a mechanism that converts the “cosmo” of “cosmopolitan” into “corporate” and the polis into property.

The corresponding institutional model for art is freeport art storage, built on tax-exempt status and tactical extraterritoriality. Children of Men shows how this model could become a template for public institutions amid the effects of planetary civil war, securing artworks to the point of withdrawal. While the international biennial was the active form of art for late twentieth-century ideas of globalization, duty free art storage and the terror-proof hypersecure bunker are its equivalent in the age of globalizing stasis and pop-up NATO fence borders. But this is not a necessary or inevitable outcome.



Consider how Guernica was hung during a previous global civil war.

Guernica was made for the Spanish Republic’s pavilion at the 1937 World Expo in Paris, to show the results of airstrikes on civilian populations. In terms of conservation, this was a lousy decision indeed. The painting was hung more or less outdoors for quite some time.

In the future projected by Children of Men, Picasso’s painting finds shelter from the mayhem of war in a private dining room. The painting might be “safe,” and it certainly enjoys a climate-controlled atmosphere, but very few people will see it. In the historical civil war, however, a completely opposite decision was made: to expose the painting, to literally put it out there. After all, in French and other Latin languages, a show is called an “exposition.” Not an imposition.17

In terms of conservation, the scenario in Children of Men is contradictory, because the first thing that has to be conserved or even created is a situation where art can be seen and accessed. Why is this so? Because art is not art if it cannot be seen. And if it is not art, there is no point in conserving it. More than the artworks themselves, the thing that’s threatened by the institutional response to civil war—be it privatization or overprotection—is public access. But it is public access, to a certain degree, that makes art what it is in the first place, thus necessitating its conservation. Hence the contradiction: art requires visibility to be what it is, and yet this visibility is precisely what is threatened by efforts to preserve or privatize it.

But there is something wrong here. The Spanish Republic’s pavilion is, after all, an example from 1937. Am I not lapsing into bad old nostalgic Zombie Marxism here? Isn’t this repetition as farce?

The answer is no. Let’s come back to Edge of Tomorrow to see how it solves the problem of the loop. It offers an unexpected solution to the problem of stasis, to escaping from history-as-repetition. The movie is based on the novel All You Need Is Kill by Hiroshi Sakurazaka, which built a narrative out of the experience of hitting the reset button on a video-game console. So it is no coincidence that the movie narrates the impasse of a gamer being stuck, unable to complete a given level. But gamers are used to this: it is their mission to get to the next level. A gamer is not a reenactor. She doesn’t derive pleasure from having to play the same level over and over again or endlessly reenacting historical models. She will go online and look up a forum to figure out how to beat the level and move on. In gaming (most games at least) there is an exit for each level, each repeated sequence, each loop. Most likely there is a weapon or a tool hidden in some cupboard, and this can be used to vanquish whatever enemy and complete the level. Edge of Tomorrow not only maintains that there is a tomorrow, but that we are positioned at its edge, that it is possible to complete the level and to break free from the loop. Gaming can evolve into playing. And here, the ambiguity of “play” is helpful. On the one hand, play is about rules, which must be mastered if one is to proceed. On the other, play is also about the improvised creation of new, common rules. So reenactment is scrapped in favor of gaming moving towards play, which may or may not be another form of acting.

What does all this mean for the museum? First of all, one could say that history only exists if there is a tomorrow—if tanks remain locked up within historical collections and time moves on. The future only happens if history doesn’t occupy and invade the present. The museum must render the tank useless upon entry, the way old cannons are filled with cement before being displayed in parks. Otherwise, the museum becomes an instrument for prolonging stasis by preserving the tyranny of a partial, partisan history, which also turns out to be a great business opportunity.

But what does this have to do with the Spanish pavilion? It’s very simple. There was one detail I didn’t mention but which is very obvious if you think about it. In 1937, Guernica was new. It was a newly commissioned artwork dealing with the present. The curators didn’t pick Desastres de la Guerra by Goya or another historical work, even though it might have fit perfectly too. They commissioned new pieces and educational setups to speak about the present. To reactivate that model, one has to do the same. If one wants to reactivate this history, it needs to be different. On the next level. With new works. In the present. This is a huge endeavor of course, one that goes far beyond the task of the museum as it is usually understood. It enters into the project of re-creating not only the city, but society itself. And here, we again encounter the idea of play. To play is to re-actualize the rules as one goes along. Or to create rules that demand new actualization every time. There is a continuum between games and play. Both need rules. On one end of the spectrum there is a looped form. On the other, an open one.18

To summarize these ideas about museums, history, and the planetary civil war: history only exists if there is a tomorrow. And, conversely, a future only exists if the past is prevented from permanently leaking into the present and if Mimics of all sorts are defeated. Consequently, museums have less to do with the past than with the future: conservation is less about preserving the past than it is about creating the future of public space, the future of art, and the future as such.





2


How to Kill People:

A Problem of Design


I saw the future. It was empty. A clean slate, flat, designed through and through.

In his 1963 film How to Kill People, designer George Nelson argues that killing is a matter of design, next to fashion and homemaking. Nelson states that design is crucial in improving both the form and function of weapons. It deploys aesthetics to improve lethal technology.



An accelerated version of the design of killing recently went on trial in this city. Its old town was destroyed, expropriated, in parts eradicated. Young locals claiming autonomy started an insurgency. Massive state violence squashed it, claimed buildings, destroyed neighborhoods, strangled movement, hopes for devolution, secularism, and equality. Other cities fared worse. Many are dead. Elsewhere, operations were still ongoing. No, this city is not in Syria. Not in Iraq either. Let’s call it the old town for now. Artifacts found in the area date back to the Stone Age.

The future design of killing is already in action here.

It is accelerationist, articulating soft- and hardwares, combining emergency missives, programs, forms and templates. Tanks are coordinated with databases, chemicals meet excavators, social media come across tear gas, languages, special forces and managed visibility.

In the streets children were playing with a dilapidated computer keyboard thrown out onto a pile of stuff and debris. It said “Fun City” in big red letters. In the twelfth century, one of the important forerunners of computer technology and cybernetics had lived in the old town. The scholar Al-Jazari devised many automata and pieces of cutting-edge engineering.1 One of his most astonishing designs is a band of musical robots floating on a boat in a lake, serving drinks to guests. Another one of his devices is seen as anticipating the design of programmable machines.2 He wrote the so-called “Book of Knowledge of Ingenious Mechanical Devices,” featuring dozens of inventions in the areas of hydropower, medicine, engineering, timekeeping, music, and entertainment. Now, the area where these designs were made is being destroyed.

Warfare, construction and destruction literally take place behind screens—under cover—requiring planning and installation. Blueprints were designed. Laws bent and sculpted. Minds both numbed and incited by the media glare of permanent emergency. The design of killing orchestrates military, housing, and religiously underpinned population policies. It shifts gears across emergency measures, land registers, pimped passions, and curated acts of daily harassment and violence. It deploys trolls, fiduciaries, breaking news, and calls to prayer. People are rotated in and out of territories, ranked by affinity to the current hegemony. The design of killing is smooth, participatory, progressing and aggressive, supported by irregulars and occasional machete killings. It is strong, brash, striving for purity and danger. It quickly reshuffles both its allies and its enemies. It quashes the dissimilar and dissenting. It is asymmetrical, multidimensional, overwhelming, ruling from a position of aerial supremacy.

After the fighting had ended, the curfew continued. Big white plastic sheets were covering all entrances to the area to block any view of the former combat zones. An army of bulldozers was brought in. Construction became the continuation of warfare by other means. The rubble of the torn-down buildings was removed by workers brought in from afar; it was rumored to be partly dumped into the river and partly stored in highly guarded landfills far from the city center. Parents were said to dig in secret for the bodies of their missing children, who had joined the uprising and were unaccounted for. Some remnants of barricades still remained in the streets, soaked with the smell of dead bodies.



Special forces roamed about arresting anyone who seemed to be taking pictures. “You can’t erase them,” said one. “Once you take them they are directly uploaded to the cloud.”



A 3D render video of reconstruction plans was released while the area was still under curfew. Render ghosts patrol a sort of tidied gamescape built in traditional-looking styles, omitting signs of the different cultures and religions that had populated the city since antiquity. Images of destruction are replaced with digital renders of happy playgrounds and Haussmannized walkways by way of misaligned wipes.

The video uses wipes to transition from one state to another, from present to future, from elected municipality to emergency rule,3 from working-class neighborhood to prime real estate. Wipes as a filmic means are a powerful political symbol. They show displacement by erasure, or more precisely, replacement. They clear one image by shoving in another and pushing the old one out of sight. They visually wipe out the initial population, the buildings, elected representatives, and property rights in order to “clear” the space and fill it with a more convenient population, a more culturally homogeneous cityscape, a more aligned administration and homeowners. According to the simulation, the void in the old town would be intensified by expensive, newly built developments rehashing bygone templates, rendering the city as a site for consumption, possession, and conquest. The objects of this type of design are ultimately the people and, as Brecht said, their deposition (or disposal, if deemed necessary). The wipe is the filmic equivalent of this. The design of killing is a permanent coup against the non-compliant part of the people, against resistant human systems and economies.





So, where is this old town? It is in Turkey: Diyarbakir, the unofficial capital of the Kurdish-populated regions. Worse cases exist all over the region. The interesting thing is not that these events happen. They happen all the time, continuously. The interesting thing is that most people think that they are perfectly normal. Disaffection is part of the overall design structure, as well as the feeling that all of this is too difficult to comprehend and too specific to unravel. Yet this place seems to be designed as a unique case that just follows its own rules, if any. It is not included in the horizon of a shared humanity; it is designed as a singular case, a small-scale singularity.4

So let’s take a few steps back to draw more general conclusions. What does this specific instance of the design of killing mean for the idea of design as a whole?

One could think of Martin Heidegger’s notion of being-toward-death (Dasein zum Tode), the embeddedness of death within life. Similarly, we could talk in this case about “Design zum Tode,” or a type of design in which death is the all-encompassing horizon, founding a structure of meaning that is strictly hierarchical and violent.5

But something else is blatantly apparent as well, and it becomes tangible through the lens of filmic recording. Imagine a bulldozer doing its work recorded on video. It destroys buildings and tears them to the ground. Now imagine the same recording being played backwards. It will show something very peculiar, namely a bulldozer that actually constructs a building. You will see that dust and debris will violently contract into building materials. The structure will materialize as if sucked from thin air with some kind of Brutalist vacuum cleaner. In fact, the process you see in this imaginary video is very similar to what I described; it is a pristine visualization of a special variety of creative destruction.

Shortly before World War I, the sociologist Werner Sombart coined the term “creative destruction” in his book War and Capitalism.6 During World War II, the Austrian economist Joseph Schumpeter labeled creative destruction “the essential fact about capitalism.”7 Schumpeter drew on Karl Marx’s description of capitalism’s ability to dissolve all sorts of seemingly solid structures and force them to constantly upgrade and renew, both from within and without. Marx emphasized that “creative destruction” was still primarily a process of destruction.8 However, the term became popular within neoliberal ideologies as a sort of necessary internal cleansing process to keep up productivity and efficiency. Its destructivism echoes in both futurism and contemporary accelerationism, both of which celebrate some kind of mandatory catastrophe.

Today, the term “creative disruption” seems to have taken the place of creative destruction.9 Automation of blue- and white-collar labor, artificial intelligence, machine learning, cybernetic control systems or “autonomous” appliances are examples of current so-called disruptive technologies, violently shaking up existing societies, markets, and technologies. This is where we circle back to Al-Jazari’s mechanical robots, predecessors of disruptive technologies. Which types of design are associated with these technologies, if any? What are social technologies of disruption? How are Twitter bots, trolls, leaks, and blanket internet shutdowns deployed to accelerate autocratic rule? How do contemporary robots cause unemployment, and what about networked commodities and semi-autonomous weapons systems? How about widespread artificial stupidity, dysfunctional systems, and endless hotlines from hell? How about the oversized Hyundai and Komatsu cranes and bulldozers, ploughing through destroyed cities, performing an absurd ballet mécanique, punching through ruins, clawing through social fabric, erasing lived presents and eagerly building blazing emptiness?

Disruptive innovation is causing social polarization through the decimation of jobs, mass surveillance, and algorithmic confusion. It facilitates the fragmentation of societies by creating antisocial tech monopolies that spread bubbled resentment, change cities, magnify shade, and maximize poorly paid freelance work. The effects of these social and technological disruptions include nationalist, sometimes nativist, fascist, or ultra-religious mass movements.10 Creative disruption, fueled by automation and cybernetic control, runs in parallel with an age of political fragmentation. The forces of extreme capital, turbocharged with tribal and fundamentalist hatred, reorganize within financial and filter bubbles.

In modernist science fiction, the worst kinds of governments used to be imagined as a single artificial intelligence remote-controlling society. Today’s real existing proto- and para-fascisms, however, rely on decentralized artificial stupidity. Bot armies, like-farms, and meme magick form the gut brains of political sentiment, manufacturing shitstorms that pose as popular passion. The idea of technocratic fascist rule—supposedly detached, omniscient, and sophisticated—is realized as a barrage of dumbed-down tweets. Democracy’s demos is replaced by a mob on mobiles11 capturing people’s activities, motion, and vital energies. But in contrast to the modernist dystopias, current autocracies do not rely on the perfection of such systems. They rather thrive on their permanent breakdown, dysfunctions, and so-called “predictive” capacities creating havoc.

Time seems especially affected by disruption. Think back to the reversed bulldozer video: the impression of creative destruction only comes about because time was reversed and is running backwards. After 1989, Jacques Derrida dramatically declared that time was “out of joint” and basically running amok. Writers like Francis Fukuyama thought history had somehow petered out. Jean-François Lyotard described the present as a succession of explosion-like shocks, after which nothing in particular happened.12 Simultaneously, logistics reorganized global production chains, trying to montage disparate shreds of time to maximize efficiency and profit. Echoing cut-and-paste aesthetics, the resulting fragmented time created large-scale havoc for people who had to organize their own lives around increasingly impossible, fractured, and often unpaid work hours and schedules.

Added to this is a dimension of time that is no longer accessible to humans, but only to networked so-called control systems that produce flash crashes and high frequency trading scams. Financialization introduces a host of further complications: the economic viability of the present is sustained by debt, that is, by future income claimed, consumed, or spent in the present. Thus on the one hand futures are depleted, and on the other, presents are destabilized. In short, the present feels as if it is constituted by emptying out the future to sustain a looping version of a past that never existed. Which means that for at least parts of this trajectory, time indeed runs backwards, from an emptied-out future to nurturing a stagnant imaginary past, sustained by disruptive design.

Disruption shows in the jitter in the ill-aligned wipes of the old town’s 3D render. The transition between present and future is abrupt and literally uneven: frames look as if jolted by earthquakes. In replacing a present urban reality characterized by strong social bonds with a sanitized digital projection that renders population replacement, disruptive design shows grief and dispossession thinly plastered over with an opportunist layer of pixels.

Warfare in the old town is far from being irrelevant, marginal or peripheral, since it shows a singular form of disruptive design, a specific design of killing, a special form of wrecked cutting-edge temporality. Futures are hastened, not by spending future incomes, but by making future deaths happen in the present; a sort of application of the mechanism of debt to that of military control, occupation, and expropriation.

While dreaming of the one technological singularity that will once and for all render humanity superfluous, disruption as a social, aesthetic, and militarized process creates countless little singularities, entities trapped within the horizons of what autocrats declare as their own history, identity, culture, ideology, race, or religion; each with their own incompatible rules, or more precisely, their own incompatible lack of rules.13 “Creative disruption” is not just realized by the wrecking of buildings and urban areas. It refers to the wrecking of a horizon of common understanding, replacing it by narrow, parallel, top-down, trimmed and bleached artificial histories.

This is exactly how processes of disruption might affect you, if you live somewhere else that is. Not in the sense that you will necessarily be expropriated, displaced or worse. This might happen or not, depending on where (and who) you are. But you too might get trapped in your own singular hell of a future repeating invented pasts, with one part of the population hell-bent on getting rid of another. People will peer in from afar, conclude they can’t understand what’s going on, and keep watching cat videos.

What to do about this? What is the opposite design, a type of creation that assists pluriform, horizontal forms of life, and that can be comprehended as part of a shared humanity? What is the contrary to a procedure that inflates, accelerates, purges, disrupts, and homogenizes; a process that designs humanity as a uniform, cleansed, and allegedly superior product, a super-humanity comprised of sanitized render ghosts?

The contrary is a process that doesn’t grow via destruction, but very literally de-grows constructively. This type of construction is not creating inflation, but devolution. Not centralized competition but cooperative autonomy. Not fragmenting time and dividing people, but reducing expansion, inflation, consumption, debt, disruption, occupation, and death. Not superhumanity; humanity as such would perfectly do.

A woman had stayed in the old town on her own throughout the curfew to take care of her cow, who lived in the back stable. Her daughters had climbed through a waterfall in the Roman-era walls every week to supply her with basic needs. They kept being shot at by soldiers. This went on for weeks on end. When we talked to her, the cow had just had a baby. One of the team members was a veterinarian.

Daughter: Our calf is sick. Please come and see.

Vet: Sure, what happened? Is it newborn? Did it get the first milk of its mother?

Mum: No, it didn’t get the colostrum. There was no milk. The labor was difficult. It started five times over and stopped again.

Daughter: The other calf reached first and drank all the milk, we didn’t realize it.

Daughter: Mum, where is the calf?

Mum: [calls into the stable] Where is it? My little pistachu, where are you?





3


The Terror of Total Dasein:

Economies of Presence in the Art Field


The International Artists’ Strike in 1979 was a “protest against the ongoing repression of the art system and the alienation of artists from the results of their work.” Djordjevic mailed invitations to numerous artists around the world, asking if they would be willing to take part in the general strike. He received thirty-nine mainly unsupportive responses from the likes of Sol LeWitt, Lucy Lippard, and Vito Acconci. Susan Hiller replied: “I have, in fact, been on strike all summer, but it has not changed anything and I am anxious to begin work again, which I shall do very soon.”1

Dear Goran, Thanks for your letter. Personally I am already on strike of producing any new form in my work since 1965 (i.e. 14 years). I don’t see what I could do more—Best Regards (Daniel) Buren.2



When legendary conceptual artist Goran Djordjevic tried to rally artists to go on a general art strike in 1979, some of them responded that they were on strike already—i.e. did not produce work or new work. But it made no difference whatsoever. Clearly, at the time, this seems to have confounded received ideas of what a strike was and how it worked. A strike was supposed to drain needed labor power from employers who would then need to make concessions to workers’ demands. But in the art field things were different.

Today, the artists’ reactions seem obvious. No one working in the art field expects his or her labor to be irreplaceable or even mildly important anymore. In the age of rampant self-employment, or rather self-unemployment, the idea that anyone would care for one’s specific labor power seems rather exotic.

Of course, labor in the art field has always been different from labor in other areas. One of the current reasons, however, might be that the contemporary economy of art relies more on presence than on more traditional ideas of labor power tied to the production of objects. Presence as in physical presence, as in attendance or being-there in person. Why would presence be so desirable? The idea of presence invokes the promise of unmediated communication, the glow of uninhibited existence, a seemingly unalienated experience and authentic encounter between humans. It implies that not only the artist but everyone else is present too, whatever that means and whatever it is good for. Presence stands for allegedly real discussion, exchange, communication, the happening, the event, liveness, the real thing—you get the idea.

In addition to delivering works, artists, or more generally content providers, nowadays have to perform countless additional services, which slowly seem to become more important than any other form of work. The Q&A is more important than the screening, the live lecture more than the text, the encounter with the artist more important than the one with the work. Not to speak about the jumble of quasi-academic and social media PR formats that multiply the templates in which unalienated presence is supposed to be delivered. The artist has to be present, as in Marina Abramović’s eponymous performance. And not only present, but exclusively present, present for the first time, or in some other hyperventilating capacity of newness. Artistic occupation is being redefined as permanent presence. But in the endless production of seemingly singular events, the serial churning out of novelty and immediacy, the happening of the event is also a general performance, as Sven Lütticken called it, a quantifiable measure of efficiency and total social labor.

The economy of art is deeply immersed in this economy of presence. The market economy of art has its own economy of presence which revolves around art fairs, with their guest lists, VIP areas and performative modes of access and exclusion on every level. People have been saying that previews of mega-shows have become completely inadequate for HNWIs (high-net-worth individuals). Really important people are only present for the pre-preview.

There are some rational reasons for an economy of physical human presence in the art field: the physical presence of people is, on average, cheaper than the presence of works that need to be shipped, insured and/or installed. Presence puts so-called butts on seats and thus provides legitimacy to cultural institutions competing for scarce funding. Institutions sell tickets or even access to people—this is usually done in the scope of para-academic formats like masterclasses or workshops—and capitalize on people’s desire to widen their networks or add contacts. In a word, presence can be easily quantified and monetized. It’s a thing that few people get paid for and a lot of people pay for, and is thus rather profitable.

But presence also means permanent availability without any promise of compensation. In the age of the reproducibility of almost everything physical, human presence is one of the few things that cannot be multiplied indefinitely, an asset with some inbuilt scarcity. Presence means to be engaged or occupied with an activity but not hired or employed. It means more often than not to be locked down in standby mode, as a reserve element for potential engagement, part of a crowd of extras to provide stochastic weight.

Interestingly enough, the demand for total presence and immediacy arises from mediation; or more precisely from the growing range of tools of communication, including the internet. It is not opposed to technology but its consequence.

According to William J. Mitchell, the economy of presence is characterized by a technologically enhanced market for attention, time, and movement—a process of investment that requires careful choices.3 The point is that technology gives you tools for remote and delayed presence, so that physical presence is just one option and probably the scarcest one. According to Mitchell: “Presence choice occurs when an individual decides whether face-to-face presence is worth the time and money.” Presence in fact becomes a mode of investment.

But the economy of presence is not only relevant for people whose time is in demand and who could basically sell (or barter) more time than they have; it is even more relevant to those who have to work multiple jobs in order to make a living, or even not make a living, to those who coordinate a jumble of microjobs, complete with the logistical nightmare of harmonizing competing schedules and negotiating priorities, or to those who are on permanent standby in the hope that their time and presence will become exchangeable for something else eventually. The aura of unalienated, unmediated, and precious presence depends on a temporal infrastructure that consists of fractured schedules and dysfunctional, collapsing just-in-time economies in which people frantically try to figure out reverberating asynchronicities and the continuous breakdown of riff-raff timetables. It’s junk-time, broken down, kaput on any level. Junktime is wrecked, discontinuous, distracted and runs on several parallel tracks. If you tend to be in the wrong place at the wrong time, and if you even manage to be in two wrong places at the same wrong time, it means you live within junktime. With junktime any causal link is scattered. The end is before the beginning and the beginning was taken down for copyright violations. Anything in between has been slashed because of budget cuts. Junktime is the material base of the idea of pure unmediated endless presence.

Junktime is exhausted, interrupted, dulled by ketamine, Lyrica, corporate imagery. Junktime happens when information is not power, but comes as pain. Acceleration is yesterday’s delusion. Today you find yourself crashed and failing. You try to occupy the square or bandwidth but who is going to pick up the kid from school? Junktime depends on velocity, as in the lack thereof. It is time’s substitute: its crash-test dummy.

So how does junktime relate to a cult of presence? Here is a question for all the philosophers out there—and it concerns the title of this talk.

The question is: is this cult of presence revitalizing Heideggerian ideas about Dasein in the age of task rabbits and Amazon Turkers? Is the cult of an embodied and engaged presence that cannot be copied and pasted an expression of the relentless quantification of everything within most contemporary occupations? Is it going hand in hand with the body count performed by institutions to prove their perceived importance by attendance numbers while simultaneously harvesting visitors’ data and preferences? Is the fragmented junktime of multiple occupations, the necessity of multiplying and juggling scraps and shreds of time, creating the conditions for some kitsch ideal of an unalienated uninterrupted radiating endless mindful awful Anwesenheit?

If some of you agree, I suggest calling this text: The Terror of Total Dasein. It sounds like an early movie by Christoph Schlingensief.

Let’s come back to the topic of strike. In an economy of presence a strike necessarily takes on the form of absence. But since the kind of presence I have tried to describe is in fact a range of grades of withholding absence, the absence that tries to oppose it also inversely has to integrate some form of presence. It might need to take on the form of a range of strategic withdrawals, or what Autonomia Operaia called absenteeism.

Let me describe a very simple model situation: A strike could take the form of a work called “The Artist is Absent” in which there would be just a laptop on the table with a prerecorded and looped stare, or rather an animated GIF of her. This is kind of banal, but then again the audience would equally be represented by similar props, because frankly it hasn’t got much time either. Or, actually, the much more elegant and dare I say standard solution for managing the economy of presence and making actual and real-life presence choices is to check your email or Twitter feed while pretending to simultaneously listen to me. In this case you are using yourself, more precisely your own body, as a stand-in or proxy or placeholder, while actually you go about your junktime commitments, which I think is perfectly fine as a form of absence management.

And I also think this is already a form of evasion of the terror of total Dasein.

This small example shows the role of proxies and stand-ins in a situation in which presence is basically required in multiple places simultaneously, but is physically impossible. And this is where techniques of evasion, doubling, dazzle and subterfuge set in. They open up to a proxy politics, a politics of the stand-in and the decoy.

A stand-in or proxy is a very interesting device. It could be a body double or a stunt double. A scan or a scam. An intermediary in a network. A bot or a decoy. Inflatable tanks or text dummies. A militia deployed in proxy warfare. A template. A readymade. A vectorized bit of stock imagery. All these devices have just one thing in common: they help out with classic dilemmas arising from an economy of presence.

Here is a small example of such a device. It is one of the simplest examples of a desktop proxy and quite widespread. Everyone has seen this generic sample text:

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.



Developed as a printers’ font sample, the design proxy Lorem Ipsum was integrated into standard desktop publishing software as a random text dummy. It became a cornerstone of text-based digital industries and their forms of ADHD occupation.

Why is it used? Because maybe there is no copy. Perhaps the text has not yet been written or aggregated. Or there is no time or money to fill the space at all. Perhaps the writer is dead or asleep or busy on a different tab. In the meantime the space has to be designed. Advertisements have been sold already. The deadline swiftly approaches. This is when Lorem Ipsum swings into action. It is a dummy providing yet another extension, catering to a demand for eternal and relentless presence.

But Lorem Ipsum is not only a dummy. One can also understand it as a text. It is a fragment of a treatise on ethics by Cicero called “On the Ends of Good and Evil.”4 In this treatise, different definitions of goods and evils are compared. And this precise fragment deals with pain—or rather a shortened version of it, namely “(pa)in itself.”

Let’s focus on the meaning of the original sentence. It reads: “Neque porro quisquam est, qui dolorem ipsum, quia dolor sit, amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt, ut labore et dolore magnam aliquam quaerat voluptatem.” Which means: “Neither is there anyone who loves, pursues or desires pain itself because it is pain, but there can be cases where labor and pain can procure some great pleasure.” So basically it is about sucking it up for some greater good to arrive later. It is a classical case of deferred gratification, which would later constitute one of the moral pillars of the Protestant work ethic of capitalism.

But what actually does the Lorem Ipsum version mean? It has been cut up to take away the gratification altogether. It translates:

… in itself because it is pain, but there can be cases where labor and pain can procure some great…



The Lorem Ipsum version has blithely cut off pleasure or reward from Cicero’s sentence. There is no more gratification. So now you are not enduring pain for some greater good to come thereafter, but just enduring it without actually knowing why. There may just as well be no outcome, no product, no pay, no end. In Lorem Ipsum pain is not a means to an end, it just so happens.

Junktime, the fragmented time of networked occupation, is to continuous time as Lorem Ipsum is to its original. Its fragments are scrambled, cut up, shut up and confused in their sequence, spoiling the glow of the uninterrupted flow of text and meaning. And every time I read Lorem Ipsum’s mutilated jumble I cannot help thinking of Cicero’s head and hands cut off and ending up being nailed to the rostra on the Forum Romanum following his assassination.

There is a variation of Lorem Ipsum on the website of the Berghain gay sex club Laboratory. It shows some interesting differences from the standard Lorem Ipsum. First of all, it appears in the rules of the club, so the Lorem Ipsum sentences actually become a code of conduct.5

There are quite a few changes to the standard Cicero mash-up. The word pleasure, or a variation thereof, has been reintroduced. It also goes on to praise the virtues of physical exercise, which makes total sense in a place that has an athlete fetish party on offer. This version loops back between pain, toil as pleasure, and physical exercise or sports.

The sex club’s rules of conduct become an extremely stressful-sounding set of instructions in which the pursuit of pleasure, labor, and physical exercise forms an endless loop: you have to find pleasure through work, then work out and have sex, in this order and without any break. Then repeat. It sounds like the junktime version of Churchill’s famous quip: if you are going through hell, just keep going. Except now there is no exit, and if you keep going it just means there will be more hell ahead.

But the Lorem Ipsum rules of engagement could also be read differently, in the sense that the mix of pleasure, sports, and pain is so exhausting that one would rather send a proxy or dummy or Lorem Ipsum itself—to have all the sex, pain, toil, and sports on one’s behalf. Because, frankly, to keep going in this mode is just too time consuming, and, additionally, it might become slightly cumbersome to check your emails while you are doing it. So just leave it to Lorem Ipsum to take care of it on your behalf and manage your absenteeism.

Perhaps the preoccupation with stock footage, serialized stock photography of commodities, all sorts of templates for creative labor, copy and paste, aggregation, but also the fascination with corporate aesthetics and the corporation as proxy could all be seen as potentially responding to the need to be absent. All these are proxies that one can use on behalf of oneself or one’s work. Is this some sort of applied absenteeism? A sneaky boycott of constant presence? Using stock footage and templates is kind of the equivalent of periodically saying “awesome” in order to pretend one is listening to an annoying conversation while one has left behind laser-cut stand-up displays to fake participation and attendance in several places at the same time.

The point is: people use proxies in order to deal with the terror of total Dasein, or an economy of presence based on the technologically amplified scarcity of human attention and physical presence.

Even strike-organizer Djordjevic started pursuing a form of proxy politics after the failed art strike. He stopped making art under his own name. Years later he reemerged as a technical assistant for a certain Walter Benjamin’s lecture tours, and has kind of represented him ever since. Whether Benjamin himself is on strike is not known.





4


Proxy Politics: Signal and Noise


A while ago I met an extremely interesting developer. He was working on smartphone camera technology. Photography is traditionally thought to represent what is out there by means of technology, ideally via an indexical link. But is this really true anymore? The developer explained to me that the technology for contemporary phone cameras is quite different from traditional cameras: the lenses are tiny and basically rubbish, which means that about half of the data being captured by the camera sensor is actually noise. The trick, then, is to write the algorithm to clean the noise, or rather to discern the picture from inside the noise.

But how can the camera know how to do this? Very simple: It scans all other pictures stored on the phone or on your social media networks and sifts through your contacts. It analyzes the pictures you already took, or those that are associated with you, and it tries to match faces and shapes to link them back to you. By comparing what you and your network already photographed, the algorithm guesses what you might have wanted to photograph now. It creates the present picture based on earlier pictures, on your/its memory. This new paradigm is being called computational photography.1

The result might be a picture of something that never ever existed, but that the algorithm thinks you might like to see. This type of photography is speculative and relational. It is a gamble with probabilities that bets on inertia. It makes seeing unforeseen things more difficult. It will increase the amount of noise just as it will increase the amount of random interpretation.

And that’s not even to mention external interference into what your phone is recording. All sorts of systems are able to remotely turn your camera on or off: companies, governments, the military. It could be disabled in certain places—one could for instance block its recording function close to protests or conversely broadcast whatever it sees. Similarly, a device might be programmed to autopixelate, erase, or block secret, copyrighted, or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW (Not Suitable/Safe For Work) content, automodify pubic hair, stretch or omit bodies, exchange or collage context, or insert location-targeted advertising, pop-up windows, or live feeds. It might report you or someone from your network to police, PR agencies, or spammers. It might flag your debt, play your games, broadcast your heartbeat. Computational photography has expanded to cover all this.

It links control robotics, object recognition, and machine learning technologies. So if you take a picture on a smartphone, the results are not as premeditated as they are premediated. The picture might show something unexpected, because it might have cross-referenced many different databases: traffic control, medical databases, frenemy photo galleries on Facebook, credit card data, maps, and whatever else it wants.


Relational Photography

Computational photography is therefore inherently political—not in content but in form. It is not only relational but also truly social, with countless systems and people potentially interfering with pictures before they even emerge as visible.2 And of course this network is not neutral. It has rules and norms hardwired into its platforms, and they represent a mix of juridical, moral, aesthetic, technological, commercial, and bluntly hidden parameters and effects. You could end up airbrushed, wanted, redirected, taxed, deleted, remodeled, or replaced in your own picture. The camera turns into a social projector rather than a recorder. It shows a superposition of what it thinks you might want to look like plus what others think you should buy or be. But technology rarely does things on its own. Technology is programmed with conflicting goals and by many entities, and politics is a matter of defining how to separate its noise from its information.3

So what are the policies already in place that define the separation of noise from information, or that even define noise and information as such in the first place? Who or what decides what the camera will “see”? How is it being done? By whom or what? And why is this even important?


The Penis Problem

Let’s have a look at one example: drawing a line between face and butt, or between “acceptable” and “unacceptable” body parts. It is no coincidence that Facebook is called Facebook and not Buttbook, because you can’t have any butts on Facebook. But then how does it weed out the butts? A list leaked by an angry freelancer reveals the precise instructions for building and maintaining Facebook’s face. It confirms what is well known: nudity and sexual content are strictly off-limits, with exceptions for art nudity and male nipples. But it also shows that the policies on violence are much more lax, with even decapitations and large amounts of blood acceptable.4 “Crushed heads, limbs etc are OK as long as no insides are showing,” reads one guideline. “Deep flesh wounds are ok to show; excessive blood is ok to show.” Those rules are still policed by humans, or more precisely by a global subcontracted workforce from Turkey, the Philippines, Morocco, Mexico, and India, working from home, earning around $4 per hour. These workers are hired to distinguish between acceptable body parts (faces) and unacceptable ones (butts). In principle, there is nothing wrong with having rules for publicly available imagery. Some sort of filtering process has to be implemented on online platforms: no one wants to be spammed with revenge porn or atrocities, regardless of there being markets for such imagery. The question concerns where and how to draw the line, as well as who draws it, and on whose behalf. Who decides on signal vs. noise?

Let’s go back to the elimination of sexual content. Is there an algorithm for this, like for face recognition? This question first arose publicly in the so-called Chatroulette conundrum. Chatroulette was a Russian online video service that allowed people to meet on the web. It quickly became famous for its “next” button, for which the term “unlike button” would be much too polite. The site’s audience quickly exploded, reaching 1.6 million users per month by 2010. But then a so-called “penis problem” emerged, referring to the many people who used the service to meet other people naked.5 The winner of a web contest called to “solve” the issue ingeniously suggested running a quick facial recognition or eye tracking scan on the video feeds—if no face was discernible, it would deduce that it must be a dick.6
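As a rough sketch of that logic (not the contest entry itself), one could run a stock face detector over sampled frames and flag any feed that stays faceless for too long. The cascade file, thresholds, and frame counts below are illustrative assumptions.

```python
# Illustrative sketch of the "no face, so probably a dick" heuristic described
# above. Not the contest winner's code; thresholds and counts are made up.
import cv2

# Haar cascade face detector shipped with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def frame_contains_face(frame) -> bool:
    """Return True if at least one frontal face is detected in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def flag_feed(frames, max_faceless_frames=30) -> bool:
    """Flag a video feed if too many consecutive frames show no face."""
    faceless_streak = 0
    for frame in frames:
        if frame_contains_face(frame):
            faceless_streak = 0
        else:
            faceless_streak += 1
            if faceless_streak >= max_faceless_frames:
                return True  # treated as probable unwanted content
    return False
```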

This exact workflow was also used by the British signals intelligence agency GCHQ when it secretly bulk-extracted user webcam stills in its spy program Optic Nerve. Video feeds of 1.8 million Yahoo users were intercepted in order to develop face and iris recognition technologies. But—maybe unsurprisingly—it turned out that around 7 percent of the content did not show faces at all. So—as suggested for Chatroulette—they ran face recognition scans on everything and tried to exclude the dicks for not being faces. It didn’t work so well. In a leaked document GCHQ admits defeat: “There is no perfect ability to censor material which may be offensive.”7

Subsequent solutions became a bit more sophisticated. Probabilistic porn detection calculates the proportion of skin-toned pixels in certain regions of the picture, producing complicated taxonomic formulas, such as this one:

a. If the percentage of skin pixels relative to the image size is less than 15 percent, the image is not nude. Otherwise, go to the next step.

b. If the number of skin pixels in the largest skin region is less than 35% of the total skin count, the number of skin pixels in the second largest region is less than 30% of the total skin count and the number of skin pixels in the third largest region is less than 30% of the total skin count, the image is not nude.

c. If the number of skin pixels in the largest skin region is less than 45% of the total skin count, the image is not nude.

d. If the total skin count is less than 30% of the total number of pixels in the image and the number of skin pixels within the bounding polygon is less than 55 percent of the size of the polygon, the image is not nude.

e. If the number of skin regions is more than 60 and the average intensity within the polygon is less than 0.25, the image is not nude.

f. Otherwise, the image is nude.8
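Transcribed into code, the taxonomy is nothing but a cascade of threshold tests. The following is a literal, illustrative transcription of rules a through f; it assumes an upstream skin-color segmentation step (not shown) has already produced the pixel counts and region statistics it consumes.

```python
# Illustrative transcription of rules (a)-(f) above. The inputs are assumed
# to come from a prior skin-color segmentation step, which is not shown.

def is_nude(total_pixels, skin_pixels, region_sizes,
            polygon_skin_pixels, polygon_size,
            region_count, polygon_avg_intensity):
    """Return True if the image is classified as nude by the heuristic."""
    # (a) too little skin overall
    if skin_pixels / total_pixels < 0.15:
        return False

    largest = sorted(region_sizes, reverse=True)[:3] + [0, 0, 0]
    # (b) skin is scattered across small regions
    if (largest[0] < 0.35 * skin_pixels and
            largest[1] < 0.30 * skin_pixels and
            largest[2] < 0.30 * skin_pixels):
        return False

    # (c) even the largest skin region is comparatively small
    if largest[0] < 0.45 * skin_pixels:
        return False

    # (d) moderate skin count and a sparse bounding polygon
    if (skin_pixels < 0.30 * total_pixels and
            polygon_skin_pixels < 0.55 * polygon_size):
        return False

    # (e) many regions but a dark polygon
    if region_count > 60 and polygon_avg_intensity < 0.25:
        return False

    # (f) otherwise, the image is nude
    return True
```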



But this method got ridiculed pretty quickly because it produced so many false positives, with reported examples including wrapped meatballs, tanks, and machine guns. More recent porn-detection applications use self-learning technology based on neural networks, computational verb theory, and cognitive computation. They do not try to statistically guess at the image, but rather try to understand it by identifying objects through their relations.9

According to developer Tao Yang’s description, there is a whole new field of cognitive vision studies based on quantifying cognition as such, on making it measurable and computable.10 Even though there are still considerable technological difficulties, this effort represents a whole new level of formalization: a new order of images, a grammar of images, an algorithmic system of sexuality, surveillance, productivity, reputation, and computation that links with the grammatization of social relations by corporations and governments.

So how does this work? Yang’s porn-detection system must learn how to recognize objectionable parts by seeing a sizable mass of them in order to infer their relations. So basically you start by installing a lot of photos of the body parts you want eliminated on your computer. The database consists of folders full of body parts ready to enter formal relations. Not only pussy, nipple, asshole, and blowjob, but asshole, asshole/only and asshole/mixed_with_pussy. Based on this library, a whole range of detectors get ready to go to work: the breast detector, pussy detector, pubic hair detector, cunnilingus detector, blowjob detector, asshole detector, hand-touch-pussy detector. They identify fascinating sex-positions such as the Yawning and Octopus techniques, The Stopperage, Chambers Fuck, Fraser MacKenzie, Persuading of the Debtor, Playing of Cello, and Watching the Game (I am honestly terrified of even imagining Fraser MacKenzie).
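As a schematic analogy (not Yang’s actual pipeline), the setup resembles any folder-labeled image-classification workflow: each directory of tagged body parts becomes a class, and a classifier is fitted to the whole library. The model, paths, and hyperparameters below are assumptions for illustration only.

```python
# Schematic analogy only: a folder-per-class image classifier, loosely
# mirroring the "library of body parts" described above. Paths, model,
# and hyperparameters are illustrative; this is not Yang's system.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Each subfolder name becomes a class label, e.g.
#   dataset/asshole_only/..., dataset/asshole_mixed_with_pussy/...
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("dataset", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(dataset.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Whatever the real architecture, such detectors can only ever reproduce the taxonomy of the folders they were trained on.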

This grammar, as well as the library of partial objects, is reminiscent of Roland Barthes’s notion of a “porn grammar,” where he describes the Marquis de Sade’s writings as a system of positions and body parts ready to permutate into every possible combination.11 Yet this marginalized and openly persecuted system could be seen as a reflex of a more general grammar of knowledge deployed during the so-called Enlightenment. Michel Foucault as well as Theodor W. Adorno and Max Horkheimer compared de Sade’s sexual systems to mainstream systems of classification. Both were articulated by counting and sorting, by creating exhaustive, pedantic, and tedious taxonomies. And Mr. Yang’s enthusiasm for formalizing body parts and their relations to one another similarly reflects the huge endeavor of rendering cognition, imaging, and behavior as such increasingly quantifiable and commensurable to a system of exchange value based in data.

Undesirable body parts thus become elements of a new machine-readable image-based grammar that might usually operate in parallel to reputational and control networks, but that can also be linked to them at any time. Its structure might be a reflex of contemporary modes of harvesting, aggregating, and financializing data-based “knowledge” churned out by a cacophony of partly social algorithms embedded into technology.


Noise and Information

But let’s come back to the question we began with: What are the social and political algorithms that clear noise from information? The emphasis, again, is on politics, not algorithms. Jacques Rancière has beautifully shown that this division corresponds to a much older social formula: to distinguish between noise and speech in order to divide a crowd between citizens and rabble.12 If one wants not to take someone else seriously, or to limit their rights and status, one pretends that their speech is just noise, garbled groaning, or crying, and that they themselves must be devoid of reason and therefore exempt from being subjects, let alone holders of rights. In other words, this politics rests on an act of conscious decoding—separating “noise” from “information,” “speech” from “groan,” or “face” from “butt,” and from there neatly stacking the results into vertical class hierarchies.13 The algorithms now being fed into smartphone camera technology to define the image prior to its emergence are similar to this.

In light of Rancière’s proposition, we might still be dealing with a more traditional idea of politics as representation.14 If everyone is aurally (or visually) represented, and no one is discounted as noise, then equality might draw nearer. But the networks have changed so drastically that nearly every parameter of representative politics has shifted. By now, more people than ever are able to upload an almost unlimited number of self-representations. And the level of political participation by way of parliamentary democracy seems to have dwindled in the meantime. While pictures float in numbers, elites are shrinking and centralizing power.

On top of this, your face is getting disconnected—not only from your butt, but also from your voice and body. Your face is now an element—a face/mixed_with_phone, ready to be combined with any other item in the library. Captions are added, or textures, if needs be. Face prints are taken. An image becomes less a representation than a proxy, a mercenary of appearance, a floating texture-surface-commodity. Persons are montaged, dubbed, assembled, incorporated. Humans and things intermingle in ever-newer constellations to become bots or cyborgs.15 As humans feed affect, thought, and sociality into algorithms, algorithms feed back into what used to be called subjectivity. This shift is what has given way to a post-representational politics adrift within information space.16


Proxy Armies

Let’s look at one example of post-representational politics: political bot armies on Twitter. Twitter bots are bits of script that impersonate human activity on social media sites. In large, synchronized numbers they have become formidable political armies.17 A Twitter chat bot is an algorithm wearing a person’s face, a formula incorporated as animated spam. It is a scripted operation impersonating a human operation.

Bot armies distort discussions on Twitter hashtags by spamming them with advertisements, tourist pictures, or whatever. They basically add noise. Bot armies have been active in Mexico, Syria, Russia, and Turkey, where most political parties have been said to operate them. In Turkey, the ruling AKP alone was suspected of controlling 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox, and other celebs: “In order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.”18
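What such a bot does is, at bottom, trivially simple. The sketch below is a schematic simulation (it posts nothing and calls no API) that merely composes the kind of messages described above, pairing a hijacked hashtag and a bit of product placement with a borrowed quote for a veneer of authenticity. Handles and quotes are placeholders.

```python
# Schematic simulation of the hashtag-spamming behavior described above.
# It does not connect to any social media API; it only composes messages.
import random
import time

PERSONAS = ["Hakan43020638", "RobbieWilliamsFan_77", "MeganFoxTR"]  # placeholder handles
HASHTAG = "#twitterturkey"
DECOY_QUOTES = [  # borrowed "authenticity" filler
    "Covenants, without the sword, are but words. - Hobbes",
    "Whatever happens, happens for a reason. - PS: I Love You",
]
AD = "Download Flappy Tayyip now!"  # placeholder product placement

def compose_spam() -> str:
    """Mix the hashtag, an ad, and a decoy quote into one 'authentic' post."""
    persona = random.choice(PERSONAS)
    quote = random.choice(DECOY_QUOTES)
    return f"@{persona}: {quote} {AD} {HASHTAG}"

if __name__ == "__main__":
    for _ in range(5):        # a real bot army would do this by the thousand
        print(compose_spam())
        time.sleep(0.1)       # pacing to mimic human-ish activity
```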

So who do bot armies represent, if anyone, and how do they do it? Let’s have a look at the AKP bots. Robbie Williams, Megan Fox, and Hakan43020638 are all advertising “Flappy Tayyip,” a cell-phone game starring the then AKP prime minister (now president) Recep Tayyip Erdoğan. The objective is to hijack or spam the hashtag #twitterturkey to protest PM Erdoğan’s banning of Twitter. Simultaneously, Erdoğan’s own Twitter bots set out to detourne the hashtag.

Let’s look at Hakan43020638 more closely: a bot consisting of a copy-pasted face plus product placement. It takes only a matter of minutes to connect his face to a body by way of a Google image search. On his business Twitter account it turns out he sells his underwear: he works online as an affective web service provider.19 Let’s call this version Murat, to throw yet another alias into the fray. But who is the bot wearing Murat’s face and who is a bot army representing? Why would Hakan43020638 be quoting Thomas Hobbes of all philosophers? And which book? Let’s guess he’s quoting from Hobbes’s most important work Leviathan. Leviathan is the name of a social contract enforced by an absolute sovereign in order to fend off the dangers presented by a “state of nature” in which humans prey upon one another. With Leviathan there are no more militias and there is no more molecular warfare of everyone against everyone.

But now we are in a situation where state systems grounded in such social contracts seem to fall apart in many places, and nothing is left but a set of policed relational metadata, emoji, and hijacked hashtags. A bot army is a contemporary vox populi, the voice of the people according to social networks. It can be a Facebook militia, your low-cost personalized mob, your digital mercenaries, or some sort of proxy porn. Imagine your photo being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption, and conservative religious ideologies. Post-representative politics are a war of bot armies against one another, of Hakan against Murat, of face against butt.

This may be why the AKP pornstar bots desperately quote Hobbes: they are already sick of the war of Robbie Williams (IDF) against Robbie Williams (Syrian Electronic Army) against Robbie Williams (PRI/AAP); they are sick of retweeting spam for autocrats—and are hoping for just any entity organizing day care, gun control, and affordable dentistry, whether it’s called Leviathan or Moby Dick or even Flappy Tayyip. They seem to say: we’d go for just about any social contract you’ve got!20

Now let us go even one step further. Because a model for this might already be on the horizon. And unsurprisingly, it also involves algorithms.


Blockchain

Blockchain governance seems to fulfill the hopes for a new social contract.21 “Decentralized Autonomous Organizations” would record and store transactions in blockchains akin to the one used to run and validate bitcoin. But those public digital ledgers could equally encode votes or laws. Take for instance bitcongress, which is in the process of developing a decentralized voting and legislation system (www.bitcongress.org). While this could be a model to restore accountability and circumvent power monopolies, it means above all that social rules hardwired with technology emerge as Leviathan 2.0:

When disassociated from the programmers who design them, trustless blockchains floating above human affairs contain the specter of rule by algorithms … This is essentially the vision of the internet techno-leviathan, a deified crypto-sovereign whose rules we can contract to.22



Even though this is a decentralized process with no single entity at the controls, it doesn’t necessarily mean no one controls it. Just like smartphone photography, it needs to be told how to work: by a multitude of conflicting interests. More importantly, this would replace bots as proxy “people” with bots as governance. But then again, which bots are we talking about? Who programs them? Are they cyborgs? Do they have faces or butts? And who is drawing the line? Are they cheerleaders of social and informational entropy? Killing machines? Or a new crowd, of which we are already a part?23
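To make the mechanism tangible, here is a deliberately minimal, illustrative sketch of a vote ledger in the blockchain style: each block commits to its predecessor by hash, so past entries cannot be silently rewritten. It is not bitcongress’s system; it has no network and no consensus mechanism, and it elides everything that makes real blockchains hard.

```python
# Minimal illustrative vote ledger in the blockchain style: each block's
# hash covers the previous block, so tampering with history is detectable.
# A toy, not a real decentralized system (no network, no consensus).
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class VoteLedger:
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "vote": None, "prev_hash": "0" * 64}
        self.chain = [genesis]

    def add_vote(self, voter_id: str, choice: str) -> dict:
        block = {"index": len(self.chain),
                 "timestamp": time.time(),
                 "vote": {"voter": voter_id, "choice": choice},
                 "prev_hash": block_hash(self.chain[-1])}
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Recompute the hash links; any edit to an old block breaks them."""
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = VoteLedger()
ledger.add_vote("citizen-001", "yes")
ledger.add_vote("citizen-002", "no")
assert ledger.is_valid()
```

Even in this toy the political point is visible: whoever writes the add_vote function writes the constitution.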

Let’s come back to the beginning again: How to separate signal from noise? And how does the old political technology of using this distinction in order to rule change with algorithmic technology? In all examples, the definition of noise rested increasingly on scripted operations, on automating representation and/or decision-making. On the other hand, this process potentially introduces so much feedback that representation becomes a rather unpredictable operation that looks more like the weather than a Xerox machine. Likeliness becomes subject to likelihood—reality is just another factor in an extended calculation of probability. In this situation, proxies become crucial semi-autonomous actors.


Proxy Politics

To better understand proxy politics, we could start by drawing up a checklist:

Does your camera decide what appears in your photographs?

Does it go off when you smile?

And will it fire in a next step if you don’t?

Do underpaid outsourced IT workers in BRIC countries manage your pictures of breastfeeds and decapitations on your social media feeds?

Is Elizabeth Taylor tweeting your work?

Have some of your other fans’ bots decided to classify your work as urinary mature porn?

Are some of these bots busily enumerating geographic locations alongside bodily orifices?

Is your total result something like this?





Welcome to the age of proxy politics!





A proxy is “an agent or substitute authorized to act for another person or a document which authorizes the agent so to act” (Wikipedia). But a proxy could now also be a device with a bad hair day. A scrap of script caught up in a dress-code double bind. Or a “Persuading the Debtor” detector throwing a tantrum over genital pixel probability. Or a delegation of chat bots casually pasting pro-Putin hair lotion ads to your Instagram. It could also be something much more serious, wrecking your life in a similar way—sry life!

Proxies are devices or scripts tasked with getting rid of noise as well as bot armies hell-bent on producing it. They are masks, persons, avatars, routers, nodes, templates, or generic placeholders. They share an element of unpredictability—which is all the more paradoxical considering that they arise as result of maxed-out probabilities. But proxies are not only bots and avatars, nor is proxy politics restricted to datascapes. Proxy warfare is quite a standard model of warfare—one of the most important examples being the Spanish Civil War. Proxies add echo, subterfuge, distortion, and confusion to geopolitics. Armies posing as militias (or the other way around) reconfigure or explode territories and redistribute sovereignties. Companies pose as guerillas and legionnaires as suburban Tupperware clubs. A proxy army is made of guns for hire, with more or less ideological decoration. The border between private security, PMCs, freelance insurgents, armed stand-ins, state hackers, and people who just got in the way has become blurry. Remember that corporate armies were crucial in establishing colonial empires (the East India Company among others) and that the word “company” itself is derived from the name for a military unit. Proxy warfare is a prime example of a post-Leviathan reality.

Now that this whole range of activities has gone online, it turns out that proxy warfare is partly the continuation of PR by different means.24 Besides marketing tools repurposed for counterinsurgency ops there is a whole range of government hacking (and counter-hacking) campaigns that require slightly more advanced skills. But not always. As the leftist Turkish hacker group Redhack reported, the password of the Ankara police servers was 12345.25

To state that online proxy politics is reorganizing geopolitics would be similar to stating that burgers tend to reorganize cows. Indeed, just as meatloaf arranges parts of cows with plastic, fossil remnants, and elements formerly known as paper, proxy politics positions companies, nation-states, hacker detachments, FIFA, and the Duchess of Cambridge as equally relevant entities. Those proxies tear up territories by creating netscapes that are partly unlinked from geography and national jurisdiction.

But proxy politics also works the other way. A simple default example of proxy politics is the use of proxy servers to try to bypass local web censorship or communications restrictions. Whenever people use VPNs and other internet proxies to escape online restrictions or conceal their IP address, proxy politics is given a different twist. In countries like Iran and China, VPNs are very much in use.26 In practice though, in many countries, companies close to censor-happy governments also run the VPNs in an exemplary display of efficient inconsistency. In Turkey, people used even more rudimentary methods—changing their DNS settings to tunnel out of Turkish dataspace, virtually tweeting from Hong Kong and Venezuela during Erdoğan’s short-lived Twitter ban.
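In code, such proxy use is almost banal. The sketch below routes a web request through a proxy server with the Python requests library; the proxy address is a placeholder, and in censored networks the real difficulty is finding a proxy that is reachable, fast, and not itself run by the censor.

```python
# Illustrative only: fetching a blocked page through a proxy server.
# The proxy address is a placeholder (a TEST-NET address); use one you trust.
import requests

PROXIES = {
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}

def fetch_via_proxy(url: str) -> str:
    """Request the URL through the proxy instead of the local gateway."""
    response = requests.get(url, proxies=PROXIES, timeout=10)
    response.raise_for_status()
    return response.text

# Example: a site blocked locally may still load via the proxy.
# print(fetch_via_proxy("https://twitter.com")[:200])
```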

In proxy politics the question is literally how to act or represent by using stand-ins (or being used by them)—and also how to use intermediaries to detourne the signals or noise of others. And proxy politics itself can also be turned around and redeployed. Proxy politics stacks surfaces, nodes, terrains, and textures—or disconnects them from one another. It disconnects body parts and switches them on and off to create often astonishing and unforeseen combinations—even faces with butts, so to speak. They can undermine the seemingly mandatory decision between face or butt or even the idea that both have got to belong to the same body. In the space of proxy politics, bodies could be Leviathans, hashtags, juridical persons, nation-states, hair-transplant devices, or freelance SWAT teams. Body is added to bodies by proxy and by stand-in. But these combinations also subtract bodies (and their parts) and erase them from the realm of never-ending surface to face enduring invisibility. In the end, however, a face without a butt cannot sit. It has to take a stand. And a butt without a face needs a stand-in for most kinds of communication. Proxy politics happens between taking a stand and using a stand-in. It is in the territory of displacement, stacking, subterfuge, and montage that both the worst and the best things happen.





5


A Sea of Data: Apophenia and Pattern (Mis-)Recognition




This is an image from the Snowden files. It is labeled “secret.”1 Yet one cannot see anything on it. This is exactly why it is symptomatic.

Not seeing anything intelligible is the new normal. Information is passed on as a set of signals that cannot be picked up by human senses. Contemporary perception is machinic to a large degree. The spectrum of human vision only covers a tiny part of it. Electric charges, radio waves, light pulses encoded by machines for machines are zipping by at slightly subluminal speed. Seeing is superseded by calculating probabilities. Vision loses importance and is replaced by filtering, decrypting, and pattern recognition. Snowden’s image of noise could stand in for a more general human inability to perceive technical signals unless they are processed and translated accordingly.

But noise is not nothing. On the contrary, noise is a huge issue, not only for the NSA but for machinic modes of perception as a whole.

Signal v. Noise was the title of a column on the internal NSA website running from 2011 to 2012. It succinctly frames the NSA’s main problem: how to extract “information from the truckloads of data”: “It’s not about the data or even access to the data. It’s about getting information from the truck-loads of data … Developers, please help! We’re drowning (not waving) in a sea of data—with data, data everywhere, but not a drop of information.”2

Analysts are choking on intercepted communication. They need to unscramble, filter, decrypt, refine, and process “truckloads of data.” The focus moves from acquisition to discerning, from scarcity to overabundance, from adding on to filtering, from research to pattern recognition. This problem is not restricted to secret services. Even WikiLeaks’s Julian Assange states: “We are drowning in material.”3


Apophenia

But let’s return to the initial image. The noise on it was actually decrypted by GCHQ technicians to reveal a picture of clouds in the sky. British analysts have been hacking video feeds from Israeli drones since at least 2008, a period which includes the recent IDF aerial campaigns against Gaza.4 But no images of these attacks exist in Snowden’s archive. Instead, there are all sorts of abstract renderings of intercepted broadcasts. Noise. Lines. Color patterns.5 According to leaked training manuals, one needs to apply all sorts of massively secret operations to produce these kinds of images.6

But let me tell you something. I will decrypt this image for you without any secret algorithm. I will use a secret ninja technique instead. And I will even teach you how to do it for free. Please focus very strongly on this image right now.



Doesn’t it look like a shimmering surface of water in the evening sun? Is this perhaps the “sea of data” itself? An overwhelming body of water, which one could drown in? Can you see the waves moving ever so slightly?

I am using a good old method called apophenia.

Apophenia is defined as the perception of patterns within random data.7 The most common examples are people seeing faces in clouds or on the moon. Apophenia is about “drawing connections and conclusions from sources with no direct connection other than their indissoluble perceptual simultaneity,” as Benjamin Bratton recently argued.8

One has to assume that, sometimes, analysts also use apophenia.

Someone must have seen the face of Amani al-Nasasra in a cloud. The forty-three-year-old was blinded by an aerial strike in Gaza in 2012 while sitting in front of her TV:

“We were in the house watching the news on TV. My husband said he wanted to go to sleep, but I wanted to stay up and watch Al Jazeera to see if there was any news of a ceasefire. The last thing I remember, my husband asked if I changed the channel and I said yes. I didn’t feel anything when the bomb hit—I was unconscious. I didn’t wake up again until I was in the ambulance.” Amani suffered second degree burns and was largely blinded.9



What kind of “signal” was extracted from what kind of “noise” to suggest that al-Nasasra was a legitimate target? Which faces appear on which screens, and why? Or to put it differently: Who is “signal,” and who disposable “noise”?


Pattern Recognition

Jacques Rancière tells a mythical story about how the separation of signal and noise might have been accomplished in Ancient Greece. Sounds produced by affluent male locals were defined as speech, whereas women, children, slaves, and foreigners were assumed to produce garbled noise.10 The distinction between speech and noise served as a kind of political spam filter. Those identified as speaking were labeled citizens and the rest as irrelevant, irrational, and potentially dangerous nuisances. Similarly, today, the question of separating signal and noise has a fundamental political dimension. Pattern recognition resonates with the wider question of political recognition. Who is recognized on a political level and as what? As a subject? A person? A legitimate category of the population? Or perhaps as “dirty data”?

What is dirty data? Here is one example:

Sullivan, from Booz Allen, gave the example of the time his team was analyzing demographic information about customers for a luxury hotel chain and came across data showing that teens from a wealthy Middle Eastern country were frequent guests.

“There were a whole group of 17-year-olds staying at the properties worldwide,” Sullivan said. “We thought, ‘That can’t be true.’”11



The data was dismissed as dirty data—messed up and worthless sets of information—before someone found out that, actually, it was true.

Brown teenagers, in this worldview, are likely to exist. Dead brown teenagers? Why not? But rich brown teenagers? This is so improbable that they must be dirty data and cleansed from your system! The pattern emerging from this operation to separate noise and signal is not very different from Rancière’s political noise filter for allocating citizenship, rationality, and privilege. Affluent brown teenagers seem just as unlikely as speaking slaves and women in the Greek polis.

On the other hand, dirty data is also something like a cache of surreptitious refusal; it expresses a refusal to be counted and measured:

A study of more than 2,400 UK consumers by research company Verve found that 60% intentionally provided wrong information when submitting personal details online. Almost one quarter (23%) said they sometimes gave out incorrect dates of birth, for example, while 9% said they did this most of the time and 5% always did it.12



Dirty data is where all of our refusals to fill out the constant onslaught of online forms accumulate. Everyone is lying all the time, whenever possible, or at least cutting corners. Not surprisingly, the “dirtiest” area of data collection is consistently pointed out to be the health sector, especially in the US. Doctors and nurses are singled out for filling out forms incorrectly. It seems that health professionals are just as unenthusiastic about filling out forms for systems designed to replace them as consumers are about performing clerical work for corporations that will spam them in return.

In his book The Utopia of Rules, David Graeber gives a profoundly moving example of the forced extraction of data. After his mom suffered a stroke, he went through the ordeal of having to apply for Medicaid on her behalf:

I had to spend over a month … dealing with the ramifying consequences of the act of whatever anonymous functionary in the New York Department of Motor Vehicles had inscribed my given name as “Daid,” not to mention the Verizon clerk who spelled my surname “Grueber.” Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected.13



Graeber goes on to call this an example of utopian thinking. Bureaucracy is based on utopian thinking because it assumes people to be perfect from its own point of view. Graeber’s mother died before she was accepted into the program.

The endless labor of filling out completely meaningless forms is a new kind of domestic labor in the sense that it is not considered labor at all and is assumed to be provided “voluntarily” or performed by underpaid so-called data janitors.14 Yet all the seemingly swift and invisible action of algorithms—their elegant optimization of everything, their recognition of patterns and anomalies—is based on the endless and utterly senseless labor of providing or fixing messy data.

Dirty data is simply real data in the sense that it documents the struggle of real people with a bureaucracy that exploits the uneven distribution and implementation of digital technology.15 Consider the situation at LaGeSo (the Health and Social Affairs Office) in Berlin, where refugees are risking their health on a daily basis by standing in line outdoors in severe winter weather for hours or even days just to have their data registered and get access to services they are entitled to (for example money to buy food).16 These people are perceived as anomalies because, in addition to having had the audacity to arrive in the first place, they ask that their rights be respected. There is a similar political algorithm at work: people are blanked out. They cannot even get to the stage of being recognized as claimants. They are not taken into account.

On the other hand, technology also promises to separate different categories of refugees. IBM’s Watson AI system was experimentally programmed to potentially identify terrorists posing as refugees:

IBM hoped to show that the i2 EIA could separate the sheep from the wolves: that is, the masses of harmless asylum-seekers from the few who might be connected to jihadism or who were simply lying about their identities …

IBM created a hypothetical scenario, bringing together several data sources to match against a fictional list of passport-carrying refugees. Perhaps the most important dataset was a list of names of casualties from the conflict gleaned from open press reports and other sources. Some of the material came from the Dark Web, data related to the black market for passports; IBM says that they anonymized or obscured personally identifiable information in this set …

Borene said the system could provide a score to indicate the likelihood that a hypothetical asylum-seeker was who they said they were, and do it fast enough to be useful to a border guard or policeman walking a beat.17



The cross-referencing of unofficial databases, including dark-web sources, is used to produce a “score,” which calculates the probability that a refugee might be a terrorist. The hope is for a pattern to emerge across different datasets, without actually checking how or if they correspond to any empirical reality. This example is actually part of a much larger subset of “scores,” credit scores, academic ranking scores, scores ranking interaction on online forums, etc., which classify people according to financial interactions, online behavior, market data, and other sources. A variety of inputs are boiled down to a single number—a superpattern—which may be a “threat” score or a “social sincerity score,” as planned by Chinese authorities for every single citizen within the next decade. But the input parameters are far from being transparent or verifiable. And while it may be seriously desirable to identify Daesh moles posing as refugees, a similar system seems to have worrying flaws.
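The arithmetic behind such a “superpattern” is usually nothing more exotic than a weighted sum over heterogeneous inputs. The sketch below is a generic, invented illustration of that boiling-down (not IBM’s or anyone else’s actual model); features and weights are arbitrary.

```python
# Generic illustration of boiling heterogeneous inputs down to one "score".
# Features and weights are invented; no real scoring system is reproduced here.

WEIGHTS = {
    "name_matches_casualty_list": 0.6,   # cross-reference against open-source lists
    "passport_seen_on_dark_web": 0.3,    # black-market document data
    "story_inconsistent": 0.1,           # interview metadata
}

def threat_score(features: dict) -> float:
    """Weighted sum of graded features, clipped into [0, 1]."""
    raw = sum(WEIGHTS[name] * float(value) for name, value in features.items())
    return min(max(raw, 0.0), 1.0)

applicant = {
    "name_matches_casualty_list": 0,
    "passport_seen_on_dark_web": 1,
    "story_inconsistent": 0.5,
}
print(threat_score(applicant))  # 0.35 -- a single number standing in for a person
```

The opacity criticized above lives in the weights: change them, and the same person scores differently.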

The NSA’s SKYNET program was trained to find terrorists in Pakistan by sifting through cell-phone customer metadata. But experts criticize the NSA’s methodologies. “There are very few ‘known terrorists’ to use to train and test the model,” explained Patrick Ball, a data scientist and director of the Human Rights Data Analysis Group, to Ars Technica. “If they are using the same records to train the model as they are using to test the model, their assessment of the fit is completely bullshit.”18
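Ball’s objection can be illustrated in a few lines: a model scored on the records it was fitted to can look near-perfect while having learned nothing that generalizes. The sketch below uses scikit-learn on synthetic, signal-free data purely to show the gap between the two evaluations.

```python
# Illustration of Patrick Ball's point: evaluating on the training records
# makes a model look far better than it is. Synthetic data, scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # random "metadata" features
y = rng.integers(0, 2, size=1000)    # labels with no real signal at all

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)

print("accuracy on training records:", accuracy_score(y_train, model.predict(X_train)))
print("accuracy on held-out records:", accuracy_score(y_test, model.predict(X_test)))
# The first number is ~1.0, the second ~0.5: the apparent "fit" was memorization.
```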

The Human Rights Data Analysis Group estimates that around 99,000 Pakistanis might have ended up wrongly classified as terrorists by SKYNET, a statistical margin of error that may have had deadly consequences given the fact that the US is waging a drone war on suspected militants in the country, and between 2,500 and 4,000 people are estimated to have been killed since 2004: “In the years that have followed, thousands of innocent people in Pakistan may have been mislabelled as terrorists by that ‘scientifically unsound’ algorithm, possibly resulting in their untimely demise.”19

One needs to emphasize strongly that SKYNET’s operations cannot be objectively assessed, since it is not known how its results were utilized. It was most certainly not the only factor in determining drone targets.20 But the example of SKYNET demonstrates just as strongly that a “signal” extracted by assessing correlations and probabilities is not the same as an actual fact, but is determined by the inputs the software uses to learn, and the parameters for filtering, correlating, and “identifying.” The old engineering wisdom “crap in—crap out” still seems to apply. In all of these cases—as completely different as they are technologically, geographically, and also ethically—some version of pattern recognition was used to classify groups of people according to political and social parameters. Sometimes it is as simple as trying to avoid registering refugees at all. Sometimes there is more mathematical mumbo jumbo involved. But many of the methods used are opaque, partly biased, exclusive, and—as one expert points out—sometimes also “ridiculously optimistic.”21


Corporate Animism

How to recognize something in sheer noise? A striking visual example of pure and conscious apophenia was recently demonstrated by research labs at Google:22

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10–30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.23



Neural networks were trained to discern edges, shapes, and a number of objects and animals and then applied to pure noise. They ended up “recognizing” a rainbow-colored mess of disembodied fractal eyes, mostly without lids, incessantly surveilling their audience in a strident display of conscious pattern overidentification.





Google DeepDream images.

Source: Mary-Ann Russon, “Google DeepDream robot: 10 weirdest images produced by AI ‘inceptionism’ and users online,” ibtimes.co.uk, July 6, 2015.



Google researchers call the act of creating a pattern or an image from nothing but noise “inceptionism” or “deep dreaming.” But these entities are far from mere hallucinations. If they are dreams, those dreams can be interpreted as condensations or displacements of the current technological disposition. They reveal the networked operations of computational image creation, certain presets of machinic vision, its hardwired ideologies and preferences.

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana. By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.24
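The procedure quoted here is gradient ascent on the input rather than on the weights: freeze the network, pick a target (a class such as “banana,” or a layer’s activations), and nudge the pixels to increase it. Below is a minimal, illustrative PyTorch version of that loop; the pretrained model, class index, and step sizes are assumptions, and real inceptionism adds the multi-scale processing and smoothness priors omitted here.

```python
# Minimal sketch of the "tweak noise toward what the net considers X" loop
# described above: gradient ascent on the image, with the network frozen.
# Simplified; real DeepDream/inceptionism adds jitter, octaves, and priors.
import torch
from torchvision import models

model = models.googlenet(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 954                 # ImageNet index commonly listed for "banana"
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]   # how "banana" the image looks
    (-score).backward()                      # ascend by descending the negative
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)                   # keep pixels in a valid range
```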



In a feat of genius, inceptionism manages to visualize the unconscious of prosumer networks: images surveilling users, constantly registering their eye movements, behavior, preferences, aesthetically helplessly adrift between Hundertwasser mug knockoffs and Art Deco friezes gone ballistic. Walter Benjamin’s “optical unconscious” has been upgraded to the unconscious of computational image divination.25

By “recognizing” things and patterns that were not given, inceptionist neural networks eventually end up effectively identifying a new totality of aesthetic and social relations. Presets and stereotypes are applied, regardless of whether they “apply” or not: “The results are intriguing—even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes.”26

But inceptionism is not just a digital hallucination. It is a document of an era that trains smartphones to identify kittens, thus hardwiring truly terrifying jargons of cutesy into the means of production.27 It demonstrates a version of corporate animism in which commodities are not only fetishes but morph into franchised chimeras.

Yet these are deeply realist representations. According to György Lukács, “classical realism” creates “typical characters” insofar as they represent the objective social (and in this case technological) forces of our times.28

Inceptionism does that and more. It also gives those forces a face—or more precisely, innumerable eyes. The creature that stares at you from your plate of spaghetti and meatballs is not an amphibian beagle. It is the ubiquitous surveillance of networked image production, a form of memetically modified intelligence that watches you in the shape of the lunch that you will Instagram in a second if it doesn’t attack you first. Imagine a world of enslaved objects remorsefully scrutinizing you. Your car, your yacht, your art collection observes you with a gloomy and utterly desperate expression. You may own us, they seem to say, but we are going to inform on you. And guess what kind of creature we are going to recognize in you!29


Data Neolithic

But what are we going to make of automated apophenia?30 Are we to assume that machinic perception has entered its own phase of magical thinking? Is this what commodity enchantment means nowadays: hallucinating products? It might be more accurate to assume that humanity has entered yet another new phase of magical thinking. The vocabulary deployed for separating signal and noise is surprisingly pastoral: data “farming” and “harvesting,” “mining” and “extraction,” are embraced as if we were living through another massive Neolithic revolution31 with its own kind of magic formulas.

All sorts of agricultural and mining technologies—developed during the Neolithic—are reinvented to apply to data. The stones and ores of the past are replaced by silicon and rare earth minerals, while a Minecraft paradigm of extraction describes the processing of minerals into elements of information architecture.32

Pattern recognition was an important asset of Neolithic technologies too. It marked the transition between magic and more empirical modes of thinking. The development of the calendar by observing patterns in time enabled more efficient irrigation and agricultural scheduling. Storage of cereals created the idea of property. This period also kick-started institutionalized religion and bureaucracy, as well as managerial techniques including laws and registers. All these innovations also impacted society: hunter and gatherer bands were replaced by farmer-kings and slaveholders. The Neolithic revolution was not only technological but also had major social consequences.

Today, expressions of life as reflected in data trails become a farmable, harvestable, minable resource managed by informational biopolitics.33

And if you doubt that this is another age of magical thinking, just look at the GCHQ training manual for unscrambling hacked drone intercepts. As you can see, you need to bewitch the files with a magic wand.



File browsing menu of ImageMagick, a free image converter.

Source: ISUAV Video Descrambling, Anarchist training Module 5, GCHQ manual leaked by Edward Snowden.



The supposedly new forms of governance emerging from these technologies look partly archaic and partly superstitious. What kind of corporate/state entities are based on data storage, image unscrambling, high-frequency trading, and Daesh Forex gaming? What are the contemporary equivalents of farmer-kings and slaveholders, and how are existing social hierarchies radicalized through examples as vastly different as tech-related gentrification and jihadi online forum gamification? How does the world of pattern recognition and big-data divination relate to the contemporary jumble of oligocracies, troll farms, mercenary hackers, and data robber barons supporting and enabling bot governance, Khelifah clickbait, and polymorphous proxy warfare? Is the state in the age of DeepMind, Deep Learning, and Deep Dreaming a Deep State™? One in which there is neither appeal nor due process against algorithmic decrees and divination?

But there is another difference between the original and the current type of “Neolithic,” and it harks back to pattern recognition. In ancient astronomy, star constellations were imagined by projecting animal shapes into the skies. After cosmic rhythms and trajectories had been recorded on clay tablets, patterns of movement started to emerge. As additional points of orientation, some star groups were likened to animals and heavenly beings. However, progress in astronomy and mathematics happened not because people kept believing there were animals or gods in space, but on the contrary, because they accepted that constellations were expressions of a physical logic. The patterns were projections, not reality. While today statisticians and other experts routinely acknowledge that their findings are mostly probabilistic projections, policymakers of all sorts conveniently ignore this message. In practice you become coextensive with the data-constellation you project. Social scores of all different kinds—credit scores, academic scores, threat scores—as well as commercial and military pattern-of-life observations, impact the real lives of real people, both reformatting and radicalizing social hierarchies by ranking, filtering, and classifying.


Gestalt Realism

But let’s assume we are actually dealing with projections. Once one accepts that the patterns derived from machinic sensing are not the same as reality, information definitely becomes available with a certain degree of veracity.

Let’s come back to Amani al-Nasasra, the woman blinded by an aerial attack in Gaza. We know: the abstract images recorded as intercepts of IDF drones by British spies do not show the aerial strike in Gaza that blinded her in 2012. The dates don’t match. There is no evidence in Snowden’s archive. There are no images of this attack, at least as far as I know. All we know is what she told Human Rights Watch. This is what she said: “I can’t see—ever since the bombing, I can only see shadows.”34

So there is one more way to decode this image. It’s plain for everyone to see. We see what Amani cannot see.



In this case, the noise must be a “document” of what she “sees” now: “the shadows.”

Is this a document of the drone war’s optical unconscious? Of its dubious and classified methods of “pattern recognition”? And if so, is there a way to ever “unscramble” the “shadows” Amani has been left with?





6


Medya: Autonomy of Images


In a work called Auge/Maschine, Harun Farocki coined the term “suicide camera.” Auge/Maschine shows cameras mounted to the tips of missiles during the first Gulf War. The camera would broadcast live until it exploded. But contrary to all expectations, the camera was not destroyed in this operation. Instead it burst into billions of small cameras, tiny lenses embedded into cell phones. The camera from the missile exploded into shards that penetrated people’s lives, feelings, and identities, skimming their ideas and payments.

The camera on the missile tip was supposed to identify and track objects. But as it was itself destroyed, it multiplied. It is now not only identifying and tracking objects, but also the devices embedded into them, their owners, their motions and emotions, as well as most of their actions and communications. If the cameras in the tips of the missiles were suicide cameras, the ones in cell phones are zombie cameras, cameras that failed at dying.

But what if not only the cameras exploded but also the images they produced? What if this created a situation in which images were broken to the point of being unintelligible?



Fig. 1. A pillar at Göbekli Tepe, Turkey, showing a vulture, a crane, and a man without a head.



The figure above apparently shows a vulture flying above a headless person. At least this is what archeologists claim. It is difficult to figure out just from looking at it. You can’t really see what they are talking about. It looks like a radioactive chicken. And the strange shape below is supposed to be the guy without a head.

I wanted to see this relief in person, on a pillar dating back 12,000 years. So I went to the Göbekli Tepe complex near Urfa, Turkey, the oldest known ritual structure in the world. It looks somewhat like Stonehenge, only it’s 6,500 years older, and instead of one massive stone-pillar circle there are around twenty, most of them unexcavated. Many of the pillars bear exquisite carvings of scary animals.

But it turned out that the relief I was looking for is not visible on site. One can only see the pillar’s back side; the relief itself is hidden. The only way I could see it was on a cell phone. One has to go online and Google it. Of course you can do that almost everywhere. In so-called reality, however, it is not accessible.

But it was not only me who watched the image. My cell phone was also watching me, my location, and my activities.

In January 2015, the rumble from the battle of Kobanê in Northern Syria could be heard at Göbekli Tepe. In October 2014, the ci