Following Matthew Fuller and Kim Sacks’ discussion at the Matière/Matériau(x)/Medium colloquium on January 23, 2023, Ambre Charpier renewed this invitation to further the conversation with the intention of developing some of the theoretical ideas on filtering, cybernetics, systems theory and critical aesthetic practices. This conversation took place on October 31st 2023.
Ambre Charpier :
As the saying goes when asking what media do: « Media either determine a given social, cultural, or political dimension, or media are themselves determined by the social, cultural, or political. » What does the filter do? What are its most common (mis)conceptions? Since it is both an allegorical term for contemporary aesthetic experience and an actual process, what defines its materiality? And given this, how does it work in everyday life? Is there perhaps an example you can give us?
Matthew Fuller :
A good example is Auto-Tune as a filter. It's something that has become very well known in music culture and something that, if you think about the history of hip hop and related music, has become crucial. Since the '80s, the vocoder has been used to make voices sound strange, with metallic electronic sounds that intensify them, making them somehow alien, or to explore the voice as an instrument. Then, this century, in R&B, you have the production of audibly auto-tuned voices. Auto-Tune was originally used as software to make voices sound better, modifying and evening out the pitch of a voice. But it was a kind of hidden technique; if you were not a particularly good singer, your producer could work on your voice using Auto-Tune, like a plastic surgeon smoothing out the waveform of your throat.
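To make the mechanism concrete, here is a minimal sketch in Python of the basic correction step behind pitch-quantizing filters: snapping a detected pitch to the nearest equal-tempered semitone. This is an illustrative toy, not any actual Auto-Tune code; the function name and the 440 Hz reference tuning are assumptions for the example.

```python
import math

A4 = 440.0  # reference tuning in Hz, an illustrative assumption

def snap_to_semitone(freq_hz):
    """Snap a detected pitch to the nearest equal-tempered semitone,
    the basic 'correction' move behind pitch-quantizing filters."""
    semitones = 12 * math.log2(freq_hz / A4)  # distance from A4 in semitones
    return A4 * 2 ** (round(semitones) / 12)  # quantize, then back to Hz

# A slightly flat A (434 Hz) is pulled up to exactly 440 Hz:
corrected = snap_to_semitone(434.0)
```

The audible "Auto-Tune effect" comes from applying this quantization instantly and without smoothing, so the voice jumps between discrete pitches instead of gliding.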
Artists like T-Pain started using it as a deliberately artificial technique, producing a different texture and colour in the voice and adding a machinic melancholy to its sound. This was also taken up in the so-called sad rap era of about 10 years ago, by artists such as Yung Lean, who reworked masculine voices to make them affective rather than aggressive. This filter has a feeling associated with it, a kind of textural quality that changes the nature of the voice. When we listen to a voice, if the music is gripping, we listen on multiple levels: to feel the voice speaking to us, but maybe also to inhabit the voice, to sing along with it, to remember its refrains, or just to feel that voices are landscapes we inhabit; voices pass through us and make us inhabit vocalisation differently.
Another example would be filters applied to faces. A few years ago there was a discussion of the ‘Instagram Face’ and the way this produced a different kind of beauty norm that circulated among Instagram users. More recently, people have been using a filter called Bold Glamour that also tweaks the face on the basis of symmetry. Both work on different kinds of feature recognition and rework those features that are seen as indexes of different kinds of racialization. These filters work with ideas of beauty and of gender norms, of racial norms and of the presentation of the self. They work with the idea of the artificial, in some way: the face is not communicated transparently or naturally, but is something that you perform. This tweaking of the parameters of self-presentation then becomes interesting in the debates and arguments around the different cultures of use that form around these filters.
So I think these are good examples of the ways in which people inhabit filters as part of their daily life, but also in terms of the notions of themselves that one conjures up as part of life. This can be very punitive, it can be repressive, but it can also be a space of experimentation, perhaps with other filters.
Kim Sacks :
During your talk at our colloquium on materiality, you presented a new theoretical framework and perspective on computational media, through the notion of the filter. Anticipating your presentation, Ambre and I discussed how Shannon's mathematical model of communication deeply affected our perception of the act of filtering. So to start this conversation, maybe you could give us your insights on Shannon's work and its effect on media theory and computational theory?
Matthew Fuller :
I think it's worth thinking about different types of filters. First of all, something like a Photoshop filter adds noise, adds blur, sharpens, and so on. So we can understand filters through their embedding in quite old objects that have existed in digital media for decades and are part of everyday working practices in media production. Then you could think of other kinds of filters associated with digital media: those based on machine learning, such as personalization filters in social media or music software.
Those things are quite different; the Photoshop filter generally makes one pass at it and doesn't involve feedback. You could call it a thin kind of filter while the machine learning filter would be something that is deeper, that enrols the user as part of the system. It involves layers of feedback between the user and the system and multiple users. It is something that aggregates multiple acts of use into a wider system.
However, both of these examples rely on the quantization of information. Whether it's turning an image into a set of numerical data, for the Photoshop filter, or turning people's use of social media or music listening into sets of numbers, into quanta, in the case of Spotify, all of them involve this transition. And this is essentially what was done by Shannon's filter, and by numerous techniques in the history of media systems: turning the continuous into something discrete. This is something that marks the relationship of the digital to the analogue. There's a constant movement backwards and forwards in media systems between the discrete and the continuous, and different media systems embody it differently.
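This transition from the continuous to the discrete can be reduced to a few lines of code. The sketch below is an illustrative toy, not any particular codec: a continuous wave is sampled at a handful of points and each value is snapped onto a small grid of discrete levels.

```python
import math

def quantize(samples, bits):
    """Snap continuous values in [-1, 1] onto a grid of 2**bits discrete levels."""
    step = 2.0 / (2 ** bits - 1)  # spacing between adjacent levels
    return [round(s / step) * step for s in samples]

# Sample a continuous sine wave at 8 points, then quantize it to 3 bits:
continuous = [math.sin(2 * math.pi * t / 8) for t in range(8)]
discrete = quantize(continuous, 3)
```

Everything that falls between two grid levels is lost: the rounding error is the "texture" that a given quantization scheme leaves on the material.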
Language, for instance, entails the difference between graphemes and phonemes, between the symbols used to denote letters or sounds and the sounds that are actually made: another kind of relationship between the discrete and the continuous. Alternately, it's found in musical notation as discrete marks on a page compared to the resonances of sound made by musical instruments. All of these are movements backwards and forwards between the discrete and the continuous. Digital technology changes this again, by quantizing in different ways according to specific kinds of algorithm. The texture of particular kinds of algorithm is something that we need to understand when we're understanding the texture of digital media: what it means, what it feels like to be part of it, what it feels like to be processed by it, to use it as a tool but also to be part of it as an environment or an ecology.
All of these different qualities of the movement backwards and forwards between the discrete and the continuous are integral to digital media and processes of design or of cultural work more broadly with it. Thinking about what algorithm you use, what kind of software you use that has particular kinds of qualities is really important for thinking about these kinds of processes. And so we can look at longer term histories of these technologies through particular breakthroughs such as Shannon's and think: how do they construct this relationship between the discrete and the continuous?
Shannon's work takes speech and turns it into a series of numerical values, which are then turned into electrical charges. Filtering these electrical charges filters the numerical values in order to reduce the bandwidth allocated to speech, under economic constraints and limited material resources: passing speech down copper wire, the wire's bandwidth and the variability of human speech. These are brought together in order to make communication possible, but they also entail an abstraction of what communication means: what is audible and meaningful in a voice, what frequencies of sound can be edited out and, therefore, what kind of speech is meaningful.
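The act of editing frequencies out to save bandwidth can be illustrated with the crudest possible low-pass filter, a moving average. This is a sketch of the principle only, not Bell's actual circuitry: averaging neighbouring samples attenuates fast, high-frequency variation, which is exactly what gets filtered away.

```python
def low_pass(samples, window=5):
    """Crude low-pass filter: replace each sample by the average of its
    neighbourhood, attenuating fast (high-frequency) variation."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A rapidly alternating (high-frequency) signal is almost flattened away:
buzzy = [1.0, -1.0] * 8
smoothed = low_pass(buzzy)
```

A slowly varying signal passes through nearly unchanged, while the alternating one is almost erased; the choice of window is the editorial decision about which frequencies "count" as speech.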
Those technical specificities produce certain kinds of textures notably in recording. When hip-hop or other producers are using voice samples from old records or from voices down the phone, the texturing of speech that occurs, by the use of old vinyl, old microphones or odd recording techniques, gives one a sense of those sounds having meaning with their particular qualities or density.
That's something that music as a field is well aware of and can also be seen with graphic design where designers use particular kinds of filters for working with images and typography and so on in a way that works with the specific kinds of texturing of digital media. We can think of Wim Crouwel’s nineteen-sixties New Alphabet typeface in this light for instance, in which he used only horizontal and vertical lines to work with the constraints of the cathode ray screen. Those examples give a sense of the way in which filtering systems at different levels of abstraction have meaning and create textures. They also integrate technical systems that produce and train culture while being shaped by economic forces like Shannon's work that tried to make speech economical for the Bell Telephone Corporation.
Kim Sacks :
Building upon what you said, one of the things that strikes me about Shannon's work, and to go back to the question of his influence, is the premise of his work. He thought he could implement a technical method to make communication perfect. Here, the filtering process is not so much about trying to reconstruct something with loss; rather, the question is under what conditions filtering, together with the encoding system that goes with it, denoises a signal so that the receiver receives a perfect message. An example of this is the CD player. You can drop a CD, smudge your fingers on it, put it in the player, and the music is still readable with its original sound. The message in and of itself is still perfect.
The model he and other mathematicians of the time invented was based on the possibility of removing noise and transmitting encoded data completely perfectly. It's worth noting that Shannon was not only a mathematician but also some sort of designer or inventor. He was known for building strange machines, like a mechanical mouse that could find its way out of mazes or a self-balancing unicycle. He had a real hands-on mindset, and at the very beginning of the cybernetic conferences, when information theory was debated, his views were extremely practical, refusing both a strictly theoretical approach and utopian views. Being pragmatic, his decisions were based on what could be implemented in machines with the technology of the time. This attitude toward conception was, if I remember correctly, already true in his 1938 master's thesis on the analysis of relay switches, but became very obvious in his later work on the mathematics of communication.
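The premise that noise can be engineered away rests on error-correcting codes. The simplest illustration is a repetition code; CDs actually use the far more sophisticated Reed-Solomon codes, so this sketch shows only the principle by which a "scratched" transmission still yields a perfect message.

```python
def encode(bits, n=3):
    """Repetition code: transmit each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def decode(received, n=3):
    """Majority vote per block recovers the message despite isolated errors."""
    out = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        out.append(1 if sum(block) > n // 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1  # flip one transmitted bit: a 'scratch' on the channel
recovered = decode(sent)  # the original message comes back intact
```

The cost is redundancy: three bits are sent for every bit of message. Shannon's theory is precisely about how little redundancy one can get away with for a given level of noise.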
With the three themes we have in this journal in mind, I did a bit of research to see what a genealogy of Shannon's work would look like. First of all, Shannon was obsessed with both George Boole and Gottfried Wilhelm Leibniz. We credit Shannon for the theory of binary digits, but in fact his main point was technical: wondering how to implement it. Boole and Leibniz had already thought of both the machine and the encoding symbolic system that would enable communication, or at least the encoding of data into a simple symbolic system. It's worth noting that one of Shannon's students was Ivan Sutherland, renowned in the design field as the creator of Sketchpad at MIT in the '60s, which paved the way for interfaces. Sutherland was also, I believe, the master's director of Douglas Engelbart, who designed the mouse.
If you trace a genealogy of Shannon's work, you can see that most of human-computer interaction theory, UX/UI design and contemporary design theory are infused with Shannon's heritage in some shape or form. What we now call design thinking is mostly based on Bruce Archer and Christopher Alexander's work, and both of them, Bruce Archer especially, literally quote Shannon. Archer quotes him in his structure of design processes: he takes the feedback loop that Shannon designed and uses it in the marketing feedback loop at the end of the design process.
I won't dwell too much on the notion of art in the Design, art, médias triad, because we published in our journal a fabulous article by David-Olivier Lartigaud that looks back at the cybernetic theorist Abraham Moles' first manifesto of Permutational Art, published in 1961. All of that was published by the philosopher Max Bense, which is really interesting, because what Abraham Moles does is use cybernetics and start to think about its relation to aesthetics, and obviously its computational potential. Clearly, Moles was one of the early advocates of broadening the application of cybernetic theories to other fields, including psychology, but he broadened it to art more than to the others. In different fields, whether it's Gerstner's Designing Programmes, Moles' Art et ordinateur or, on the other side, Umberto Eco's The Open Work, all of them directly quote Abraham Moles, so there is a lineage.
Of course, it all leads back to the notion of media. In media theory, if we look at the exemplary work of McLuhan, I think there are clear proximities to the model of information. Specifically, the idea of hot and cold media and the capacity of an environment to generate a form of noise that would define how media in and of themselves are perceived. I think that if we were to take a deep look, and Ambre knows a lot more about this, at Kittler's relation to Shannon's model, we'd find very close similarities. One of them that comes to mind immediately is the materialist definition of media itself as a means to store, transmit and process information, which comes back in contemporary media theory, for instance in Lev Manovich's early-2000s work (The Language of New Media).
This is more of an open question, but for me it's interesting to think that Boole's view of the symbolic binary system, which Shannon obviously took up, expressed in his early work a real philosophical ambition of reducing the world to a simple dualist system with only two digits, together with Boole's idea, in his book The Laws of Thought, that we could reduce thought itself. So my question to you, Matthew, is: do you think Shannon's view also had a very strong impact on how we perceive the concepts of media theory in a binary form, as a dualist model?
Matthew Fuller :
This is a fantastic condensed history of media, notably of the late modernist period. I would also say that we're in a different historical moment, where the lessons that the people you're talking about learned from this history are different from the ones that people are engaging with now. There will be a way of rereading that history and thinking about it on a slightly different tangent.
Amongst the kinds of machines that Shannon developed, one was called the Ultimate Machine, which simply switches itself off. This simple box was switched on by a human, and an arm would come out of the box and switch it back off. It's a very playful system; it's ludicrous and funny. It plays with the idea of ordering behaviour as a joke, and I think that's an interesting thing to bring out as well. This playful side is something you often find at the beginning of anything.
For instance, if you look at the beginning of the novel in the English language, you have the deconstruction of the novel, like Laurence Sterne's book The Life and Opinions of Tristram Shandy. It is very much an anti-novel. It breaks apart the conventions that later become the norm in the novel, and you could say that Shannon's machine does something of that sort. It makes a perfectly closed system while pointing out the sheer absurdity of it. There is an element of commentary on the idea of perfect communication, or perfectly clear communication.
When we think of the Internet as the space for communicating in the present day, some discourse still carries 1960s-onward utopian views of networks as spaces for perfectly clear communications, in which humans could come together and speak honestly and transparently to each other in a well-reasoned manner, like a kind of idealised Greek agora. This is clearly not what people feel the Internet is now, which is, amongst other things, a space of noise, confusion, hate, misinformation and manipulations of all kinds. You don't go to the Internet expecting pure, reasoned communication anymore. For better or worse, that's where we're at. So I think we need to see the kind of genealogy you develop in the light of the position we're in, and reread it in order to see how it can help us understand that position, but also to understand how our position now differs from what people imagined communication would be in the 1960s.
Take, for instance, this idea of Leibniz introducing binary into Western Europe. Part of the genealogy here is that Leibniz was given a copy of the I Ching, with a description of it, by a Jesuit missionary in China. Leibniz misreads what the I Ching is. He doesn't see it as a book of divination, of guidance, of spiritual practice. He sees it as an example of what he takes to be a binary coding system. So there's a very productive misreading of the I Ching that happens to be absolutely essential to the foundation of these technologies. All the technologies that are based on the idea of transparent communication are thus fundamentally built on an additional layer of misinterpretation. And that's where we're at at the moment: this double picture of clarity on the one hand, and productive misinterpretation on the other.
On Abraham Moles and the idea of a cybernetic art that people of that era had, we can take many different examples of late modernists, from the '60s to the 1980s, say, working on the idea that one could make a kind of Apollonian art that was highly formalised, highly structured and absolutely transparent to its users, to its viewers. It was an interesting proposition, but I don't think it achieved what it claimed or aimed to achieve. If you look at the work of poets in that era, they would also look at the work of interpretation done by the reader and think about the way in which the process of addressing such classically formalist work would need to involve a training of the reader, a process of learning, of refining sensitivities or sensibilities towards such work.
We're now living in the aftermath of the ideals of a purified modernism that allowed for absolutely transparent, absolutely unique, statements, structures, processes that had one meaning. We're in a phase where things are multiply interpreted, multiply generated. They have multiple layers where the most transparent and the clearest ideas swerve in terms of their meaning, especially if they're reread by multiple systems of reading. And you could take this back again to Shannon, when we think we have clear communication and transparent communication between one person and another. We forget that this communication is also being read by the system by which that communication is being made in order to abstract information. Even if there’s a model of clear communication, there's a layer of abstraction which hits it.
Tangibly, the machinic processing of communication is itself an actor in that process. That's where someone like Michel Serres rereads Shannon in The Parasite, to think about noise as co-constitutive of meaning. And that position of multiple intersecting lines of intercommunication seems a more viable reading of the present situation than the linear communication that Shannon ostensibly works with.
The idea of transparent communication and the idea of the filter can be thought of as a political implementation in a technical form. You talked about the notion of the agora, which was the model for McLuhan's ideal networked communication. While it's supposed to be this political space of representativity, it was mostly a separation between the people who had voices and those who made noises, those whose words were meaningful and those merely reduced to sounds. The utopian perspective of the Toronto School was based on the belief that technology could create both a sort of technological nomadism, where communities could sprout freely, and a global phenomenon where all voices would be heard. It completely negated the structural problem of the agora, its architecture and its exclusionary nature. This failure of communication is palpable today, and somehow the differentiation process posed by our current technologies is still articulated in the filtering process: who deserves to be heard, and what are the conditions, and the profit derived from it?
Yes, absolutely. And you indicate who the people excluded from the agora were: slaves and women, and all the people who, in a sense, inherited the long consequences of that. From the history of slavery in European and North American cultures, we're still living, hundreds of years later, through what Christina Sharpe calls the wake of those processes, and some of that is beginning to be acknowledged, worked with and recognized. This question of what is included and what is excluded is foundational to the question of media: at the self-evident level of content filtering, but also in the political constituencies that are being composed in and through media systems.
I think users now recognize the ways in which the filter imposes modes of control. People now know how to tiptoe around online policies dictated by platforms, covering up what they're saying online with tactical wording (notably for monetization purposes) and obscuring content with formal changes in language; ‘grape’ for rape or ‘seggs’ for sex come to mind. By experiencing censorship and flagging, people have had to find ways to play around it, bypassing some technological filters.
The artist duo Eva and Franco Mattes recently showed, in their exhibition titled Fake Views, their now famous “The Bots” videos, in which people who used to be content moderators discuss their former terrible employment conditions while performing makeup tutorials. They explain how they had to manually filter through content while dealing with regular changes in filtering policies. The makeup tutorial format is widely exploited by users to discuss political questions, since it seems like inoffensive content to both bots and content moderators, who skip the middle of the video and only scan the beginning and the end.
Therefore, people can actually have conversations about politics, such as going on strike or criticising platforms, and circulate content that would otherwise get censored, ranging from the political to the pornographic, without actually being banned from these hosting platforms. While they can't fully describe the process and the total logic of those filtering systems, a space of negotiation is still possible because they, well, we, are subjected to their effects and are therefore able to learn from them.
That's absolutely right; it's a great example and a fantastic project that Eva and Franco Mattes did. To go back to the question of transparent communication: if computers set up a fantasy of absolutely transparent communication, with a one-to-one mapping between the signs that represent things and the meanings those signs are taken to have, what we see instead is a more baroque proliferation of meanings. People get around both these indexing systems and the one-to-one mapping that's imposed by state regulators or moral regulators and so on.
I think another example would be the use of characters in Mandarin. In the Chinese context, because of the structure and the material composition of the language, you can create new characters by the composition of different parts of ideograms together.
This allows for a very creative use of language: you can create combinations of symbols that mean something that is not readily apparent, that is not pre-codified into a filter, and that is made on a dynamic or poetic basis by users in order to circulate ideas and meanings, to play with each other, to solicit or to develop perhaps prohibited ideas, or ideas on the borderline of prohibition, in ways that aren't indexable to a direct prohibition. So if the computer provides the fantasy of a list of proscribed words and of a blocking mechanism, the materiality of Mandarin ideograms gives this capacity for the proliferation of new terms, new words and phrases that allow for at least a certain amount of bypassing of these univocal, very prescriptive systems.
Kim Sacks :
I would like to go one step back and discuss misconceptions about how filtering works, because what you're saying about prescriptive systems is based, I think, on the same misconceptions. The fact that Shannon had a truly pragmatic approach, and that the first generation of cybernetics had this practical definition of implementation, fed into the utopian idea that filtering of any kind was technically possible, because signal filtering is technically possible. When you abstract semantics from symbolic systems, basically by separating content from the technical aspects of filtering, filtering becomes, in the minds of many, a simple technical issue that could be solved by technical implementation.
So the belief is that machine learning algorithms could, just by themselves, manage to filter content perfectly, whereas we know that this was not the point of early cybernetics. The point was the technical aspect of signal analysis, not ways of filtering content for its meaning. And what artists show, including in the examples you both gave, is that the aesthetic aspects of those filtering issues tackle the question of filtering and what I like to call filtering ambiguities.
Artists such as Eva and Franco Mattes reveal that the ambiguous part of the filtering process is, literally, semantics. On one hand, we believe that filtering, from a technical standpoint, can be really easy. We like to believe that when we're filtering the signal, deleting the background noise from a video or a sound file on a computer, we can perceive what part is kept and what part is cleaned, for example with low-pass and high-pass filtering. Setting the threshold is actually rather easy when things are this obvious. But when they're not, as with Instagram filters, edge detection or skin pigmentation, things are a bit more complicated and ideological, well, more ambiguous. A purely machine learning example is when Google suggests a pronoun for my name, Kim. It's hard for it to decide whether I'm a he or a she just by syntax analysis. It determines this from a generic model and tries to alleviate the ambiguity just by making a random decision.
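A simple noise gate makes the point about thresholds concrete. This is a toy sketch, not how any platform's filters actually work: when signal and noise levels are clearly separated, the threshold is easy to set; when quiet speech sits at the same level as the hiss, any setting either cuts the voice or keeps the noise.

```python
def noise_gate(samples, threshold):
    """Zero out everything quieter than the threshold: trivial when signal
    and noise are clearly separated, ambiguous when they overlap."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Loud 'voice' samples over faint hiss: here the threshold is easy to pick.
mixed = [0.9, 0.05, -0.8, 0.02, 0.7, -0.03]
clean = noise_gate(mixed, threshold=0.1)
```

The technical operation is one line; deciding what counts as "noise" is where the semantics, and the ideology, come in.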
We’re in a given political situation, in a given complex social system, to paraphrase the systems theorist Jay Forrester. He argues that social systems belong to a class called multiloop nonlinear feedback systems, which nobody understands - even the most talented systems analysts. In this context, the whole idea of self-moderation in social networks, of bypassing filtering systems is basically ways in which artists decide to create a failsafe system, a hack of a system to complex to understand. So, these artists or activists manage to exploit where computers fail, as they rely a lot on non automated filtering to process content because that was not the premise of the whole cybernetics. The whole point was merely technical. So maybe there are two aspects of filtering. Part of what the artists show is that there are multiple aspects to filtering, one of which is highly technical and to what you said earlier had to be rethought in a different way and one that is misconstrued for something that is more machine than actually human, human involvement in social cybernetic systems. And that would be filtering the content, filtering policies, like defining policies and showing to the world that those systems can be bypassed or unfiltered.
A good principle is that, in any system you use, you should be able to find out as much about that system as the system finds out about you. Facial recognition systems are being implemented in various parts of the world, used by different police forces, and have repeatedly been shown to target particular communities, such as Black people, Arab people and others, who are treated especially negatively by those systems, being racialized and criminalised.
Some argue that this is something simply transparent, but the technical is never simply technical. It's always cultural. It's always political. It's always social. Jay Forrester, as a systems theorist, worked on understanding the Vietnam War for the Americans, and part of his work was trying to describe the strategic formation of the Vietnam War. Felicity Scott, in her book Outlaw Territories, looks at how post-cybernetic actors and systems-theoretic claims produced political effects. One aspect of Forrester's work was that it made very visible how systems thinking could be used to extend the powers of imperialist projects like the war in Vietnam, or to give the illusion of clarity that sustained them.
When a model of the world is said to be more complex than the world, or when the model reveals the complexity of the world and is then reimposed on the world to try and produce an order, it's not simply an immediate realisation of an effect. It's something that involves the moment of processing itself as part of that exertion of political power. Being processed itself has a politics. The use of abstraction in defining politics, for instance the classic friend-or-foe relationship in warfare, is another form of filtering. In many contemporary situations, the imposition of a system, the feedback of a system onto what it models, is all about undergoing the experience of being, well, processed. There is a type of ‘hassle politics’ that arises in being processed by the state system or the military system, or in being under the scrutiny of a system of visualisation or filtering: knowing that you are experiencing a system that is evaluating you, classifying you and organising what it is that you use, what it is that you're able to experience. Such processes can be very abrupt or extend through time in a way that exceeds, and diminishes, human life.
So there is, in the modernist aspirations of these systems, a desire to find an adequate mapping of the world and then to use that mapping to order and control the world. It is recognized that this doesn't really work but, at the same time, the capacity to act on those mappings, to impose them or force them on reality, whether through military systems, through classificatory systems of racialization or of gendering, or through economic access, becomes not something done once and for all but an ongoing process of the experience of political formation. It's a dynamic interaction that's often experienced as a form of duress on one hand, and as a kind of invisible empowerment on the other, which people sometimes refer to as privilege. Ease of access and fluency of movement experientially erase difficulty of access, slowness of movement, the torpor of passing time, the inability to access economic resources and so on. These also form part of the (non)experience of filtering, whether in social media, in questions of citizenship and of movement across the world, or in terms of targeting for warfare, as we've seen with the increased computerization of warfare in recent years. The grind of being filtered occurs in order to lubricate the distorted ease of those who are also filtered, but upwards.
Kim Sacks :
I’ve always thought that one of the misconceptions about filtering concerns homeostasis. Feedback is not only important for systems in which you create a loop to stabilise the system as a whole; it can also be used to amplify something, as signal or noise, or, applied to social media or social systems, to amplify the voices being heard. Systems use negative feedback loops to stabilise themselves and positive feedback loops to amplify, and we tend to think that things could break down when too much filtering is applied. But, in real situations, it doesn’t seem very likely that a positive feedback loop would create an amplification big enough to reach a point of self-destruction, where the reciprocity in complex systems, from the user to the service and back to the user, could actually provoke a collapse.
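The distinction between the two kinds of loop mentioned here can be put numerically. The following toy sketch (an editorial illustration, not part of the conversation; the `iterate` function and its gain values are hypothetical) shows negative feedback damping a signal towards equilibrium while positive feedback amplifies it geometrically:

```python
# Toy sketch: a signal fed back into itself each cycle with a fixed gain.
# A negative gain damps the signal (homeostasis); a positive gain amplifies it.

def iterate(signal, gain, steps):
    """Apply the feedback term `gain * signal` for a number of cycles."""
    history = [signal]
    for _ in range(steps):
        signal = signal + gain * signal  # the feedback loop
        history.append(signal)
    return history

damped = iterate(1.0, gain=-0.5, steps=10)    # negative feedback: decays toward 0
amplified = iterate(1.0, gain=0.5, steps=10)  # positive feedback: runaway growth

print(round(damped[-1], 6))    # 0.000977
print(round(amplified[-1], 2)) # 57.67
```

The same input diverges or converges depending only on the sign of the gain, which is the sense in which homeostasis is one possible use of feedback rather than its definition.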
One of the things we often hear about is the difference between a user’s perspective and a service’s perspective. While these huge companies accumulate data and provide ever more extensive datasets to feed their algorithms, bypassing their systems seems necessary to circumvent an imbalance. Simply put, at companies like Amazon, Facebook or Google the data surplus is immeasurable on one side and so small on the user’s side. This imbalance makes me think of feedback loops in relation to buffer size: the bigger the dataset, the less impact user feedback will have on the modification of the system itself, because the user is only a fragment and has little to no leverage. The other way around, a small modification of an algorithm can have a huge impact on the user, who only has a very small buffer size.
Basically, we don’t live in a just-in-time version of the filtering system. We have extremely complex systems, specifically when we’re talking about analysing content in times of war, but from the user’s perspective, one of the frustrations of the society of the filter is that this filtering process is two-tier. There’s a utopian view that everything is possible and transparent, yet people feel as if they have absolutely no control over the feedback they give to huge companies. This small amount can seem worthless compared to the breadth of the systems. From a company’s perspective, the economic impact of harvesting such a huge amount of data from so many users is colossal. But from the user’s perspective, it’s only a tiny piece of the puzzle, one that changes nothing in the system as a whole, as a result of the buffer-size imbalance. There is a disjunction between the user’s frustration at facing inflexible systems and the fact that those systems do, in fact, rely on massive feedback loops from users towards services.
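The asymmetry described here can be stated as a back-of-the-envelope calculation. The numbers below are entirely hypothetical (the user count and the size of the service-side change are illustrative assumptions, not figures from the conversation), but they show why a single user’s feedback is diluted while a single service-side change reaches everyone at once:

```python
# Hypothetical illustration of the buffer-size imbalance: one user's signal
# is averaged into a pool of millions, while one change on the service side
# applies to every user's feed immediately.

N_USERS = 10_000_000  # assumed user base of the service

# A single user shifts their individual signal by 1.0; diluted across the
# aggregate, the effect on the shared model is vanishingly small.
user_effect_on_system = 1.0 / N_USERS

# The service adjusts one ranking parameter by 1%; every user's experience
# is reweighted by that amount at once.
system_effect_on_user = 0.01

print(user_effect_on_system)                          # 1e-07
print(system_effect_on_user / user_effect_on_system)  # 100000.0
```

On these assumed numbers, the service’s leverage over each user exceeds the user’s leverage over the service by five orders of magnitude, which is one way of reading buffer size as an index of power.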
It is actually a great idea to analyse buffer size as an index of power in the present. Maybe there is something to be said about Jean Baudrillard’s idea of the silent majorities, the non-participating and the non-interacting, as a way of thinking about this reaction to an absolute power imbalance.
We jumped ahead to the last question in talking about the idea of transparency and power imbalances. This omnipresent idea affects the modality of people’s interactions online, where people both claim to “have nothing to hide” and are wary of their online presence. Originally this desire for transparency derives from a function of neoliberalism, where information was put into circulation through public scrutiny, then liberalised, commodified and finally monetised. It was also an ideological position for both states and corporations, a way of denoising and clarifying how institutions debate and produce policies.
But this overabundance of information submerges us, at the cost of both its legibility and readability; it’s impossible to process this quantity and, at the same time, everything seems so highly filtered and therefore limited. So there is also a power imbalance in the idea that you can access it but are unable to make sense of its totality. You can only see how and where the filtering of information leads you, especially on the internet. For any user this is overwhelming, because we cannot comprehend those different levels of networked organisation, from their structure to their materiality, which we would need to grasp in order to deliberate on them. The idea of transparency as a concept, I would say, then seems antithetical to the idea of filtering. While transparency is supposed to legitimise social democracies, we’re constantly in the filtering process, obscuring what should or should not be accessed. Or maybe does it work for transparency itself, or perhaps for different types of transparency?
Preparing for this talk, I realised that our students are very attached to the idea of transparency, especially when they’re making an object or product in a computational medium. Although they use this notion, what they really mean is opacity. Their intention is to make the medium disappear, to make the internal process of the medium disappear, so as to keep only the finished message they want to share. This may be a mild criticism, but it is one nonetheless, because in talking about transparency we forget what it costs us.
It’s interesting to look at Édouard Glissant’s discussion of transparency. For him, every form of transparency brings with it its own form of opacity. Even if there is a moment of crystalline clarity, it also produces its own entailments of non-knowledge, of ignorance: what is missed, or what becomes a glitch. Moments of absolute transparency also rely on a large amount of stabilisation. In the classical model of scientific work in the laboratory, a particular kind of variable is isolated, whether within a chemical reaction or a form of behaviour, and then reworked again in order to work out the parameters of that variable, up to the point where it becomes scientifically describable in a way that is understood to be transparent.
What is occluded is that the process generally involves an enormous amount of stabilisation: creating institutions, organising value, training people’s judgement and producing instruments. Transparency is then made possible by designing tools and methods so that something seems transparent, or becomes transparent under certain conditions. We also have to realise that this is extremely valuable, because we need transparency at certain moments in order to diagnose a medical condition or to understand a situation in the world. But we always have to recognise the costs of that transparency: what are the things we are stabilising, what are the things we are writing out of the picture or of the description? So we have to be aware of the limits of each moment of transparency.
At the end of the day, I think we have to take into consideration that this idea of transparency derives from our need and desire for trust. Trusting media can be understood in several ways: via the materialist approach of media theory, but also via a political, Chomsky-esque understanding of mass media. In Manufacturing Consent, written with Edward S. Herman, Chomsky uses the word filtering in the context of a propaganda model of communication. There’s a backlash today, where trust is an actual issue for users, because the distance separating the content from the content creator, the owner of the means of information production from the end user, and public institutions from their citizens is so broad that it causes a growing schism between people and the services they rely on.
It leads back to the utopian ideals of second-order cybernetics, which seem to have failed, where technical systems were thought to be solutions to growing entropy. The premise was the analysis of self-organising systems. And, as the anarchist communes and cultural revolutions of the ’70s have shown, self-organisation seems to be more or less a failure; not necessarily a political failure, but a failure in terms of longevity. Considering our current political framework in relation to filtering, the notion of transparency and the idea of perfect communication is an imperfect answer to the growing distrust of any corporation that holds power over the means of producing content-based systems. This, I think, is key to understanding the importance of self-moderation and of control over one’s own identity on social media.
I guess more self-organised modes of social formation, such as those developed in some anarchist practices, haven’t been around long enough to produce counterfactuals. There aren’t enough test cases to know whether they are failures or not. I think it would be interesting to see some version of this, to actually see what would happen. And when you factor in the amount of resources put into destroying, delegitimising or hampering those attempts, I would say there are real grounds for further experiment. Indeed, to reverse the formulation, and to follow Pyotr Kropotkin’s or David Graeber’s views, contemporary societies essentially depend on a submerged layer of mutual aid, and this needs to find its proper political form. Nevertheless, it’s certainly essential to learn from historical processes and to think about how humans worked with those particular sets of possibilities in particular historical moments.
So, in a way, you’re calling for a new art of reading our contemporary situation, something that Stuart Hall calls the conjuncture. How do we reflect on the present? How do we reflect on all the different forces coming into play at any particular historical moment, and how can we see which dynamics are opening up to make new openings possible? I think, in a way, this is what we need: a new art of seeing the possible in the present. There are so many terrible things, wars, climate damage, economic constriction and the repressive forms of socialisation that political powers are developing. We need a new vocabulary for thinking about openings, one that involves both complex forms of reflexivity and a way of producing openings onto a new texture of the possible and onto new futures.