The close of the first decade of the 21st century marked the definitive end of the dream of deregulated global capitalism as a historical horizon of peace and prosperity. From 2008, the so-called Great Recession established new political coordinates characterised by the normalisation of precariousness and the rise of illiberal movements. Similarly, the close of the second decade has been marked by the end of the hopes placed in digital technology as an extra-political remedy, analogous and complementary to the commercial one, for our economic, cultural and social problems.
For at least three decades—from the 1980s to the outbreak of the crisis—the vertigo of social fragility and existential risk associated with global financialisation, labour flexibilisation and the loss of political sovereignty was somehow contained by expectations of economic growth and, above all, technological progress. It is difficult to deny the decomposition of this social programme. The aspiration to restore the lex mercatoria is simply science fiction: it faces not only insurmountable internal limits related to declining profit rates but also, above all, external, material barriers: the environmental impossibility of infinite economic growth. For a whole decade, digital technology replaced the market as the guarantor of those hopes of progress based on spontaneous equilibrium rather than major political interventions.
In 2019, the fragility of this commitment to technological solutionism also became evident. After a series of scandals and judicial setbacks associated with the criminal practices of large corporations such as Google and Facebook, governments and international institutions, from the OECD to the European Commission, converged in pointing out the dangers associated with the digital dominance of a handful of Silicon Valley megacorporations. The specialist literature on digital technology has also registered this turn: assessments of the problems associated with the digital turn, highly peripheral at the beginning of the decade, have become hegemonic, and the repertoire of evils and problems attributed to digital technologies is overwhelming. This catastrophism is surely as unrealistic as the prior optimism. And, in fact, it is noteworthy that one feature that remains unchanged in this dialectic of the apocalyptic and the integrated is its sequential nature. Technologies that concentrate disproportionate hopes or fears appear in succession: virtual reality, social media, blockchain, big data and, in recent times, artificial intelligence (AI).
Traditional debates
The contemporary centrality of AI in technological discourses is noteworthy because it marks a return to prominence after several decades of invisibility. AI occupied a highly prominent place in the first debates regarding the scope, possibilities, risks and implications of digital technology, both in the academic literature and in popular culture. Perhaps the zenith of these discussions was the 1989 publication of The Emperor’s New Mind, a bestseller in which the physicist Roger Penrose marshalled a diverse set of objections to the programme called ‘strong AI’ (roughly, the position of theorists who saw the brain as a ‘biological computer’). Penrose’s work was, in reality, highly speculative, but it is symptomatic of a moment in our recent intellectual history in which the discussion about the degree to which machines could (or could not) replicate human intelligence was thought to teach us something about the nature of our own rationality.
In fact, one of Penrose’s central arguments was a reformulation of a famous thought experiment by the philosopher John Searle known as ‘the Chinese room’. In a 1980 article, whose argument he later revisited in Minds, Brains and Science, Searle imagined a machine apparently capable of translating from Chinese, to the point of being able to pass the Turing test. Inside that computer, however, there was a human being, isolated from the outside world but somehow able to receive messages to translate and to return translated texts. That person did not understand Chinese but had dictionaries and manuals that allowed him to translate the texts he received and thus deceive those requesting the translations. For Searle (and Penrose), the experiment refutes the positions of the most vehement supporters of artificial intelligence: the Chinese room as a whole passes the Turing test (that is, it is able to convince an external observer who speaks Chinese) although none of its components understands Chinese. The key point Searle wanted to underline is that, in human rationality, semantics, and not just syntax, is crucial.
Artificial Intelligence is the technological version of that post-neoliberal authoritarianism. A technological leviathan whose arrival we perceive as inevitable
In general terms, criticism of the most radical positions of AI theorists tends to underline the dialogic dimensions of our rationality, that is, the intersubjective conditions of our intelligence. We class as intelligent those beings with whom we could, in principle, reach an agreement on the justification of certain theoretical, moral or aesthetic judgements. It is a definition that in fact excludes all non-human animals and surely any machine, however effectively either may solve complex problems, in some cases better than people, or pass the Turing test.
In any case, this kind of debate declined throughout the 1990s. First, because traditional or symbolic versions of artificial intelligence lost significance compared to neural networks and other subsymbolic or inductive developments, which monopolised academic and media attention. Second, because the expansion of the Internet completely changed the questions considered relevant in the debates about the relationship between technology and rationality. In the era of global connection, the fundamental question began to focus instead on the extent to which machines transformed human intelligence through relationships of hybridisation. The Internet and social media were often seen as a space for the concurrence of fragments of knowledge that were grouped together to compose a kind of hive mind providing us with access to expanded perceptions and understandings. Few people continued to discuss the possibility of programming human-like digital intelligences because a) distributed relationships seemed much more powerful and promising and b) human rationality itself seemed to be altered by digital prostheses.
Post-theoretic AI
The ultimate expression of this epistemological agnosticism after the heroic era of AI is possibly big data. For many social researchers, the possibility of accessing huge amounts of information from digital networks represents a real epistemological revolution: it provides, due to the size and extent of the data, a new, more objective and precise form of knowledge, the best possible at present. For the first time, we are told, we do not have to choose between large samples and in-depth studies, nor do we face situations where the statistical significance of our calculations is not guaranteed. The volume of data supposedly offsets any other methodological problem: biases, errors, sample design, etc. The big data programme is based on a kind of naive inductivism that promises an extreme reduction of interpretive work through the use of aseptic algorithmic techniques. Indeed, some have occasionally spoken of the ‘end of social theory’ thanks to the direct intelligibility provided by big data.
Criticism of this model, however, underscores the operating loop that ties knowledge-production tools to the results of their application. Not only because information that cannot be mathematised is invisible to big data, but because many of the statistical recipes used are designed for users interacting through social media or decentralised virtual media, meaning that such records may lack interpretative context outside the online space. In other words, big data itself alters our conception of what can be understood as research and knowledge. The objects we consider subject to research correspond to specific physiognomies that change not only the methodology but the underlying social theory.
The appearance of authentic artificial intelligence systems with a practical and daily presence in our lives generates fears or hopes but, comparatively, few epistemological debates about its nature
This is a central problem for understanding the contemporary return of AI, which is closely related to the rise of big data. Not only, of course, because the availability of large data sources has been decisive in the development of contemporary expert systems, but also because AI’s current configuration has inherited big data’s atheoretical aspirations. It is not at all clear what the implications would be if the person in the Chinese room had access not to a set of dictionaries, as in Searle’s original formulation of the problem, but to Google Translate. What is curious is that few people seem concerned by the question.
The appearance in our time of genuine artificial intelligence systems with a practical, everyday presence in our lives generates fears or hopes but, comparatively, few epistemological debates about their nature or their congruity or incongruity with our own rationality. This is noteworthy because the same questions of forty years ago still stand. In the words of Margaret Boden:
“AI’s methodological range is extraordinarily wide. (…) A host of AI applications exist, designed for countless specific tasks and used in almost every area of life, by laymen and professionals alike. Many outperform even the most expert humans. In that sense, progress has been spectacular. But the AI pioneers weren’t aiming only for specialist systems. They were also hoping for systems with general intelligence. (…) Judged by those criteria, progress has been far less impressive” [1].
From that perspective, perhaps the real challenge is not to describe how AI is changing society, taking for granted that such a thing is happening. In reality, that is a risky prediction because, in general, it is extremely difficult to know which technology will be decisive in the medium term. Often, seemingly secondary technologies end up playing a central role and, vice versa, dazzling technologies disappear shortly after their emergence. An interesting theoretical alternative is to reverse the perspective and try to think what kind of social transformations have occurred for us to accept with such naturalness and inevitability the presence of AI systems and set aside the most profound or disturbing gnoseological questions.
AI as a new Leviathan
It would be absurd to underestimate the spectacular advances that have occurred in the field of AI, but, as in the case of big data, we tend to conceal the failures and, above all, the ambiguities of these technologies. The pernicious feedback loops that generate catastrophic failures in the algorithms now applied to important aspects of our communal life—education, the labour market, justice, finance, etc.—are well known [2]. These are structural and persistent problems that stem from operationalisation processes—which, by definition, have a strong interpretative and normative component—and the growing computing power and sophistication of AI not only fails to solve them but can amplify them.
The hegemonic media have for years been obsessed with the effects of AI and robotisation on employment and the public sphere. There has been much talk about the presence of Twitter bots, fake users that are actually generated by software. They are a real electronic threat but, oddly enough, have a correlative in a labour dystopia. Click farms, factories located in Southeast Asia where workers enduring arduous conditions are employed to manually increase the number of likes and followers of their customers using thousands of mobile phones, have been known about for a long time. Something similar happened with the scandal of so-called pseudo-artificial intelligence. It was discovered that certain cutting-edge technology companies offering artificial intelligence services were actually using poorly paid human beings, because it was simply much cheaper to hire precarious workers than to build the technology from scratch. It was a strategy that Amazon itself launched with its Mechanical Turk project, an ultraprecarious crowdsourcing platform designed to find workers to perform simple, low-unit-price tasks requiring a degree of intelligence beyond that of a standard machine. Mechanical Turk is ironic even in its name. It was announced, with great cynicism, as ‘artificial artificial intelligence’. Amazon has, in fact, specialised in these sleight-of-hand manoeuvres: it dazzles us with its proposals to deliver shipments using drones but, in reality, it promotes working conditions in its warehouses and distribution networks that have been described as ‘slavery’ in countries like Germany.
Following a well-known pattern from the beginning of capitalism, contemporary automation is expressed through a dual social mechanism, one of whose dimensions is much more visible than the other. On the one hand, it is an effective way to increase productivity by replacing human workers with machines. But, on the other, and to no lesser extent, automation is a disciplinary strategy aimed at deskilling and controlling human labour. Often, the effect of introducing AI into a workplace is not necessarily the replacement of humans by robots but the ‘robotisation’ of workers: it makes employees lose control over the production process and makes them more easily replaceable. In fact, from the beginning of the recession, automation (or the threat of it) served to curb industrial action through technologically mediated coercive mechanisms. One of the best-known cases is, again, that of Amazon warehouses, in which the supervision of human workers has been almost completely automated through permanent monitoring of work activity, to the point that the machines themselves make recommendations for dismissal.
The ‘network society’ has finally revealed itself as the ideal environment for the thriving of some of the largest monopolies in history
For many years, digital technology was an inseparable travel companion of contemporary free trade. Supporters of commodification decidedly opted for communication networks as a material condition of possibility for financial deregulation, but also as an ideological weapon. The digital environment was seen as a kind, dialogic and non-monetised extension of global markets. In the same way that, according to economic orthodoxy, the market reaches points of equilibrium without the intervention of a regulatory centre, social media would be, from a widely shared perspective, capable of generating stable structures of sociability out of uncoordinated interaction.
Today, we live in a time of the irruption of counter-democratising and illiberal political alternatives in whose agendas the defence of the free market occupies a secondary place, to the extent that it has become subordinate to preserving the hegemony of national elites. From the French National Front to the government programme of Donald Trump, through the supporters of Brexit or the Lega Nord, the radical right’s proposal in the face of the decline of neoliberal globalisation is a commitment to recovering the national political sovereignty snatched away by global markets, combined with policies of ‘order’: measures aimed at curbing social conflict through repression or ideological mobilisation.
Something similar occurs with the technological substitutes of the market. The ‘network society’, the great hope of democratisation and equality over the past decades, has finally revealed itself as the ideal environment for the thriving of some of the largest monopolies in history, digital megacorporations that no government is able to control. Similarly, the image of social media that is becoming increasingly widespread is not that of a promising terrain of increased intelligence but that of a jungle of aggression, panoptic surveillance and fake news. So perhaps the growing public appeal of AI can be seen as part of a reactive political and ideological shift.
The contemporary visibility of AI—which the media systematically portray as an irrepressible irruption—is the digital correlative of the social tensions, resulting from the collapse of the neoliberal utopia, that are fuelling anti-democratic and illiberal political movements. Strong, neo-conservative political figures are legitimised as an alternative to the failure of cosmopolitan sociability in a world perceived as conflictive and threatening. The surrender of freedom or tolerance is the price to be paid for a promise of defence against an indeterminate but terrifying accumulation of global dangers. AI is the technological—to a certain extent, again, extra-political—version of that post-neoliberal authoritarianism. A technological leviathan without bureaucrats whose arrival we perceive—this is the key to its legitimacy—as inevitable. AI requires us, like the emerging radical right, to surrender labour rights, privacy, freedom or political sovereignty. It offers us, in return, a promise of calculability and order in a world of terrifying uncertainties. A promise almost certainly as false as that of the far-right politicians who appeal to the injured narcissism of their voters, but purified of neo-fascist atavisms and adhesions through the language of cyber-fetishism.
References
1 — Margaret A. Boden, Inteligencia artificial, Madrid: Turner, 2017.
2 — Cathy O’Neil, Armas de destrucción matemática, Madrid: Capitán Swing, 2018.

César Rendueles
César Rendueles is a researcher and translator, holds a PhD in philosophy and is currently a professor in the Department of Sociological Theory at Universidad Complutense in Madrid. He has been an associate professor at Universidad Carlos III in Madrid and a visiting professor at Universidad Nacional in Colombia. He has also worked as a cultural projects director at the Círculo de Bellas Artes in Madrid and at the association for alternative culture Ladinamo. His main areas of work are epistemology, political philosophy and the impact of social media and the internet on political action and personal relationships in contemporary societies. He has written several books: Sociofobia (Capitán Swing, 2013), Capitalismo canalla. Una historia personal del capitalismo a través de la literatura (Seix Barral, 2015), and En bruto. Una reivindicación del materialismo histórico y Los (bienes) comunes (Icaria, 2016).