Artificial Intelligence is a technology used to plan for the future. Planification implies intelligibility, calculability, and systematization. The future as a concept has been, in occidental cultures, closely tied to monotheism and the development of a linear narrative about societies, with a predicted end of the world, where individuals end up either in paradise or hell. This was a radical change from the narratives of classical cultures, where there was no notion of the past or prehistory, but rather a narrative of a cultural, god-given origin similar to the present. It did not anticipate change in the manner of future narratives. Future narratives see the time to come as a time when evolution happens, when neither clothes nor context nor social habits remain the same. With the development of Protestantism and capitalism, the future became more than a point in time when the story would end. It became an unwritten point of opportunity to be shaped by human beings.
At the beginning of the twentieth century, the idea of the future was closely tied to technology as an instrument for changing historical contexts and shaping societies. Elites initiated a technical discourse focused on scientific pragmatism and technocracy, with social engineering focused on the creation and “neutral” planification of big societal projects. In 1933, the sociologist Hans Freyer stated, “If the immanent utopia of technology is the transformability of all materials and forces into each other, then the immanent utopia of planning is the transformability of all historical situations into each other”[1]. Karl Mannheim even declared the year 1935 the end of “unplanned things”[2]. Technical experts postulated that by organizing societies’ infrastructure based on efficiency and rationality, for the sake of the common good, it would even be possible to overcome subjectivity in politics: “We are so used to fighting that we cannot see there is a better way—the way of planning”[3], was the argument advanced at a conference in 1939, at the end of the American New Deal era. This architectonic planning-from-scratch approach was also known on the European continent, and the intellectual elites behind it maintained an extensive occidental international exchange.
The constrictions of planification through technical infrastructure crystallized in public discussions during the 1970s. New issues such as climate change and sustainability tested the limits of what could be planned for. More and more, the value of public opinion and public participation in infrastructure projects initiated scrutiny of the political dimension of infrastructure and its alleged neutrality. In the very same decade, Foucault presented his new theory of power within infrastructure and the social and ethical assumptions implied therein.
On the Infrastructural Nature of Artificial Intelligence
Artificial intelligence (AI)—algorithmic systems—comprises technologies the world is still trying to understand in their essence, in order to assess their impact and risks. AI and algorithmic systems do not understand individuals. Conceptually, they represent ideas of the social. The way they compute and classify patterns is relational. Algorithms categorize people in fine-grained groups. The identity of individuals is no longer relevant. Personalization may be perceived by the user as a technical procedure for individualization, but technically, personalization is relational: it is the classification of an individual into a very specific collective of people with similarities.
AI and algorithmic systems do not understand individuals. Conceptually, they represent ideas of the social. Algorithms categorize people in fine-grained groups. The identity of individuals is no longer relevant.
It does not necessarily become clear to the person concerned that he or she is being classified into a collective that may not be part of the conventionally known social categories in a society. Personalized advertising and “microtargeting” may give the impression that marketing is addressing potential consumers individually, based on information about the preferences of the individual. But technically, the individual is being assigned to various categories shared with many other individuals. The combination of all these categories results in an intersectional profile encompassing more categories than the usual ones, such as age, gender, and social status; and that profile is equally shared by many other individuals. This level of granularity and intersectionality is easy to confuse with individuality.
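As a rough illustration of how relational such "personalization" is, consider the following sketch. All profile attributes, thresholds, and segment names here are invented for illustration; real systems work with vastly more categories:

```python
# Illustrative sketch: "personalization" as relational classification.
# An individual is never addressed as a unique identity; they are assigned
# to a bundle of fine-grained categories, each shared with many others.
# All segment names and thresholds below are hypothetical.

def assign_segments(profile: dict) -> set:
    """Map one person's attributes to shared audience segments."""
    segments = set()
    age = profile.get("age", 0)
    if 30 <= age < 40:
        segments.add("age_30_39")
    if profile.get("commutes_by_bike"):
        segments.add("bike_commuters")
    if profile.get("streaming_hours_per_week", 0) > 10:
        segments.add("heavy_streamers")
    return segments

alice = {"age": 34, "commutes_by_bike": True, "streaming_hours_per_week": 12}
bob = {"age": 36, "commutes_by_bike": True, "streaming_hours_per_week": 14}

# Two different people land in the identical intersectional profile:
assert assign_segments(alice) == assign_segments(bob)
```

The point of the sketch is that an "individual" profile is simply an intersection of shared categories: anyone with similar attributes receives the very same profile.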
As a result, assessments of AI tend to focus on detecting individual damage and human rights abuses, although problematic algorithmic systems primarily discriminate against collectives without causing detectable individual damage[4]. This is a classic effect when it comes to assessing the impact of infrastructure. The effects of the shape, standards, and rules of infrastructure mediating the flow of resources, mobility, or telecommunication can only be detected with an architectonic overview of the system. Infrastructure is thus the physical Foucauldian dispositif, or apparatus, that distributes power, creates the conditions for societal inclusion or exclusion, and shapes the space of a society. In “The Confession of the Flesh,” Foucault defined the term as
“a thoroughly heterogeneous ensemble consisting of discourses, institutions, architectural forms, regulatory decisions, laws, administrative measures, scientific statements, philosophical, moral and philanthropic propositions–in short, the said as much as the unsaid. Such are the elements of the apparatus. The apparatus itself is the system of relations that can be established between these elements”[5].
The mobility infrastructure in a city determines the way its citizens access its geography, the way it fosters or demotes inclusion. The streets of a suburb in the United States, with roads and no sidewalks, shape the mobility of residents differently than the streets of Amsterdam, with sidewalks, bicycle paths, and roads. Pedestrian traffic lights with very short green intervals may make car traffic more fluid, but they certainly present a challenge for older pedestrians.
Artificial intelligence is a new form of infrastructure. It is not a product; it is immaterial infrastructure. Everything that is a process implies a certain system and a set of standards, which can then be formalized in mathematical language and become partially or fully automatable. Such standardization goes beyond cables and hardware. Automating a process with AI implies setting a fine invisible layer of software to permanently mediate interactions with and among all involved parties of the process. In this way, immaterial infrastructure is being built into sectors where an infrastructural dimension was unthinkable before.
The current understanding of infrastructure therefore needs revision. Presently, infrastructure denotes either the institutions preserving the economic, cultural, educational, and health functions of a country—soft infrastructure—or “all stable things that are necessary for mobility and an exchange between people, goods and ideas”[6]—hard infrastructure. An essential characteristic defining both soft and hard infrastructure is stability, whether procedural (in the case of soft infrastructure) or physical (in the case of hard infrastructure). They are a form of fundamental planning to systematically design access to, distribution of, and interaction with goods and services that are of interest to a collective. For Foucault, this is a fundamental aspect of political power. His concept of the dispositif regulating the politics of health, sexuality, or architecture was novel for broadening the definition of power beyond mere rules to a collection of “relations of power, practices and actions”[7] depicted both in normative and material infrastructure and mechanisms. Infrastructure is thus the planification of power and its distribution through a set of standards embodying societal ideas of efficiency and fairness of procedures and distribution.
Another relevant characteristic of infrastructure is its inherent modularity. Hannah Arendt’s criticism of the bureaucratization of murder during the Third Reich in Germany is a fundamental criticism of soft infrastructure[8]. Administration as soft infrastructure—once seen by Max Weber as the mechanism of democracies to ensure equality before the law and its procedures, in opposition to the arbitrariness of charismatic autocracies—entails risks. Dividing the extermination process into standardized administrative steps or modules led individuals to decontextualize each module from the broader process and fostered moral distance from its ultimate consequence. Administration banalized evil into a bureaucratic procedure that obfuscated responsibility through modularization, making bureaucrats in the system accountable only for a single step of the process.
Automating a process with AI implies setting a fine invisible layer of software to permanently mediate interactions with and among all involved parties of the process
Furthermore, infrastructure usually has an interdependent character: information and telecommunication infrastructures are fundamentally dependent on electricity infrastructure.
One last characteristic to mention is the unavailability of infrastructure and infrastructure goods to single households and companies, both for production and cost reasons: “Although bread is able to satisfy our hunger, it is not an infrastructure good, since the ingredients for bread production are easy to obtain; today everyone can bake bread for himself”[9].
Because fixed costs are very different depending on the capital goods, the supply of infrastructure happens under different market forms: mainly (natural) monopolies (e.g., electricity supply), but also competition (e.g., housing construction). While a single household may afford a generator or solar panels, a constant, secure supply of electricity still needs connection to the grid, the grid itself being the infrastructure that a single household cannot afford.
Artificial intelligence entails many of these aspects. It has physical prerequisites such as cables and hardware. It is not stable in its ontology; its formulas and code are constantly changing. But it does create a stable layer of a mathematically formalized structure concurring with or complementing the rules and constrictions given by soft and hard infrastructure. Further, it automates processes through technical and mathematical modularization—each process is split into several steps depending on the technical requirements, but not necessarily the administrative and social context. AI systems are not instruments or derivations of the rules and mechanisms of soft infrastructure. They have a different rationale running parallel to that of soft infrastructure, and, consequently, they require their own categorization.
Social media, for example, could be seen as a form of immaterial infrastructure in the communicative sector. A software layer orchestrates the interface, the time frame, and the format (videos, text, pictures) in which people interact with each other. This infrastructure thus standardizes and moderates communication along with the rules and standards of soft infrastructure (in this case speech rights, personal rights, etc.), and its present market consists of monopolies defined by format[10].
Because infrastructure is the architectonic expression of the politics of a society, AI is a technology impacting societies architectonically, and thus on a collective rather than individual level
Predictive policing systems are another example. They are used to identify behavioral patterns for concrete crime categories, which are then applied strategically to prevent similar crimes. For example, organized criminal groups have a modus operandi for robberies, and within a given time frame and geographical parameter, this information is used systematically to prevent similar robberies in the area by the same actors. A predictive policing system for this use would standardize the geography of a city and identify the smallest geographical unit. It would be built upon a set of definitions and human-made decisions: which data categories will be used and which will be excluded; how old the data may be and whether there is an expiration date for the data; which crime categories will be included and correlated with each other; and so forth.
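The human-made decisions listed above can be pictured, very loosely, as a configuration object. Every field name, value, and category in this sketch is hypothetical and stands in for no real system; it only illustrates that such systems rest on explicit, contestable design choices:

```python
# Hedged sketch of the design decisions behind a predictive policing system,
# expressed as configuration. All names and values are invented.

from dataclasses import dataclass
from datetime import timedelta

@dataclass
class PredictivePolicingConfig:
    # Which data categories enter the model, and which are excluded
    included_data: tuple = ("crime_type", "modus_operandi",
                            "time_window", "location_cell")
    excluded_data: tuple = ("personal_identifiers",)
    # How old the data may be: an expiration date for records
    max_record_age: timedelta = timedelta(days=365)
    # Which crime categories are included and correlated with each other
    crime_categories: tuple = ("residential_burglary", "commercial_burglary")
    # The smallest geographical unit the city is standardized into
    grid_cell_meters: int = 250

cfg = PredictivePolicingConfig()
```

Each field is a decision made by people before any pattern is computed; the "datafication" that follows is constrained by exactly these choices.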
With this, information about crime, the geography of the jurisdiction, and so on is structured and “datafied”—and saved as a usable data set of standards and rules that function together with those specified by soft infrastructure. Although informing, assisting, and thus systematizing police work becomes very much dependent on the ideas and concepts of optimization, fairness, and efficiency[11] of the diverse actors designing and implementing the technology, these social ideas of efficiency and fairness are at the same time constrained in their translation into algorithms by the rules of mathematics and the limits of “datafication”[12]. Thus, soft infrastructure and immaterial infrastructure are two separate systems dialectically influencing and constraining each other.
Another central aspect of AI as an architect and moderator of social relations, practices, and actions is the value at the center of its optimization rules. AI systems optimize for a specific goal or value, to the detriment of other values. Does a system optimize for efficiency in the sense of pragmatism or of fairness?
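The trade-off between optimization goals can be made concrete with a toy allocation problem. This is an invented sketch: the districts, risk scores, patrol counts, and the cap standing in for a "fairness" rule are all hypothetical, not taken from any real system:

```python
# Hedged sketch: the same allocation problem under two optimization goals.
# Optimizing purely for predicted "efficiency" concentrates all resources;
# a crude fairness constraint (a cap per district) changes the outcome.

def allocate(scores, patrols, max_per_district=None):
    """Assign patrols one by one to the highest-scoring district,
    optionally capping how many any single district may receive."""
    counts = {d: 0 for d in scores}
    for _ in range(patrols):
        eligible = [d for d in scores
                    if max_per_district is None or counts[d] < max_per_district]
        best = max(eligible, key=lambda d: scores[d])
        counts[best] += 1
    return counts

# Invented risk scores per district:
risk = {"north": 0.9, "south": 0.5, "east": 0.4}

efficiency_only = allocate(risk, patrols=4)                      # all patrols to one district
with_fairness = allocate(risk, patrols=4, max_per_district=2)    # patrols spread out
```

The value placed at the center of the optimization rule, not the mathematics itself, determines which outcome the system produces.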
All the above examples illustrate traits of the technology that are also applicable in other AI mediated sectors and services: health, agriculture, mobility, communication, social welfare, banking, e-commerce, employment, and so on. Using artificial intelligence in those sectors implies standardizing and creating a second layer of norms in mathematical modules that will automatically structure the human relations, practices, and interactions within that context.
Because infrastructure is the architectonic expression of the politics of a society, AI is a technology impacting societies architectonically, and thus on a collective rather than individual level.
The Inherent Methodological Individualism of Western Norms and Laws
Algorithms and artificial intelligence do not understand individuals; and democracies, from their legal-dogmatic perspective, do not understand collectives. The Western ethical approach and its legal cultures are individualistic in methodology and anthropocentric in ontology. In the history of political theory on the legitimation of political power, the good society—a legitimate political order—could only be achieved by morally good citizens. So, in the first place, the individual had duties and obligations towards society. Rights emanated from those obligations but were not central to the existence of society. Constitutionalism changed that narrative: the concentration of power into a political order was legitimate if it succeeded in protecting the fundamental rights of individuals. First, society owed rights to its citizens, and from those rights emanated obligations. Under the new narrative, rights came first, duties second.
This was the narrative birth of the legitimation of occidental democratic power, based on the idea that political power is there to protect individuals. But the Leviathan-state, the monster with the monopoly on violence, was created to protect individuals from a very specific form of violence, a war of all against all.
“That a man be willing, when others are so too, as farre-forth, as for Peace, and defence of himselfe he shall think it necessary, to lay down this right to all things; and be contented with so much liberty against other men, as he would allow other men against himselfe. For as long as every man holdeth this Right, of doing anything he liketh; so long are all men in the condition of Warre”[13].
The purpose of the Leviathan-state was not to grant liberties and rights to individuals but to overcome (civil) war and make a life in society possible by granting restricted liberties and rights to individuals. Rights were instrumental to making society and cooperation among individuals possible.
Most constitutionalist theories start from a self-centered, rationalist anthropological view of man. The construction of the social contract had to incentivize man to accept handing over power and to trust the resulting political architecture in order to be able to cooperate and benefit individually within the given rule structure. Increasingly, the focus on appealing to the rationality of individuals to accept and follow the social contract distracted from the purpose of rationality and contractual incentives[14].
Individual autonomy can only be achieved through a societal lens and is essentially dependent on the structural framework within which it operates.
While vervet monkeys have a feeding rank in which dominant females eat first and longer, domestic dogs eat alone and do not share their food. An individualistic lens focused only on individual benefits and rights would consider the animal able to eat first, and as much as it wants, to be the one clearly enjoying the greatest freedom. Without looking at the broader structural context, this could apply to an animal living in a zoo or a kennel. Most animals living in packs have dominance hierarchies reflected in food, reproduction, and other vital aspects. Domestic animals are generally denied the pack and enter a higher dependency and hierarchy with a human master. Mastership over oneself is not defined by individual benefits or rights, such as eating alone, but by relational factors dependent on the structure within which those rights are embedded.
The Western ethical approach and its legal cultures are individualistic in methodology and anthropocentric in ontology
As a consequence, most democratic cultures have legal instruments to assess impact and provide protection and redress merely on an individual level. There are already politically relevant examples of harm caused by the lack of an assessment regime based on structural risks and harms. The fitness-tracking app Strava released a visualization map showing all the activity tracks of the app’s users all over the world. The data was anonymized. However,
“in locations like Afghanistan, Djibouti and Syria, the users of Strava seem to be almost exclusively foreign military personnel, meaning that bases stand out brightly. In Helmand province, Afghanistan, for instance, the locations of forward operating bases can be clearly seen, glowing white against the black map”[15].
Similarly, a few software programs for predictive policing used within the European Union fall outside the scope of regulation and thorough impact assessment: in several parts of Germany, state police are using software to predict burglaries with a specific modus operandi, within a specific time frame and geographical parameter, by using anonymous data about the crime type and procedure as well as geographical data. The software makes sense in regions with fewer police officers because it may assist police in creating more efficient patrol shifts for burglary prevention. The corresponding state data-protection agencies permitted use of the software since it did not process personally identifiable data and thus did not fall under their jurisdiction.
But these systems leave many societal questions open. If the software is fed historical data, it raises concerns known from the structural ethnic biases associated with ZIP codes: to what extent does the database reflect data asymmetries? How can representation be controlled so that some regions are not overrepresented while others are underrepresented? To what extent may the system amplify this kind of ethnic bias and, as a consequence, affect the social cohesion of a city? If a greater police presence is observed in structurally poorer regions, will residents feel more secure, or will this lead to a massive exodus of residents able to afford housing in a different part of the city? How and to what extent is this tool meaningfully embedded in a broader prevention strategy? This is yet another example of the need for a more collective approach to evaluating algorithmic systems.
However, there are only a few areas of law—labor law, for example—where most legal cultures possess instruments to address the collective dimension of discrimination. In the future, discrimination will be a phenomenon observed in all sectors where AI is used, be it the distribution of energy or critical resources, the health sector, or social welfare. The fact that discrimination does not only exist in the labor sector, and that the use of these technologies will take place in all sectors, points to one of the legal gaps that new technologies make more tangible.
For these reasons, focusing exclusively on individual rights is, ironically, detrimental to individual autonomy and rights. The autonomy of the individual depends very much on the social framework and infrastructure within which that autonomy is exercised. Being able to eat alone is not a sign of self-mastery. The absence of individual harm with regard to human or fundamental rights does not mean that human beings are not harmed in general. There is such a thing as societal harm, and assessing societal impact requires different questions and criteria than the ones applied for human and fundamental rights.
Defining the Values that Shape Fair Immaterial Infrastructure
Europeans have long been aware of the potential conflicts among different kinds of material infrastructure due to the scarcity of space. In 1970, the Council of Europe initiated the CEMAT high-level ministerial conferences for spatial planning, leading to the approval in 2000 of the Guiding Principles for Sustainable Spatial Development of the European Continent. In 1999, the European Union developed the European Spatial Development Perspective, a policy framework providing the conditions and criteria that are instrumental in building trans-European networks (transport, energy, telecommunications). Both the Council of Europe and the European Union enshrine social cohesion as one of the guiding principles in their policies and frameworks of spatial planning when building and coordinating (material) infrastructure. And yet the conflict between the standardizing, homogenizing character of cables, bridges, and roads and their impact on pluralist societies has not yet been the object of a discussion as thorough as the debate on citizenship. If the ceiling of an underpass or tunnel is too low for buses, then only (mainly private) car drivers are able to use it, excluding people who can only afford public transport. What alternatives are being provided to ensure access between the areas at both ends of the tunnel?
The example of the low tunnel ceiling resembles many examples of bias seen in algorithmic systems—for example, the automated soap dispenser unable to detect users with darker skin. The problem behind the soap dispenser was an issue of bias at the standardization level: near-infrared technology is not good at detecting darker skin, and since the developers and testers of these devices were lighter-skinned individuals, the problem went unnoticed for a long time.
Both examples are about the creation of standards by making implicit assumptions about humans and their contexts that do not reflect otherness—the variety in human nature and of social contexts.
Even though the concrete implementation of material infrastructure is reduced to allegedly neutral numbers and mathematical standards, on a more abstract level its collective social impact is fully accounted for by the norms and rules written on the European continent. The principles that guide the creation and coordination of infrastructure are enshrined in the corresponding spatial planning policies at local, regional, national, and continental levels. Those principles constitute a set of criteria to guide, assess, and evaluate the impact of infrastructure that is very much applicable to algorithmic systems:
- Balancing social, economic, ecological, and cultural conditions (also considering geographical asymmetries)
- Safeguarding diversity
- Providing for stable and continuous access to public services ensuring fairness of opportunity
- Providing for a balanced economic structure that fosters a wide range of economic activities
- Preserving and developing cultures
- Ensuring sustainability and respect for nature
- Ensuring that the needs to provide for defense and civil protection are taken into account
- Ensuring the conditions necessary for social cohesion
And while all these questions and approaches do have an impact on individuals, the approach and the criteria differ fundamentally from the catalogue of questions used in the individual rights approach. The principles listed here are better suited to generating a deeper analysis of the impact of algorithmic systems by providing answers on a structural level. This more collectivistic approach at a regulatory and normative level is thus not unknown to many democracies. However, several aspects and implications of infrastructure still need more scrutiny and further legal and methodological thinking.
In conclusion, algorithmic systems and artificial intelligence, as collectivistic technologies, amplify a weakness of democracies: methodological individualism has characterized the normative approach of democratic powers. However, democracies also have societal purposes, though their corresponding regulatory instruments are less developed. The implementation of AI requires societal thinking. Given this new era of automation, the most imperative task for democracies lies in the further development of the idea of public interest, the common good, and the shape of society.
References
1 —Freyer, Hans. 1987. “Herrschaft und Planung. Zwei Grundbegriffe der politischen Ethik.“ In Herrschaft, Planung und Technik. Aufsaetze zur politischen Soziologie, edited by Hans Freyer, 17–43. Weinheim: VCH Verlagsgesellschaft. p. 22
2 —Mannheim, Karl. [1935] 1958. Mensch und Gesellschaft im Zeitalter des Umbaus. Darmstadt: Wissenschaftliche Buchgesellschaft.
3 —Kizer, Ben H. 1939. “The Need for Planning.” In National Conference on Planning. Proceedings of the Conference held at Minneapolis, Minnesota, June 20-22, 1938. Chicago: 1–9.
4 —Sloot, Bart van der. The Individual in the Big Data Era: Moving towards an Agent-Based Privacy Paradigm.
5 —Foucault, M. 1980. “The Confession of the Flesh” [Interview, 1977]. In Power/Knowledge: Selected Interviews and Other Writings, edited by Colin Gordon, 194-228. New York, NY: Pantheon Books.
6 —Laak, Dirk van. 2018. „Alles im Fluss“. Die Lebensadern unserer Gesellschaft – Geschichte und Zukunft der Infrastruktur. Frankfurt am Main: S. Fischer Verlag.
7 —Elden, Stuart. 2016. Foucault’s Last Decade. Cambridge: Polity Press.
8 —Arendt, Hannah. 2006. Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Penguin Books.
9 —Buhr, Walter. 2009. “Infrastructure of the Market Economy, Volkswirtschaftliche Diskussionsbeiträge” Fachbereich Wirtschaftswissenschaften, Wirtschaftsinformatik und Wirtschaftsrecht: Discussion Paper No. 132–09. U. Siegen.
10 —Existing social media companies have monopoly positions within their own specific formats: the format offered by Twitter is different than that offered by Instagram, Snapchat, Youtube, or Facebook.
11 —These ideas of efficiency and fairness do not necessarily need to be specified in the set of rules constituting a soft infrastructure. They can be the expression of common social expectations and prejudices in a society running contrary to legal and administrative rules.
12 —Not all social contexts and circumstances can be comprehensively turned into data.
13 —Hobbes, Thomas. 1968. Leviathan. Edited by Crawford Brough Macpherson. London: Penguin Books.
14 —Along with the Enlightenment’s command of daring to know, making use of rationality, the increase of literacy rates, and the permeation of the contractual narrative into the cultural expectations of societies, the concept of privacy emerged, marking the threshold where state power ended. Both rationality and privacy reinforced the methodological individualistic approach in the social contract. This would lead Jürgen Habermas, with The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society, and Richard Sennett, with The Fall of the Public Man, to a fundamental criticism of how this development is eroding the public dimension in the lives of private individuals and, with this, essential aspects of the public sphere and societal cohesion.
15 — Hern, Alex. 2018. “Fitness tracking app Strava gives away location of secret US army bases.” The Guardian. Accessed Jan. 4, 2019.

Lorena Jaume-Palasí
Lorena Jaume-Palasí is one of the most outstanding experts on AI in Spain and the Balearic Islands. She is also an expert on the philosophy of law, and her work focuses on the ethical aspects of digitalisation and automatisation. She cofounded the NGO AlgorithmWatch and, shortly after, The Ethical Tech Society, which studies the social relevance of automatic systems. In 2017 she was appointed by the Spanish government as a member of the Wise Committee on AI and Big Data. She is the cofounder of the Dynamic Coalition on Publicness at the United Nations Internet Governance Forum (IGF), where she is the director of the German national section secretariat. She is also the leader of projects on Internet governance in Asia and Africa.