Let’s begin with the full acknowledgement that the COVID-19 pandemic is an evolving situation with numerous externalities and effects. Let’s also acknowledge the many thought-provoking pieces already available that address the gaps, the wins and the unknowns facing humanity’s future re-emergence from this pandemic. Lastly, let’s acknowledge that there will be a re-emergence. As Arundhati Roy so eloquently wrote in her recent Financial Times op-ed, “Historically, pandemics have forced humans to break with the past and imagine their world anew. This one is no different. It is a portal, a gateway between one world and the next.”

Heeding this call, I take on but a sliver of this reimagining to consider: What role can emerging technologies, particularly artificial intelligence, play in helping us be our better selves, collectively?

So, how is this charge any different from the current call to create “AI for good”? First, the pandemic has fully exposed deficiencies across the board―from insufficient healthcare systems to inequities in food supply, education, and work benefits to a growing digital divide. More specifically, the challenges of COVID-19 underscore a realization that artificial intelligence―and technology writ large―is not a silver bullet. Machine learning and neural nets have not (yet) generated a new vaccine or solved supply chain issues, while attempts to track the spread of the virus using AI-enabled contact tracing technologies have placed an enormous strain on our willingness to trade personal privacy and autonomy for the sake of the greater good.

These are the pain points. For good or bad, our current situation has accelerated the recognition that building AI for good requires more than injecting a technology intervention as a solution to complex systemic issues. Second, the provocation above urges the design, development and deployment of emerging technologies and AIs that move beyond optimizing for individual efficiency. The benefits of hyper-personalization and targeted ads are uninspiring in moments of crisis. Instead, we should consider how to redesign these technologies to reflect a more integrated, diverse, and holistic view of the world. Already, glimpses into the possibilities of this future are surfacing through innovative projects and ideas such as Truluv.ai, CityShare/Canada or the Aligning AI with Human Values project.

We’ll leave the future projections to the fortune tellers. Instead, we’re leaning into what we know and what we’re learning. This is an opportunity to re-evaluate, to reset and to reboot. In particular, we examine the intersection of AI with three key issues―trust, privacy, and social connections. Critically, we consider what’s at stake when we begin to prioritize what should and must come with us as we cross through the portal into a new world.

AI + Trust

What we know: There are over 240 (and counting) ethical frameworks and guidelines for the development of AI. The idea of “human-centric” AI, which includes big themes such as fairness, accountability, transparency, and democratic governance, serves as a cornerstone for much of AI policymaking, particularly in the U.S. In early 2020, the European Commission released a white paper titled “Artificial Intelligence: A European approach to excellence and trust.” The paper outlines policy options for achieving two key objectives: promoting the uptake of AI and addressing the risks associated with certain uses of this new technology. The paper further articulates an approach grounded in fundamental values and rights such as human dignity and privacy protection.

What’s at stake: Despite the growing number of frameworks, guidelines, executive orders and white papers, trust in AI systems unravels on one major point: a trusted system is not equivalent to a “trustworthy” system. For example, a technology may be designed and optimized for trust using heuristics like “do people use these AI-based applications?” It then follows that more users equals higher levels of trust. But this overlooks a myriad of considerations (poor corporate governance, cybersecurity, privacy, safety, etc.) that may inform a consumer’s idea of what is trustworthy. For example, in areas where Uber is the only rideshare available, a user who might otherwise opt for another service over privacy concerns has very limited alternatives. In this case, the lack of market options trumps trust. Who, then, gets to decide what is “trustworthy”? And are there other, better heuristics to consider? These remain open questions that have yet to be fully explored.

AI + Privacy

What we know: The tension between security and privacy (and to some degree safety) has taken center stage during the crisis. As more governments contemplate the trade-offs of expansive contact tracing of citizens, privacy advocates are sounding the alarm on the danger of casting away civil liberties in reactionary haste. As Andrew McLaughlin, former Deputy Chief Technology Officer for Internet Policy in the Obama White House and former head of the global public policy team at Google, said: “It’s a mistake to go cut back on expectations of privacy in moments of crisis. It’s very tempting to do that but it’s very difficult to go backwards once you cross certain rubicons.”

What’s at stake: Echoing the previous section, the second-order impact of such privacy violations is on the consumer’s ability to then trust tech companies and/or the government. This is a critical time for both industry and public officials to strike a balance between privacy concerns and public safety. The new partnership between Apple and Google to build a Bluetooth-based tracking system that can automatically log people’s interactions is just the beginning. Its efficacy remains unknown and relies heavily on the scale of adoption of the technology, alongside additional testing and integration with a larger public health strategy. If increased surveillance via technology significantly flattens the curve, to what extent are we willing to continue and replicate such monitoring? And can we really not go back once the line is crossed? Moreover, as we contemplate the current power structures of the technology ecosystem, recalibrating for more consumer agency should be top of mind. Experimental thinking in areas of data sovereignty, personal data stores, and algorithmic agency is critical to establishing the much-needed infrastructure to support a rebalancing and to ensuring trust.

AI + Social Connections

What we know: This pandemic is challenging that most primal of human instincts—to be together. We’re over a month (at least in the United States) into physical isolation and many are feeling lonely, agitated, and afraid. But advances in technology and widespread access to broadband have offered novel ways to connect despite the physical distance (aside from the now ubiquitous Zoom)—from Love is Quarantine to TikTok cloud raves to virtual happy hours to QuarantineChat. We are quickly finding that in the absence of physical contact, there are indeed possibilities to “feel connected” online and with others. Is this the portal through which we enter a new normal, one with an even bigger reliance on technology to feel human?

What’s at stake: As the timeline for the pandemic drags on, it raises the question: what are the potential long-term effects, and is this type of interaction sustainable? For decades, research into mediated communication and its impact on human development and behavior has yet to take the main stage. However, as more of our daily routines move online, and for longer periods of time, we must consider the growing spectrum of human-machine touchpoints. On one end are the seemingly innocuous interactions, like voice-command GPS instructions; on the other end, we find increasing examples of humans offloading emotional and mental states onto technologies, like Alexa or the PARO robot used in behavioral therapy. Along this spectrum, we are already seeing emerging correlations between social media use and rising depression and anxiety among teens. As our technology dependency grows (well beyond just economic factors), there is an urgent need to explore the impact on our individual mental health and overall well-being. Furthermore, as individual and interpersonal communication norms shift, so will our social interactions. The big question is whether this shift in our behaviors with one another will change our understanding of what it means to be in a community and/or a society. Larger still is the potential implication for appropriate governance structures and the future of democracy. And to what extent can AI be both helpful and detrimental in this evolution?

For some critics, the recent resurgence of AI and its suite of computing techniques has served as a mirror to humanity, reflecting both the pain points and possibilities of our society’s ability to respond to rapid changes across verticals and sectors. But this is an unprecedented moment, and COVID is an accelerant. With rising unemployment, failing healthcare systems, and mass migration to online education and remote working, the crisis is pushing us towards a future we had only begun to wrap our heads around. For now, we’ll remain cautiously optimistic and find inspiration in human resilience and innovation. The challenge is to keep pace both technologically and culturally. The call is then for individuals, organizations, and governments to recognize the immense opportunity to understand what’s at stake and to be nimble in response.

Disclaimer: This is a piece by Kristine Gloria and does not necessarily reflect the views of The Aspen Institute.

Kristine Gloria

Kristine Gloria serves as the Associate Director of the Emerging Technologies Initiative for the Aspen Digital program in Washington, D.C. Prior to joining Aspen, she served as a visiting researcher for the Internet Policy Research Initiative (IPRI) at MIT-CSAIL in Cambridge, MA, where she conducted research on methodologies employed by members of the human-computer interaction and usability communities in designing privacy preserving technologies. She also held a position as a Privacy Research Fellow with the Startup Policy Lab (SPL) and a fellowship with the Center for Society, Technology and Policy (CSTP) at UC-Berkeley. Her work focused on privacy-by-design and municipal drone policymaking with the city of San Francisco.