Sharing a planet with AI: Collective Responsibility & the Unpredictability of the Future

This blog highlights the importance of understanding the "black box" of AI in order to ascertain its accuracy and to remain fully involved in a future in which AI will play a significant role.

How does one live with something one cannot see, but whose presence is felt and whose nature is difficult to understand, let alone grasp?

This is a central question in my ongoing engagement with artists, creative professionals, and data scientists who work with and develop AI in India. The recent pandemic reminded me of the pressing nature of this question beyond the immediate concerns of the then-novel coronavirus. AI is not a (computer) virus, but it is spreading steadily, penetrating daily life and changing the way we live. The metaphor works only partially, but it is a productive one to sit with and ponder: the rapid ascent of AI-powered technology coincided with a global pandemic during which it was almost impossible to avoid sharing sensitive data, as apps proving one's negative test and/or vaccination status became a condition for continued access to public life.

For its functioning, AI depends on immense datasets to which our collective digital footprint is key. Making use of social media, accepting cookies to access websites, sharing personal details to buy goods and services online, and even proving that you are not a robot all contribute to the richness and vastness of these databases. Almost everything we do generates data that is stored and utilized, leading one of my research interlocutors, a Bengaluru-based data scientist and CEO of a startup, to remark that "we exude data." AI is needed to make sense of this data, while it also draws on it for its existence and growth. While it may not be alive in the classical sense, it is something we all live with.

That AI provokes consternation and an exchange of ideas, not to mention a preoccupation with what it is, is confirmed by a visit to any bookstore (in India or elsewhere), whose AI section is ever-expanding. Almost all these books start out with the question of what AI is, followed by a lengthy exploration of the many definitions in circulation. In the simplest of terms, AI can be thought of as a set of theories and techniques for developing complex computer programs that are able to simulate certain aspects of human intelligence (e.g. learning, reasoning). Some authors work with a division between weak and strong AI, where what separates the two is generally envisioned in terms of the ability to (re)produce intelligent interactions (by analyzing, reasoning, and performing rational actions) and the thinking skills associated with them. In the future this may translate into what has come to be referred to as Artificial Super Intelligence (ASI): the idea of computing power that, through the augmentation and distribution of its systems, exhibits capabilities beyond those of the human brain.
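To make the "learning" in this definition concrete, consider a minimal sketch, assuming Python and scikit-learn (the dataset and model are illustrative choices, not anything drawn from my fieldwork): a program that improves at one narrow classification task by generalizing from labeled examples.

```python
# A minimal sketch of "learning" in the weak-AI sense: a program that
# gets better at a narrow task by generalizing from labeled examples.
# scikit-learn's bundled iris dataset is an illustrative stand-in.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "learning": fitting patterns in data
print(model.score(X_test, y_test))   # accuracy on examples it has never seen
```

Nothing here reasons or understands; the program simply fits statistical patterns, which is one reason the "intelligence" label remains contested.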

Definition studies notwithstanding, how AI functions or operates remains a question that confuses not only the public at large but also the data scientists responsible for its development. In particular, the question of its alleged "intelligence" continues to befuddle. In contrast with the general audience, however, the data scientists and others involved in AI's development whom I interacted with over time were less concerned about this than about their ability to explain AI's exact functioning. They were particularly keen to discuss what they often conceptualized as the inherent unpredictability of AI. As they built the algorithmic architecture on which their AI applications rely and let it use and learn from massive datasets, the question of accuracy seemed more relevant than pure predictability. As was repeatedly explained to me, it was very hard to "reverse-engineer" a particular outcome. Instead, the focus was on verifying whether a particular outcome could speak to a certain context or situation. Could these results then be relied on?
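The distinction my interlocutors drew can be illustrated with a hedged sketch (again assuming scikit-learn; the dataset and model are stand-ins): aggregate accuracy is straightforward to verify, while tracing why one particular outcome occurred is not.

```python
# Illustration of the accuracy-versus-explainability distinction.
# Aggregate accuracy is easy to measure; "reverse-engineering" why a
# single prediction came out as it did is not.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Verifying whether results hold up in context: straightforward.
print("test accuracy:", model.score(X_test, y_test))

# Explaining one outcome: the prediction is a vote over 500 trees,
# each a deep cascade of data-dependent thresholds. No single rule
# can be read off to justify this specific result.
print("prediction for one case:", model.predict(X_test[:1]))
```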

As the use of AI spreads, and commercial interests in the field increase, the question of responsibility becomes more critical. The EU and other governing bodies have already made attempts to regulate its use, especially with regard to questions of privacy, bias, and its potential use in warfare. Yet at a deeper level, I suggest we should not forget to contemplate what may at first glance feel like peripheral philosophical questions. If AI increasingly influences our lifeworlds, impacting our choices as consumers, our political opinions, our sense of facts and truths, then what does this mean for the future? What is this new presence in our lives? The notion of Cartesian dualism helps ground this conundrum. To make the concept more accessible, I asked ChatGPT for a brief summary:

“Named after the philosopher René Descartes, [Cartesian Dualism] is a philosophical concept that posits the existence of two fundamental substances: mind and matter. Descartes proposed that the mind (or soul) and the body are distinct and separate entities, each with its own essential nature. The mind is characterized by consciousness, thoughts, and self-awareness, while the body is associated with physical attributes and mechanical functions.” 

We have now entered a situation in which we communicate directly with a computer on which we rely for answers. ChatGPT went on to explain that, according to Descartes, mind and body interact in the pineal gland, where the non-material mind influences the material body. Unsure whether the pineal gland was even known in Descartes' time, I asked; ChatGPT confirmed that it was, and even elaborated that the philosopher believed it to be the "seat of the soul." Yet when asked for its source, it stated that its answer was based on

“general knowledge of the history of philosophy and anatomy.” 

Furthermore, it advised me to consult Descartes' relevant works, including Meditations on First Philosophy and The Passions of the Soul. Yet, asked for an estimation of its own accuracy, it simply explained that it strives to do its best and that the data available to the tool was last updated in January 2022.
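For readers curious what such an exchange looks like programmatically, here is a sketch assuming OpenAI's Python client and an API key (the model name and prompt are illustrative choices): the reply arrives as plain text, with no machine-checkable sources attached.

```python
# A sketch of the kind of exchange described above, via OpenAI's
# Python client. Requires the openai package and an API key; the
# model name and prompt are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Summarize Cartesian dualism in two sentences."}],
)
# The answer is unstructured text; any "source" it cites must be
# verified by the human on the other end of the exchange.
print(reply.choices[0].message.content)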

While companies and institutes offering AI-powered tools sometimes provide insight into their tools' accuracy and reliability, decisions about these matters are generally made behind closed doors, by data scientists and their managers. The debate about the black box in AI is ongoing, with some warning of its dangers and critics arguing that we should not hide behind it; the reality at present, however, is that we lack the proper tools to investigate AI's inner workings. Given its reliance on planetary resources, with data centers already accounting for 1 to 1.5% of global energy use, it is crucial to remain vigilant about what this means for the lifeworlds we share with an ecology of other beings. I suggest that we need to think of AI as one such being over which, alongside plants, animals, and humans, we bear a collective responsibility. Integrating this reality into the way we look at the world means that we can also be fully involved in its future, in which AI will undoubtedly play an important role.
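Rudin (2019) argues that interpretable models can often replace black boxes in high-stakes settings. A minimal sketch of what that looks like, assuming scikit-learn (the dataset and tree depth are illustrative): a shallow decision tree whose entire decision logic can be printed and audited, unlike the random forest sketched earlier.

```python
# Following Rudin's (2019) argument for interpretable models: a
# shallow decision tree whose complete decision logic is inspectable.
# Dataset and depth are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Every branch and threshold can be read, questioned, and audited:
print(export_text(tree, feature_names=list(data.feature_names)))
```

Interpretability often trades some accuracy for scrutability, which is precisely the trade-off the debate cited above turns on.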

REFERENCES

Afnan, Michael Anis Mihdi, Yanhe Liu, Vincent Conitzer, Cynthia Rudin, Abhishek Mishra, Julian Savulescu, and Masoud Afnan. “Interpretable, Not Black-Box, Artificial Intelligence Should Be Used for Embryo Selection.” Human Reproduction Open 2021, no. 4 (September 1, 2021): hoab040. https://doi.org/10.1093/hropen/hoab040.

Bathaee, Yavar. "The Artificial Intelligence Black Box and the Failure of Intent and Causation." Harvard Journal of Law & Technology 31 (2018): 889.

Carabantes, Manuel. “Black-Box Artificial Intelligence: An Epistemological and Critical Analysis.” AI & SOCIETY 35, no. 2 (June 1, 2020): 309–17. https://doi.org/10.1007/s00146-019-00888-w.

Christin, Angèle. “The Ethnographer and the Algorithm: Beyond the Black Box.” Theory and Society 49, no. 5 (October 1, 2020): 897–918. https://doi.org/10.1007/s11186-020-09411-3.

Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.

Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (May 2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.

Rudin, Cynthia, and Joanna Radin. “Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition.” Harvard Data Science Review 1, no. 2 (November 22, 2019). https://doi.org/10.1162/99608f92.5a8a3a3d.

Scientific American. "The AI Boom Could Use a Shocking Amount of Electricity." Accessed January 19, 2024. https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/.

Zednik, Carlos. “Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.” Philosophy & Technology 34, no. 2 (June 1, 2021): 265–88. https://doi.org/10.1007/s13347-019-00382-7.
