Mark Coeckelbergh is a Belgian philosopher of technology, Professor of Philosophy at the University of Vienna, and former President of the Society for Philosophy and Technology.
He is the author of several books, including Growing Moral Relations (2012), Human Being @ Risk (2013), Environmental Skill (2015), Money Machines (2015), New Romantic Cyborgs (2017), Moved by Machines (2019), the textbook Introduction to Philosophy of Technology (2019), and AI Ethics (2020). He has written many articles and is an expert in the ethics of artificial intelligence. He is best known for his work in philosophy of technology and the ethics of robotics and artificial intelligence (AI), and he has also published in moral philosophy and environmental philosophy.
---
Technology is eroding our sense of shared experience. We’ve become accustomed to private existence, glancing up briefly from our screens as we pass each other before returning to a curated stream of ads, shows, news and networks: billions of one-person digital worlds, brushing shoulders in the physical world.
Democracy is contingent on mass public dialogue and is directly threatened by the siloing of our experiences. With digital trends now powered by AI, the threat to democracy has only grown. I spoke with Mark Coeckelbergh about his latest book, ‘Why AI Undermines Democracy and What To Do About It’.
Coeckelbergh argues that healthy democracies require citizens to engage proactively in deliberating the ‘common good’ as a key part of forming a collective vision. A prerequisite for healthy deliberation is a common knowledge base (a set of agreed truths and shared skills), as well as high levels of trust in government institutions and in our fellow citizens. Coeckelbergh paraphrases Mahatma Gandhi’s famous reply when asked what he thought of Western civilization: “Democracy? I think it would be a good idea!”
Amongst other things, Coeckelbergh advocates for a data and AI commons, as well as for new, experimental public institutions to help shape our relationship to technology. I started by asking him how his book has been received so far: is he being perceived as an unrealistic idealist, or as a source of sober inspiration?
“I’m afraid that so far it’s going more in the direction of the ‘unrealistic idealist’, but then if we philosophers don’t deal in ideals, who else will? Here in Austria I’ve been called ‘a bit of a dreamer’, especially when it comes to notions of global collaboration and governance of AI. But I do feel it’s useful to point to the end of the Second World War, when we transitioned so rapidly from a state of conflict to previously unknown levels of global collaboration. So yes, people say I am optimistic (a diplomatic way of calling me unrealistic), but I’m not prepared to give up on the idea.”
Coeckelbergh points to an incompatibility between for-profit big tech and the ideal of developing technology for the common good. With their focus on financial gain, big tech companies tend to overlook negative impacts on society and the natural environment. As an example, polarization on social media is largely attributable to algorithms favoring the sensational, which is good for business but damaging to social harmony. Big tech has been slow to fix this. He suggests a radical and simple solution:
“These algorithms have been developed to maximize levels of attention and volumes of tradable personal data and behavior. They are not set to maximize human or planetary welfare. It seems logical that public ownership of the technology is better suited to prioritizing the common good.”
But with technology developing at breakneck speed, and with its adoption now pervasive, I asked Mark whether he thought such a change could happen fast enough.
“I believe in pushing for change on all kinds of levels and with all kinds of stakeholders. How the change is achieved, and at what speed, is totally out of my control, but we do need it to happen. After all, we’re going to need healthy democracies to tackle the bigger global issues, including managing resource scarcity and surviving environmental collapse while also avoiding major global conflict.
And how do we create healthy democracies? On one level, we need to educate new generations on what is expected of them as citizens of a democracy. We have to teach citizens to agree on a common knowledge base and set of truths, and to foster curiosity and proactive engagement in defining the ‘common good’. Ideally, this would come with a less anthropocentric view of the world, in which our common good also applies to non-human life. More immediately, we need governments to participate in public discourse on the nature of our relationship to technology, and we need to develop new public roles and institutions designed to respond at the same fast pace we see in the technology’s development. I’ve been part of an international advisory group that contributed to the European AI Act, but it has taken us six years to bring about new legislation. This is clearly far too slow.
So we need to address the problematic gap between the speed of technological development on the one hand and the pace of ethical and public philosophical reflection on the other, in order to develop strategic approaches. I don’t have all the answers, but we do need to connect experts to politicians and citizens in new ways. We need to be much more innovative in our institutions. Part of this could be achieved if we directed more of our human intelligence into political-philosophical and political-theoretical thinking.”
I asked Mark to expand on his own sense of the ‘common good’ and how to define it on a national or even international level.
“The definition of the common good is always in flux. Firstly, I find it useful to draw a distinction between what is ‘right’ and what is ‘good’. We often mistakenly think of ethics as a framework for ‘right and wrong’ in a narrow moral sense. And to an extent that seems to work well, because generally people don’t want to do wrong. We are able to have such things as international human rights because we find it relatively easy to deliberate and agree on what we should not be doing. This is our baseline: it is wrong to kill, or to abuse, oppress, discriminate, or deny freedom or access to food, shelter, health, education and so on. Avoiding the negative serves us well enough, but it is morality in the narrowest sense. And unfortunately these types of international agreement are needed, because without being explicit about fundamental human rights (or even in spite of them) there are people who will cause tremendous suffering.
Defining ‘the good life’ is a bigger challenge, and it has always been a central subject of philosophy. Yes, I shouldn’t kill another person, but how should I be living? What should I be doing? And even if we manage to define a good life for ourselves, we find it even more challenging to discuss how we will organize for the good life on a societal level. The complexity of the question is daunting.
Take food as an example, just to illustrate this difficult jump between the individual and collective levels. You see, we can all try to eat healthily and to form our own relationships to food. You try it your way, I’ll try it my way, and we all approach food in the best way we can manage. But when we pose the larger questions, many of us are not equipped to deliberate in a meaningful way. How should we organize food consumption, production and trade? How should we treat our land and our animals? And in what ways will we organize a collective evolution of our relationship to food in the face of the Climate Crisis? Part of the complexity relates to our lack of any common truths or knowledge base from which to launch a meaningful discussion.
In a similar way, we tend to find it easier to discuss technology on a personal level (me and my relationship to my phone, or to AI), while it’s harder to discuss the concepts on a higher level. How might AI foster a healthy democracy? How might AI either hinder or contribute to a progressive and enlightened society? How should we collectively shape our relationships with technology?
So I strongly support having those more difficult conversations to help society define the good life; from there we might explore how technology and AI can help us move in the right direction.”
Defining a common good (and the good life, as Coeckelbergh calls it) is something I’ve been slightly obsessed with lately. It stems from an anxiety about the loss of shared experience, shared values and shared vision. Coeckelbergh highlights the need for a common knowledge base and shared basic truths as an important foundation for dialogue. Meanwhile, notions of multiple truths and fake news are being amplified by AI, leading to accelerated isolation and erosion of shared experience.
“We also have to develop our ability to listen to each other, and to try to understand views that conflict with our own. By listening carefully, we have a chance to find something in common, from which we can build something new. When we stop listening it becomes dangerous; it can very quickly lead to us not regarding those others as fellow humans. What matters now is that we make sure people of all ages have positive experiences of being in safe and constructive dialogue, even with those who disagree. And when we come together to talk, to debate, perhaps to laugh, and to experience a shared physical reality, we can experience a fresh sense of community. Shared experience can give new opportunities for shared values, so that eventually we can reach for some common goals.
It is essential that we create these kinds of experiences, and that’s where I think our effort has to go. Honestly, I don’t know exactly how to do it, but we can experiment, yes? We can try things at a smaller scale and see what works, and that’s where we need fluid and explorative institutions to help us push into unknown territory. We should also work on an infrastructural level, to explore how we might create a data commons and public ownership of AI so that power can shift to serve our common good: an AI equivalent of CERN. Instead of only blaming big tech companies and trying to regulate them, governments could invest in developing their own internationally shared systems built to serve the common good. And why should governments build their own AI? Because otherwise it’s going to be very difficult to take any power back from Google, Microsoft or Facebook.”
Interesting!