Interview with Mark Coeckelbergh

By Paola Cantarini

Mark Coeckelbergh

BIO

Mark Coeckelbergh is a Belgian philosopher of technology. He is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and former President of the Society for Philosophy and Technology. He was previously Professor of Technology and Social Responsibility at De Montfort University in Leicester, UK, Managing Director of the 3TU Centre for Ethics and Technology, and a member of the Philosophy Department of the University of Twente. He is the author of several books, including Growing Moral Relations (2012), Money Machines (2015), New Romantic Cyborgs (2017), Moved by Machines (2019), Introduction to Philosophy of Technology (2019), and AI Ethics (2020). He has written many articles and is an expert in the ethics of artificial intelligence.

This interview was originally conducted in English on 05.12.2023.

The initial translation of the interview, done by Paola Cantarini, was revised by Guershom David (Law graduate, master's student in the Graduate Program in Political and Economic Law, lato sensu postgraduate studies in Civil Procedure and in Criminal Law and Procedure (in progress), and director of the MackriAtIvity project at the Incubator of Universidade Presbiteriana Mackenzie).

Original version

Paola Cantarini: What’s your area of expertise? Would you start telling us about your work related to AI and data protection?

Mark Coeckelbergh: I am Mark Coeckelbergh, Professor of Ethics at the University of Vienna. I specialize in the philosophy of technology, particularly in the fields of AI, robotics, and ethics, and their intersections with political philosophy.

Paola Cantarini: Is there a need and, therefore, the possibility for a worldwide law to regulate AI globally, at least fixing minimal standards?

Mark Coeckelbergh: Yes, that would be an excellent idea, since AI knows no borders and is by now almost everywhere. So it would be good to have international agreements on it and a framework for global governance. I have already seen some efforts in that direction. Of course, the EU already has a supranational effort at regulation, but at the worldwide level not much is happening, except that the UN is now considering these matters. I think that's very positive, because we need a global approach here.

Paola Cantarini: How would the so-called “trade-off” between innovation and regulation work? Or would regulation by itself prevent or compromise innovation and international competition? According to Daniel SOLOVE, in his book “Nothing to Hide. The False Tradeoff Between Privacy and Security” (Y.U. Press, 2011), this would be a mistaken concept. Could you comment on this point?

Mark Coeckelbergh: There is always a need for a certain balance between innovation and regulation. AI should be used for all the benefits it can bring to society, so we should make sure there is room for innovation. But at the moment there is a significant need for regulation, and that regulation needs to be robust. Moreover, it can provide certainty to the companies and organizations involved in developing AI technology. In that sense, I think it is good for innovation to have the certainty and stability that regulation brings, which is currently lacking, since this is a dynamic field with ongoing discussions about ethics and policy. I think it would benefit society to have some well-established frameworks in place, which would also reduce the burden and uncertainty for the various actors involved.

Paola Cantarini: Take, as a paradigmatic example in the area of data protection, the LIA – the legitimate interest assessment – provided for in the Brazilian LGPD and in the EU GDPR as a mandatory compliance document when legitimate interest is used as the legal basis for processing personal data (measured by a proportionality analysis/test). Would it be possible to create a "framework" aimed at protecting fundamental rights, embedded in a specific document, the AIIA – Algorithmic Impact Assessment – and thus to establish, after a balancing analysis, risk mitigation measures for such rights that are adequate, necessary, and strictly proportional?

Mark Coeckelbergh: Yes, I think it’s very important to protect fundamental rights, particularly privacy. For this purpose, I believe it’s crucial first of all to implement monitoring mechanisms to identify potential violations of these rights. Additionally, having regulations in place, such as the GDPR in Europe, sets a precedent that can inspire similar regulations globally to address privacy concerns.

Paola Cantarini: What do you mean by AI governance, and what relationship do you see between innovation, technology, and law?

Mark Coeckelbergh: For me, governance certainly encompasses law, which needs to provide frameworks for dealing with those who deviate from the ethical use of AI. It also gives direction to developments in soft law and ethics in this field, provided we have a public framework that allows for different approaches and ethical considerations. Imposing excessive restrictions that leave little room for freedom is problematic. Therefore, it is crucial to strike a balance, with approaches that vary according to the political culture and democratic decisions of each country, allowing plural solutions globally while reaching an international compromise on these matters.

Paola Cantarini: In this year’s Venice Architecture Biennale (2023) the theme of the Brazilian stand is “Earth and ancestry”, that is to say, decolonization (“De-colonizing the canon”, Brazil’s “Earth” pavilion at the Venice Biennale). Would it be possible to escape such colonialist logic, which is also present in the AI/data areas?

Mark Coeckelbergh: Yes, it's crucial to address the significant issue of AI being intertwined with various hegemonic systems and ways of thinking, including colonial perspectives. Therefore, it is essential to embark on the decolonization of AI, ensuring that its development and use do not perpetuate the dominance of one group over another or endorse new colonial approaches. This is especially important in a global context, particularly when contemplating the global governance of AI. Discussions on these matters must not be dominated by Western or Northern perspectives imposing their way of thinking on the rest of the world. Instead, we should foster meaningful and respectful dialogues that respect the sovereignty of nations, the democratic rights of peoples, and cultural differences.

Paola Cantarini: What are the main challenges today with the advancement of AI, especially after the controversy around ChatGPT and the "moratorium" requested in a letter/manifesto by Elon Musk and other leading figures?

Mark Coeckelbergh: I believe it's crucial to establish regulations that encompass new technologies, including foundation models such as those used in generative AI. Unfortunately, in Europe there is currently pressure, influenced by certain companies, concerning how AI should be regulated. It's essential to address this and, at a global level, to tackle the existing power imbalances and asymmetries between countries and regions in AI development, particularly in generative AI such as ChatGPT, where a small number of individuals and companies hold significant influence. The consequences of AI development disproportionately affect the rest of the world. Therefore, we should explore ways to develop AI more equally across the globe and to empower other countries and their citizens, fostering diverse voices in the technology's development. We should ask, for example, who bears the costs, and not only in the United States of America. Finally, I would like to recommend my upcoming book, Why AI Undermines Democracy and What To Do About It, where I delve more deeply into these questions.

