Interview with Luciano Floridi

By Paola Cantarini

Luciano Floridi – BIO

Luciano Floridi is the director of the Digital Ethics Center at Yale University and Professor of Sociology of Culture and Communication at the University of Bologna, Department of Legal Studies, where he directs the Centre for Digital Ethics. He is also an adjunct professor ("distinguished scholar in residence") in the Department of Economics at American University, Washington, D.C.

This interview was originally conducted in English on October 19, 2023.

The initial translation, prepared by Paola Cantarini, was revised by Guershom David (Law graduate, master's student in Political and Economic Law, with a lato sensu postgraduate specialization in Civil Procedure and another in Criminal Law and Procedure in progress, and director of the MackriAtIvity project at the Mackenzie Incubator).

Original version

Paola Cantarini: What is your area of expertise? Could you start by telling us about your work related to AI and data protection?

Luciano Floridi: I am the founding director of the Digital Ethics Center here at Yale University, and I am also a professor in the Political Science program. My area of research is digital ethics, broadly understood, meaning that I look at the conceptual, more philosophical, and epistemological issues that lie behind the ethical challenges, but also, looking forward, at their consequences: the legal challenges that might intersect with ethical problems and policy. So, it is what philosophers do. They are curious and nosy about almost anything that is going on in the world. In this case, essentially, the impact of digital technologies on our lives, our culture, the way we interact, and the way we understand the world.

Paola Cantarini: Is there a need and, therefore, the possibility for a worldwide law to regulate AI globally, at least setting minimal standards?

Luciano Floridi: I would distinguish between the need and the probability. There is certainly a need, just as there is a need for world peace or for agreements on pharmaceutical products. There is a need, forgive the trivial example, even for some agreement on plugs and electricity, and we have not yet agreed on the shape of a computer plug between the US and the UK. So, however real the need, the probability of international regulation of AI is very, very slim. I strongly doubt it will happen. I would welcome it, and I am, in fact, contributing what I can, insofar as one citizen of this big planet among many billions can contribute to what can be done. But do I believe that it is likely? No. Are we going to get there? I strongly doubt it. And I am not talking about some general UN declaration along the lines of "let us all be good people," nothing like that. I am talking about regulation. I am afraid I have very strong doubts, because we have not achieved that level of agreement even on much more pressing issues. See what is happening these days with violations of civil rights and so on. So, hopefully, we will get there. I believe that, morally, we should work in that direction. But if you ask the logician how likely it is that we are going to be successful, I would not bet on it.

Paola Cantarini: How would the so-called "trade-off" between innovation and regulation work? Or would regulation by itself prevent or compromise innovation and international competition? According to Daniel Solove, in his book "Nothing to Hide: The False Tradeoff Between Privacy and Security" (Yale University Press, 2011), this would be a mistaken concept. Could you comment on this point?

Luciano Floridi: Yes, I think, once again, we need to distinguish between two different points. One is the need for compromise; trade-offs are what happens in life. You would like to have two good things at the same time, but unfortunately, circumstances do not allow you to have both. Should we work towards obtaining both, for example, security and privacy, when AI comes into play? Absolutely. And the more we get of both, the better the society in which we live. Are there circumstances in which this is simply not possible? Yes, there are. You have to choose: a little more security and a little less privacy, or more privacy and less security. That is what life teaches us. Let me give you a straightforward example that gets rid of the theoretical "we should get both." That is a myth. Think of the FBI and Apple, when they had a fight precisely about security, in this case safety, and privacy. The FBI wanted to crack the iPhone of alleged criminals. Apple said privacy; the FBI said safety: these are potential criminals, we need to be able to read what is on that iPhone. Did we square the circle? Did we save privacy, safety, and security all at once in this case? No, we did not. What happened? As you all know, someone else cracked the iPhone. So, Apple could say "I did not give in," and the FBI could say "we got what we wanted," but it was a compromise at the end of the day. In fact, the compromise was that in this particular case, not always, not in general, but in this particular case, there was less privacy and more safety. It is a bit like the difference between running an experiment in an ideal context, imagine an object sliding down a plane without any friction, and real life, where your car always has some friction on the road. So, I think that sometimes confusion arises because people think in ideal terms, which, as a philosopher, I appreciate a lot. Yes, ideally, we would like to have all human rights respected 100%. But then, unfortunately, in practice, there are conflicts where you need to say: "Look, in this particular case, I am afraid this right has to give in a little bit in the trade-off." Let me conclude with one point, because otherwise you will say: "Well, then nothing matters; everything is negotiable." No, not true. Because we do have some fundamental principles that are not negotiable. Not privacy, and not safety, but, for example, torture. When it comes to torturing people, we do not compromise. There is no urgency, no crisis in the world that will legally or ethically enable someone to say: "we need to compromise, just one case of torture, please, I need to solve my problem." That is a line we do not want to cross. But in many other cases there is room for balancing: take, for example, the right to free speech. We do accept that if a country is at war, the population may legally agree to suspend some levels of free speech, because of the war and the enemy listening. We do it legally, with the acceptance of the whole population, and for a fixed amount of time. What we are saying is: this is a bit of a compromise, because the risk is so high; the enemy is listening; be careful; you cannot tell everybody where the weapons are, et cetera. Or the right to assembly: of course we want to have it, but in some cases, if there is a terrorist attack, an extreme case, it may be restricted.
So, the difficulty does not lie in the absurd extremes. The difficulty is to be intelligently and ethically able to reach the right compromises "if" and "when" they are needed, and then go back to the good, absolute terms that we all want to have: freedom of speech, privacy, security, defense. Much more work needs to be done here. Otherwise, we are just talking about an ideal world, and in an ideal world anything goes.

Paola Cantarini: Take, as a paradigmatic example in the area of data protection, the LIA (Legitimate Interest Assessment), provided for in the Brazilian LGPD and in the European Union's GDPR as a mandatory compliance document when legitimate interest is used as the legal basis for processing personal data (measured by a proportionality analysis/test). Would it be possible to create a "framework" aimed at protecting fundamental rights, embedded in a specific document, the AIIA (Algorithmic Impact Assessment), thus establishing, after a balancing analysis, adequate, necessary, and strictly proportional measures to mitigate risks to such rights?

Luciano Floridi: This is a fundamental question that would require a much, much longer answer. Forgive me if I am a little brief; I am aware of the complexity of the issue. Indeed, the question, put in broader terms, is about how we are going to reconsider some of the fundamental concepts that served us very well in the past but are no longer adequate to cope with the current challenges. The legitimate interest you mentioned in connection with AI, for example in the use of algorithms or of data collected through algorithms, is one of those. Another one I have in mind is fair use, again in AI. How far can I go in relying on fair use when it comes to using databases to train my large language models? How far can I go in terms of legitimate interest when collecting data from users for my business? Now, these are two pillars, but there are many more: legitimate interest, fair use, and so on. They are part of the architecture that, legally and ethically, has really helped us a lot in the past to make sense of the world in which we live and to cope successfully with new businesses, new interactions, new policies. Can we simply import the solutions of, let us say, the 1990s, thirty years ago, into a present in which we have new forms of artificial intelligence and new ways of using, or exploiting, if you like, immense sources of data? We are now able to collect automatically so many data points about individuals. How far, in other words, do we want to import into the present good solutions that worked in the past? My point is: not too far. What worked in the past has become rather less successful at solving our current problems. Take legitimate interest, which you mentioned; it is certainly one of them. Let me give you a simple example. When my grandmother was around, the shopkeeper would take little notes: they bought this, they owe, as they say, "10 reais," they still have to pay. That was my data, but it was kept with pen and paper in a notebook; it was not massive data collected online. And it was fine; it was legitimate interest: the shopkeeper wanted to know whether grandma wanted milk or bread. Fast forward to today, and we have automated and scaled up this legitimate interest to a level that infringes on the privacy of the individual. It is no longer "I am doing this for you, so that you have a service"; sometimes it is "I am doing it for me, so that you buy more products." You get pinged, and a recommender system says you may like to buy some milk, because your fridge is empty. That is too much. You choose this rather than that; you go here rather than there; then you vote for this rather than that, because that is what you "prefer." Once we cross that threshold, the difference of degree becomes a difference in kind. We really are in a different game. And so the old rules, "oh, it was okay to take notes about customers in grandma's day, or to use a bit of an algorithm just to keep track online of who was paying for what," no longer apply as successfully as they used to.
So, my suggestion is: let us keep using these tools, but revisit them from a twenty-first-century perspective. Legitimate interest, fair use: you cannot simply say, "it was fine yesterday, so it must be fine today." No: it was fine yesterday for those reasons; today we need to recognize that the reasons have changed. Personally, therefore, I would put higher limits on both the concept of legitimate interest and the concept of fair use. You cannot simply exploit them now that we have such powerful computational tools and algorithmic approaches: what was allowed a little in the past becomes an open game, so we need to be a bit stricter than we were.

Paola Cantarini: What do you mean by AI governance, and what relationship do you see between innovation, technology, and law?

Luciano Floridi: Well, governance has many, many meanings. People joke that governance is what governments do, which does not help. To put it simply, governance is what you do with something: how you use it and what you want to achieve by using it. I could speak of the governance, for example, of my mobile phone: what do I do with it, and what do I want to achieve by using it? Now, when it comes to AI, a technology of the highest potential impact and so powerful, governance means not looking just at innovation: new tools, new services, new business models, new market strategies, new political or legal approaches. It is not just the innovation; it is what you do with the innovation. And there is a major problem that we do not seem to focus on enough. There is such a rush to innovate that we forget why we are innovating. Imagine innovation as a line: innovation, innovation, innovation. Meanwhile, you need to think about what you do with that innovation when you deploy it. Is it the right thing to do? Will we have ways of solving the problems this innovation generates? When the problems arise, what kind of solutions do we want to have in place, the so-called redress? So, to me, innovation is what makes you able to do things; but deciding what to do, how to do it, and what you want to achieve with all that innovation, that is governance. Now, your point about legislation and innovation is a classic. But it is a bit of a story that we have heard too many times, told by the very people who are doing the innovation. Here is the story: I am an innovator; I am coming up with something new; do not stop me, because this is good for you, it will be good for the future, and I need to be able to explore, test, try. And the legislators say: actually, if we stop innovation now, we are going to kill the possibility of improving business, of making more money; the environment or society will have fewer opportunities. So: do not stop innovation; legislate afterwards. And a narrative takes hold: innovation first, legislation later. Now, let me tell you a story. Go back to the early 1900s. One of the most successful companies in the world is transporting people across the Atlantic, thousands of them, and it is building airships one after the other. In fact, the Empire State Building in New York was built with a mooring mast precisely so that Zeppelins could dock there. You know: innovate first, regulate afterwards; let me worry about what to do in case problems arise later. Are we building the Zeppelin again? Is that a good idea? Those airships were filled with flammable hydrogen and nothing else, and we know how it ended: a disaster, with dead people, and the whole industry was obliterated. We do not have Zeppelins anymore. We could have intervened much earlier, but we followed the line of "innovate first, and the innovator will make amends if problems arise afterwards." I do not want to exaggerate; AI is not the Zeppelin. But whenever I hear someone say "innovation first, do not touch it," I ask: what kind of innovation is that? A good innovation? A bad one? Is it just to make money?
Is it good for the environment? Is it good for society? Or is it like OpenAI: open source at first, and then all of a sudden you become a business, goodbye open source, I am going to make my own money? So, I would like to have someone there asking: why are you innovating? Let me also check what exactly you are doing. And therefore: regulate, innovate, regulate, innovate. We should be looking at parallel lines, not at step 1 and then step 2, but at walking step by step with two legs. So, my suggestion, again, is innovation, regulation, innovation. Not "innovate first, now leave me alone, let me do my own thing, I know what I am doing and you do not, and we will cope with the problems afterwards." Basically, if we had been more careful with regulation, also in terms of impact on the environment, we would not have the problems we have today. And think of all the other disasters we have caused in the past by innovating without regulating: we had to bomb two cities with atomic bombs before saying "not a good idea." Really? Is that what we want to do with innovation every time? So, I am a bit more skeptical about this rhetoric. One final point, allow me this, because I think it is very important. Regulation does not tell you how fast you go; it tells you where you are going. To use an analogy: innovation is the gas pedal, going fast; regulation is the steering wheel. If you have good innovation, you want to go as fast as possible: we are heading toward a fantastic technology that does marvelous things for society, amazing things, good things for sustainability and the environment; go, go, as fast as possible, I am steering you there. But if the attitude is "no, no, I do not want anyone at the wheel, I just want to go wherever I want as fast as I can," well, then it is not called innovation; it is just a free-for-all. And the cost is always societal and environmental: the bill for that kind of innovation is paid in externalities by society and the environment. I know I sound a little too European and not enough American at the moment, but I would like to see some serious, good regulation of innovation before the problems really hit the fan, because the pattern of fixing things only afterwards is not a good idea.

Paola Cantarini: At this year's Venice Architecture Biennale (2023), the theme of the Brazilian pavilion is "Earth and ancestry," that is to say, decolonization ("De-colonizing the canon," Brazil's "Earth" pavilion at the Venice Biennale). Would it be possible to escape such colonialist logic, which is also present in the AI/data field?

Luciano Floridi: I think it is not only possible; it is a necessity. It has to be done. Normally in ethics we say that if we oblige someone to do something, that something must be possible: "ought implies can." We cannot oblige someone to fly, because they cannot; but we can oblige someone to behave decently, because they can. So, in this case, I think the "must" follows straightforwardly from the "can": it can be done, so it must be done. I would like to add a note here, at least as a personal interpretation. I think we should distinguish a colonization that is superficial, that really does not make much difference, from the deeper kind; decolonizing from the superficial kind is not what we are talking about. I hear a lot, for example, about decolonizing my culture from McDonald's. That is not really the issue here. I am from Rome; I saw the first McDonald's open downtown, not far from the Colosseum. I thought: that is not really what we need, there is plenty of good food. But it adds variety, more choices; it is not really colonizing anything. So I am afraid that decolonizing sometimes privileges low-hanging fruit, this particular attitude or approach, which misses the point. It could be, for example, in a parallel context, the kind of voices used by artificial intelligence, or the kind of English it speaks. Why is it always that kind of English? Why never Scottish English, or Indian English, or Canadian English, or American English? There are so many to choose from. But this is not deep enough. I am not saying it is not important, but it does not go to the roots of the problem. The decolonization that matters is understanding that the world is a big place: you cannot always reduce the conversation to the EU, China, and the US, and treat whatever else happens in the world as a sort of sideshow. A lot is happening elsewhere. And the colonized mentality is also self-imposed. Take Italy or Brazil, as you know, half of my family is Brazilian, my wife and brother-in-law: we are always looking at someone else and saying, oh, we need to reform the school system, let us see how they did it over there, and then import the model; or, we need to reform the market, let us see how they did it elsewhere, and then import it. That, for me, is colonization that is deep and profound; we do not even perceive it, and it can happen in different ways. Let me give you a neutral example. There was a debate in the past in England, not the whole UK, about reforming the schools: let us see if we can do it the way they did it in Finland. That is the colonized mentality: we idealize someone else and impose their model on ourselves. Wrong culture, wrong people, wrong history, wrong circumstances; you are importing from another place and imposing it. So sometimes decolonizing is also decolonizing your own mentality, getting yourself out of the habit of saying, in Italy, "let us see how they did it in Germany." Do not get me wrong: there are lots of issues that go under the umbrella of decolonizing, and they are all important.
I think that sometimes we underestimate how much decolonizing we should do of our own culture, which is constantly looking at other models as if they could simply be imposed at home without any justification, even though home is a different place, a different context. My example is not just a joke: you would not plant olive trees in Finland, so why would you want to import Finnish models of education into Italy? It is quite similar; yet in Italy they say, "they do it better up there, they know it all." Recently, for example, I have seen the same colonized mentality: "now let us look at Singapore; oh, Singapore is so good; look at what they are doing; I wish we were like Singapore." But Singapore is a completely unique case. So, when it comes to decolonizing, I think: 1. the problem is more complex; 2. "o buraco é mais embaixo," to be more precise, the hole is way, way deeper here; and 3. sometimes the problem is also getting ourselves out of our own self-colonizing mentality. A lot is imposed on us, but we also allow the imposition to happen. And I am not making a subtle philosophical point; it is simply about having the self-respect to say that there is an Italian way, a Brazilian way, an X way of doing things; I do not have to have these models imposed from somewhere else. Just look, and I know we are recording so I am going to be careful, at what happened with the Chicago Boys, who went and imposed market rules in a way that really affected the economy. The Chicago Boys are known in Chile: people were literally sent to Chicago, came back to Chile, and made the economy completely market-driven, with no restraints, no safeguards, et cetera. Obviously, a disaster. Argentina had a similar problem, but, to be precise, it was especially Chile. That is a form of self-imposed colonization: you think "that is the model, I want to be there." No, I think we should avoid that very carefully. So, low-hanging fruit? By all means, but let us also look at the other fruit higher up, because there are things that we need to decolonize much further on; the low-hanging ones are just what is nearest the eye, and most of us do not look beyond them.

Paola Cantarini: What are the main challenges today with the advancement of AI, especially after the controversy over ChatGPT and the "moratorium" requested in a letter/manifesto by Elon Musk and other leading figures?

Luciano Floridi: How many challenges? I do not know how much time we have, so I will try to be short, apologies. Let me start with one challenge, which is the challenge of understanding the challenges. And I am not trying to be too philosophical here. The letter you mentioned, Elon Musk and the others: in the best scenario, they are being naive; in the worst scenario, they are being manipulative. They are trying to convince us that the problems we have are sci-fi problems. Imagine I come to see you and say: "Look, Paola, we have a problem: zombies. They might actually come." And you say: "No, no, look, there is a pandemic. That is the problem. We have a pandemic." "No, no, the pandemic is a small thing. Zombies! If the zombies arrive, that is the end of the planet, of our lives." And then you distract public opinion. So public opinion starts thinking about sci-fi scenarios, and two things happen. One, some people say: "Oh, sci-fi, sci-fi, every day the zombies," and they conclude it is all rubbish, that there is nothing to worry about, not even conscious AI or intelligent AI taking over the world. And two, other people get really worried about the wrong kind of things: "Oh my goodness, we have an existential risk here; this is the real challenge." Meanwhile, millions of people, not one or two, millions, are being affected by engineered artifacts that decide about your mortgage, control traffic in town, do content moderation, allow or do not allow some kinds of news to reach you, build fake news, influence elections. Zero intelligence, huge impact. But that is the challenge people do not really want to talk about: not the big companies, not the Elon Musks of the world. That they are hypocritical, and I am saying this on record, is a fact; you just have to check. Weeks after signing the letter, Elon Musk invested heavily in building his own company to produce AI. The same people who promoted the letter kept developing these large language models as if there were no tomorrow, and it has now gone even further. Have you seen? A new ChatGPT one day, talk of ChatGPT 5 the next; ChatGPT now talks to a search engine; ChatGPT can now handle images; on and on. More innovation. What happened to the idea that we were supposed to stop, otherwise the world would explode? So, are they naive? Or is it misleading, or manipulative? Certainly hypocritical. The real challenges, and that is the question, are not sci-fi. Do not get distracted; do not be like the little kid staring at the shiny car keys while something else is happening. Let us look at what is actually happening, at all the challenges we have. To simplify, as I always do, imagine two big buckets of problems. The first bucket: old problems made more acute, exacerbated by AI, for example, the manipulation of elections, the manipulation of information, et cetera, problems we have had for decades now because of the digital revolution. Remember, the GDPR was completed well before this new wave of AI: clearly, data protection and so on were already issues. It is just that now, with AI, everything is automated, massively bigger, with more impact. So all the old problems become much bigger than before, and the same goes for the others, copyright and on and on. All those problems become more serious. And then there are some new problems: that is the other bucket.
You know, when I say new problems, I do not mean problems never seen before, but problems that we did not think were so pressing. It is hard to find on this planet a problem that has not happened before in the history of humanity; but at this stage, these are emerging as more and more pressing. I will give you a couple of examples, so that we know they are concrete. One is individual autonomy: how much my decisions are being influenced. It is one thing to have my privacy breached: someone knows what I did yesterday when I would prefer they did not, or discovers that I have certain sexual inclinations and exploits them. It is another thing to have recommender systems or algorithms that try to influence my choices every day, every time, quietly, silently. And if anyone listening to us thinks this is a joke, just remember the last time you chose a movie on Netflix: was it your choice, or was it recommended? One day after the next, fast forward not even ten years from now, and you will be watching what Netflix told you to watch. I am using a light example here, one that does not make a big difference, but imagine it multiplied: the job I choose, the school, the political party, where I take my holiday, whether I take a holiday at all, my religious preferences, et cetera. Push, push, push: human autonomy is being eroded more and more. And we do not scream about this because it happens quietly, every day. It is like the story of the boiling frog: you just turn up the heat gently, a little bit more, a little bit more, and finally the frog is boiled. The story is not literally true, the frog jumps out, but the point stands: gently, bit by bit, until it is done. That is what is happening to autonomy. The other problem in the new bucket is content production. We never had agents that could produce content and were not human. If there was a photograph, a movie, a piece of music, a story told or written somewhere, you knew it had to be related to some human beings; some people had made it. Today, content is increasingly produced synthetically by artificial agents. It is a good development; however, it is undermining our sense of who we are. Why are we special? What, for example, does creativity mean? What is art? Or even: why am I worth anything, if the very things I can do can now be done by something else? It undermines our sense of who we are. That is the other big challenge. In both cases, autonomy and identity, so to speak, these are problems that, philosophically, culturally, and intellectually, as a society, we will solve by going through this revolution. But it will take more than just a few years. It is, if you like, a matter of society digesting this enormous transformation, and it will take more education, more sensitivity, more discussions like the one we are having today.
