Interview with Ann Cavoukian

By Paola Cantarini

Ann Cavoukian – BIO

Ann Cavoukian is the former Information and Privacy Commissioner for the Canadian province of Ontario. Her concept of Privacy by Design, which takes privacy into account throughout the systems engineering process, was expanded on as part of a joint Canadian-Dutch team, both before and during her tenure as Commissioner of Ontario (1997 to 2014). After the end of her three terms as IPC, she was hired by Ryerson University (now Toronto Metropolitan University) as a Distinguished Visiting Professor, and in 2014 she was appointed Executive Director of Ryerson’s Privacy and Big Data Institute. Since 2017, Cavoukian has been the Distinguished Expert-in-Residence of the university’s Privacy by Design Centre of Excellence.

Original version

Paola Cantarini: Thank you so much for agreeing to participate in this project of the University of São Paulo’s Institute of Advanced Studies, the project UAI – Understanding AI.

Ann Cavoukian: Good afternoon, everyone.

Paola Cantarini: My name is Paola Cantarini. I am a university professor, a philosopher, and a postdoctoral researcher at USP’s IEA. This initiative is part of the project Understanding AI.

Our guest today is Ann Cavoukian.

Thank you so much for accepting and spending this afternoon here with us. It’s a great honor for all of us. Ann Cavoukian needs no further introduction, because she is internationally recognized for her work in the field of data protection. She is the former Information and Privacy Commissioner for the Canadian province of Ontario and developed the well-known principles of Privacy by Design, which are applied in the field of data protection. Before we launch into the main subject of the interview, in the areas of artificial intelligence and data protection, big data, and the humanities, I would like to ask you just a little bit about your own background in the field of data protection.

Paola Cantarini: What is your area of expertise? Would you start by telling us about your work related to AI and data protection?

Ann Cavoukian: Thank you very much for inviting me. I started, as you said, as Commissioner in Ontario, Canada. And the interesting thing was, you see, my background: I’m not a lawyer, I’m a psychologist. My Ph.D. is in psychology and law. So when I started at the Commissioner’s Office, there were brilliant lawyers who wanted to apply the law and make sure it worked and all that. But I wanted more than that. I wanted to be proactive, meaning, let’s prevent the privacy harms from arising, not just offer solutions after they’ve been perpetrated. So literally at my kitchen table, over three nights, I created Privacy by Design, which is all about trying to prevent the privacy harms from arising, addressing privacy and security hand in hand, not one versus the other. I developed it with seven foundational principles. Then I took it into the office and I sold it to the lawyers, if you will. And they were great. They understood I was all for the law; I was just hoping we wouldn’t have to invoke the laws, because we could avoid a lot of the privacy concerns and the data breaches, etc. So that’s how it all started. And I was Commissioner for three terms, a long time, 17 years, and we had great success with it. It was unanimously passed as an international standard in 2010 by the International Assembly of Privacy Commissioners and Data Protection Authorities at one of our conferences in Europe. And then, as you know, in 2018 it was included in the European law, the GDPR, the General Data Protection Regulation, which is huge. They had been working on that for five years, and they included Privacy by Design and also the second of the seven foundational principles, privacy as the default setting, which is so important. It means you don’t ask your customers, your citizens, to go find ways to protect their privacy. No, you say to them: we give it to you automatically, it’s the default setting, you don’t have to worry. People love that. It builds trust like nothing else, at a time when there’s so little trust.

Paola Cantarini: Do you think there is a need, and therefore also the possibility, for a worldwide law to regulate AI globally, at least establishing minimum standards?

Ann Cavoukian: Well, I think a global approach would be admirable, because you could focus on one standard. The EU just produced a standard, the AI Act. It’ll be finalized later in the year, but they just produced it. The US is working on one, and they’re talking about working on it together. The US and EU standards, I think, should be global, because otherwise you have a bunch of different ones, like in Canada, where the federal commissioner is working on one with two provinces and the Ontario commissioner is working on another one completely by himself. You can have all these different things that you don’t want. You want one strong standard, like the GDPR, that says you do not encroach upon people’s personal information. That would go a long way, because AI does so much so quickly vis-à-vis our ability to address those issues and detect them. I mean, forget it, it’s going to take an enormous amount of time and effort. And you have to understand, the people who create these amazing tools, AI and ChatGPT, etc., are brilliant, of course, but there are also very brilliant hackers and phishers who are going after all of this to obtain lots of information. And ChatGPT, for example, does not prevent personally identifiable data from being accessed. So where are the privacy-protective measures? I mean, there’s nothing there right now. And I know its creator is working with governments to try to get something going, but we have to move on this very quickly.

Paola Cantarini: How would the so-called “trade-off” between innovation and regulation work? Or would regulation by itself prevent or compromise innovation and international competition? According to Daniel Solove, in his book “Nothing to Hide: The False Tradeoff Between Privacy and Security” (Yale University Press, 2011), this would be a mistaken concept. Could you comment on this point?

Ann Cavoukian: I don’t agree with that at all. It’s not privacy versus innovation, or privacy versus data utility. You have to have both. I always refer people to Steve Jobs, you know, the brilliant creator of Apple. He said: look, with privacy, I can do crazy blue-sky thinking, come up with wild ideas and then throw them out if they’re ridiculous, but I can also end up with the brilliant formulation that was Apple back then. He believed very strongly in privacy and innovation, of course. So you can and must have both. We have to get rid of the dated zero-sum game: either/or, win or lose. That’s so yesterday. Forget about that. You do both. If you’re smart and you can innovate in a brilliant way, you can also innovate in a way that builds in privacy and data protection.

Paola Cantarini: Taking as a paradigmatic example in the area of data protection the LIA – the legitimate interest assessment, provided for in the Brazilian LGPD and in the European Union’s GDPR as a mandatory compliance document when legitimate interest is used as the legal basis for processing personal data (measured by a proportionality analysis/test) – would it be possible to create a “framework” aimed at protecting fundamental rights, embedded in a specific document, the AIIA – Algorithmic Impact Assessment? And thus to establish, after a weighted analysis, adequate, necessary, and strictly proportional measures to mitigate risks to such rights?

Ann Cavoukian: I’ll wait and see. I don’t want to say no outright, but I also want to say that’s again the either/or, one or the other. I want you to do both, and companies as well. I’m often invited to speak to boards of directors, and I walk in and the CEO and his team have their heads down. They don’t want to hear what I have to say. And I always say to them: give me 10 minutes, give me 10 minutes to show you how privacy can enhance your operations and the delivery of your products and services to your customers. And then, if you’re not interested, I’ll leave. So all of a sudden they wake up and they go, oh, OK, go ahead. And I talk to them about how it has to be positive-sum, meaning hand in hand: privacy and innovation, privacy and data utility. And then they’re all for it. They say, I didn’t know we could do both. Of course you can do both. It’s not that hard to ensure that the personal identifiers are removed, or to use synthetic data, where you create and recreate the data so that it’s not personally identifiable but can be used widely for data utility purposes, for innovation. There are so many ways to do this. We just have to give up the old-world thinking of either/or, one or the other, and move ahead. There’s so much going on.
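As a minimal sketch of the de-identification Cavoukian describes, assuming a simple tabular dataset with hypothetical field names (not from the interview), direct identifiers can be dropped and a salted one-way hash can stand in as a pseudonymous key:

```python
import hashlib

# Hypothetical records; the field names are illustrative assumptions.
records = [
    {"name": "Alice Souza", "email": "alice@example.com", "age": 34, "city": "Toronto"},
    {"name": "Bob Tremblay", "email": "bob@example.com", "age": 41, "city": "Ottawa"},
]

DIRECT_IDENTIFIERS = {"name", "email"}        # dropped outright
SECRET_SALT = "store-and-rotate-separately"   # assumption: kept outside the dataset

def pseudonym(value: str) -> str:
    """Replace a value with a salted one-way hash so records stay linkable
    across the dataset without exposing the original identifier."""
    return hashlib.sha256((SECRET_SALT + value).encode()).hexdigest()[:12]

def deidentify(record: dict) -> dict:
    # Keep only the non-identifying fields, plus a stable pseudonymous key
    # derived from the dropped email address.
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["subject_id"] = pseudonym(record["email"])
    return out

print([deidentify(r) for r in records])
```

Note that salted hashing is pseudonymization, not anonymization: quasi-identifiers such as age and city can still re-identify individuals, which is why the synthetic data generation she mentions goes a step further.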

Paola Cantarini: What, in your view, is meant by AI governance, and what relationship do you see between innovation, technology, and law?

Ann Cavoukian: Well, AI governance is going to be a tough one, there’s no question, because for AI governance you need the regulators to understand how the AI is working, complicated as it is. I’ve been trying to work on neural nets and increase my own learning, and it takes a lot of time. So the likelihood of these guys doing it is slim. So what I’m telling people, the regulators, is: make sure you have expert staff in this area who can advise you about this, because audits will be taking place, and when there’s an audit of your operations, if you’re using AI, you’d better know how to answer the questions. So it can’t be one versus the other. When I explain that, of course, they’re going to have some tech staff, or get one of their tech staff really focused on AI and data protection: how to protect personally identifiable data in a way that doesn’t diminish your AI but enhances it, because it then frees you from having to protect the data at every turn. You can then go wild and use it for a variety of purposes you probably hadn’t contemplated. So I wouldn’t give up on this at all.

Paola Cantarini: At this year’s Venice Architecture Biennale (2023), the theme of the Brazilian pavilion is “Earth and ancestry”, that is to say, decolonization (“De-colonizing the canon”, Brazil’s “Earth” pavilion at the Venice Biennale). Would it be possible to escape such colonialist logic, which is also present in the AI/data areas?

Ann Cavoukian: I never want to say no to things, so I’m always open. I’d like to see what they develop and be convinced that personal information is in fact strongly protected while ensuring the fluidity and use of AI. I’m not opposed to AI at all. It enables you to do amazing things. If you go to ChatGPT, especially version four or five, it’s amazing. The answers you get are remarkable. But I’ve also heard that if it doesn’t find an answer, it hallucinates, so it can make things up. My God, that’s the last thing we want. Now I want them to prove to me that that’s not going to happen. As for the main challenges today with the development of AI, after the controversy exactly about this with ChatGPT and the moratorium requested in an open letter by Elon Musk and other leading figures: it’s going to take time, because there is a lot of controversy associated with it. You know, there’s a very brilliant tech group saying put the brakes on this right now. Not Sam Altman, who’s the leader who created ChatGPT, but he is participating with various governments in terms of how we address the privacy issues. We have to address this now, and whether you put the brakes on it completely or work with those who are creating it and using it, I’m not sure which direction that will go in. But it’s clear that this has to be addressed, because if it’s not, companies who use AI and ChatGPT, etc., will end up in the courts. There will be lawsuits. There will be class-action lawsuits, because people’s personal information has been used in ways that were not consented to, or in ways that wreak havoc on individuals’ lives. That’s what we’re trying to avoid by saying you have to look under the hood now. Trust, but verify. Actually, in this case, don’t trust, just verify: look under the hood and do audits.

Paola Cantarini: Professor Ann Cavoukian, thank you again so much. It is a great honor for us. I think this wonderful interview was full of insights that will contribute to the main focus of the project Understanding AI at the University of São Paulo, which is to broaden and democratize the discussion about AI, to remain committed to scientific thought, to create more awareness about AI among people, and to value critical thought and an interdisciplinary, inclusive approach. So thank you again so much for being here with us.

Ann Cavoukian: Oh, it’s been my pleasure. I love getting the word out that AI is not bad. You know, a lot of people are saying AI is terrible, but no, it’s not. It has enormous offerings. You just have to make sure that privacy is built into the process so you can have amazing outcomes from AI and privacy and data protection. We can do both. Get rid of the dated zero-sum mindset of either/or. We’re going to do this.

Paola Cantarini: Thank you so much. Have a great day. Bye.
