Paola Cantarini – Ethikai Institute
This proposal was the subject of postdoctoral research at the Institute for Advanced Studies of USP (Oscar Sala Chair), supervised by Virgilio Almeida, in 2022 and 2023.
This proposal aims to address the following issues through an interdisciplinary and holistic approach, from the perspective of an inclusive, decolonial, sustainable, and democratic AI: how to reduce the environmental impact of AI, and what the main challenges are in Brazil, as a Global South country, regarding the protection of the fundamental rights of vulnerable populations (indigenous and Afro-descendant); how to promote social justice and social inclusion by combining “design justice,” “algorithmic justice,” “epistemic justice,” “data justice,” and “environmental justice”; how an alternative AI governance model can contribute to achieving these objectives while remaining aligned with innovation and technological development, integrating innovation with ethics (“metainnovation”) and responsibility; and how the concept of “life-centered AI” and the framework presented here for the protection of fundamental rights and assessment of environmental impact can contribute to adequate environmental protection by moving away from an anthropocentric approach toward a more holistic and sustainable understanding.
MOTIVATION, FOUNDATION, FRAMEWORK
Problem to be addressed and research evidence: how to reduce the environmental impact of AI, given that every AI application affects the climate, and what the main challenges are in Brazil, as a Global South country, regarding the protection of the fundamental rights of vulnerable populations, with a focus on indigenous and Afro-descendant communities; how to promote social justice and social inclusion through alternative AI governance models, drawing on the theoretical framework of Māori data governance in New Zealand, the Toronto Declaration, which advocates for the inclusion of potentially affected groups in decision-making on design and review, and the Global Indigenous Data Alliance’s “CARE Principles for Indigenous Data Governance” (https://www.gida-global.org/care). How can such a governance model and the proposed new principles of “fundamental rights by design” contribute to aligning economic and technological development and innovation with the adequate protection of fundamental rights, integrating innovation with ethics and responsibility, including the enforcement of fundamental rights by courts in the AI era? How will the concept of “life-centered AI,” rather than merely “human-centered AI,” contribute to environmental protection by shifting from an anthropocentric to a holistic and sustainable perspective?
Research evidence: Algorithmic Impact Assessment (AIA) and a fundamental rights-based approach are recommended in various international documents, including the Directive from the Treasury Board of Canada (https://www.tbs-sct.gc.ca/pol/docWeng.aspx?id=32590); the European Commission (“legal frameworks on fundamental rights”); the Council of Europe (“Unboxing AI: 10 steps to protect Human Rights”); the European Parliament (“Governing data and artificial intelligence for all – Models for sustainable and just data governance”); the Federal Trade Commission (FTC), the National Telecommunications and Information Administration (NTIA), the Future of Privacy Forum, the European Union Agency for Fundamental Rights (FRA), and the Dutch Data Protection Authority; Brazil’s National Data Protection Authority (ANPD) (https://www.gov.br/anpd/pt-br/assuntos/noticias/anpd-publica-analise-preliminar-do-projeto-de-lei-no-2338-2023-que-dispoe-sobre-o-uso-da-inteligencia-artificial); Amnesty International and Access Now (“Toronto Declaration”); the EU High-Level Expert Group on AI (AI HLEG – “Ethics Guidelines for Trustworthy AI”); the Australian Human Rights Commission (2018 project); and UNESCO (“Recommendation on the Ethics of Artificial Intelligence” – https://en.unesco.org/artificial-intelligence/ethics).
Additionally, studies highlight the vulnerability of the Afro-descendant population with regard to facial recognition and predictive policing, and the greater potential for harm to fundamental rights in contexts with a documented history of discrimination and to communities systematically denied various rights throughout their history (A Fuster and others, “Predictably Unequal? The Effects of Machine Learning on Credit Markets” (2021) 77(1) J Finance 4; Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press 2018)), as well as the environmental impact, the decolonial perspective, and the vulnerability of the indigenous population:
Climate Change 2021: The Physical Science Basis, Intergovernmental Panel on Climate Change, Working Group I (2021);
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence, Mohamed et al. (2020);
Ars Technica (https://arstechnica.com/tech-policy/2020/06/police-arrested-wrong-man-based-on-facial-recognition-fail-aclu-says/);
National Institute of Standards and Technology (NIST) (https://learn.g2.com/ethics-of-facial-recognition);
Access Now, Amnesty International, and others (https://www.accessnow.org/wp-content/uploads/2021/06/BanBS-Portuguese.pdf);
Co-designing Māori data governance (https://tengira.waikato.ac.nz/__data/assets/pdf_file/0008/973763/Maori_Data_Governance_Model.pdf).
Despite an extensive international literature, these themes remain neglected in Brazil, with an underrepresentation of both the Global South and the topic of fundamental rights. There is only one LAPIN study dedicated to a fundamental rights “framework,” and it is incomplete.
The aim of this proposal is to provide epistemological foundations for developing AI in a way that does not hinder innovation, combining economic and technological development with adequate protection of fundamental rights. This involves addressing their multiple dimensions, focusing not only on the individual aspect but also on collective and social aspects, addressing environmental impact, and aligning innovation with ethics and responsibility (“metainnovation” and “responsibility for innovation”).
It goes beyond analyzing impacts and proposing a theorization as a foundation: it presents a concrete proposal on how to address these issues, contributing to transforming ethical principles into effective practices. This approach is innovative, policy-oriented, and implementation-focused, providing a specific “framework” for the environmental impact assessment that is necessary in the field of AI.
The proposed “framework” takes into account existing models such as ISO (ISO/IEC 23894:2023, ISO/IEC 42001) and NIST but aims to go further, considering fundamental rights in AI, also building on the proposal of the EU Agency for Fundamental Rights (FRA). It expands the initial proposal of “fundamental rights by design” principles.
This approach is essential for achieving the Rule of Law from conception and “algorithmic justice,” both of which depend on adequate protection of individual, collective, and social fundamental rights. It addresses a gap in the field of AI, where numerous catalogs of ethical principles lack effectiveness due to the absence of enforcement, potentially leading to “ethics washing.”
Legislative proposals in Brazil, such as PL 21/2020 and PL 2338/2023, lack specificity regarding environmental impact. Similar omissions are found in the GDPR and the EU AI Act, even in their latest versions. This proposal considers the direct and indirect environmental impacts of AI, providing concrete measures to address these issues.
It relates AI, equity, data colonialism, and climate justice, recognizing their interconnectedness. Considering climate change and digital transformation as the main trends of the century, alignment with social and democratic values is crucial. The proposed “framework” covers all potentially affected fundamental rights, incorporating the principles of “fundamental rights by design,” akin to “privacy by design” concepts.
Preparation of Environmental Impact Assessments (EIAs) becomes a requirement, acting as an additional burden of argumentation in favor of affected fundamental rights. The multidimensional nature of fundamental rights, covering individual, collective, and social dimensions, obligates the adoption of damage mitigation measures.
The “life-centered AI” or “planet-centered AI” concept is broader than “human-centered AI,” considering both direct and indirect environmental impacts. This is essential for obtaining a green AI certification, similar to ISO 14001, SBC, BREEAM, WELL, and CTE certifications.
Existing initiatives often focus on “human-centered AI,” emphasizing human control and respect for human values; while important, this is insufficient. The “life-centered AI” approach, by contrast, takes into account the multidimensionality of fundamental rights, making the adoption of damage mitigation measures obligatory.
In conclusion, this proposal aims to provide a comprehensive framework and principles for the development of AI in Brazil, ensuring alignment with democratic values, inclusivity, and decolonization.
FRAMEWORK AND PARAMETERS
By complying with such a framework for Environmental Artificial Intelligence (EAI), it would be possible to obtain the “green seal” – “sustainability by design.” At the same time, innovation and economic and social development would be aligned with sustainability, looking not only at the short term but also at the medium and long term, and addressing two of the main problems identified in the 2023 Global Risks Report, namely failure to mitigate climate change and large-scale environmental damage. Innovation aligns with responsibility and ethics, and sustainability with international competition, serving as a market differentiator and fostering public engagement while maintaining high levels of trust and transparency. This is also highlighted by the European Union’s Green Deal, which aims to recover the long-term competitiveness of economic sectors sustainably, referring to competitive sustainability.
The framework would be mandatory rather than voluntary, unlike AI impact assessments in general and unlike some literal, rather than systemic and functional, interpretations of the related instrument, the Data Protection Impact Report. Under the GDPR, that document is mandatory in cases of high risk; under the LGPD, due to poor legislative technique on this point, it raises doubts and legal uncertainty, since part of the doctrine argues that it is not a mandatory document as a rule, because the law states only that it may be requested by the ANPD.
Moreover, unlike the GDPR, which provides only for cases of high risk and a fixed list of risk levels, preparation of the assessment would always be necessary in the case of environmental impact, and the mitigation measures would depend on an approach proportional to the level of risk, considering the likelihood of its occurrence and the scope and severity of the risk. It is worth remembering that in the environmental context, as has also been understood in the context of data protection, damage, once it occurs, is hardly resolved by post hoc repair measures alone: an oil spill in the ocean, for example, makes it impossible to return to the status quo ante.
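A minimal sketch, in Python, of how such proportionality could be operationalized; the 1-to-5 scales, the multiplicative score, and the tier thresholds are illustrative assumptions of this sketch, not parameters prescribed by the framework:

from dataclasses import dataclass

@dataclass
class RiskFactors:
    likelihood: int  # 1 (rare) to 5 (almost certain)
    scope: int       # 1 (localized) to 5 (large-scale, transboundary)
    severity: int    # 1 (fully reversible) to 5 (irreversible, e.g., oil spill)

def risk_score(f: RiskFactors) -> int:
    # Combine the three factors into a single score (1 to 125).
    return f.likelihood * f.scope * f.severity

def mitigation_tier(score: int) -> str:
    # Map the score to a proportional mitigation tier (hypothetical cut-offs).
    if score >= 60:
        return "full mitigation plan, independent oversight, public disclosure"
    if score >= 20:
        return "mitigation plan and periodic monitoring"
    return "baseline documentation and monitoring"

# Example: an unlikely but severe, wide-reaching impact still demands strong measures.
factors = RiskFactors(likelihood=2, scope=4, severity=5)
print(risk_score(factors), "->", mitigation_tier(factors))
# prints: 40 -> mitigation plan and periodic monitoring

The multiplicative form reflects the point made above: irreversible harms (high severity) keep the score high even when their likelihood is low, so post hoc repair is never the default answer.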
Prevention, therefore, is better than cure, and the EIA is the instrument par excellence for such prevention. Moreover, under the WP29 guidelines defining the processing operations subject to a DPIA, preparing the DPIA – the Data Protection Impact Assessment Report – is considered mandatory where there is evaluation or scoring, including profiling and the prediction of aspects related to professional performance, economic situation, health, personal preferences or interests, reliability, behavior, or location and movements; this would cover the hypothesis of predictive policing. The guidelines also treat the DPIA as mandatory whenever data of vulnerable data subjects is processed. Mutatis mutandis, since indigenous people and Afro-descendants are vulnerable groups, the EIA would always be required for AI applications involving them.
The Algorithmic Accountability Act (USA), a bill before the United States Congress, requires, in the case of an automated decision system considered high risk (given the novelty of the technology used and the nature, scope, context, and purpose of an automated decision that poses a significant risk to privacy or security or results in unfair and biased decisions), the preparation of a data protection impact report, and of a more generic impact report in cases where there is no data processing but AI is used to automate decision-making processes. It states that algorithms should have their impact on accuracy, fairness, discrimination, privacy, and security measured.
Just as the assessment is justified for the specific AI application considered there, namely automated decision-making, it is justified for other AI applications whenever there is also a risk of bias and harm, not only to the single fundamental right considered (privacy) but also wherever there is potential infringement of any other fundamental right, since there is no hierarchy among such rights: all stand on an equal footing as constitutional principles.
FRAMEWORK FOR AIA – FUNDAMENTAL RIGHTS AND ENVIRONMENTAL IMPACT
Prepare the EIA for environmental impact in advance – through an independent, multidisciplinary, and multiethnic team, with the participation of representatives of vulnerable groups – and publish the document on the organization’s website, covering the following steps:
a. LEGITIMACY, LEGALITY, AND REASONABLENESS OF THE AI APPLICATION: demonstrate compliance with existing environmental standards and the LGPD, providing information about data processing, legal bases, and supporting compliance documents in this area.
b. ACCOUNTABILITY AND DAMAGE FORECAST: assess the level, probability, severity, and scope of direct and indirect environmental impacts.
c. FAIR USE – PROPORTIONALITY ANALYSIS: demonstrate why AI is being used, pointing out its benefits and its harms to the fundamental right to the environment and other fundamental rights; show whether the same purpose could be achieved by less environmentally intrusive means; describe the nature, scope, context, and purposes of the AI application; analyze whether the AI is used for public, collective, and social purposes or for recreational purposes, as in the case of the metaverse (the latter would not pass the proportionality test and would not be allowed where there is environmental impact, unless the mitigation measures presented fully address such damages); priority will be given to authorizing AI applications focused on environmental sustainability or on improving health, for example.
d. TRANSPARENT USE: prove, through adequate documentation, the measurement and calculation of resource usage and environmental impact, such as energy consumption, CO2 emissions, and water consumption, and disclose such documentation on the company’s website; develop forecasts of energy use, water use, and other environmental impacts, and present optimization strategies and the possible use of renewable energy sources (a minimal calculation sketch follows this list).
e. SUSTAINABLE USE: perform a balancing procedure between conflicting fundamental rights.
f. Identify gaps and future improvements.
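As referenced in item d, the following is a minimal sketch, in Python, of how the energy, carbon, and water figures for the transparency documentation could be estimated. The conversion factors (grid carbon intensity, PUE, water usage effectiveness) are illustrative placeholders that vary by region and data center; real reporting should substitute measured, site-specific values:

# Minimal sketch for item d: estimating the environmental footprint of an
# AI training run. All conversion factors below are illustrative assumptions.

GRID_CARBON_INTENSITY = 0.0817  # kg CO2e per kWh (illustrative; varies widely by grid)
PUE = 1.4                       # power usage effectiveness of the data center (assumed)
WATER_USE_EFFECTIVENESS = 1.8   # liters of water per kWh of IT energy (site-dependent)

def training_footprint(gpu_count: int, hours: float, gpu_power_kw: float) -> dict:
    # Estimate energy (kWh), emissions (kg CO2e), and water (liters).
    it_energy_kwh = gpu_count * hours * gpu_power_kw
    facility_energy_kwh = it_energy_kwh * PUE  # includes cooling and overhead
    return {
        "energy_kwh": round(facility_energy_kwh, 1),
        "co2e_kg": round(facility_energy_kwh * GRID_CARBON_INTENSITY, 1),
        "water_liters": round(it_energy_kwh * WATER_USE_EFFECTIVENESS, 1),
    }

# Example: 64 GPUs at 0.4 kW each, running for 720 hours (one month).
print(training_footprint(gpu_count=64, hours=720, gpu_power_kw=0.4))

Publishing the inputs alongside the outputs, as item d requires, lets third parties recompute the figures and contest the assumptions.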
MITIGATION MEASURES
g. Adopt the following mitigation or compensation measures for damages, proportionally to the level of damage (probability, severity, and extent):
g.1. Adopt mitigation measures for the damages foreseen in the case of the specific AI application.
g.2. Facilitate the creation of open data standards on environmental aspects related to AI and create an open platform enabling easy access to and sharing of data (see the sketch after this list).
g.3. Contribute to financing the development of an international catalog of data relevant to the environment and open-source models and software.
g.4. Support accessible cloud storage systems for academic researchers, civil society, small and medium-sized enterprises, and startups.
g.5. Support AI applications for the environment and health (since health is also frequently affected by environmental damage).
g.6. Finance interdisciplinary research on innovation and environmental protection.
g.7. Finance literacy and “reskilling” programs on AI, digital transformation, environmental impacts, and fundamental rights.
g.8. Implement restrictions on AI training and consumption limits, calibrated to whether the AI model serves a social utility, on the one hand, or a purely utilitarian and exclusively economically oriented vision, on the other.
g.9. Ensure that cloud computing, if used, is included in carbon reporting and pricing policies.
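In support of item g.2 above, a minimal sketch of what a shared, open record of AI environmental metrics might look like; the schema and field names are hypothetical illustrations for this proposal, not an existing open data standard:

# Minimal sketch for item g.2: a hypothetical open data record for the
# environmental footprint of an AI system, serialized as JSON so it can be
# published on an open platform. The schema is illustrative, not a standard.

import json
from dataclasses import dataclass, asdict

@dataclass
class AIEnvironmentalRecord:
    system_name: str
    provider: str
    reporting_period: str   # e.g., "2024-Q1"
    energy_kwh: float       # total facility energy attributed to the system
    co2e_kg: float          # emissions, including cloud usage (cf. item g.9)
    water_liters: float     # cooling water consumption
    grid_region: str        # where the energy was consumed
    methodology_url: str    # link to the published EIA / calculation method

record = AIEnvironmentalRecord(
    system_name="example-model-v1",             # hypothetical
    provider="Example Org",                     # hypothetical
    reporting_period="2024-Q1",
    energy_kwh=25804.8,
    co2e_kg=2108.2,
    water_liters=33177.6,
    grid_region="BR-Southeast",
    methodology_url="https://example.org/eia",  # placeholder
)

print(json.dumps(asdict(record), indent=2))  # ready for an open data platform

Serializing to JSON keeps the record platform-neutral, so the open platform envisaged in g.2 could ingest disclosures from any provider.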
ACTION PLAN – “FUNDAMENTAL RIGHTS BY DESIGN” PRINCIPLES – “EQUITY SEAL”:
This development was theoretically grounded in the FUNDAMENTAL PRINCIPLES OF PRIVACY BY DESIGN formulated by Ann Cavoukian in the field of data protection, now expanded and focused on the scope of AI applications. It was further inspired by existing principles developed by a group of experts from various fields (human rights, technology, law, and ethics) convened by the European Union Agency for Fundamental Rights (FRA) and the European Data Protection Board (EDPB), in collaboration with the Council of Europe and the European Union Intellectual Property Office (EUIPO). However, this proposal broadens that perspective, since the existing set is incomplete: it addresses only privacy, non-discrimination, and freedom of expression and information, and it has weaknesses, as it does not mention collective and social impacts and damages and speaks only of transparency and responsibility, which is not sufficient.
According to this development, the Principles of “Fundamental Rights by Design” would be:
Respect for human dignity: Technology must be designed to respect the inherent value and worth of every human being and to avoid any form of discrimination or dehumanization.
Non-discrimination: Technology should not be designed to discriminate against any particular group of people based on their race, gender, religion, sexual orientation, or other characteristics.
Right to privacy: Technology must respect the right to privacy and ensure that personal data is protected against unauthorized access, use, or disclosure.
Freedom of expression and information: Technology should not be designed to restrict or censor freedom of expression or the free flow of information.
Transparency and responsibility: Technology must be designed to ensure transparency and accountability in its operation, so that users can understand how it works and can hold those responsible for any harmful effects accountable.
Participation and empowerment: The design and implementation of technology should involve the participation of all relevant stakeholders and empower users to make informed decisions about its use.
NEW “FUNDAMENTAL RIGHTS BY DESIGN” PRINCIPLES – PAOLA CANTARINI
- Be proactive, not reactive – preventive, not corrective: prior and mandatory preparation of the Environmental Impact Assessment (EIA) and adoption of design measures focused on protecting all potentially affected fundamental rights and addressing environmental impact, since every AI application causes environmental impact.
- Fundamental rights as the default setting: the default configuration of a given system must preserve all potentially affected fundamental rights, and the Algorithmic Impact Assessment (AIA) must be developed in a mandatory and prior manner. Fundamental rights are protected by default, from the design stage and in “compliance” documents, without requiring active intervention from third parties.
- Fundamental rights embedded into design: Potentially affected fundamental rights must be incorporated into the architecture of systems and business models and should be outlined in the AIA development framework.
- Full functionality – positive-sum, not zero-sum: all involved interests must be accommodated, avoiding false dichotomies, i.e., providing protection for fundamental rights without loss of full functionality. Align innovation with ethics and responsibility; promote innovation together with adequate protection of rights.
- Security and responsibility by design – end-to-end security – full lifecycle protection: Protection of fundamental rights throughout the lifecycle of AI application – prove the adoption of security measures and damage mitigation through reliable documentation. Provide protection for all potentially affected fundamental rights in advance, in design, and in compliance documents, extending such protection throughout the AI application’s lifecycle.
- Visibility and transparency – keep it open: act with trust and transparency, toward explainable AI. Develop compliance documents (AIA) and reports that prove the adoption of the aforementioned measures, and make them available for reading and analysis on the company’s website in an easily accessible location.
- Respect for fundamental rights and human and democratic values – keep it life-centric: A requirement to respect fundamental rights, republican and democratic values, the Democratic Rule of Law, human values, human control of technology, and ethical and environmental considerations.
- Beneficence, not maleficence: respect human dignity, algorithmic justice, environmental justice, epistemic justice, and all fundamental rights. Act with responsibility, care, transparency, and ethics.
- Contestability and due informational process: Ensure human review of automated decisions and other AI applications with the potential to infringe fundamental rights and the environment. Have an effective communication channel and a responsible party, such as the Data Protection Officer (DPO), responsible for compliance and communication with the public.
- Independent review and oversight: maintain an independent, autonomous, and legitimate oversight body with an interdisciplinary and multiethnic team, ensuring diversity and collaboration with representatives of vulnerable groups.