There are ongoing discussions about the need to regulate artificial intelligence (AI) worldwide, including in Brazil, where most scholars argue that some form of regulation is necessary.
Many bills of law ("BLs") have been drafted in recent years, the most relevant being BLs nos. 5051/2019, 21/2020, 872/2021, and the latest, 3592/2023.
In July 2021, the approval of the urgency regime for the analysis of BL 21/20 by the Brazilian Congress drew criticism: a fast-track procedure would preclude deeper public debate on such a complex subject, and the BL's text was considered excessively general and simplistic.
In view of the criticism, the Brazilian Senate created, in February 2022, a legal committee (CJUSBIA) to draft a more suitable replacement for the BLs in discussion. The committee published its final report in December of that same year, which later became Bill of Law 2338/23, the so-called "Artificial Intelligence Act" or "AI Act".
The "AI Act" adopts a regulatory approach based on risks and rights, creating an asymmetric regime in which agents and operations involving higher levels of risk are subject to a stricter set of obligations, similar to the approach taken by the European Union in the "E.U. AI Act".
The new text establishes that people affected by AI systems have the following basic rights:
(a) to be informed in advance about their interactions with AI systems;
(b) to receive an explanation about the decision, recommendation, or diagnosis reached by AI systems;
(c) to contest decisions or conclusions reached by such systems that produce legal effects or significantly impact the interests of the affected person;
(d) to request human participation in decisions rendered by AI systems;
(e) to non-discrimination and the correction of direct, indirect, or abusive discriminatory biases; and
(f) to privacy and the protection of personal data.
The text provides that all AI systems shall be evaluated to determine their level of risk (self-assessment). The relevant authority (not yet defined in the "AI Act") may reclassify a system's risk level if it disagrees with the self-assessment.
Moreover, BL 2338/23 prohibits the implementation of excessive-risk AI systems: those that employ subliminal techniques capable of inducing individuals to behave in ways harmful or dangerous to their health and safety, that exploit vulnerabilities of specific groups, or that governmental authorities use to classify or rank the population.
The text also establishes that the use, by governmental authorities, of AI systems for continuous remote biometric identification in public places may only occur if previously authorized by a specific law or by a court order relating to an individualized criminal activity, and only for:
(a) the prosecution of persons charged with crimes punishable by more than two years of imprisonment;
(b) the search for victims of crimes or missing persons; and
(c) crimes in flagrante delicto.

Facial recognition in the public places of so-called "smart cities" must therefore be carefully analyzed, especially in light of concerns about algorithmic racism.
The BL also establishes that AI systems may be considered high-risk if used for purposes deemed sensitive, such as infrastructure safety, education and vocational training, recruitment and candidate evaluation, the definition of priorities in emergency services, autonomous vehicles, applications in healthcare, biometric identification systems, criminal investigation, public safety, migration management, and border control.
For high-risk AI systems, an algorithmic impact assessment is mandatory; it must be performed by a technically competent professional who is independent from the system's developer/operator and subsequently forwarded to the competent authority.
One significant issue addressed by the "AI Act" is the civil liability of agents operating AI systems. In this regard, the supplier or operator of a system that causes financial, moral, individual, or collective harm is required to repair such damage in full, regardless of the system's level of autonomy. For AI systems of high or excessive risk, the supplier or operator is strictly liable for the damage caused. For other AI systems, the agent's fault is presumed and the burden of proof is reversed in favor of the victim, who needs only to demonstrate the damage suffered and its causal link to the AI system, not the agent's fault.
The "AI Act" also establishes the following administrative penalties to be applied to agents:
(a) warning;
(b) fines of up to BRL 50 million per violation or 2% of the group's revenues;
(c) public disclosure of the violation;
(d) prohibition from participating in the regulatory sandbox regime;
(e) suspension of the development, operation, or supply of AI systems; and
(f) prohibition from processing personal data.
The "AI Act" represents an important evolution relative to the initial bills of law but will likely be refined after the public hearings to be called to debate it. One theme that may gain prominence after the hearings is the protection of copyright/intellectual property in works created with AI and in works used for machine-learning purposes.
Furthermore, it is important to define, as soon as possible, the relevant authority that will regulate this matter, so that the law can be applied effectively and swiftly once the BL enters into force.
Finally, and more recently, in response to certain marketing campaigns using AI (such as a controversial Volkswagen advertisement featuring the AI-recreated image of a famous deceased Brazilian singer), BL 3592/2023 was introduced, establishing the consumer's right to know that a given piece of material was created by AI, as well as a person's right to state in his or her will the intent not to be "resurrected" by AI after death.
We hope to have more news on the matter by the end of 2023. In any event, even prior to the approval of the "AI Act" or any other BL on the topic, AI systems are already subject to applicable Brazilian legislation, especially the Civil Code, the Consumer Defense Code, and the General Data Protection Law (LGPD).