The approved text introduces substantial modifications to the initial draft of the Bill and remains subject to further amendments as deliberations progress in the Chamber of Deputies.
On December 10, the Federal Senate approved Bill No. 2338/2023 (“Bill”), known as the Artificial Intelligence Legal Framework, which aims to establish guidelines for the development, use, and governance of Artificial Intelligence (“AI”) in Brazil.
Although it was approved through a symbolic vote, the Bill’s legislative process was marked by intense debates, with the involvement of economic sectors and civil society groups on various fronts.
Unsurprisingly, the text underwent multiple amendments during its passage, especially in the past month, when the Temporary Artificial Intelligence Commission ("CTIA") presented the final version of the Bill, which the Senate approved in full days later.
Among the most significant changes in this latest version, the following stand out:
- Scope of application: the new text narrowed the scope of the Bill, excluding from its application AI uses related to research, testing, and development of AI systems.
- Relaxation of rules: some obligations have been relaxed or removed, including (i) the "preliminary assessment" before the use or market availability of AI systems, which was mandatory and is now treated as a "good practice", except for developers of general-purpose and generative AI systems; and (ii) third-party participation in algorithmic impact assessment processes, which has been excluded.
- Development vs. application: the new text more clearly distinguishes the responsibilities of AI developers and users, especially regarding governance measures and internal processes.
- High risk: the new text revises some conditions for classifying AI systems as "high risk". Among the changes, AI systems that perform content curation, recommendation, and distribution over the internet – such as those used to identify content preferences on social media – have been excluded from this category.
- Systemic risk: the text introduces governance measures for a new risk category: "systemic risk". Unlike the "high risk" and "excessive risk" categories, the Bill does not provide an exhaustive list of situations that constitute "systemic risk". Instead, it defines the term broadly as "potential adverse effects arising from general-purpose and generative AI systems, with significant impact on fundamental rights, both individual and social."
- Copyright: the articles related to copyright have been significantly adjusted, providing more detail on (i) the transparency expected regarding the use of copyrighted content in training AI systems; (ii) limitations on copyright; (iii) the right of copyright holders to opt out; and (iv) criteria for compensating affected copyright holders. Notably, the Bill does not mention the figure of the "author", which could lead to discussions on the ability of the copyright holder to exercise the opt-out right.
- Regulation: the article requiring that regulations and norms issued by the competent authority be preceded by public consultation was removed.
The text now moves to the Chamber of Deputies, where the Bill's still-controversial topics are expected to be the subject of further discussion and proposed amendments. Among these topics, special attention should be given to copyright remuneration and to the relaxation of the risk classification criteria – which, in the version approved by the Senate, excludes from the "high risk" classification AI systems used by digital platforms for content production, analysis, recommendation, and distribution.
After approval by the Chamber of Deputies, the text will return to the Senate for a new evaluation and, subsequently, will proceed to presidential sanction. The expectation is that the Bill will be approved in 2025.
The new Bill text can be accessed in full here.
For more information on the Bill, check out the material produced by our Technology and Innovation team available below.