The European Context and the Anthropocentric Approach
Prior to national intervention, European lawmakers had already paved the way. With the adoption of Regulation (EU) 2024/1689, known as the AI Act, the European Union established a harmonized, risk-based approach to AI regulation. The AI Act acknowledges the need to foster innovation while setting strict rules for high-risk systems, such as those used in healthcare or biometric identification, with the goal of ensuring that AI remains safe, transparent, and trustworthy. This comprehensive European framework prepared the ground for the legislative initiatives of individual member states, keeping the individual and their rights at the centre of every technological advancement.
Italy's Response: A Decisive Step for Digital Rights
In alignment with this vision, Italy has taken a decisive step by approving its framework law on Artificial Intelligence. Rather than confining itself to general principles, the legislation introduces substantial amendments to the Criminal Code designed to combat AI abuses in a specific and incisive way. At the core of these amendments lies the recognition that human dignity and self-determination must be protected in the digital sphere as well.
The law intervenes on multiple fronts, reflecting an awareness that digital threats are not limited to a single act but can affect the individual, society, and democracy itself.
The New Offence of Deepfake: Protecting Personal Identity
The most significant novelty is the introduction of the new crime of "Unlawful dissemination of content generated or manipulated with artificial intelligence systems," commonly known as the deepfake offence. The new Article 612-quater of the Criminal Code punishes, with imprisonment from one to five years, anyone who causes unjust harm to a person by disseminating, without that person's consent, images, videos, or voices falsified or manipulated with AI in a manner likely to mislead the public as to their authenticity.
The provision was strategically placed among the "Crimes against the Person," specifically among the offences against "moral liberty." This placement underscores that the offence is not merely about file forgery, but about the violation of an individual's identity and autonomy. A deepfake is not just a trick; it is a profound manipulation that can severely compromise the victim's reputation, relationships, and dignity, infringing upon their right to be recognized for who they truly are.
Aggravating Factors and Threats to Democracy
In addition to the deepfake offence, the law updates the Criminal Code to address the use of AI in broader contexts.
Common Aggravating Circumstances (Art. 61 c.p.): The article listing common aggravating circumstances has been updated. It now recognizes that committing a crime by means of AI systems is more serious where the technology acted as an "insidious means," hindered public or private defence, or exacerbated the consequences of the offence. AI is no longer seen as a mere tool but potentially as a weapon that amplifies the inherent danger of the offence.
Political Rights (Art. 294 c.p.): The law introduces a special aggravating circumstance for crimes against citizens' political rights (Art. 294 c.p.) when AI is used. The penalty range shifts from the basic 1–5 years to an aggravated 2–6 years. This measure acknowledges the severe risk that AI could be leveraged on a large scale to influence public opinion and undermine the free expression of the vote, a pillar of a healthy democracy.
In conclusion, the Italian AI law represents a fundamental step in adapting our legal framework to the pressing challenges of the digital age. It is not merely a matter of updating a code, but of constructing a "Law of Dignity" for the digital era, fulfilling, at least in part, the intellectual legacy of Stefano Rodotà and ensuring that technological progress does not come at the expense of fundamental human rights and liberties.