-
What are your country’s legal definitions of “artificial intelligence”?
To date, there is no approved legal definition for “artificial intelligence” (AI) under Italian legislation. The main definition stems from the AI Act, which defines an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Currently, an Italian AI bill is in the process of being drafted. The bill – to date – provisionally defines an “AI system” as “an automated system designed to operate with varying levels of autonomy and which may have adaptivity after deployment and which, for explicit or implicit purposes, deduces from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that may affect physical or virtual environments”.
-
Has your country developed a national strategy for artificial intelligence?
Yes, Italy has developed a national strategy for AI. In April 2024, AgID (Agenzia per l’Italia Digitale, the Italian authority competent for various digital matters) and the Department for Digital Transformation published an executive summary of the Italian AI Strategy, which outlines the framework of Italy’s AI strategy for the period 2024–2026, structured around four key pillars:
- Scientific Research: The strategy emphasizes enhancing the national AI research ecosystem by fostering collaboration among universities, research centers, and businesses. It aims to support the development of innovative startups, attract and retain talent, and promote advanced AI research.
- Public Administration: The plan includes using AI to improve the efficiency of public administration and provide better services to citizens. This involves developing AI systems for interoperability, ensuring proper data management, and training public personnel in AI.
- Business and Industry: The strategy aims to integrate AI into Italy’s industrial and entrepreneurial sectors, especially within SMEs, to boost competitiveness and innovation. It supports collaboration between ICT companies and research institutions, enhances regulatory and certification processes, and promotes AI adoption among SMEs through funding and development of AI solutions.
- Education and Training: Addressing the shortage of AI skills, the strategy proposes enhancing AI education across all levels, from schools to PhD programs. It includes initiatives for upskilling and reskilling workers in both the public and private sectors and promoting AI literacy among the general population.
The strategy also underscores the importance of ethical AI, focusing on privacy, security, gender issues, and environmental sustainability. It aims to ensure AI development and deployment adhere to these principles.
The strategy is supported by public investment and involves multiple stakeholders, including the Ministry of Enterprises and Made in Italy, the Ministry of University and Research, and the Ministry of Technological Innovation and Digital Transition.
The document provides a comprehensive understanding of the Italian government’s AI objectives and prepares the groundwork for an impending domestic AI bill, which is designed to complement the AI Act by addressing specific sectors within Italy.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Italy has taken steps to address the regulation of AI through a combination of national guidelines and alignment with EU initiatives.
National Rules and Guidelines
1. National Strategy for Artificial Intelligence (AI) 2024-2026
As mentioned in the answer above, AgID published the executive summary of the National Strategy for AI, which outlines the strategic vision for the development and use of AI in Italy, focusing on four pillars (scientific research, public administration, business and industry, and education and training).
2. Guidelines by the Italian Data Protection Authority
The Italian Data Protection Authority has issued guidelines on the ethical and legal use of AI, emphasizing the importance of data protection and privacy. In particular, the Authority has published a Decalogue for the implementation of national health services through AI systems, and Guidelines for defending personal data from web scraping.
Existing Laws potentially applicable to AI
1. GDPR and Italian Data Protection Law
GDPR, which is applicable across the EU including Italy, and Legislative Decree 196/2003 (the so-called “Codice Privacy”) set a comprehensive framework for data protection that impacts AI systems processing personal data.
2. Consumer Protection and Product Liability Laws
Italian consumer protection laws, which incorporate EU directives, can be applied to AI systems to ensure that they do not deceive or harm consumers. These laws mandate transparency and fairness in services and products offered to consumers.
3. Anti-Discrimination Laws
Italy has extensive anti-discrimination laws, transposing numerous EU anti-discrimination directives. This body of law applies to AI systems used to take decisions affecting natural persons, where such decisions unlawfully discriminate against those persons with an impact on one or more of the so-called “protected grounds” set forth by the law (e.g., race, sexual orientation, age), which may vary according to the sector where the AI system is deployed (e.g., in the context of an employment relationship, in the context of service provision, etc.).
Main Difficulties in applying existing Laws to AI
Traditional legal frameworks are often ill-suited to the autonomous and complex nature of AI systems, which can operate without direct human intervention. Applying existing laws (e.g., the general liability provisions of Italy’s Civil Code and Consumer Code) to determine accountability and liability for actions taken by AI systems is therefore challenging.
Furthermore, existing laws were not designed with AI in mind, leading to ambiguities when they are applied to AI-specific scenarios, such as decision-making transparency and biased or discriminatory outputs from AI systems.
Draft Laws and Legislative Initiatives
1. AI Act
The EU AI Act, which will be applicable in Italy as an EU Member State, aims to establish a harmonized legal framework for AI across the EU. The AI Act proposes a risk-based approach to regulation, categorizing AI applications based on their potential risk to individuals and society. High-risk AI systems will face stringent requirements, including mandatory risk assessments, data governance standards, and transparency obligations.
2. Italian AI Bill
The draft Italian AI bill outlines the national legislative framework on AI, establishing principles and regulations to ensure the ethical and responsible development, deployment, and use of AI systems. The bill emphasizes transparency, accountability, data protection, and human-centric AI.
The primary goals include promoting research, fostering technological innovation, and ensuring the development of AI respects human dignity and rights. The bill aims to protect individuals’ rights and fundamental freedoms, aligning with EU regulations.
General principles mandate that AI development must respect fundamental rights, ensure data accuracy, non-discrimination, gender equality, and sustainability. Systems must be transparent, secure, and proportional to the sectors they are used in, ensuring human autonomy and decision-making are preserved. AI should not undermine democratic processes, and cybersecurity is crucial throughout the AI lifecycle to prevent misuse and ensure resilience.
The government is tasked with adopting legislative measures to address the use of AI for illegal purposes, including introducing specific criminal offenses and ensuring effective, proportionate, and dissuasive sanctions.
The bill also highlights the need for national authorities to oversee AI implementation, coordinate public-private partnerships, and ensure public awareness and education on AI. These efforts aim to support economic development while safeguarding national security and public welfare.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In Italy, the legal framework governing defective AI systems is influenced by several areas of law, including consumer protection, product liability, and specific regulations.
In Italy, product liability is governed by Legislative Decree No. 206 of 2005, the “Consumer Code”, which implements both the EU Product Liability Directive (Directive 85/374/EEC) and the EU General Product Safety Directive (Directive 2001/95/EC). It holds producers and suppliers liable for any damage caused by defective products, which may include AI systems. Consumers can seek compensation from the producer if the AI system does not meet the safety standards they are entitled to expect.
AI systems are not explicitly covered by current product liability regulations, but they can cause damage when integrated into a product, and existing laws may not fully address AI-related complexities. For example, the liability exemption for development risks might not apply to AI, and identifying the liable party can be difficult since the rules focus mainly on the producer. To address these issues, the EU Commission has proposed revising Directive 85/374/EEC to include AI systems and AI-enabled goods as “products”. The current version of the Directive has several shortcomings concerning the burden of proof (i.e., the need, in order to obtain compensation, to prove that the product was defective and that this caused the damage suffered), which is challenging for injured persons to satisfy in complex cases (e.g., when the damage is caused by AI-embedded products). The revision is therefore intended to encourage the roll-out and uptake of such new technologies, including AI, while ensuring that claimants enjoy the same level of protection irrespective of the technology involved.
Although not yet fully implemented, the upcoming EU AI Act will create a comprehensive regulatory framework for AI across the EU, including Italy. This regulation will classify AI systems into different risk categories and impose varying levels of obligations and requirements on providers and deployers to ensure safety and compliance.
GDPR remedies can also be invoked by data subjects in case of data protection violations related to AI, which may occur where the AI system processes personal data.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Regarding material harm caused by AI, liability concepts such as the special liability for defective products, liability for hazardous activities (especially in high-risk AI cases), fault-based liability and liability for things in one’s custody can apply.
In case of personal data breach, data protection laws can be invoked, where the data subjects can ask for compensation for damages.
As for criminal liability, the draft Italian AI bill introduces a new offence within the Italian Criminal Code (Art. 612-quater, Illicit dissemination of artificially generated or manipulated content), which provides as follows (authors’ translation):
“Whoever causes unjust damage to others, by sending, delivering, transferring, publishing or otherwise disseminating images or videos of persons or things or voices or sounds that are wholly or partially false, generated or manipulated through the use of artificial intelligence systems, designed to mislead as to their genuineness or origin, shall be punished by imprisonment from one to five years […]“.
In this regard, harm to others (e.g., economic, reputational, moral) must arguably occur through the sending, delivery, assignment, publication or dissemination of AI-generated audio or audiovisual material. Also, in order to protect other constitutionally guaranteed rights (e.g., freedom of expression, right to satire), a constitutive element of the offense is the suitability of the material to mislead as to its genuineness or origin.
The draft Italian AI bill also provides for an amendment to Articles 640, 640-ter, 648-bis, 648-ter and 648-ter.1 of the Criminal Code (fraud; computer fraud; money laundering; use of money, goods or utilities of unlawful origin; and self-laundering), with the inclusion of the aggravating circumstance of using AI systems. Similar provisions have been included for the crime of corporate or banking market rigging (Article 2637 of the Civil Code).
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
In terms of product liability, see Answer no. 4.
In case of non-contractual liability, the party responsible for damages is determined based on the fault that caused the harm, which may lie with the producer, the supplier or any other party in the “liability chain”. Providers and deployers of AI can be responsible for harm caused by the AI system. Indeed, within the “liability chain”, each party suffering a prejudice can claim compensation from the accountable party at the level above it (e.g., the user of the AI system from the supplier, the supplier from the producer).
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In civil liability scenarios, several key conditions must typically be met for someone to be held responsible for damages: proving fault, demonstrating that damage occurred, and establishing a direct causal relationship between the fault and the damage. Generally, the burden of proof falls on the victim who suffered the damage. In cases involving defective products, however, this allocation changes.
According to Art. 114 of the Italian Consumer Code the producer is liable for the damage caused by defects in their product. Liability is therefore not absolutely strict, but only presumed. In other words, liability is not based on fault but on the causal traceability of the damage to the presence of a defect in the product.
To hold a manufacturer accountable for damages caused by their defective products, the injured party must prove the defect, the damage, and the causal connection between them, as stated in Article 120(1) of the Consumer Code. The manufacturer, on the other hand, must demonstrate facts to exclude their liability, such as that the defect did not exist when the product was marketed, or that it arose from the need to comply with mandatory regulations. They can also refute the claims by showing that, based on technical knowledge available at the time of commercialization, the defect could not have been detected. If the dispute concerns a component, the manufacturer can prove that the defect is entirely due to the design of the product in which the part was incorporated or that it complied with instructions from the manufacturer who used it (Article 118).
National jurisprudence holds that liability for defective products is presumed and not strict, meaning it does not depend on proving the manufacturer’s fault but does require proof of the product’s defect and its causal link to the damage. The Consumer Code recognizes mixed and limited strict liability, easing the burden of proof for the injured buyer, while the manufacturer must show diligence in the design phase. The Supreme Court has clarified that even simple presumptions can be used to prove defectiveness if they are serious, precise, and consistent. Mere occurrence of damage does not suffice to prove defectiveness or the product’s danger in normal use conditions.
There are instances where the manufacturer’s liability can be excluded if the injured party fails to provide sufficient proof. For example, damage must occur under normal usage conditions and following the provided instructions.
The right to compensation for damage caused by a defective product cannot be excluded or limited by any prior agreement. If the injured party is partially at fault, compensation is assessed according to Article 1227 of the Civil Code, and it is not due if the product was used with awareness of its defect and the associated risk. The right to compensation is subject to a three-year prescription period from the date the injured party knew or should have known about the damage, the defect, and the identity of the responsible party. If the damage worsens, the prescription period does not begin until the injured party becomes aware of the sufficiently serious damage. The right to compensation expires ten years after the product causing the damage was put into circulation, with the expiration prevented only by legal action against one of the responsible parties, without affecting others (Articles 124-126 of the Consumer Code).
Article 123 of the Consumer Code identifies two main types of damages caused by defective products: death or personal injury and the destruction or deterioration of property other than the defective product, provided it is typically intended for private use or consumption and was primarily used by the injured party.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
Currently, there is no insurance coverage in Italy specifically tailored to the use of AI systems. Traditional insurance policies designed for civil liability, cybersecurity, product liability, and directors’ liability can be customized to address the risks associated with AI systems by providing additional coverage.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
Where an invention is not the result of a human merely using an AI tool, nor of a complex and organized research and development activity, the following must be considered. AI systems lack “legal subjectivity”, which is necessary to attribute authorship of an invention to a “subject”: no matter how technologically innovative, an AI tool must be considered a “machine” without legal personality. Accordingly, and given the difficulty of attributing paternity of an invention to an AI system, the Italian and EU legal systems (e.g., the Italian Industrial Property Code) expressly provide that the inventor must be designated in the patent application by first and last name, which further confirms that the inventor must be a person.
All decisions made by patent offices around the world that have addressed the issue have ruled out the possibility of patenting inventions that designate an AI system as the inventor. An invention made by AI always constitutes the result of the investment of a company or, in any event, of the person who created and/or instructed the AI system to behave and operate in a certain way.
To date, there is as yet no evidence for the existence of an AI capable of creating independently of its initial programming.
Recent debates and legal cases, such as the “DABUS” case, have highlighted this issue internationally. These cases have sparked discussions on whether AI should be recognized as an inventor given its increasing role in innovation. However, until legislative changes are made to explicitly allow for AI inventorship, to date only natural persons can be named as inventors in patent applications in Italy.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Following the example of the AI Act, on April 23rd, 2024, the Italian Council of Ministers approved the so-called “Butti” Bill, which is still undergoing adoption at parliamentary level and is aimed at integrating and completing the new EU provisions within the Italian legal framework. Regarding copyright law, Article 24 of the bill includes a set of rules that would introduce significant changes to the Copyright Law (Law No. 633/1941). A major modification (to Article 1 of the Copyright Law) allows works generated with the assistance of AI to be protected by copyright, provided that the human contribution is creative, relevant, and demonstrable. While this provision aims to ground a human-centric view of AI, it raises concerns, as it introduces two new criteria (relevance and demonstrability of the human contribution) that bring interpretive doubts and shift the burden of proving the creativity and relevance of one’s contribution onto the author. Furthermore, based on the wording of the text, it is not clear whether the relevance of the (creative) human contribution should be assessed quantitatively or qualitatively.
The second point of interest in the Bill concerns the introduction of a new Article 70-septies into the Copyright Law, which mirrors the provisions of the AI Act on the extraction of text and data from third-party content for AI training. If this article were adopted, authors wishing to opt-out and reserve rights over their content by prohibiting its use for AI system training would need to do so in a way that makes their choice machine-readable, i.e., understandable to AI systems, allowing them to automatically block the extraction of data, content, and information.
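By way of illustration only (the bill does not prescribe a specific format), a machine-readable reservation of rights is commonly expressed today through robots.txt directives addressed to known AI-training crawlers. The following sketch is a hypothetical publisher's robots.txt; GPTBot and Google-Extended are real crawler tokens, but the policy shown is an assumed example, not a legally mandated form:

```
# Hypothetical robots.txt reserving content from AI training
# while leaving ordinary search indexing unaffected.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Whether such a directive satisfies the "machine-readable" standard contemplated by the draft Article 70-septies remains to be clarified by the final text and its implementation.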
Besides the new draft bill, case law provides some insights into regulatory trends in Italy. More precisely, in Ruling No. 1107 of 09.01.2023, the Italian Supreme Court ruled that the reproduction of an image constitutes infringement of the creator’s copyright even where the creative process was perfected by making use of software. According to the Supreme Court, the use of digital technology for the realization of a work does not in itself preclude recognizing the work as the fruit of the intellect, except in cases where the use of technology has predominantly absorbed the creative elaboration of the artist. The protection of the work is therefore guaranteed where the creative elaboration of a human is significant, while where the human creative contribution is marginal, traditional protections cannot be invoked. In the former case the rights to the economic exploitation of the work and the moral right to be recognized as author are guaranteed; in the latter case, protection is more controversial.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
Besides the issues already mentioned in connection with the use of AI, several labor law issues must be considered when implementing AI systems in the workplace to ensure compliance with Italian regulations.
Workplace Surveillance and Privacy
Italian labor law imposes strict limits on employers’ possibility to monitor employees. Article 4 of the Workers’ Statute (Statuto dei Lavoratori) regulates the use of surveillance equipment, which may include AI. Installing systems for monitoring often requires an agreement with trade unions or authorization from the National Labour Inspectorate (Ispettorato Nazionale del Lavoro).
Discrimination and Bias
AI systems must not be used in a way that discriminates against employees based on protected grounds such as race, gender, age, or religion. Ensuring that AI algorithms do not perpetuate unlawful biases is crucial. AI systems used for performance evaluations must be fair, transparent, and non-discriminatory. Employers must ensure that such systems do not lead to discriminatory treatment of the worker.
From another perspective, employers may also need to invest in reskilling and upskilling programs to train employees in using AI tools.
Health and Safety
The use of AI should not create new health and safety risks for employees. Employers are responsible for ensuring a safe working environment, which includes assessing and mitigating risks associated with new technologies. The introduction of AI can lead to increased stress, anxiety or other kinds of negative emotions amongst employees, so employers should consider the psychological impact and provide appropriate support.
Transparency and Accountability
Employers must ensure that AI systems are transparent and that employees understand how these systems make decisions affecting them. Recently, the EU has been finalizing the adoption of the Platform Work Directive, which – once transposed in Italy – will provide new rules ensuring that so-called gig economy platforms guarantee human oversight of important decisions that directly affect platform workers. Platforms will also be forbidden from processing certain types of personal data belonging to workers, such as data on someone’s emotional or psychological state and personal beliefs.
-
What privacy issues arise from the use of artificial intelligence?
Italy, as a member of the European Union, is subject to the GDPR. The use of AI must therefore comply with GDPR requirements and with Italian privacy law (Legislative Decree 196/2003).
Starting from the above, it should be noted that not all AI systems leverage personal data. For those that do, however, the privacy issues can be categorized into several areas:
- Security: AI systems often require large datasets, which may include personal information. Ensuring the security of this data is paramount to prevent breaches that could lead to unauthorized access, misuse, or loss of personal data.
- Legal basis for processing data: AI applications must have a legal basis for data processing (from training to use). Where consent is relied upon, ensuring that it is genuinely informed and voluntary is a significant challenge.
- Data Minimization and Purpose Limitation: According to GDPR principles, data collected should be adequate, relevant, and limited to what is necessary for the purposes for which it is processed. AI systems should not collect excessive data and must clearly define the purpose of data collection to avoid misuse.
- Transparency and Accountability: AI systems have to be transparent in their operations. This includes providing explanations about how AI decisions are made, which can be challenging given the complexity of many AI algorithms. Ensuring accountability involves keeping detailed records of data processing activities and decisions made by AI systems.
- Right to Rectification and Erasure: Under GDPR, individuals have the right to have inaccurate personal data rectified and to request the erasure of their data. Implementing these rights within AI systems can be complex, especially if the data has been integrated into decision-making processes.
- Automated Decision-Making and Profiling: GDPR restricts decisions based solely on automated processing, including profiling, that produce legal effects or similarly significantly affect individuals.
- Surveillance and Tracking: AI technologies, such as facial recognition and predictive analytics, can be used for extensive surveillance and tracking of individuals, leading to concerns about invasion of privacy.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
Data scraping is not specifically regulated in Italy, but it is addressed by several different bodies of law, especially the legislation on personal data protection, unfair competition, intellectual property, copyright and the sui generis right protecting databases. For instance, EU law grants sui generis protection to database makers based on a substantial investment in obtaining, verifying, or presenting the database contents: Directive 96/9/EC grants exclusive rights to database makers, allowing them to charge for database use and select licensees. In this scenario, scraping may be lawful under exceptions such as the temporary-copies and text and data mining provisions. Legal disputes often arise as to whether a scraped website constitutes a protected database, with courts assessing the substantiality of the investment and of the extraction.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
A significant challenge in assessing the legality of data scraping lies in interpreting a website’s terms of service and whether they constitute enforceable contracts. Most scraping activities fall under ‘browsewrap’ agreements, raising questions about their enforceability. Courts have grappled with the issue of whether scrapers can be held liable for violating terms of service to which they never explicitly agreed, highlighting the complexities of regulating online behavior.
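From a practical standpoint, scrapers commonly check a site's robots.txt before extracting content; whether that file, or the terms of service it accompanies, contractually binds a scraper is precisely the enforceability question discussed above. The following minimal sketch uses Python's standard library; the robots.txt content and bot names are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a scraper might retrieve before crawling:
# ordinary bots are kept out of /private/, and a named AI bot out of everything.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: ExampleAIBot
Disallow: /
"""

def may_scrape(user_agent: str, url: str) -> bool:
    """Return True if the robots.txt above permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

print(may_scrape("GenericBot", "https://example.com/articles/1"))    # True
print(may_scrape("GenericBot", "https://example.com/private/page"))  # False
print(may_scrape("ExampleAIBot", "https://example.com/articles/1"))  # False
```

Note that complying with robots.txt is a technical courtesy under the Robots Exclusion Protocol; it does not by itself resolve the contractual, IP or data protection questions outlined in this answer.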
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes. The Italian Data Protection Authority has issued guidelines on the ethical and legal use of AI, emphasizing the importance of data protection and privacy. In particular, the Authority has published a Decalogue for the implementation of national health services through AI systems, and Guidelines for defending personal data from web scraping.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
The Italian Data Protection Authority (Garante per la protezione dei dati personali) has taken several measures against OpenAI to ensure compliance with privacy regulations concerning its famous generative AI system ‘ChatGPT’. Here is a summary of the main actions taken:
- Temporary Suspension of ChatGPT: In March 2023, the Garante temporarily suspended the use of ChatGPT in Italy, citing concerns about the processing of personal data and the lack of adequate information provided to users and non-users whose data was being collected.
- Compliance Request: The Garante required OpenAI to provide more information on how personal data is collected and processed, and to adopt measures to ensure compliance with the General Data Protection Regulation (GDPR). This included the need to verify the age of users to prevent access by those under 13 years old.
- Changes to the Privacy Policy: Following the Garante’s requests, OpenAI implemented changes to its privacy policy to make it more transparent and understandable.
- Service Restoration: at the end of April 2023, after adopting the changes requested, the ChatGPT service was restored in Italy. Amongst others, OpenAI introduced measures to facilitate the exercise of data subjects’ rights, implemented tools for age verification and improved the privacy policy.
-
Have your national courts already managed cases involving artificial intelligence?
To date, Italian courts have had a few occasions to deal with cases involving AI systems.
In the public sector, the Council of State examined a case concerning an administrative decision automated through an algorithm. In the specific case, the Council of State’s ruling No. 2270 of 2019 concerned a teacher hiring procedure handled entirely by an algorithm, which led to irrational outcomes, such as assignments that did not conform to preferences and appointments in distant provinces. The plaintiffs challenged the automated procedure for lacking transparency and reasoning, and for the absence of individual evaluation by a public official.
The Council of State upheld the appeal, finding a violation of the principles of impartiality and transparency. The court held that although the use of algorithms in administrative decisions can improve efficiency, this must occur within well-defined limits, ruling out automation where administrative discretion is involved.
The Council of State also ruled that the algorithm must be “knowable” and understandable, and the responsibility for the decision must remain with the administration, which must ensure the transparency of the decision-making process. Finally, the Court emphasized the importance of legal oversight of algorithmic decisions, requiring a multidisciplinary approach and ensuring the “human in the loop” principle at every stage of the decision-making cycle.
In the private sector, the Court of Bologna upheld an appeal filed by workers’ unions against Deliveroo. The dispute concerned the algorithm used by the platform to organize the work of its riders. This algorithm, known as “Frank,” distributed work among riders in an automated manner, based on a system of booking work sessions and a score assigned to each rider. However, the algorithm penalized riders who did not comply with the booked work sessions, making it difficult for them to exercise their right to strike. The court ruled that the application of this algorithm was unlawful and ordered Deliveroo to pay damages to the plaintiff unions.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
According to the draft Bill complementing the EU AI Act, Italy will have two agencies responsible for supervising the use and development of AI: AgID (Agenzia per l’Italia Digitale) and ACN (Agenzia per la Cybersicurezza Nazionale).
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
The adoption of AI by businesses in Italy is still limited. According to an analysis by Eurostat, only 6% of companies in Italy currently use AI, a percentage below the EU average. One of the main challenges hindering adoption is the lack of alignment between AI integration plans and strategic goals: only one in four Italian companies has an AI plan that aligns with its overall strategy, and 65% of companies say they do not have the technology infrastructure needed to meet the challenges of AI. In addition, a majority report a lack of expertise in this area as a major barrier to adoption.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
Italian law firms and legal departments are increasingly recognizing the benefits of integrating AI into their workflows. Some notable examples include:
Legal Research:
AI platforms can assist lawyers by providing comprehensive and accurate legal research. These tools can search through vast legal databases, case law, statutes, and regulations to find pertinent information, saving time and improving the quality of research. For instance, there are AI-powered tools that summarize court rulings.
Automation of Routine Tasks:
Routine legal tasks such as drafting standard documents and organizing case files can be automated using AI, freeing up lawyers to focus on more complex and value-added activities.
AI-powered tools can quickly review large volumes of documents, identify relevant information, and highlight potential risks or inconsistencies. This is particularly useful in due diligence processes in the context of mergers and acquisitions.
Contract Analysis and Management:
AI solutions can analyze and manage contracts by extracting key terms, identifying obligations, and flagging non-standard clauses. This helps streamline contract lifecycle management and ensure compliance with legal standards.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Challenges:
- Job displacement and role changes: AI’s capacity to automate routine tasks may displace jobs or reshape existing roles, requiring lawyers to prepare for and manage this transition through adaptation and upskilling.
- Overreliance on potentially wrong outputs: depending on AI outputs without critical assessment can lead to erroneous legal advice and flawed strategies. Balancing AI insights with human expertise is crucial to maintaining reliable legal practice and mitigating AI’s limitations.
- Data protection and cybersecurity: lawyers will increasingly handle matters in which personal data is processed by AI systems, while having to meet strict data protection and security standards and account for the heightened cybersecurity risks inherent to AI.
- Legal liability: lawyers will address responsibility and accountability issues, including allocating liability among human operators, AI developers, and AI systems.
- Discrimination: bias and fairness pose challenges, as AI algorithms may carry inherent biases that lead to unfair or discriminatory outcomes. This requires mechanisms to ensure transparency and explainability in AI decisions and to correctly allocate accountability among the stakeholders involved.
Opportunities:
- Enhanced Efficiency and Productivity: AI can automate routine and time-consuming tasks such as document review, legal research, and contract analysis, allowing lawyers to focus on more complex and strategic work and enhancing overall productivity while reducing operational costs.
- Predictive Analytics: AI enables lawyers to analyze legal data, extract valuable insights, and make informed predictions. For instance, AI can assist in determining the likelihood of success for actions, predicting rulings, and automating legal research and drafting. This can increase efficiency, reduce costs, and broaden access to legal services.
- Document Automation and Contract Analysis: AI-powered tools can help automate the drafting of legal documents, saving time and reducing certain human errors.
- Legal Research and Analysis: AI can assist lawyers in conducting legal research, analyzing precedents, and identifying relevant case laws or regulations. Legal research platforms can use AI to enhance the accuracy of legal research outcomes.
- New Practice Areas: AI creates new practice opportunities for lawyers in areas like legal tech consulting, AI policy and ethics, compliance automation, and intellectual property related to AI innovations. These areas allow lawyers to diversify their expertise and lead in shaping the legal landscape amidst technological advancements.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
In the next 12 months, significant legal developments in AI in Italy are likely to focus on several key areas:
- Implementation of the EU AI Act
Italian stakeholders will need to comply with the AI Act as its provisions become directly applicable over the next few months (prohibited AI practices) and years (rules on high-risk AI systems).
- Intellectual Property (IP) and AI
Questions around the ownership of AI-generated works and inventions are already becoming more pressing. Italian law may see updates or new interpretations in this area, especially concerning the protection of IP created by, or with the assistance of, AI.
- AI in Employment and Labor
The impact of AI on the workforce will continue to be a significant area of legal concern. This includes issues related to AI-driven automation and worker rights. Legal frameworks might be updated to address the challenges and opportunities presented by AI in the labor market.
- Sector-Specific Regulations
Different sectors such as finance and public administration may see specific AI regulations tailored to their unique needs and challenges. This could involve guidelines for the use of AI in financial services to prevent fraud, or, in transport, regulations to ensure safety and compliance with traffic laws.
Italy: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Italy.