-
What are your country’s legal definitions of “artificial intelligence”?
Portugal has not yet adopted a specific legal definition of “artificial intelligence” (“AI”).
In practice, the applicable legal definition of AI aligns with the European Union’s framework, particularly the Artificial Intelligence Act (“AIA”). According to the AIA, an “AI system” is a machine-based system designed to operate with varying levels of autonomy, which can infer from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
-
Has your country developed a national strategy for artificial intelligence?
The “AI Portugal 2030”1 strategy, launched in 2019, aims to enhance education, research, and AI-supported product development, aligning with the EU’s AI Action Plan. It focuses on leveraging AI to tackle global challenges like health, climate, agriculture, and cybersecurity, fostering economic growth, and enhancing Portugal’s AI research position.
The strategy seeks to attract international investment, promote innovation among startups and SMEs, and enhance workforce tech skills. Initiatives include funding PhDs, organizing AI events, and participating in international forums like Horizon Europe and the Digital Europe Program. These efforts are supported by policy actions and funding programs aimed at fostering a vibrant AI ecosystem in Portugal, namely through the establishment of AI research centres, promoting innovative solutions for public administration.
The “AI for Good” initiative by the International Telecommunication Union (ITU) aims to create a global repository of AI solutions for sustainable development. ITU received contributions of innovative AI cases impacting sustainable development goals in areas such as health, finance, and automation.2
Regarding the public administration sector, the Agency for Administrative Modernization (AMA) is leading the Responsible AI Project3 to promote ethical, transparent, and accountable AI practices. The project outlines AI’s definition, societal effects, and its context in Portugal and globally, stressing the importance of data ecosystems in public administration.
Footnote(s):
1 Estratégia Nacional de Inteligência Artificial – INCoDe 2030
2 ANACOM – Iniciativa “AI for good” recolhe soluções de IA para o desenvolvimento sustentável
3 Guia para a Inteligência Artificial, Ética, Transparente e Responsável (bussola.gov.pt)
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Portugal has approved a Charter of Human Rights for the Digital Age: Law 27/2021. Although the Charter is mostly composed of declaratory provisions, Article 9 of said Law states that the use of artificial intelligence and robots should be guided by respect for fundamental rights. This involves ensuring a fair balance between the principles of explainability, security, transparency, and accountability. Algorithmic decision-making that has a significant impact on individuals must be communicated to the concerned parties, be subject to appeal, and be auditable. The creation and use of robots must adhere to the principles of beneficence, non-maleficence, respect for human autonomy, as well as the principles and values enshrined in the Treaty on European Union, including non-discrimination and tolerance.
Nevertheless, Portugal has not yet adopted AI-specific legislation, and no specific national regulations or guidelines have been issued by the relevant national regulatory authorities (NRAs).
In line with the national priorities outlined in the AI Portugal 2030 strategy, Portugal might implement additional measures to oversee high-risk AI applications. These measures could foster a more supportive environment for certain AI developments while maintaining a cautious approach in sensitive areas, such as health-related data management and labor relationships.
For instance, Article 106 (3) of the Labour Code specifies that employers are required to give workers information, including the parameters, criteria, rules, and instructions underlying algorithms or other artificial intelligence systems that influence decisions on employment access and retention, as well as working conditions, including profiling and monitoring of professional activities.
The formal appointment of specific regulatory authorities for AI oversight is still pending. However, existing NRAs are anticipated to act as market surveillance authorities within their respective domains, as foreseen by the AI Act.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
In Portugal, while there are no specific AI liability laws, existing national laws such as the Civil Code and Decree-Laws 7/2004, 58/2019, and 383/89 apply. These laws address liability for defective products, holding producers accountable for damages caused by product defects, irrespective of fault.
Decree-Law 69/2005 requires producers to ensure product safety and to take the necessary corrective actions, including market withdrawal if required.
Following legal developments at EU level (e.g. the AI Liability Directive and the revised Product Liability Directive), these rules will need to be updated.
Moreover, to promote the accountability of AI outputs, it is crucial to establish robust technical, organisational, and contractual mechanisms. This involves implementing measures that enable transparent examination of AI-generated outcomes. Additionally, clear contractual agreements between stakeholders are essential to allocate responsibility for AI system errors and biases.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Under Portuguese law, there are multiple potential pathways for seeking compensation for harm caused by AI systems, both under civil and criminal liability rules.
Civil Liability
The tort or extra-contractual liability regime, as set forth in Article 483 et seq. of the Portuguese Civil Code, requires the establishment of fault, defined as a judgment of censure regarding the agent’s conduct. Given the autonomous and adaptive nature of AI systems, coupled with their lack of legal personality, attributing fault directly to them presents a significant challenge.
An alternative perspective involves the liability regime under Article 493 of the Civil Code. Given the self-governing and adaptive characteristics of AI systems, a parallel can be drawn with the regime applicable to damages caused by incapacitated individuals or by animals. In this context, Article 493 could be invoked to extend civil liability to those responsible for overseeing AI systems.
Article 493(1) of the Civil Code establishes a presumption that anyone in possession of tangible or intangible assets, with the responsibility of overseeing them, is liable for any harm those assets cause, unless they can demonstrate that they were not at fault or that the harm would have occurred even in the absence of fault.
Conversely, Article 493(2) of the Civil Code addresses hazardous activities, which are identified based on their greater likelihood of causing harm compared to other activities. Some experts have suggested that this legal principle may not apply to AI systems, as they do not necessarily present an abnormal risk of causing damage. For instance, smart TVs and smartphones are typically not regarded as hazardous.
However, AI systems classified as high-risk under the EU AI Act may be covered by this provision due to their significant potential to cause harm, given that the AI Act reserves the high-risk classification for systems posing significant risks to health, safety, or fundamental rights.
The presumption in Article 493(1) CC can be rebutted if it is proven that there was no fault or that the damage would have occurred even if there had been no fault. The presumption in Article 493(2) can also be rebutted if it is demonstrated that all the necessary measures have been taken to prevent the damage from occurring. This is a relatively straightforward task in the context of artificial intelligence.
A third option for applying civil liability rules to damages caused by AI systems is strict liability, which does not require fault.
Strict liability is an exceptional legal regime. Accordingly, one is only liable due to strict liability if that situation is provided for by a legal norm, as set forth in Article 483(2) of the Civil Code.
The Portuguese legal system includes several instances of strict liability, such as the liability of the principal, as outlined in Article 500 of the Civil Code.
An AI system can be made available by one person to another, who then operates it in the interest of and on behalf of the former. In accordance with Article 500 of the Civil Code, the person who entrusts another with a commission is liable, regardless of fault, for any damage caused by the agent, and the agent is also liable for compensation. For this provision to apply, three conditions must be met: (i) the existence of a commission relationship; (ii) the commission of the unlawful act in the exercise of the entrusted duties; and (iii) the agent being himself liable for the damage.
An alternative approach would be to apply the liability regime for damages caused by vehicles as set out in Article 503 of the Civil Code.
In accordance with Article 503 of the Civil Code, the owner or user of an automated vehicle is liable for any damage caused by the vehicle. It should be noted, however, that not all damages are eligible for compensation: only those resulting from the risks inherent to the vehicle.
From a forward-looking perspective, while current legal frameworks primarily rely on fault-based liability, strict liability regimes appear to be a more effective approach for addressing damages caused by AI systems under the supervision or oversight of their respective users. The legal system may need to adapt further to address the unique challenges posed by AI technologies.
Criminal Liability
While AI systems themselves lack legal personality and cannot be held criminally liable, the individuals or entities responsible for deploying, managing, and using these systems may face criminal liability. If an AI system causes harm due to negligence, recklessness, or intentional misuse by its operator or developer, those individuals or entities could be subject to criminal prosecution under existing criminal laws. Where the AI system’s actions result in serious harm or death, this could lead to charges such as negligence resulting in injury or homicide.
For instance, if an AI system fails to undergo the requisite updates or maintenance, resulting in an incident that causes substantial damage or injury, the responsible parties may be held liable for negligence or even for intentional conduct, i.e., where the developer “takes the risk of (possible) damage to the legal good seriously, takes it into account, and nevertheless decides to carry out the act/omission” (Article 14(3) of the Portuguese Penal Code).
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
In accordance with civil and criminal liability legal provisions, as well as GDPR rules, users of AI systems would typically be held liable for damages caused intentionally or negligently when using AI systems. This is contingent upon the injured party being able to demonstrate that the injury would not have occurred in the absence of the negligent or intentional misuse of the AI system. Some authors have noted that, even under the well-established civil liability framework for tort or extra-contractual damages, it is often challenging for injured parties to demonstrate the causation link. Moreover, the scarcity of case law in the context of AI systems, particularly given the asymmetry of information surrounding the underlying machine learning mechanisms, further complicates matters.
Another complicating factor is the allocation of liability between the developer, the deployer, and the user, since it is necessary to establish a causal link between the injury and the fault of each of the various agents involved in the AI system’s use.
Regarding developers, although Decree-Law 383/89 is currently regarded as applicable to defective AI systems, there is still some uncertainty as to whether, under Portuguese law, this act fully applies to AI systems built upon algorithms fed with incorrect or inaccurate data. While it is relatively straightforward to apportion liability to manufacturers and sellers of automated vehicles or other AI-based products whenever they present defects, or whenever the seller fails to comply with its information obligations, the same cannot be said of deployers and end-users. In the case of deployers of AI systems in particular, there is the underlying question of whether they are handling a product or a service.
In the event of a malfunction in an AI system resulting in damages, the various agents involved in its manufacture, deployment, and operation will be held jointly liable for compensation to the injured party in accordance with the proportion of their respective negligence or intentional acts or omissions in causing the damage.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In accordance with Article 563 of the Portuguese Civil Code, an individual seeking compensation for damages caused by an artificial intelligence system must demonstrate that the harm suffered was directly caused by the actions of that system. This necessitates establishing a clear and direct link between the injury and the specific actions of the AI system, supported by evidence and documentation substantiating the claim that the AI system’s actions were the cause of the damage or injury.
In essence, Article 563 of the Portuguese Civil Code sets a high standard for establishing liability in cases involving artificial intelligence. This section is particularly relevant in cases where damages are claimed against users of AI systems, rather than producers.
If damages are claimed against the producer of artificial intelligence systems, claimants may rely on the liability regime set out in Decree-Law 383/89. This legislation holds producers liable for damage caused by defective AI systems, regardless of fault. In such instances, the burden of proof shifts to the producer, who must demonstrate that the AI system is not defective and therefore that there is no causal link between the alleged injury and the AI system’s actions.
As previously stated, the draft AI Liability Directive aims to establish a presumption of causality, which can be rebutted by the defendant. From a forward-looking standpoint, transposing this directive into national legislation and enforcing it before national courts may present challenges for both legislators and courts. The differing roles and interests of the various parties involved may also impact the assessment of causation and damages.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
While there seem to be no material obstacles to the insurability of AI products, a report published in 2023 by a major Portuguese insurance company revealed a lack of available insurance products for the use of artificial intelligence. This is partly because the market has been awaiting the enactment of the EU AI Act, which will clarify the distribution of liability between the various agents.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
In Portugal, the legal framework for patents requires that the inventor named in a patent application be a natural person. This aligns with European and international patent systems, which currently do not recognize AI as an inventor.
The European Patent Convention (EPC), to which Portugal is a party, requires that the inventor be a human being. Article 61(1)(c) of the Portuguese Industrial Property Code requires the identification of the inventor; Article 81 of the EPC and Rule 19(1) of its Implementing Regulations state that the designation of the inventor must include the “surname, given names and full address of the inventor”; and Article 60(1) of the EPC states that the right to a European patent belongs to the inventor or their successor in title, implying a natural person.
Indeed, it is the human being who, in the free development of his or her personality (Article 26 of the Portuguese Constitution), can conceive an invention, given that the patent system centres its concern on protecting the inventor.
Thus, under the current regime, only natural (human) persons may be inventors, because only they are the bearers of creative thought and intellectual activity.
This stance was reinforced by recent case law involving the AI system DABUS4 (Device for the Autonomous Bootstrapping of Unified Sentience). Applications were filed in various jurisdictions, including the European Patent Office (EPO), naming DABUS as the inventor. These applications were rejected on the grounds that the listed inventor must be a natural person. In the case decided by the EPO, the decision made clear that “an inventor designated in the application has to be a natural person and not a machine”. More recently, the German Federal Court of Justice held that the designation of a natural person as inventor is both possible and necessary even if a system with artificial intelligence was used to find the claimed invention.5
Therefore, under the current legal framework in Portugal, AI cannot be named as an inventor on a patent application. Any patents resulting from AI-created inventions must be filed in the name of a natural person who is associated with the AI system, typically the individual who programmed or operates the AI.
Footnote(s):
4 EPO Decision J 8/20, j200008eu1.pdf (epo.org)
5 German Federal Court of Justice (Bundesgerichtshof), decision of 11 June 2024.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Under Portuguese copyright law, in line with EU directives, an image created solely by artificial intelligence (AI) without human intervention is unlikely to be copyrighted. Copyright protection is typically granted to works that are original and created by a human author.
Article 1(1) of the Copyright and Related Rights Code states that copyright applies to “intellectual creations in the literary, scientific, and artistic fields” that are externalized and original. CJEU case law holds that originality requires a conscious, creative process by a human.
If a human uses AI as a tool and the resulting work meets the originality requirement, the human author can be granted copyright protection.
Given the nascent regulatory framework and the absence of specific Portuguese case law, Portugal aligns with the broader EU framework and international standards, with the key criterion being the extent of human involvement in the creation.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
Article 106 (3) of the Labour Code states that employers must inform workers about the parameters, criteria, rules, and instructions on which AI systems affecting employment decisions and working conditions are based. Additionally, the General Data Protection Regulation (GDPR) applies, ensuring the protection of personal data in AI applications.
When integrating artificial intelligence (AI) systems in the workplace, several key issues must be addressed:
Firstly, ethical considerations are paramount. AI algorithms should be rigorously tested to minimize bias and promote fair treatment of all employees. Transparency is also crucial; employees need to understand how AI tools function, what data is used, and how decisions are made.
Legal compliance is another critical factor. AI systems must adhere to labor laws and GDPR, particularly in matters related to employee monitoring and rights.
Employee training is essential for effective use of AI tools. Employees should be adequately trained to understand and utilize AI systems in their roles. Finally, the reliability of AI systems must be ensured through rigorous testing for accuracy and regular updates. There should be established protocols for addressing errors or malfunctions, and employees should know how to report issues and what steps will be taken to resolve them.
-
What privacy issues arise from the use of artificial intelligence?
The use of AI tools presents several privacy considerations, such as data collection and training, data security, data anonymization, adherence to the data minimization and purpose limitation principles, transparency, accountability, and ensuring that an adequate legal basis is adopted.
AI systems typically require substantial amounts of data to operate effectively. While this often results in the collection of extensive personal data, adherence to data minimization principles ensures that only the necessary data is collected.
To address vulnerabilities in AI systems that may lead to unauthorized access to sensitive personal information, robust cybersecurity measures should be implemented. Proactively securing AI systems can prevent breaches that could otherwise expose large volumes of personal data, thereby safeguarding individuals’ privacy.
Conducting thorough Data Protection Impact Assessments (DPIAs) is essential when deploying AI tools. These assessments help identify potential privacy risks and facilitate the implementation of measures to mitigate them. Additionally, employing advanced anonymization techniques and regularly reviewing their effectiveness can protect privacy by preventing the re-identification of anonymized data.
Transparency is crucial according to GDPR rules, which require clear communication about how personal data is processed.
In conclusion, while the use of AI tools does raise privacy concerns, these can be successfully managed through compliance with laws, adherence to best practices, and a commitment to transparency and accountability.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
From an IP perspective, data scraping is governed by copyright protection and terms of service agreements. Original intellectual creations, including web content and databases, are protected by copyright law, and the EU Database Directive provides additional protections for databases. Websites often include Terms of Service that prohibit scraping, and violating these terms can result in legal action for breach of contract.
Regarding privacy, the General Data Protection Regulation (GDPR) regulates the processing of personal data within the EU. It mandates lawful grounds for data processing, data minimization, transparency, and respect for data subjects’ rights. Scraping personal data without proper legal grounds can result in significant fines and penalties under the GDPR.
Data scraping can intersect with competition law, especially when it aims to unfairly undercut competitors or collect proprietary information.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
The enforceability of data scraping prohibitions in a website’s terms of use depends on factors like jurisdiction, clarity, accessibility of the terms, and the specific circumstances.
In Portugal and the EU, terms of use are generally enforceable if they are clearly presented and accepted by the user. If a user violates these terms by data scraping, the website owner can pursue legal action for breach of contract, which can lead to injunctions, damages, and legal costs.
However, enforcement can be challenging if the scraper is in a different jurisdiction or if the terms were not properly communicated. The effectiveness of enforcement also depends on the website’s technical measures to prevent scraping and the ability to identify the infringer.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
As of now, the Portuguese Data Protection Authority (“CNPD”) has not issued specific guidelines on artificial intelligence (AI). Portugal’s “AI Portugal 2030” strategy outlines the country’s approach to AI, emphasizing the need for regulatory measures and oversight in line with EU directives.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
Until recently, CNPD had not discussed cases explicitly involving artificial intelligence (AI).
Although it does not constitute an enforcement procedure, CNPD addressed the data protection implications of AI for the first time in the context of a legislative procedure related to implementing new electronic identification schemes in public administration. These schemes, specifically designed for authentication on public administration portals and websites, utilize facial recognition technologies and deep learning algorithms to verify the security of identity documents. In its opinion, CNPD primarily emphasised the controller’s obligation to provide data subjects with information regarding automated decision-making.
CNPD has recently issued a suspension order to the Worldcoin Foundation, prohibiting the collection of biometric data through the “Orb” device on national territory. CNPD highlighted the need to guarantee the security and privacy of personal data, particularly in the context of the use of AI to collect and process this data. This action reflects a growing trend towards strict regulation in response to technological innovations.6
Footnote(s):
6 CNPD, Resolution/2024/137, available here.
-
Have your national courts already managed cases involving artificial intelligence?
According to recent legal analyses and reports, there have been no notable judicial decisions in Portugal directly addressing AI-related issues. However, two class actions are still pending in the civil courts against a global digital platform, which also touch on algorithmic decision-making processes. In addition, there has been a significant number of labor cases against delivery platforms covering whether the algorithmic decisions made by the platforms give rise to labor relationships with the couriers; the substantial majority of these decisions so far have ruled in favor of the platforms.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
There is no specific regulator or authority responsible for supervising the use and development of artificial intelligence in Portugal. Instead, several entities issue guidelines on the use of artificial intelligence in specific sectors or markets, such as public administration, health, justice, and education.
However, given its designation as Digital Services Coordinator, ANACOM, the communications regulator, is also expected to play a similar coordination role over AI.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
A report published earlier in 2024 revealed that 35% of Portuguese companies had adopted AI technologies by 2023, up from 28% in 2022, a 25% relative increase in just one year. This growth was primarily driven by the adoption of cloud-based services.
Moreover, 61% of companies in Portugal have increased their investments in digital technology since September 2022, ten percentage points above the European average (51%).
Portugal also stands to gain significantly from the anticipated benefits of AI. According to the same survey, 60% of Portuguese citizens believe that AI could play a pivotal role in addressing global challenges such as climate change.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
Artificial intelligence (AI) is increasingly used in the legal sector in Portugal by both lawyers and in-house counsels to improve efficiency, accuracy, and productivity. Law firms implement AI for document automation and legal research, enhancing service delivery and competitiveness. AI tools can quickly analyze contracts to identify key terms, risks, compliance issues, and proofread documents, which is especially useful for due diligence in mergers and acquisitions. Additionally, AI assists in drafting legal documents and performing precise searches in contracts, case law, and other legal sources, allowing lawyers to focus on strategic tasks.
In-house legal departments use AI to streamline contract management, ensure regulatory compliance, and improve operational efficiency. As technology advances and becomes more accessible, AI use in Portugal’s legal sector is expected to grow, leading to significant improvements in legal service efficiency and accuracy.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
In Portugal, the integration of artificial intelligence (AI) into the legal profession presents several key challenges. Data privacy and security are major concerns, given the strict GDPR regulations. Lawyers must ensure AI systems handle sensitive legal data securely and compliantly. Ethical and bias concerns also arise, as AI algorithms can produce biased outcomes, threatening fairness and impartiality. Keeping up with evolving regulatory compliance is resource-intensive, and the potential for job displacement due to AI automation of routine tasks requires lawyers to focus on complex, non-automatable work. Ensuring the quality and reliability of AI outputs is crucial to avoid errors that could lead to legal malpractice.
However, AI also offers significant opportunities for lawyers. It can increase efficiency and reduce costs by automating routine tasks, allowing lawyers to focus on higher-value activities. AI-powered tools provide enhanced legal research and insights, enabling better-informed decisions. Improved client services through AI chatbots and virtual assistants offer around-the-clock legal advice, leading to higher client satisfaction. Predictive analytics can help assess the likelihood of success for different legal strategies, enhancing case management. Additionally, AI fosters innovation in legal practice with automated contract generation and real-time compliance monitoring.
Clients need to understand that AI-generated decisions carry a certain level of risk, similar to human-made decisions. Lawyers should discuss these risks with clients to determine the appropriate level of AI integration they are comfortable with.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Given that the AI Act is still pending publication, it is premature to make any definitive predictions regarding the most significant legal developments that will take place over the next 12 months.
However, in light of the responses and challenges identified in our assessment, we anticipate that the most significant legal developments will focus on the following aspects:
- The implementation of the EU AI Act will have a significant impact on the legal industry. Its enactment will require AI developers, deployers and users to adopt new compliance measures aimed at ensuring safety and transparency, which will in turn shape new practices in legal assistance, audits and new legal products.
- Cross-functional implementation. AI compliance mechanisms will not be limited to the implementation of the AIA: its risk-based approach presupposes integration with other impact assessments, covering, for example, data processing practices, IP rights management, cybersecurity measures, and other sectoral obligations (e.g. labor, environmental, social, health).
- Transposition of the AI Liability Directive: Portugal must prioritize integrating the AI Liability Directive into national law and implementing a more streamlined liability regime for compensation claims brought by injured parties. As previously stated, the burden of proof may require adjustments, including a reversal of the onus. Additionally, a lighter standard for establishing the causal link between the damage and the fault may be necessary.
- AI Governance Mechanisms. Since compliance assessments must address the specific risks of AI tools within each organization and sector of activity, as well as the life-cycle of each AI system (considering the various operators along the value chain), organizations will require robust governance frameworks that ensure ongoing compliance and monitoring of AI tools.
- Data Protection and Privacy Enhancements: Given AI’s reliance on extensive data, Portugal will be required to update its data protection laws to address AI-specific challenges. This will reinforce privacy and data security measures.
- The development of ethical AI guidelines is expected to be a significant area of focus for the government and R&D entities in Portugal. This will involve the creation and adoption of governance and ethical guidelines for AI, with a particular emphasis on mitigating bias, ensuring fairness, and enhancing accountability and transparency in AI systems.
- The introduction of legal reforms is likely to provide greater clarity on liability and accountability for AI-related damages, particularly in sectors such as autonomous vehicles, healthcare, and finance. These reforms will define the responsibilities of developers, users, and other stakeholders.
Portugal: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Portugal.