-
What are your country’s legal definitions of “artificial intelligence”?
Greek legislation does not provide a definition for the term “artificial intelligence.”
-
Has your country developed a national strategy for artificial intelligence?
Greece is in the process of developing its national AI strategy, coordinated by the Hellenic Ministry of Digital Governance. More specifically, Article 11 of Law 4961/2022 provides for the establishment of a Coordinating Committee for AI, whose mission is to coordinate the implementation of the National Strategy for the development of artificial intelligence. Its responsibilities include: a) making decisions for the implementation and improvement of the National Strategy, b) shaping national priorities and directions for this purpose, and c) submitting proposals and recommendations for corrective measures if deviations are identified regarding the implementation of the National Strategy or impacts on the fundamental rights of individuals. According to Article 13, the Executive Body of the Coordinating Committee is designated as the Committee of the National Strategy for the development of artificial intelligence, which consists of officials from the Ministry of Digital Governance and is responsible for the implementation of the National Strategy. Finally, Article 14 of Law 4961/2022 provides for the establishment of an Artificial Intelligence Observatory within the Ministry of Digital Governance, under the General Secretariat for Digital Governance and Simplification of Procedures, with the primary mission of collecting data related to the implementation of the National Strategy for the development of artificial intelligence.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Law 4961/2022 introduces, inter alia, a national framework for regulating AI technologies in the public and private sectors, imposing the following obligations on different categories of entities:
A. Public Entities
- Algorithmic Impact Assessment: In addition to conducting an impact assessment under Regulation (EU) 2016/679 (GDPR), entities must prepare an algorithmic impact assessment to evaluate risks to the rights, freedoms, and legitimate interests of individuals affected by the system. Safeguards will be specified by Presidential Decree.
- Transparency of Operation: Public entities must provide information about the AI system, including start time, operational parameters, and decisions made or supported by it. Complaints regarding transparency violations are examined by the National Transparency Authority.
- AI System Register: Public entities must maintain a register of the AI systems they use.
B. Private Entities
- AI in Employment: Before using an AI system that affects decision-making processes regarding employees or job applicants, impacting working conditions, selection, hiring, or evaluation, enterprises must provide relevant information to employees. These obligations also apply to digital platforms, with respect to persons working under contracts of dependent work, independent services, or project contracts. Employers must also conduct an impact assessment to safeguard employees’ rights, with potential sanctions for non-compliance imposed by the Hellenic Labour Inspectorate.
- Ethical Use of Data: Medium or large private sector entities, as defined in Article 2 of Law 4308/2014 (“Greek Accounting Standards”), must adopt a data ethics policy outlining measures, actions, and procedures related to data ethics in AI use. Entities preparing a corporate governance statement under Article 152 of Law 4548/2018 (Government Gazette A’ 104) must include information about their data ethics policy. The content of these policies will be specified by a Joint Ministerial Decision.
- Registry of AI Systems: Medium or large private sector entities must maintain a register of the AI systems they use.
- Public Contracts: Public contracts for AI system design or development must include the following obligations for the contractor:
- Provide the contracting authority with information ensuring transparent operation of the system, respecting military, commercial, and industrial secrecy.
- Deliver the AI system under conditions allowing the contracting authority to study its functionality and parameters, make improvements, and publish or distribute those improvements.
- Ensure the system complies with the legal framework, particularly regarding human dignity, privacy and personal data protection, non-discrimination, gender equality, freedom of expression, universal access for persons with disabilities, employee rights, and good governance principles.
The provisions of Law 4961/2022 regarding AI technologies do not affect the rights and obligations established under the General Data Protection Regulation (GDPR) and Law 4624/2019 for personal data protection.
The AI Act, published on 12 July 2024, is binding and directly applicable in all Member States, including Greece. Each Member State must, however, establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of the Regulation within 12 months of its entry into force. The AI Act adopts a risk-based approach: AI systems are divided into four categories according to the type and level of risk they pose, namely unacceptable risk, high risk, limited risk, and minimal risk. AI systems falling into the unacceptable risk category are prohibited outright, high-risk systems must comply with specific requirements, while limited or minimal risk systems are subject to fewer or no requirements at all. Finally, the Act lays down specific requirements for general-purpose AI models.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
The existing framework for product liability, which was introduced by the Product Liability Directive (Directive 85/374/EEC) and was implemented by amendments to the Greek Consumer Protection Law 2251/1994, is also applicable to new digital technologies. The Consumer Protection Law establishes a strict liability regime under which producers of defective products are held liable when such products cause damage to natural persons or their property, while the injured consumers are not required to prove the fault of the producer. Producer liability is non-contractual, meaning that it is not necessary that a prior contract is concluded between the producer and the injured party.
Under Greek consumer protection legislation, in order for a producer to be held liable for damages suffered by a consumer, the following conditions must be met: a) the product placed on the market by the producer is defective, b) the consumer has incurred damage, and c) a causal link is established between the defect and the damage.
Provided that AI systems qualify as “products” under Greek law, applying the strict liability regime of producers is best suited to the operation of AI systems because of the difficulty in proving the cause of any defect due to the complex construction of the autonomous system.
It should be noted that the revised Product Liability Directive (PLD) is expected to be published soon, modernizing the existing legal framework to address new technological advancements and challenges, particularly those related to artificial intelligence and digital products. The revised PLD introduces provisions addressing liability for products such as software (including AI systems) and for digital services that affect the functioning of a product (e.g. navigation services in autonomous vehicles). Additionally, it clarifies the liability rules for companies that substantially modify products before reselling them, and it ensures that consumers are compensated for defective products manufactured outside the EU. The revised PLD broadens the notion of “defect” and allows compensation for damage where products like robots, drones or IoT smart-home systems are rendered unsafe by software updates, by AI or digital services needed to operate the product, or where manufacturers fail to address connectivity risks and cybersecurity vulnerabilities. Under certain circumstances, the new Directive requires manufacturers to disclose information where plausible claims for compensation are made, and alleviates the burden of proof for victims (consumers) seeking compensation for damage resulting from defective products. The final text of the revised PLD was formally endorsed by the European Parliament at the Plenary of 12 March 2024, which adopted its position at first reading.1 The revised PLD will enter into force on the twentieth day following its publication in the EU Official Journal.
In addition, if personal data is processed by the AI system in violation of the applicable legal framework, relevant liability is triggered.
Finally, all AI technologies in Greece should meet the essential health and safety requirements laid down in the EU safety legislation, as it has been transposed into Greek law.
Footnote(s):
1 European Parliament legislative resolution of 12 March 2024 on the proposal for a Directive of the European Parliament and of the Council on liability for defective products (COM(2022)0495 – C9-0322/2022 – 2022/0302(COD)), (Ordinary legislative procedure: first reading) – P9_TC1-COD(2022)0302, Position of the European Parliament adopted at first reading on 12 March 2024 with a view to the adoption of Directive (EU) 2024/… of the European Parliament and of the Council on liability for defective products and repealing Council Directive 85/374/EEC [“Position of the European Parliament of 12 March 2024 for the adoption of a Directive on liability for defective products (revised PLD)”].
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
Civil liability
Greek civil law has not thus far introduced a specific tortious liability regime covering advanced AI, therefore the general provisions on tortious liability apply.
In accordance with Article 914 and Article 932 of the Greek Civil Code, if an injured party demonstrates that they have incurred damages due to the operation of an AI system, they may be entitled to compensation. Such damage may encompass financial loss, physical injuries, property damage, and non-financial harm such as pain and suffering.
The Greek Civil Code sets out five conditions that need to be fulfilled for tortious liability to be attributable to a party:
- human behaviour;
- illegal action;
- fault;
- damage; and
- causal link between the behaviour and the damage.
Where a system operating with a degree of autonomy causes damage, a number of these conditions are challenging to substantiate, particularly a party’s fault and the causal link between the human behaviour and the damage that occurred.
Criminal liability
Criminal liability presents the most challenges for legal systems with regard to the operation of AI systems. Specific criminal law provisions regulating AI have not yet been introduced to Greek criminal law, however provisions of the Greek Criminal Code and special criminal laws could be applicable to the use of AI systems on a case-by-case basis.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
In the absence of a dedicated framework to regulate AI liability, the allocation of liability is determined on a case-by-case basis, taking into consideration the facts of each incident and the role and actions of the parties involved.
For example, if design or manufacturing defects are the cause of the harm, manufacturer or producer liability might be established. If the harm is caused by a user deploying the system in an unsafe manner, in a manner that does not adhere to industry standards and the guidance provided, or by a failure to properly maintain the system, the user may also be responsible for the damage. Finally, it is possible for the injured party to contribute through their own actions to the harm they have incurred, e.g. through negligence or improper use of the AI system. In such cases, the contributory fault of the injured party may serve to reduce the liability of the other parties involved.
As the new generation of AI edges closer to operational autonomy and behavioural unpredictability through its capacity to analyse and learn from its environment, assigning legal responsibility for harmful actions is bound to present a point of contention across most jurisdictions, as the natural person at fault for damage caused by an AI system will become increasingly difficult to identify.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Under Greek Consumer Protection Law, consumers who have suffered damage are required to prove that the damage was caused by a defective product; however, as Law 2251/1994 adopts a strict liability regime for manufacturers of defective products, consumers are not required to prove the fault of the manufacturer.
Under tort law the injured party bears the burden of proof with regard to the fault of the party liable for the damage they have suffered.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
In Greece, it is common practice for the parties that offer IT services to have insurance coverage and the use of AI systems can be insured under general civil liability insurance. The existence of insurance agreements can increase the cost of the provision of the IT services, but they appear to be necessary in the contemporary international technology landscape.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No. Under Greek patent law No. 1733/1987, an inventor can only be a human being. According to Article 6 of that law, the person applying for a patent is considered to be the inventor.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
There is currently no specific legislation or case law addressing copyright protection for AI-generated images. However, Greek copyright law No. 2121/1993 generally recognises only natural persons as authors.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
When using artificial intelligence (AI) systems in the workplace, several key issues need to be considered to ensure ethical, legal, and effective implementation. AI raises important risks in terms of:
- Biases: AI systems can produce decisions that reproduce prohibited discrimination, biases and prejudice.
- Privacy: AI systems used by employers to make decisions about employees process personal data. Consequently, the principles established by the GDPR, such as purpose limitation, transparency, and legitimate basis for processing, must be followed. Employees can also exercise all rights granted under the GDPR and Greek Law 4624/2019, such as the right to be informed. Moreover, Article 22 of the GDPR applies in this context, granting employees the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal effects or significantly affect them.
Greek Law 4961/2022 also imposes specific obligations on private companies that use AI systems in the workplace. According to Article 9 of Law 4961/2022, any private sector business that uses an artificial intelligence system affecting any decision-making process regarding employees or job applicants, and impacting working conditions, selection, hiring, or evaluation, must provide sufficient and clear information to each employee or job applicant before its first use. This information should at least include the parameters on which the decision is based, subject to cases requiring prior notification and consultation, and must ensure compliance with the principles of equal treatment and non-discrimination in employment and work on grounds of gender, race, color, national or ethnic origin, genetic background, religious or other beliefs, disability or chronic condition, age, family or social status, sexual orientation, gender identity, or gender characteristics.
-
What privacy issues arise from the use of artificial intelligence?
The use of artificial intelligence (AI) systems in processing personal data presents several challenges. Firstly, there are significant privacy concerns due to the potential for unauthorized access and misuse of sensitive information. AI systems often require large datasets to function effectively, which increases the risk of data breaches and leaks. Ensuring the confidentiality and integrity of personal data is paramount, as these systems can inadvertently expose information through security vulnerabilities or inadequate data protection measures.
Secondly, the accuracy and fairness of AI algorithms are crucial issues. AI systems can perpetuate biases present in the training data, leading to discriminatory outcomes. These biases can stem from various sources, including the quality of the data and the design of the algorithms. Consequently, individuals may face unjust treatment based on flawed data interpretations, which can affect areas such as hiring, lending, and law enforcement. It is essential to continuously monitor and adjust these systems to minimize biases and ensure equitable treatment for all users.
Lastly, transparency and accountability in AI processing of personal data are critical for maintaining trust. The complexity and opacity of AI algorithms often make it difficult for individuals to understand how their data is being used and to challenge automated decisions that affect them. Regulatory frameworks like the General Data Protection Regulation (GDPR) emphasize the right to explanation and the need for AI systems to be transparent in their operations. This requires clear communication about data usage and the implementation of mechanisms for individuals to contest and correct erroneous or unfair decisions.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
IP: There are no specific provisions on data scraping in Greek IP laws. Greek IP law No. 2121/1993, as amended by Law No. 4966/2022, which transposes Directive (EU) 2019/790, sets out rules for text and data mining, as analyzed below in question 14. Under the Law, text and data mining, i.e. any automated analytical technique aimed at analysing text and data in digital form in order to generate information, including but not limited to patterns, trends and correlations, is permitted for the purpose of scientific research carried out by research organisations and cultural heritage institutions. Mining is permitted on material to which research organisations and cultural heritage institutions have lawful access.
The reproduction and extraction of works and other material to which access is lawful, for the purpose of conducting text and data mining, is also lawful on the condition that such use has not been expressly restricted by the author or other rightholder in an appropriate manner, such as machine-readable means in the case of content made publicly available online. Reproductions and extractions made in accordance with the above may be retained for as long as is necessary for the purposes of text and data mining.
Privacy: In Greece, data scraping is regulated mainly through the General Data Protection Regulation (GDPR), Greek Law 4624/2019, and guidelines from the Hellenic Data Protection Authority (HDPA). The GDPR defines processing as any action performed on personal data, such as collecting, organizing, storing, modifying, retrieving, using, and disseminating it. Web scraping encompasses these activities, thus making scraping operators data controllers. As data controllers, they must fulfill specific obligations, including establishing a lawful basis for processing data and following the principles of transparency, data minimization, purpose limitation, accuracy, security, confidentiality, integrity, and accountability. More specifically:
- Data scraping allows businesses to extract large amounts of data from websites, often in hopes of future use. However, this practice risks violating GDPR principles of purpose limitation and data minimization, which require that data be collected only for specific, legitimate purposes and not processed further unless compatible with the original intent.
- Data minimization requires businesses to collect and process only the personal data that is relevant, necessary, and adequate for the intended purpose. Practically, this means applying tests of necessity and proportionality to ensure that all scraped data is essential and directly relevant, thereby reducing the amount of data collected to the lowest possible level needed to achieve the processing goals.
- Consent is typically required for scraping personal data unless another legal basis exists, such as legitimate interest, provided individual rights are not overridden.
- The GDPR mandates that controllers maintain transparency, a challenging requirement given the nature of data scraping. When businesses use data scraping service providers, they must provide individuals with a privacy notice detailing specific information as outlined in Article 14 GDPR, including data subject rights and how to exercise them, within one month of scraping their data.
Competition: From a competition point of view, data scraping is not in itself a prohibited, anti-competitive practice under Greek law. What could raise competition law concerns, however, is the case where data scraping is used as a tool for anti-competitive agreements, abuse of a dominant position, or unfair competition violations.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
In line with the above mentioned in question 13, the data mining exception in Article 21B applies provided that the use has not been explicitly restricted by the rightholders in an appropriate manner, such as machine-readable media and metadata in the case of content made publicly available online, by providing for specific conditions on the relevant webpages or a content provider service prohibiting text and data mining. Any restriction should be clear, precise, easily identifiable and unambiguous, in order to be more likely to be enforceable.
If the rightholder, in this case a website, has already prohibited data scraping through its terms of use, a user wishing to mine can only invoke the exception in Article 21A, which, however, covers extraction exclusively for research purposes. If the applicant for mining is not a research organisation or cultural heritage institution, it cannot rely on Article 21A and is left only with the exception in Article 21B, which does not apply where the use has been expressly restricted.
The blocking of material by rightholders in order to prevent text and data mining is reported to be extremely widespread in practice, as publishers and content providers often explicitly prohibit any automated mining, searching or downloading of content and any other related act. It should be noted, however, that if rights are reserved on a massive scale, and those reservations are correspondingly enforced, the exception becomes obsolete in practice.
In addition, if scraping involves copying significant portions of copyrighted content, it can be considered infringement. Finally, it is worth noting that Directive 96/9/EC provides protection to databases and according to it, unauthorized extraction of significant parts of a database can lead to liability for violation of its rights.
Finally, noteworthy is also Decision No. 35/2022 of the Hellenic Data Protection Authority, which examined a complaint against Clearview AI Inc., a U.S.-based company scraping selfies off the internet to build an algorithmic identity-matching commercial service. The HDPA found that the company, which markets facial recognition services, violated the principles of lawfulness and transparency, and imposed a fine of twenty million euros (€20,000,000) on Clearview AI Inc.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
No, at present, the Hellenic Data Protection Authority (HDPA) has not issued guidelines or recommendations regarding the processing of personal data in the context of using artificial intelligence systems or methods.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
In its Decision 57/2022, the Hellenic Data Protection Authority examined the case of COSMOTE MOBILE TELECOMMUNICATIONS S.A. regarding deficiencies and ambiguities in informing data subjects about the processing of personal data during the digital on-boarding process for the identification of prospective subscribers using Artificial Intelligence technology. The Authority decided to reprimand the company for violating Articles 5(1)(a) and 13 of the GDPR, which concern transparency and informing data subjects. The decision emphasized the need to amend the contract texts and notifications to comply with data protection requirements, highlighting the importance of transparency and accountability in the processing of personal data.
With Decision 35/2022, the Hellenic Data Protection Authority examined a complaint against the company named Clearview AI, Inc, requesting an investigation into the company’s overall practices regarding personal data protection. Specifically, the Authority found that in this case, the complained-about company, which markets facial recognition services, violated the principles of lawfulness and transparency (Articles 5(1)(a), 6, 9 GDPR) as well as its obligations arising from the provisions of Articles 12, 14, 15, and 27 of the GDPR, imposing a financial penalty of twenty million euros (€20,000,000). Additionally, the Authority issued a compliance order to the said company to satisfy the request for access to personal data made by the complainant before the Authority and imposed a prohibition on the collection and processing of personal data of subjects located within the Greek territory using the methods included in its facial recognition service. Finally, with the contested decision, the Authority also directed Clearview AI, Inc to delete the personal data of the subjects located within the Greek territory, which the company collects and processes using the aforementioned methods.
-
Have your national courts already managed cases involving artificial intelligence?
No. There is currently no Greek case law concerning AI systems, their liability, or the liability of their manufacturers. Law 4961/2022 concerning AI systems in Greece is relatively new and has not yet been applied by the Greek courts.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
According to Law 4961/2022, the Ministry of Digital Governance appoints a Coordinating Committee to coordinate the implementation of the National Strategy for the development of Artificial Intelligence. The Coordinating Committee is also responsible for taking decisions on the implementation and continuous improvement of the National Strategy, formulating guidelines for its optimal implementation, and designing and promoting policy proposals and actions. Under the same law, a committee for the supervision of the National Strategy for the development of artificial intelligence is established at the Ministry of Digital Governance.
In addition, an Artificial Intelligence Observatory is established with the main mission of collecting data on the implementation of the National Strategy, compiling reports on AI-related activities and supporting competent bodies.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
While the use of AI by businesses in Greece is currently not widespread, there is a growing interest and gradual increase in adoption.
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
There are several AI tools specifically designed for lawyers, offering law professionals multiple capabilities such as document review and analysis, document automation, predictive analytics and even legal research. However, the extent to which these tools are used in practice cannot be confirmed from publicly available data.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
Challenges:
- Impact on Employment: AI has the potential to automate certain legal tasks, which could impact traditional legal jobs. Lawyers may need to adapt their skill sets and find new roles that leverage AI technology effectively to enhance their legal practice.
- Interpretation of AI-Generated Outcomes: AI systems often produce complex outputs, such as predictive analytics or natural language processing, which may require specialized knowledge for interpretation. Lawyers need to develop an understanding of AI technologies to effectively interpret and explain AI-generated outcomes to clients and the court.
- Security and Data Protection: AI relies on large amounts of data, and ensuring the security and protection of sensitive information is crucial. Lawyers must be knowledgeable about cybersecurity measures and data protection regulations to safeguard client data and prevent unauthorized access or breaches.
- Tort liability: Determining who is liable when damage is caused by an AI system, how causation is established, and whether the traditional criteria of attribution can still be applied where it is shown that, when the system was programmed by a human being, the choice that ultimately led to the tort could not have been foreseen, so that it cannot be argued that the human being was responsible for the act of the AI system.
- Contractual liability: How contractual liability is affected in the AI environment, as concepts such as accountability, and the attribution of fault for the abnormal performance of a contract, no longer seem self-evidently attributable to the user.
Opportunities:
- Efficiency and Productivity: AI technologies can automate routine and repetitive legal tasks, such as document review, contract analysis, and legal research. Lawyers can leverage AI tools to streamline their workflow, increase efficiency, and focus on more complex and strategic aspects of their practice.
- Enhanced Decision-Making: AI-powered analytics and machine learning algorithms can analyze vast amounts of legal data and provide insights for case assessments, risk predictions, and legal strategy development. Lawyers can leverage these tools to make more informed decisions and provide better legal advice to clients.
- Legal Research and Analysis: AI can assist lawyers in conducting comprehensive legal research, analyzing precedents, and identifying relevant case laws or regulations. AI-powered legal research platforms can save time and enhance the accuracy of legal research outcomes.
- Contract Review and Due Diligence: AI can streamline contract review processes and due diligence tasks by automating document analysis, flagging potential risks or discrepancies, and extracting relevant information. Lawyers can use AI tools to expedite contract review and focus on higher-value legal tasks.
- Innovation and New Legal Services: AI opens up opportunities for lawyers to develop innovative legal services and solutions. For example, lawyers can specialize in AI-related legal issues, such as data protection in AI systems or the legal implications of autonomous vehicles. Embracing AI can help lawyers stay at the forefront of legal innovation and expand their practice areas.
While AI offers numerous opportunities, lawyers should approach its implementation and adoption with a clear understanding of its limitations, potential biases, and ethical considerations. Adapting to the evolving landscape of AI technologies will require continuous learning and a proactive approach to integrating AI effectively into legal practice.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
In the next 12 months, the most significant legal developments in artificial intelligence (AI) in Greece are expected to include:
- The completion of the drafting of the National Strategy for the development of Artificial Intelligence. In particular, according to the Digital Transformation 2020-2025 Bible, the National Strategy for the development of AI will:
- Define the conditions for the development of artificial intelligence, including skill and trust frameworks, data policies, and ethical principles for safe development and use.
- Describe national priorities and sectors for maximizing the benefits of artificial intelligence to address social challenges and foster economic growth.
- Analyze necessary actions related to the above priorities and propose cross-cutting interventions, along with at least one pilot application per policy area.
- The adoption of the framework for the implementation of the AI Act: Greece shall establish or designate, as national competent authorities, at least one notifying authority and at least one market surveillance authority for the purposes of the Regulation. The AI Act also empowers Member States to determine the penalties and enforcement measures applicable in the event of a breach of the Regulation. Greece shall therefore, without delay and at the latest by the date of entry into application (2 August 2026), determine the penalties and notify the Commission of the rules on penalties and of other enforcement measures.
Greece: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Greece.