-
What are your country’s legal definitions of “artificial intelligence”?
Romanian law does not contain a legal definition of artificial intelligence (AI); however, several definitions of AI appear in draft legislation and in the draft national artificial intelligence strategy. A draft law pending approval in Parliament defines AI as “an advanced virtual processing modality of large quantities of data, meta-data and information by applying predefined algorithms to achieve tasks in an anticipative manner in an optimal timeframe”. It is, however, widely expected, following negative feedback from legislative bodies, that Parliament will reject this draft law.
The 2024-2027 National Artificial Intelligence Strategy approved by the Romanian Government on 11 July 2024 relies on the definition included in European Commission Communication COM(2018) 2371, according to which AI refers to “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”.
The legal definition that will be used in Romania is that of the recently published AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence), which defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Footnote(s):
1 COM (2018) 237: https://ec.europa.eu/transparency/documents-register/detail?ref=COM(2018)237&lang=en.
-
Has your country developed a national strategy for artificial intelligence?
On 11 July 2024, the Romanian Government approved the 2024-2027 National Artificial Intelligence Strategy. AI had previously been referenced in the 2021-2024 Government Plan, among the strategic projects and those related to digital transformation in Romanian public authorities, as well as in the National Recovery and Resilience Plan and in other national policies and strategies.
AI is generally seen as a priority for Romania’s development in areas such as human resource development, education improvement, encouragement of entrepreneurship, and cybersecurity. The National Artificial Intelligence Strategy lays down the following objectives: (i) supporting education in research and development of AI competencies; (ii) development of a resilient infrastructure and of datasets usable by AI systems; (iii) development of a national AI research system; (iv) facilitating the transfer of technology via partnerships; (v) facilitating the adoption of AI in Romania; and (vi) developing systems of AI governance and regulation.
The key elements of Romania’s National Artificial Intelligence Strategy include:
- Setting out the objectives of the Strategy, detailed implementation programs and measures, and the key performance indicators related to the implementation of the Strategy;
- The key requirements for AI to be considered for the purposes of the Strategy;
- The national priorities related to AI;
- Issues related to the implementation of AI systems in Romania;
- The authorities responsible for the implementation of the Strategy, etc.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Romania has not yet adopted rules or guidelines specifically on the use of AI, and the general expectation is that the Romanian State will issue implementing regulations for the EU AI Act in the near future. Meanwhile, guidance on AI issued at EU level applies in Romania (such as the reports of the EU’s High-Level Expert Group on AI2; the EBA’s “Machine Learning for IRB Models. Follow-up Report from the Consultation on the Discussion Paper on Machine Learning for IRB Models”3; or ESMA’s “Public Statement on AI and Investment Services”4), and the relevant Romanian supervisory authorities are expected to monitor its observance.
Several existing regulations which are not AI-specific could apply to the use of AI to the extent AI systems qualify as software products or technology (generally):
- Financial services: various rules apply in relation to operational risks of technology in financial services. The use of AI systems would also be subject to the rules on the outsourcing of activities of financial entities.
- Trusted services: pursuant to a local regulation, remote identity proofing conducted as an automated process relying on digital means such as AI systems must comply with the relevant existing technical standards and is subject to authorization and registration in Romania.
- Cybersecurity: entities subject to cybersecurity regulation (such as the NIS and telecoms legislation) must observe the cybersecurity requirements applicable to the use of technology.
- Data protection: the use of AI systems must comply with the data protection legislation, generally, to the extent they are used to process personal data; inter alia, the use of AI systems is subject to data protection impact assessments.
Footnote(s):
2 Full list of reports is available here: https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai.
3 Full report is available here: https://www.eba.europa.eu/sites/default/files/document_library/Publications/Reports/2023/1061483/Follow-up%20report%20on%20machine%20learning%20for%20IRB%20models.pdf.
4 Press release and initial guidance statement are available here: https://www.esma.europa.eu/press-news/esma-news/esma-provides-guidance-firms-using-artificial-intelligence-investment-services.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
There are no provisions in Romanian legislation dealing specifically with defective AI systems. Liability for defective AI systems could be assessed under various liability regimes, i.e., defective product liability, liability for lack of conformity, contractual liability, and extra-contractual liability; in addition, AI systems may be subject to administrative liability in case of breaches and security incidents, pursuant to the data protection and cybersecurity legislation (i.e., GDPR, NIS).
Romania has in place a general legal regime for defective products pursuant to the EU Product Liability Directive 85/374 acquis, which applies to defective goods generally. Manufacturers and certain intermediaries may be held liable for defects in goods irrespective of any contractual relationship with the acquirer of the defective goods. This legal regime applies to AI systems incorporated in a defective good. Following a recent revision of the EU directive, the national product liability legislation will be amended in the coming years to, inter alia, extend the definition of products to include software, thus covering defects in AI systems.
Producers and distributors of AI systems may also be subject to liability for lack of conformity under the consumer protection legislation on digital services and goods incorporating digital content (i.e., implementing EU Directives nos. 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services and 2019/771 on certain aspects concerning contracts for the sale of goods), when AI is provided as a service or is incorporated into goods.
General contractual and extra-contractual liability rules may be relied upon to trigger liability for damages caused by AI systems, subject to proof of damage (which may be anticipated via penalty clauses), fault (in contractual liability, breach of a contractual obligation), guilt (or responsibility for the fault or non-performance), and a causal link.
In the near future, an AI-specific liability regime will apply in Romania based on an EU directive harmonising non-contractual civil liability regimes for AI systems (which has yet to be adopted at EU level).
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
General contractual and extra-contractual liability rules may be relied upon as well to trigger liability for damages caused by AI systems subject to the requirements discussed at item 4 above. Product-specific liability rules (described at item no. 4 above) may also apply to the use of AI systems.
Civil liability may also apply in the case of use of AI systems in infringement of intellectual property rights or illegal use of data.
Administrative liability for the use of AI systems may be triggered under the data protection and cybersecurity legislation (i.e., GDPR, NIS).
The use of AI systems to perpetrate criminal acts (e.g., IT fraud, illegal dealing in non-cash payment instruments; breach of IT systems, illegal altering of data, etc.) could trigger criminal liability.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Responsibility for harm varies depending on the legal capacities and interests of the parties involved; for instance, end-users may be entitled to invoke product liability as per the applicable rules (see item 4 above) or contractual liability.
Under the general liability rules which may apply to the use of AI systems, the person responsible for proven damage is liable for it; responsibility is based on damage and fault or failure to perform a contractual obligation. Liability may be allocated to the various actors in the production and distribution chain of an AI system, e.g., developers, vendors, distributors, users, or victims (if they contribute to the occurrence of the damage through their own negligence or omissions). In each instance, fault would be assessed depending on the attributions and obligations of the relevant party. In the case of data breaches and security incidents caused by the use of AI systems, the primary liability towards data subjects, as well as the administrative liability, rests with the data controller or, generally, the user of the system (which can nonetheless seek redress from the liable person, for instance the manufacturer, vendor or distributor of the system).
In the case of criminal liability, typically the user of the AI system would be held liable for criminally-sanctioned uses of AI systems.
Under the rules of product liability (described above at item 4), producers (including manufacturers, distributors, importers, etc.) are liable for product defects; these rules may apply to AI systems incorporated in goods.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
Under the general liability regimes (as described at item 4 above), the burden of proof rests with the damaged party.
In the case of product liability, it is sufficient to demonstrate the defect of the product, the damage, and the causal link between them, without proof of fault. In the case of liability triggered by consumer protection rules, the non-conformity of the good or service (including goods with digital content or digital services) needs to be proved.
These liability regimes are not tailored for AI systems; however, they may apply if the requirements of the regimes are met, typically subject to proof of damage and fault or non-conformity. Future legislative developments, such as the revised product liability regime and the extra-contractual liability for AI, will specifically address liability for AI systems.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
In Romania, there are currently no offers of insurance policies tailored specifically for AI systems. Nevertheless, general insurance products such as general liability policies (including D&O liability), product liability, IT or privacy and cybersecurity risk policies could be used in relation to the risks resulting from the use of AI systems, subject to policy restrictions.
At the same time, Romanian companies interested in acquiring insurance for their AI systems may sign up for wider technology-related or AI-specific insurance plans with EU insurers which offer such products.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
No. AI systems cannot be designated as inventors when registering a patent with the Romanian Patent Office (“OSIM”). The Romanian Patents Law states that only natural persons can be named as inventors (patent rights being specifically tied to a natural person’s inventive activity). In addition, the implementing rules to the Patents Law indicate that personally identifiable data relating to the inventor(s) must be submitted in the patent registration form as a prerequisite for patent registration. Moreover, Romanian patent legislation requires all patent applications to include an Inventor’s Declaration annex, and no patent is granted in the absence of such a declaration signed by the relevant natural person.
The same principles apply at EU level. The European Patent Office (EPO) has rejected applications that listed AI systems as inventors. As provided under the European Patent Convention (EPC), and as upheld by the EPO’s Board of Appeal, an inventor must be a natural person. This was demonstrated in the DABUS case5, where the EPO specifically refused to recognize an AI system as an inventor, resulting in the denial of the related patent application.
Inventors are however generally allowed to employ any AI system throughout their creative process, provided they do not infringe third-party rights.
We note that most patent offices, including OSIM, are involved in discussions regarding the impact of AI on intellectual property legislation, and there is significant debate on designating AI systems as inventors, such as that generated by the Artificial Inventor Project and reflected in WIPO’s conversations on AI and IP. We expect legislative proposals and public debate on AI-generated inventions and how best to protect them; however, we expect that Romanian law will not change in this regard and will continue to allow only natural persons to be designated as inventors.
Footnote(s):
5 European Patent Office appeal decision is available here: https://www.epo.org/en/boards-of-appeal/decisions/j200008eu1.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
The Romanian Copyright Law does not explicitly address the copyright regime of images created by generative AI systems, and specialized IP courts have yet to issue any formal interpretations on the matter. The general principles outlined in Romanian and EU regulations and case law must be considered to determine the authorship over such images.
The cornerstone of copyright eligibility, as stated in the Copyright Law, is originality. Under the originality principle, copyright protection applies strictly to original intellectual creations that embody the creative input of their authors, regardless of the manner in which they are expressed, whereas authors are defined as the natural persons who make creative contributions to the work. From this perspective, works created by artificial intelligence (AI) systems do not qualify for copyright protection, as AI systems lack the specific creativity associated with human authors. AI systems are unable to express the creativity required to fulfil (i) the originality requirement for authorship and (ii) the human quality required of authors themselves. As a result, for an image to be eligible for copyright protection, it must contain the creative input of a natural person. Without such human creative input, an image fully generated by AI fails to meet the legal requirements for copyright protection.
This approach is consistent with EU jurisprudence on similar matters, as demonstrated in case C-683/17, in which the Court of Justice of the European Union (CJEU) confirmed that copyright protection is contingent on the work being an intellectual creation of a human and a result of human creative expression.6 The CJEU also noted in case C-5/08 that copyright protection applies only to original works that represent “the author’s own intellectual creation”7.
Furthermore, AI-generated work could very well infringe third-party copyright, as shown by the increasing number of copyright infringement cases lodged globally (e.g., against OpenAI and Microsoft).
When generative AI is employed as a tool in the creative process by a human, leading to an original work, the individual may be acknowledged as the creator of a work protected by copyright.
Footnote(s):
6 CJEU, 12 Sep 2019, C-683/17 (Cofemel), para 26-35, ECLI:EU:C:2019:721.
7 CJEU, 16 Jul 2009, C-5/08 (Infopaq International A/S v Danske Dagblades Forening), para 35-51.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
Several legal issues should be considered in relation to the use of AI systems in the workplace, concerning (i) the use of AI systems by staff members to perform their job duties and (ii) the use of AI for the management of the employer’s relationship with staff members.
The use of AI systems in the workplace typically raises legal issues such as data protection compliance; IP rights (e.g., ownership and transferability of rights created by a staff member using AI); compliance with legal and contractual obligations; liability for work performed using AI.
The use of AI by staff members further raises concerns regarding the observance of confidentiality and security of company data, business information and systems; compliance with regulation; accountability and liability for decisions taken using AI systems; uniformity in use, predictability, accuracy, and reliability of the AI used; effectiveness of human supervision, control, and review of AI; and ensuring the availability of support for the AI.
Companies must have in place adequate governance frameworks and tools for the use of AI in their business, applicable and enforceable against staff members. The latter must be trained for the use and supervision of AI; in particular, staff members must be aware of the acceptable uses of AI for their employer and be provided with guidelines on AI use.
Consistent monitoring and supervision of AI systems used will enable companies to take measures to correct systems’ errors to ensure their accuracy and reliability.
AI systems may be used in processes related to the management of the employment relationship, such as recruitment, appraisals, education and training.
Companies must ensure that AI tools used to this end do not discriminate against staff members or candidates in recruitment processes, or breach other requirements of the applicable employment legislation. Human supervision of the AI used to manage employment relationships and the review of AI-taken decisions are generally advisable to maintain a fair, balanced approach to relationships in the workplace.
For data protection compliance, the use of AI in the workplace must be fully transparent towards staff members, as per the GDPR; where AI is used for automated decision-making (including profiling), companies must provide staff members with adequate information on the logic involved, as well as the significance and the envisaged consequences of such processing for the data subjects. Adequate privacy and security controls must be put in place for using AI in the workplace. Furthermore, the use of AI systems for staff monitoring purposes is subject to specific rules under Romanian data protection legislation, such as consultation of employee representatives, proof that less intrusive measures are not effective, and limited storage periods.
-
What privacy issues arise from the use of artificial intelligence?
While Romanian legislation lacks specific privacy requirements pertaining to the use of artificial intelligence, addressing privacy concerns in AI systems involves scrutiny of EU and national privacy frameworks (GDPR, Law 190/2018, etc.) as well as careful oversight across three critical stages: development, training, and operation of AI systems.
- Development of AI Systems: As AI systems must be designed to handle large volumes of data, including personal data, incorporating the privacy by design and by default principles is crucial. Given the innovative nature of AI technology, conducting data protection impact assessments is almost always required if personal data is processed by the AI system, whether during the development, training or operation phases, and such assessments should be carried out from the early stages of development. Furthermore, strong security and technical safeguards are necessary to ensure the integrity, availability, and confidentiality of personal data, as well as compliance with other general privacy principles. In Romania, given the data protection authority’s predilection8 for applying fines for lack of security measures, ensuring that proper security measures are designed and appropriately enabled throughout the entire life-cycle of AI systems is an important first step towards compliance from the development phase onwards.
- Training of AI Systems: Training AI systems often entails processing large datasets, which frequently include Open-Source Intelligence (OSINT) collected using data scraping mechanisms. These processes frequently result in the inadvertent scraping of personal data, with insufficient control over the scope and limits of the processing. This lack of control makes it difficult to ensure that personal data used to train AI systems is limited and accurate, complicating efforts to obtain privacy-compliant AI solutions. Identifying the data types being processed could also prove challenging. Given that large numbers of data subjects are affected by data processing for training purposes, achieving proper justification and transparency of the processing activity could pose practical compliance issues. The uncertainties surrounding the data scraped during training make it difficult to handle data requests from individuals. If legitimate interest is used as the basis for processing data through data scraping practices, data controllers must conduct legitimate interest assessments, where balancing the legitimate interests of data controllers against the interests and rights of data subjects may be a difficult task.
- Operation of AI Systems: When operating an AI system, it is critical to distinguish between data input as prompts and data output as answers. Input and analysis methods must comply with data minimization, purpose limitation, and other general privacy principles. Data output raises concerns about accountability, accuracy, and whether it is the result of automated processing and profiling. AI systems may be biased as a result of poor-quality training data, and such bias may manifest, for instance, as discriminatory tendencies. Also, operating AI systems that learn from and/or analyse sensitive data or engage in profiling activities may raise concerns about opt-in and opt-out methods, which affect both system functionality and operation, as well as about ensuring data subject rights. Compliance with the applicable transparency requirements is also mandatory and frequently subject to sanctions.
Footnote(s):
8 We note the Romanian Data protection authority (ANSPDCP) has issued most of its fines during the past 6 years for failure to implement appropriate technical and organizational security measures.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
Romanian legislation does not explicitly regulate data scraping activities in the context of developing, training or operating AI systems. However, general IP, data privacy, competition and other sectoral laws and regulations apply to data scraping. When scraping data for the training of AI systems, the large datasets parsed often include copyrighted content as well as personal data, triggering the applicability of both copyright-related and privacy-related regulations.
In terms of IP regulations, training AI systems through the use of data scraping mechanisms is regulated under the Copyright in the Digital Single Market (CDSM) Directive, transposed into the Romanian Copyright Law. Per the CDSM Directive9, commercial text and data mining practices are permitted provided that the relevant rights holders do not opt out (generally through mechanisms such as metadata or website terms of use). The Romanian Copyright Law does not further develop or provide guidance on this topic, and no secondary legislation exists.
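In practice, rights holders commonly express the text and data mining opt-out in machine-readable form. The snippet below is purely illustrative: the CDSM Directive does not prescribe a technical format, and the crawler names shown (publicly documented AI training crawlers) and the `tdm-reservation` meta tag (defined in the W3C TDM Reservation Protocol community report) are examples of mechanisms used in practice, not requirements of Romanian law.

```text
# robots.txt — illustrative opt-out directives addressed to AI training crawlers
# (user-agent names are examples of publicly documented crawlers)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

A page-level reservation can likewise be signalled in HTML via `<meta name="tdm-reservation" content="1">`; whether a given crawler honours these signals remains, for now, a matter of practice rather than of enforceable Romanian regulation.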
The GDPR and the national privacy regulations fully apply to data scraping practices. Generally, if data processing in the context of scraping is based on legitimate interest as the processing justification, data subjects are allowed to opt out of the processing at any given moment, albeit such right is oftentimes theoretical due to the poor transparency of scraping practices.
There are no specific regulations that allow or restrict data scraping in Romania from a competition perspective. However, data scraping may be illegal or raise competition concerns if it seeks to gain an unfair advantage over competitors or obtain commercially sensitive information from competitors. From a practical perspective, the Romanian competition authority would conduct thorough assessments of scraping activities during its investigations, if needed.
Footnote(s):
9 Article 362 of Law 8/1996 on copyrights.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
Whereas data scraping practices are generally allowed, subject to the limitations set forth by the GDPR and the Romanian Copyright Law, and data subjects and rightsholders are allowed to opt out of such data scraping activities, website terms of use can serve as an opt-out mechanism against having personal or protected data/works used for AI training purposes.
Website owners who do not wish to have their data scraped for AI training purposes could include an opt-out option directly in the terms of use.
The prohibition of, or opt-out from, data scraping practices can be explicitly included in the website terms of use, and enforcement could be achieved through various methods, depending on the specifics of the scraping activities and the entities managing the underlying AI systems (e.g., data access requests and subsequent erasure requests, content moderation and takedown of illegal content under the Digital Services Act (DSA), etc.). We are nonetheless not aware of actual data scraping enforcement cases in Romania relying on websites’ terms of use; however, the Romanian DSA regulator (ANCOM10) may block data scraping relying on the DSA implementing legislation.
Footnote(s):
10 ANCOM is the National Authority for Management and Regulation in Communications.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Romanian data protection authority (the ANSPDCP) has not yet issued any specific recommendations or official guidelines on the use of personal data in relation to the development, training and operation of AI systems. However, in a notice issued in connection with a governmental AI system (ION)11, ANSPDCP publicly stated that “Although there are currently no legal provisions regulating artificial intelligence in Romania, to the extent that personal data are processed (…), the principles and rules for processing personal data set out in [the GDPR] apply”.
The ANSPDCP has been involved, as an EDPB member, in the drafting of the EDPB’s Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence12. The ANSPDCP has also analysed the draft EU AI Act and set out certain recommendations in the authority’s 2022 annual report13.
ANSPDCP’s abovementioned recommendations on the use of AI, which we expect to be the key elements of the authority’s enforcement actions in relation to AI systems, are the following:
- adequate safeguards and qualified human oversight are necessary to protect data subjects’ rights and prevent harm to individuals;
- data subjects should always be informed about the use of their data for developing, training and operating AI systems, including the legal basis, procedure, scope of the system, and data subjects’ rights without using ambiguous language;
- processing of personal data must adhere to the GDPR principles, in particular the purpose limitation principle;
- personal data must be protected from unauthorized destruction, loss, alteration, disclosure, access, and unlawful processing through appropriate technical and organizational measures;
- the principles of privacy by design and privacy by default set out in the GDPR should be observed;
- data controllers must ensure compliance with data processing principles and demonstrate their effectiveness;
- the use of AI systems is subject to data protection impact assessments14.
Footnote(s):
11 The ANSPDCP’s views, issued following an inquiry by a Romanian media outlet, are available here: https://media.hotnews.ro/media_server1/document-2023-03-8-26129303-0-raspuns-oficial-anspdcp.pdf.
12 Available here: https://www.edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf.
13 ANSPDCP Activity report for 2022 – available in Romanian language at https://www.dataprotection.ro/?page=Rapoarte%20anuale&lang=ro.
14 ANSPDPC Decision no. 174 of October 18, 2018 on the list of operations mandating data protection impact assessments, available in Romanian language here: https://www.dataprotection.ro/servlet/ViewDocument?id=1556. Based on this decision, data protection impact assessments must be conducted in relation to AI systems to the extent AI systems perform large-scale processing of personal data through innovative use or implementation of new technologies or large-scale use of data generated by sensor devices that transmit data via the internet or other means.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
We are not aware of cases involving AI analysed by the Romanian data protection authority (the ANSPDCP) except for the views that the authority has taken in relation to an AI system used by the Romanian Ministry of Research, Innovation and Digitalisation (the ION bot); the authority’s views are detailed at item 15 above.
-
Have your national courts already managed cases involving artificial intelligence?
To the best of our knowledge, Romanian courts have not to date ruled on cases involving the use of AI systems as the main substantive subject matter.
However, certain cases managed by national courts in Romania briefly address particularities related to the use of AI systems such as deep-fake tools, facial recognition software, and automated decision-making systems, generally from a procedural perspective, focusing on issues such as the admissibility and accuracy of evidence and other procedural particularities. For instance, courts have assessed the implications of deep-fake technology in the context of verifying the authenticity of video evidence, or the accuracy and potential biases of facial recognition for identifying individuals. We are not aware of assessments by Romanian courts of substantive matters related to the use of AI systems, such as liability, compliance with regulatory requirements, or IP and third-party rights infringements.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
No authority has been formally designated to date to supervise the use and development of artificial intelligence (AI) in Romania. We expect that either the Romanian data protection authority or the Authority for the Digitalisation of Romania (which is primarily designated to coordinate the issuance of Romania’s National AI Strategy) will be appointed; at the same time, the Ministry of Research, Innovation and Digitalisation will be instrumental in coordinating with various authorities to provide further guidelines on AI, taking into account the practicalities of various industry sectors.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Eurostat reported that in 2020 around 6% of Romanian companies with at least 10 employees (excluding those in the financial sector) used AI-driven applications, a percentage that dropped to 1.6% in 2023. At the same time, in a poll conducted in 2023, 32% of Romanians with internet access reported using AI tools.
Nonetheless, we believe that the use of AI is widespread in Romania, although AI awareness may be limited. Companies and public authorities plan investments in AI; in practice, however, companies often fail to implement acceptable use policies and to properly train their personnel and contractors on AI usage, which raises significant operational and legal risks (such as third-party rights infringements, data privacy implications and reputational damage).
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
There is limited use of AI in the legal sector, mainly due to language barriers: few internationally recognised AI legal tools are available in Romanian and provide trustworthy results, while Romanian AI legal tools are scarce.
Legal professionals in Romania use AI tools mainly for translation, legal research, document summarisation, contract and document generation and management, language understanding and generation, and e-discovery and investigation.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
We believe that the main challenges for legal professionals related to artificial intelligence are the following:
- The lack of trust in the accuracy of AI-based legal tools for research purposes;
- The lack of awareness in relation to the benefits, limits, ethical implications, and biases of the use of AI tools, both at client and legal professionals’ level;
- Learning how to use AI tools and understanding how AI can assist legal professionals in their work;
- The increasingly technical nature of legal work involving AI, and the growing privacy and cybersecurity burden on legal professionals;
- Ethics and legal liability for work performed relying on legal tech and AI tools, including the impact on employment and the expected responsibility to verify the accuracy of, and detect biases in, AI tools;
- Putting in place workable governance frameworks for the use of AI tools.
At the same time, the main opportunities for legal professionals related to artificial intelligence include the following:
- Better, more effective legal services relying on performant AI-driven legal tech;
- Better legal research facilities (particularly helpful with the increase of regulation and case law);
- Streamlining services such as legal due diligence, contract and document drafting and review;
- New business opportunities in legal services for compliance with the AI Act and for projects as contemplated by Romania’s National Strategy for Artificial Intelligence;
- In-depth collaboration with the Romanian tech community to develop and test AI-driven technology, offering opportunities for local and international deals.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
The main AI-related topics in Romania for 2024/2025 are the enactment of the AI Act on 12 July 2024, which will be followed by local implementing legislation and initiatives by Romanian public authorities; the actual implementation and practical observance of the AI Act’s requirements; the putting in place of AI governance frameworks; and the follow-up on Romania’s recent National Strategy for Artificial Intelligence.
Romania: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Romania.