Israel: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Israel.
-
What are your country’s legal definitions of “artificial intelligence”?
There is no single legal definition of “artificial intelligence” in Israel. In December 2023, Israel’s Ministry of Innovation, Science and Technology, in collaboration with the Ministry of Justice, published its first-ever policy on AI regulation and ethics (Israel’s Policy on Artificial Intelligence – Regulations and Ethics, the “AI Policy”). According to the AI Policy, “The field of artificial intelligence is a general name for the development of information technology, communication, and data science, enabling decision-making, making predictions, or performing actions by a computer at a high level of independence, in a way that simulates or is able to replace human intelligence”.
The AI Policy also notes that, among the many attempts to formulate a suitable definition, there is a recent trend toward adopting the definition proposed by the OECD:
“AI System: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”.
It has been clarified, however, that this definition is currently being reassessed in light of current and expected developments in artificial intelligence, and it may be updated in the near future.
-
Has your country developed a national strategy for artificial intelligence?
Israel has been termed “The Start-Up Nation”. Israel’s national AI program capitalizes on the country’s entrepreneurial spirit, multidisciplinary collaboration, and innovative culture, aiming to leverage AI for societal and economic gains. With respect to the private sector, the AI Policy is premised on the concept of “Responsible Innovation”, which seeks to support innovation while fostering accountability and ethically aligned design and use of AI.
The AI Policy’s key principles include:
- Sector-specific regulation – empowering sectoral regulators to address AI risks and benefits within their domains.
- International interoperability of frameworks – fostering consistency with the existing approaches of leading countries and international organizations.
- A risk-based approach – AI regulation should be contextualized, weighing potential risks and benefits within specific use cases, consistent with the OECD trustworthy AI principles.
- Incremental development and regulatory experimentation – using “soft” regulatory tools intended to allow for the incremental development of the regulatory framework.
- Multistakeholder consultation – fostering cooperation among the public and private sectors, academia, and civil society organizations.
Public sector use of AI shall be promoted through dedicated funding, technological assistance to the relevant government units, and guidance in the fields of trustworthy AI and risk management.
See further: https://www.gov.il/BlobFolder/news/most-news20231218/he/Israel%20National%20AI%20Program%202024.pdf
For further details regarding the recommendations of the AI regulation and ethics policy, see:
https://www.gov.il/BlobFolder/policy/ai_2023/en/Israels%20AI%20Policy%202023.pdf
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
No formal rules on AI have been implemented to date. It should be noted, however, that on December 18, 2022, the Ministry of Justice published an opinion addressing the use of copyrighted materials for machine learning (the “Opinion”). The Opinion addresses the question of whether machine learning (ML) enterprises can make unauthorized use of copyrighted materials to train artificial intelligence (AI) systems.
This relatively pioneering Opinion concludes that, apart from certain circumstances, the use of copyrighted materials for ML is permitted under existing copyright doctrines. First and foremost, ML will typically be covered by the fair use doctrine. Second, some ML projects may fall under the doctrine that permits incidental uses of copyrighted materials. Third, in the (admittedly nonstandard) case where the copyrighted materials are erased at the end of the ML process, the enterprise may enjoy the protection provided under the doctrine of transient use. Notably, the conclusion of the Opinion – that training ML systems is generally permitted under copyright law – is consistent with the approach of other legal systems around the globe. The Opinion specifically excludes certain ML uses from its scope.
See: https://www.gov.il/BlobFolder/legalinfo/machine-learning/he/18-12-2022.pdf
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
As of today, Israeli law does not specifically regulate defective artificial intelligence systems, nor does it include a clear definition of this term. In the absence of specific regulation, the rules governing such systems are based on the general legal framework, which is applied either directly or through interpretation. This interpretation is expected to take into consideration the objectives of relevant laws, the digital risks involved, and how these risks should be addressed from a legal standpoint.
For example, Israeli tort law should apply to certain uses of defective artificial intelligence systems, specifically in cases where the malfunction results from conduct constituting a civil wrong (e.g., where negligence can be shown). In this regard, tort law generally provides for monetary compensation to anyone who suffers damage as a result of a tort (such as negligence or breach of statutory duty) relating to the use of a defective artificial intelligence system.
Another example is where defective artificial intelligence systems take actions that are deemed discriminatory under general anti-discrimination legislation, such as where an artificial intelligence system makes a discriminatory decision, or where advertising is targeted on the basis of biased data collected by artificial intelligence. In such cases, the use of artificial intelligence may constitute a civil wrong.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
As with the rules that apply to defective artificial intelligence systems, Israeli law does not establish specific rules regarding criminal and civil liability for damages caused by artificial intelligence systems. Instead, as noted above, such cases are currently addressed under existing general legal frameworks, under which liability may arise.
For example, general contract law may establish civil liability where contractual damages result from one of the parties’ use of artificial intelligence systems.
Tort law may also establish civil liability where artificial intelligence systems have caused damage, provided there is sufficient evidence that a tort was committed and that causation can be established.
Other legal frameworks, such as consumer protection laws, may impose civil or criminal liability for the misuse of artificial intelligence systems, for instance where such systems are used to target minors with advertisements that mislead or exploit them.
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Currently, Israeli law does not provide a predetermined formula for apportioning liability.
In the absence of specific regulations, liability among the developer, the deployer, the user and the victim is allocated on a case-by-case basis, in accordance with the general rules of the relevant Israeli law. For example, where tort law is the basis of the claim, liability will be allocated in accordance with the doctrines for allocating liability developed under that law. In this regard, if two or more persons are liable for a tort, they are jointly liable for the action (and it is possible to sue them jointly and severally). Although not directly relevant to the injured party, the internal allocation of liability among tortfeasors is ultimately decided by the court. In reaching this decision, the court primarily evaluates each tortfeasor’s moral culpability and the causal link between their conduct and the resulting harm. For example, if both a developer and a deployer are deemed to have been negligent and to have caused harm to an end user, liability would be allocated based on their respective degrees of responsibility, taking into account the aforementioned factors.
It is noteworthy that while specific regulations on this issue have not been enacted, the Israeli government has stated general principles for future regulation. These principles emphasize that those who create risks should assume responsibility for managing them, rather than shifting that burden onto others. Ultimately, liability would rest with those who were responsible for the specific risk involved.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
The burden of proof that must be satisfied by a victim depends on the specific case and the relevant legislation being applied. In Israeli civil law, the burden typically falls on the plaintiff, who must prove the case by a preponderance of the evidence, showing that it is more likely than not that the claims are true. Conversely, in criminal proceedings, the burden of establishing criminal liability falls on the prosecution, which must prove the defendant’s guilt beyond a reasonable doubt for a conviction to be reached.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
To the best of our knowledge, at present, AI products, or at least technology products with AI capabilities used for familiar commercial activities, can to some extent be insured under standard insurance policies (such as technology product liability insurance) issued by the main players in the insurance market. We expect these matters to evolve rapidly, perhaps through arrangements specific to AI products.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
As in other countries, Dr. Thaler attempted to register a patent in Israel naming the artificial intelligence system DABUS as the inventor.
In March 2023, the Israeli Patent Commissioner issued a decision regarding the eligibility of an AI system to serve as an “inventor” under Israeli patent law. It was determined that under the current legal framework, AI is not eligible to be an inventor. The Commissioner found that recognizing AI as a patent inventor, or registering a patent for an invention created without any human involvement, is a question of policy best decided by the legislature. Until the legislature enacts different rules, only a human being may be named as the inventor of a patent.
Dr. Thaler has appealed the Commissioner’s decision. A decision on the appeal is expected in 2025.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
As of today, no Israeli case has dealt with the issue of images generated by artificial intelligence or the question of ownership of such works.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
Artificial intelligence (AI) offers numerous advantages in the workplace, potentially simplifying, accelerating, and streamlining various processes. With appropriate programming, AI can mitigate certain human limitations in decision-making, such as emotional biases, by relying on data and statistics.
However, employers using automated decision-making systems, either directly or through third-party contractors, remain responsible for the system’s outcomes. This includes potential liability for biases, reliance on inaccurate data, discrimination, and non-compliance with anti-discrimination, disability equality, and data privacy laws.
Consequently, determining the applicable jurisdiction is crucial, as AI regulations vary significantly between countries, states, and even cities.
To mitigate risks, employers should adopt the following principles:
- Transparency and Information: Explicitly inform employees and job candidates about the use of AI in decision-making processes.
- Flexibility: Allow employees to opt out of AI-based decisions or request system adjustments to accommodate individual circumstances.
- Human Oversight: Integrate human judgment into decision-making, especially for high-stakes decisions like terminations, or establish human appeal processes.
- Explainable AI: Utilize software that can transparently explain decision-making processes, providing clear and detailed justifications.
- Supplier Accountability: Require AI suppliers and contractors to demonstrate that their systems are trained on unbiased, thoroughly tested data. Include provisions for regular independent audits to verify fairness and compliance.
- Contractual Protection: Obtain indemnification from AI providers and contractors to safeguard against potential liabilities.
By adhering to these principles, employers can harness the benefits of AI while minimizing legal and ethical risks.
-
What privacy issues arise from the use of artificial intelligence?
In principle, training and using AI on personal information challenges most, if not all, of the principles of data protection law. Feeding and processing personal data with AI tools therefore requires thorough examination and careful attention in order to ensure compliance with each principle.
While some of the challenges AI poses to data protection law also arise in other areas of law, the unique principles and rules of each area necessitate a comprehensive understanding of AI’s impact on data privacy specifically. In Israel, the current approach is that each area must be independently examined and addressed.
For instance, when training AI tools on information that includes personal data, it is crucial to ensure that this process is conducted in full compliance with the law. This applies to both data collection and the intended use of the data.
The use of AI tools carries the potential for information security risks and inadvertent disclosure of personal data. This underscores the need for robust risk management strategies, ensuring preparedness and resilience in the face of potential data breaches.
The way AI tools work and the logic behind their outputs are not always clear or understandable. AI tools may create biases or unwittingly lead to misuse of information. For example, certain AI tools intended to make decisions or to support decision-making can ‘infer’ facts that were never fed to them and take those facts into account, even where the law prohibits relying on them, either in general or for the specific types of decisions for which those tools are used. This potential for misuse underscores the need for vigilance and proactive measures.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
As of today, there are few binding norms from which solid conclusions can be drawn as to the position of Israeli law on data scraping. At present, data scraping appears to be more permitted than prohibited.
Two district court rulings deal with data scraping of ads published on bulletin boards accessible to Internet users: 1074-05 Ma’ariv v. All You Need (J. Michal Agmon Gonen) and 55466-07-21 Shmueli v. Margalit (J. Limor Bibi). The High Court approved the former but did not specifically address data-scraping questions.
From those rulings, it appears that data scraping can be performed even when personal information is involved, especially if the personal information is not sensitive and the re-use of the information consists of its re-publication for the same purposes (competing bulletin boards) or for similar purposes (online information services).
In Shmueli, the court determined that the mere use of personal information published to the general public does not violate privacy (either because it falls outside the scope of protected personal information or because the publication itself constitutes the data subject’s consent to its re-use). In that case, the information was used for similar purposes (advertising on a competitor’s bulletin board), so it is difficult to conclude that any other use would be permitted; still, the ruling teaches that the mere scraping of the information is not prohibited, and the main emphasis is likely to be on the uses that will be made of it.
As for intellectual property, as mentioned above, the opinion published by the Ministry of Justice (regarding the applicability of ‘fair use’ to the use of protected works for AI training) indicates that the use of protected works to train AI tools will be considered fair use, except in exceptional cases (such as the use of protected works of a single author only) that require further examination. This does not mean that the outputs of the AI tool will necessarily be non-infringing, but the mere use of protected works for training, viewed through the prism of copyright law, is not in itself a wrongdoing.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
A prohibition on data scraping in a website’s terms of use will not always be enforced, and the two cases above illustrate this well. From the All You Need decision, it appears that the court had difficulty with a website owner’s attempt to appropriate information that users published on the site, where that information consists mainly of facts, which are doubtfully protected works and, even if they are protected works, are owned by the users.
Although in these two cases the court chose not to enforce a prohibition on data scraping in the terms of use, we would be cautious about drawing broad conclusions. There will certainly be cases in which a ban on data scraping in the terms of use will be given effect, but each case must be examined on its merits, and above all, the terms of use must be tailored to what they seek to protect, in such a way that the arrangements are not deemed unduly disadvantageous or invalid. The provisions of the Standard Contracts Law may guide and assist in this matter.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
The Privacy Protection Authority has published several publications that include a reference to the use of artificial intelligence, but no general directive has yet been issued.
One publication worth mentioning relates to transparency as a condition for valid consent to data collection and processing, issued on July 18, 2022.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
It should be noted that regulators in Israel have treated artificial intelligence tools with great seriousness. A few years ago, several regulators (including the Ministry of Justice, the Ministry of Finance, the Bank of Israel, the Competition Authority, the Securities Authority, and the Capital Markets Authority) published a detailed report and initial policy document after studying the topic, including academic research on the use of AI tools in the financial sector. One of the main recommendations was for each field to proceed cautiously, encouraging responsible AI use while avoiding unnecessary ‘bumpers’ (an approach reminiscent of the prudent attitude taken in many jurisdictions in the previous century to avoid a chilling effect on the Internet).
-
Have your national courts already managed cases involving artificial intelligence?
Yes. Several decisions have been handed down by Judge Ido Droyan-Gamliel in proceedings relating to charges against a man who was detained at the airport after the airport’s system “flagged” him and drugs were seized in his possession (see Case No. 24474-01-22).
The profiling method leads to searches of a person’s body and belongings without a judicial warrant, without reasonable suspicion of drug smuggling and without specific intelligence information. This appears to be a serious violation of the fundamental rights to privacy and equality, as it is an arbitrary and discriminatory action carried out by a computerized system without human involvement. The judge clarified that the system in this respect is a “black box” whose workings no one, not even the police, knows for certain.
Further, at the end of May 2024, the Association for Civil Rights in Israel petitioned the High Court of Justice for an order requiring the police to stop making decisions about detaining and searching travelers returning from abroad for drugs in reliance on an artificial intelligence system that has been operating at the airport in recent years (Case No. 4271/24).
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
The AI Policy provides guidelines and instructions for sectoral regulators when addressing the regulation of AI in the private sector. The government’s policy on public sector applications of AI is being developed separately.
Based on the government decision, it has been suggested to establish a forum of regulators and a forum for public participation on AI policy.
Specifically, an inter-agency forum would be established, comprising regulators and experts in technology, policy and law, in order to promote coordination and coherence in sectoral AI regulation through cooperation and joint learning. In addition, a multistakeholder forum would be established, with representatives from industry, academia and civil society organizations, allowing open discussions among stakeholders to identify policy gaps and formulate potential responses.
See: https://www.gov.il/BlobFolder/policy/ai_2023/en/Israels%20AI%20Policy%202023.pdf
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
Israel currently holds a prominent position on the global AI stage and is ranked 7th globally in the 2023 Tortoise Global AI Index, which evaluates three main pillars: implementation, innovation, and investment.
As in other countries, the use of AI by Israeli businesses has increased in recent years and is becoming more widespread, but adoption differs substantially across sectors. AI adoption is more prevalent among tech companies, particularly in the software sub-sector, which exhibits the highest AI adoption rates compared with other tech areas such as life sciences, communications, agritech, and cleantech. Moreover, in Israel, renowned as the Startup Nation, AI is especially dominant among emerging companies: over a third of the startups established in the past five years, and nearly half of the startups founded in 2023, use AI technologies. Israel also hosts a significant number of R&D centers of multinational tech corporations, some of the largest of which are currently implementing AI in their products.
In contrast, the use of AI technology among more traditional Israeli companies is substantially lower but is also increasing.
Reference: Israel’s Artificial Intelligence Landscape, May 2024 https://rise-il.org/insight/israels-position-in-the-artificial-intelligence-race/
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
The artificial intelligence revolution has not bypassed law firms. However, it appears that we are only at the beginning of a process that will take shape over the next few years. This is not surprising, considering that this technology’s breakthrough began just two years ago.
Many law firms are conducting pilots with artificial intelligence (legal tech) systems currently available on the market, and some are even developing AI systems for their internal needs.
Given that the Hebrew language currently poses a challenge for AI systems, the main applications of AI in law firms (mostly at the pilot or evaluation stage) are in the areas of contracts, legal operations (including knowledge management, spend management, document review, and eDiscovery), mergers and acquisitions (M&A), and regulatory compliance.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
The 5 key challenges raised by AI for lawyers are:
Ethical and Legal Concerns: Ensuring AI operates within legal and ethical boundaries, and addressing issues of bias, transparency, and accountability.
Integration with Existing Systems: Incorporating AI into current legal frameworks.
Skill Gap and Training: Bridging knowledge gaps and providing adequate training and ensuring continuous adaptation to evolving AI technologies.
Regulatory Compliance: Navigating and staying updated with AI regulations.
Job Displacement and Role Changes: Managing the impact of AI on employment and job roles, and balancing AI adoption with preserving human expertise.
The 5 key opportunities raised by AI for lawyers are:
Enhanced Efficiency and Productivity: Automating routine tasks like document review and legal research.
Improved Accuracy and Consistency: Minimizing errors and ensuring consistent legal work.
Advanced Legal Research and Insights: Providing deeper insights and comprehensive analyses.
Enhanced Client Service and Experience: Offering personalized services through AI-powered tools and improving client engagement and satisfaction.
Strategic Decision-Making: Leveraging AI for data-driven decision-making.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
In December 2023, Israel’s Ministry of Innovation, Science, and Technology published its inaugural policy on AI regulation and ethics, recommending concrete steps to foster responsible AI innovation in the private sector (the “AI Policy”). Like many initial policies published by national regulators and global organizations, the AI Policy sets high-level guidelines, goals, and recommendations regarding the regulation of AI development and use.
Consistent with Israel’s established governmental approach and considering the AI Policy’s broad guidelines for sectoral regulators, Israel appears to have opted for “soft” sectoral regulation over formal, comprehensive AI legislation. This approach focuses on a risk-based method grounded in the existing regulatory framework and globally accepted principles. The aim is to create a dynamic structure that harmonizes regulations across industries and activities while fostering innovation.
Given this approach, comprehensive AI legislation similar to the EU AI Act is not anticipated in Israel within the next 12 months, nor are dramatic changes in AI regulation expected. Instead, specific policies, including soft regulation, are likely to be issued by various regulators (such as financial regulators and the Privacy Protection Authority), targeting particular sectors or topics. These specific policies will likely address cases where the application of existing legal frameworks (such as contract law, tort law, consumer protection law, and privacy protection law) is insufficient, undesirable, or unclear, and will aim to balance public interest issues with driving technological innovation.
Additionally, based on the recommendations set forth in the AI Policy, the establishment of an AI Policy Coordination Center, which will serve as an expert-based inter-agency body, may take place within the next 12 months.