Artificial intelligence ("AI") is a global subject of intense focus by governments, research institutions, investors, and corporations, ranging from start-ups to well-established industry leaders. As technology and regulatory frameworks evolve at a rapid pace, complex and novel legal issues continue to arise in transactional, litigation, and regulatory compliance contexts.
As an update to our December 2022 publication of the same title, this White Paper highlights key regulatory developments and questions that merit consideration by private-sector leaders and in-house counsel, in particular regarding the risks AI presents and how those risks can be managed.
INTRODUCTION
The use of AI and interest in its diverse applications are steadily increasing across a wide range of industries, including advertising, banking, telecommunications, manufacturing, retail, energy, transportation, health care, life sciences, waste management, defense, and agriculture. Businesses are turning to AI systems and the related technology of machine learning to increase revenue, improve the quality and speed of production or services, or drive down operating costs by automating and optimizing processes previously performed by human labor. Government and industry leaders now routinely speak of the need to adopt AI, maintain a "strategic edge" in AI innovation capabilities, and ensure that AI is used in appropriate and humane ways. Some major jurisdictions are increasingly focusing on AI as a national security concern.
Despite these developments, many major jurisdictions, including in the United States and the United Kingdom, have not yet developed a single common body of "AI law"—or even an agreed-upon definition of what AI is or how it should be used or regulated. With applications as diverse as chatbots, facial recognition, digital assistants, intelligent robotics, autonomous vehicles, medical image analysis, and precision planting, AI resists easy definition and implicates areas of law that developed before AI became prevalent. Because it requires technical expertise to design and operate, AI can seem mysterious and beyond the grasp of ordinary people. Indeed, most lawyers or business leaders will never personally train or deploy an AI algorithm—although they are increasingly called on to negotiate AI-related issues, resolve AI-related disputes, or become well-versed in the risks and challenges that AI presents to their organizations.
This White Paper examines the core legal concepts that governments in several jurisdictions—the European Union, the People's Republic of China ("PRC" or "China"), the United Kingdom, Japan, Australia, and the United States—are developing in their efforts to regulate AI and encourage its responsible development and use. Although the AI legal issues facing companies will often be specific to particular industries, products, transactions, and jurisdictions, this White Paper also includes a checklist of key considerations that in-house counsel may wish to address when advising on the risks of AI, as well as its development, use, deployment, or licensing, whether within a company or in the transactional context. Ultimately, governments are implementing divergent and sometimes conflicting requirements. A strategic perspective and an ability to explain technical products to regulators in clear, nontechnical terms will help companies navigate the current legal terrain.
WHAT IS AI?
AI comprises complex mathematical processes that form the basis of algorithms and software techniques for knowledge representation, logical processes, and deduction. A core technology behind AI is machine learning, in which models are trained on large amounts of data to identify correlations and patterns, enabling them, for example, to process new inputs and make autonomous decisions.
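To make the training step concrete, the sketch below shows, at a very small scale, how a model is fitted to labeled examples and then applied to unseen data. It is a minimal illustration only, assuming the open-source scikit-learn library and synthetic data; it does not reflect any particular production system.

```python
# A minimal, illustrative sketch of supervised machine learning:
# a model is fitted to example data so it can generalize to new inputs.
# Assumes scikit-learn; the dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a synthetic dataset: 1,000 labeled examples with 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a portion of the data to test how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" = estimating model parameters that capture patterns in the data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The trained model can now make autonomous predictions on unseen inputs.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the example is structural: the model's behavior is derived from the data it was trained on, which is why, as discussed below, the quality, provenance, and governance of training data are central regulatory concerns.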
New forms of AI are emerging and evolving on a near-constant basis. For example, generative AI ("GenAI") focuses on creating new content by learning patterns from existing data, while predictive AI analyzes data to forecast outcomes and trends. Agentic AI, in contrast, is focused on decision-making and completing routine tasks with limited human intervention. When trained and applied correctly, AI can unlock tremendous gains in productivity—enabling results or insights that would otherwise require prohibitively lengthy periods of time to achieve by means of human reason alone, or by humans using traditional computing techniques.
For example, predictive AI can replace or augment "rote" tasks by analyzing historical data, identifying patterns, and automating repetitive processes to enable faster and more accurate decision-making than manual efforts. In other cases, GenAI can generate text (including computer code), sound, images, video, or other content in response to a user's prompt. Agentic AI's ability to take proactive steps in pursuit of complex objectives makes it a natural fit for decision-oriented applications, like virtual assistants, consistent with recent Organisation for Economic Co-operation and Development and ISO/IEC 42001:2023 definitions that emphasize autonomy, goal-oriented behavior, and accountability. An AI tool's outputs, analysis, and recommendations may offer efficiencies to a human actor, who is able to save time and home in faster on key issues.
WHY REGULATE AI?
In many industries, integrating AI-based technology is considered critical to securing long-term competitiveness. Most industrialized countries have already entered the race for global market leadership in AI technologies through various means, such as public funding, private-sector investment, and military defense applications, which can drive further innovation. In addition, some governments seek to support AI's growth through legislative frameworks that allow the technology to develop and realize its potential.
However, as has been widely reported, AI systems can present significant risks. For example, predictive AI can contribute to the creation of "echo chambers" that display content based only on a user's previous online behavior to "predict" what that user desires or believes, thereby reinforcing the user's views and interests or exploiting the user's vulnerabilities. A GenAI tool might "hallucinate" inaccurate or incomplete information in response to a user prompt, or it may lack appropriate guardrails to protect the confidentiality of information entered into it. Depending on its application, an AI tool could pose a safety or security risk.
Governments seeking to regulate AI aim to build citizen trust in the technology while limiting potentially harmful applications. Yet governments (and different agencies within the same government) often differ on what constitutes an appropriate manner of training and using AI. What one authority sees as a feature, another may see as a bug. Further, regulators and the parties they regulate may disagree on the relative weight to place on important considerations such as privacy, transparency, liberty, and security.
As governments apply different perspectives to this technically complex (and often inherently multijurisdictional) area, regulated parties face a complex and sometimes contradictory body of regulatory considerations that is unsettled and changing rapidly. Training, deploying, marketing, using, and licensing AI, particularly when these activities occur across multiple jurisdictions, increasingly require a multidisciplinary and multijurisdictional legal perspective.
HOW IS AI REGULATED?
AI's rapid expansion has led to increased legislative and regulatory initiatives worldwide. These global legal initiatives generally aim at addressing three main categories of issues:
Data Ecosystems. First, legislation and regulations seek to create vibrant and secure data ecosystems to foster AI development and deployment. Data is required to train and build the algorithmic models embedded in AI, as well as to apply the AI systems for their intended use.
- In the European Union, AI's demand for data is regulated in part through the well-known EU General Data Protection Regulation ("GDPR").1 Additionally, the EU Data Act, which facilitates data access and sharing, entered into force in January 2024. The United Kingdom similarly implemented data protection measures through the UK General Data Protection Regulation ("UK GDPR") and the Data Protection Act 2018.
- In comparison, the United States has taken a more decentralized approach to the development and regulation of AI-based technologies and the data that underpins them. Federal regulatory frameworks—often solely in the form of nonbinding guidance—have been issued on an agency-by-agency and subject-by-subject basis, and authorities have sometimes elucidated their standards only in the course of congressional hearings or agency investigations rather than through clear and prescriptive published rules.
- The People's Republic of China has implemented data security and protection laws to prevent unauthorized data exports. Meanwhile, new administrative measures promote and regulate cross-border data flows by raising data volume thresholds and providing conditional exemptions from prerequisite procedures (e.g., security assessment, standard contractual clauses, or personal information protection certification). Free trade zones can issue and implement their own "negative lists," allowing data not on those lists to be freely exported without these procedures, resulting in freer AI data flows. While the central government promulgates generally applicable laws and regulations, specialized government agencies have issued regulations specific to their respective fields, and local governments are exploring more efficient but secure ways to share or trade data in their areas, such as setting up data exchange centers.
Market Access. Second, regulators in multiple jurisdictions have proposed or enacted restrictions on certain AI systems or uses believed to pose safety and human-rights concerns. Targets for such restrictions include AI-powered autonomous machines capable of taking lethal action without a meaningful opportunity for human intervention, and AI social or financial creditworthiness scoring systems that pose unacceptable risks of racial or socioeconomic discrimination.
- In the European Union, the sale or use of AI applications is subject to uniform EU-wide conditions (e.g., standardization or market authorization procedures). For instance, the EU AI Act prohibits market access for AI systems posing unacceptable risks, such as AI systems intended for the "real-time" remote biometric identification of natural persons in publicly accessible spaces for the purposes of law enforcement, subject to applicable exemptions.
- In the United Kingdom, the government has established the AI Safety Institute ("AISI"), a research organization that assesses the safety of advanced AI systems and advises policymakers accordingly. The AISI will be pivotal in advising the government on the technical aspects of implementing AI safety measures in future legislation.
- Members of Congress in the United States have advanced legislation that tackles certain aspects of AI technology, though in a more piecemeal, issue-focused fashion. For instance, recently passed legislation aims to combat the effects on U.S. cybersecurity and election security of certain applications of generative adversarial networks capable of producing convincing synthetic likenesses of individuals (or "deepfakes"). Australia has likewise passed legislation making it illegal to share sexually explicit deepfakes without consent. Japan has not yet issued mandatory laws or regulations restricting the application of AI in any specific area over concerns such as discrimination or privacy.
- The PRC has reacted swiftly to AI technologies by issuing a series of new regulations that establish concrete requirements for the development and use of AI in China. National standards have also been promulgated as supporting documents for the implementation of these regulations. The PRC also regulates various aspects important to the realization and development of AI, including ethics, data security, personal information and privacy protection, automation, and intellectual property and trade secret protection.
Liability. Third, governments are just beginning to update traditional liability frameworks, which are not always deemed suitable to deal adequately with damages allegedly "caused by" AI systems, given the variety of actors involved in developing such systems and the systems' interconnectivity and complexity. Thus, new liability frameworks are under consideration, such as establishing strict liability for producers of AI systems to facilitate consumer damage claims. The first comprehensive proposal came from the European Union's new Product Liability Directive,2 which may apply to certain AI systems.
Each of these categories is discussed in the following sections.
DEVELOPING A DATA ECOSYSTEM
Often depicted as the fuel of AI, data is essential to developing and deploying AI systems. AI systems are built with algorithms, which in turn require configuration and training with datasets. A thriving data ecosystem that meets AI's needs depends on so-called Big Data, i.e., data that fulfills the "triple-V" criteria:
- Volume: abundant data that increases the accuracy of the analysis;
- Variety: data that is diverse in nature and from diverse sources, which the AI system can structure and correlate most efficiently; and
- Velocity: data that is up-to-date and transmitted in real time (e.g., from sensors).
One could also add a fourth "V" of Veracity (i.e., data accuracy). All of these characteristics lead to a fifth "V" of Value: data that fulfills the above criteria presents the most value for AI systems.
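As a rough illustration of how these criteria might be operationalized, the sketch below computes simple proxies for volume, variety, velocity, and veracity on a tabular dataset. It is a hypothetical example only: the pandas library, the column names ("source," "timestamp"), and the one-hour freshness window are all assumptions made for demonstration, not established benchmarks.

```python
# Illustrative proxies for the "V" criteria on a tabular dataset.
# Assumes pandas; the column names ("source", "timestamp") and the
# one-hour freshness window are hypothetical, chosen for demonstration.
import pandas as pd

def assess_big_data_criteria(df: pd.DataFrame) -> dict:
    now = pd.Timestamp.now(tz="UTC")
    return {
        # Volume: more observations generally support more accurate analysis.
        "volume_rows": len(df),
        # Variety: number of distinct data sources feeding the dataset.
        "variety_sources": df["source"].nunique(),
        # Velocity: share of records received within the last hour.
        "velocity_fresh_share": (
            (now - df["timestamp"]) < pd.Timedelta(hours=1)
        ).mean(),
        # Veracity: share of cells that are populated (non-null).
        "veracity_complete_share": df.notna().to_numpy().mean(),
    }

# Example with a tiny synthetic dataset (a missing reading lowers veracity).
df = pd.DataFrame({
    "source": ["sensor_a", "sensor_b", "sensor_a"],
    "timestamp": pd.to_datetime(
        ["2024-01-01T00:00:00Z", "2024-01-01T00:30:00Z", "2024-01-01T00:59:00Z"]
    ),
    "reading": [1.0, None, 3.0],
})
print(assess_big_data_criteria(df))
```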
Given the central role of data in AI systems, the regulation of data use and access is critical. Availability of, and access to, extensive, quality-assured datasets are key to the configuration, training, and application of AI systems. However, regulation may impede or advance such use and access. Datasets are not always openly available, and their use can be restricted, for example, by intellectual property or privacy rights. Data ownership is also important and may be affected by regulation seeking to lower barriers to entry and switching. Furthermore, data regulation can also address the veracity element: datasets that are insufficiently screened may fail to be representative of a model's intended domain, resulting in biased algorithms that pose ethical and potentially legal concerns.
EUROPEAN UNION
Personal Data
The European Union has increasingly regulated the use of data, i.e., data processing. Initially, personal data was the focus of such regulation, notably starting in 2016 with the GDPR. By taking a human rights-centric approach to technology and giving individuals better control over how their personal data is processed (i.e., for a legitimate purpose in a lawful, fair, and transparent way), the GDPR aims to establish a framework for digital trust, while providing for the free movement of personal data within the European Union. It also regulates how international data flows outside the European Union can take place.
However, tension exists between bedrock GDPR principles (such as purpose limitation and data minimization) and the full deployment of the power of AI and Big Data.3 For instance, AI depends on vast quantities of data processed for purposes often not fully determined at the time of collection, in arguable tension with the GDPR's purpose limitation requirement. The use of data for training or using AI also faces potential constraints under the GDPR's requirement to have a legal basis (such as individual consent) for personal data processing. For this reason, for instance, facial recognition based on online data is restricted by data protection authorities in several EU Member States.
European data protection authorities have issued an opinion on certain data protection aspects related to the processing of personal data in the context of AI models,4 following a stakeholder event on AI models organized by the European Data Protection Board ("EDPB") in November 2024.5 The opinion emphasizes that AI models trained with personal data cannot always be considered anonymous and need to be assessed on a case-by-case basis. It also outlines a three-step test for using legitimate interest as a legal basis for processing personal data during AI model development and deployment: identifying the legitimate interest; assessing the necessity of the personal data processing; and conducting a balancing test to ensure data subject rights under the GDPR are respected. The EDPB Guidelines 02/2025 (adopted June 20, 2025) further clarify that legitimate interest is unlikely to apply to the large-scale scraping of publicly accessible personal data for AI training.6
As an example of the Brussels effect, the GDPR became a model for many other laws around the world, including in Chile, Brazil, Japan, South Korea, and Argentina.
Non-Personal Data
For non-personal data, the European Union adopted a regulation on the Free Flow of Non-Personal Data7 in 2018 to ensure the free movement of such data and to prohibit Member States from adopting restrictive data localization laws of the kind seen in other jurisdictions. Additionally, the European Union's Open Data Directive8 sets minimum rules allowing government-to-business ("G2B") data sharing through the publishing of data held by public authorities in dynamic and machine-readable formats and through standardized application programming interfaces ("APIs").