The life sciences industry is no stranger to the forefront of technological development and innovation, and artificial intelligence (AI) is the latest example. With the potential to transform research and drug development, clinical trials, manufacturing, supply chains, and regulatory processes, AI is poised to play a critical role for life sciences companies.

2023 catalyzed significant discussions on how to regulate AI at the state, federal, and international levels. While there is no consensus on how best to regulate AI, common concerns have emerged, with privacy, data ethics, and data governance consistently permeating the AI conversation. Given the rapid development and adoption of AI technologies across all industries, 2024 is expected to bring additional guidance and regulatory scrutiny. Below, we highlight three regulatory developments from 2023 and how they may shape the advancement of AI in the year ahead.

European Union AI Act

In December 2023, European Union ("EU") policymakers provisionally agreed to the details of the EU AI Act ("AI Act"). This flagship legal framework marks the first comprehensive piece of AI legislation in any jurisdiction. The full text of the proposed AI Act, which was first introduced conceptually in 2021, has not yet been released. As currently drafted, the Act would take effect two years after it is enacted as a "regulation," at which point it would be directly applicable in all EU member states. This makes it likely that the AI Act will come into effect sometime in 2026 or later. That said, given the sweeping nature of the legislation, developers and deployers of in-scope AI systems should begin to evaluate its implications now.

Although the AI Act is an EU law, companies based in the US may not be able to ignore it given its broad scope. The AI Act will apply to in-scope AI systems (as defined in the AI Act) that are used in, or produce an effect in, the EU, regardless of where a company is established. Many will recognize this extraterritorial effect from the GDPR. The AI Act establishes a regulatory scheme discrete and separate from other privacy laws, such as the GDPR, and may create competing compliance obligations. Ultimately, the interplay of the AI Act with other privacy legislation remains unresolved.

At a high level, the AI Act adopts a risk-based methodology to establish obligations for AI technologies. On one end of the spectrum, AI systems that pose unacceptable levels of risk would be banned under the current proposal; according to the proposal, this would include systems that perform real-time biometric identification. Other technologies that pose a high risk, such as those affecting safety and fundamental rights, will need to be assessed before commercialization as well as throughout the product's lifecycle. Requirements for high-risk tools may include mandatory fundamental rights impact assessments. Consumers will also have a right to receive explanations about AI-based decisions that affect their rights. Systems presenting only a limited risk will still be subject to requirements, including certain transparency obligations.

For life sciences companies, the "high-risk" category of AI technologies may encompass many medical devices, including software as a medical device. High-risk AI systems include AI systems intended to be used as a safety component of a product, as well as products covered by the EU legislation listed in Annex II (which includes certain medical devices). Device manufacturers will want to keep an eye on the discussions around the AI Act, as it would apply in addition to existing EU medical device regulations, and its data governance requirements are likely to add new elements to current compliance programs.

White House Executive Order on AI

In October 2023, the White House released an Executive Order ("EO") on the development and use of safe, secure, and trustworthy AI. The EO sets forth requirements for certain federal agencies as well as policies for AI development. The EO's overall directives are likely to support and bolster the Food and Drug Administration's efforts to regulate AI in the life sciences industry. Among its top priorities is ensuring that privacy requirements and safeguards are implemented as AI technologies are developed and deployed. The EO calls for the National Institute of Standards and Technology ("NIST") to create industry guidelines and best practices for deploying AI systems by the end of July 2024, including guidelines for AI developers as well as guidelines for assessing the safety and security of AI systems.

The EO also calls for bipartisan, comprehensive privacy legislation and underscores federal support for AI systems built with privacy-preserving technologies. Agencies are directed to develop stronger cryptographic protections and to evaluate how they collect and use commercially available information, especially personally identifiable information. Agencies must also develop guidelines for evaluating the efficacy of privacy-preserving techniques in AI technologies. The EO is likely to spur new privacy-related regulations and guidance from governmental agencies. Of particular relevance to life sciences companies, the EO requires the Department of Health & Human Services ("HHS") to develop a strategy for regulating the use of AI in all phases of drug development and for addressing the implications of AI for device and drug safety more generally. Life sciences companies utilizing or deploying AI may therefore want to carefully evaluate the privacy implications and protections of their AI products.

Federal Trade Commission Guidance on AI Use and Claims

Among the Federal Trade Commission's ("FTC") responsibilities is enforcing consumer protection laws. Section 5 of the FTC Act broadly prohibits "unfair or deceptive acts or practices" in or affecting commerce. When it comes to its Section 5 authority and AI, the FTC has signaled that it is paying attention to the claims companies make about the use of AI in their products and services.

In February 2023, the FTC issued a blog post with guidance for companies making claims about their AI products. The guidance emphasizes that products marketed with AI claims must work as advertised and that such claims should not be false or unsubstantiated, including by exaggerating what an AI product can do or by promising that an AI product can outperform a non-AI product. From a privacy perspective, companies will want to closely consider statements about the interplay between the collection and use of personal information and the development and/or deployment of AI. For example, life sciences companies developing AI tools or technologies based on information collected from study subjects should be mindful of the FTC's expectations around transparency. Further, any company using AI tools as part of drug and device commercialization should consider the claims it makes in connection with any AI components of its products. Life sciences organizations should also take care that predictive analyses generated through the use of AI do not unfairly disadvantage certain persons or populations.

FTC guidance, coupled with statements in recent enforcement actions, makes clear that the FTC has established itself as a key regulator of AI technologies. The agency is likely to provide additional guidance as AI products continue to evolve and proliferate in the coming year.

This authority has been bolstered by the FTC's recent approval of a resolution authorizing the use of compulsory process to issue civil investigative demands ("CIDs") in nonpublic investigations involving products and services that use AI, claim to be produced using AI, or claim to detect the use of AI. The approval of this omnibus resolution reflects the current drive to scrutinize AI practices and the FTC's prominent role in monitoring AI development and policing AI-related claims.

2024 Outlook

As we look ahead to 2024, AI will continue to be top of mind for nearly every industry, and policymakers and regulators are trying to keep pace with the technology's rapid adoption and deployment. In the absence of clear guidance and laws, life sciences companies may want to consider integrating an AI governance program into existing privacy programs and compliance efforts. By accounting for key privacy principles, such as notice, choice, and individual rights, in the use and deployment of AI, companies may find themselves in a more agile position to shift with the evolving regulatory climate.