
SIIA Applauds Release of NIST AI Risk Management Framework

The following statement can be attributed to Paul Lekas, Senior Vice President, Global Policy, Software & Information Industry Association.

“SIIA congratulates the National Institute of Standards and Technology (NIST) on the release of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), which provides voluntary guidance for designing, developing, deploying, and using AI. The RMF reflects the culmination of a collaborative, transparent, and expert-driven process over the past 18 months. It will have a significant role in guiding discussion about AI policy in the United States and internationally. We commend NIST’s efforts to provide detailed guidance for identifying and managing the risks associated with AI technology to ensure that the benefits of AI are realized and the risks are minimized.

“SIIA looks forward to continued engagement with NIST, the executive branch, and Congress to advance sound AI policy that promotes innovation and fosters safe, responsible, and reliable AI tools and technologies.”


2023, Let’s Make It Our Legacy: The Year to Pass Federal Privacy Legislation

Written by: Divya Sridhar and Grant Gendron

America wants privacy, but our policymakers remain divided about getting a deal done. The benefits of a federal privacy law are clear and can be summed up in five “S”s: we need a federal privacy law for stability, solidarity, sales, security, and staying up to speed. Privacy legislation is critical to keeping the American dream intact.

There are at least five critical reasons why Congress needs to act NOW to pass federal privacy legislation.

In 2023, passing a federal privacy law is: 

1) a critical part of the 2023 federal directive of U.S. policymakers; 

2) imperative given its impact on the global economy and cross-border data flows;

3) long overdue to keep America globally competitive and technologically relevant; 

4) essential to strengthening our national security; and

5) the ultimate exemplar of leadership, dignity, and strength for the United States. 

Read each of our posts to learn more about the facts and figures that make the case.

U.S. Policymakers Are Elevating Privacy as Part of their 2023 Directive

In at least one way, the new year has begun where 2022 ended: with a call from the highest ranks of government to advance federal privacy legislation. In a recent Wall Street Journal op-ed, President Biden urged Congress to unite in a bipartisan way to pass federal privacy legislation that would place new guardrails on business and ensure special protections for children and vulnerable communities. Leaders from Congress (House Energy & Commerce Committee Chair Cathy McMorris Rodgers and Representative Frank Pallone) and the Administration (Alan Davidson, Assistant Secretary for Communications and Information at the U.S. Department of Commerce) have echoed this sentiment.

The years-long effort to advance federal privacy legislation holds promise in the 118th Congress. Though the House of Representatives has flipped to a Republican majority, key members – including Chair Cathy McMorris Rodgers and Ranking Member Frank Pallone – have long advocated for federal privacy legislation. And key members in the Senate – including Senators Cantwell and Cruz, Chair and Ranking Member, respectively, of the Senate Commerce Committee – have expressed support for this effort.

Last year, federal privacy legislation came closer than it had in many years to being finalized. The “Three Corners” – Rep. Frank Pallone (then Chair of the House Energy and Commerce (E&C) Committee), Rep. Cathy McMorris Rodgers (then Ranking Member of House E&C), and Senator Roger Wicker (then Ranking Member of the Senate Commerce Committee) – introduced and quickly advanced federal privacy legislation in the summer of 2022, signaling that privacy legislation was a top priority for each of them. The bill sailed through the House subcommittee on consumer protection, but stopped short of making its way to the House floor for a vote prior to the midterms.

Recent developments at the FTC, including its Advance Notice of Proposed Rulemaking (“ANPR”) on Commercial Surveillance and Data Security, also signal the need for movement at the federal level. But federal legislation would be a more impactful, purposeful, and decisive approach, as FTC commissioners themselves have highlighted. (SIIA’s comment can be found here.)

Moving legislation forward is the best way to get the wheels in motion. Policymakers must finalize negotiations on what a federal privacy standard should look like for the United States.

Inaction on Privacy is Directly Harming the U.S. Economy

As of this writing, only a few weeks into the new year, at least eight states have active consumer privacy legislation. We expect much more activity in the coming months. For reference, by last October, at least 34 states had introduced or passed privacy bills (in at least one chamber) focused on commercial collection and use of personal data. Five state consumer privacy laws (California, Virginia, Colorado, Utah, and Connecticut) are on the books and are at various stages of implementation and enforcement. Keeping up with the patchwork of state laws (to say nothing of laws abroad) is now a full-time job.

The Information Technology and Innovation Foundation (ITIF) analyzed the impact of the state privacy patchwork on businesses, particularly those that serve customers across state lines. It notes that businesses engaging in interstate commerce are subject to a combination of different state privacy laws, which creates a multiplier effect and has led to expensive and redundant compliance efforts. The costs are not negligible: ITIF estimates that state privacy laws could impose between $98 billion and $112 billion in costs annually, which over a 10-year period would amount to more than $1 trillion in out-of-state costs for businesses.

These different laws can disrupt business flow (and add further costs) when the legislation and any respective guidance take a unique approach to how businesses and other entities must comply. For example, Colorado’s recent draft guidance includes an approach to the universal opt-out mechanism that differs from California’s regulatory approach. The states also have unique views on definitions, including how they categorize the entities that are expected to comply with the law; the exemptions these entities receive; and the way state lawmakers define terms such as “sensitive data,” “biometric data,” and “automated decision making/profiling.” These distinctions have significant consequences for business compliance. Knowing that a federal privacy bill is coming could influence states’ approaches to their own legislation this year. But the bottom line is that one uniform bill – covering consumer privacy practices across the nation – is better than many fragmented approaches.

In addition to draining resources from U.S. businesses generally, inaction disproportionately harms small businesses and new market entrants. ITIF estimates that small businesses could face $20–23 billion in out-of-state compliance costs annually. These businesses are the backbone of the U.S. economy: the Harvard Business Review documents that they account for 48% of all U.S. jobs and contribute 43% of U.S. GDP. We need to keep them afloat and thriving.

Enforcement plays a part in this math too. As any Chief Privacy Officer and Chief Security Officer knows, privacy violations and data breaches can be existentially costly for businesses. Expensive litigation and enforcement actions are becoming more common by the month, with recent FTC enforcement actions against ed-tech company Chegg and data broker Kochava, and California’s action against beauty retailer Sephora, making headlines. Last December, Epic Games agreed to settle privacy and dark-patterns claims related to its Fortnite game for more than $500 million, including the FTC’s largest administrative penalty to date.

Passing a uniform privacy standard that levels the playing field and includes appropriate exemptions is a necessary first step to protecting our economy from the fiscal standstills that result from stagnant cross-border data flows, unnecessary multiplier effects, and expensive litigation.

Privacy is Essential to the United States’ Competitiveness Abroad

In 2018, the European Union set a critical precedent on what privacy regulation should look like for its member states by enacting the General Data Protection Regulation (GDPR). Since then, many countries have followed in the EU’s footsteps and finalized privacy laws, leaving the United States behind. These include privacy acts in countries such as Canada, New Zealand, Brazil, Singapore, Thailand, Germany, and Switzerland, as well as proposed legislation in the UK and India.

More recently, the EU, the United States, and other countries have been working to streamline interoperability on data privacy rules through new bilateral and multilateral data privacy agreements, global cross-border mechanisms, and other forms of cooperation. Key among them is the recent draft adequacy decision for the U.S.-EU Trans-Atlantic Data Privacy Framework (TADPF). The EU itself continues charging ahead to set standards for the rest of the world, most recently with the EU AI Act and the Data Act. For its part, the Organization for Economic Co-operation and Development (OECD) has adopted the first intergovernmental agreement on safeguards for privacy, civil liberties, and human rights as they concern access to personal data for national security and law enforcement purposes.

If we look strictly at the numbers, cross-border data flows are essential to the global economy. In 2022, an estimated $2 trillion worth of data moved through cross-border flows, a figure expected to rise to an estimated $15 trillion by 2025. The United States will be at a disadvantage negotiating cross-border data flows if it lacks its own national privacy standard.

We must act now on data privacy to keep up in the global race on emerging technology, as data is the foundation underlying emerging technology. Without a privacy law in place, the United States will appear to lag behind the more than 150 countries that have already passed privacy laws. These optics do not paint the United States as a fair trade and tech economic partner, even as it leads the development of privacy-related pillars and principles in the Indo-Pacific Economic Framework for Prosperity (IPEF) and the Trade and Technology Council (TTC).

By passing a federal privacy law, the United States can take part in shaping a pro-innovation, pro-democratic privacy landscape to counter the rise of digital authoritarianism, which imposes serious obstacles to trade.

Privacy is Increasingly Essential to Uphold National Security

Comprehensive federal privacy legislation is also critical to advancing national (and international) security. Recent scandals regarding potential misuses of data by TikTok have put this squarely in the spotlight. Several states, along with the federal government, have banned the use of TikTok on government devices. Congress has also introduced bipartisan legislation to ban the app based on data security and privacy concerns. Lax privacy rules can foster greater mistrust in the online ecosystem and enable the spread of misinformation. This issue is now front and center with scrutiny focused on Twitter following moves by Elon Musk that would weaken privacy protections and trust and safety oversight (concerns echoed in 2022 whistleblower testimony).

A uniform set of rules on data privacy is fundamental to the spirit of innovation, productive research, and creative authorship, and will mitigate these national security concerns. G7 leaders, the World Trade Organization, and the Organization for Economic Cooperation and Development have all discussed furthering the principles of a free-flowing internet.

To this end, the Administration – as part of its broader goals to support the G7 and secure its place on the world stage – has undertaken at least three critical efforts in the past few months that underscore the impact of data privacy on national security: the new Open Government National Action Plan, the treatment of privacy in the 2022 National Security Strategy, and the White House cyber labeling initiative for the Internet of Things (IoT).

  • The Open Government National Action Plan aims to increase the public’s access to data and improve how data is used by the government, while improving the delivery of services and benefits. By publishing this plan and advocating for a more data-rich ecosystem, with the government as the central decision maker, the Administration is signaling the need for new opportunities, carefully embedded in protections and guardrails that uphold national security. This initiative works hand in hand with federal privacy legislation.
  • The 2022 National Security Strategy recognizes that an international technology ecosystem is critical to protecting our security and privacy, as a core component of “U.S. and allied technology leadership.” The international economic system should be “fit for contemporary realities,” including “high standards and protections for stability, privacy, and security” to reinforce the U.S.’s “global primacy.” 
  • Additionally, the White House announced a cyber labeling initiative for the Internet of Things (IoT) in October 2022. After a listening session with attendees including the Cybersecurity and Infrastructure Security Agency (CISA), the National Institute of Standards and Technology (NIST), the FTC, the Consumer Product Safety Commission, and manufacturers, standards-setting bodies, and other stakeholders, the White House identified concerns about the ease with which bad actors can exploit poorly secured devices to steal data, cause disruption, or conduct surveillance. A contemplated solution is a cybersecurity label barcode that could be scanned for real-time security information. 

These efforts would be further bolstered, and would produce more cohesive outcomes for the government and its stakeholders, if the United States were to pass a federal privacy law.

The United States Can Demonstrate Leadership Through Its Own Unique Model for Privacy and Innovation

The United States recognizes the EU GDPR’s shortcomings and must avoid repeating similar mistakes in its own law. We can learn from the GDPR experiment, but we cannot (and should not) copy it exactly. There are several compelling reasons why.

Whatever its privacy benefits, the GDPR has deterred innovation. First, the GDPR does not include exemptions for small and medium-sized enterprises (SMEs), which harms the players who are the backbone of the U.S. economy. Second, the National Bureau of Economic Research (NBER) studied the GDPR’s impact, including implications for both the supply and demand sides of the equation. After reviewing 4.1 million apps on the Google Play Store from 2016 to 2019, NBER found that the GDPR induced the exit of about a third of available apps and, since its implementation, halved the number of new market entrants in the app marketplace. On the demand side, the GDPR reduced consumer surplus and aggregate app usage by about a third. The research also suggests that the GDPR has generated significant consent fatigue, making it a less commendable model from a consumer and business usability standpoint. This is to say nothing of the immense fines being paid to European regulators for tracking or targeted advertising in violation of the GDPR.

For these reasons, other countries like the United Kingdom have made a steady shift away from the GDPR model – first to a UK GDPR, and now to a forthcoming model intended to bolster competition and innovation. All the while, the UK has retained appropriate safeguards for individual privacy and data protection. The Inaugural Meeting of the U.S.-UK Comprehensive Dialogue on Technology & Data further underscores this theme.

As it develops a federal privacy law, the United States must take the First Amendment’s free speech protections into consideration (protections that are not woven into the EU’s GDPR). SIIA has played a pivotal role in shaping the development of state consumer privacy laws – including in Colorado and California – to ensure that they include these considerations and would pass constitutional muster.

America can benefit from the lessons learned from the GDPR. They should prompt us to tailor a U.S. law that helps, rather than hinders, the economy, and to focus on carefully factoring in the individual rights granted by the U.S. Constitution.

In a nutshell:

While there are many more reasons than the ones stated here, we hope these five will prompt Congress to act NOW to move federal privacy legislation forward. 

You can see our previous blog on the topic here.


‘More Hope, Inspiration and Utility’: for Publishers, the Time (and Place) Is Here

“Could this be the year when publishers rethink their offer to address the twin challenges of news avoidance and disconnection – to offer more hope, inspiration, and utility?” the Reuters Institute asks in its 2023 media report. The best answers may be found in person at our Business Information & Media Summit (BIMS) in Orlando, Feb. 23-24, where so many key players will be.

“In a creative business like ours, nothing can replace the ability to connect, observe, and create with peers that comes from being physically together, nor the opportunity to grow professionally by learning from leaders and mentors,” Disney chief executive Bob Iger said last week in a memo to employees (reported by The Washington Post). Disney is requiring that employees start coming in four days a week after March 1, citing the company’s need for in-person collaboration.

It’s interesting that a plea for more in-person collaboration and interaction in a creative business comes from the Disney chief just as BIMS readies for Orlando. While coming into the office is a decision for each organization, there’s no question that the stimulation, ideas and – maybe above all – openness you get from speaking together in person at an event cannot be duplicated virtually or anywhere else.

The Reuters Institute for the Study of Journalism has released its Journalism, Media, and Technology Trends and Predictions 2023. Here are some of the findings along with the sessions at BIMS that address them and will surely propel the conversation forward:

An even greater emphasis on subscriptions. More publishers are investing in subscription and membership in 2023, with the majority (80%) saying this will be one of their most important revenue priorities, ahead of both display and native advertising.
Recurring Revenue: Building a Business Case and Strategic Plan – Tara Lajumoke (pictured), managing director, FT Strategies – Financial Times

Time is teching away. On average, publishers say that 3-4 revenue streams will be important this year. A third now expect to get significant revenue from tech platforms for content licensing (or innovation), significantly up from last year.
Enterprise Tech Spending Starts With a Coherent Business Strategy. Are You Focused Enough? Rhonda Wunderlin, SVP, performance marketing, Questex

Collective thinking. Expect to see a correction in the creator economy this year. The pressure of delivering to constant deadlines on your own is relentless. Collectives and micro-companies could be a new trend for 2023.
Here’s a Content-Centric Digital Native Who Wants to Remake B2B – Becca Sherman, COO and Co-founder, Workweek

Hear it from the top. Media companies are quietly integrating AI into their products as a way of delivering more personalized experiences. More than a quarter say this is now a regular part of their activities, with a further 39% saying they have been testing in this area. New applications such as ChatGPT and DALL-E 2 also illustrate new opportunities.
Outlook 2023: A CEO Panel – Tim Hartman, CEO, GovExec; Chris Ferrell, CEO, Endeavor Business Media; Elizabeth Green, CEO, Brief Media; Tim Andrews, CEO, Advertising Specialty Institute; Gemma Postlethwaite, CEO, Arizent

More audio, video and newsletters, and marketing of them. In terms of innovation, publishers say that they will be putting more resources into podcasts and digital audio (72%) as well as email newsletters (69%), in an attempt to increase loyalty. Investment in digital video formats (67%) is also up from last year. By contrast, just 4% say they’ll be investing in the metaverse. And marketing will be a huge key.

How FreightWaves Created a One-Two Punch with Marketing Services and Subscription Data – Craig Fuller, CEO, FreightWaves

A continued push for more diverse talent. Most publishers (72%) are worried about increasing news avoidance—especially around important but often depressing topics like climate change. Publishers say they plan to counter this with explainer content (94%), Q&A formats (87%), and inspirational stories (66%). For these to be successful, a continued push for new talent and diversity will need to take place.
Talent and DEI and Their Essential Role in Your Growth – Michelle Duke, chief diversity officer, National Association of Broadcasters & president, NAB Leadership Foundation, NAB; Subha Barry, president, Seramount

A bundle of joy. An alternative approach has been to try to lock subscribers in through bundling additional features or complementary brands. Publishers also say they’ll be paying much less attention to Facebook and Twitter this year and will instead put much more time and effort into TikTok, Instagram and YouTube.
Content as Product: How Editorial Leaders Are an Often Untapped Source of Great Ideas – Bibiana Campos-Seijo, media executive; Henry Howard, deputy director of media & communications, The American Legion


A Look At The Legal Intersection Of AI And Life Sciences

By Ariel Soiffer, Elijah Soko and Paul Lekas (January 20, 2023)

This article was not written by ChatGPT. Will all articles have to start with a statement like this? And will any statement like this be true?

ChatGPT uses artificial intelligence, or AI, to develop written work product. While this application of AI has grabbed the news, there are many other exciting applications of AI, including in the domain of life sciences.

In this article, we start by defining AI in the context of data, algorithms and AI systems. Next, we touch on leading regulatory efforts in the U.S. and abroad, followed by a brief overview of some key issues in compliance. After that, we assess the intersection of AI and intellectual property law. And finally, we mention some of the applications of AI in life sciences.

Artificial Intelligence

AI starts with big data, which refers to large data sets that often come from multiple sources. These data sets include a substantial number of entries, or rows, each with many attributes, or columns.

All of this data is analyzed in models that are used to explain, predict or influence behavior. Generally, models become more accurate when developed using more data, although the relationship between model accuracy and the amount of data is often nonlinear.

The Organization for Economic Cooperation and Development defines an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

AI systems are designed to operate with varying levels of autonomy. Therefore, AI systems may perform human-like tasks without significant oversight, or can learn from experience and improve performance when exposed to data sets.

Frequently, an AI algorithm produces a model from a big data set over time, and that model can be used as a standalone predictive device. Naturally, output of AI will only be as good as the input data sets.

Machine learning is a subset of AI. It is an iterative process of modifying algorithms – step-by-step instructions – to better perform complex tasks over time.

In other words, machine learning applies an algorithm to improve an original algorithm’s performance, often checking the output of an analysis in the real world and using the output to iteratively refine the analysis for future inputs. Effectively, machine learning evolves the original algorithm based on analysis of additional inputs.
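To make the iterative-refinement idea concrete, here is a minimal, illustrative Python sketch (the model, synthetic data, and batch loop are our own assumptions, not any particular vendor's system) in which a simple model is updated as new batches of observations arrive:

    # Minimal sketch of iterative model refinement. Assumes scikit-learn
    # and NumPy are installed; the data here is synthetic.
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    rng = np.random.default_rng(0)
    model = SGDRegressor()  # a simple model that supports incremental updates

    for batch in range(5):
        # Each batch stands in for new real-world observations.
        X = rng.normal(size=(100, 3))  # 100 entries (rows), 3 attributes (columns)
        y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
        model.partial_fit(X, y)  # refine the model using the new inputs
        print(f"batch {batch}: R^2 on this batch = {model.score(X, y):.3f}")

Each pass through the loop refines the model produced by earlier passes, which is the essence of the iterative process described above.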

The AI Regulatory Landscape

AI systems analyze large data sets and produce predictions and recommendations that often have real-world impact in areas as varied as hiring, fraud prevention and drug discovery. Because of these many applications, AI has attracted significant attention from policymakers and regulators, and the AI-focused legal and regulatory landscape is changing quickly.

At the state level, bills or resolutions relating to AI have been introduced in at least 17 states in 2022. However, only a few states enacted laws in 2022 — just Colorado, Illinois, Vermont and Washington — and each was focused on a narrow application of AI.

While there is currently no horizontal federal regulation of AI, many generally applicable laws and regulations apply to AI, including in many life sciences contexts. These include the Health Insurance Portability and Accountability Act, which protects personal health data; Federal Trade Commission regulations against unfair or deceptive trade practices; and the Genetic Information Nondiscrimination Act, which prevents requesting genetic information in some cases.

Federal regulatory efforts on AI are focused on sector-specific regulations, voluntary standards and enforcement.

As an example of sector-specific regulations, the U.S. Food and Drug Administration has rules regarding medical devices that incorporate AI software to ensure safety of those medical devices.

As an example of voluntary standards, the National Institute of Standards and Technology is finalizing a framework to better manage risks to individuals, organizations and society associated with AI. The NIST risk management framework represents the U.S. government’s leading effort to provide guidance for the use of AI across the private sector.

The FTC has indicated an interest in pursuing enforcement action based on algorithmic bias and other AI-related concerns, including models that reflect existing racial bias in health care delivery. Relatedly, the White House Office of Science and Technology Policy has created a blueprint for an AI Bill of Rights, citing health as a key area of concern for AI systems oversight.

Outside the U.S., the AI regulatory landscape is also developing rapidly.

For example, the European Union is finalizing the Artificial Intelligence Act, which would regulate AI horizontally — across all sectors — and is likely to have a significant global impact, much like what occurred with privacy laws.

The EU approach focuses on high-risk applications of AI, which may include applications in life sciences and related fields. Further, the U.S. and EU, through the U.S.-EU Trade and Technology Council, have developed a road map that aims to guide approaches to AI risk management and trustworthiness based on a shared dedication to democratic values and human rights.

Key Issues in AI Compliance

AI raises a number of key compliance issues, including transparency and accountability (human in the loop); fairness and bias; explainability and interpretability; safety, security, and resiliency; reliability, accuracy, and validity; and privacy.

We will briefly discuss the first three key issues in this article. Human in the loop refers to a human playing a role after AI makes a recommendation but before that determination is carried out in the real world.

In life sciences, it is critical to include humans in the process regardless of the regulatory requirements. For example, humans review AI drug discovery output and test that output in a wet laboratory to evaluate the AI output and improve AI’s predictions.
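To illustrate the concept, a human-in-the-loop check can be as simple as a gate between a model's recommendation and any real-world action. The Python sketch below is purely hypothetical; the names and the review step are illustrative assumptions, not any regulator's or vendor's design:

    # Hypothetical sketch of a human-in-the-loop gate: an AI recommendation
    # is acted on only after a human reviewer approves it.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        compound_id: str           # illustrative identifier
        predicted_activity: float  # model's predicted score

    def human_approves(rec: Recommendation) -> bool:
        # Stand-in for a real review step, e.g., a scientist inspecting
        # the prediction before committing wet-lab resources.
        answer = input(f"Advance {rec.compound_id} (score {rec.predicted_activity:.2f}) to the wet lab? [y/n] ")
        return answer.strip().lower() == "y"

    def act_on(rec: Recommendation) -> None:
        print(f"Scheduling wet-lab assay for {rec.compound_id}")

    rec = Recommendation("CMPD-001", 0.87)
    if human_approves(rec):  # the human decides before anything happens in the real world
        act_on(rec)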

Bias in AI means unwanted, unintended or unfair assumptions or prejudices built into AI systems, often deriving from algorithms or data. Developers of AI systems should understand and evaluate for bias because bias limits AI’s accuracy and efficacy and creates compliance and reputational challenges. Since data may not be neutral, bias may result from data collection practices.

For example, Winterlight Labs, the developer of an Alzheimer’s detection model that used speech recordings, later discovered that its technology was accurate only for English speakers of a specific Canadian dialect as a result of the training data it used. Bias in the data may result in bias in the AI.

Explainability in AI means the ability to evaluate what output the AI system produces and at least some reasons for that output. Developers should be able to explain why certain data was or was not used. Developers should also be able to explain how a model predicts outputs based on the inputs.
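One way developers probe this kind of explainability, among several available techniques, is to measure how much each input attribute contributes to a model's predictions. The Python sketch below uses permutation importance on synthetic data; the data and model are illustrative assumptions:

    # Minimal sketch of one explainability technique: permutation importance,
    # which measures how much shuffling each input column degrades the model.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    # By construction, column 0 drives the output far more than column 1;
    # column 2 is pure noise.
    y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for i, importance in enumerate(result.importances_mean):
        print(f"attribute {i}: importance = {importance:.3f}")

A developer can then point to which attributes the model relies on, which supports explaining why certain data was or was not influential.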

Intellectual Property Rights in AI

The major categories of intellectual property are patents, trademarks, trade secrets and copyrights.

A patent protects novel inventions by giving the patentee exclusivity for that invention. A trademark protects branding by ensuring that only the owner can use a mark for a particular field. A trade secret is information that has independent economic value by not being generally known. A copyright protects original works of authorship such as books and music, as well as software.

Algorithms and models will often be protected as trade secrets. The algorithm of most major search engines is generally protected as a trade secret. If a company sought to protect the search engine algorithm with a patent, the algorithm would have to be published in the patent application. This would allow the public to see the algorithm described in detail and would enable copying.

But given that source code of a search engine cannot easily be reviewed, a patentee would not easily be able to determine whether a competitor used the patented search algorithm without permission.

AI concepts may be eligible for patent protection, but the U.S. Supreme Court’s 2014 decision in Alice Corp. v. CLS Bank International requires something more than abstract ideas when seeking a software patent.

Therefore, a pure software algorithm will be hard to patent, but an application of AI with a physical-world impact may be patentable. The output of AI, such as discovery of a novel drug, should be patentable if the output otherwise qualifies as patentable.

However, courts have been skeptical of naming an AI system as the sole inventor of a patent. The U.S. Court of Appeals for the Federal Circuit has confirmed that an AI system cannot be the sole inventor of a patent.

Similarly, the U.S. Copyright Office has determined that creative works authored by AI are not eligible for copyright protection. While authorities in the U.S. and EU have rejected patent applications citing AI as the sole inventor, South Africa and Australia have ruled that AI can be considered an inventor on patent applications.

In business transactions involving AI, ownership and rights to use the AI system are generally divided among the parties in a few ways. Many software or service providers want the right to use data to improve their services or to improve their AI.

Three generic models in AI business transactions are (1) a service model; (2) a model rights approach; and (3) an algorithm rights approach. A service model refers to the AI provider running AI as a service while the customer provides input. The AI provider provides the output and some rights to use output to the customer. A model rights approach means the customer provides input into the AI, while the AI provider develops and refines the model.

Once the model is complete, the customer gets rights to use the model, but not the underlying AI or algorithm. The algorithm rights approach allows the AI provider to retain ownership of the underlying AI, while the customer receives some rights to the algorithm.

In the life sciences context, most AI providers will propose a service model, where the AI provider delivers output and rights to use the output. AI providers in life sciences aim to apply their AI neutrally, to all potential customers, and to refine their AI system using varied inputs.

Life Sciences Applications of AI

AI in Drug Discovery

Currently, it is possible to obtain massive data sets of small molecule interactions with target proteins, for example, using DNA-encoded libraries. Eventually, this might be possible with peptides.

For example, the discovery of novel molecules is possible through the application of AI to massive data sets of small molecule interactions with target proteins.

Recently, ZebiAI Therapeutics Inc. applied machine learning to data sets that were the output of DNA-encoded library screens. The AI output could be used to predict novel small molecule targets. Human beings still play an important role, including wet lab testing to confirm and refine results from AI-based analysis.

AI in Clinical Trials

Later stage – Phase II or Phase III – clinical trials have substantial data sets with data at the individual level. AI can assess historical data to predict outcomes, such as (1) whether there are subpopulations of patients with better outcomes, (2) how adverse events are distributed, and (3) which subject characteristics are associated with better outcomes.
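As a purely illustrative sketch (the column names and data are assumptions, not any trial's actual schema), question (1) above can be as simple as grouping historical subject-level records and comparing outcome rates across subpopulations:

    # Hypothetical sketch: comparing outcomes across patient subpopulations
    # in historical trial data. Column names and values are invented.
    import pandas as pd

    records = pd.DataFrame({
        "age_group": ["<50", "<50", "50+", "50+", "50+", "<50"],
        "biomarker": ["pos", "neg", "pos", "neg", "pos", "pos"],
        "responded": [1, 0, 1, 0, 1, 1],  # 1 = positive outcome
    })

    # Response rate within each (age_group, biomarker) subpopulation.
    rates = records.groupby(["age_group", "biomarker"])["responded"].mean()
    print(rates)

In practice, a model would be trained on far more attributes and validated carefully, but the basic move is the same: stratify historical data and look for subpopulations with better outcomes.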

AI in Genomics

AI has improved our understanding of patterns in a genome, i.e., an organism’s complete set of DNA. Next-generation sequencing can efficiently determine the order of the basic structural units of DNA, gathering genetic data rapidly from individuals, and has driven the price of whole genome sequencing down to as little as $2,000, with prices continuing to fall.

Applying AI to massive genomic data sets that become increasingly available as the price of sequencing drops may improve predictions around who may develop a disease or whether certain actions may reduce risk for a disease.

Conclusion

While ChatGPT has grabbed the headlines by being able to write short essays, AI has many other applications, including making a real difference in the life sciences industry.

The opportunities are enormous. While AI innovation has outpaced regulation, development and use of AI systems are not without challenges, including compliance and reputational challenges.

Companies should focus compliance and due diligence on managing the features and risks of AI. Companies must also stay abreast of regulatory developments and prepare for how new laws and policies will have a direct impact on their development and use of AI-based technologies.

Ariel Soiffer is a partner and Elijah Soko is an associate at WilmerHale.

Paul Lekas is the head of global public policy at Software & Information Industry Association.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


AM&P Network Associations Council Lunch & Learn — New Year’s Evolutions: Emerging Trends in Marketing, Content and More

Hear how associations and media organizations are adapting in the new year to innovate their content, marketing strategies, publications, videos, podcasts and other products. This event will also cover how to keep up with the continually changing social media landscape. Learn from our panelists and swap ideas with other participants to find new things you can implement in 2023.
