Sponsored: Artificial intelligence and business legal liability – A landscape in flux

Paid advertisement by Parsons Behle & Latimer.

Artificial intelligence (AI) has become one of the hottest new technologies globally, and for good reason. AI’s ability to impact operations, streamline processes and create new areas of value is stunning, and it appears to be in just the first stages of widespread development and implementation. The impact of AI is especially acute in the business world. AI applications have proliferated across companies from global multinationals to startups, assisting with existing operations and opening new opportunities for innovation. However, the intersection of AI and the law presents unique challenges and risks that business owners should carefully consider.

Every new technology creates novel issues and legal risks, since existing laws typically were not designed to, or simply cannot, anticipate emerging tech and its impact on the world. The rise of blockchain in recent years is a perfect example, as its legal effects are only now beginning to be explored. U.S. regulatory agencies have now made clear that they will seek to interpret the law as strictly as possible in this industry, but it is easy to forget that only 18 months ago, numerous U.S. lawmakers, regulators and others believed that blockchain technology could fit into the legal regime in broad ways and were actively trying to create that framework. As an example, just this summer, Judges Jed Rakoff and Analisa Torres, both of the Southern District of New York, split sharply in separate cases on the securities implications of crypto sales, indicating continuing legal uncertainty in the field on a fundamental level.

AI in business, and its application under U.S. law, is, if anything, likely even more fraught than crypto. Where the latter is often focused on securities laws and related issues in finance, AI, which encompasses machine learning, algorithms and automation, can be applied to nearly all industries.

One primary concern is liability. As systems incorporating AI become more autonomous, attributing responsibility becomes more difficult, and unresolved questions arise. Liability may not be clear when a computer system makes its own decisions that result in harm. Is that harm the fault of the company, a consumer, a governmental entity, a programmer, some other party or some mix of those involved? Product liability laws may help address this issue in some areas, but what happens when a system changes over time, whether through input from the maker, outside influence or on its own? Generative tools may be used to create content that is intentionally harmful, including by individuals who are unknown or outside the reach of U.S. law, and they may also generate falsehoods that, if relied upon, endanger other individuals and communities. Hate speech and misinformation from AI tools are not only harmful but may also create significant reputational risk for the company behind the software.

Many AI systems have already had problems related to bias. Because AI systems are susceptible to biases in the data used to train them, discriminatory outcomes can occur, with associated legal repercussions. Avoiding discrimination is particularly important in certain regulated areas, such as housing, employment and finance, but all companies integrating or creating AI-related tools and applications should consider how their AI could incorporate bias in its learning.

Unsurprisingly, intellectual property raises special concerns in the realm of AI. A number of lawsuits have recently been filed against companies creating AI tools, claiming infringement of intellectual property and substantial damages. 

Finally, companies seeking to raise financing to develop AI-related tools should carefully consider not only the legal concerns directly related to such use, but also the changing investment landscape. Valuations in new technology sectors often move in boom-and-bust cycles, and investment financing tends to follow, which can make the timing of financing rounds critical for a company seeking enough runway to grow.

As courts have made clear in previous cycles of technological innovation, companies ultimately found liable for legal breaches involving that technology, whether inadvertent or not, may face existential risks, steep losses and even personal liability for the individuals involved. On the other hand, businesses that harness the power of new technology and carefully seek to comply with changing legal and regulatory requirements may achieve extraordinary results. It’s no accident that the most valuable companies in the world are technology companies. And the world leaders in AI appear to be just getting started. Companies interested in incorporating AI into their businesses are well advised to consult legal counsel to ensure their interests and operations are protected to the greatest extent possible.

Benjamin T. Beasley is an experienced business attorney at Parsons Behle & Latimer and a graduate of Harvard Law School who helps companies, investors and individuals grow their businesses, structure financing, analyze strategy, and accomplish acquisitions and exits. A former investment analyst, Ben regularly assists with company financings, formation and governance, debt and equity matters, technology transactions and securities issues. To discuss this or related issues, contact Ben by calling (801) 532-1234 or send an email to [email protected].