Top 5 AI Legal Risks for 2025 (And What To Do About Them)

We saw a proliferation of Generative AI vendors in 2024. From virtual assistants to notetaking tools to marketing software that messages your customers and prospects, there are now hundreds of AI vendors for just about every business function. For in-house counsel, it can be difficult to devote resources to forming a legal strategy for AI’s implementation in your company, let alone reviewing each vendor agreement piecemeal. For businesses that do not have the resources to hire dedicated in-house legal, sorting through AI vendor agreements can feel daunting: even if you have leverage to negotiate, how do you know which points to press?

With these concerns in mind, AMBART LAW has identified the top 5 legal risks related to AI adoption and implementation in 2025. Below, we also provide practical strategies to help you think about and mitigate these risks.

Before we get to the list, our first global suggestion to our clients is to create an AI Acceptable Use Policy, tailored to the needs of the business and its use cases. If you do not have an AI Acceptable Use Policy, it should be priority number one, because employees are already using AI technologies as consumers and undoubtedly bringing them into the workplace. (We are happy to collaborate with you to create one.) Next, we recommend that our clients develop an AI Contracting Playbook, with different levels of risk tolerance depending on the use case for each AI technology. Keep in mind our top 5 AI legal risks below, and the corresponding mitigation tips, when developing your playbook.

1. Confidential Data (What Goes In)

In a recent survey of 2,700+ global organizations, over half reported that they are avoiding certain Generative AI use cases due to data-related concerns, such as exposing sensitive data in inputs and managing data privacy and security.

AI tools typically require significant amounts of data to function effectively. This can include both the large datasets used for training the AI model and the user’s input data—such as queries, uploaded documents, or other content provided during usage. 

While input data typically doesn’t become part of the training dataset unless the tool is designed for continuous learning, risks still arise if AI vendors store or reuse input data improperly. (In fact, we have seen some AI vendor agreements that require enterprises to license their input data, including all calls and recordings, to the vendor specifically for use as future training data! This prompted us to write the popular blog post “Are your employees licensing your confidential information without realizing it?”)

Why It Matters: You want to ensure that your confidential information stays confidential, so we won’t belabor this point. Separately, mishandling of personal information (data that can identify your customers or users) may result in violations of privacy laws such as the GDPR or CCPA, not to mention reputational and financial harm.

How to Mitigate:

  • In agreements with AI vendors, review the licenses you give for “user content” and pay close attention to what the AI vendor can do with that content. If possible, include contractual provisions that prohibit vendors from reusing your company’s input data to train their models. Otherwise, try to restrict how and when they can use the input data. 

  • If you are providing personal information in the input data or if the AI vendor will otherwise receive access to this type of information, negotiate and execute a data processing agreement (DPA) with the AI vendor to ensure compliance with privacy laws and to prohibit unauthorized selling or sharing. (If your business is subject to California’s privacy laws, which have a broader definition of “sharing,” pay special attention here.)

  • Educate and train employees on authorized usage practices for AI tools. If you do not have an AI Acceptable Use Policy for your organization or enterprise, this is a key place to start. Keep in mind that a single rule on inputs may not apply enterprise-wide, as different departments often deal with information of varying sensitivity and confidentiality. For example, you may be more concerned with HR’s inputs than the Web Development team’s.

2. Ownership and Licensing Risks (What Comes Out)

The growing reliance on AI tools has sparked critical questions around data ownership and intellectual property (IP). Ambiguities surrounding who owns the outputs—such as the code, content, or designs generated by the AI tool in response to a user’s prompt—can lead to disputes between companies and vendors. Additionally, many AI models are trained on datasets with unclear licensing terms, potentially exposing businesses to IP infringement claims.

Why It Matters: Without clear contractual terms, your company may inadvertently surrender ownership rights to valuable outputs or face legal challenges from third parties claiming rights to data used in AI training (to create those outputs). The claims can sound not only in direct copyright infringement, but also in induced infringement and unfair competition. (See our blog post on the Andersen v. Stability battle here for a nuanced discussion of these issues.) Additionally, although the U.S. Copyright Office generally refuses registration for AI-generated works, as of last year the Office had registered over 200 works where the applicants properly disclaimed the AI-generated elements in the work. Other countries, such as China, have already begun protecting copyrights for AI-generated works.

How to Mitigate:

  • Maintain detailed records of the human role in creating AI-generated outputs to establish clear links between human creativity and the final product. Such records can include evidence of human-in-the-loop processes, where applicable, to demonstrate “originality,” a requirement in U.S. copyright law.

  • In a perfect world, you could require the AI vendor to disclose its training data sources. (California’s AB 2013, effective January 1, 2026, requires AI developers to provide documentation on their websites regarding the data used to train their AI systems or services.) Because most developers are unlikely to agree to disclose their training data absent a legal mandate, consider instead requiring a representation and warranty from the AI vendor that its training data complies with the relevant IP and licensing laws and that no sensitive or unlicensed material has been used in training the model. Another common way to address this is through an indemnification provision.

  • Include explicit clauses to ensure your company retains ownership of all AI-generated outputs. If the vendor insists on your granting them a license to the output, try to restrict the terms of that license as much as possible. (You do not want your output used in a different context, particularly one that you would not approve of or that could cause you reputational harm.)

  • Make the agreement clear about who is responsible for defending claims of IP infringement. (We have seen broad indemnification provisions where the AI vendor requires the company to indemnify the vendor against any third-party claim relating to the company’s use of the AI tool, even where, presumably, the claim arises from the AI vendor’s own conduct.)

  • Avoid AI tools that lack transparency or rely on improperly licensed datasets. If the AI vendor seems fishy, remember that numerous other companies offer similar AI tools, and probably better agreements. Unclear contract language can be a symptom of the AI vendor’s corresponding lack of sophistication (or funding), or it could be something more nefarious; either way, if you do not have to risk it, look for another option.

3. AI Bias and Ethical Use Risks (What Can Go Wrong)

AI models carry a significant risk of producing biased outputs, and in nearly all scenarios (particularly hiring and firing), they should not be used or relied on alone for decision-making. Bias manifests in AI models as either cognitive bias or computational bias. Both can be harmful, for different reasons, which we explained in this blog post and describe below. Companies using AI vendors should seek out vendors that prioritize transparency, explainability, and interpretability in the design of their models to minimize risks of bias.

Why It Matters: One of generative AI’s greatest societal risks may come from cognitive bias, because it may tend to show users content and outputs that align with existing harmful beliefs, without exposing them to contrasting perspectives. Meanwhile, computational bias can lead to a number of harms for enterprises and institutions (think: reputational damage, legal challenges, and economic losses, all resulting from inaccuracies in the training data set). For companies operating in regulated industries, such risks may trigger additional compliance scrutiny.

How to Mitigate:

  • Incorporate “human-in-the-loop” processes to review and validate AI decisions before deployment, particularly in high-stakes contexts.

  • Work with vendors to ensure bias mitigation measures are part of their model development process.

  • Use vendors that apply privacy-enhancing technologies (or PETs) and methods to protect individuals from unnecessary risk to their personal data. PETs come in forms such as differential privacy, federated learning, and system architecture decisions, all of which are controlled by the AI vendor (and are great questions to ask when considering whether to contract). For a concrete sense of what one of these techniques does, see the brief sketch after this list.

  • Implement internal audits to monitor AI outputs for discriminatory or harmful patterns.

  • Adopt internal AI ethics principles to build consumer trust in AI systems and in your company’s deployment of AI-enabled tech.

  • Continuously evaluate ethical principles and regularly address consumer concerns arising from user experience.
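
For readers who want a more concrete picture of the PETs mentioned in the list above, the short sketch below illustrates one of them, differential privacy: an aggregate query is answered with a small amount of statistical noise so that no single individual’s record can be inferred from the result. This is an illustrative toy only, not a description of how any particular vendor implements the technique; the function name, the sample salary figures, and the epsilon setting are all hypothetical.

```python
# Minimal, hypothetical sketch of differential privacy (one kind of PET).
# A vendor applying this technique answers aggregate questions with
# calibrated random noise, so the presence or absence of any one person's
# record has little effect on the published result.
import numpy as np

def private_count(records, predicate, epsilon=1.0):
    """Count records matching `predicate`, with Laplace noise added.

    A counting query has sensitivity 1 (adding or removing one record
    changes the true count by at most 1), so noise is drawn from a
    Laplace distribution with scale 1/epsilon. A smaller epsilon means
    more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: report how many employees earn above $70,000
# without revealing whether any specific individual is in the data.
salaries = [52_000, 87_500, 61_000, 99_000, 73_250]
print(private_count(salaries, lambda s: s > 70_000, epsilon=0.5))
```

When evaluating vendors, the relevant questions are less about the math and more about whether the vendor can explain which PETs it uses, where in its pipeline they are applied, and how its privacy parameters are chosen.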

4. Accountability for AI Failures (Who is Responsible When Things Go Awry)

Separate from the bias issue discussed above, AI-driven decisions can go wrong—whether due to inaccurate predictions, hallucinated outputs, or non-compliance with regulations. Although the likelihood that something could go wrong may be low, you want to make sure your agreement is helpful to you, not harmful, in the event of a problem. Many vendors attempt to limit their accountability through disclaimers, leaving the risk with the company using the AI tool.  We have also seen broad indemnification provisions, which shift the risks to the user.

Why It Matters: Without clear accountability provisions, your company could face lawsuits, financial losses, or regulatory penalties stemming from AI errors. 

How to Mitigate:

  • Negotiate service level agreements (SLAs) that include clear accountability for the AI tool’s performance.

  • Incorporate indemnification clauses requiring vendors to cover costs related to algorithmic errors or compliance violations.

  • Demand transparency from vendors regarding their tools’ limitations and testing protocols.

5. Navigating the Expanding Patchwork of AI Regulations

In the United States, AI regulations are developing as a patchwork of state (California, Colorado, Utah, etc.) and federal laws (Executive Orders, FTC Guidelines, etc.), not dissimilar to our current privacy law regime. The absence of a comprehensive AI law in the United States does not mean that there is no legal risk to using an AI system. Quite the opposite: improper or irresponsible deployment of an AI system can result in violations of employment laws (Title VII, EEOC), consumer finance laws, state and federal information privacy laws, biometric data laws, recording laws, and unfair and deceptive trade practice prohibitions, among other laws and regulations.

Laws in other jurisdictions are evolving as well and may impact U.S. companies operating abroad. For example, the EU AI Act introduces rigorous standards for transparency and accountability. (See our August 2024 blog post about the EU AI Act.)

Why It Matters: Even though the U.S. lacks a comprehensive AI law and certain states’ laws do not take effect immediately, being aware of upcoming legislation is essential, even for developers pushing to launch fast and promote innovation. Companies contracting with AI vendors should understand when and where they deploy AI, how each use matches a specific business use case, and what the risk level is for each.

How to Mitigate:

  • Partner with external counsel or consultants well versed in AI regulation to stay ahead of new developments.

  • If your enterprise is large enough and has ample resources, establish a cross-functional compliance team to monitor and adapt to regulatory changes. If not, flag the highest-risk use cases and monitor those areas for legal developments.

  • Incorporate compliance requirements into your governance frameworks and vendor contracts.

Conclusion

The adoption of AI presents both incredible opportunities and significant risks. By understanding and proactively addressing ownership of input data (Risk #1) and outputs (Risk #2), bias (Risk #3), accountability for harms (Risk #4), and the uncertainty of navigating a patchwork of applicable laws (Risk #5), you can better position your organization to unlock the benefits of this digital transformation.

AMBART LAW PLLC provides comprehensive AI Governance and Legal Counseling for creators and small and medium-sized companies. If you need more routine, day-to-day assistance, our Fractional General Counsel services may be a cost-effective solution to holistically support your organization.

Contact info@ambartlaw.com if you get stuck or need assistance in developing your AI Contracting Playbook. We’re happy to help you create a comprehensive guide to mitigate these risks and draft robust AI agreements tailored to your organization’s needs.
