
Contract Guardrails to Prevent AI Mishaps

*This piece was not written by, or with the help of, AI!

Information and communications technology (ICT) service providers, like most other companies, are increasingly using artificial intelligence (AI) to drive efficiencies and savings in their back-office operations and in their provision of services to customers. While those efficiencies and savings are, of course, beneficial for enterprise customers, the incorporation of AI means enterprises now face new legal, operational, and financial risks when contracting for ICT services.

This piece, and its companion podcast, discuss recommendations for contracting both for AI services per se and for other ICT services where there is at least a possibility that the provider will use AI to deliver some component of the service or in the background, for example to develop and improve the service, to provide customer support functions, or for its own data analytics purposes.

A note on geographic scope: The EU, unlike the U.S., has enacted a comprehensive AI legal framework, which could affect many of the issues discussed in this piece; however, we have focused only on issues arising in the U.S. under federal and state law, for contracts that will be performed in this country. Contracts that are formed or will be performed in the EU or elsewhere outside the U.S. are beyond the scope of this article.

With that short intro, we will begin with contracts for the procurement of services that may incidentally include, or be provided with, an AI tool or functionality, such as a chatbot. Unlike the second half of our discussion, this part focuses on incidental encounters with AI, not intentional contracting for an AI tool or solution.

Incidental Encounters with AI

In many cases, customers may have no inkling that their service provider is using AI to assist in providing a service to them, or for some background function such as developing and improving its offerings. It is important for an enterprise to know when a provider is either formulating outputs for customers using an AI tool or exposing the customer’s data to AI, i.e., inputting it into a large language model to achieve a specific result or even just to train the model.

In the former case, where the customer may be receiving analysis, recommendations, or other outputs generated at least in part by AI, the customer is entitled to know as much so that it can temper its reliance on those outputs accordingly and be prepared not to “bet the farm” (or its reputation) on them, as the potential for inaccuracies and falsehoods is very real. Just ask any creative attorney who has tried to use AI to write a brief, only to have the court point out that half the cases cited were pure fabrications.

In the case where a provider is using the customer’s data to improve its own operations or offerings, or for other internal purposes, the customer deserves to know if its data will be input into an AI tool at any point, and it should have the right to opt out of such use. Because AI tools regurgitate what they’ve learned, a large language model trained on a customer’s data might spit that data back out at any time to another entity. And once a model has been trained on data, that data can’t be unlearned.

So, what types of contract clauses should an enterprise customer always insist on from an ICT services provider? First, the provider should either represent and warrant that it has not used, and will not use, AI in the provision of the service being sold to the customer, or, if it does use AI, it should provide a description of the purpose and function of the AI tool, along with further representations and warranties that the provider, and, if applicable, the developer of the AI tool, obtained all rights and authorizations necessary to use the input data on which the tool was trained. Note that the developer of the tool may well not be the same entity as the enterprise’s service provider, but rather the owner of a larger foundation model, such as OpenAI, Microsoft, or Google, that provides AI services to the enterprise’s counterparty vendor.

Second, the provider should promise not to begin using AI, whether to provide a service to the customer or in a manner that requires input of the customer’s data into an AI tool, without the enterprise’s prior written consent.

Third, the service provider should expressly take responsibility for the accuracy of any outputs generated by the AI component of the service and for any use of the enterprise customer’s input data by the AI component that would constitute a violation of law or of the parties’ agreed confidentiality, data security, or privacy terms. In other words, a service provider should not be permitted to escape liability for a breach of confidentiality or other safeguards on the grounds that the breach was perpetrated by the AI tool rather than by the provider itself; the AI tool should be treated as a subcontractor for whom the provider has contractual responsibility.

Fourth, the provider should represent and warrant that its deployment of AI complies with all applicable laws and does not violate any third party’s rights. If one of your providers is using AI as a component of, or enhancement to, its service offerings, it should be willing to state that it has no knowledge of any actual or threatened claims or actions against it, or against the developer of the AI tool, in connection with its use of the tool or any data on which the tool has been trained. Examples of such claims include intellectual property infringement (a very hot, unresolved issue in the industry), defamation, breach of privacy, unauthorized use or disclosure of proprietary or confidential information, and the like.

Fifth, the provider should indemnify its customer in the event that use of the AI tool violates applicable law, or that the outputs the AI tool generates, directly or indirectly, are incorrect or place the enterprise purchaser at risk of legal, financial, or reputational harm, for example by passing off copyrighted text or music as a unique, bespoke creation for the customer. Providers who use AI to deliver services often hide behind broad disclaimers regarding the accuracy of their outputs or the possibility of third-party claims of infringement or misappropriation of data included in those outputs.

Lastly, customers should review the provider’s policy regarding its use of AI and include it as an exhibit to the master services agreement (MSA). If the policy does not demonstrate that the provider uses a sufficient level of care, risk aversion, and respect for the rights and data of others, the enterprise should think twice about contracting with that vendor.

Contracting for AI Services, Per Se

Now let’s turn to the other side of the discussion: what an enterprise should consider when it is consciously procuring an AI service of some sort, perhaps to add a “chat” function to its contact center menu or to anticipate and suggest additional purchases by visitors to its website. Because of the risks inherent in any AI service, due diligence is critical. Enterprise purchasers should gather information regarding prospective providers’ resources, customer reviews, experience with AI, and reputation for ethical use of third parties’ data. It’s also important to check for any reported lawsuits against potential providers for misappropriation of data. Moving upstream, enterprise purchasers should examine whether the underlying foundation model on which the AI service provider’s tool is built is in any legal jeopardy, as well as whether the tool’s outputs have drawn criticism in the court of public opinion. Google, for example, recently went back to the drawing board after its AI tool, Gemini, consistently provided responses that were clearly inaccurate and biased.

Once the enterprise has adequately vetted the service provider and found it relatively safe to contract with, the contract terms should ensure that the rights the provider grants to the customer in the outputs will allow the customer to use those outputs in the intended manner. Enterprises should ask whether they are permitted to share the outputs of the service with affiliates or contractors, whether there are any geographic limitations on where the enterprise can use the service, and whether there are any limits on the number of users. Enterprises should also explore whether there are any restrictions on their ability to copy and publish the results the AI tool generates. Overall, it is important to consider whether any restrictions on how the enterprise can use the service could frustrate its business objectives in procuring an AI solution in the first place.

A related issue is ownership of outputs from the AI tool. As between the customer and the provider, ownership of the outputs should vest in the customer, because the customer’s data and prompts caused the tool to generate those outputs. Of course, providers generally take the opposite view. There are workarounds, such as specifying that ownership of the outputs vests in the customer while the customer grants back to the provider a royalty-free right to use the outputs for the provider’s internal business purposes, subject to the provider’s protection and non-disclosure of the customer’s confidential and personal information. In our world, there are no insurmountable problems if the parties are willing to be flexible.

This point raises an interesting and fluid issue that may well be resolved differently within our lifetimes: currently, U.S. intellectual property law does not protect products or outputs generated by AI, because they were not created by a human being. The tech industry isn’t happy with that outcome and is agitating for a change; but content creators, such as authors and filmmakers, vehemently object to equating their creative works with something randomly generated by algorithms, no matter how sophisticated those algorithms may be. It’s a fascinating debate, but its importance to enterprise customers boils down to this: lock in ownership rights to outputs in the contract, rather than assuming that traditional intellectual property principles, such as work for hire, will confer any exclusive rights on the purchaser just because the purchaser “hired” the AI tool to create the outputs.

Representations and warranties are also key when procuring an AI solution, just as they are when a provider of a different service is using AI to perform some of its obligations. As a risk management measure, the contract should require the provider to indemnify the customer and its personnel against losses and claims arising from any breach of the provider’s reps and warranties. As mentioned above, it’s important to remember that most AI providers rely on an upstream foundation model provider for the underlying intelligence, and so they can only commit to customers what they know they can deliver, given that they are themselves customers of the model owner, not the ultimate provider.

Moreover, those foundation model providers, like other upstream providers in the supply chain, impose certain restrictions, limitations, qualifications, and penalties on their customers (in other words, the end-user-facing entities), which the immediate provider will almost certainly flow through to the enterprise purchaser and which are essentially non-negotiable. We run into the same dynamic when our clients buy hardware through a value-added reseller (VAR) rather than from the original equipment manufacturer (OEM), as almost all enterprise customers do. The VARs have only so much flexibility to make promises to their customers, and the customers have no privity of contract with the manufacturer of the hardware, which is a less than ideal position for a customer.

For example, nearly all customers want certainty and predictability in their procurement contracts; thus, terms that can morph over time, such as those that are incorporated by reference and reside on the provider’s website, create unnecessary contingent risks. Often, the only remedy for a customer whose terms change for the worse during the contract period is to terminate the agreement without liability; but depending on how important the AI solution is to your operations, that may be an option you would never consider exercising. In this way, changes in the foundation model provider’s terms for its customers (including the immediate provider) can flow through to the enterprise customer with unforeseen consequences.

Because any AI tool is going to require the enterprise user to input company data in order to generate some sort of value-added output, it is critical that the contract impose strict data security, confidentiality, and privacy obligations on the provider. If the services require the vendor to process any personal information (PI), or even put the vendor in a position where it may have access to PI, the usual privacy compliance analysis will be required, including consideration of the countries in which and from which the services will be provided, identification of applicable laws, and inclusion of appropriate safeguards and obligations for the vendor with respect to PI that it receives in connection with the contract.

One more risk to consider: enterprises should retain some visibility into, if not control over, the subcontractors the provider engages in providing the services. In theory, the right to approve each subcontractor gives the enterprise purchaser some degree of control over unknown and unvetted operators gaining access to enterprise data, but examining each proposed subcontractor’s track record, reliability, and business ethics might require more time than the enterprise has available. As a stopgap, then, the contract should require the provider to flow down to its subcontractors all the substantive requirements and restrictions of the service agreement, and the provider should be responsible for its subcontractors’ compliance with all those terms.

This piece only scratches the surface of contracting issues for AI, even incidental AI. Our next piece in the AI series will look at U.S. and European efforts to legislate and regulate the use of AI in those jurisdictions.

• Listen to the companion podcast here.
