Generative AI tools have transformed how businesses create, code, and communicate. We are seeing a rapid expansion in the use of AI tools, and in the incorporation of those tools and AI functionality within service applications.
AI provides unique opportunities to streamline and automate processes, generate summaries and documentation quickly, and provide ‘smart’ services, but its use raises important legal questions.
Whether you’re using an off-the-shelf AI tool or integrating AI into your own products and services, understanding the legal issues is essential to managing the risks involved. Here are five key issues and how to manage them:
1. Control of your input data

When you input data – such as prompts, documents, or proprietary information – into an AI system, that information may be used in various ways, depending on the provider’s terms.
In some ways, the use of an AI tool is no different to using other cloud services, like Google Workspace or Microsoft 365, in the course of your business. However, data is not just being stored when input as part of an AI prompt or request – it is being analysed, interpreted and used to generate new content, which may be widely disseminated, so it is important to understand what rights you are giving to the provider to use the information you provide.
Free services, in particular, are likely to store and process your information in a shared environment and will give themselves broad rights to use the information to create new content (for others, not just for you), train their language models and improve their services. If you’re paying for a service, it may offer some customisation, such as the ability to have a closed data environment and prevent the broader use of your information.
These issues should be taken into account when deciding whether to use AI tools, which tool to use, and how it will be used. Extra care should be taken before inputting personal data or materials owned by or confidential to a third party, such as a customer or client of yours. Doing so without appropriate safeguards in place is likely to be a breach of your obligations to your customer in relation to confidentiality, IP rights and/or data protection.
Recommendation: Have a clear internal policy regarding which tool(s) can be used and for what purposes. This should be based on a review of your selected provider’s terms of service, so you know how your input materials will be used. Look out for licence and customisation options that enable you to exercise more control over how your data is stored and used. If inputting personal data, also check privacy policies to understand how personal data is stored and protected, and ensure you have a lawful basis (or customer authority) to use the information within the AI tool.
2. Ownership and use of outputs

As well as ensuring you maintain control of your inputs, it is important to understand the ownership of, and your rights to use, the outputs generated by AI tools.
There are two issues at play here: the ownership and rights you receive to the outputs under the provider’s terms, and any restrictions those terms place on how you can use them.
Recommendation: Take care when using AI to generate ‘original’ work that you are looking to commercialise. Check the terms to understand the ownership and rights you receive to the outputs, and make sure you comply with any terms of use or restrictions. Extra care should be taken if you’ll be creating material that you will transfer or license to your own customers, to ensure you have the necessary rights to meet your obligations to those customers.
3. Accuracy and reliability of outputs

AI-generated outputs can appear authoritative but may contain errors, bias, or fabricated information (referred to as “AI hallucinations”). It is therefore important to verify critical information before relying on it. If AI is being used to summarise content, or to provide analysis or recommendations, those outputs should be reviewed for accuracy before they are used or shared.
Most AI providers include disclaimers limiting liability for inaccuracies, and you should make users aware that AI content may not always be reliable or factually correct.
Recommendation: Implement an internal policy covering the authorised use cases for AI and make sure employees comply with that policy and take steps to review and verify outputs before putting them into use.
4. Providing AI functionality in your own services

If you provide AI-driven functionality, for example by integrating a generative model into your platform or professional service, you should clearly disclose that AI is being used, and include appropriate disclaimers in your terms of service. These should address accuracy limitations, data usage, and intellectual property ownership, as well as allocate responsibility for user-generated input, and should mirror – as closely as possible – the terms of the AI tool or LLM that you are using to drive that functionality.
You should also be careful when signing up to customer contract terms. Many large corporate customers, particularly in sensitive industries such as finance, are now imposing AI-specific terms on suppliers. These might restrict the use of AI entirely without consent, or entitle the customer to assess or audit AI functionality and LLMs to ensure there is no bias or security concern.
Recommendation: If you are building AI functionality into your products and services, ensure you build appropriate terms of use and disclaimers into your service terms. These should reflect the terms imposed by the provider of the AI tool or LLM, as well as covering any specific limitations on your use of the model and the reliability and permitted purpose of the outputs. Review customer terms to ensure you can use AI before doing so.
5. Intellectual property infringement

Because generative AI models are trained on existing works, there is a risk that outputs could infringe third-party intellectual property rights. Businesses should avoid relying blindly on AI-generated material, especially for commercial materials like marketing, branding, or product design.
This is a developing area, as more authors and content creators seek to enforce their rights against AI providers that have trained their models on the work of those authors and creators. One recent example in the news was Anthropic settling a class-action lawsuit brought by a group of authors, by agreeing to pay $1.5 billion. This is unlikely to be the last AI-related legal action and payout that hits the news.
Contractual protections, such as warranties and indemnities, from the provider can help mitigate infringement risk. However, free services will not typically give any contractual protection, so you would be fully at risk if a creator identifies that material you’re using copies their work. Paid-for service providers may provide some protection – especially for the more expensive enterprise-level packages – but the devil will still be in the detail of how the protection is worded and the liability limitations that apply. These protections become even more important if you’re providing services to your own customers, as those customers may well require warranty and IP infringement protection from you.
Recommendation: Consider the intellectual property risks based on the content you are generating and how you are using and commercialising it. The risk will be lower for data outputs (rather than creative or ‘original’ content), such as summaries/analyses of your own materials, and for materials used internally only. The risk will be higher for ‘original’ content and for materials you intend to share publicly or incorporate/supply via your own products and services, especially if you deal on customer terms, which will very likely require you to accept more significant liabilities than your own standard terms would give. Based on your assessment, consider how the risks can be mitigated, including by selecting a provider that provides contractual protection, imposing disclaimers and limits on your liability to customers, ensuring you are adequately insured, and/or implementing more stringent internal checks and clearance procedures.
Conclusion
Generative AI presents exciting opportunities but also novel legal risks. By exercising diligence in selecting and using AI tools, controlling how your data is used within those tools, clarifying ownership of outputs, managing reliability, and mitigating intellectual property risks as far as possible, you can harness AI responsibly without exposing your business to excessive risk.
If you would like any assistance in relation to your use or supply of AI tools and functionality, we can help with reviewing provider terms of use, updating your own terms, drafting appropriate disclaimers, implementing an internal AI policy, and advising on any specific application, industry or regulatory compliance issues.
Get in touch with us by sending an email to carl.spencer@roxburghmilkins.com or ian.grimley@roxburghmilkins.com.