
“Sourcing the Code”: Issues in licensing AI

(Kalindhi Bhatia and Prashant Daga)


Leveraging artificial intelligence, or AI, to streamline operations and enhance productivity is emerging as a compelling use case for machine learning. Various service providers and IT vendors are helping businesses put in place AI systems customized for their operations. In doing so, vendors and service providers either support the client business in developing its in-house AI systems, or license their tools to the business, catering to its requirements. 


Procuring AI-enabled software as a service or product presents certain legal issues that need to be addressed contractually. Garden-variety technology-licensing issues apply to AI technologies too (such as rights to underlying IP, agreed service levels, limited uses, etc.), but there are additional nuances to be considered at the contract preparation and negotiation stage. 


This piece touches upon contractual issues and related mitigation measures that our team has encountered while advising on AI contracting matters. The primary use case we consider here is an IT client procuring an AI-enabled system from an AI SaaS provider. We have also indicated considerations relevant to the contracting chain for the development of an AI system as a whole. 


  1. Intellectual Property Rights (“IPR”): IP use will vary depending on the intent behind procuring the tools. In certain cases, it may be for developing an AI system anew; in others, for using an existing AI system provided by an AI SaaS provider on a limited-use basis. Additional considerations arise for AI due to its new and open-ended nature. For instance, it is not always clear in a contract whether an AI system has been developed using a SaaS provider’s proprietary knowledge or using open-source knowledge bases. There have been scenarios where this results in unauthorized commercial use, overlaps with another entity’s IP, or even increased susceptibility to malware. After the development stage, AI models tend to evolve over time via learned behaviors, relying on the data sets fed into them, whether by users or when procured via third parties. Typically, the models retain certain residual information relevant to the learning process, which enables improved performance. Where the data sets are licensed via third parties, the license should contemplate perpetual rights for the licensee to the data retained after processing and training. Representations, warranties and terms covering all intellectual property going into or produced via an AI system will have to be carefully curated, be it in the AI system license itself, in agreements between the SaaS provider and its vendors, or when licensing AI-generated content. 


  2. Data Privacy: AI systems function on the basis of the data fed into, collected, and processed via them. The principle of personal data processing is that it should be consensual, with express prior consent, and purpose-specific. SaaS vendors may seek to protect themselves in relation to the personal data ‘fed’ into AI systems licensed by them. Correspondingly, client businesses may seek contractual representations affirming that the AI systems are trained on personal data provided under valid consent. To this end, the parties may explore mechanisms to maintain transparency in this regard and contemplate data anonymization requirements, as needed. (A ‘data addendum’ to clarify matters is typically helpful.) Another crucial aspect linked to data privacy is the security of the data sets fed into the system. Vulnerabilities and breaches can result in data theft or leaks and, with them, liabilities such as damages to data principals. (Laying down cybersecurity incident protocols is critical for AI systems; see Point 3 below.) 
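As an illustration of the kind of anonymization measure the parties might contemplate, a minimal sketch follows. The field names, the salt, and the salted-hash approach are all assumptions for illustration, not drawn from any contract or statute; note also that salted hashing is pseudonymization rather than full anonymization, and may therefore still attract data-protection obligations.

```python
import hashlib

# Hypothetical values for illustration only.
SALT = "replace-with-a-secret-salt"  # keep out of source control in practice
PII_FIELDS = {"name", "email", "phone"}  # assumed field names

def pseudonymise(record: dict) -> dict:
    """Replace the assumed PII fields with salted hashes before the
    record is fed into a licensed AI system; other fields pass through."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # truncated hash as an opaque token
        else:
            out[key] = value
    return out

record = {"name": "A. Sharma", "email": "a@example.com", "ticket": "Billing query"}
print(pseudonymise(record))
```

Because the same input always yields the same token, records about one individual remain linkable for model training even though the direct identifiers are removed.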


  3. Cybersecurity: India is one of the world’s more vulnerable jurisdictions when it comes to cyberattacks and incidents. While it is not practically feasible to eliminate all security concerns when it comes to AI, realistic steps should be mandated. By their very nature, AI systems are particularly susceptible to vulnerabilities, leaks, hacks, breaches, ransomware, etc. From conception to application, each stage of an AI system involves functions that may expose it to malicious actors (for e.g., training the model on data, applying it in real-time scenarios, etc.). Selecting the party responsible for ensuring cybersecurity procedures depends on the use and function facilitated by the AI system (for e.g., if an entity requires out-of-the-norm cross-border data flows involving unconventional workarounds, the SaaS provider may pass cybersecurity obligations onto its customer in such a scenario). It is useful to include requisite representations, warranties and/or obligations relating to the maintenance of adequate cybersecurity protocols, along with reporting obligations. Further, these covenants should extend to processing via cloud computing (if applicable). 


  4. Interoperability: The ‘interoperable quotient’ of an AI system allows it to work seamlessly with other systems, share information, and interact with human users. The data flow facilitated by such capabilities translates into increased efficiency. A real-world feat of interoperability is the driverless car, which moves solely on the basis of various synchronized technologies such as navigation, sensors, and communications. From a data management standpoint, interoperability means enabling and streamlining data flows, preventing data silos, and consolidating data in one place. This, in turn, ensures the accuracy and consistency of data in real time, thereby improving business productivity. Although testing the systems for compatibility prior to purchase is one way to ensure this, the contract governing the license may also stipulate the required service levels/standards in this regard, the mechanisms to periodically review such capabilities, etc. 
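The compatibility testing and periodic review mentioned above can be made concrete with a simple schema check. The sketch below is purely illustrative: the field names and types are assumptions standing in for whatever data contract the license agreement actually specifies.

```python
# Assumed data contract between the AI system and a connected system
# (hypothetical field names, loosely inspired by the driverless-car example).
EXPECTED_SCHEMA = {"vehicle_id": str, "speed_kmh": float, "heading_deg": float}

def conforms(record: dict) -> bool:
    """Check an inbound record against the agreed schema - the kind of
    compatibility check parties might run when testing systems before
    purchase or during periodic service-level reviews."""
    if set(record) != set(EXPECTED_SCHEMA):
        return False  # missing or unexpected fields break interoperability
    return all(isinstance(record[k], t) for k, t in EXPECTED_SCHEMA.items())

print(conforms({"vehicle_id": "KA-01", "speed_kmh": 42.0, "heading_deg": 180.0}))  # True
print(conforms({"vehicle_id": "KA-01", "speed_kmh": "fast"}))  # False
```

A contract might, for instance, require the provider to run such conformance checks at agreed intervals and report the pass rate as a service-level metric.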


  5. Product Liability: This is the great unanswered question in AI contracting! Traditional contracting and sale-of-goods laws prescribe standards for sellers’ liabilities; liability in AI is a sliding scale. ‘Causation’ is the determinant of the onus of liability when it comes to AI: put simply, in order to establish liability, it is essential to demonstrate which aspect or entity caused the lapse. Implied terms as to quality, fitness, functionality, etc., are intrinsic to the sale of goods and services under Indian law. AI may also act out of the ordinary (‘edge-case decisions’, ‘hallucinations’). For these, the AI SaaS provider may contend that such behavior is inevitable (or necessary for the development of the system as a whole). To deal with this, terms may be built in to account for such scenarios. This is particularly necessary where such devices will be deployed in sensitive sectors, such as healthcare, fintech, and manufacturing. If any physical equipment comprising AI is to be provided as part of the services, warranties may be necessary stipulating that such equipment corresponds to the standards prescribed for such devices. The extent of IPR use should be defined so as to allow the business to use the AI systems for its purposes, while protecting the AI SaaS provider from liability arising from unauthorized uses.


  6. Indemnity and Limitations: A contract of indemnity is a separate class of claim under Indian contract law. The open-ended nature of AI systems and the diverse stakeholders involved require a meticulous approach to drafting and negotiating indemnification clauses in AI licensing contracts. For instance, scenarios where the solution or product created using the AI infringes a third party’s IP may have to be carved out specifically (depending on the nature of the technology procured). On the flip side, the SaaS provider may contend that outcomes arising out of the use of its AI system fall outside its scope, and may not be open to accepting responsibility in this regard. Special indemnities may be sought where the results produced by the AI system have a real-time impact (for e.g., on assembly lines). Although third-party claims may be covered under indemnification obligations, there are usually limitations on the extent to which these are recoverable. 


To conclude: existing contractual approaches to software licensing may not fully address the capabilities of AI systems. The parties involved should factor in scenarios that may arise from the use of the AI system in their use cases and assess whether these require contractual safeguards. The nature of protection sought will also differ based on the role of the party (i.e., licensee or licensor). In the absence of any regulations specifically governing AI systems (at present), it is all the more crucial to opt for a tailored approach when finalizing AI licensing contracts. 
