Thousands of researchers and engineers are currently working on machine learning (ML) and artificial intelligence (AI) software. However, developers often have limited or even no control over how this software is used once it is released publicly. For instance, the same AI tool that can be used for faster and more accurate cancer diagnoses can also be used in powerful surveillance systems. This lack of control is especially salient when a developer is working on open-source ML or AI software packages, which are foundational to a wide variety of the most beneficial ML and AI applications.
Releasing algorithms, code, or data as part of the research process, a product, or a service can mean that developers, engineers, and researchers must sacrifice all control over how these artifacts are applied. This can create a moral dilemma and potentially harm the dissemination of scientific research.
How RAIL Works
Responsible AI Licenses (RAIL) empower developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications. Significant reports, such as those from the AI100 and FAT initiatives, have proposed principles and best practices for AI systems to help address the challenges and concerns raised by complex automated and semi-automated systems. While there is considerable appetite to follow these principles, there is a growing need for practical tools that enable practitioners to do so in their everyday work.
To empower developers to prevent the software they write from being used in harmful applications, we offer two RAIL licenses: an end-user license and a source code license. Our end-user license governs how customers use a software package. Our source code license allows developers to gain most of the benefits of open source while mitigating the risk of releasing powerful code into the wild for anyone to use. By using a RAIL license, AI and ML developers and technology providers gain the legal right to prevent undesired applications of their software.
The RAIL licenses work by restricting AI and ML software from being used in a specific list of harmful applications, e.g., surveillance and crime prediction, while allowing all other applications. The initial list of harmful applications was developed by the RAIL team, but it is intended only as a starting point. Working with partners in the computing community, the RAIL team is developing a formalized process by which new restrictions can be added to RAIL licenses.
A key goal of the RAIL team is to advance an important conversation: how can we move from mere talk to real action when it comes to developing responsible AI and ML technologies? The RAIL team believes this will require significant changes in the practice of our profession, including re-thinking fundamental infrastructure like software licenses. We expect that this will be controversial, but we anticipate that the ensuing discussion will be highly constructive, leading to new versions of RAIL licenses and perhaps other initiatives. We encourage the AI community and others to reach out with feedback via our website, and we hope our licenses will serve as a useful step towards responsible AI use.