White House adopts plan for AI bill of rights | Perkins Coie

The Office of Science and Technology Policy (OSTP), part of the President’s Executive Office, recently released a white paper titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (Blueprint). The Blueprint provides a non-binding framework for the responsible development of policies and practices around automated systems, including artificial intelligence (AI) and machine learning (ML) technologies.

Background

The Blueprint follows bipartisan executive orders aimed at balancing the potential benefits and risks of AI and ML technologies and directing executive agencies and departments to develop policies for their responsible use. In one of his final executive orders, former President Trump required federal agencies to adhere to a set of principles when deploying AI technologies, with the goal of fostering public confidence in AI. And in one of his first executive orders, President Biden directed executive departments and agencies to address systemic inequities and build fairness into their decision-making processes. Some US government agencies, including the Government Accountability Office (GAO) and the US Department of Energy (DOE), have already developed frameworks to identify and mitigate risks associated with the use of AI technology.

Scope

The Blueprint is a non-binding framework designed “to support the development of policies and practices that protect civil rights and promote democratic values” in the use of automated systems. The OSTP stresses that the Blueprint is not a binding regulation. Rather, it sets out five guiding principles that OSTP believes should be applied to any automated system that may significantly affect civil rights, equal opportunity, or access to essential resources and services. The Blueprint also includes a technical companion that provides detailed guidance on how to implement these principles in practice.

The Blueprint does not specifically define “artificial intelligence,” nor does it limit itself to that term. Instead, it places any “automated system” within its scope, broadly defined as “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities”. OSTP notes that this definition explicitly includes, but is not limited to, AI, ML, and other data processing techniques. By its terms, the definition likely reaches a much broader set of technologies than those traditionally considered AI or ML.

The five principles

The OSTP Blueprint identifies five principles that should guide the “design, use, and deployment” of automated systems, which are summarized as follows:

  1. Safe and effective systems. Automated systems should be developed in consultation with diverse stakeholders to help identify risks. They should be designed to protect against foreseeable harms and undergo both pre-deployment testing and ongoing monitoring to mitigate potential adverse consequences.
  2. Protections against algorithmic discrimination. Developers and deployers should take proactive steps to protect the public from algorithmic discrimination by their automated systems. These efforts should include proactive equity assessments, the use of representative data, and accessibility in the design process. Once a system is deployed, there should be ongoing disparity testing and mitigation.
  3. Data privacy. Automated systems should have built-in data privacy protections that give users control over how their data is used. Only data strictly necessary for the specific context should be collected, and any data collection should be consistent with users’ reasonable expectations. Developers and deployers should seek consent for data collection and use through brief, easily understandable consent requests. Sensitive data, including data relating to health, work, education, criminal justice, and children, should receive heightened protections. Surveillance technologies should be subject to heightened scrutiny and ongoing monitoring, and surveillance should not be used in education, work, housing, or other settings where it is likely to limit rights, opportunities, or access.
  4. Notice and explanation. The public should know when an automated system is being used and understand how it contributes to outcomes that affect them. Automated systems should be accompanied by publicly available, plain-language documentation that discloses that an automated system is in use and describes how the system functions, the role automation plays, the entity responsible for the system, and explanations of outcomes.
  5. Human alternatives, consideration, and fallback. Where appropriate, users should be able to opt out of automated systems in favor of a human alternative. Automated systems should be connected to fallback and escalation processes with human consideration in the event that a system produces an error, fails, or an affected party otherwise wishes to challenge an automated decision.

For each of the five principles, the Blueprint emphasizes the use of independent assessments and public reporting, where possible, to confirm adherence.

The OSTP has also published guidance on how to apply the Blueprint, including a 41-page “Technical Companion” that explains, for each of the five principles, (1) why the principle is important, (2) what to expect from automated systems, and (3) how the principle can be put into practice.

Foreshadowing the future of AI regulation

The Blueprint is the latest in a series of guidelines and frameworks recently released by government entities and international organizations regarding the safe use of AI technologies, including the European Union’s Ethics Guidelines for Trustworthy AI, the Organisation for Economic Co-operation and Development’s (OECD) AI Principles, and the GAO’s AI Accountability Framework.

Although these efforts have not yet resulted in regulations or binding obligations, the growing focus on mitigating the potential harms of AI by government entities and non-governmental organizations suggests that future regulation is likely, particularly if industry proves unable to self-regulate against those harms. These guidelines and frameworks also foreshadow the kinds of regulations the future may bring. For example, they suggest that new laws, agency directives, and industry policies could all be used to achieve the Blueprint’s goals.

Norma A. Roth