VA’s new AI strategy targets ethics and trust
The agency has adopted a new artificial intelligence policy focused on expanding existing capabilities while building the trust of veterans.
The Department of Veterans Affairs is the latest agency to publish and implement a strategy around the ethical use of artificial intelligence to improve care for veterans.
Implemented in September 2021, the plan focuses on four distinct objectives:
- Use existing AI capabilities to better deliver health care and benefits to veterans
- Expand these existing AI capabilities
- Increase the confidence of veterans and stakeholders in AI
- Build partnerships with industry and other government agencies
The plan represents the first AI strategy officially released by the agency since the founding of the National Artificial Intelligence Institute (NAII) in June 2019. The agency established the institute with support from the February 2019 executive order launching the American AI Initiative, which dramatically increased funding for federal artificial intelligence research while creating new AI institutions across government.
The ethics strategy aims to ensure transparency and trust around the capabilities of AI.
“VA understands the importance of creating a balance between innovation, security and trust,” said NAII director Gil Alterovitz in a press release from the agency. “To this end, VA management, practitioners and relevant end users will be trained to ensure that all activities and processes related to AI are ethical, legal and meet or exceed standards… VA’s new roadmap will help realize the full potential of AI by building confidence in future technology to create more efficient and effective systems for patients.”
One of the main priorities of VA’s artificial intelligence program since its inception has been to leverage the expertise of private industry and transform VA into an AI learning center. VA’s new AI strategy indicates plans to expand these partnerships with private industry.
“The VA is already collaborating with other federal agencies on research and data sharing and overseeing AI technology sprints that bring industry partners to the table with specific goals so that their participation creates a win-win opportunity,” according to the strategy. “We will seek to build on these efforts and identify new collaborative approaches that will accelerate the rate of knowledge discovery.”
While VA has rapidly developed its AI capabilities since 2019, the agency’s leadership has expressed particular attention to ensuring these capabilities are developed in a way that protects the trust of veterans.
This effort falls under what Alterovitz calls “Trustworthy AI,” an approach that follows NIST principles while expanding them to cover the specifics of VA’s AI program.
The Government Accountability Office (GAO) published a similar strategy. Its AI Accountability Framework is designed to prevent AI models and their applications from being designed in a way that is unduly flawed or ethically compromised. GAO Chief Scientist Taka Ariga noted that this is a vital concern in large part because such issues can become ingrained in the foundation of AI applications, allowing them to persist and even expand once they are unintentionally integrated into the models themselves.
The Department of Labor, for example, faces systemic challenges in datasets compiled decades ago.
“There are datasets that we use today that were developed in the 1960s that identified women as housewives when they were in fact teachers, scientists or lawyers,” said Kathy McNeill, who leads the agency’s emerging technology strategy, during a virtual event earlier this year.
Similar to VA, other health-focused agencies have paid particular attention to concerns about confidentiality. Healthcare data used in AI models needs safeguards to ensure that personally identifiable information is protected or obscured, a process the National Institutes of Health (NIH) has codified under the supervision of its ACD Artificial Intelligence Working Group.