Pentagon adopts ethical principles for artificial intelligence
The new principles focus on developing technology responsibly while preserving American civil liberties
The Department of Defense on Monday adopted a new set of ethical principles governing the use of artificial intelligence, or AI. The recommendations, adopted by Defense Secretary Mark Esper, are the result of 15 months of consultation with leading AI experts from a range of backgrounds.
According to a DOD release, the adoption of these rules “aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems.”
“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” said Secretary Esper. “AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior.”
In February of last year, President Trump launched the American AI Initiative — the U.S. national strategy on artificial intelligence, which advances the use of AI while safeguarding American civil liberties and values. The ethical principles put forward by DOD are in line with the Trump Administration’s efforts to promote trustworthy AI.
The new set of ethical principles covers five core areas:
- Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
- Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
Implementation of the AI ethical principles will primarily be the responsibility of the DOD’s Joint Artificial Intelligence Center. The JAIC is currently coordinating and facilitating a series of working groups meant to solicit input from experts throughout the DOD.
Dr. Eric Schmidt, the Chair of the Defense Innovation Board, commended the leadership team of the JAIC for “ensuring democracies adopt emerging technology responsibly.”
“Ethics remain at the forefront of everything the department does with AI technology,” said the DOD’s Chief Information Officer Dana Deasy.