BATTLESPACE Updates
AI Principles and the Challenge of Implementation By Morgan Dwyer

November 21, 2019 by Julian Nettlefold
19 Nov 19. The Defense Innovation Board (DIB) recently advised the Department of Defense (DOD) to adopt ethics principles for artificial intelligence (AI): that AI should be responsible, equitable, traceable, reliable, and governable. These principles aim to keep humans in the loop during AI development and operations (responsible); avoid unintended bias (equitable); maintain sufficient understanding of AI capabilities (traceable); ensure safety, security, and robustness (reliable); and avoid unintended harm or disruption (governable). Overall, these principles are good. But as with all principles, implementation will be a challenge. This is especially the case today since, if adopted, the DIB’s proposed principles will be implemented during a tumultuous time for defense technology. Presumably, the DIB’s principles will require meticulous development and careful oversight. In recent years, though, DOD’s standard technological processes and oversight mechanisms have been reimagined.

For example, to prioritize innovation and the speed with which DOD fields new capabilities, Congress restructured the department’s primary technology oversight office and delegated most acquisition decisions to the military services. Congress also created new acquisition pathways that enable rapid prototyping and fielding by forgoing traditional oversight processes. The DIB itself also heralded many software-specific changes through its Software Acquisition and Practices (SWAP) Study. The SWAP Study, which preceded the DIB’s focus on AI, encouraged DOD to—among other things—adopt speed as a metric to be maximized for software development. But on AI software programs, there may be an inherent tension between the DIB’s proposed principles and speed.

As DOD develops AI-enabled software, it will need to work through potential trade-offs and articulate a more detailed strategy for navigating the department’s objectives. In particular, the SWAP Study suggests replacing traditional software development processes that separate development from operations with DevOps, which blends the two. It also recommends adopting agile management philosophies that forgo strict requirements in favor of lists of desired features. Further, it espouses the benefits of sharing development and testing infrastructure, granting authority-to-operate (ATO) reciprocity, and employing automated testing. Finally, the SWAP Study argues that by changing how it implements software development and prioritizing speed, DOD will improve software security, since it will be able to find and fix vulnerabilities sooner. But how will speed interact with the DIB’s proposed AI principles? Grappling with that question is where the DIB, DOD, and the broader defense community should focus their attention next.
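The automated testing the SWAP Study espouses hints at one way principles and speed might coexist: expressing a principle as a machine-checkable gate in a DevOps pipeline. A minimal sketch of that idea, in which the metric names and thresholds are purely hypothetical assumptions, not DOD policy:

```python
# Hypothetical sketch: encoding two of the DIB's principles -- "reliable"
# and "equitable" -- as automated release gates in a DevOps pipeline.
# All metric names and thresholds are illustrative assumptions.

def release_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Return (ok, failures) for a candidate AI model build."""
    failures = []

    # "Reliable": require a minimum level of task performance before fielding.
    if metrics.get("accuracy", 0.0) < 0.95:
        failures.append("reliability: accuracy below 0.95 threshold")

    # "Equitable": bound the performance gap across population subgroups.
    accs = metrics.get("subgroup_accuracy", {}).values()
    gap = max(accs, default=0.0) - min(accs, default=0.0)
    if gap > 0.02:
        failures.append("equity: subgroup accuracy gap exceeds 0.02")

    return (not failures, failures)

# A build that passes both gates.
ok, why = release_gate({
    "accuracy": 0.97,
    "subgroup_accuracy": {"group_a": 0.97, "group_b": 0.96},
})
```

Because such a gate runs on every build, it preserves development speed while making each principle's check explicit and auditable; the hard policy question is who sets the thresholds.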

For example, should the principles be implemented as strict requirements or—per agile philosophy—as more flexible features? How should DOD ensure traceability while simultaneously sharing software infrastructure and ATOs? Furthermore, how can DOD enable traceability without encumbering its agile software programs with unnecessary documentation? With respect to responsibility, how much and what type of oversight should be used to ensure that AI software is safe, secure, and robust? How much of that oversight process should be delegated to the lowest levels of an organization or automated to enable speed? And more fundamentally, when and how should the DIB’s principles be incorporated into the DevOps cycle? The defense community is right to want responsible, equitable, traceable, reliable, and governable AI software that is also developed and fielded quickly. But the above questions don’t have easy answers because—as with all systems—the challenge will be implementing all objectives at the same time. Systems engineers typically manage multiple objectives by making trade-offs that prioritize some objectives at the expense of others.
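The trade-off process described above can be made concrete with a standard systems-engineering device, a weighted-sum trade study. The sketch below is illustrative only: the candidate approaches, scores, and weights are invented to show the mechanics, not to recommend any actual prioritization.

```python
# Illustrative trade study over the DIB's five principles plus speed.
# Scores and weights are hypothetical, chosen to show how changing the
# weighting flips the decision -- the essence of a trade-off.

OBJECTIVES = ["responsible", "equitable", "traceable",
              "reliable", "governable", "speed"]

def trade_score(scores: dict, weights: dict) -> float:
    """Weighted sum of per-objective scores (each in [0, 1])."""
    return sum(weights[o] * scores[o] for o in OBJECTIVES)

# Two candidate development approaches, scored 0-1 on each objective.
cautious = {"responsible": 0.9, "equitable": 0.9, "traceable": 0.9,
            "reliable": 0.9, "governable": 0.9, "speed": 0.3}
rapid    = {"responsible": 0.6, "equitable": 0.6, "traceable": 0.5,
            "reliable": 0.6, "governable": 0.6, "speed": 0.9}

# Two weightings: one prizes oversight, one prizes speed (each sums to 1).
oversight_w = {o: 0.18 for o in OBJECTIVES[:-1]} | {"speed": 0.10}
speed_w     = {o: 0.10 for o in OBJECTIVES[:-1]} | {"speed": 0.50}
```

Under `oversight_w` the cautious approach scores higher; under `speed_w` the rapid approach does. The substantive question the commentary raises is precisely who in DOD's hierarchy gets to choose the weights.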

The next step for the defense community, therefore, is to understand what these trade-offs look like for AI software, under what circumstances DOD is willing to make trades, and who in DOD’s oversight hierarchy is empowered to adjudicate trade-off decisions. To do this, DOD should leverage ongoing and planned AI projects to address the questions outlined above. In collaboration with DOD, the broader research community should identify and address methodological shortcomings that unnecessarily force DOD to make trade-offs. Requirements definition, as well as testing, verification, and validation, currently require some level of certainty and predictability. As the DIB highlights, DOD needs to adapt current acquisition and testing processes for AI. It remains an open question, however, how the systems engineering methods that underlie these processes should evolve to address AI’s inherent uncertainty. Therefore, in addition to furthering the science of AI, researchers should tackle the common implementation challenges that will impede DOD’s ability to optimally operationalize and field AI-enabled systems. Although future implementation challenges may be significant, the DIB has taken the right first step by proposing objectives for DOD. The next step—developing and implementing AI software that achieves all objectives—is a challenge that systems engineers have faced for decades. Going forward, the defense community must undertake the challenging work of understanding potential trade-offs, identifying strategies to balance competing objectives, and developing new methodologies that enable future AI software to optimally satisfy as many objectives as possible.

Morgan Dwyer is a fellow in the International Security Program and deputy director for policy analysis in the Defense-Industrial Initiatives Group at the Center for Strategic and International Studies in Washington, D.C.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s). © 2019 by the Center for Strategic and International Studies.
All rights reserved.
###
The Center for Strategic and International Studies (CSIS) is a bipartisan, nonprofit organization founded in 1962 and headquartered in Washington, D.C. It seeks to advance global security and prosperity by providing strategic insights and policy solutions to decisionmakers.
