Artificial Intelligence and Autonomy in Space: Balancing Risks and Benefits for Deterrence and Escalation Control

Nancy Hayden, Sandia National Laboratories; Kelsey Abel, Sandia National Laboratories; Marie Arrieta, Sandia National Laboratories; Mallory Stewart, Sandia National Laboratories

Keywords: artificial intelligence, escalation control, space deterrence, space control

Abstract:

An overarching principle common to space-faring nations and industry alike is to maintain freedom of operations in a safe and secure environment, commensurate with national and commercial interests. Deterrence concepts and escalation control play key roles in realizing this principle in the increasingly congested, competitive, and contested space environment. The history of nuclear deterrence demonstrates that deterrence relies on norms of behavior and on signaling that is credible and clearly communicated, while escalation control requires an understanding of feedback systems. The goal of this research is to explore how artificial intelligence employed on critical space systems may affect signaling and escalation control, using a framework based on lessons from nuclear deterrence.
Artificial Intelligence (AI) and autonomous machine learning (e.g., models that update without human intervention) are being pursued as critical enablers in commercial and military programs for space traffic management (STM), routine space operations, space domain awareness (SDA), and space control. For example, AI will be essential for managing mega-constellations of commercial telecommunications satellites in low Earth orbit (LEO), guiding functions such as scheduling and tasking, collision avoidance, and space debris mitigation. AI is also being explored for classifying observations from LEO constellations proposed to serve national security applications, such as persistent overhead coverage and missile defense. Advancements in AI, combined with the increased availability of low-cost, secure cloud storage, have also improved SDA while decreasing costs. As databases grow with the number of objects to track and characterize, companies and countries will employ AI to make timely, cost-effective SDA assessments while reducing the role of the human-in-the-loop.
Even though ultimate decision-making in these applications may never be ceded to AI without a human-in-the-loop, issues that have arisen in terrestrial AI applications will also be present in space deterrence scenarios. In particular, the performance of AI methods (e.g., accuracy, precision, recall, sensitivity, confusion) is of concern for clear and unambiguous signaling; the potential for adversarial attacks and/or deception raises concerns about the vulnerability of AI methods and the credibility of signaling; and the need to understand why AI-informed actions may be taken, and how they are perceived, raises issues of explainability. There are inherent trade-offs among the explainability, performance, and vulnerability of AI methods, creating a challenge for signaling in high-consequence deterrence and escalation control scenarios. A key question emerges: "How might signaling be affected by the use of AI and autonomy, and what could be the unintended effects on escalation control in times of ambiguity and crisis?"
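The performance measures named above (accuracy, precision, and recall/sensitivity) all derive from a classifier's confusion matrix. As a minimal sketch of how they might be computed for a notional SDA threat classifier, assuming a simple binary threat/benign labeling task (the function name and the counts are illustrative, not from the paper):

```python
def classifier_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, and recall (sensitivity)
    from the four cells of a binary confusion matrix:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total        # fraction of all calls that were correct
    precision = tp / (tp + fp)          # how often a "threat" call is right
    recall = tp / (tp + fn)             # fraction of real threats detected
    return accuracy, precision, recall

# Illustrative counts for a classifier labeling tracked objects
# as "threat" vs. "benign" (hypothetical numbers):
acc, prec, rec = classifier_metrics(tp=90, fp=10, fn=5, tn=895)
print(acc, prec, rec)  # 0.985, 0.9, ~0.947
```

A classifier can score high on accuracy while precision or recall remains poor (e.g., when threats are rare), which is one reason a single performance number can be an ambiguous signal.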
To date, there are few, if any, international standards or regulations to guide best practices for choosing AI methods for space operations and for developing a shared understanding of the risks and benefits to strategic stability. This paper presents trade-offs among explainability, performance, and vulnerability in AI methods applied to space control and SDA scenarios, and illustrates, through modeling and simulation results, how choices on these trade-offs might affect deterrence signaling and escalation control in space.

Date of Conference: September 15-18, 2020

Track: Machine Learning for SSA Applications
