A Multi-Agent Trust Framework for Fusing Subjective Opinions with Imperfect Understanding in Space Domain Awareness Using the Scruff AI Framework

Matthew Wilkins, L3Harris; Sanja Cvijic, Charles River Analytics, Inc.; Weston Faber, L3Harris Technologies

Keywords: AI/ML, SDA, Data Trust, Probabilistic Reasoning

Abstract:

It is widely acknowledged, even in the popular press, that the space domain is contested and congested. Owing to the high monetary stakes and the negative impact on critical missions, agents in the space domain rarely accept data from untrusted sources (e.g., commercial and other non-traditional providers). To this end, methods to shorten the timeline for integrating new sensor sources have been proposed based upon tiered approaches to trust, but they do not provide a general formalism for computing with trust. In the meantime, agents continue to rely on “vetted” sensors to independently assess complex events and then make consequential decisions. “Vetted” sensors are typically those that agents control and have calibrated to their own specifications. Therefore, the agent’s belief that correct decisions are being made is shaped strictly by the hard evidence (i.e., metric data plus uncertainty) provided by these sensors.

Unity of effort is ‘the product of successful unified action’ and consists of ‘coordination and cooperation toward common objectives, even if the participants are not necessarily part of the same command or organization.’ The barrier to cooperative decision making is the lack of a mechanism to incorporate soft evidence (i.e., subjective opinions) or perceived trust into the agent’s decision-making process. Therefore, a framework is needed for operational decision-making that maps both hard and soft evidence to belief and certainty of action. To this end, the space domain has yet to take advantage of recent advances in multi-agent trust frameworks in which trust is no longer implicit but rather explicitly defined. These multi-agent frameworks strive to compute trust as the probability of a positive outcome for complex events involving multiple responsible agents with varying subjective opinions and imperfect understanding.

Following the work of Wang and Singh in “Formal Trust Model for Multiagent Systems,” this paper utilizes an evidence-based approach to establish one agent’s trust in another, which can be modeled in probabilistic terms as the probability, p, of a positive experience with the other agent. The uninformed prior is a uniform distribution over [0,1] and zero elsewhere. A probability density function of the probability of a positive experience, f(p), is defined such that its integral over [0,1] equals 1, and f(p) is conditionally updated given the agent’s table of outcome evidence. The agent’s current trust level therefore corresponds to increasing deviation from the uninformed (i.e., uniform) prior distribution. An agent’s trust is also a function of how strongly the agent believes a positive experience will occur; this certainty can be defined as a functional of f(p) that measures the mean absolute deviation of f(p) from the uniform prior. From this, an agent’s trust space in the variable p is modeled as a three-dimensional space of reals (b, d, u) in [0, 1] representing the weights assigned to belief, disbelief, and uncertainty (i.e., 1 – certainty): T = (b, d, u), where b + d + u = 1, and unity and zero certainty in trust space represent perfect knowledge and complete ignorance, respectively.
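As a concrete illustration, the sketch below shows one way the evidence-to-trust mapping described above might be computed in Julia (the language of the Scruff package used later in this paper). It assumes the conditioned density f(p | r, s) takes the Beta(r + 1, s + 1) form obtained by updating the uniform prior on r positive and s negative outcomes, and that certainty is half the integrated absolute deviation of f from the uniform prior; the function names are ours, not Wang and Singh’s or Scruff’s.

```julia
# Minimal sketch of the evidence -> trust mapping, under the assumptions above.
using Distributions   # Beta distribution
using QuadGK          # numerical integration

# Certainty: how far the conditioned density has moved away from the uniform prior.
function certainty(r::Real, s::Real)
    f = Beta(r + 1, s + 1)   # assumed form of f(p | r, s)
    0.5 * quadgk(p -> abs(pdf(f, p) - 1.0), 0.0, 1.0)[1]
end

# Map evidence (r, s) to the trust triple (b, d, u) with b + d + u = 1.
function trust_from_evidence(r::Real, s::Real)
    @assert r + s > 0 "evidence space requires r + s > 0"
    α = r / (r + s)          # fraction of positive outcomes
    c = certainty(r, s)
    b = α * c                # belief
    d = (1 - α) * c          # disbelief
    u = 1 - c                # uncertainty
    (b, d, u)
end

# Example: a short history (2 positive, 1 negative) versus a longer one (20, 10)
# with the same outcome ratio yields the same belief/disbelief split but much
# smaller uncertainty u.
trust_from_evidence(2, 1)
trust_from_evidence(20, 10)
```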

An evidence space is modeled, for convenience, as a two-dimensional space of reals corresponding to the numbers of positive (r) and negative (s) outcomes, with r + s > 0, where these outcomes are drawn from an agent’s table of evidence. Accordingly, the onus is on how to map between trust and evidence with respect to agent behavior. The transformation between the evidence and trust spaces is a bijection Z(r, s) = (b, d, u), which allows us to compute (b, d, u) as a function of (r, s). There is no closed form for the inverse of Z, for which Wang and Singh provide an iterative algorithm.
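Wang and Singh’s iterative algorithm for the inverse of Z is not reproduced here. Purely as an illustration, the sketch below inverts the mapping numerically by bisecting on the total evidence t = r + s, under the assumption (which holds for the certainty functional sketched earlier) that certainty increases monotonically with t when the positive-outcome fraction α = r / (r + s) is held fixed; the function names are again hypothetical.

```julia
# Illustrative numerical inverse of the evidence-to-trust map (not Wang and
# Singh's exact iterative algorithm): given a trust triple (b, d, u), recover
# evidence (r, s) by bisecting on the total evidence t = r + s.
using Distributions, QuadGK

# Certainty of the assumed Beta-form trust density, as in the previous sketch.
certainty(r, s) = 0.5 * quadgk(p -> abs(pdf(Beta(r + 1, s + 1), p) - 1.0), 0, 1)[1]

function evidence_from_trust(b, d, u; tmax = 1e4, tol = 1e-6)
    @assert isapprox(b + d + u, 1.0; atol = 1e-8) "trust weights must sum to 1"
    @assert b + d > 0 "certainty must be positive to recover evidence"
    α = b / (b + d)                 # implied positive-outcome fraction
    c_target = 1 - u                # implied certainty
    lo, hi = tol, float(tmax)       # bracket on total evidence t = r + s
    while hi - lo > tol
        t = (lo + hi) / 2
        if certainty(α * t, (1 - α) * t) < c_target
            lo = t                  # not enough evidence yet: increase t
        else
            hi = t
        end
    end
    t = (lo + hi) / 2
    (α * t, (1 - α) * t)            # recovered (r, s)
end

# Round trip against the previous sketch:
# evidence_from_trust(trust_from_evidence(20, 10)...) ≈ (20, 10)
```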

Having defined a mechanism for going from evidence reports to trust distributions, it may be desirable, once the posterior trust distribution has been generated, to notify the agent of any “significant” change in trust implied by the supplied evidence. The question becomes how to decide quantitatively whether the change between the prior and posterior trust distributions is statistically meaningful. If an agent decides to issue an attitude change detection or conjunction alert, how does it assess whether that alert is within a desired false-alarm percentile? Fortunately, the authors have previously developed a truncated Sequential Probability Ratio Test (TSPRT) method that relates information divergence to Type I and Type II errors.
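The authors’ TSPRT formulation is not detailed in this abstract, so the following is only a generic truncated Wald-style sequential probability ratio test, included to show how Type I and Type II error targets translate into decision thresholds on a stream of positive/negative outcome reports; the hypothesized rates p0 and p1 and the truncation length nmax are illustrative parameters, not values from the paper.

```julia
# Generic truncated Wald SPRT sketch (not the authors' TSPRT): decide between
# two hypothesized positive-outcome rates, p0 (H0) and p1 (H1), from a stream
# of 0/1 outcome reports. The thresholds are set by the targeted Type I error
# α (false alarm) and Type II error β (missed detection), and the test is
# truncated after nmax samples.
function truncated_sprt(outcomes::AbstractVector{<:Integer}, p0, p1;
                        α = 0.05, β = 0.05, nmax = 50)
    upper = log((1 - β) / α)        # cross above: accept H1
    lower = log(β / (1 - α))        # cross below: accept H0
    llr = 0.0                       # accumulated log-likelihood ratio
    for (n, x) in enumerate(outcomes)
        llr += x == 1 ? log(p1 / p0) : log((1 - p1) / (1 - p0))
        llr >= upper && return (:accept_H1, n)
        llr <= lower && return (:accept_H0, n)
        n >= nmax && break
    end
    # Truncation: fall back to whichever hypothesis the accumulated evidence favors.
    (llr > 0 ? :lean_H1 : :lean_H0, min(length(outcomes), nmax))
end

# Example with illustrative rates: is the positive-experience rate closer to 0.5 or 0.9?
truncated_sprt([1, 1, 0, 1, 1, 1, 1, 0, 1, 1], 0.5, 0.9)
```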

This paper explores a formal trust framework in which a “decision-making” agent must consider its trust in multiple “information” agents independently observing the same type of space domain event. We leverage Scruff, developed by Charles River Analytics, an AI framework for building agents that sense, reason, and learn in the world using a variety of models. Scruff aims to integrate many kinds of models in a coherent framework, provide flexibility in spatiotemporal modeling, and provide tools to compose, share, and reuse models and model components. Scruff is provided as a Julia package and is licensed under the BSD-3-Clause License. In our scenario, the decision-making agent is presented with potentially conflicting information from two information agents that have observed conjunctions over histories of differing length. The proposed framework will not only enable an agent to make more trustworthy SDA decisions in the presence of conflicting information but also enable cooperation and coordination based on statistical measures.

Date of Conference: September 17-20, 2024

Track: Space Domain Awareness
