Emerging capabilities in science and engineering are enabling a future of adaptive, individualized systems that account for variability in an individual’s capabilities and limitations in real time to achieve greater individual performance. This individualized, adaptive approach is critical because it can improve individual performance by accommodating greater variability in behavior. However, if we want to realize the full human-agent teaming vision, it will not be sufficient for technology to improve the performance of one human or agent. When considering team performance, it is well understood that team outcomes are not simply a sum or an average of the parts. Instead, emergent properties result from the interaction of the components of the system and cannot be reduced to, or described wholly in terms of, the elementary components considered in isolation. Teams can synergistically combine the attributes of team members to produce outcomes beyond the capacity of any one member or of the pooled output of all members. Similarly, ineffective team processes and states often emerge and lead to team failures, regardless of the individual performance of each team member (Salas et al. 2009).
When human teams fail, the breakdowns are commonly due to problems with team states and processes: insufficient communication, misunderstanding of team goals, undefined team responsibilities, lack of shared mental models, and conflict, for example (Kohn et al. 2000; Salas et al. 2007). The team-focused training and development literature suggests that the best human teams can overcome external demands (e.g., distributed environments, lack of resources, time pressures) and some individual performance problems through a focus on effective team processes: they may not perform best on every task, but over time they will outperform teams lacking effective processes (Weaver et al. 2014). However, teams composed of humans, intelligent software agents, embodied agents, and networked sensors add complexity to the concept of emergent properties that may not be completely understood today. In cognitive and behavioral processes such as decision making and coordination, humans and agents will be working in disparate dimensions (time, space, world views, representations, mental models, etc.), yet will need to synchronize seamlessly for collective action. For example, intelligent agents will process information, reason, and make decisions at scales beyond those of humans in both time and magnitude; yet we will want to include humans in the decision-making loop for many, if not most, decisions. Similarly, intelligent agents will learn and adapt far more rapidly than their human counterparts, but may possess less flexibility and range in what they can learn. How will we capitalize on the individual advantages of both humans and agents while simultaneously enhancing the performance of the collective group?
Not only will we need methods to bridge diverse capabilities, processes, and beliefs, but much of what we know about critical states and processes in human teams may not apply. The very notion of a shared mental model among humans and intelligent agents raises significant scientific and philosophical questions. Shared understanding of team responsibilities and goals, as it is practiced in human teams, assumes intelligent agents with human-like intelligence; however, non-human teammates will likely span the spectrum of machine intelligence, from passive sensors able only to sense and communicate to advanced machine learning algorithms that adapt and learn in real time. Breakthroughs in representation learning and explainability should facilitate human understanding of machine reasoning, but are shared mental models like those targeted in human teams the right approach to human-agent teaming? The very nature of these emergent properties is fundamentally different from our conceptualization today, and it is naive to assume, without concerted scientific focus and effort, that human-agent team cohesion, coordination, and collective performance will develop in ways similar to human teams. So, what are the critical states and processes for effective performance in human-agent teams, and how do we use individualized and adaptive technologies to elicit these emergent team processes?
Regarding “Breakthroughs in representation learning and explainability should facilitate human understanding of machine reasoning, but are shared mental models like those targeted in human teams the right approach to human-agent teaming?”: I don’t have a direct answer, but I do know that technology developers who believe they are helping teamwork are not always clear about what teamwork is.