We are presently witnessing the diffusion of intelligent technologies into every facet of modern life. Digital personal assistants, such as Google Home and Alexa, leverage a suite of internet-based sources to provide users access to information and entertainment through a natural language interface. Phones, watches, and other wearable devices can provide detailed insights into behavior and physiology during everyday activities. Self-driving cars appear to be on the verge of widespread use. To enhance human-agent teaming, we must combine the capability for real-time sensing and prediction of the states of individual team members (as well as of the whole team) with scientific advancements in understanding the interdependencies among individual and team states and processes in human-agent teams, so that precise technological interventions can be deployed. Here we draw on key findings from research into artificial intelligence/machine learning and adaptive control architectures to identify a foundation for future research questions on adaptively applying individualized technologies to enhance human-agent teamwork.
Artificial Intelligence & Machine Learning.
The ongoing revolution in machine learning and artificial intelligence, precipitated by deep learning, points to a future with the capability to individualize technology at the point of need and to adapt during dynamic and complex events (LeCun et al. 2015). The highly publicized successes of deep learning span visual perception (Krizhevsky et al. 2012), speech recognition (Hinton et al. 2012), and sequential decision-making (Mnih et al. 2015; Silver et al. 2016), but machine learning technologies can more broadly recognize facial expressions (Ranjan et al. 2018), generate natural language descriptions of visual input (Socher 2014), and allow intelligent agents to learn the preferences of humans (Warnell et al. 2017). These latter modalities offer a vision for how machine learning can be used to interface with humans and even other intelligent agents in a teaming scenario. Simultaneously, industry-led efforts are underway to make deep learning-enabled devices deployable in the real world through embedded hardware and cloud computing. Taken together, many of the scientific capabilities required for real-time inference of motivations and prediction of behavior exist today. However, much work remains in developing predictive algorithms for individual, team, and external (e.g., societal, organizational) states and behaviors. With each added layer, complexity and uncertainty make accurate and timely prediction more challenging.
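As a minimal illustration of what real-time state inference of this kind might look like, the sketch below trains a simple classifier on hypothetical physiological features and scores a simulated live feed; the features, the binary overload label, and all numbers are illustrative assumptions, not a validated model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: rows are 1-second windows of physiological
# features (e.g., mean heart rate, heart-rate variability, gaze dispersion);
# labels are a binary operator state (0 = nominal, 1 = overloaded).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def infer_state(feature_window: np.ndarray) -> float:
    """Return the estimated probability that the operator is overloaded."""
    return float(model.predict_proba(feature_window.reshape(1, -1))[0, 1])

# Simulated real-time loop: score each incoming feature vector as it arrives.
for t in range(5):
    features = rng.normal(size=3)  # stand-in for a live sensor feed
    p_overload = infer_state(features)
    print(f"t={t}: P(overloaded) = {p_overload:.2f}")
```

In a deployed system, the same scoring loop would feed downstream adaptation logic rather than a print statement.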
Control Architectures for Continuously Adapting Human-Agent Teams.
Another critical area for realizing this individualized human-technology approach to enhancing human-agent teaming is embedding the capabilities to infer motivations, predict behavior, and reason about the environment into a closed-loop system that can initiate individualized interventions at the right time, improving team performance by leveraging the strengths and offsetting the limitations of each agent, whether human or autonomous. For instance, it has long been understood that, though autonomy can execute predictable, well-defined procedures with superior speed and reliability, humans are far superior at tasks that require inductive reasoning and adaptation to novel and/or changing information (Fitts 1951; Sheridan 2000; Cummings 2014). As a result, system integrators have developed a wide range of approaches to supplement autonomy with human inputs and thereby increase the resilience and robustness of performance within complex, dynamic, and uncertain environments. Adaptive schemes have been developed to enable active management of the balance of inputs from human and autonomous agents through user selection (Crandall and Goodrich 2001), based on cost-benefit estimates of the performance of the agents (Sellner et al. 2006), or by enabling the autonomy to periodically query the operator for assistance (Fong et al. 2003a, 2003b). Unfortunately, the majority of these approaches have succeeded only in limited and controlled contexts and have not been widely adopted for real-world use. Moreover, with relatively few exceptions, these approaches have treated the human as the apex of the command hierarchy (cf. Billings 1991; Sheridan 1992; Fong et al. 2003a; Abbink et al. 2012) rather than as a fully collaborative agent (Woods and Branlat 2010). We join those who have argued that adherence to this premise has limited how well human inputs have been integrated with autonomous systems (Woods 1985; Woods and Branlat 2010; Cummings and Clare 2015).
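To make the cost-benefit style of arbitration concrete, the following sketch routes each task to the human or the autonomy according to a running estimate of expected net benefit; the accuracies, costs, and task values are our illustrative assumptions, not the scheme of any cited system.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    expected_accuracy: float  # running estimate of task success probability
    response_cost: float      # e.g., time or workload cost of engaging the agent

def allocate(task_value: float, human: Agent, autonomy: Agent) -> Agent:
    """Pick the agent with the higher expected net benefit for this task."""
    def net_benefit(agent: Agent) -> float:
        return task_value * agent.expected_accuracy - agent.response_cost
    return max((human, autonomy), key=net_benefit)

human = Agent("human", expected_accuracy=0.95, response_cost=2.0)
autonomy = Agent("autonomy", expected_accuracy=0.80, response_cost=0.1)

# Low- and mid-stakes tasks go to the fast-but-imperfect autonomy;
# high-stakes tasks justify the cost of querying the human.
for task_value in (1.0, 5.0, 20.0):
    chosen = allocate(task_value, human, autonomy)
    print(f"task value {task_value:>4}: assign to {chosen.name}")
```

A closed-loop system would update the accuracy and cost estimates online from observed performance rather than fixing them in advance.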
We argue that the failure of traditional systems-level design approaches is due, at least in part, to failing to fully account for the dynamic strengths and vulnerabilities of the individual agents. More recent efforts have pursued human-automation interactions that capture a more authentic essence of natural teaming behavior (Woods and Branlat 2010; Lyons 2013; Chen and Barnes 2014). In our own work, we recently proposed the Privileged Sensing Framework (PSF), an evolved approach that treats the human as a special class of sensor rather than as the absolute command arbiter (Marathe et al. 2017). This approach is based on the concept of appropriately “privileging” information during the process of integration, by bestowing advantages, special rights, or immunities based on the characteristics of each individual agent, the task context, and/or the performance goals. One recent study demonstrated that this approach to enabling natural teaming behavior allowed a team of humans and intelligent agents to work together to efficiently label targets of interest in large image datasets (Bohannon et al. 2016). Building on this example to include broader application spaces and more dynamic intelligent agents will require continued research in a variety of areas.
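As one simplified way to picture the privileging concept (the full PSF in Marathe et al. 2017 goes well beyond this), the sketch below fuses human and agent label probabilities with a context-dependent privilege weight, so the human's judgment dominates when the context favors it; the weighting rule and all numbers are our illustrative assumptions.

```python
import numpy as np

def privileged_fusion(p_human: np.ndarray,
                      p_agent: np.ndarray,
                      human_privilege: float) -> np.ndarray:
    """Fuse two probability vectors over candidate labels.

    human_privilege in [0, 1] encodes how much the current context
    (e.g., task novelty, estimated operator workload) favors the human.
    """
    fused = human_privilege * p_human + (1.0 - human_privilege) * p_agent
    # Normalize (a no-op for proper distributions, but safe if the
    # inputs are unnormalized confidence scores).
    return fused / fused.sum()

# Hypothetical 3-class target-labeling example.
p_human = np.array([0.70, 0.20, 0.10])  # human leans toward class 0
p_agent = np.array([0.10, 0.30, 0.60])  # agent leans toward class 2

# Novel imagery: privilege the human's inductive judgment.
print(privileged_fusion(p_human, p_agent, human_privilege=0.8))
# Routine imagery with a fatigued operator: privilege the agent.
print(privileged_fusion(p_human, p_agent, human_privilege=0.2))
```

The design choice worth noting is that privilege is a function of context rather than a fixed hierarchy, which is what distinguishes this style of integration from treating the human as the permanent command arbiter.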
Call for Comments:
- What other advancements will enable the implementation of individualizable and adaptive team-enhancement technologies?
- What barriers will prevent the implementation of individualizable and adaptive team-enhancement technologies?
- Other related comments
Comments
Estimating changes in internal (overt and covert) states is a critical issue in modeling high-performance Human-Agent Teaming (HAT). Dynamic environments may lead to significant changes in cognitive states and poor-quality data. Although past researchers have identified a number of overt states, knowledge and use of feature-based representations for describing covert states is very limited. To effectively discover covert-state drifts, we may leverage machine learning techniques such as fuzzy neural networks and deep neural networks to identify the covert states of humans and intelligent agents, respectively. Understanding the covert states of both humans and machines can enable better fusion and more effective cooperation between humans and machines.
This is a great point. Any effort to close the loop on such a system (i.e., adaptively deploying individualized technology to target individual-level and team-level teamwork processes) requires robust estimation and detection techniques for what are almost certainly unobservable states. Better understanding of how these individual- and team-level teamwork processes manifest in measurable domains (e.g., heart rate or speech patterns of individuals, interactions between team members) can only sharpen our picture of the relevant feature space. I agree that machine learning techniques will be an essential aspect of any solution. Specifically, I suspect that “deep representations” will buy robustness in the detection problem. It may be that learned representations can even shed light on the physiological and sociological processes involved, leading to bi-directional scientific discovery!
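To ground that intuition, here is a minimal sketch of one way covert-state drift might be flagged, assuming a learned scalar covert-state feature is already available; the simulated stream, window sizes, and threshold are illustrative assumptions rather than a validated detector.

```python
import numpy as np

def drift_score(baseline: np.ndarray, recent: np.ndarray) -> float:
    """Standardized mean shift between a baseline window and a recent window
    of a learned covert-state feature; larger values suggest state drift."""
    pooled_std = np.sqrt(0.5 * (baseline.var() + recent.var())) + 1e-8
    return abs(recent.mean() - baseline.mean()) / pooled_std

rng = np.random.default_rng(1)
# Simulated stream of a scalar covert-state feature: stable, then shifted.
stream = np.concatenate([rng.normal(0.0, 1.0, 300),
                         rng.normal(1.5, 1.0, 100)])

WINDOW, THRESHOLD = 50, 1.0  # illustrative choices
baseline = stream[:WINDOW]
for t in range(WINDOW, len(stream) - WINDOW, WINDOW):
    recent = stream[t:t + WINDOW]
    score = drift_score(baseline, recent)
    flag = "DRIFT" if score > THRESHOLD else "ok"
    print(f"samples {t}-{t + WINDOW}: score={score:.2f} [{flag}]")
```

In practice the scalar feature would come from a learned representation of physiological or behavioral signals, and a richer distributional test could replace the simple mean-shift statistic.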