3.3 Individualized, Adaptive Technologies for Teamwork: Scientific Questions

Arwen H. DeCostanza*, Amar R. Marathe*, Addison Bohannon*, A. William Evans*, Edward T. Palazzolo**, Jason S. Metcalfe*, and Kaleb McDowell*
*Army Research Laboratory, **Army Research Office

While there is growing evidence that we can employ individualized approaches to enhance human-agent teamwork in the complex environments envisioned in the future, there are limited examples of true human-autonomy teams involving multiple humans and multiple intelligent agents. Additionally, much of the current research using individualized technologies focuses on optimizing individual performance within the team without consideration for overall team emergent properties and performance. In the following, we propose some of the core scientific questions addressing interactions between humans and agents that are critical to the future of human-agent teaming.

1. Shared mental models underlie the effective communication and coordination of human teams, and similar concepts have emerged in multiagent systems both organically and by inspiration from human teaming. In complex teams of the future, will it be necessary to maintain a shared mental model amongst teams of humans and intelligent agents? If so, how do we operationalize “shared” mental models in these complex teams? How will human-agent teams develop and manage these shared mental models of the problem, environment, and other team members in order to facilitate communication and rapid mission planning and adaptation?

2. Effective teams capitalize on a rich knowledge of each other’s strengths, weaknesses, and patterned behavior to inform role assignment. In a future human-agent teaming scenario in which intelligent agents can instantly download new behavior models, no coherent team may exist for longer than a single mission or subgoal. Is it possible to rapidly achieve the effect of rapport with new team members (e.g., anticipate their actions or recognize their strengths and weaknesses)? What aspects of rapport-building and trust are most critical in these evolving teams, and how do we develop these in both humans and agents?

3. A rich body of literature connects particular teamwork processes such as communication, shared mental models, and coordination with effective team performance in human teams. Will models of the critical emergent team processes generalize to human-agent teams? Will the same emergent team processes be critical in human-agent teams, or will other novel team processes emerge? How will such properties be validated and measured?

4. Future human-agent teams must contend with variability in the most general sense. Human team members possess diverse capabilities and personalities, each of which is subject to significant variability. In addition, intelligent agents will manifest as unmanned ground and aerial vehicles, networked knowledge bases, and personal assistants, constantly learning and adapting. How do we incorporate complex human and agent variability into closed-loop systems targeted toward team-level performance? What novel approaches are critical to using individualized technologies for the purposes of optimizing the human-agent team? How do these approaches use variability over multiple timescales to enable the optimization of team performance both immediately (e.g., a single task) and over long periods of time (multiple missions, the life cycle of the team)?
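
As one illustration of what such a closed-loop approach could look like, the sketch below folds rolling estimates of each teammate's state, tracked at two timescales, into a team-level task allocation decision. It is a minimal sketch under stated assumptions: the names (TeammateState, allocate, uav_agent), the choice of within-task workload and across-mission success as the two timescales, and the greedy allocation rule are all illustrative, not a prescribed design.

```python
# Minimal sketch (illustrative names and rules throughout) of closed-loop,
# team-level adaptation driven by individual variability at two timescales.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class TeammateState:
    """Rolling estimates for one teammate, human or agent."""
    fast: deque = field(default_factory=lambda: deque(maxlen=10))   # within-task, e.g., workload probes
    slow: deque = field(default_factory=lambda: deque(maxlen=200))  # across missions, e.g., task success

    def update(self, workload: float, success: float) -> None:
        self.fast.append(workload)
        self.slow.append(success)

    def capacity(self) -> float:
        """Crude fused estimate: long-run reliability discounted by current load."""
        load = sum(self.fast) / len(self.fast) if self.fast else 0.5
        reliability = sum(self.slow) / len(self.slow) if self.slow else 0.5
        return reliability * (1.0 - load)

def allocate(tasks: list, team: dict) -> dict:
    """Greedy team-level allocation: give each task to the teammate with the most
    spare capacity. Re-running this as new state estimates arrive closes the loop."""
    spare = {name: member.capacity() for name, member in team.items()}
    assignment = {}
    for task in sorted(tasks, key=lambda t: -t["priority"]):
        best = max(spare, key=spare.get)
        assignment[task["name"]] = best
        spare[best] -= task["demand"]
    return assignment

# One loop iteration: a human under rising workload and a hypothetical UAV agent.
team = {"human_1": TeammateState(), "uav_agent": TeammateState()}
team["human_1"].update(workload=0.8, success=0.9)
team["uav_agent"].update(workload=0.2, success=0.7)
print(allocate([{"name": "scan_sector", "priority": 2, "demand": 0.3},
                {"name": "report_status", "priority": 1, "demand": 0.1}], team))
```

In this toy loop both tasks shift to the agent because the human's short-timescale workload estimate has climbed; a real system would also have to weigh the long-timescale effects (skill retention, trust, team life cycle) that this question raises.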

Call for Comments:

  • What are other scientific questions critical to the future development of individualizable and adaptive team-enhancement technologies?
  • Additional related comments

Comments

  1. These are great questions, and I agree that they are the top ones. Other questions might include:

    1) With regard to question #4, variability over time is also an issue. Examples include sleep-related degradation of human cognition, which can be restored by sleep; stress and resilience on the part of the humans; and changes in technology that can lead to trust (or not) in different, new technologies. What is the fault tolerance of the system (humans and tech/agents) in case of momentary or long-term problems?

    2) How are usability, and changes in usability (including the human-centeredness of the tech and agents), going to impact all the other questions?

    3) What should and can be done about anticipating the normalization of deviance of either the human or the technology parts of the team? Related, but a larger question: how does culture (organizational, team, national) impact technology use, abuse, and trust, as well as expectations of what the tech can and can’t do? How can the technology help overcome human tendencies to (sometimes) normalize deviance?

  2. On shared mental models (Q1, Q3): Based on recent advances in explainable AI and complex task domains where ML results in the emergence of high-level human representations (e.g., in a first-person shooter: Jaderberg et al., 2019, Science), there may be areas where humans will construct mental models in common with agents (“shared” through common provenance but not actual communication). While AI mental models can be probed by relating unit activity to higher-level constructs, probing these models in humans can be time-consuming or distracting, or the models may simply not be available to introspection. We foresee a challenge in the joint development of models at timescales that are predictive of human and agent behaviors: there are cases where humans may perceive contextual shifts and change mental models faster than agents, and situations where agents may be able to simultaneously entertain and weigh evidence for multiple mental models at much higher speeds than humans. This means that shared human-agent mental models may only be possible at the intersection of tasks where humans and agents have the same representational needs due to similarities in the information requirements and temporal structure of their tasks.

    – Lixiao Huang (ASU), James McIntosh (Columbia), and Alfred Yu (CCDC ARL)
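
The point above about agents simultaneously entertaining and weighing evidence for multiple mental models can be made concrete with a small Bayesian sketch. Everything in it is a hypothetical illustration: the three candidate context models, their likelihood tables, and the observation names are invented for the example.

```python
# Illustrative sketch: an agent carries several candidate mental models of the
# situation in parallel and re-weighs them as evidence arrives.
CANDIDATE_MODELS = {
    # model name -> assumed likelihood of each observation under that model
    "patrol":   {"vehicle_sighted": 0.10, "radio_silence": 0.60, "civilians_present": 0.70},
    "ambush":   {"vehicle_sighted": 0.50, "radio_silence": 0.85, "civilians_present": 0.10},
    "evacuate": {"vehicle_sighted": 0.30, "radio_silence": 0.20, "civilians_present": 0.90},
}

def update_beliefs(prior: dict, observation: str) -> dict:
    """One Bayesian update: weigh each candidate model by how well it predicts
    the new observation, then renormalize."""
    posterior = {m: prior[m] * CANDIDATE_MODELS[m][observation] for m in prior}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

beliefs = {m: 1.0 / len(CANDIDATE_MODELS) for m in CANDIDATE_MODELS}  # uniform prior
for event in ["radio_silence", "vehicle_sighted"]:                    # incoming evidence stream
    beliefs = update_beliefs(beliefs, event)
    print(event, {m: round(p, 2) for m, p in beliefs.items()})
```

A human teammate will often commit to one mental model at a time and switch only when the leading hypothesis changes; the agent can hold all three in parallel at machine speed, which is exactly the timescale mismatch the comment identifies.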

  3. On rapid changes in agent behavior and team composition (Q2): Predictability of behavior and adoption of well-defined roles is a critical aspect of team performance. We envision a need for constrained rates of change to accommodate human expectations. For example, running changes in agent behavior and functionality will likely produce violations of trust, while more graded enhancements to agent performance are more likely to be accepted (e.g., 5% faster scanning of visual scenes). Discrete scheduled updates, update logs, and learning progress reports may help human users understand the rate of change of agent capabilities and enhance predictability. On the flip side, humans can be extremely sensitive and flexible to changing task requirements in the face of dynamic context switches, and it may be beneficial to quickly convey this new contextual information to other agents. While some overt behavioral indicators may be easy to track, agents may also need to consider a wider variety of information sources, such as physiological signals, to rapidly determine human intent and mission status in order to maintain awareness while team dynamics change. The same applies to agents: abrupt changes in policy may be necessary to handle new situations, but these shifts must be properly communicated to human teammates to preserve trust and predictability.

    – Lixiao Huang (ASU), James McIntosh (Columbia), and Alfred Yu (CCDC ARL)
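
The "discrete scheduled updates with an update log" idea from the comment above can also be sketched. The code below is a hypothetical mechanism with assumed names and an assumed 10% per-update cap: learned capability changes are staged, applied only at scheduled points, clamped to a graded step size, and logged so human teammates can track how the agent is changing.

```python
# Hypothetical sketch: gated, logged agent capability updates to keep the rate
# of change of agent behavior predictable for human teammates.
from dataclasses import dataclass
from datetime import datetime, timezone

MAX_RELATIVE_CHANGE = 0.10  # assumption: no single update may shift a capability by more than 10%

@dataclass
class UpdateRecord:
    timestamp: str
    capability: str
    old_value: float
    new_value: float
    note: str

class AgentCapabilities:
    def __init__(self, initial: dict):
        self.values = dict(initial)  # e.g., {"scan_speed": 1.0}
        self.log = []                # human-readable history of UpdateRecord entries
        self.pending = {}            # staged changes awaiting the next scheduled update

    def propose(self, capability: str, new_value: float) -> None:
        """Stage a learned improvement; nothing changes until the scheduled update."""
        self.pending[capability] = new_value

    def scheduled_update(self, note: str = "") -> list:
        """Apply pending changes, clamping each to a graded step, and log them."""
        applied = []
        for cap, target in self.pending.items():
            old = self.values[cap]
            step = abs(old) * MAX_RELATIVE_CHANGE
            new = max(old - step, min(old + step, target))  # graded, not running, change
            self.values[cap] = new
            record = UpdateRecord(datetime.now(timezone.utc).isoformat(), cap, old, new, note)
            self.log.append(record)
            applied.append(record)
        self.pending.clear()
        return applied

# Example: a large learned jump in scan speed is delivered as a capped, logged step.
agent = AgentCapabilities({"scan_speed": 1.0})
agent.propose("scan_speed", 1.5)
for rec in agent.scheduled_update(note="post-mission learning consolidation"):
    print(f"{rec.capability}: {rec.old_value:.2f} -> {rec.new_value:.2f} ({rec.note})")
```

The same log could carry the learning progress reports the comment mentions; what the mechanism deliberately avoids is pushing behavior changes mid-mission.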
