3.3 Individualized, Adaptive Technologies for Teamwork: Scientific Questions

Arwen H. DeCostanza*, Amar R. Marathe*, Addison Bohannon*, A. William Evans*, Edward T. Palazzolo**, Jason S. Metcalfe*, and Kaleb McDowell*
*Army Research Laboratory, **Army Research Office

While there is growing evidence that individualized approaches can enhance human-agent teamwork in the complex environments envisioned for the future, there are few examples of true human-autonomy teams involving multiple humans and multiple intelligent agents. Additionally, much of the current research on individualized technologies focuses on optimizing individual performance within the team, with little consideration of emergent team properties or overall team performance. In the following, we propose some of the core scientific questions about interactions between humans and agents that are critical to the future of human-agent teaming.

1. Shared mental models underlie the effective communication and coordination of human teams, and similar concepts have emerged in multiagent systems both organically and by inspiration from human teaming. In complex teams of the future, will it be necessary to maintain a shared mental model amongst teams of humans and intelligent agents? If so, how do we operationalize “shared” mental models in these complex teams? How will human-agent teams develop and manage these shared mental models of the problem, environment, and other team members in order to facilitate communication and rapid mission planning and adaptation?


2. Effective teams capitalize on a rich knowledge of each other’s strengths, weaknesses, and patterned behavior to inform role assignment. In a future human-agent teaming scenario in which intelligent agents can instantly download new behavior models, no coherent team may exist for longer than a single mission or subgoal. Is it possible to rapidly achieve the effect of rapport with new team members (e.g., anticipate their actions or recognize their strengths and weaknesses)? What aspects of rapport-building and trust are most critical in these evolving teams, and how do we develop these in both humans and agents?


3. A rich body of literature connects particular teamwork processes such as communication, shared mental models, and coordination with effective team performance in human teams. Will models of the critical emergent team processes generalize to human-agent teams? Will the same emergent team processes be critical in human-agent teams, or will other novel team processes emerge? How will such properties be validated and measured?


4. Future human-agent teams must contend with variability in the most general sense. Human team members possess diverse capabilities and personalities, each of which is subject to significant variability. In addition, intelligent agents will manifest as unmanned ground and aerial vehicles, networked knowledge bases, and personal assistants, all constantly learning and adapting. How do we incorporate complex human and agent variability into closed-loop systems targeted toward team-level performance? What novel approaches are critical to using individualized technologies to optimize the human-agent team? How can these approaches use variability over multiple timescales to optimize team performance both immediately (e.g., within a single task) and over longer periods of time (e.g., multiple missions, the life cycle of the team)? (A minimal, illustrative sketch of one such two-timescale closed loop follows these questions.)

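The sketch below is one minimal illustration of the two-timescale closed loop raised in question 4: a fast inner loop that reallocates tasks within a mission from momentary estimates of each member's state, nested inside a slow outer loop that updates longer-term member profiles across missions. The member profiles, fatigue model, and greedy allocation rule are hypothetical placeholders chosen for brevity, not a validated model of team performance.

    """Illustrative sketch of a two-timescale closed loop for human-agent teams.

    All member states, profiles, and the allocation rule are hypothetical
    placeholders; the goal is only to show fast (within-mission) adaptation
    nested inside slow (across-mission) profile updates.
    """
    from dataclasses import dataclass
    import random


    @dataclass
    class MemberProfile:
        """Slowly varying picture of a team member (human or agent)."""
        name: str
        skill: dict           # e.g., {"search": 0.8, "navigate": 0.6}
        fatigue_rate: float   # hypothetical per-task fatigue accumulation


    def momentary_capacity(profile, fatigue):
        """Fast-timescale estimate: nominal skill discounted by current fatigue."""
        return {task: s * max(0.0, 1.0 - fatigue) for task, s in profile.skill.items()}


    def allocate(task, capacities):
        """Greedy within-mission allocation: assign the task to the member whose
        momentary capacity for it is highest (a stand-in for a real team-level
        optimizer)."""
        return max(capacities, key=lambda member: capacities[member].get(task, 0.0))


    def run_mission(profiles, tasks):
        """Fast loop: re-estimate capacities and reallocate on every task."""
        fatigue = {p.name: 0.0 for p in profiles}
        performance = []
        for task in tasks:
            caps = {p.name: momentary_capacity(p, fatigue[p.name]) for p in profiles}
            owner = allocate(task, caps)
            score = caps[owner][task] * random.uniform(0.8, 1.2)   # noisy outcome
            performance.append((task, owner, score))
            owner_profile = next(p for p in profiles if p.name == owner)
            fatigue[owner] += owner_profile.fatigue_rate
        return performance


    def update_profiles(profiles, performance, lr=0.1):
        """Slow loop: nudge long-term skill estimates toward observed outcomes."""
        for task, owner, score in performance:
            prof = next(p for p in profiles if p.name == owner)
            prof.skill[task] += lr * (score - prof.skill[task])


    if __name__ == "__main__":
        team = [
            MemberProfile("human_1", {"search": 0.9, "navigate": 0.5}, fatigue_rate=0.05),
            MemberProfile("ugv_1",   {"search": 0.4, "navigate": 0.8}, fatigue_rate=0.0),
        ]
        for mission in range(3):                      # slow loop: once per mission
            log = run_mission(team, ["search", "navigate", "search"])
            update_profiles(team, log)
            print(f"mission {mission}:", log)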

Call for Comments:

  • What are other scientific questions critical to the future development of individualizable and adaptive team-enhancement technologies?
  • Additional related comments

Comments (3 comment threads, 2 authors):

Susannah Paletz (University of Maryland):

These are great questions. I agree that these are the top questions. Other questions might include: 1) With regard to question #4, variability over time is also an issue. Examples are sleep-related degradation of human cognition, which can then be fixed by getting sleep; stress and resilience on the part of the humans; but also changes in technology that can lead to trust (or not) in different, new technologies. What is the fault tolerance of the system (humans and tech/agents) in case of momentary or long-term problems? 2) How is usability, and changes in usability (and human-centeredness of the usability…

Alfred Yu (CCDC Army Research Laboratory):

On shared mental models (Q1, Q3): Based on recent advances in explainable AI and in complex task domains where machine learning results in the emergence of high-level human representations (e.g., in a first-person shooter: Jaderberg et al., 2019, Science), there may be areas where humans will construct mental models in common with agents (“shared” through common provenance but not actual communication). While AI mental models can be probed by relating unit activity to higher-level constructs, probing these models in humans can be time-consuming or distracting, and the models may simply not be available to introspection. We foresee a challenge in the joint development…
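
As a concrete illustration of the probing idea in the comment above (relating an agent's unit activity to higher-level constructs), the sketch below fits a simple linear probe that attempts to read a hypothetical task-level construct out of synthetic activations. The data, the construct, and the least-squares probe are stand-ins; real probing studies of deep agents involve far richer representations and controls.

    """Minimal sketch of a linear probe: can a higher-level construct be read out
    of an agent's internal activations?  All data here are synthetic stand-ins for
    real agent activations and task labels."""
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical hidden-layer activations (1000 timesteps x 32 units), generated
    # so that a latent "teammate visible" construct is weakly embedded in them.
    construct = rng.integers(0, 2, size=1000)          # higher-level construct (0/1)
    embedding = rng.normal(size=32)
    activations = rng.normal(size=(1000, 32)) + 0.5 * np.outer(construct, embedding)

    # Split into fitting and held-out evaluation halves.
    X_fit, X_eval = activations[:500], activations[500:]
    y_fit, y_eval = construct[:500], construct[500:]

    # Linear probe via least squares: predict the construct from unit activity.
    X_fit_b = np.column_stack([X_fit, np.ones(len(X_fit))])    # append a bias column
    coef, *_ = np.linalg.lstsq(X_fit_b, y_fit, rcond=None)

    X_eval_b = np.column_stack([X_eval, np.ones(len(X_eval))])
    predicted = (X_eval_b @ coef) > 0.5
    accuracy = np.mean(predicted == y_eval)

    # High held-out accuracy would suggest the construct is linearly decodable from
    # the agent's internal state -- one (limited) operationalization of an agent
    # "mental model" that could then be compared against human reports.
    print(f"probe accuracy on held-out data: {accuracy:.2f}")
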
Alfred Yu (CCDC Army Research Laboratory):

On rapid changes in agent behavior and team composition (Q2): Predictability of behavior and adoption of well-defined roles are critical aspects of team performance. We envision a need for constrained rates of change to accommodate human expectations. For example, running changes in agent behavior and functionality will likely produce violations of trust, while more graded enhancements to agent performance (e.g., 5% faster scanning of visual scenes) are more likely to be accepted. Discrete scheduled updates, update logs, and learning progress reports may help human users understand the rate of change of agent capabilities and enhance predictability. On the…
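
One hypothetical way to realize the constrained rate of change suggested in the comment above is to gate agent capability updates behind a per-update change cap and a human-readable update log, as in the sketch below. The 5% cap, the capability names, and the log format are illustrative assumptions only, not a proposed standard.

    """Hypothetical sketch: agent capability updates arrive as small, scheduled,
    logged increments rather than as silent running changes.  The change cap and
    capability names are illustrative assumptions."""
    from dataclasses import dataclass
    from datetime import date


    @dataclass
    class CapabilityUpdate:
        capability: str       # e.g., "visual_scan_rate"
        old_value: float
        new_value: float

        @property
        def relative_change(self):
            return abs(self.new_value - self.old_value) / max(abs(self.old_value), 1e-9)


    MAX_RELATIVE_CHANGE = 0.05    # assumed cap: at most a 5% change per scheduled update


    def apply_scheduled_updates(updates, log):
        """Apply only updates under the change cap; defer the rest, and log every
        decision so human teammates can see how agent behavior is evolving."""
        applied, deferred = [], []
        for u in updates:
            if u.relative_change <= MAX_RELATIVE_CHANGE:
                applied.append(u)
                log.append(f"{date.today()}: {u.capability} "
                           f"{u.old_value:.2f} -> {u.new_value:.2f}")
            else:
                deferred.append(u)
                log.append(f"{date.today()}: DEFERRED {u.capability} "
                           f"(requested {u.relative_change:.0%} change exceeds cap)")
        return applied, deferred


    if __name__ == "__main__":
        update_log = []
        requested = [
            CapabilityUpdate("visual_scan_rate", 1.00, 1.04),   # 4% faster: within cap
            CapabilityUpdate("route_planning",   1.00, 1.30),   # 30% jump: deferred
        ]
        apply_scheduled_updates(requested, update_log)
        print("\n".join(update_log))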