While there is growing evidence that we can employ individualized approaches to enhance human-agent teamwork in the complex environments envisioned in the future, there are limited examples of true human-autonomy teams involving multiple humans and multiple intelligent agents. Additionally, much of the current research using individualized technologies focuses on optimizing individual performance within the team without consideration for overall team emergent properties and performance. In the following, we propose some of the core scientific questions addressing interactions between humans and agents that are critical to the future of human-agent teaming.
1. Shared mental models underlie the effective communication and coordination of human teams, and similar concepts have emerged in multiagent systems, both organically and by inspiration from human teaming. In complex teams of the future, will it be necessary to maintain a shared mental model among teams of humans and intelligent agents? If so, how do we operationalize "shared" mental models in these complex teams? How will human-agent teams develop and manage these shared mental models of the problem, environment, and other team members to facilitate communication and rapid mission planning and adaptation?
2. Effective teams capitalize on a rich knowledge of each other’s strengths, weaknesses, and patterned behavior to inform role assignment. In a future human-agent teaming scenario in which intelligent agents can instantly download new behavior models, no coherent team may exist for longer than a single mission or subgoal. Is it possible to rapidly achieve the effect of rapport with new team members (e.g., anticipate their actions or recognize their strengths and weaknesses)? What aspects of rapport-building and trust are most critical in these evolving teams, and how do we develop these in both humans and agents?
3. A rich body of literature connects particular teamwork processes such as communication, shared mental models, and coordination with effective team performance in human teams. Will models of the critical emergent team processes generalize to human-agent teams? Will the same emergent team processes be critical in human-agent teams, or will other novel team processes emerge? How will such properties be validated and measured?
4. Future human-agent teams must contend with variability in the most general sense. Human team members possess diverse capabilities and personalities, each of which is subject to significant variability. In addition, intelligent agents will manifest as unmanned ground and aerial vehicles, networked knowledge bases, and personal assistants, constantly learning and adapting. How do we incorporate complex human and agent variability into closed-loop systems targeted toward team-level performance? What novel approaches are critical to using individualized technologies for the purposes of optimizing the human-agent team? How can these approaches use variability over multiple timescales to enable the optimization of team performance both immediately (e.g., within a single task) and over long periods of time (multiple missions, the life cycle of a team)?
Call for Comments:
- What are other scientific questions critical to the future development of individualizable and adaptive team-enhancement technologies?
- Additional related comments