3.1 Individualized, Adaptive Technologies for Teamwork: Capabilities

In Enhancing Human-Agent Teaming, by ieeebrain

Arwen H. DeCostanza*, Amar R. Marathe*, Addison Bohannon*, A. William Evans*, Edward T. Palazzolo**, Jason S. Metcalfe*, and Kaleb McDowell*
*Army Research Laboratory, **Army Research Office

To perform effectively in these complex human-agent teams, we suggest the need for technologies that can adapt to individual team members (both humans and agents), as well as to the emergent properties and constraints of the group over time, in order to optimize the system of interdependent agents. Here, we propose a few examples of future capabilities that we believe will be critical for enhanced human-agent teaming in the future operating environment described previously. With the overarching goal of overcoming the limitations and enhancing the strengths of individual humans and agents to optimize team-level states, processes, and performance, we expect capabilities in the following areas to be critical to enabling this future vision:

Individualized technologies to enhance coordination and shared understanding in distributed environments.
While the emergent processes relating to coordination and shared understanding in human-agent teams may unfold very differently than in human-agent dyads, human-only teams, or agent-only teams, capabilities that enhance coordination and shared understanding in distributed environments (environments where not all team members are collocated) will be critical. The focus here is not on technologies that physically communicate across distributed networks, but on individualized, adaptive technologies that couple advanced sensing techniques with state-of-the-art machine learning to enhance the ability of teams of humans and agents to come together, cognitively and behaviorally; to anticipate each other’s decisions and actions; and to perform interdependent, collective tasks in synchrony.
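One way to make "shared understanding" concrete is to treat each member's picture of the mission as a vector of estimated task-relevant quantities and measure how well those vectors align across the team. The sketch below is purely illustrative (it is not from the article, and the member names, belief vectors, and threshold are invented): it computes mean pairwise cosine similarity of belief vectors and flags when the team's shared understanding has drifted apart.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length belief vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def shared_understanding(beliefs):
    """Mean pairwise cosine similarity across all members' belief vectors.

    `beliefs` maps member name -> vector of estimated task-relevant state.
    Values near 1 indicate closely aligned understanding.
    """
    names = list(beliefs)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    if not pairs:
        return 1.0
    return sum(cosine_similarity(beliefs[a], beliefs[b])
               for a, b in pairs) / len(pairs)

# Hypothetical example: two humans and one agent estimating the same
# three mission variables; the agent's picture has diverged.
beliefs = {
    "human_1": [0.9, 0.1, 0.4],
    "human_2": [0.8, 0.2, 0.5],
    "agent_1": [0.1, 0.9, 0.9],
}
if shared_understanding(beliefs) < 0.8:  # illustrative threshold
    print("Shared understanding degraded; trigger re-synchronization.")
```

In practice the belief vectors would come from the sensing and machine-learning components the paragraph describes; the alignment score is simply one candidate trigger for when the team needs to re-synchronize.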


Technologies targeting cohesion and swift action with new, diverse, and rotating teammates.
An advantage of human-agent teams is the ability to bring together diverse expertise and capabilities targeted to performance on a specific mission. However, rapid reconfiguration of teams comes at a price: it affects team processes, both initially and over time (Bell and Outland 2017). Compounding this known problem, human-agent teams are inherently diverse at a deep level, and the differences between humans and agents can change drastically over time as agents adapt and learn. Therefore, capabilities are needed to facilitate the swift development of cohesive action in diverse human-agent teams that include new, rotating, and evolving team members. To improve interoperation, we propose that these technologies enable humans and machines to compensate dynamically for the shortcomings of other members through individualized, adaptive mechanisms. For example, each team member may have individualized agents responsible for quickly getting them up to speed on teammates: their roles and responsibilities, strengths and weaknesses, and predicted actions throughout the mission, in relation to their own role, knowledge structure, biases, strengths, and current state.
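The "individualized agent that gets a newcomer up to speed" idea can be sketched as a simple profile-matching step: given each member's strengths and weaknesses, generate a briefing that tells the newcomer who can cover their gaps and whom they can back up. All names, roles, and skill labels below are hypothetical; this is a minimal illustration, not a proposed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    role: str
    strengths: set = field(default_factory=set)
    weaknesses: set = field(default_factory=set)

def briefing(newcomer, team):
    """Build briefing notes for a newly joined member: which teammates
    can compensate for the newcomer's weaknesses, and vice versa."""
    notes = []
    for mate in team:
        covers = newcomer.weaknesses & mate.strengths
        needs = mate.weaknesses & newcomer.strengths
        if covers:
            notes.append(f"{mate.name} ({mate.role}) can cover your gaps in: "
                         f"{sorted(covers)}")
        if needs:
            notes.append(f"You can back up {mate.name} on: {sorted(needs)}")
    return notes

# Hypothetical newcomer joining a team with one agent teammate.
alice = Member("Alice", "pilot", {"navigation"}, {"sensor analysis"})
bot = Member("SensorBot", "analyst", {"sensor analysis"}, {"navigation"})
for line in briefing(alice, [bot]):
    print(line)
```

A real system would populate these profiles from mission records and learned models of each member, and update them continuously as agents adapt.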


Individualized approaches to developing agile group performance and team efficacy with human and agent degradation and loss.
Not only is swift action important when teams are rapidly reconfigured; agile performance is also critical amid unexpected changes, including both human and agent degradation and loss. Degradation and loss of humans and agents have strong repercussions for the perceived efficacy and collective performance of the human-agent team. However, humans and agents may experience degradation and loss in very different ways, affectively and behaviorally, which can subsequently affect the cohesion of the group. For example, maintaining, and demanding, a pure task focus after an injury could be viewed negatively by human team members and cause friction within the group. Critical, then, are individualized technologies that can quickly detect the degradation or loss of an agent; monitor affective, behavioral, and cognitive changes due to loss; and subsequently reallocate roles and responsibilities across the team in ways that account for the variation in team member states. As an example, individualized technologies may detect affective changes in team members and work with task-allocation technologies to adjust roles and responsibilities within the group, providing opportunities for understanding and coping with loss when needed while maintaining functioning.
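The detect-then-reallocate loop described above can be sketched in a few lines. Everything here is an illustrative assumption (the fused state score, the threshold, the capability lists): members whose monitored state falls below a threshold have their tasks handed to the healthiest capable teammate.

```python
def detect_degraded(states, threshold=0.5):
    """Return members whose monitored state score fell below threshold.

    `states` maps member -> a score in [0, 1], imagined here as fused
    from affective, behavioral, and cognitive indicators.
    """
    return {m for m, s in states.items() if s < threshold}

def reallocate(assignments, capabilities, states, threshold=0.5):
    """Reassign tasks held by degraded members to the healthiest
    non-degraded member capable of each task."""
    degraded = detect_degraded(states, threshold)
    new_assignments = dict(assignments)
    for task, member in assignments.items():
        if member in degraded:
            candidates = [m for m in capabilities.get(task, [])
                          if m not in degraded]
            if candidates:
                new_assignments[task] = max(candidates, key=states.get)
    return new_assignments

# Hypothetical team: member "a" is degraded, so "recon" moves to "b".
states = {"a": 0.3, "b": 0.9, "c": 0.7}
plan = reallocate({"recon": "a", "comms": "c"},
                  {"recon": ["b", "c"], "comms": ["c"]},
                  states)
```

Note that the sketch reallocates purely on task capability; the paragraph's larger point is that a fielded system would also weigh affective state, deliberately leaving room for members to cope with loss rather than maximizing throughput alone.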


Technologies to minimize process loss with continual individual development, ever-increasing complexity of action, and prediction of future behaviors.
Human-agent teams, as envisioned in the future, will be capable of performing within environments of ever-increasing internal and external complexity that are almost inconceivable today. To facilitate effective performance within these realms of complexity, individualized, adaptive technologies are needed to minimize the process losses (e.g., in communication, coordination, and backup behaviors) that we currently see as team complexity grows. Where appropriate, these technologies would target effectiveness, in terms of both quality and efficiency, within the states and processes that are critical to human-agent teams.
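One concrete driver of coordination-related process loss is that pairwise communication links grow quadratically with team size. The toy sketch below (an illustration we add here, not a model from the article) counts links in a fully connected team versus a structure where sub-team leads coordinate on the team's behalf, the kind of restructuring an adaptive technology might propose as complexity grows.

```python
def coordination_links(n):
    """Number of pairwise communication links in a fully connected
    team of n members: n * (n - 1) / 2."""
    return n * (n - 1) // 2

def links_with_subteams(n, k):
    """Links if n members are split into k roughly equal sub-teams,
    fully connected internally, with one lead per sub-team and the
    leads fully connected to each other (a hub-style structure)."""
    base, extra = divmod(n, k)
    sizes = [base + (1 if i < extra else 0) for i in range(k)]
    within = sum(coordination_links(s) for s in sizes)
    across = coordination_links(k)
    return within + across

# A 12-member team: 66 links fully connected, versus 21 when split
# into three sub-teams of four with coordinating leads.
full = coordination_links(12)
split = links_with_subteams(12, 3)
```

The point of the sketch is only that structure, not just individual skill, determines how much coordination overhead a team carries; an individualized, adaptive technology could monitor this overhead and recommend restructuring before process losses mount.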


Call for Comments:

  • What other capabilities are critical for enhanced human-agent teaming that individualizable and adaptive technologies could enable?
  • Additional related comments


  1. Hello Arwen and all the authors,

    One of the thoughts I had up to this point was resiliency of both the humans and the agents in case of breakdowns. This point is covered in large part by the agile group performance point above, but I wanted to make sure I was framing/contextualizing it because I wasn’t sure if you all were thinking of it in the same way. I’m not just thinking of, say, a human team member getting injured. My concern is that most software-based technology inherently has trade-offs and limitations, and situations where it simply wasn’t tested but may be used. I suspect new agent-based technologies will have the same underlying problems (short design and testing cycles, say).

    For instance, Tesla’s cars have had the equivalent of the blue screen of death and needed to be rebooted; do they have a way to handle that if the car is in motion? (I think they might?) Even relatively simple software may not be tested under the kinds of conditions it faces in daily use, either because of a lack of testing or because the daily use cases change after it is introduced. Technology often has issues until it is sufficiently iterated, tested, and improved, and age is not always a discriminator for robust technology, if the new tech is based on old tech that was not designed for the new purpose or has inherent flaws.

    This also raises the issue of technology and complexity, as noted above, but I’d add that it’s not simply whether humans can handle a sufficiently complex human-agent system; there are also the issues with overly complex technology, or technology that is overly multi-purpose and so has more points of failure (see the Space Shuttle vs. the Soyuz: they had different requirements and goals, but the Shuttle, in trying to fulfill too many requirements, ended up with some inherent flaws).

    The human side could also involve breakdowns due to mismatches in organizational culture or norms, or even due to malice. One of the challenges we’re facing today with new technology is accounting for these darker sides of human psychology (e.g., how does the technology deal with human harassers: does it enable them or discourage them?).

    I hope this helps,
    Susannah Paletz

    1. Thanks Susannah – great points! Resiliency, of both the humans and agents, is critically important, as you describe. Related to your comment, I am concerned with examining teams longitudinally as well. I’m going to make sure I share your thoughts with others at ARL.
