AI assistant monitors teamwork to promote effective collaboration

August 20, 2024

During a research cruise around Hawaii in 2018, Yuening Zhang SM ’19, PhD ’24 witnessed the challenge of keeping a team seamlessly coordinated. Mapping underwater terrain required precise synchronization, but sudden changes in conditions often led to confusion and stress, with team members holding different understandings of the tasks at hand. Zhang began to consider how a robotic assistant could have helped the team achieve its goals more effectively.

Six years later, as a research assistant in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Zhang developed a solution that addressed this challenge: an AI assistant designed to communicate with team members, align their roles, and facilitate the accomplishment of shared objectives. In a paper presented at the International Conference on Robotics and Automation (ICRA) and published on IEEE Xplore on August 8, Zhang and her colleagues introduced a system that can oversee a team of human and AI agents, intervening when necessary to enhance teamwork in areas such as search-and-rescue missions, medical procedures, and strategy video games.

The CSAIL-led team developed a theory of mind model for AI agents, which simulates how humans perceive and predict each other’s actions when collaborating. By observing the actions of its fellow agents, this AI team coordinator can infer their plans and their understanding of one another’s roles, drawing on a predefined set of possible beliefs. When it detects that those plans conflict, the AI assistant intervenes by aligning the agents’ beliefs, directing their actions, and asking clarifying questions when needed.
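
To make this concrete, here is a minimal sketch, in Python, of the kind of belief tracking such a coordinator performs. It is a hypothetical illustration, not the authors’ implementation: the hypothesis set, the `likelihood` interface, and all names are assumptions.

```python
# Hypothetical sketch of theory-of-mind belief tracking: the coordinator
# keeps a probability distribution over a predefined set of candidate
# beliefs for each agent and updates it with Bayes' rule as it observes
# that agent's actions.

class BeliefTracker:
    """Tracks P(agent's belief | observed actions) over fixed hypotheses."""

    def __init__(self, hypotheses):
        # Uniform prior over candidate beliefs, e.g.
        # {"covers east wing", "covers west wing"}.
        prior = 1.0 / len(hypotheses)
        self.posterior = {h: prior for h in hypotheses}

    def observe(self, action, likelihood):
        # Bayes update: P(h | action) is proportional to P(action | h) * P(h).
        # `likelihood(action, h)` models how probable the action is if the
        # agent holds belief h -- an assumed interface, not the paper's.
        for h in self.posterior:
            self.posterior[h] *= likelihood(action, h)
        total = sum(self.posterior.values())
        if total == 0:
            raise ValueError("observation is impossible under every hypothesis")
        for h in self.posterior:
            self.posterior[h] /= total

    def most_likely(self):
        return max(self.posterior, key=self.posterior.get)
```

A conflict can then be flagged whenever two agents’ most likely beliefs claim the same subtask, which is the cue for the coordinator to step in.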

For instance, in a search-and-rescue scenario, rescue workers must make decisions based on their understanding of each other’s roles and progress. CSAIL’s software could enhance this decision-making by sending messages to ensure that each agent knows what others have done or intend to do, thereby avoiding duplicated efforts and ensuring task completion. The AI assistant might intervene to inform the team that an agent has already entered a specific area or that no one is covering a certain region where victims might be present.
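
As a toy illustration of that intervention step (again a sketch; the agent names, area names, and message format are invented), the coordinator could compare the areas each rescuer is believed to be covering and broadcast corrections for duplicates and gaps:

```python
# Hypothetical intervention logic for the search-and-rescue example:
# flag duplicated coverage and uncovered areas, then send messages.

def plan_interventions(assignments, all_areas):
    """assignments maps each agent to the area it is believed to cover."""
    messages = []
    first_claim = {}
    for agent, area in assignments.items():
        if area in first_claim:
            # Duplicated effort: someone already covers this area.
            messages.append(f"{agent}: {first_claim[area]} already covers {area}.")
        else:
            first_claim[area] = agent
    for area in sorted(all_areas - set(assignments.values())):
        # Gap: a region where victims might be, with no one assigned.
        messages.append(f"ALL: no one is covering {area}.")
    return messages

print(plan_interventions(
    {"rescuer_1": "east_wing", "rescuer_2": "east_wing"},
    {"east_wing", "west_wing"},
))
# -> ['rescuer_2: rescuer_1 already covers east_wing.',
#     'ALL: no one is covering west_wing.']
```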

“Our work considers the concept of ‘I believe that you believe what someone else believes,’” says Zhang, now a research scientist at Mobi Systems. “Imagine you’re working in a team and you think, ‘What exactly is that person doing? What should I do next? Does he know what I’m about to do?’ We model how different team members comprehend the overall plan and communicate what needs to be done to achieve the team’s shared goal.”

AI to the rescue

Even with a well-defined plan, both human and robotic agents can become confused and make errors if their roles are unclear. This issue is particularly critical in high-stakes scenarios like search-and-rescue missions, where limited time and vast search areas pose significant challenges. The new robotic assistant, equipped with enhanced communication technology, could help search parties by notifying them of each group’s actions and locations, thereby improving efficiency in navigating the terrain.

This type of task organization could also prove beneficial in medical settings, such as surgeries. In these cases, the AI team coordinator could oversee the operation, ensuring that the team remains well-organized and intervening if there’s confusion about any task.

Effective teamwork is equally crucial in video games like “Valorant,” where players must coordinate their actions to attack or defend against another team. Here, an AI assistant could alert players when they’ve misunderstood their tasks.

Prior to developing this model, Zhang created EPike, a computational model that acts as a team member. In a 3D simulation, this algorithm controlled a robotic agent tasked with matching a container to a drink selected by a human. While AI-simulated bots are generally rational and sophisticated, they can still be limited by misconceptions about their human partners or the task. The new AI coordinator intervenes by correcting the agents’ beliefs, ensuring the task is completed accurately. For example, the system might send messages to the robot about the human’s true intentions to ensure it correctly matches the container.
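
In code, that correction might look something like the following toy rendering (this is not EPike itself; the drink names, belief values, and message format are invented for illustration):

```python
# Toy rendering of the belief-correction step: if the robot's most likely
# guess about the human's intent disagrees with the coordinator's inference,
# the coordinator messages the robot before it picks the wrong container.

robot_belief = {"coffee": 0.7, "tea": 0.3}   # robot's model of the human
coordinator_inference = "tea"                # what the coordinator inferred

robot_guess = max(robot_belief, key=robot_belief.get)
if robot_guess != coordinator_inference:
    print(f"to robot: the human wants {coordinator_inference}; "
          "fetch the matching container")
```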

“In our work on human-robot collaboration, we’ve been both humbled and inspired by the fluidity of human partners,” says Brian C. Williams, MIT professor of aeronautics and astronautics, CSAIL member, and senior author of the study. “Take, for example, a young couple with kids who seamlessly work together to get breakfast ready and the kids off to school. If one parent sees the other still in their bathrobe serving breakfast, they instinctively know to shower quickly and take the kids to school without needing to say a word. Good partners are in tune with each other’s beliefs and goals, and our work on epistemic planning aims to capture this style of reasoning.”

The researchers’ method integrates probabilistic reasoning with recursive mental modeling, allowing the AI assistant to make risk-bounded decisions. They also focused on modeling agents’ understanding of plans and actions, complementing previous work that modeled beliefs about the current world or environment. The AI assistant currently infers agents’ beliefs from a given prior over possible beliefs, but the team plans to apply machine learning techniques to generate new hypotheses in real time. They also aim to incorporate richer plan representations and to reduce computation costs for real-world applications.
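
One way to picture the risk-bounded part: intervening has a cost, so the coordinator stays quiet unless the probability of plan failure under its inferred beliefs crosses a threshold. The sketch below assumes a simple failure model and a 0.2 bound, neither of which comes from the paper.

```python
# Sketch of a risk-bounded intervention rule (an assumption, not the
# paper's algorithm): estimate the probability that the plan fails under
# the belief posterior, and intervene only when it exceeds a bound.

RISK_BOUND = 0.2  # illustrative maximum acceptable failure probability

def failure_risk(posterior, plan_fails):
    # E[failure] = sum over hypotheses b of P(b) * 1[plan fails under b].
    return sum(p for belief, p in posterior.items() if plan_fails(belief))

def should_intervene(posterior, plan_fails, bound=RISK_BOUND):
    return failure_risk(posterior, plan_fails) > bound

# Example: 30% of the posterior mass sits on a belief that dooms the plan.
posterior = {"both cover east wing": 0.3, "coverage is split": 0.7}
print(should_intervene(posterior, lambda b: b == "both cover east wing"))
# -> True (0.3 > 0.2)
```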

Dynamic Object Language Labs President Paul Robertson, Johns Hopkins University Assistant Professor Tianmin Shu, and former CSAIL affiliate Sungkweon Hong PhD ’23 collaborated with Zhang and Williams on the paper. Their work was supported, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) Artificial Social Intelligence for Successful Teams (ASIST) program.
