Intentions, Beliefs & Trust

Computer systems that interact with their environment, such as self-driving vehicles, autopilot systems, and other robotic devices, may sometimes operate without people supervising or otherwise directly controlling their actions. Although not strictly independent of us, they may be said in some respects to be “autonomous,” insofar as they are sensing and interpreting the world, choosing actions, and adjusting their behavior to meet pre-defined goals through a feedback process. They are to some degree self-directed and self-sufficient. As such systems become more capable, people may assign them responsibilities and give them authority previously reserved for human specialists, which requires trusting them to operate properly without unexpected side effects. Understanding and managing trust among people and computerized systems is a central focus for research led by David Atkinson, William J. Clancey, Adam Dalton, Greg Dubbin, Robert R. Hoffman, and Yorick Wilks, with related research on recognizing intention in dialogues by James Allen and on understanding conversational social media by Bonnie Dorr, Kristy Hollingshead, Ian Perera, and Archna Bhatia.

Interactive computer systems usually operate by modeling how their environment is changing, and they may shift goals as the situation evolves. For example, the “traffic collision avoidance system” (TCAS) used in commercial airplanes today instructs pilots to change their flight path to avoid collisions, and if one aircraft doesn’t respond appropriately, it can shift strategy and tell the other aircraft’s pilots to reverse direction. A pilot’s accepting the authority of TCAS, to the point of ignoring what the human air traffic controller might be saying at the same time, requires a high degree of trust in automation. Autopilots, unmanned aircraft systems (aka “drones”), and self-driving vehicles may be given even more authority to plan routes and navigate, and are then trusted to carry out the intentions of their pilots (or onboard passengers) and to accomplish their missions safely and on time.
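
The coordination logic sketched in this example can be pictured in a few lines of code. The snippet below is a deliberately simplified, hypothetical illustration of the idea of reversing an advisory when the other aircraft does not comply; the class names, thresholds, and compliance test are assumptions for illustration, not the actual TCAS logic.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified illustration of the coordination idea
# described above; real TCAS logic is far more involved.

@dataclass
class AircraftState:
    callsign: str
    vertical_rate_fpm: float   # positive = climbing

def initial_advisories():
    """Issue complementary resolution advisories to the two aircraft."""
    return {"own": "CLIMB", "intruder": "DESCEND"}

def is_complying(state: AircraftState, advisory: str, threshold_fpm: float = 300.0) -> bool:
    """Crude compliance check: is the aircraft moving in the advised direction?"""
    if advisory == "CLIMB":
        return state.vertical_rate_fpm > threshold_fpm
    if advisory == "DESCEND":
        return state.vertical_rate_fpm < -threshold_fpm
    return True

def update_advisories(advisories: dict, intruder: AircraftState) -> dict:
    """If the intruder is not following its advisory, reverse our own aircraft's advisory."""
    if not is_complying(intruder, advisories["intruder"]):
        advisories["own"] = "DESCEND" if advisories["own"] == "CLIMB" else "CLIMB"
    return advisories

# Example: the intruder keeps climbing despite a DESCEND advisory,
# so our own advisory is reversed from CLIMB to DESCEND.
advisories = initial_advisories()
intruder = AircraftState("INTR456", vertical_rate_fpm=600.0)
print(update_advisories(advisories, intruder))   # {'own': 'DESCEND', 'intruder': 'DESCEND'}
```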

When unexpected behavior or unanticipated situations occur, understanding what is happening involves determining the computer program’s model of the situation (its “beliefs”) and its current objectives (its “intentions”). Our trust in automation may decrease if the system is not sufficiently transparent, that is, if it does not behave and communicate in ways we find natural, perform consistently and competently, accept direction, and learn from its mistakes. Using the Brahms modeling and simulation tool, IHMC researchers have created the Brahms-GÜM testbed to experimentally study and formalize how mutual beliefs and intentions develop during complicated space-time interactions.
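
One way to picture what such a testbed must represent is an agent that keeps its own beliefs and intentions alongside a model of another agent’s, so that the two can be compared when behavior becomes surprising. The sketch below is a minimal, hypothetical illustration of that bookkeeping, not Brahms or Brahms-GÜM code; all names and structures are assumptions.

```python
# Hypothetical sketch (not Brahms-GÜM): each agent keeps its own beliefs and
# intentions plus a model of the other agent's, so mismatches can be detected.

class AgentModel:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}       # e.g. {"descent_cleared": True}
        self.intentions = []    # e.g. ["land_on_runway_09"]

class Agent(AgentModel):
    def __init__(self, name):
        super().__init__(name)
        self.models_of_others = {}   # name -> AgentModel

    def model_of(self, other_name):
        return self.models_of_others.setdefault(other_name, AgentModel(other_name))

    def belief_mismatches(self, other_name, other_actual: "Agent"):
        """Compare what we think the other believes with what it actually believes."""
        modeled = self.model_of(other_name).beliefs
        actual = other_actual.beliefs
        return {k: (modeled.get(k), actual.get(k))
                for k in set(modeled) | set(actual)
                if modeled.get(k) != actual.get(k)}

pilot = Agent("pilot")
autopilot = Agent("autopilot")
pilot.beliefs["descent_cleared"] = True
pilot.model_of("autopilot").beliefs["descent_cleared"] = True
autopilot.beliefs["descent_cleared"] = False   # the automation disagrees
print(pilot.belief_mismatches("autopilot", autopilot))
# {'descent_cleared': (True, False)}
```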

Reciprocal issues of trust arise when an automated system needs to understand a person’s beliefs and intentions, for example, in deciding whether to trust a person to take over control. Analysis of human operations in controlling complicated systems, such as nuclear power plants, has suggested that people cannot always be trusted to appraise situations correctly, a shortcoming that has led to catastrophic accidents. IHMC researchers are studying how trust issues arise at different levels, ranging from decision making at the policy level, to capability at the mission and organizational levels, to confidence at the level of cognitive work, to reliance on technology by individual operators.

Another research thread examines whether the ways humans come to trust each other through social interaction and mutual experience can also help us judge the trustworthiness of machines. Although much can be learned from interpersonal relations, designing systems that people can justifiably trust involves factors that relate specifically to technology’s limitations and foibles. These factors include reliability, validity, utility, robustness, and false-alarm rate. Evaluating capability and performance is complicated by the interdependency among people and machines; for example, answering “Am I achieving my mission goals?” can require appraising a very specific automated component, asking, say, “Is that warning indicator faulty?”
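
These factors can be thought of as fields in a per-component trust record. The sketch below merely illustrates that idea; the field names follow the list above, but the [0, 1] scaling and the aggregation rule are arbitrary placeholders, not a method used in this research.

```python
from dataclasses import dataclass

@dataclass
class TrustFactors:
    """Illustrative record of technology-specific trust factors; values in [0, 1]."""
    reliability: float       # does it work the same way each time?
    validity: float          # does it measure or report what it claims to?
    utility: float           # does it actually help the mission?
    robustness: float        # does it degrade gracefully off-nominal?
    false_alarm_rate: float  # how often does it cry wolf?

    def rough_score(self) -> float:
        # Placeholder aggregation: average the positives, penalize false alarms.
        positives = (self.reliability + self.validity + self.utility + self.robustness) / 4
        return positives * (1 - self.false_alarm_rate)

warning_indicator = TrustFactors(0.95, 0.90, 0.80, 0.70, false_alarm_rate=0.30)
print(round(warning_indicator.rough_score(), 2))  # 0.59
```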

Furthermore, trust is dynamic. Relations develop and mature; they can strengthen, and they can degrade. Even during periods of relative stability, trust depends on context and goals. Consequently, trust in machines is always conditional or tentative; that is, the machine is trusted to do certain things, for certain tasks, in certain contexts. An interaction in which people and machines are interdependent therefore requires shared understanding. IHMC researchers have been studying how establishing, maintaining, and repairing trust depends on synchronizing the models of beliefs and intentions that people and automated systems have of each other.
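
The conditional, dynamic character of trust can be made concrete by storing trust per task and context rather than as a single number, and letting it strengthen or degrade with experience. The sketch below is a hypothetical illustration of that idea; the update rule and example values are assumptions.

```python
# Hypothetical illustration: trust is conditional on task and context,
# and strengthens or degrades with experience.

class ConditionalTrust:
    def __init__(self, initial=0.5, rate=0.2):
        self.levels = {}        # (task, context) -> trust in [0, 1]
        self.initial = initial
        self.rate = rate

    def level(self, task, context):
        return self.levels.get((task, context), self.initial)

    def record_outcome(self, task, context, success: bool):
        """Move trust toward 1 on success and toward 0 on failure (simple smoothing)."""
        current = self.level(task, context)
        target = 1.0 if success else 0.0
        self.levels[(task, context)] = current + self.rate * (target - current)

trust = ConditionalTrust()
trust.record_outcome("lane_keeping", "clear highway", success=True)
trust.record_outcome("lane_keeping", "snow at night", success=False)
print(round(trust.level("lane_keeping", "clear highway"), 2))  # 0.6
print(round(trust.level("lane_keeping", "snow at night"), 2))  # 0.4
```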

One approach to understanding belief, intention, and trust is to develop programs that themselves hold beliefs and can revise them through interactive experience. IHMC has developed the Viewgen program as a research testbed for modeling the beliefs, intentions, and plans of people participating in a dialogue. In particular, the participants’ beliefs and plans may differ, and some of them may be conspiring to influence or deceive the others.
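
A flavor of what such a testbed represents is the idea of nested belief spaces: the system’s own beliefs, its view of what another participant believes, and so on, with a default rule that ascribes one’s own beliefs to others unless there is evidence to the contrary. The sketch below is a simplified illustration in that spirit, not Viewgen itself; the class, method names, and ascription rule are stand-ins.

```python
# Simplified illustration of nested belief spaces (not Viewgen itself):
# beliefs are ascribed to others by default unless explicitly overridden.

class BeliefSpace:
    def __init__(self, holder, parent=None):
        self.holder = holder          # whose viewpoint this is
        self.parent = parent          # enclosing viewpoint, if any
        self.local = {}               # beliefs explicitly attributed to this holder
        self.nested = {}              # this holder's views of other agents

    def believe(self, proposition, value=True):
        self.local[proposition] = value

    def view_of(self, other):
        """The holder's viewpoint on another agent's beliefs."""
        return self.nested.setdefault(other, BeliefSpace(other, parent=self))

    def holds(self, proposition):
        """Default ascription: inherit the enclosing viewpoint's belief unless overridden."""
        if proposition in self.local:
            return self.local[proposition]
        if self.parent is not None:
            return self.parent.holds(proposition)
        return None

system = BeliefSpace("system")
system.believe("the valve is open")
# By default the system ascribes its own belief to the operator...
print(system.view_of("operator").holds("the valve is open"))   # True
# ...until evidence shows the operator believes otherwise.
system.view_of("operator").believe("the valve is open", False)
print(system.view_of("operator").holds("the valve is open"))   # False
```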

To understand a conversation and what is really going on in it (which not all of the participants may realize), researchers on IHMC’s CUBISM (Conversation Understanding through Belief Interpretation and Sociolinguistic Modeling) project use Viewgen’s cognitive computational structures to determine, for example, the role particular people or computer agents play in persuading the other participants in a dialogue to adopt their beliefs or to take actions. The project also investigates how changes in belief relate to changes in sentiment. More generally, the Viewgen analytic paradigm can be used, in parallel with statistical measures, to detect anomalies in dialogue and text, such as a conspiracy to deceive by linked Wikipedia editors. Such “pseudo-editors” first attempt to gain trust through what they write before shifting to a biased position, an obfuscation this research project aims to detect.
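
The “pseudo-editor” pattern, writing neutrally to gain trust and then shifting to a biased position, suggests a simple statistical complement to belief analysis: look for the point in an editor’s history where the stance of their contributions changes most sharply. The sketch below is a hypothetical illustration using made-up stance scores, not the project’s actual detector.

```python
from statistics import mean

# Hypothetical illustration: flag the edit index at which an editor's stance
# scores (e.g., -1 biased against, 0 neutral, +1 biased for) shift the most.

def largest_stance_shift(stance_scores, min_window=3):
    """Return (split_index, shift_size) maximizing the before/after mean difference."""
    best = (None, 0.0)
    for i in range(min_window, len(stance_scores) - min_window + 1):
        shift = abs(mean(stance_scores[i:]) - mean(stance_scores[:i]))
        if shift > best[1]:
            best = (i, shift)
    return best

# A "pseudo-editor": neutral edits at first, then a consistent slant.
history = [0.0, 0.1, -0.1, 0.0, 0.1, 0.8, 0.9, 0.7, 0.9]
split, shift = largest_stance_shift(history)
print(split)   # 5: the shift comes after the first five, mostly neutral, edits
```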