Report

“Shoulda, Coulda, Woulda”: Conceptualizing the Differences in Trust Between Human-Human Teaming and Human-Machine Teaming

Abstract

Intelligent decision support systems (IDSSs) are machine teammates designed to facilitate better human decision-making in high-consequence domains such as health care, power grid operations, and fraud detection. IDSSs identify patterns in datasets and provide decision-making recommendations to human teammates. However, previous research indicates that humans often trust IDSS recommendations less than recommendations from human teammates, even when the machine teammate is more accurate. To conceptualize why this trust differs, we review the literature on trust, error, and predictability. Then, we compile and compare participant trust ratings and decision-making in an abridged systematic review of previous studies manipulating teammate type, error rate, and error type. Finally, we conduct a content analysis of participants’ qualitative responses to trust queries from a survey on generative language models. Results suggest that humans may trust IDSS teammates less than human teammates because of differences in (1) interaction complexity, (2) blame attribution, and (3) swift trust. We conclude that human factors practitioners should collaborate with data scientists and domain experts to build and maintain trust in IDSSs by anthropomorphizing algorithms, matching mental models, and considering individual differences.

Published: September 20, 2023

Citation

Dreslin, B.D., and J.A. Baweja. 2023. “Shoulda, Coulda, Woulda”: Conceptualizing the Differences in Trust Between Human-Human Teaming and Human-Machine Teaming. Richland, WA: Pacific Northwest National Laboratory.

Research topics