The Heterogeneity of the Effectiveness of Human-AI Collaborative Decision-Making: A Four-Stage Process Influence Model Focusing on Agency
Geng Xiaowei, Li Xinqi, Xu Zhiping, Xie Tian
2025, 48(4): 933-947.
DOI: 10.16719/j.cnki.1671-6981.20250414
The integration of artificial intelligence (AI) into decision-making has transformed fields such as healthcare, finance, and criminal justice by offering the potential for enhanced efficiency and accuracy through human-AI collaborative decision-making (HAIC-DM). However, empirical outcomes remain inconsistent. While AI augments human capabilities in complex tasks (e.g., AI-assisted medical diagnostics matching expert performance), it can also degrade performance in simpler tasks through cognitive redundancy or overreliance (e.g., automation bias in image recognition). This heterogeneity stems from unresolved tensions between technological potential and human factors. First, misaligned task allocation often undermines complementary strengths: while AI excels at structured, data-driven tasks (e.g., credit scoring), its limitations in contextual reasoning and ethical judgment necessitate human oversight, a balance frequently disrupted in practice. Second, asymmetric trust dynamics skew collaboration: opaque AI systems (e.g., "black-box" algorithms) foster overreliance or distrust, as seen when radiologists uncritically accept erroneous high-confidence AI diagnoses. Third, bias amplification arises when algorithmic biases (e.g., racial disparities in recidivism prediction tools) intersect with human cognitive heuristics (e.g., anchoring effects), creating self-reinforcing error cycles that exacerbate inequities in judicial and hiring decisions. The urgency of reconciling AI's computational power with human agency, particularly in ethically sensitive contexts, underscores the need for systematic exploration of collaborative mechanisms and risks. This study synthesizes 54 empirical studies from computer science, psychology, and organizational research (2018-2024). These studies were retrieved from the ACM Digital Library, the Web of Science, and the AIS eLibrary using keywords such as "human-AI collaboration" and "decision-making".
The inclusion criteria prioritized quantitative assessments of HAIC-DM performance (human-only, AI-only, and collaborative outcomes). A thematic analysis was conducted to identify recurring patterns in task characteristics (e.g., structured vs. unstructured goals), interaction designs (e.g., explanation formats), and moderators (e.g., user expertise). A four-stage Process Influence Model was developed, integrating principles from symbiosis theory and distributed cognition. Case studies (e.g., healthcare diagnostics, autonomous driving) were analyzed to validate stage-specific mechanisms, and experimental findings (e.g., trust calibration experiments) informed theoretical refinements. The proposed model identifies four interdependent stages that govern the efficacy of HAIC-DM: (1) Strengths/Biases Recognition: AI excels at structured tasks (e.g., fraud detection), while humans dominate ethical judgments; algorithmic biases (e.g., historical data biases) and cognitive biases (e.g., anchoring effects) distort collaboration, as in judicial decisions where humans and AI redundantly overemphasize prior convictions. (2) Context-Driven Task Allocation: optimal allocation improves accuracy (e.g., AI pre-screening of cancer images plus human validation boosts diagnostic accuracy by 15%), whereas misallocation (e.g., AI-led creative writing) yields superficial outputs. (3) Trust Calibration: example-based explanations improve the discernment of advice (+22% accuracy in income prediction), yet opaque AI systems induce overreliance, as radiologists often accept erroneous high-confidence diagnoses. (4) Adaptive Dependency: balanced reliance maximizes efficacy (e.g., AI risk alerts in autonomous driving combined with human ethical oversight), but over-dependence triggers cognitive offloading and skill erosion (e.g., lawyers who rely excessively on AI for contract analysis). This study advances HAIC-DM research by framing collaboration as a co-evolutionary process.
It emphasizes bidirectional adaptation between humans (critical thinking, ethical oversight) and AI (transparency, contextual learning). The Process Influence Model clarifies how dynamic interactions, from bias recognition to dependency calibration, determine efficacy, and it offers actionable insights for optimizing task allocation and trust mechanisms. Future work must prioritize shared mental models to align AI's computational logic with human intuition, particularly in high-stakes domains such as healthcare and criminal justice. Institutional reforms, including ethical governance frameworks and mandatory human oversight protocols, are critical to mitigating risks such as accountability erosion. Fostering synergistic interdependence, in which AI augments human cognition without supplanting agency, is key to realizing the vision of "humans as ethical navigators, AI as precision enablers." This alignment ensures that collaborative intelligence enhances, rather than undermines, societal decision-making in an AI-augmented future.