Abstract: Artificial intelligence (AI)-assisted decision-making can enhance managers' decision-making effectiveness. However, when AI errs and decision-makers adopt its suggestions without verifying their accuracy, serious mistakes can result. Using a literature review method, this study systematically reviews the concept, behavioral patterns, mechanisms, and influencing factors of automation bias in human use of AI. It identifies a trend of cross-domain development in automation bias research and summarizes the bias's characteristics: excessive reliance, reduced vigilance, neglect of verification, and individual differences. The main factors influencing automation bias fall into four categories: system, environment, organization, and individual. These include whether an intelligent system provides predictive information and immediate feedback on errors, the design of decision support systems, task difficulty, workload, task complexity, time constraints, accountability for overall performance or decision accuracy, trust and confidence, and knowledge level and capabilities. Future research should strengthen empirical studies of the impact of AI automation bias on corporate managers, integrate system, environmental, organizational, and individual factors to reveal the formation mechanisms of automation bias, and develop effective debiasing strategies that help decision-makers use AI correctly in decision-making.