Abstract:
The anchoring effect refers to individuals' over-reliance on initial information when making judgments and decisions, which introduces systematic biases into subsequent information processing and behavioral choices. To verify and quantify the cognitive anchoring effect of large language model (LLM) agents at two levels, individual cognitive characterization and group propagation dynamics, a hybrid inference framework is built that integrates LLMs with agent-based modeling. The framework combines the semantic understanding and generation capabilities of LLMs with the structured interaction rules of agent-based modeling. Each agent has two states, active and inactive, and an action space with three options: posting, forwarding, and waiting. The anchor sensitivity of the LLM is tested under role-playing conditions, and the evolution of anchored information propagation is analyzed along a multi-node social chain. The experiments use structured output templates covering both numerical and semantic topics. By varying anchoring scenarios and conditions, the relationship between the strength of the anchoring effect and individual identity, topic type, and chain propagation depth is evaluated. The results provide quantitative evidence and novel methods for characterizing the network propagation mechanism of cognitive biases in multi-agent collaboration and for designing platform interventions.
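The two-state, three-action agent design described above can be sketched minimally as follows. All names and the random action policy are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import random
from dataclasses import dataclass, field
from enum import Enum


class State(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"


class Action(Enum):
    POST = "post"
    FORWARD = "forward"
    WAIT = "wait"


@dataclass
class Agent:
    """Hypothetical agent with the two states and three actions from the abstract."""
    agent_id: int
    state: State = State.INACTIVE
    inbox: list = field(default_factory=list)  # messages received along the social chain

    def choose_action(self, rng: random.Random) -> Action:
        # Inactive agents can only wait; active agents select from the full action space.
        # (A real framework would query the LLM here instead of sampling uniformly.)
        if self.state is State.INACTIVE:
            return Action.WAIT
        return rng.choice([Action.POST, Action.FORWARD, Action.WAIT])
```

In the full framework, the uniform `rng.choice` would be replaced by an LLM call conditioned on the agent's role and received (possibly anchored) messages.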