Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT
Summary
The study investigates how pre-ChatGPT models and ChatGPT models perform on tasks embedded with intuitive traps, akin to psychological assessments of human cognitive processing. While pre-ChatGPT models, particularly GPT-3-davinci-003, showed a propensity for fast, intuitive, System 1-like responses that led to errors, ChatGPT models departed from this pattern, answering more accurately and deliberately; as the title indicates, these human-like reasoning biases largely disappeared in ChatGPT.