
LLMorphism: When humans come to see themselves as language models

May 10, 2026
Researcher Valerio Capraro coined the term "LLMorphism" in arXiv:2605.05419, defining it as a cognitive bias in which humans misattribute LLM-like token-prediction mechanics to their own cognition. As conversational AI produces increasingly human-like output, this reverse anthropomorphism may distort how people understand their own minds. Cognitive scientists, AI ethicists, and UX designers building human-AI interaction systems should take note.