feel some kind of way about "hallucination" increasingly becoming the accepted technical term for the ways llms mess up by virtue of being language models
idk it feels like the anthropomorphization is counterproductive. getting high on your own supply
I think using this word borders on misinformation.
Anthropomorphization isn’t the only thing that’s bad about it. The main thing that bugs me is that it implies the language model somehow has access to the external world and just sometimes misperceives it. But it’s just text all the way down.










