People as a whole say a lot of things, correct and incorrect. But ChatGPT is a single entity with a fairly impressive rate of reliability, yet once you get into a certain level of detail on certain topics, it will just spit out false information that's indistinguishable from the correct stuff. I wouldn't expect a human to do that: to convince me they're an expert with encyclopedic, verifiably correct knowledge of a topic, and then confidently start lying about that same topic in that same conversation. That makes it much harder to vet, or even to know when you need to vet.
For the non-ML crowd out there: in the AI world, this failure mode is called "hallucination," where the model confidently generates plausible-sounding but false statements.