Statements like this are dangerous and harmful.
I have personally tested ChatGPT on multiple different topics and found issues and errors across the board. Even more disturbing is that its explanation of why the "result" is what it is amounts to very confident bullshit. A person unfamiliar with the subject will probably believe what the machine says without question.
Given that, in the field of medicine people will use ChatGPT as a poor man's doctor (especially when encouraged by studies like this one), where wrong results coupled with confident BS could lead to an increase in fatalities from incorrect self-medication.