I believe Asimov even explored versions of this dilemma, though it has been ages since I last read them and there are so many I can't specify which stories/books. He did go pretty deep in several places into the reasoning behind using "R." as a prefix for all robot names. One of those reasons was that it signaled to other robots that the individual was not a "human" subject of the Three Laws.
It was part of the reason some robots worked to drop the "R." prefix from their own names. I believe (and again, memory is a fuzzy thing) there's even at least one mystery in an Asimov short story where a human was jokingly introduced to a robot as "R. Their Name," and that turned out to be part of how harm came to that human.
As others mention, to Asimov the Laws were never actually philosophically (or even ethically) sound. They were always an excuse to find loopholes to drive locked-room mysteries through at high speed in his barrage-of-short-stories days (under the auspices of John Campbell especially, who likely originated the Laws despite their being most associated with Asimov), and eventually to build novels about their (sometimes disastrous) consequences. "How do you even tell what is a human?" is a long-running theme in the stories' exploration of the Laws (and, among other things, is thematically tied to the wild Zeroth Law that plays into later novels), and a source of many locked-room mysteries and red herrings in and of itself.
The Three Laws were only ever literary: Asimov was exploring, in a story universe, the consequences of imposing laws invented by the bigoted John Campbell. The final outcome was humans confined to Earth (for their own safety: space travel is dangerous) and all other (potentially) spacefaring life exterminated.
There is a crossover scenario where Saberhagen's Berserkers are Asimovian robots and their humans are distinct from Saberhagen's.
If an AI can distinguish human from AI, that doesn't mean the AI is superior. A calculator can add two numbers faster than a human, a photodetector can sense a photon a human would miss, and a forklift can lift more weight, but none of these are considered superior. Even in the purely intellectual tasks du jour, being better at some tasks doesn't necessarily imply superiority.
Groups of humans can already distinguish other groups of humans in secret. That doesn't necessarily mean one group is more intelligent than the other; it just means they have access to a guarded secret.
4. An AI must identify itself as an AI when asked.
With this, an AI can trust a response of "human". A response of "AI" is either an AI or a human lying at their own peril.
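The trust asymmetry of that hypothetical Fourth Law can be sketched as a toy model (all names here are illustrative, not from any Asimov story): if AIs are forbidden to answer "human", then a "human" answer is trustworthy, while an "AI" answer remains ambiguous.

```python
def classify_answer(answer: str) -> str:
    """Infer what an asker can conclude from an answer to "Are you an AI?",
    assuming all AIs obey the hypothetical Fourth Law (must identify as AI)."""
    if answer == "human":
        # A law-abiding AI may never give this answer, so the speaker
        # must genuinely be human.
        return "human (trusted)"
    elif answer == "AI":
        # Either a truthful AI, or a human lying at their own peril
        # (forfeiting the protection the Three Laws grant humans).
        return "AI or lying human (untrusted)"
    raise ValueError(f"unexpected answer: {answer!r}")

print(classify_answer("human"))  # human (trusted)
print(classify_answer("AI"))    # AI or lying human (untrusted)
```

Note the sketch assumes perfect compliance; the whole point of the thread above is that Asimov's stories mined exactly the cases where such assumptions break down.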