I think this article has the problem that it addresses an interdisciplinary topic with too much focus on a single discipline, which can be a sign of insufficient disciplinary diversity among the peer reviewers.
Scott Aaronson’s attempted refutation of IIT - https://scottaaronson.blog/?p=1799 - is, I think, better in that he actually tries to relate IIT to some of the philosophical literature (e.g. his distinction between Chalmers’ “Hard Problem” and the distinct “Pretty Hard Problem”, which he sees IIT as trying to address).
I think it is a pity that Aaronson has never (to my knowledge) published his criticisms of IIT in a more formal setting, and I don’t know if Tononi has responded to them anywhere. I think Aaronson is probably right: IIT fails as a mathematical model of what we intuitively consider conscious, since even though it excludes many common electronic devices we wouldn’t call “conscious”, it is possible to mathematically construct an algorithm, physically implementable in electronics, which would be conscious per IIT but not per our intuition. And even if Tononi patches his mathematics to handle a particular case of that problem, someone with Aaronson’s skillset may just be able to construct another.
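Aaronson’s argument is mathematical, but its flavour can be sketched in code. Below is a toy “integration” score of my own devising - emphatically not Tononi’s actual Φ, which involves distances between cause/effect repertoires - that just asks, for a small deterministic boolean network, whether any bipartition of the nodes causally decouples the two halves. A plain ring of XOR gates, about as “unconscious” a device as one can imagine, already scores maximally, which is the spirit of Aaronson’s grid/expander counterexamples.

```python
from itertools import product

# Toy network: N nodes on a ring, each node's next state is the XOR of
# its two neighbours. A deliberately dumb, regular circuit.
N = 4

def step(state):
    # Next value of node i depends on nodes (i-1) and (i+1) mod N.
    return tuple(state[(i - 1) % N] ^ state[(i + 1) % N] for i in range(N))

def cross_dependence(part_a, part_b):
    """Crude measure: over all 2^N global states, in how many does
    perturbing some node in part_b change part_a's next state?
    (Not Phi proper - just a 'does A causally depend on B' count.)"""
    changes = 0
    for state in product([0, 1], repeat=N):
        base = step(state)
        for j in part_b:
            flipped = list(state)
            flipped[j] ^= 1
            if any(base[i] != step(tuple(flipped))[i] for i in part_a):
                changes += 1
                break
    return changes

def toy_integration():
    """Minimise cross-partition dependence over all bipartitions, in the
    spirit of Phi's 'minimum information partition'."""
    best = None
    for mask in range(1, 2 ** N - 1):  # every nontrivial bipartition
        part_a = [i for i in range(N) if mask & (1 << i)]
        part_b = [i for i in range(N) if not mask & (1 << i)]
        score = min(cross_dependence(part_a, part_b),
                    cross_dependence(part_b, part_a))
        best = score if best is None else min(best, score)
    return best

print(toy_integration())  # prints 16: every bipartition of the ring is
                          # crossed by an edge, and XOR is sensitive to
                          # each input, so no cut decouples the halves
```

The point of the sketch is that maximal “integration” under a simple formal definition is cheap to engineer, and patching the definition to exclude this ring would invite the next counterexample - which is exactly the worry about IIT above.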
Tononi might then argue that if there is no mathematical model of our intuitions about consciousness lacking in special pleading, that’s a sign our intuitions are flawed. Okay, but then if we accept our intuitions can be flawed in some cases, why not in more cases? One could decide the intuition of consciousness is completely erroneous and become an eliminativist about it. Or, if IIT forces you to accept (contrary to our intuitions) certain (special cases of) simple electronic devices or computer systems as just as conscious as humans, why not violate those intuitions further and insist on that for even more cases?
I'm not a "believer" in IIT. But I think it's an incredible idea, and taking the time to really understand what Tononi et al. are proposing is a mind-expanding experience. It may not explain consciousness, but it does make you think about what things could be a part of it. And any attempt to mathematically formalize cognitive science gets a vote of approval from me.
My personal belief is that consciousness requires dynamic continuity. I don't think an algorithmic system is conscious, because its "cognition" is discrete and the information isn't integrated across frames. I don't have a "why that works"—it's just a gut belief.
Funny, I was just thinking in the opposite direction. There's "I think therefore I am," but there isn't really "I thought therefore I was." I know I'm conscious, but I only have the memory of being conscious before, which could be false. Consciousness could just be a snapshot, although it certainly doesn't feel like it.
I agree this seems like an important difference. It’s just interesting to me that there’s no proof that my consciousness is continuous. I’m going to continue to assume that it is, but it’s unknowable.