There is an analog of free will that works even in a cellular automaton. For that you need two things:
1. The part of the cellular automaton describing a thinking entity can be separated from the rest of the world: that is, changing the states of cells anywhere outside it doesn't change the state of the entity itself.
2. It is not possible to replace the computation describing this entity with anything simpler.
Condition 2 means that any method of predicting what this thinking entity chooses is completely equivalent to that entity living and making the choice it wants by itself; see the sketch below. And condition 1 means that this part of the automaton is indeed a separate entity.
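To make this concrete, here is a minimal sketch in Python using the elementary cellular automaton Rule 110 (proven Turing-complete, and conjectured by Wolfram to be computationally irreducible). The frozen "wall" cells and all the region sizes are illustrative assumptions, not part of the argument above: the walls seal an "entity" off from the rest of the world (condition 1), and the only known way to learn an irreducible entity's state after n steps is to run all n steps (condition 2).

```python
RULE = 110  # the rule number's binary digits encode the update lookup table

def step(cells, walls):
    """One synchronous update; cells whose index is in `walls` stay frozen
    at 0, sealing everything between two walls off from the outside."""
    out = []
    for i in range(len(cells)):
        if i in walls:
            out.append(0)
            continue
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

def run(cells, walls, steps):
    """Condition 2, informally: the only known way to get the state after
    `steps` updates of an irreducible rule is to perform every update."""
    for _ in range(steps):
        cells = step(cells, walls)
    return cells

# Condition 1: the "entity" lives between two frozen wall cells, so its
# update never reads anything outside them.
walls = {20, 40}
world_a = [0] * 60
world_a[30] = 1          # a live cell inside the entity
world_b = list(world_a)
world_b[5] = 1           # flip one cell far outside the walls

final_a = run(world_a, walls, 100)
final_b = run(world_b, walls, 100)
assert final_a[21:40] == final_b[21:40]  # the entity's state is unaffected
print("entity cells after 100 steps:", "".join(map(str, final_a[21:40])))
```

Flipping a cell outside the walls leaves the entity's history bit-for-bit identical, which is exactly what condition 1 demands; a superdeterministic world, as described next, would deny that any such walls exist.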
With superdeterminism, condition 1 cannot be true: a spin change of a single particle far away triggers a very complex change in the behavior of all thinking entities that were close to that particle in the past.
Please elaborate. It seems like it’s not even possible to phrase a question about free will that makes sense.
What would it mean for an entity to have free will, such that you could ask a question about whether or not it has it?
If you could ask a meaningful question, answering it doesn't seem as difficult as explaining why anything does or does not exist, or explaining qualia.