This seems like such faulty and backwards reasoning to me.
The experiment of detecting anomalies at the subatomic level is very interesting and that work should continue.
But, why must we say such anomalies lead to "we are living in a simulation"?
This seems like a fallacy of logic from reasoning backwards from the effect to the cause, with absolutely no connection between the two.
(Maybe people just want to hop on the "everything is a computer" bandwagon for marketing/publicity; it is tiring. At its worst, it makes future tech assertions look kookish.)
> But, why must we say such anomalies lead to "we are living in a simulation"?
That is not the main argument that we are living in a simulation. The main argument is the following disjunction (from Wikipedia). Either:
* human civilization is unlikely to reach a level of technological maturity capable of producing simulated realities, or
* a comparable civilization reaching this technological status will likely not produce many simulated realities, for any of a number of reasons, such as diversion of computational processing power to other tasks, ethical considerations about holding entities captive in simulated realities, etc., or
* any entities with our general set of experiences are almost certainly living in a simulation.
Personally I think the biggest prima facie weakness in this argument is that while we will probably reach a point where simulating a human mind is feasible, and we will probably do so when that time comes, it may take a lot of computational mass to simulate a person. Thus, any recursive simulations would get smaller and smaller in terms of the number of thinking entities in them.
I would say it probably doesn't take a lot of computational mass to simulate a person. We're each doing it right now with a glob of meat the size of a grapefruit, and that's just what evolution was able to come up with.
So simulating a universe where minds are a low-level primitive, and everything else is simulated lazily and using lots of shortcuts is reasonably plausible.
But that's not the kind of world we find ourselves in. Our minds appear to be constructed of the same stuff as the rest of the world. If we're to accept the simulation argument, then our definition of "simulated reality" must imply a physics implementation that is high enough fidelity to support minds as high level structures - you have to actually be able to build compact working brains in the simulation.
Alternatively, we'd have to suppose that a brain is actually just a very convincing facade, usually sitting inert until someone observes it, at which point it reflects the inner workings of the person's mind at whatever level of detail is necessary to convince the observer that it is the mechanism behind the mind.
> But that's not the kind of world we find ourselves in. Our minds appear to be constructed of the same stuff as the rest of the world. If we're to accept the simulation argument, then our definition of "simulated reality" must imply a physics implementation that is high enough fidelity to support minds as high level structures - you have to actually be able to build compact working brains in the simulation.
Devil's advocate, but there could simply be level-of-detail heuristics, where the mind is explainable in terms of physics but not implemented that way (unless you look). Similarly, for people who take damage to their brains, the model for that person's new brain could simply be derived during that simulation step and substituted.
That's just a high-level view of how a lazy simulation could still produce minds that appear to be implemented using physics.
That's what I was saying in the "Alternatively, ..." part.
But this does suggest an avenue of testing the simulation hypothesis that might be more useful than looking at purely physical anomalies. The physics engine might be simplified, but edge cases will likely be subtle and difficult to distinguish from gaps in our theories of physics.
But if the simulation is using heuristics to gauge whether a brain is being observed, and making it correlate to the mind primitive backing it as needed, then we should eventually be able to construct scenarios where those heuristics are wildly inaccurate. If we could, for example, trick the simulation into presenting a completely inactive brain while it was conscious, that would be informative.
Who says we haven't tricked the simulation, and who says we can be tricked?
If we do exist in a simulation, not understanding the motivations and nature of the perpetrators of said simulation leaves us with no means to tell if our results are unadulterated.
Let's say you find incontrovertible proof that we are being simulated. You are so excited you pull through a red light on the way home and are killed by a bus. In the meantime the hard drive with your data on it crashes and the data is lost. Another researcher attempts to follow your work, performs the same experiment, and gets a negative result: we are not being simulated (little does he know he's running on Universe.exe patch level 4.0.2113).
A universal virtual machine could operate just like virtual machines we use now. If the occupants of said simulation screw stuff up we can always be rolled back to a checkpoint with no knowledge of such an occurrence. Slight changes could be made to the problematic actors in the simulation and chaos theory would likely make the simulation diverge.
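A toy sketch of that rollback-and-perturb idea (everything here is made up for illustration): restore a saved checkpoint, nudge one value by a hair, and let a chaotic update rule do the diverging.

    # Hypothetical rollback: save a checkpoint, later restore it, nudge one
    # "problematic actor" slightly, and let chaotic dynamics (the logistic map
    # here) make the new run diverge from the old one.
    import copy

    def step(state):
        # toy chaotic dynamics: logistic map with r = 4
        return {k: 4.0 * v * (1.0 - v) for k, v in state.items()}

    state = {"actor": 0.4}
    checkpoint = copy.deepcopy(state)      # save a restore point

    for _ in range(5):
        state = step(state)                # the occupants "screw stuff up"

    state = copy.deepcopy(checkpoint)      # roll back; no one remembers a thing
    state["actor"] += 1e-9                 # tiny tweak to the problematic actor

    a, b = state, checkpoint
    for _ in range(60):
        a, b = step(a), step(b)

    print(abs(a["actor"] - b["actor"]))    # after ~60 steps the runs bear no resemblance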
Yeah, there are a number of ways that it could be thwarted. Any civilization that can achieve this kind of universe simulation would probably also have solved general AI, so the "heuristics" we're trying to fool may in fact be an intelligence far more clever than ourselves.
Actually, in a universe where time doesn't necessarily flow forward, the evidence could be perfect. They could just delete branches where the simulation is discovered and revert to the point before the discovery was made. But I see what you are trying to say.
>I would say it probably doesn't take a lot of computational mass to simulate a person. We're each doing it right now with a glob of meat the size of a grapefruit, and that's just what evolution was able to come up with.
And in this case, evolution was exceptionally clever, since it managed to design the world's most powerful, most parallel computation devices in maybe a couple kilograms of meat.
Consider that the computational power necessary to emulate a human brain accurately is conjectured to be measured in petaflops.
I mean, sure, you could invoke some kind of high-level computational heuristics, like we did to emulate the Nintendo 64 back in the day, but when dealing with living things I wouldn't call that a very good idea.
>And in this case, evolution was exceptionally clever, since it managed to design the world's most powerful, most parallel computation devices in maybe a couple kilograms of meat.
That is almost certainly not the case. The estimate for emulating a brain is about 37 petaflops. The fastest supercomputer is currently China's Tianhe-2, at nearly 34 petaflops. You're certainly going to have more than 10% overhead for emulating a brain (more on this in a moment).
But beyond that, the brain is a specialized computing device, and at certain specialized tasks, it is currently unrivaled. However, this may largely be a matter of algorithms rather than raw power.
>Consider that the computational power necessary to emulate a human brain accurately is conjectured to be measured in petaflops.
Emulation requirements are an exceptionally poor indicator of computational power. Emulating an SNES loosely requires a CPU capable of around 350 MIPS (FLOPS don't work for obvious reasons). Emulating an SNES accurately requires a CPU capable of over 50,000 MIPS. The CPU of an actual SNES does 1.5 MIPS. It has other components working in parallel, but we'd be generous to estimate 10 MIPS for the whole system.
Why the discrepancy? Well first off, there's the overhead of simple translation. But more importantly, the SNES has several components working in parallel, and the timing of how these components interact is critical.
And that's emulating hardware with a handful of components, designed by rational people. The brain has massive parallelism, and was created by a blind optimization process.
So even if whole brain emulation can really be accomplished with power around 37 petaflops, doing the actual work of a human brain (without emulating it) can be done with far, far less. The emulation overhead is probably orders of magnitude greater than that of an SNES, but if it were the same, we'd be looking at single-digit teraflops to implement a mind as powerful as a human's. That's achievable right now for a few thousand dollars.
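Rough arithmetic behind that last claim, using only the estimates already quoted in this thread (so treat every number as illustrative, not authoritative):

    # SNES figures from above: ~10 MIPS native, ~50,000 MIPS to emulate accurately.
    snes_native_mips = 10
    snes_accurate_emulation_mips = 50_000
    overhead = snes_accurate_emulation_mips / snes_native_mips   # ~5,000x

    # Brain emulation estimate from above: ~37 petaflops.
    brain_emulation_flops = 37e15
    native_equivalent_flops = brain_emulation_flops / overhead   # ~7.4e12

    print(f"implied emulation overhead: {overhead:,.0f}x")
    print(f"'native' brain equivalent: {native_equivalent_flops / 1e12:.1f} teraflops")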
>it may take a lot of computational mass to simulate a person
I don't understand much of what's going on here, but couldn't it possibly take virtually no computational mass to simulate a person? I mean, if you tie the whole system together such that simply switching the smallest of switches from 0 to 1 sends the desired changes rippling out through the rest of the system, it wouldn't matter how much mass it took.
Why does it have to take a lot of mass to simulate a person - does what's outside necessarily have to be more complex than what's on the inside?
If each simulated universe can only contain one actual person, then the simulation argument doesn't hold up, since the vast majority of people would still be living outside of simulations.
> Thus, any recursive simulations would get smaller and smaller in terms of the number of thinking entities in them.
I'm glad this flaw in the simulation argument is well known. The simulation argument reminds me of Anselm's ontological argument in a way.
I wonder if anyone has tried to create a set of "Laws of Simulation" analogous to the laws of thermodynamics, which would define a limit to how much pseudoreality can be simulated by a given amount of "true" reality.
Well, it's not so much a flaw as something that bears empirical testing, someday. If we find that we can easily simulate a perceptual volume of the universe in a relatively smaller amount of physical space, this part of the argument will be sound.
You're right. I think "unstated, untested assumption" would be better than "flaw" in that context. If indeed it turns out that a simulation can be more compact in its host universe, then maybe everything really is a simulation, and it's simulations all the way down to infinity. But this would defy my intuitive sense of the nature of reality (derived from things like the conservation of matter and energy, and the geometric idea that four inches of space won't fit within three inches of space), and would effectively allow infinite computation within a finite space.
Well, I'm fairly certain that what we know about physics so far bars 1:1 simulation of this universe at 1:1 speed using anything less than all the atoms in this universe (not much of a simulation then, eh?) But we might be able to devise a simpler physics that still supports intelligent life. :)
You could make some approximations. That's pretty much the core of the argument. It's somewhat implicit that the true nature of the containing universe is not granular like ours.
Brian Greene does a very good job of explaining the logic in this conversation with Robert Krulwich: https://www.wnyc.org/radio/#/ondemand/91859 . If you have a spare 50 minutes, it's really really worth your time to listen.
In sum, the argument goes like this: if there is more than one universe, then there will be some universes that are larger than ours, some that are smaller, some that are younger and some that are much, much older. If we, in our universe, have evolved intelligence sufficient to construct machines to simulate simple mammalian brains, then it is reasonable to assume that beings in a much larger, much older universe would have computational power far in excess of ours. So much so, that not only would they be able to simulate entire smaller, younger universes, but those simulations may themselves have the ability to simulate even smaller, younger universes. Turtles all the way down, so to speak.
So even if the very biggest, oldest universes form the long tail of all possible universes, the fact that they could simulate universes that simulate universes that simulate universes means that, all else being equal, the probability that our universe is real is much less than the probability that we are, in fact, in a simulation.
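A toy version of that counting argument, with every number invented purely for illustration: one base universe, each universe spawning some simulations, and simulations nesting a few layers deep.

    # Assumed, made-up parameters: each capable universe runs 1,000 simulations,
    # simulations nest 3 layers deep, and every universe hosts a comparable
    # number of observers.
    sims_per_universe = 1_000
    layers = 3

    simulated_universes = sum(sims_per_universe ** k for k in range(1, layers + 1))
    total_universes = 1 + simulated_universes   # the single "base" universe plus the rest

    print(f"P(an observer is in the base universe) ~ {1 / total_universes:.2e}")

Under those (entirely assumed) numbers, the odds of being in the one non-simulated universe come out to about one in a billion; the whole dispute is over whether assumptions like these are defensible.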
To simulate a universe you'd need to be able to solve, or at least approximate with great accuracy, the n-body problem.
And we can't even solve that exactly for n >= 3 (without special restrictions).
To simulate a small universe you're talking about one hell of a large "n".
Some physics is fundamentally not tractable with logic and processing power, because the requirements grow exponentially with each additional n (and it makes no difference how large or advanced that other universe is)...
Those same physics require something much more fundamental, the fabric of space-time itself with some energy/motion introduced into it.
Hence the only way to "simulate" a universe is to create one... But then it's not a simulation at all.
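To put a number on the "grows exponentially with each additional n" point: a full quantum state of n two-level systems takes 2^n complex amplitudes just to write down, before you simulate anything.

    # Raw storage cost of an exact quantum state of n two-level systems,
    # ignoring every other cost. One complex128 amplitude = 16 bytes.
    BYTES_PER_AMPLITUDE = 16

    for n in (10, 50, 100, 300):
        amplitudes = 2.0 ** n
        print(f"n = {n:3d}: {amplitudes:.2e} amplitudes, "
              f"{amplitudes * BYTES_PER_AMPLITUDE:.2e} bytes")
    # By n ~ 300 the amplitude count already exceeds the usual estimate of
    # ~10^80 atoms in the observable universe.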
"... construct machines to simulate simple mammalian brains", you fell for it man, marketing does marvels
what we are capable of doing is just building a machine with the same number of components (i.e. transistors if you want) as some simple mammalian brains (i.e. neurons), then the guys at wired or discovery channel do their part and tell that "that's an artificial brain"
I'm glad to see someone call this circus what it is.
Yes, these people are imagining the unimaginable and then making certain predictions concerning the qualities of the processes.
If a "simulation" was created by some intelligent entity, how could we know what error corrections it would have, what short-cuts it would involve, how fine or coarse-grained the steps involved would be and so-forth? Any "necessary flaw" we might imagine would be something these much smarter and advanced entities would imagine and compensate for. As others have noted, Quantum effects imply you couldn't have a 1-1 simulation of very much in a universe resembling this one. But since our whole universe is supposedly being simulated, what reason do we have to assume the simulating universe has the same laws? But then what reason do we have to make guesses about the form "anomalies" would take??
On the other hand, indications that events in our universe are happening in the context of some sort of "substratum" don't at all mean that such a substratum was created by an intelligent entity - quantum effects "simulate" Newtonian effects, but QM isn't a computer, just a deeper level of reality.
It's hard to describe all the different fallacies that seem to be at work.
>This seems like such faulty and backwards reasoning to me.
It's not backwards, it's the scientific method: come up with a hypothesis (the universe is a simulation), find a prediction made by that hypothesis (physical anomalies), then test for that prediction. Of course alternate explanations are possible, but the usefulness of the test is determined by the prior probabilities we assign to the various possible explanations; the basic form of reasoning is not backwards.
I think Kant may have rendered this article's angle on the question pretty meaningless over 200 years ago.
In the Critique of Pure Reason, he posits that space and time are the forms of our intuition, and he claims that anything that comes to us does so through these forms. That is to say, we do not know things as they are in themselves--we only know them insofar as they have properties that can be reflected in a form that can be processed by our intuition. We bring space and time with us, as part of our structure, and mathematics is an examination of that structure.
So Kant would have it that mathematics is neither a Platonic form nor something we have created--it's an exploration of our capabilities of sensation. This seems to me like a much better explanation of why mathematics is "universal" than the article's approach.
It also leaves open the question of whether we are "living in a computer simulation" without suggesting that it's a simple matter (or even necessarily possible) to see out from inside.
>We can eat oysters only insofar as they are brought under the physiological and chemical conditions which are the presuppositions of the possibility of being eaten.
>Therefore, we cannot eat oysters as they are in themselves.
I think the supposition that space and time are forms of our intuition is suspect, and is indeed undermined by relativity. The supposition that maybe spacetime are the forms of our intuition also doesn't necessarily withstand scrutiny, because there is at least one way of formulating quantum field theories with no reference to spacetime at all (see, for example, https://news.ycombinator.com/item?id=6403285 ).
I don't think relativity or quantum field theory undermine the idea that space and time are our forms of intuition. On the contrary, I feel it may be the first framework that really created a sensible space for those phenomena.
Kant's approach holds that our /intuition/ is limited by space and time and that "things in themselves" may in actuality be many different ways. We can only receive and understand things insofar as they exist or can be manifest in space/time. I think his theory very much leaves open that we can better grasp the properties of things by studying the way they appear in space/time and drawing inferences. His very point is that space and time do not exist in themselves as far as we perceive them--they are merely the forms through which we are capable of receiving anything. And thus I would say he created a groundwork of thought that made room for relativity and quantum observations well before they were conceived--in fact, I imagine Kant's groundwork was important in our path to them.
[Edited for a slight bit more clarity]
[HN meta comment: it's really annoying that any given story that has a lot of upvotes and that is seemingly appropriate for HN can be flagged from the front page. Why spend time writing an insightful comment when any given story could be sent from the top of the front page to the middle of page 3 by the capricious action of a flagger or moderator, meaning no one will ever see the comment? Such unpredictability really discourages quality participation and encourages quick responses--why expend a lot of effort when the chance of anyone reading your work could evaporate so quickly and arbitrarily?]
Space and time are things themselves and do exist independent of our perception. As evidence: all the solutions to general relativity that have no matter content but nonetheless are not flat, because the curvature of spacetime itself has energy that gravitates (the Schwarzschild solution comes to mind). Spacetime has its own real, physical, dynamics.
All you did there was prove your assumption. You assumed spacetime, then you assumed it to be a thing-in-itself (which, by the way, is an IDEA), and then noticed it has certain properties as described by various theories (such as Schwarzschild's). None of this is evidence that space or time - which, by the way, I DARE you to define - are properties of some world independent of consciousness (the witness/observer); you're just asserting them. You can say they are empirical (or phenomenological) facts, but that is much different from saying they are things-in-themselves.
Kant agrees that empirically, space and time are things themselves and exist independent of our perception. In fact, for Kant, anything that can come to us or be reasoned by us, and the relations of those things, must comply with space, time, and, therefore, mathematics.
I wasn't trying to say that space, time, and mathematics do not exist independent of our perception. Rather, I was refuting the idea that mathematics is either a Platonic form or something we made up. The Kantian idea is: anything we come into contact with or understand must come to us in a way that conforms with mathematics, or else we wouldn't be able to experience it. Therefore, everything we know complies with mathematics--it's not some Platonic form outside us or our creation, but the very structure of our understanding. There might be a superset of laws that things in themselves comply with that we don't have access to. We are, however, limited to the understanding we can achieve through space, time, and mathematics (which do describe everything we can encounter).
My initial response wasn't terribly well written and was rambling, so I tried to clean it up a bit. It started to address your point but veered off-topic. There's also an important difference between perception and intuition that I originally didn't express very well.
Getting back to what I was originally trying to say, I don't think the nature of mathematics has much to do with whether or not we live in a computer simulation. We could be bound to mathematics for any number of reasons that are beyond our capability for understanding. What I was commenting on was the article's introduction to the question. I suppose, though, that the article itself admits the relation between the introduction and the topic may not be strong ("But one fanciful possibility is that we live in a computer simulation based on the laws of mathematics"). I just don't think "Mathematics applies to everything we can experience, therefore we live in a computer" is a particularly interesting approach.
> I don't think the nature of mathematics has much to do with whether or not we live in a computer simulation.
Yes, I don't understand the logic behind this idea. If our universe is only orderly because it is inside a simulation, what is the universe containing the simulating computer like? Totally non-mathematical, and yet rich enough to have computers in it? Seems farfetched.
"The bottom line is, if we are in a simulation, there needs to be compressive sampling because quantum effects between particles would otherwise require an infinite amount of memory, so the meaning of the wave function is that it's for data compression. [...] The fortuitous data compression implicit in wave functions is merely another reason to suspect we are in a simulation. "
Seems like backward reasoning. It's fair to say "Hey, if the universe is a resource constrained simulation, that would explain why wave-particle duality!", but I don't see any logical requirement that "Because wave-particle duality, it must be a resource constrained simulation (if it's a simulation)".
Quite. The creator could just as easily choose such a physics for obscure aesthetic reasons, or reasons for which we have no concept at all. Furthermore, we have no way of knowing whether "resource constraints" are a meaningful concept outside our little physical universe, either.
Other people think this way e.g. "The Earth's creation, according to Mormon scripture, was not ex nihilo, but organized from existing matter." If God did not create the earth out of nothing, but rather organized it from existing matter, it's not that much of a jump to assume that He doesn't run simulations with infinite memory either.
I'm not a physicist, but I suspect there are more convincing (physical) arguments for the discreteness of nature -- otherwise we would have non-finite (information/thermodynamic) entropy per unit space, which would probably lead to a handful of inconsistencies, perhaps analogous to the ultraviolet catastrophe which led to the proposition of quanta of energy by Planck.
That article is so much fluff, and doesn't really say much in the way of hard science.
What he's referring to are Quantum Chromodynamics simulations, which in very general terms were born out of a desire to simulate the sub-atomic interaction of the nucleus of various elements, mapping nuclear reactions for weapons like the hydrogen bomb, when live fire tests were banned by international treaties.
> Numerical lattice Quantum Chromodynamic (QCD) calculations using Monte Carlo methods can be extremely computationally intensive, requiring the use of the largest available supercomputers.
It's nice to contemplate the philosophical implications of being able to simulate the stimulated emissions of high-energy gamma ray emissions with high reliability and resolution inside a computer, but where's the meat of the article? It's just eleven paragraphs of "what if?"
We are living in a universe with natural laws and intelligence. Whether it's a "simulation" is sort of a nonsensical question. Is an iPhone emulator a simulation or an actual iPhone? What distinguishes those two other than our definition of iPhone, which of course is subjective?
The question is similar to "is there a god?" That is, is there an intelligence outside of our universe that consciously created this one.
Do wave function anomalies indicate that our universe is the product of intelligence, or do they just show us how little we understand about the nature of existence?
Further, following the logic of the article, we most likely live in near-infinite nested simulated universes, since the same reasoning applies to our gods, unless of course reason is an aspect of our universe, in which case how can we expect to reason about what is outside it?
These are interesting questions, but jumping to the conclusion that we are in a simulation seems both premature and nonsensical to me.
You're missing the point. What makes an iPhone more real than an emulator? Is it that we consider an iPhone to be real and an emulator to be a simulator, or is it that one came first? An emulator is still real. One could emulate an emulator - which of those then is real? Is the "real" iPhone not a simulator? It's all a matter of semantics.
Even if we live in a universe created by an intelligence, we are not in a simulator - we are in a universe created by an intelligence. If we are in a simulator, what are we simulating? The real universe which is like this one?
What makes the iPhone real in this case is that its chips are running the code, whereas the simulator runs code that does not control actual chips, and functions as though it did as emergent behavior. The calls in the code running in the simulator don't actually touch the hardware they're coded to touch, they are not subject to actual hardware behavior. That's all.
It's not a distinction without a difference. Javascript does not run on the CPU, Firefox does. If you recreate Firefox in javascript (i.e. a whole slow VM) then it is still a simulation of a computer running Firefox. The physical computer is not actually executing the C++ instructions that Firefox is coded with.
It is quite easy to see this level of abstraction difference.
The question is, am I existing in a physical sense with my atoms being 'the metal' or are there some more fundamental laws that are used to 'interpret' (at a higher level) the laws of our universe, without being bound by them, and are only interpreting the movement of my atoms.
This is quite an easy question to understand, and I don't see why it poses such a metaphysical conundrum to you.
If we are in a simulator, all that means is that there are more fundamental rules that are being used to create our universe at a higher logical level as an emergent property.
The question is, is our code running at the lowest level, or is some other code running at the lowest level but simulating our laws as an emergent property? (a la linux in javascript).
It's the same question, just dressed in scientific language.
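A toy illustration of that "linux in javascript" point (the guest instruction set here is invented): the host only ever executes the interpreter's loop, while the guest program's rules exist one level of abstraction up, as emergent behavior.

    # The host CPU runs this Python loop; it never runs the guest's "add"
    # instruction directly. The guest's laws are emergent, not the lowest level.
    GUEST_PROGRAM = [("push", 2), ("push", 3), ("add", None), ("print", None)]

    def run(program):
        stack = []
        for op, arg in program:       # the only thing the host actually executes
            if op == "push":
                stack.append(arg)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack[-1])      # guest-level effect via a host-level call

    run(GUEST_PROGRAM)                # prints 5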
We could write a program that detects if it's running on a real or emulated iPhone, but that's because we know both those environments from the outside.
I think Bostrom poses interesting questions, but I don't believe it's possible to make an experiment to test this from the inside.
> we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences
So the argument seems to be that, if we were living in a simulation, and it used physics we now know to be slightly inaccurate, we would be able to measure that.
Maybe I'm missing something, but that's not very logical is it?
The article should have linked to the original Simulation Argument by Nick Bostrom, since the physics part is just one of many possible tests to verify the hypothesis. Here is Nick Bostrom's argument: http://www.simulation-argument.com/
I think the argument is more like, "If we are in a simulation, it can't have an infinite resolution because that would imply an infinite amount of memory and computational power. We should be able to detect the 'graininess' at the limit of resolution."
We can, via simulations, deduce what a world with an infinitely fine grid (or no grid, depending on your preferred language) would be like. Is our world like that world? If not, then there must be a grid at the bottom that is only finitely fine. See my top-level comment for more detail.
I sometimes picture the Universe as being some kind of a recursive knowledge generation simulation eating memory as it learns, with (Known Universe <-> Subatomic Particles) being the current limitations. The One Electron[0] acts as the program counter.
It is interesting how counterintuitive quantum mechanics seems and yet many aspects make perfect sense in the eyes of a programmer.
* The foundation of Quantum Mechanics is the idea of quanta. The physical quantities of a computer and of our Universe can change only by discrete amounts, like the bits of a computer. Distances, energy, and possibly time are quantized in our universe.
* The Heisenberg Uncertainty Principle also implies interactions don't 'count' unless they affect some other piece of the system. Is the Universe doing what programmers call 'lazy evaluation' here? (See the sketch after this list.)
* In computer simulations, the criterion for von Neumann stability is that no effect can propagate at a speed faster than the grid spacing divided by the time step. Interactions in our universe are also limited by the speed of light.
* Holographic Principle does away with spatial locality, drastically reducing the number of possible states our Universe can have. None of this makes sense unless the Universe is trying to minimize resource usage.
* E=mc^2: mass and energy are two forms of the same thing. In a computer it's all bytes at its simplest form.
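For the "lazy evaluation" item above, here is the programmer's side of the analogy (a sketch, not a claim about physics): nothing is computed until something downstream actually observes it, and later observations reuse the cached result.

    # Lazy evaluation sketch: the expensive value isn't computed until first
    # observed, and repeated observations reuse the cached result.
    def lazy(thunk):
        cache = {}
        def force():
            if "v" not in cache:
                cache["v"] = thunk()   # work happens only on first observation
            return cache["v"]
        return force

    def expensive_interaction():
        print("...now the work actually happens")
        return 42

    result = lazy(expensive_interaction)   # nothing computed yet
    print("defined, but not yet observed")
    print(result())                        # first observation triggers evaluation
    print(result())                        # cached thereafter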
This comment is a response to a bunch of the comments in this thread, but it didn't make sense to scatter the information throughout the page.
The way lattice QCD [0] works requires a few approximations in order to fit the problem into your computer. You
(1) simulate a universe with finite volume (state-of-the-art calculations have boxes just a few times bigger than a nucleus in each direction),
(2) discretize the space into a bunch of points some lattice spacing apart connected by links, and,
(3) possibly simulate at values of the quark masses that are not their physical values (this helps make a matrix you need to invert better-conditioned).
Finally, you know what action[1] you want to simulate (that of QCD[2]) and you construct a discretized action that accounts for all of the above approximations that approaches the real action you want to simulate when you take the limits volume --> infinity, lattice spacing --> 0, masses --> physical values.
There are many different ways to build such a discretized action (and they go by names like the Wilson action, the Domain Wall action, the Staggered action, and others), but no discretized action is "perfect" in the sense that anything you try to measure can have errors that depend on how different the volume you actually measured is from the infinite-volume limit, how much having a finite lattice spacing changes your answer, and how much unphysical masses matter.
We have theory that controls how measured quantities depend on these parameters, and thus can understand, via extrapolation, how lattice artifacts change the quantities we measure from the true continuum limit.
You can use this understanding to find out what the answer is in the limit of there being no discretization at all (lattice spacing = 0) and then compare measurements of our world to those predictions. If they match, then the world doesn't really have a grid of points at its bottom, but if they don't match you can find out how finely spaced the grid of spacetime is.
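A toy sketch of that extrapolation, with made-up data and the simplest possible ansatz (observable = continuum value + c * a^2); real lattice analyses are far more careful about fit forms and error propagation.

    # Extrapolate a measured observable to zero lattice spacing, assuming
    # O(a) = O_0 + c * a**2. All numbers below are invented for illustration.
    import numpy as np

    a        = np.array([0.12, 0.09, 0.06])   # lattice spacings (fm), assumed
    measured = np.array([1.85, 1.78, 1.74])   # simulated observable at each a, assumed

    c, O_0 = np.polyfit(a**2, measured, 1)    # straight-line fit in a**2
    print(f"continuum estimate O_0 ~ {O_0:.3f}  (artifact coefficient c ~ {c:.2f})")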
Unfortunately, any simulation you do will have error bars, and so in practice all you can say is "the continuum world looks like this ± that", the practical consequence being that you can only put an upper bound on how different the lattice spacing of spacetime is from zero: "If the lattice spacing were bigger than X then we would have conflict between the experimental observations we made of the universe and the simulation. No such conflict is known, so the lattice spacing is smaller than X."
Source: I am a postdoc doing lattice QCD, and I have met Martin and Zohreh.
You're overlooking something basic and uncontroversial that may prevent such an analysis from being deterministic. Gödel's incompleteness theorems say that, for a sufficiently complex system, there are true statements that cannot be proven from within that system, using the system's methods. This means that, from a perspective within the universe, we may be constrained by the incompleteness theorems from making any kind of conclusive determination as to its true nature.
Gödel's incompleteness theorems have a wide reach. They prevent a solution to the Turing halting problem, and they placed a firm limit on projects such as Russell and Whitehead's tendentiously named "Principia Mathematica" (http://en.wikipedia.org/wiki/Principia_Mathematica), a very ambitious project that was fatally undermined by Gödel's work.
The incompleteness theorems need to be considered in the present question -- we're inside the "system" we're trying to analyze, consequently there are questions we can't meaningfully resolve. I think the question about the universe being a simulation meets this classic criterion for indeterminacy.
I would love a way, and have indeed tried (in my ambitious, misspent youth) to formulate a way, to make this observation precise for quantum field theories. I am sure there is a way, but it's tough.
Only the Gödel sentences are true but not provable. Maybe the sentence "the lattice spacing is finite" is not a Gödel sentence. Indeed, the thrust of the paper seems to be exactly that, and I agree.
When I think too hard about apply Gödel to physics in my night thoughts, it makes me wonder if there is really a "final theory of everything" or if every theory is really just an effective field theory [0] with another deeper theory underneath, and it's turtles all the way down.
> Only the Gödel sentences are true but not provable.
An important aspect of the incompleteness theorems, relevant to the present topic, is that one cannot deterministically locate those particular true claims that cannot be proven true, to which the theorems apply. If this were not so, it would undermine the very notion of indeterminacy.
But the subjective nature of our view of the universe extends beyond the incompleteness theorems into more pedestrian issues like how we can claim to objectively define a property of physical reality, using tools and methods embedded in that same reality.
> When I think too hard about apply Gödel to physics in my night thoughts, it makes me wonder if there is really a "final theory of everything" or if every theory is really just an effective field theory [0] with another deeper theory underneath, and it's turtles all the way down.
"Turtles all the way down" -- I love that expression -- it aptly describes the problem faced by creatures trying to fully define their domain from within. From time to time I picture the day this expression came into being ... "'You're very clever, young man, very clever,' said the old lady. 'But it's tortoises all the way down!'"
> An important aspect of the incompleteness theorems, relevant to the present topic, is that one cannot deterministically locate those particular true claims that cannot be proven true, to which the theorems apply. If this were not so, it would undermine the very notion of indeterminacy.
Right, I agree, but that doesn't necessarily mean that we cannot prove any true claims. Unless I misunderstand, it just means that true claims whose proof we do not know cannot be distinguished from true claims whose proof does not exist. We can search for proofs to both, and sometimes we will find a proof, and sometimes we won't. But not finding a proof doesn't mean that the claim is a Gödel sentence, it could also mean that we just didn't look hard enough.
Also: I should really... proof read my posts better. "too hard about apply..." I cringe at the fact that my carelessness will be preserved for posterity.
> Right, I agree, but that doesn't necessarily mean that we cannot prove any true claims.
That's certainly true, but the fact that we can't locate a demarcation between provable and unprovable makes all analysis inside system X problematical as to the true nature of X, with an obvious connection to the present topic.
> Unless I misunderstand, it just means that true claims whose proof we do not know cannot be distinguished from true claims whose proof does not exist.
Not exactly. The demarcation is between theorems whose truth we can prove (and can prove that we can prove), and theorems we're not sure about but that may fall on the far side of the demarcation line. The second category is a problem that undermines any global analysis of logical systems.
Remember that Russell and Whitehead's famous tome (http://en.wikipedia.org/wiki/Principia_Mathematica) wasn't edited after Gödel's work, to simply remove those parts that were unsupportable, it was instead abandoned in its entirety. Gödel's work demonstrated that any such deterministic analysis of logical systems wasn't possible, in whole or in part.
But again, this doesn't argue that a given logical system can't be used to prove theorems. It only argues against the claim that such a system can be proven to be both complete and internally consistent.
>The demarcation is between theorems whose truth we can prove (and can prove that we can prove), and theorems we're not sure about but that may fall on the far side of the demarcation line.
Can you clarify that a little bit? It seems like you define the demarcation in terms of itself.
Sorry -- I meant that one side of the demarcation line are theorems that we can prove (and prove that we can prove). The other side are two entities -- theorems that are true but not provable, and theorems we don't know enough about to classify.
I emphasize that the idea of a demarcation line is itself a matter of debate. Some theorems, by virtue of their unprovability, affect others and cast them into question, which blurs the very notion I'm discussing.
I shouldn't have made the demarcation line seem so certain. Even saying that such a thing exists risks giving it more credence than it may deserve -- except as a matter of discussion.
I'm not sure the GIT applies here; my understanding is that the GIT says that any formal system (i.e. sentences derived by repeated application of rules of derivation to axioms) that is able to represent Peano arithmetic has no deterministic decision procedure, due to the ability of a sentence to self-reference. Is that accurate, or is GIT broader than I thought?
I mentioned the GIT as an example of the risk of invalidation by self-reference, but whether they formally apply depends on how we state our hypotheses. They certainly apply in some degree to any effort to define strict deterministic criteria by which one might conclude that the universe is a simulation, carried out from within that same universe. The more formal and detailed the criteria, the more likely we are to be trapped by self-reference.
> Is that accurate, or is GIT broader than I thought?
I think the ambition of the simulation question offers enough similarity to a complex logical system that one needn't assume any greater scope for the GIT than they're known to have. I'm not claiming that the GIT obviously applies, only that the possibility needs to be taken into account on a list of issues that attend this question.
Also, Seth Lloyd turns the question [0] on its head and suggests that it's silly to call the universe anything but a computer, since it contains systems that are Turing Complete. What is it computing? Its own history / dynamics.
That said, it is different to ask if we're in a simulation vs. in a computer---the word simulation makes one think of some programmer compiling some source code and executing some program: it entails agency.
Good point. When we ask whether this is all a simulation, I think most people assume an agency, a wizard behind a curtain. But the question doesn't necessarily imply that form.
But think -- until about 50 years ago we wouldn't have necessarily thought of the universe as a computer, because we didn't think of things in terms of computers until recently. My point is this conversation and its terminology is only possible because of the present state of technical evolution -- in us and in our machines.
> What is it computing? Its own history / dynamics.
Yes, agreed. On a scale smaller than the universe, we can examine evolution by natural selection as an example of an algorithm that works in a reliable, predictable way (predictable only in a statistical sense). I prefer evolution as an example of a natural algorithmic model because it's easy to express, but produces so much complexity.
The logic should more precisely be: the dynamics of the universe, i.e. the things that physics allows, include the operation of Turing machines, so the physical laws are at least Turing complete. Thus, the universe is, from a computational point of view, a computer.
That's sort of like saying the universe is a table because its constituent parts, laws, etc make tables possible. I guess it's interesting, not very interesting, though.
A) Goedel sentences involve some form of mathematical self-reference/recursion within the proof system.
B) Empirical/physical science doesn't deal with conclusive proof from axioms, anyway: it deals mainly in probability and evidential falsification.
Ergo, Goedel's Theorem doesn't really tell us there's some great, unlearnable Fact About The Universe we can never figure out, since science isn't a proof system subject to the Theorem in the first place. We can probably put very good probability bounds (so to speak) on anything about real life that we actually need to know.
> Empirical/physical science doesn't deal with conclusive proof from axioms, anyway: it deals mainly in probability and evidential falsification.
Of course it does. Modern physics requires rigorous mathematics, and the latter is certainly influenced by the incompleteness theorems.
If I shape a theory about the universe in modern physics, if my theory doesn't have a mathematical form it won't be taken seriously for very good reasons. Therefore modern physics, the most scientific of sciences, requires a very high level of mathematical reasoning. Therefore the incompleteness theorems need to be taken into account.
Even something as trivial as the Turing halting problem is known to be insoluble because of its connection to the incompleteness theorems.
Quote: "The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem."
This is just one of many examples in which the incompleteness theorems affect the outcome of a pedestrian scientific issue.
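For readers who haven't seen it, the connection between the two can be sketched in a few lines of code: hand me any candidate halts() function and I can construct a program it misjudges, which is the diagonal trick the two results share. (halts() here is a stand-in for illustration, not a real library call.)

    # Diagonalization sketch: for ANY candidate halts(f) you supply, build a
    # program that does the opposite of whatever the candidate predicts.
    def make_paradox(halts):
        def paradox():
            if halts(paradox):    # candidate says "halts"...
                while True:       # ...so loop forever
                    pass
            return None           # candidate says "loops", so halt immediately
        return paradox

    candidate = lambda f: True    # a (necessarily wrong) attempt at halts()
    p = make_paradox(candidate)
    print(candidate(p))           # True -- yet p() would never halt, so the
                                  # candidate is wrong on the very program built from it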
> ... since science isn't a proof system subject to the Theorem in the first place.
This is false. Beyond the examples given above, many aspects of present scientific theories are influenced, directly or indirectly, by the incompleteness theorems.
Your reversing of the relationship between the Halting Problem and the Incompleteness theorem (based on the quote you provided) makes me doubt the rest of your conclusions.
It's not that the Incompleteness Theorem tells us about the Halting Problem, but rather, that the Halting Problem tells us about the Incompleteness theorem.
Also, we have no reason to think that the universe satisfies some of the requirements of the Incompleteness Theorem (or even things like the Halting Problem).
Both require some kind of unbounded or self-referential behavior, which we might not have in the universe.
> Your reversing of the relationship between the Halting Problem and the Incompleteness theorem ...
Here's what I said: "Even something as trivial as the Turing halting problem is known to be insoluble because of its connection to the incompleteness theorems."
That doesn't reverse the relationship between the two, it identifies that there is a relationship. That's not controversial.
> It's not that the Incompleteness Theorem tells us about the Halting Problem, but rather, that the Halting Problem tells us about the Incompleteness theorem.
The Turing halting problem and the incompleteness theorems are related. Without the GIT, we would not be aware that the Turing halting problem is insoluble, a statement that can be made about any number of similar issues.
> Also, we have no reason to think that the universe satisfies some of the requirements of the Incompleteness Theorem (or even things like the Halting Problem).
You mean, apart from the fact that the question is whether the universe is a vast computer simulation, by definition subject to the halting problem, which lacks a solution because of the GIT?
> Both require some kind of unbounded or self-referential behavior, which we might not have in the universe.
We're not discussing the universe's behavior, we're discussing the feasibility of establishing whether or not it is a simulation, and do this from within the universe. That is the very definition of self-referential.
> "requires a few approximations in order to fit the problem into your computer"
I think this is where the source "Are we living in a simulation?" comes from.
It's interesting to note that the way a model behaves in a computer and certain measurements share some similarities (the lattice spacing). But to go from similarities to "we are living in a simulation" is just a jump too far.
Let's try to keep some clear distinction between science (measurements) and philosophical conjecturing, so that we know when we are doing the former or the latter.
Yes, I'm on board. The point is that as far as we understand currently there's no "good reason" for spacetime to be discretized, save for it being run in someone else's computer.
That by no means indicates that it is in a computer. Indeed, as far as we know, spacetime is not a grid! So if the hypothesis is that the computer we are alleged to live in is in any way like computers we ourselves build (e.g. finite memory, finite processing speed, discrete, etc.), this paper is a step towards falsifying that hypothesis, not confirming it!
It's probably better to resist the temptation to ascribe meaning to models. The model is the meaning, nothing more or less. If we observe these artifacts, it means that's a property of our universe. Really, we're completely locked within that system and can't say that it means anything else.
But I'd argue that in saying we could be a simulation, we're ignoring the qualia-of-consciousness issue. So I'd say that if we can be a simulation, we can just as well be one of those mathematical entities in Platonic space. The universe seems to be written in math; perhaps all of this is just what it's like to be one of those Platonic ideals.
There's not much use in speculating in that which cannot be falsified.
Part of me is becoming tired of "brain in a jar" lines of reasoning, because there can be no other side of the discussion by default. If there is, I'm a Citizen who would like to know more.
What sides are you talking about? Either the line of reasoning (whichever you're talking about) has flaws or it doesn't.
If you're trying to say "it leads nowhere", I sort of agree. But it only "leads nowhere" once you get to the acceptance that truth is subjective. Getting that far is worth it and can change the way a person thinks about many other things.
Yes, I'm trending toward "it leads nowhere". To me, it's like trying to prove a negative: "prove you aren't a brain in a jar, receiving electrical//chemical inputs instead of walking around in a body". Well, right down to it, I can't just as much as you can't.
> So truth is subjective, knowing that is worth it, and getting that far is awesome.
Grandpa Tom is playing "Mario" on the beautiful "iMac" his grandson gave him for Christmas. He is just a regular grandpa in his 70s who is really new to this computer world. After playing Mario for a while, he starts wondering how a key on the keyboard makes Mario move. He starts putting himself inside the Mario character and seeing things around him. What he sees is a world where he just has to "run from dangers, get powers, survive, and save the queen". Mario is so involved he doesn't even know that there is such a thing as a keyboard and a monitor outside his system. Mario thinks he just has to move and perform certain actions to accomplish his goal. That's all.
After so many hours of the game, Mario suddenly starts thinking: "Am I doing things on my own, or am I being controlled? How the hell am I going to find out? The environment around me just extends in whichever direction I go. But not if I die. So what is this sorcery!"
In this whole story, Mario is each one of us. The next obvious question is, how do you know? Neither does Mario. Some of these Marios went beyond the simulated environment (non-materialistically, because there is no matter that runs this system, just like the CPU interactions above...) and figured all this out. We can decide to "believe" or to "find an answer ourselves".
Now, a programmer knows an answer: I wrote this goddamn Mario program, and I did not give him enough instructions (C/C++/whatever) to think, so he will never know what the hell is going on. But, given intelligence to think on his own, he might be able to figure out things about his environment, though he still can't see the CPU<->Memory<->GPU<->Display<->Keyboard interactions. He can, only if... ? So, who are we, better than Mario? Beings who can think on their own, right? What more do you want?
I don't quite understand how we propose to identify those anomalies when we have nothing to compare them to. Maybe the weird behaviours we observe in quantum physics for instance are the result of some optimisation/bug in the simulation, but how would we prove that?
How would a video game character understand and then prove he's in a simulation if the simulation itself didn't program him to do that?
It kind of reminds me of the people looking for life on other planets by searching for worlds similar to ours. It's of course a reasonable approach but in the end we have no idea if statistically speaking the presence of water or a certain temperature or atmosphere gives higher probabilities for life to emerge. We're just extrapolating from a sample of one.
Mathematics is like a fractal, given some axioms it explodes into an infinite set of true equations. No one has to observe math for it to exist. I think our universe(s) could be the same way, given some natural laws, or equations, everything we know is defined to take place. Why would there have to be anything 'running' the simulation.
Even if it is a simulation, in the words of René Descartes: I think, therefore I am.
My interpretation of it is, I might be in the Matrix but I exist somewhere. I might be a butterfly in another dimension hooked up to a Matrix-like simulation to be in a human body and experience Earth. Anyhow, I exist.
In my naivety I think it will be impossible to prove that we live in a simulation. Even if we could simulate a universe, we cannot say we live in one. And if we find bugs in our current universe, we cannot say whether it is a bug or our model is just wrong.
The Universe is a simulation, so what programming language/galactic cloud does it run on? Haskell for typing and correctness, Erlang for 'just fail', Ruby for POLA (God's, that is)...
Unless the question is to ask whether the universe is a poorly written simulation with edge cases that differ from the real thing. In that case, the question is very interesting.
The main objection is that the way edge cases would "differ from the real thing" depends on what their "computers" are like which may not be anything like our computers. In fact there is no reason to believe they are similar at all.
To take that point further, not only may their computers be nothing like our computers, but the whole point of the article is that our natural laws and the basis of our mathematics are part of the simulation. That implies that the host (to borrow virtualization jargon) world's computers may work according to rules nothing like our own. Certainly you can't make statements like "quantum effects between particles would otherwise require an infinite amount of memory". You can't make any statements about the host world, any more than we can make statements about the nature of a god, if one existed.
One approach: you look at what the physical laws appear to be, and then ask "What weird edge cases would we expect to see if and only if this were running in a particular type of simulation?" That's what the paper linked to does. It's not perfect, but it's something.
Actually, an unspeakably crappy simulation may be indistinguishable from the real thing if it doesn't "fight fair". Just have the simulation delete any thoughts by the participant(s) that they have convincingly observed anything amiss.
The simulation might not quite fit in 640KB, but we might be rather disturbed at just how trivial a sneaky simulation of our world would be.
What is the real thing? Is there a grid at the bottom of the real universe or not? We can calculate what a world with no grid at the bottom would be like. Is our world like that world?
> Is there a grid at the bottom of the real universe or not?
Wait -- we don't get to specify what criteria can be used to detect a simulation, from within the simulation. This is a classic case where Gödel's incompleteness theorems rule out any such determination.
Very briefly, Gödel's incompleteness theorems say that, for any sufficiently complex system, there are true statements that cannot be proven from within that system, i.e. using the system's methods. This has obvious applicability to the question of a simulated universe and our ability to detect this fact from within the simulated universe.
In short, this entire conversation is pointless except to say that it's pointless. Because of what Gödel proved, we cannot ever determine that we're in a simulation, from a perspective inside the simulation.
> The minimum evidence that we are in fact in a simulation would be observing a violation of physics.
Not at all. A "violation of physics", by which I assume you mean a discrepancy between our picture of the universe and the real universe as revealed in experiment, only tells us that our model is wrong.
There have been any number of "violations of physics" over the decades. We believed that a luminiferous ether served as a medium for light waves -- wrong. We believed that neutrinos didn't have mass -- wrong. We believed that all mass/energy was accounted for by the descriptions in the Standard Model -- very wrong (only 4% is so described, the rest is dark matter and dark energy, with no present explanation).
The simplest explanation for these discrepancies, consistent with lex parsimoniae or "Occam's razor" is that our model isn't good enough and needs revision, not that the universe is a simulation.
It's equally logical (and not very logical at all) to argue that a perfectly running universe, with no anomalies, would constitute evidence for a simulation, on the ground that any self-respecting superbeing able to craft an entire universe, would surely test his code before launching it.
When someone posts something like this, do they honestly believe everyone else having this debate, including serious physicists, are just stupid because they can't see the brilliant insight they have posted?
It's a typical post by someone whose knowledge of physics is limited to the content of the Discovery Channel, whose physics scripts are worded in just this way.
He's just starting a conversation, and while I wouldn't advocate handing out participation ribbons, you're just trashing someone that had a thought and expressed it--you and gfodor are buddying up to one another over how worthless that thought is.
Spare us your revelry.
I'm as big a fan of a dirty flame war as anyone. I find complaints about HN's critical culture to be tiring and self-defeating.
But if you're going to be negative, don't give each other high fives about it.
Edit: I suppose this is directed more at you, lutusp, because gfodor seems to be posting criticism in earnest.
I posted what I did so nontechnical readers wouldn't be misled. Would you object if I similarly addressed someone who claimed that evolution must be wrong because it's not in the Bible?
> ... you're just trashing someone that had a thought and expressed it ...
Indeed, that's what I did, but it was because of what was expressed, not the fact of its expression.
Surely it has by now occurred to you that you're defending this person's right to say whatever he chooses, while denying me the same right? Or did that escape your attention?
> Oh, get over yourselves, both of you.
Under the circumstances, it's time for you to get over yourself. The OP's post, and my reply, are perfectly symmetrical -- one demands the other.
You've just demonstrated that you're a study in hypocrisy and have no grasp of how you sound. Under the circumstances, what's my incentive to continue?
It would be a leap of logic to assert that we're in a simulation after observing a violation of physics. The first possibility we'd have to consider is that our understanding of physics is flawed. After that consideration (or maybe as a part of that consideration, depending on how you look at it), we'd have to consider the possibility that the laws of physics are not constant.
But why would we jump immediately to the universe being a simulation?