I don't think it is a "funny" notion of inclusion, but rather that once ℚ is defined, ℤ is silently redefined to be that one subset of ℚ that was used in the construction.
You can't do this in general, and everywhere, so it's still not rigorous. Consider the embedding of the nats into the reals, where the reals are defined as a subset of the binary sequences (e.g. the usual infinite binary expansion for (0,1], choosing the expansions that do not terminate in all zeros, prefixed with an Elias-gamma-coded, zigzag-encoded integer). But the usual definition of a sequence is as a function from the nats; so are you again going to redefine your reals in terms of that?
In the end you still need to maintain a correspondence between the embedded copy and the original set, just as in the usual approach, and treat the subset notation as a shorthand that requires you to "lift" or "wrap" through the correspondence wherever necessary.
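For concreteness, here is a sketch (in Python, with illustrative names) of the integer prefix mentioned above: zigzag-encode the signed integer into a non-negative one, then Elias-gamma-code it into a self-delimiting bit string. The +1 shift is an assumption on my part, since plain Elias gamma only codes positive integers.

```python
def zigzag(n: int) -> int:
    # Map a signed integer to a non-negative one: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    return 2 * n if n >= 0 else -2 * n - 1

def elias_gamma(n: int) -> str:
    # Elias gamma codes positive integers; shift by 1 so that 0 is representable.
    n += 1
    b = bin(n)[2:]                      # binary representation, always starts with '1'
    return "0" * (len(b) - 1) + b       # prefix with one zero per bit after the first

def encode_integer_prefix(n: int) -> str:
    # Self-delimiting bit-string prefix for the integer part of a real,
    # as in the construction sketched in the comment above.
    return elias_gamma(zigzag(n))
```

Because the gamma code is prefix-free, the integer part can be unambiguously peeled off the front of the infinite binary sequence before reading the fractional expansion.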
I think you misunderstood the parent comment. They're not talking about defining the reals by setting up a mapping where each real number corresponds to a unique natural number. They're talking about defining the reals by setting up a mapping where each real number corresponds to a unique mapping from the natural numbers to {0, 1} (i.e. a unique binary sequence). The set of all binary sequences is in bijection with the power set of the natural numbers, which is uncountable.
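The bijection between binary sequences and subsets of the naturals is just the characteristic function; a minimal finite sketch (truncated to a fixed length, since we can't materialize an infinite sequence):

```python
def subset_to_bits(subset: set[int], length: int) -> list[int]:
    # Characteristic function: bit i is 1 iff i is in the subset.
    return [1 if i in subset else 0 for i in range(length)]

def bits_to_subset(bits: list[int]) -> set[int]:
    # Inverse direction: collect the indices of the 1 bits.
    return {i for i, b in enumerate(bits) if b == 1}
```

So each binary sequence "is" a subset of ℕ and vice versa, which is why the two sets have the same (uncountable) cardinality.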
I'm not a set-theorist, but a set-theorist friend of mine once taught me that you can turn a countably-infinite set (such as the integers) into an uncountably-infinite one (like the reals) by applying the 'power set' operation (the set of all subsets).
Not heaps sure what this really means with respect to whether 1 the integer is really completely related (as in, equal, or the exact same thing) to 1.0 the real, though. Kinda seems like it might still need a bit more information to fully identify a real, even when it happens to coincide with an integer?
The integer part is easy, since we already have the integers. Once you have D = [0,1), you can define R = Z × D. That is to say, this definition of R separates out the integral and fractional components of every real number.
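A quick sketch of that decomposition: with D = [0,1), the integer component is the floor, and the fractional component is what remains. (Note this differs from `math.modf`, which keeps the sign on the fractional part.)

```python
import math

def split_real(x: float) -> tuple[int, float]:
    # Decompose x as (n, d) with n an integer and d in [0, 1), so x == n + d,
    # matching the R = Z x D definition above.
    n = math.floor(x)
    return n, x - n
```

For example, `split_real(-2.25)` gives `(-3, 0.75)`: the fractional part stays in [0,1) even for negative reals, which is what makes the pairing unique.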
There’s a very deep problem with that—every time you invent a “superset”, do you then have to redefine the subset to be a subset of that “superset”?
There is an infinite chain of supersets of the rational numbers, real numbers, etc.
Think of it like this… we can define 0, and then define 1 as the successor. Repeating this, we can have a definition for every finite number. But we cannot do this the other way around. We cannot start with ∞, define the predecessor to ∞, and then somehow get back to 0.
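One concrete way to realize "define 0, then build everything by successor" is the standard von Neumann encoding (my choice of encoding, not something the comment commits to), where each numeral is the set of all smaller numerals:

```python
def zero() -> frozenset:
    # von Neumann encoding: 0 is the empty set.
    return frozenset()

def successor(n: frozenset) -> frozenset:
    # successor(n) = n ∪ {n}, so each numeral is the set of its predecessors.
    return n | frozenset({n})

one = successor(zero())   # {∅}
two = successor(one)      # {∅, {∅}}
```

Counting up from 0 is a finite process for every numeral; there is no analogous way to start from "∞" and reach 0 by taking predecessors, which is the asymmetry the comment points at.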
In other words, if you want to work backwards and say that smaller sets (like the natural numbers) are a subset of the bigger sets (like complex numbers), then you have to pick a “biggest set” containing all numbers, which is unsatisfactory. Somebody always wants a bigger set.
> There’s a very deep problem with that—every time you invent a “superset”, do you then have to redefine the subset to be a subset of that “superset”?
I have thought about this too, and I'd initially agree with you. But at some point it struck me that mathematical history is not all that dissimilar from this. Put in very rough terms:
At first humans discovered/invented numbers (i.e. the counting numbers); these started at one, the first number. Later on, at some point we had to go back and realize that there was a zero before one, which "silently" redefined the first number as zero and created the natural numbers as the modern set-based ℕ.
edit: adding this alternative rendering of my intended comment triggered by a condescending reply: "mathematics silently redefines stuff all the time. deal with it"
There is a different answer here which is more satisfactory… which is to use notions of equality other than set-theoretic equality. Which is what the article is talking about.
"Nothing" as a concept always existed of course. But it wasn't considered a number, generally. Certainly no one counted "nothing, one, two", and even today natural language doesn't include "nothing" or some equivalent as a numeral noun.
You need to be careful about the phrase “considered a number” since I believe one, or unity, was also not considered a number by some ancient civilisation - i.e. a number was only multiple copies of unity.
[I believe this YouTube video goes into more detail in its discussion of why 1 was not considered Prime in the ancient world: https://youtu.be/R33RoMO6xeA]
That's where I'm quite skeptical. Imagine you are in charge of trade or rationing important village resources in the winter. It just seems to me almost necessary that people would have a way to symbolically indicate that all the sheep are gone. As opposed to just not having any symbol for that at all.
Zero entered western writing systems through India, with limited usage in math before that. It seems like it was invented/borrowed as part of switching from additive numbers (such as Roman numerals) to positional numbers.
> In other words, if you want to work backwards and say that smaller sets (like the natural numbers) are a subset of the bigger sets (like complex numbers), then you have to pick a “biggest set” containing all numbers, which is unsatisfactory. Somebody always wants a bigger set.
Exactly what we did in the Analysis I course I attended during my bachelor: defined the reals axiomatically, and ℕ as the smallest inductive(?) subset containing 0.
Satisfactory or not, it worked well for the purpose. And I actually liked this definition, if anything because it was original. Mathematical definitions don't need to have some absolute philosophical value, as long as you prove that yours is equivalent to everyone else's it's fine.
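If I remember that definition correctly, it can be written out as an intersection (this is my reconstruction of the standard "smallest inductive subset" definition, not necessarily the exact one from that course):

```latex
% N as the smallest inductive subset of R containing 0:
% the intersection of every subset of R that contains 0
% and is closed under x -> x + 1.
\[
  \mathbb{N} \;=\; \bigcap \{\, S \subseteq \mathbb{R} \;:\; 0 \in S \ \wedge\ \forall x\, (x \in S \rightarrow x + 1 \in S) \,\}
\]
```

With this definition the subset property ℕ ⊆ ℝ holds by construction, at the cost of making ℝ, not ℕ, the primitive object.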
> as long as you prove that yours is equivalent to everyone else's it's fine
That’s exactly the point I was making in the first place.
“Unsatisfactory” just means “unsatisfactory” in the sense that some mathematicians out there won’t be able to use your definitions and still get the subset property. This means that you are, in practice, forced to deal with the separate notions of “equivalence” and “equality”. Which is what the article is talking about—all I’m really saying here is that you can’t sidestep equivalence by being clever.
Another way to see it is to prepend every mathematical text involving ℤ with "for every ℤ such that [essential properties omitted]", so that you can apply it to several definitions of ℤ, rather than awkwardly redefining it after the fact. This is the mathematical equivalent of "programming to the interface", and is actually how mathematics is often formalized in modern proof assistants: as huge functors that abstract these details away.
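The "programming to the interface" analogy can be sketched in Python with a structural `Protocol` (all names here are illustrative, and real proof assistants carry far richer interfaces including the axioms themselves):

```python
from typing import Protocol, TypeVar

T = TypeVar("T")

class IntegerLike(Protocol[T]):
    # The "essential properties" a candidate Z must provide; anything
    # written against this interface applies to every construction of Z.
    def zero(self) -> T: ...
    def one(self) -> T: ...
    def add(self, a: T, b: T) -> T: ...
    def neg(self, a: T) -> T: ...

class MachineInts:
    # One concrete "model" of the interface, backed by Python's built-in ints.
    def zero(self) -> int: return 0
    def one(self) -> int: return 1
    def add(self, a: int, b: int) -> int: return a + b
    def neg(self, a: int) -> int: return -a

def double(z: IntegerLike[T], a: T) -> T:
    # A "theorem" stated against the interface, not a particular model:
    # it works for any structure satisfying IntegerLike.
    return z.add(a, a)
```

Swapping in a different construction of ℤ (pairs of naturals, von Neumann sets, …) leaves `double` untouched, which is exactly the point of abstracting the definition behind its essential properties.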
It makes more sense to define it the other way around:
ℚ = ℤ ∪ ℚ′, where ℚ′ is the set of all rational numbers that aren’t integers.
Redefining the smaller set can’t work because there may be more than one larger set, e.g. split-complex numbers vs. regular complex numbers. But you can define a larger set to strictly extend a smaller set.
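A rough sketch of that extension idea in Python (a toy class of my own, not a standard library type): the rational constructor normalizes, and whenever the value is integral it hands back the pre-existing `int` itself, so the integers are literally members of the rationals rather than copies of them.

```python
from math import gcd

class Rat:
    # A rational built on top of int: construction returns the existing
    # int whenever the value is integral, so Z really is a subset of Q here.
    def __new__(cls, num: int, den: int = 1):
        if den == 0:
            raise ZeroDivisionError("zero denominator")
        g = gcd(num, den)
        num, den = num // g, den // g
        if den < 0:                      # keep the sign on the numerator
            num, den = -num, -den
        if den == 1:
            return num                   # an integer stays an int, not a wrapper
        obj = super().__new__(cls)
        obj.num, obj.den = num, den
        return obj
```

So `Rat(4, 2)` is the plain integer `2`, while `Rat(1, 2)` is a genuinely new object: the larger set is defined as the union of the old elements and the new ones, just as in the ℚ = ℤ ∪ ℚ′ formulation above.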