I am not a mathematician, but in the discrete domain of integers:
1) you have two functions; essentially we look for where the two 3D surfaces intersect, if z is taken as the result of each equation.
2) the integers "are distinct", so (0,0) and (1,1) are out, plus (2,2), (3,3), etc. Basically a whole diagonal line in the intersection of both 3D surfaces is excluded (why though, to what useful end?)
3) The starting points for the ranges are therefore (0,1) and (1,0)
4) 1^y is always 1, and x^0 is always 1, so there is a constant starting value of 1 on both axes
5) but x^y will always be larger than y^x, for y>x and x^y > y^x
(prove this by taking the first derivative to get rate of change. Do you use Laplace z domain for discrete, instead of s for continuous?)
6) and the converse to 5.
So once you have found one solution, you know to stop looking, the two surfaces keep diverging from each other.
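The claim above can be checked directly by brute force. Here's a quick sketch (mine, not from the thread) that searches for distinct integer pairs with x^y = y^x over a small range:

```python
# Brute-force check of x^y == y^x over distinct small non-negative integers.
# This just confirms which distinct pairs actually satisfy the equation;
# it turns out (2,4) and (4,2) are the only ones.
solutions = [
    (x, y)
    for x in range(6)
    for y in range(6)
    if x != y and x**y == y**x
]
print(solutions)   # [(2, 4), (4, 2)]
```

Note that the search finds only one distinct pair (up to order), which is consistent with stopping after the first solution, but the "keep diverging" claim itself needs more care, as the replies below point out.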
Why resort to the continuous domain to solve a problem in the discrete? Is this even a valid approach?
eg Is there a formal proof that says the integers strictly follow the same rules as the continuous domain, just as a subset? I'm interested.
Does this come under group theory, a set with an applied operation?
As I said, I am not a mathematician; I am an electrical engineer, so probably one of the worst abusers of pure math in a formal sense, but the more I think about this the more questions it raises in my thinking.
> 5) but x^y will always be larger than y^x, for y>x and x^y > y^x (prove this by taking the first derivative to get rate of change. Do you use Laplace z domain for discrete, instead of s for continuous?)
This is false, though! Or, it's almost always true, but there are some exceptions. We of course have an exception at (2,4) and (4,2), where they're equal, and of course at (n,n), where they're obviously equal. And of course 0's and 1's will cause problems for you.
But also, most interestingly, there's an exception at (2,3) and (3,2)! 2<3, and yet, 3^2 > 2^3. Any proof has to account for this!
(There's a fair bit I could say about this exception, but perhaps I should just let you think about it instead. :) )
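The exceptions are easy to see numerically; a quick check (mine) for x = 2:

```python
# The exceptional pairs for x = 2: the ordering of x^y vs y^x flips around y = e.
print(2**3, 3**2)   # 8 9   -> 3^2 > 2^3, even though 2 < 3
print(2**4, 4**2)   # 16 16 -> equal
print(2**5, 5**2)   # 32 25 -> from here on, x^y wins
```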
> Why resort to the continuous domain to solve a problem in the discrete
Because oftentimes this is easier. (In a number of cases it's much easier.) Also... you did this? Like to the extent that your step (5) is valid, you seem to have "proven" it by using the first derivative. That's a continuous tool! I'm not sure what you're talking about with the "Laplace z domain". Or are you using "first derivative" to mean "first difference", or something?
> is this even a valid approach?
Yes, why wouldn't it be? In fact this is actually one of the big reasons for introducing larger number systems, that they let you prove things about the original smaller number system. The rational numbers let you prove things about the integers; the complex numbers let you prove things about the reals; the real numbers and p-adic numbers let you prove things about the rationals; etc. (The integers let you prove things about the whole numbers!)
Proving statements about integers by means of complex numbers is a whole field in itself, namely analytic number theory. And in the case of Goodstein's theorem, one famously proves a statement about the whole numbers by passing to the ordinals...
Apropos: difference calculus is a fascinating topic. It has lots of results that are analogous to differential calculus.
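One analogy worth seeing concretely (my example, not from the thread): in difference calculus, the falling factorial plays the role that the power x^n plays in differential calculus.

```python
# In difference calculus the falling factorial x_(n) = x(x-1)...(x-n+1)
# is the analogue of x^n: the forward difference Δf(x) = f(x+1) - f(x)
# of x_(3) is 3 * x_(2), just as d/dx x^3 = 3 x^2.
def falling(x, n):
    out = 1
    for k in range(n):
        out *= x - k
    return out

for x in range(5):
    delta = falling(x + 1, 3) - falling(x, 3)
    assert delta == 3 * falling(x, 2)
print("Δ x_(3) = 3 x_(2) holds for x = 0..4")
```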
Yes, p-adic numbers are also really interesting to look into.
It's also interesting to re-derive much of analysis (like limits and derivatives etc) in the context of the dual numbers (https://en.wikipedia.org/wiki/Dual_number):
> They are expressions of the form a + b * ε, where a and b are real numbers, and ε is a symbol taken to satisfy ε^2 = 0 with ε ≠ 0.
You can sort-of pretend that ε is an infinitesimal, but with a sound theoretical footing.
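A minimal sketch of dual-number arithmetic (my own illustration, supporting only + and *): evaluating a polynomial at a + 1·ε returns the value and the derivative at a in one pass, which is forward-mode automatic differentiation.

```python
# Dual numbers a + b*ε with ε^2 = 0. Evaluating f(a + ε) yields
# f(a) + f'(a)*ε.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b   # real part, ε coefficient

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + bε)(c + dε) = ac + (ad + bc)ε, since ε^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

r = f(Dual(5.0, 1.0))
print(r.a, r.b)   # 86.0 32.0, i.e. f(5) and f'(5)
```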
This is an oversimplification. Nonstandard analysis, via the hyperreals, is one way of adding infinitesimals to the reals, and definitely not the one I'd recommend for all use cases (although going from context, they may be appropriate here).
There are plenty of ways to make a number system that adds infinitesimals to the reals: yes, the hyperreals, and I'd count the dual numbers among them, but there's also e.g. the surreal numbers: https://en.wikipedia.org/wiki/Surreal_number
So, why am I kind of down on the hyperreals? Well, thing is, as I understand it, nonstandard analysis isn't really the study of the hyperreals; it's the use of the hyperreals to study the reals. I mentioned above that one big use of passing to a larger number system is that it reflects on the smaller number system; however, as best I can tell, the hyperreals are pretty much used purely in this way. They're used pretty much entirely as a tool for proving statements about the real numbers, rather than an object of study in their own right.
And there's a reason for that; people often talk about "the" hyperreals, but actually, they're not uniquely defined. There's not really the system of hyperreal numbers, so much as there are potentially different systems of hyperreal numbers, which is annoying, but not if you only want to use them as a tool to study the reals, because their (relevant) relation to the reals is all the same. It's a bit icky.
So yeah, if you want to do analysis or calculus -- which might be the case, given that the earlier context was dual numbers, and that's what one would typically use dual numbers for -- then sure, use hyperreals. But if you just want to play around with a nifty number system that includes both reals and infinitesimals... eh, they're not great. You're likely to have more fun with the surreals.
(More generally, of course, it's worth remembering that there's no need to stick to well-known systems of numbers... you can invent your own! Like, if for some reason you need infinitesimals, but you don't want them to square to zero like in the dual numbers, but you also don't want all the stuff that's in the hyperreals or surreals, there's nothing wrong with using R[ε] (or R(ε), or other variants depending on exactly what you're doing) to get a sort of minimal reals-with-infinitesimals...)
[Edit: Is there no way to do bold anymore? Those R's in the above paragraph were supposed to be bold, to indicate the real numbers...]
> [if] you also don't want all the stuff that's in the hyperreals or surreals, there's nothing wrong with using R[ε] (or R(ε), or other variants depending on exactly what you're doing) to get a sort of minimal reals-with-infinitesimals...)
> [Edit: Is there no way to do bold anymore? Those R's in the above paragraph were supposed to be bold, to indicate the real numbers...]
I believe you can do bold in Unicode. (see: 𝐛𝐨𝐥𝐝) As far as I know HN has never supported bold as a markup style.
Standard number sets can be done the same way; I would represent the reals as ℝ. My standard method is to go to the wikipedia page for "blackboard bold" and copy the letter I want.
I'd like to know more about the sets you refer to, ℝ[ε] and ℝ(ε), but I don't recognize them. Do they have names I can search for?
> I believe you can do bold in Unicode. (see: 𝐛𝐨𝐥𝐝) As far as I know HN has never supported bold as a markup style.
Oh, I think you're right, I'd forgotten. Grr, HN's limited subset of Markdown is quite annoying sometimes. Well, I'm going to be lazy and just write "R".
> I'd like to know more about the sets you refer to, ℝ[ε] and ℝ(ε), but I don't recognize them. Do they have names I can search for?
No, they don't, that's part of my point -- that you don't need to use recognized systems with names, you can use the usual constructions (or unusual constructions...) to make your own in the way that mathematicians always do. I mean I guess they sort of have names in that the notation would be pretty understandable to most everyone, although pronouncing it is annoying in that they'd both most typically just be pronounced "R adjoin epsilon", although I guess you could say "ring-adjoin" or "field-adjoin" to disambiguate. But note that there's plenty of other variants one could make as well, not just these.
Basically go learn some abstract algebra, is what I would say. Or go read about ring adjunction (and polynomials) or field adjunction (and rational functions).
Btw, I wouldn't describe these as "sets", that's not really the appropriate word to use here. When we talk about systems of numbers, we are, well, talking about systems, or algebraic structures -- rings, fields, ordered rings or fields, topological rings or fields, etc. To say "set" implies that what is important is the elements of these things, the contents; but these elements have no meaning on their own, they're given meaning by the structure -- the permitted operations, relations, etc. (Addition, multiplication, negation, less than or equal to...) Formally, an algebraic structure is a tuple, with the base set being just the first element of that tuple, even if we typically abuse notation and use the same symbol to refer to the set and the structure on that set.
"The Z-transform is a mathematical tool which is used to convert the difference equations in discrete time domain into the algebraic equations in z-domain."
I was taught in engineering math that the z domain is the discrete-time equivalent of the Laplace s domain, which is continuous and also used by EEs in analog.
z^-1 (i.e., z^(-1)) means the last sample. It is commonly used in digital signal processing, FFTs, etc.
The s domain is used for (from memory) the operator e^-jw, which is, for electrical engineers, the transform for use with sine waves, such that impedance is 1/(sC) for capacitance and sL for inductance.
The z domain has useful properties like "The differentiation in z-domain property of the Z-transform states that multiplication by n in the time domain corresponds to differentiation in the z-domain."
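The z^-1-as-delay idea can be sketched in a couple of lines (my illustration, not from the thread): a first-order difference equation whose transfer function is written with z^-1.

```python
# z^-1 as a one-sample delay: the difference equation
#   y[n] = a * y[n-1] + x[n]
# corresponds to the transfer function H(z) = 1 / (1 - a * z^-1).
# For a unit impulse input, the output is the impulse response a^n.
a = 0.5
x = [1.0] + [0.0] * 7           # unit impulse
y = []
prev = 0.0                       # y[n-1], the z^-1 register
for sample in x:
    cur = a * prev + sample
    y.append(cur)
    prev = cur
print(y)   # [1.0, 0.5, 0.25, ...], i.e. 0.5^n
```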
I am sure I have made some technical mistakes in the above, but it is how I remember it and they don't impact my ability to apply it for my limited EE needs.
As to the question about whether the approach is valid: intuitively I see this, but I wonder whether there is a formal proof of some kind, or is it taken as given?
Oh, I see! So it's the Fourier transform (in the abstract sense) from the integers to the circle group, except that then you extend to all of C (well, possibly minus the origin). Interesting! Yeah, as someone previously unfamiliar with it, it would have been substantially clearer had you referred explicitly to applying the Z-transform, rather than switching domains, which is the sort of thing that will only make sense to someone who already knows about this.
So wait does that mean you weren't actually sure if this approach to that step would work? I assumed you were saying you had a proof, not just outlining an approach you thought would work. (I mean, obviously you didn't have a proof of the whole thing as the overall statement is false, but individual steps might have worked.)
But I have to note -- even if the proof works, then if you're applying the Z-transform, then you are once again not sticking to the realm of the discrete! Complex numbers are a continuum matter. So that approach still doesn't yield an integers-only proof!
> As to the question about the valid approach, intuitively I see this, but wondering if there was a formal proof of some kind, or is it taken as given?
I'm not really sure how to answer this -- what would a formal proof here even consist of? It's easy enough to do it in any instance, but the problem is, how would you even formally state the general principle?
Like you could do large classes of statements, certainly. So for instance, if what we're doing is purely algebraic, then you could say: if A and B are algebraic structures, i:A->B is an injective homomorphism, and S is a set of algebraic equations all of which always hold in B, then all of them always hold in A; but of course there are way more types of statements one can make than algebraic equations.
So, uh, yeah, one can write down any number of statements like that, but I don't know how you'd formally abstract it into a general principle...?
Or, to say a bit more about this -- I guess the basic principle here is "it's OK to use auxiliaries?" Introducing a larger number system is no different from drawing an auxiliary line in geometry (for instance). There's no rule that the proof of a statement may only contain the entities appearing in the statement! You can introduce whatever auxiliaries you like.
A subset won’t have more solutions than its containing set; but it may have fewer.
Many problems are easier to solve in the reals (due to being complete), and you can then restrict that solution to your (sub)set of interest — in this case, the integers.
You see the same thing with Pythagorean triples being simpler to solve by doing the math over the complex numbers and then restricting your answers.
I'm trying to understand 5). If you're claiming that both (A) y > x, and (B) x^y > y^x hold, then x^y > y^x ("will always be larger") holds. You're satisfying your claim by assumption. Nothing new is deduced.
However, if only A) needs to be satisfied: y = 3, x = 2 is a counterexample, as x^y = 2^3 = 8 < 9 = 3^2 = y^x.
edit: looks like someone had the same thought as me as I was typing my reply!
I am not a mathematician either; I always maintained the university made a mistake granting me a math teacher degree :) It was also long enough ago that a lot of you weren't even alive, and I haven't done any such work in a quarter century. Nonetheless...
> but x^y will always be larger than y^x, for y>x and x^y > y^x (prove this by taking the first derivative to get rate of change. Do you use Laplace z domain for discrete, instead of s for continuous?)
I am struggling to follow what you are saying here.
> Why resort to the continuous domain to solve a problem in the discrete, is this even a valid approach?
I can't make heads or tails of this question. The proof says, correctly, that for all real solutions with x != y and x < y, it holds that 0 < x < e and e < y. He found this by investigating the derivative and establishing where the function is monotonically increasing or decreasing. Only after finding this out does he go back to the original question: what integer could x be? Since x is restricted to 0 < x < e, we only need to investigate the cases x = 1 and x = 2.
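The continuous argument rests on the function f(t) = ln(t)/t, since for positive x, y, x^y = y^x is equivalent to ln(x)/x = ln(y)/y. A quick numerical sketch (mine) of the monotonicity claim:

```python
# f(t) = ln(t)/t increases on (0, e) and decreases on (e, inf), so any
# distinct pair x < y with f(x) = f(y) needs x below e. The only integer
# with 1 < x < e is 2, and indeed f(2) = f(4).
import math

f = lambda t: math.log(t) / t
assert f(1.5) < f(2.0) < f(math.e)       # increasing before e
assert f(math.e) > f(3.0) > f(4.0)       # decreasing after e
print(f(2.0), f(4.0))                    # equal: ln(4)/4 = 2 ln(2)/4 = ln(2)/2
```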
Yes, m=n is a solution, or set of solutions, but why arbitrarily exclude some solutions and not others?
There is no explanation as to why - eg maybe it could be if the equations represent a physical system and m=n implied the same physical space was occupied by two objects, for want of a better example.
But if the equations represented some kind of abstract concept, why are not all solutions valid and of interest?
It seems like me saying, find an equation that only yields all the primes that use any digit once - but why, to what end and value?