> so ipv6 is now a social justice issue? I'll send you the $2 a month for an elastic IP.
And the billion people in India? The billion in China? The billion on the continent of Africa? And even in the US:
> Our [American Indian] tribal network started out IPv6, but soon learned we had to somehow support IPv4-only traffic. It took almost 11 months to get a small amount of IPv4 addresses allocated for this use. In fact there were only enough addresses to cover maybe 1% of the population. So we were forced to create a very expensive proxy/translation server in order to support this traffic.
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices, 9% was coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and for the remaining 9% we replaced extremely outdated Point of Sale (POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.
PSINet/Cogent got 38/8 in 1994: did they invent it? Ford got 19/8 in 1995: how about them?
How many places and people/companies didn't have the ability to go to a RIR in the 1990s or 2000s and get an allocation because their local infrastructure (power, telecom) wasn't developed at the time? So because they got computers, fibre, smartphones later they're SOL?
Both IPv4 and IPv6 addresses are 'just' unsigned integers: one is 32 bits (2^32 values) and the other is 128 bits (2^128 values). The fact that we display them in particular formats (10.11.12.13; ff:ee::bb:aa) is only for human UX purposes.
Strictly speaking everything in a computer is 'just' a number represented in base-2 (binary digits: bits) that we affix certain labels to (char, int, float, struct).
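Since both families are just integers, the point is easy to demonstrate with Python's standard `ipaddress` module (a minimal sketch; the addresses are the arbitrary examples from above):

```python
# Both address families are "just" unsigned integers under the hood;
# the dotted-quad / colon-hex text forms exist only for humans.
import ipaddress

v4 = ipaddress.ip_address("10.11.12.13")
v6 = ipaddress.ip_address("ff:ee::bb:aa")

print(int(v4))  # 168496141 -- the 32-bit integer behind the dotted quad
print(int(v6))  # the 128-bit integer behind the colon-hex form

# Round-trip: the bare integer is enough to rebuild the address.
assert ipaddress.ip_address(168496141) == v4
```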
> Or ... people can use the DoD/Mod and assorted other space to get their IPv4 allocations.
Someone did the math on this:
> Now, average daily assignment rates have been running at above 10 /8s per year, for 2010, and approached 15 /8s towards the end. This means any reclamation effort has to recover at least 15 /8s per year just to break even on 2010’s growth. That’s 5.9% of the total IPv4 address space, or 6.8% of the assignable address space. Is it feasible to be able to reclaim that much address space? Even if there were low-hanging fruit to cover the first year of new demand, what about there-after? Worse, demand for address space has been growing supra-linearly, particularly in Asia and Latin America. So it seems highly unlikely that any reclamation project can buy anything more than a years worth of time (and reclamation itself takes time).
There are 'only' four billion IPv4 addresses, and there are eight billion people on the planet. There are just as many smartphones (I have two: personal and work):
Even if you (CG-)NAT an IPv4 address for some number of people, you still need to have IPv4 addresses for public services (web, mail, NTP, etc).
There is no scenario where 2^32 addresses are enough for humanity's needs: at some point you need to go to a protocol with more than 32 bits of address space.
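The arithmetic is simple enough to sanity-check (the population figure and per-person device count below are rough assumptions, not measured data):

```python
# Back-of-the-envelope: IPv4 space vs. plausible device counts.
ipv4_total = 2**32              # ~4.29 billion addresses, total
people = 8_000_000_000          # rough world population
devices_per_person = 2          # e.g. phone + one other device (assumed)

needed = people * devices_per_person
print(needed > ipv4_total)      # True: IPv4 can't give each device an address
print(2**128 // needed)         # IPv6 leaves astronomical headroom per device
```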
> There are 'only' four billion IPv4 addresses, and there are eight billion people on the planet. There are just as many smartphones (I have two: personal and work)
Unless all of these devices are running a dedicated full-time server that must be reachable inbound by everyone, this is not required. At any given time "all the people" are not online; that is why DHCP (per ISP) takes care of this. Maybe some day all the people will become terminally online, but I would not count on it.
Yeah, some day IPv6 may be required. Maybe in 100 years or so. IPv4 has plenty of unused allocated addresses that can be ripped away from greedy people. There was a time when ARIN would check to see what was in use and would take back anything people were squatting on. I think the reclamation project works if we don't assume everything has to be reachable as a server.
I should add that cell phones (where people are terminally online) were already IPv6 a long time ago for the most part so it's really a non issue. The only risk I see is if someone wanted to start a new massive dedicated server and VPS provider. Most of those are dual stack IPv4+IPv6 now and doing that means clawing some IPv4 space away from those I mentioned earlier.
> Unless all of these devices are running a dedicated full time server that must be reachable inbound by everyone this is not required.
I think this is a lack of imagination. The fact that (CG-)NAT is in the way could be precluding the development of software that could take advantage of incoming/P2P connections.
It's a form of (negative/inverse) survivorship bias: kind of like zoning for only single-family homes and yet saying "no one wants mid-rise towers/apartments, as evidenced by the fact that no one is building them". The current rules/structure preclude any other options.
When we went from dial-up speeds to DSL/cable to fibre we were able to have all sorts of new applications due to higher bandwidth. Are there classes of applications that we don't / can't have because of NAT? We're stuck with things that often need a central server (TURN/ICE/STUN), and I'd like people to have the ability to explore a more distributed/decentralized Internet.
No imagination required. P2P works fine if at least 20% to 30% have ports open inbound; 70%+ need not have open inbound ports. Where this could theoretically be a problem is if a specific subset of CG-NAT users were the only people seeding and downloading something. This non-existent problem can be worked around using a VPN mesh. Tinc is an open-source VPN that operates in user space, and while not as fast as WireGuard it can do things WireGuard could never dream of, such as user-space mesh routing, always discovering the shortest path. The advantage of this is keeping ambulance-chasing lawyers off the P2P/VPN mesh. The only imagination required is how to keep the network semi-private. In my experience this is running a semi-private invite-only self-hosted forum. In reality none of this is required for P2P, however.
> No imagination required. P2P works fine if at least 20% to 30% have ports open inbound. 70%+ need not have open inbound ports. Where this could theoretically be a problem is if a specific sub-set of CG-NAT users were the only people seeding and downloading something.
"Seeding"? "Downloading"? I think applications besides BitTorrent could be invented and become popular. Even now, existing things like SIP and WebRTC would probably be much less onerous.
> This non existent problem can be worked around using a VPN mesh. Tinc is an open source VPN that operates in user-space and while not as fast as Wireguard it can do things Wireguard could never dream of such as user space mesh routing, always discovering the shortest path.
So you're introducing another layer of software because the underlying network does not have the functionality available (just like STUN/TURN/ICE had to be invented to deal with NAT).
Here's another idea: have IPv6, and if folks want to have end-to-end encrypted communications, start up an IKEv2 process (that opens a hole for its port via UPnP/PCP), and we have IPsec (which is built into most OSes anyway) encrypted communications opportunistically enabled.
> So you actually agree with me, that making all addresses public was stupid to begin with.
If an address is not public, how can you start a connection from it, or end a connection at it? A web server needs a public address if you want people to reach it. And you, at some point, also have to have a public address if you want to connect to public services: either on your end-host, at your CPE/router's WAN interface, or on an interface of your ISP's CG-NAT box.
But having a public address on your end-host also allows for much more functionality than if you were stuck behind CPE-NAT or CG-NAT. Now, you don't have to use this functionality—just like I don't when my printer gets a publicly addressable (but not publicly reachable) IPv6 address—but it opens up various possibilities.
> Yes...? I know that, but does that cause any issues in practice other than death of P2P?
Well:
> If you’re a gamer using PS5, Xbox, or PC in 2025, running into Double NAT or CGNAT port forwarding issues can make online play nearly impossible. Many 5G home internet and satellite services (like T-Mobile Home Internet and Starlink) put users behind carrier-grade NAT, which blocks direct connections and port forwarding. The good news? There are still workarounds that can open up your connection for smoother online gaming.
When we went from dial-up speeds to DSL/cable to fibre we were able to have all sorts of new applications due to higher bandwidth. Smartphones are capable of all sorts of things because they're always online: back in the day people used to talk about "being online" and say "sorry, I was offline", because you only had connectivity at the office or at home (where you dialed into your ISP).
What kind of applications and services are not being invented because we're stuck with the current non-P2P / centralized setup of IPv4+NAT?
> What kind of applications and services are not being invented because we're stuck with the current non-P2P / centralized setup of IPv4+NAT?
I don't know? I've never had CG-NAT and yet I've never seen a piece of software that takes advantage of that except maybe for games that use UPnP to open ports.
> I don't know? I've never had CG-NAT and yet I've never seen a piece of software that takes advantage of that except maybe for games that use UPnP to open ports.
Which, as a sibling comment mentions, is the point.
The fact that (CG-)NAT is in the way could be precluding the development of "software that takes advantage of that". It's a form of (negative/inverse) survivorship bias: kind of like zoning for only single-family homes and yet saying "no one wants mid-rise towers/apartments, as evidenced by the fact that no one is building them". The current rules/structure/architecture preclude any other options.
Games, voice/video chat (especially open source ones), stuff like Tailscale, stuff like Magic Wormhole, ... stuff like Dropbox.
Is there anything you do on a computer that involves communicating with another user? That's not just anything - that's most things! All communication between two computers is improved by not requiring NAT.
Corporations love to keep us dependent on their central servers, of course.
> I've never seen a piece of software that takes advantage of that except maybe for games
Maybe we haven't seen many products available on the market to take advantage of it because the current standard of NATs makes such things practically unworkable?
It's pretty much impossible to ship smart home stuff that is hosted locally (i.e. without it connecting to some cloud service) because people want to access these smart devices from outside their home. They're not likely to configure a VPN to connect home, they're not going to configure NATs in any workable fashion (or may be unable to, such as with CGNAT), the applications probably don't want to have to handle NAT hairpinning issues, etc.
So instead we continue down everything that's popular being something that requires a cloud proxy/relay (because that's the only way things actually work for most people), when in reality if things could just be public we could do a whole bunch more and empower people to easily host things themselves.
> Furthermore, DHCPv6 holds you back from various desirable things like privacy addresses and (arguably even more importantly) IPv6 Mostly.
Why would DHCPv6 hold back privacy addresses? Can't DHCPv6 servers generate random host address bits and assign them in DHCP Offer packets? Couldn't clients generate random addresses and put them in Request packets?
> DHCPv6 temporary addresses have the same properties as SLAAC temporary addresses (see Section 4.6). On the other hand, the properties of DHCPv6 non-temporary addresses typically depend on the specific DHCPv6 server software being employed. Recent releases of most popular DHCPv6 server software typically lease random addresses with a similar lease time as that of IPv4. Thus, these addresses can be considered to be "stable, semantically opaque". [DHCPv6-IID] specifies an algorithm that can be employed by DHCPv6 servers to generate "stable, semantically opaque" addresses.
How does DHCPv6 hold back IPv6-mostly? First, most clients will send out a DHCPv4 request in case IPv4 is the only option, in which case IPv6-mostly can be signalled:
I was unaware of this, so thanks. Sounds like it addresses (pun intended) my concern.
> How does DHCPv6 hold back IPv6-mostly? First, most clients will send out a DHCPv4 request in case IPv4 is the only option, in which case IPv6-mostly can be signalled
> It won't work. The only way to authenticate who owns what coins is with signatures. If the signature algorithm is broken, you can't tell who the original owner is to move the coins to a safe signature algorithm.
If you publish/take a snapshot of the ledger at (say) 23:59 UTC every day, and publish it with a SHA2/3 hash, people will know what the state of ownership was at that time. Then if a break occurs at any later point you cannot trust any transaction afterwards, but some portion of folks can attest to their ownership.
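As a minimal sketch of that checkpoint idea (the ledger dict and JSON serialization here are illustrative stand-ins, not Bitcoin's actual data model):

```python
import hashlib
import json

# Hypothetical balances as of the 23:59 UTC snapshot.
ledger = {"addr1": 5.0, "addr2": 2.5}

# Canonical serialization, then a SHA-256 digest to publish.
snapshot = json.dumps(ledger, sort_keys=True).encode()
digest = hashlib.sha256(snapshot).hexdigest()
print(digest)

# Later, anyone holding the claimed ledger state can re-serialize it
# and compare against the published digest to verify the checkpoint.
assert hashlib.sha256(snapshot).hexdigest() == digest
```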
There will be some portion of folks that did some legitimate transactions that could come into question, but at least it's not necessarily everyone.
> but some portion of folks can attest to their ownership.
How? Alice pays Bob 1 BTC at random address 0x1234. Someone shows up and says: I own that address, and here is a signature proving it. But the signature scheme is broken, so anyone can do that. So you ask for documentation that they own that address; well, they have a screencap of a message asking for payment from Alice. Is that real? Maybe you find the email of that user and ask them, but they could be lying. Now, if you paid from Coinbase, Coinbase could vouch for you.
So you need some sort of court that sits in judgement over who owns what. That is going to be very expensive. While you are doing this, no one can move funds. What is the most likely outcome of such a system? Well, there is no CEO of Bitcoin, so you would probably end up with multiple courts producing conflicting rulings that no one would respect.
The whole notion of ownership courts is anathema to Bitcoin's philosophy and would completely undermine the social trust that makes Bitcoin valuable. If we are going to save Bitcoin from a CRQC we must act before a CRQC recovers everyone's private key.
There are three workable schemes:
* For public keys that are in hashed addresses such as P2PKH (Pay-to-Public-Key-Hash) et al.: if the public key is not known, you could produce a ZKP that you know the public key (proof of pre-image). The main problem with this approach is that it only protects hashed addresses where the public key has not been leaked or exposed on-chain. It doesn't have enough coverage.
* You can do commit-reveal schemes, but this makes miners far more trusted, and again it only helps with hashed addresses that haven't exposed the public key.
* You can do ZKP proofs of HD seeds; most modern wallets have HD seeds. AFAICT you'd have to use STARKs, but STARKs for HD seeds are too big for on-chain proofs. Not all HD seeds are protected, and not all addresses have HD seeds. Just today Laolu published a demo of this; the proofs are ~1.7 MB: https://groups.google.com/g/bitcoindev/c/Q06piCEJhkI
See perhaps The Fable of the Bees as well, which was published a few years before Smith's birth:
> In The Grumbling Hive, Mandeville describes a bee community that thrives until the bees decide to live by honesty and virtue. As they abandon their desire for personal gain, the economy of their hive collapses, and they go on to live simple, "virtuous" lives in a hollow tree. Mandeville's implication—that private vices create social benefits—caused a scandal when public attention turned to the work, especially after its 1723 edition.
> Mandeville's social theory and the thesis of the book, according to E. J. Hundert, is that "contemporary society is an aggregation of self-interested individuals necessarily bound to one another neither by their shared civic commitments nor their moral rectitude, but, paradoxically, by the tenuous bonds of envy, competition and exploitation".[1] Mandeville implied that people were hypocrites for espousing rigorous ideas about virtue and vice while they failed to act according to those beliefs in their private lives. He observed that those preaching against vice had no qualms about benefiting from it in the form of their society's overall wealth, which Mandeville saw as the cumulative result of individual vices (such as luxury, gambling, and crime, which benefited lawyers and the justice system).
In some ways this can be used to support trickle-down economics (TDE; popularized by Reagan et al in the 1980s, but around for over a century) to allow the rich to keep more money:
One problem with TDE, though, is that you can only really spend so much, even if you go crazy in extravagance (there are only so many Ferraris and jets that one can purchase), and a lot of the money ends up being saved/invested/hoarded. It's probably better to give more money to 'poor(er)' people, who will probably end up spending more of it just to make ends meet—which is more likely to circulate in the economy.
And corporate networks: in Google's stats you'll see IPv6 usage jumps on weekends as people do stuff not using their work computer.