I think your summary is really great. One of the better refutations I've seen about the "what about v4 but longer??" question.
However, I think people do get tripped up by the paradigm shift from DHCP -> SLAAC. That's not something that is an inevitable consequence of increasing address size. And compared to other details (e.g. the switch to multicasting, NDP, etc.), it's a change that's very visible to all operators and really changes how things work at a conceptual level.
The real friction with SLAAC was that certain people (particularly some at Google) tried to force it on users as the only option; IPv6 itself never mandated that. The same kind of thing would likely occur with any new IP version rolling out.
SLAAC isn't an inevitable consequence of increasing address size, it's a useful advantage of increasing address size. Almost no one had big enough blocks in IPv4 for "just choose a random address, and as long as no one else seems to be currently claiming it, it's yours" to be a viable strategy for assigning an address.
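The mechanics are easy to sketch: the node just concatenates the advertised /64 prefix with random bits and relies on Duplicate Address Detection for the rare clash. A minimal illustration in Python (the prefix is a documentation-range placeholder, and this is a conceptual sketch, not a real SLAAC implementation):

```python
import ipaddress
import secrets

def slaac_style_address(prefix: str) -> ipaddress.IPv6Address:
    """Combine an advertised /64 prefix with a random 64-bit
    interface identifier, as SLAAC privacy extensions do."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC assumes a /64 prefix"
    iid = secrets.randbits(64)  # random interface identifier
    return ipaddress.IPv6Address(int(net.network_address) | iid)

addr = slaac_style_address("2001:db8:1:2::/64")
# In a 2^64 space, a random collision is vanishingly unlikely;
# Duplicate Address Detection (DAD) handles the rare clash.
```

With 2^64 identifiers per subnet, the "pick randomly and check" strategy that was hopeless in a /24 becomes essentially free.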
There are some nice benefits of SLAAC over DHCP, such as modest privacy: if device addresses are randomized they become harder to guess/scan, and if there's no central server keeping a registration list of every device, even more so (the first S, Stateless). That's a great potential win for general consumers and a far better privacy strategy than NAT44's accidental (and somewhat broken) privacy screening. It's at odds with corporate device management, where top-down assignment "needs to be the rule" and device privacy is potentially a risk, but that doesn't make SLAAC a bad idea; it just highlights that consumer networks and big corporate networks are very different styles of sub-network of the internet, with somewhat conflicting needs. (Those conflicting interests are also why consumer equipment is leading the vanguard to IPv6 while corporate equipment languishes behind in command-and-control IPv4 enclaves.)
> Furthermore, DHCPv6 holds you back from various desirable things like privacy addresses and (arguably even more importantly) IPv6 Mostly.
Why would DHCPv6 hold back privacy addresses? Can't DHCPv6 servers generate random host address bits and assign them in their Advertise/Reply messages? Couldn't clients generate random addresses and put them in their Request messages?
> DHCPv6 temporary addresses have the same properties as SLAAC temporary addresses (see Section 4.6). On the other hand, the properties of DHCPv6 non-temporary addresses typically depend on the specific DHCPv6 server software being employed. Recent releases of most popular DHCPv6 server software typically lease random addresses with a similar lease time as that of IPv4. Thus, these addresses can be considered to be "stable, semantically opaque". [DHCPv6-IID] specifies an algorithm that can be employed by DHCPv6 servers to generate "stable, semantically opaque" addresses.
How does DHCPv6 hold back IPv6-Mostly? Most clients will send out a DHCPv4 request in case IPv4 is the only option, and that exchange is exactly where IPv6-Mostly can be signalled.
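For reference, that signal is DHCPv4 option 108, "IPv6-Only Preferred" (RFC 8925). A rough sketch of how a client might scan a received options field for it (the byte strings below are fabricated examples, not real captures):

```python
def has_ipv6_only_preferred(options: bytes) -> bool:
    """Scan a DHCPv4 options field (after the magic cookie) for
    option 108, "IPv6-Only Preferred" (RFC 8925)."""
    i = 0
    while i < len(options):
        code = options[i]
        if code == 0:        # pad option, single byte
            i += 1
            continue
        if code == 255:      # end option
            break
        length = options[i + 1]
        if code == 108:
            return True
        i += 2 + length      # skip code, length, and value bytes
    return False

# Option 108 carries a 4-byte "wait time"; here 0 seconds, then End.
offer_options = bytes([108, 4, 0, 0, 0, 0, 255])
```

A client that sees option 108 in the server's reply knows it can forgo an IPv4 lease on this network, which is the whole point of IPv6-Mostly.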
> 3. History has shown that upgrading network backbone hardware (in particular) is incredibly difficult through a process that's been described as "ossification", which is a nice description. Basically, network relays and routers wanted to avoid security issues and decided to discard things they didn't understand.
What makes you suggest that it's backbone hardware that is the problem? It's largely enterprise customers and tier 3 providers that don't really do IPv6 afaics.
I get the impression that this fact is fundamentally lost on a lot of the people who want a "compatible" IPv6. Like, their mental model does not distinguish between how we as humans write down an IPv4 address in text and how that address is represented in the packet.
So they think: "let's just add a couple more dots and numerals and keep everything else the same".
I think you’re right. Honestly, my impression is that a lot of people imagine it like a string field, and others more like a rich text field, analogous to “can’t we just use a smaller font?”
> The first thing in the IP header is the version number.
So you just change the version number… like was done with IPv6?
How would this be any different: all hosts, firewalls, routers, etc, would have to be updated… like with IPv6. So would all application code to handle (e.g.) connection logging… like with IPv6.
I was addressing the narrow claim that you cannot distinguish ASCII from UTF-7. You can distinguish IPv4 from IPv6 by looking at the version field (and I forgot to mention the L2 protocol field is out of band from IP's perspective). Obviously if the receiver doesn't support UTF-7 or IPv6 then it won't be understood. Forward compatibility isn't possible in this case.
Weirdly, the version field is actually irrelevant. You can't determine the type of a packet by looking at its first byte; you must look at the EtherType header in the Ethernet frame, or whatever equivalent your L2 protocol uses. It's redundant, possibly even to the point of being a mistake.
I mean, yes, in practice you can peek at the first byte if you know you're looking at an IP packet, but down that route lies expensive datacenter switches that can't switch packets sent to a destination MAC that starts with a 04 or 06 (looking at you, Cisco and Brocade: https://seclists.org/nanog/2016/Dec/29).
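To make the distinction concrete: proper dispatch keys on the L2 EtherType (0x0800 for IPv4, 0x86DD for IPv6), and only then is the version nibble a sanity check. A rough sketch for untagged Ethernet frames (the frame bytes are fabricated):

```python
import struct

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_IPV6 = 0x86DD

def classify(frame: bytes) -> str:
    """Classify a non-VLAN-tagged Ethernet frame by EtherType,
    then cross-check the IP version nibble in the payload."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)  # after dst+src MACs
    payload = frame[14:]
    version = payload[0] >> 4  # top nibble of the first IP header byte
    if ethertype == ETHERTYPE_IPV4 and version == 4:
        return "IPv4"
    if ethertype == ETHERTYPE_IPV6 and version == 6:
        return "IPv6"
    return "other/mismatch"

# Fake frame: zeroed MACs, IPv6 EtherType, first payload byte 0x60.
frame = bytes(12) + struct.pack("!H", ETHERTYPE_IPV6) + bytes([0x60]) + bytes(39)
```

The bug in those switches was effectively skipping the EtherType step and keying on the byte where a version nibble would be, which is why certain destination MACs confused them.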
Right, the variable-length thing was my point. That's fine when you're dealing with byte slices that you scan through incrementally. But it's not fine for packets and OS data structures that had their lengths fixed at 32 bits.
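A quick way to see the mismatch: an IPv4 address packs into exactly one fixed 32-bit field, while an IPv6 address needs 128 bits, so every fixed-width slot in a struct or packet format breaks. A small Python illustration (addresses are documentation-range placeholders):

```python
import ipaddress
import struct

v4 = ipaddress.IPv4Address("192.0.2.1")
packed_v4 = struct.pack("!I", int(v4))   # fits a fixed 32-bit field exactly
assert len(packed_v4) == 4

v6 = ipaddress.IPv6Address("2001:db8::1")
try:
    struct.pack("!I", int(v6))           # 128-bit value into a 32-bit slot
except struct.error:
    pass  # too big: exactly why fixed 32-bit fields can't be reused
```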
Look up systemd-userdb (the systemd component that added this field). Like the sibling comment said, this is basically equivalent to adding a GECOS field. A totally optional field.
Because someone came along with a pull request for it: the additional field was meant to support a feature in something else they were working on (an xdg portal). It was a simple PR that addressed a need the programmer had, and it was accepted.
Is it some sort of well-known fact that Xi has some egotistical need to build some kind of strongman legacy? I’m no expert, but the China that he has built is one of extreme competence and long-term thinking. It’d seem contradictory to compromise that for some flashy but ultimately unhelpful actions.
Yeah, the field of software engineering has come a long way since then. But just because previous implementations of the analysis phase were flawed doesn't mean that the phase itself was flawed.
To the extent that Python is indeed "batteries included," that seems true. But just how "batteries included" is it? I'd argue that its batteries are pretty limited. Exhibit A: everybody uses the third-party requests instead of the stdlib urllib. Exhibit B: http.server isn't a production-ready webserver, so people use Flask or something beefier.
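To be fair to the stdlib, urllib.request does work; it's just clunkier than requests for everyday cases. A sketch of the contrast (the URL is a placeholder, and the network calls are left commented out):

```python
import json
import urllib.request

# Stdlib: workable, but you juggle Request objects, raw bytes, and
# manual JSON decoding yourself.
req = urllib.request.Request(
    "https://example.invalid/api",           # placeholder URL
    headers={"Accept": "application/json"},
)
# with urllib.request.urlopen(req, timeout=5) as resp:
#     data = json.loads(resp.read().decode())

# Third-party requests: the same thing in two lines, which is why
# "everybody" reaches for it:
#   resp = requests.get("https://example.invalid/api", timeout=5)
#   data = resp.json()
```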
I'd contrast Python with Go, which has an amazing stdlib for the domains that Go targets. This last part is key--Go has a more focused scope than Python, and that makes it easier for its stdlib to succeed.
> http.server isn't a production-ready webserver, so people use Flask [...]
Nit, but a relevant nit: Flask is also not a production-grade webserver. You could say it is also missing batteries ... and those batteries are often missing batteries too. Which is why you don't deploy Flask by itself; you deploy Flask on top of gunicorn on top of nginx. It's missing batteries all the way down (or at least 3 levels down).
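The layering looks roughly like this: Flask gives you the app object, gunicorn supplies the production WSGI server, nginx fronts the whole thing. A minimal sketch with a plain WSGI callable standing in for a Flask app (the worker count and port are arbitrary):

```python
# app.py -- a minimal WSGI application; a Flask app exposes the same
# callable interface, which is why gunicorn can serve either.
def application(environ, start_response):
    body = b"hello from behind gunicorn\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Deploy (shell):
#   gunicorn --workers 4 --bind 127.0.0.1:8000 app:application
# then put nginx in front as a reverse proxy for TLS, buffering, etc.
```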
Appreciate the nit. Had no idea that Flask wasn't production-grade. Yeesh.
I really don't miss this part of the Python world. When I started on backend stuff ~10 years ago, the morass of runtime stuff for Python webservers felt bewildering. uWSGI? FastCGI? Gunicorn? Twisted? Like you say, missing batteries all the way down, presumably due to async/GIL related pains.
Then you step into the Go world and it's just the stdlib http package.
Anyway, ranting aside, batteries included is a real thing, and it's great. Python just doesn't have it.
> In Linux, all windows share the same message loop thread.
I'm no expert, but aren't you just talking about Xorg here? As far as my limited knowledge goes, there's nothing inherent in the Wayland protocol that would imply this.
But yeah, SLAAC's paradigm of moving assignment logic into the node (away from network infra like in DHCP) is definitely a stumbling point.