This is almost certainly much less interesting than it sounds. It's effectively saying that if you're in the US, the network path for many of your communications runs from one part of the US to another (e.g., Utah to Palo Alto, or New York to Virginia). These communications would not be surveilled.
But, if you send 100% of your communications through a consumer VPN and therefore much of your network traffic originates outside the US, your traffic might end up getting automatically collected.
There are two reasons. One has already been noted elsewhere: you don't know what will succeed, so you cast your net wide and hope you get one of the few winners in your catch.
The second reason is that your diverse catalog also serves as advertising for the subset of readers who will become your next authors. They might be dreaming for years about the day they can be published by Random House, just like their hero, obscure writer X from some years ago, who never earned out but ended up inspiring the next generation's big thing.
Publishers have a good idea who will succeed. Being able to estimate that is part of the job, and they've become far less diverse and adventurous over time because they've become far more corporate and profit-focused.
Most published books have a track record as self-published breakouts, and/or they're trend chasers, and/or - as the article noted, but didn't say enough about - they have authors who fit a known demographic with known tastes and can be marketed on the strength of the author's story.
The real reason so many books are published is that it's a volume game. Most books are actually profitable - barely - so publishers need to keep churning them out to make adequate aggregate profits.
It's true that most of the money is made by a few tens of authors, but that doesn't mean no money is coming from the rest.
The tiny number of high prestige titles are cultural loss leaders which give publishers a cover story that they're really about Serious Literature and aren't just content mills.
This used to be a credible excuse when Serious Authors were still a thing, but it's looking more and more threadbare these days.
Serious Authors used to be household names, but hardly anyone can name a Recent Serious Author now. Critics still criticise, competitions still award prizes, and review editors still review, but most of the money and most of the interest is elsewhere.
It's a baffling flaw in human nature. The board should have cared about these issues, but in practice communications to and from the board are tightly controlled, and communications outside of those constraints are discarded.
This occurs whether or not it makes sense. Machiavelli actually warns about this specifically: if someone else controls access to you and communication with you, they have real leverage over you.
"Does it have a way to uninstall, and does that uninstallation clean every application artifact?" is such a great litmus test for just how much a software company actually cares about having a proper finished product that respects the user. Nobody forces a company to do it, but when they don't do it, you can probably bet that they're cutting corners and disrespecting the user's machine in other ways, too.
It's like "Do you return your shopping cart to the cart storage or leave it in the carpark?" You're allowed to just shove your cart away and drive off, but people who do that are very likely assholes in other ways, too.
In the file menu of the installer, there is generally an option to see all the files it is placing on the system with full system paths. I generally note this down so I can make sure to clean things up completely if/when needed.
Apps that just get dragged into the Applications folder end up doing all this additional file creation on first launch instead of via an installer. That actually makes it harder. For those I tend to search the ~/Library folder for the name of the app and the company that made it, hoping I find all the remnants to delete. There are apps, like AppZapper and AppCleaner, which try to automate this process. I still think it’s ridiculous that Apple never solved for this. It’s one of the reasons I always do a manual migration to a new Mac. It feels like the only real way to clean things up.
> I still think it’s ridiculous that Apple never solved for this.
I think that problem, in general, is unsolvable on the Mac. The OS cannot know whether a file that an application creates is a user file that should be kept on uninstall or an application one that, maybe, should be deleted on uninstall.
(Maybe because Apple’s guidelines say (or at least used to say) uninstallers, if you have one, should keep preference files around, in case a user reinstalls the app later. Also, applications may ship with files (e.g. fonts, sounds, picture libraries) that users may want to keep around)
> Apps that just get dragged into the Applications folder end up doing all this additional file creation on first launch instead of via an installer
For quite a few things that an installer can install, applications cannot do that, as they want to install them into protected directories.
I think most of the leftovers whose locations you cannot gauge from looking at the file list in the installer are for caches, preferences, logs, etc.
Yeah, it's usually plist files for preferences and maybe an Application Support folder with whatever the app needed. Occasionally some other things. More recent apps end up with a container in there.
The upgrade process for some apps almost necessitates that those files/folders are decoupled from the app and can live on, as the app upgrade ends up deleting the existing app and dropping a new one in its place.
I get wanting to keep user preferences around in spirit, but in practice keeping them forever can sometimes be problematic. If I tried an app and then install it again 8 years later, I usually want to start over. For users who don't know about ~/Library, this is hard - especially now that Apple hides it in Finder.
When having issues with an app, deleting those files (or simply moving them somewhere else, like the desktop, as a test) is a great troubleshooting step to see if it's an app problem or something corrupt in your settings or support files. When most users reinstall an app as a troubleshooting step, they aren't doing much of anything, since all those files stick around.
UTM buries their VM disks away in a container inside the ~/Library. I have a 20GB disk in there. It's not always trivial small files. If someone deletes UTM and forgets to check for old VMs first, that's a big hit.
What I'd like to see is something, maybe in the Settings app, that lists all the applications on the system with three options:
1. Remove Application, keep Library data
2. Remove Application and Library data (have it give info on what files are in there)
3. Remove Library data only (this could be used to refresh an app to start over with it)
Maybe in addition to that, as part of the Optimize Storage feature, it could crawl through all those old orphaned application support folders and containers, and list all the ones without an app installed, show the size, and the user can choose to get rid of them.
Looking at how much junk I have out there now, I may just do a re-install of my OS to clean things up soon. I usually wait for a new system, but this M1 Pro is lasting a long time. I recently migrated off 1Password and it seems to have a bunch of junk out there, including 4 year old weekly archives of all my passwords that it took for a few months for some reason. The files are encrypted, but who knows how long that will actually be good for.
I'll have to check that installer trick the next time I use one.
Isn't the "Receipts" folder that so angered OP kind of that same thing? I thought those included the list of files installed.
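For what it's worth, macOS's pkgutil can query those receipts directly (a sketch; the package ID below is hypothetical):

```shell
pkgutil --pkgs                          # list receipt IDs for installed packages
pkgutil --files com.example.someapp     # list the files that package installed
pkgutil --forget com.example.someapp    # delete the receipt (leaves the files alone)
```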
In general, I think some worries about removing "every trace" are overblown, though. The receipts, for instance, are inert and they're not filling up the disk or consuming RAM.
Of all the things Apple does in the name of "security", it's funny to me that they've never even tried to build uninstallation functionality - even though a majority of apps with "installers" use not arbitrary installer executables like on Windows, but .pkg files that open with Apple's Installer app. That means it's Apple's code placing most of those files, and even if the install includes a "script" portion, it seems like a solvable problem for Installer.app to monitor the files added or changed by the script process, to at least let you view a log of what happened, if not reverse the changes.
There are two cases: I am uninstalling because I never want to use the app, or I am uninstalling because I know I currently don't need the app and will reinstall after 6 months when I do.
An example of first is a trial of an app but you don't like it in the end, an example of the latter is a game that you might want to play with the same settings later.
Now, I want the option. In the first case I don't want these inert files taking up disk space and in the second I want to have those files.
I stopped trying new apps as often, because I don't like how I can never really go back to a state before an app was installed, unless the developer actually put effort into not spraying files everywhere and not leaving a trace once gone. I appreciate these developers very much, and am more likely to keep using their apps. The more junk an app install puts on my system, the more likely I am to want it gone.
Almost never, indeed, so you need some 3rd party trash utilities with databases and heuristics.
Though that's also on the gardener and his bad OS design, where forced compartmentalization isn't trivial; the weeds will never want to root themselves out!
The only good thing Microsoft Azure ever did for me was provide a very easy way to exploit their free trial program in the early 2010s to crypto mine for free. It couldn’t do much, but it was straight up free real estate for CPU mining. $200 or 2 weeks per credit/debit card.
We tried this (and M$ sold it hard) and never went to production with it (except for a couple of niche use cases). It was obviously not going to meet expectations before we were halfway through the PoC.
Azure container apps are a great idea and work mostly fine as long as you don’t need to touch them. They’re just like GCR, or what Fargate should be - container + resources and off you go.
We ran many internal workloads on ACA, but we had _so many issues_ with everything else around ACA…
Sounds like containers and potentially adblocking and js blocking prevent this. For my part, I use LinkedIn on my "god dammit I hate corporate websites so much" browser, which is used only for medical bill pay, Amazon/Walmart purchases, and monthly bills. Could LinkedIn get something from me there? Potentially, but they're also not really following me around the web. Given this, I think I'll go install a 3rd browser for LinkedIn only, or maybe finally just delete my account. It never got me a job and it's a cesspool.
You can use Firefox with different profiles and configure it to launch a particular profile directly, without launching the default profile or using about:profiles.
A non-default Firefox profile can be created like this:
./firefox -CreateProfile "profile-name /home/user/.mozilla/firefox/profile-dir/"
# For linkedin that would be:
./firefox -CreateProfile "linkedin /home/user/.mozilla/firefox/linkedin/"
And you can launch it like that:
./firefox -profile "/home/user/.mozilla/firefox/profile-dir/"
# For linkedin that would be:
./firefox -profile "/home/user/.mozilla/firefox/linkedin/"
So, given that /usr/bin/firefox is just a shell script, you can
- create a copy of it, say, /usr/bin/firefox-linkedin
- adjust the relevant line, adding the -profile argument
If you use an icon to run firefox (say, /usr/share/applications/firefox.desktop), you'll need to do the same copy-and-adjust for the .desktop file.
Of course, "./firefox" in the examples above should be replaced with the actual path to the executable. For a default installation of Firefox, the path is in the /usr/bin/firefox script.
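A sketch of that copy-and-adjust, assuming a default install where /usr/bin/firefox is the launcher script (paths and names are illustrative):

```shell
# Make a dedicated launcher script for the linkedin profile
sudo cp /usr/bin/firefox /usr/bin/firefox-linkedin
# Edit the new script so its exec line passes the profile, e.g.:
#   exec "$MOZ_PROGRAM" -profile "/home/user/.mozilla/firefox/linkedin/" "$@"

# And a matching launcher icon
sudo cp /usr/share/applications/firefox.desktop \
        /usr/share/applications/firefox-linkedin.desktop
# ...then point its Exec= line at /usr/bin/firefox-linkedin and change Name=
```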
So, you can have separate profiles for things that are sensitive/invasive (linkedin, shops, etc.) and then a separate profile for everything else.
And each profile can have its own set of extensions.
Really happy to see this kind of analysis on HN. The news you want to hear the most must also be looked at critically, and as much as I love Linux gaming we want to be sober in our expectations.
Funny enough, my favorite version has been the SNES version. Despite all the limitations, it's got built-in controller support and also has a map! Maybe I'll try to grab the mac-for-pc version.
- I don't want my interfaces to have multiple IP addresses
- I don't want my devices to have public, discoverable IPs
- I like NAT and it works fine
- I don't want to use dynamic DNS just to set up a single home server without my ISP rotating my /64 for no reason (and no, SLAAC is not an answer, because I don't want multiple addresses per interface)
- I don't need an entire /48 for my home network
IPv6 won't help the internet "be addressable." Almost everyone is moving towards centralized services, and almost no one is running home servers. IPv4 is not what is holding this back.
Why don't you want every device to have a public IP? There seems to be a perception that this is somehow insecure, but the default configuration of any router is to firewall everything. And one small bonus of the huge size of a /64 is that port scanning is not feasible, unlike in the old days when you could trivially scan a whole IPv4 /24 of a company that forgot to configure their firewall.
NAT may work fine for your setup, but it can be a huge headache for some users, especially users on CGNAT. How many years of human effort have gone towards unnecessary NAT workarounds? With IPv6, if you want a peer-to-peer connection between firewalled peers, you do a quick UDP hole punch and you're done - since everything has a unique IP, you don't even need to worry about remapping port numbers.
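As a rough illustration of how simple the punch gets without port remapping, here's a minimal sketch: two UDP peers simulated on the IPv6 loopback, with made-up ports. A real setup would use each peer's global address, learned out of band.

```python
import socket

# Two UDP "peers"; ::1 and the ports are assumptions for the demo --
# real peers would bind their global IPv6 addresses.
def make_peer(port):
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.bind(("::1", port))
    s.settimeout(2)
    return s

a = make_peer(40001)
b = make_peer(40002)

# Each side fires a datagram at the other's (address, port). On a real
# firewalled path, that outbound packet is what creates the conntrack
# state allowing the peer's traffic back in -- the entire "punch".
a.sendto(b"hello from A", ("::1", 40002))
b.sendto(b"hello from B", ("::1", 40001))

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
a.close()
b.close()
```

With unique addresses on both ends there's nothing else to discover - no STUN round trip to learn what your mapped port ended up being.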
Your ISP shouldn't be rotating your /64, although unfortunately many do since they are still IPv4-brained when it comes to prefix assignment. Best practice is to assign a static /56 per customer, although admittedly this isn't always followed.
And if you don't need a /48... don't use it? 99.99% of home customers will just automatically use the first /64 in the block, and that's totally fine. There's a ton of address space available, there's no drawback to giving every customer a /56 or even a /48.
Great question and my gut is that it makes it that much easier for large, perhaps corporate interests to gain surveillance and control. I'm aware it's possible now, but it really feels like there's some safety in the friction of the possibility that my home devices just switch up IP addresses once in a while.
Like, wouldn't e.g. IPv6 theoretically make "ISP's charging per device in your home" easier, if only a little bit? I know they COULD just do MAC addresses, but still.
You can't correlate the number of addresses with the number of devices because IPv6 temporary addresses exist. If you enable temporary addresses, your computer will periodically randomly generate a new address and switch to it.
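On Linux, for example, this behavior (RFC 4941 temporary addresses) is toggled with a sysctl - a sketch, since defaults vary by distro:

```shell
# use_tempaddr: 0 = off, 1 = generate but prefer the stable address,
# 2 = prefer the temporary address for outgoing connections
sysctl net.ipv6.conf.all.use_tempaddr

# Enable and prefer temporary addresses:
sudo sysctl -w net.ipv6.conf.all.use_tempaddr=2
sudo sysctl -w net.ipv6.conf.default.use_tempaddr=2
```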
I feel like this is a silly narrowing of the problem for normal, retail users. My priority isn't masking "the number of addresses" or devices. My desire is to not have a persistent identifier to correlate all my traffic. The whole idea of temporary addresses fails at this because the network prefix becomes the correlation ID.
I'm not an IPv4 apologist though. Clearly the NAT/DHCP assignments from the ISP are essentially the same risk, with just one shallow layer of pseudo-obscurity. I'd rather have IPv6 and remind myself that my traffic is tagged with my customer ID, one way or another.
Unfortunately, I see no real hope that this will ever be mitigated. Incentives are not aligned for any ISP to actually help mask customer traffic. It seems that onion routing (i.e. Tor) is the best anyone has come up with, and I suspect that in today's world, this has become a net liability for a mundane, privacy-conscious user.
> My desire is to not have a persistent identifier to correlate all my traffic.
Reboot your router. Asus (with the vendor firmware) allows you to do this on a schedule. You'll get a new IPv4 WAN IP (for your NAT stuff) and (with most ISPs) a new IPv6 prefix.
As it stands, if you think NAT hides an individual device, you may have a false sense of security (PDF).
But most ISPs aren’t giving out static IPv6 prefixes either. Instead they are collecting logs of what addresses they’ve handed out to which customer and holding on to them for years and years in case a court requests them. Tracking visitors doesn’t need to use ip addresses simply because it’s trivial to do so with cookies or browser fingerprinting. There’s exactly zero privacy either way.
> Instead they are collecting logs of what addresses they’ve handed out to which customer and holding on to them for years and years in case a court requests them.
They are only supposed to hang on to them for a limited time according to the law where I live (six months AFAIK). Courts are also unwilling to accept IPv4 addresses as proof of identity.
> Tracking visitors doesn’t need to use ip addresses simply because it’s trivial to do so with cookies or browser fingerprinting
Cookies can be deleted. Browser fingerprinting can be made unreliable.
It's not zero privacy either way. Privacy is not a binary. Giving out more information reduces your privacy.
> Most home users do not have a static public IPv4 address - they have a single address that changes over time.
I'd be curious to know the statistics on this: I would hazard to guess that for most ISPs, if your router/modem does not reboot, your IPv4 address (and IPv6 prefix) will not change.
"If you enable" is doing ALL THE HEAVY LIFTING THERE.
Again, my point isn't about what is possible, but what is likely. -- which is MUCH MORE IMPORTANT for the real world.
If we'd started out in an IPv6 world, the defaults would have been "easy to discover unique addresses" and it's reasonable to think that would have made "pay per device" or other negatives that much easier.
Temporary addresses are enabled by default in OSX, windows, android, and iOS. That's what, like 95% of the consumer non-server market? As for Linux, that's going to be up to each distro to decide what their defaults are. It looks like they are _not_ the default on FreeBSD, which makes sense because that OS is primarily targeting servers (even though I use it on my laptop).
I haven't done the exhaustive research but props in advance for being the only person shouting in caps on HN. Definitely one way to proclaim one's not AI-ness without forced spelling errors.
I don’t want some of my devices to be publicly addressable at all, even if I mess up something at the firewall while updating the rules. NAT provides this by default.
I don’t want a static address either (although static addresses should be freely available to those who want them). Having a rotating IP provides a small privacy benefit. People who have upset other people during an online gaming session will understand; revenge DDoS is not unheard of in the gaming world.
> I don’t want some of my devices to be publicly addressable at all, even if I mess up something at the firewall while updating the rules. NAT provides this by default.
Do you ever connect your laptop to any network other than your home network? For example, public wifi hotspots, hotel wifi, tech conferences, etc? If so, you need to be running a firewall _on your laptop_ anyway because your router is no longer there to save you from the other people on that network.
It's also a good idea even inside your home network, because one compromised device on your network could then lead to all your other firewall-less devices being exploited.
Not every device can run its own firewall. IoT devices, NVR systems, etc should be cordoned off from the internet but typically cannot run their own firewall.
You must have not read my original post. I said that the NAT provides an additional fallback layer of safety in case you accidentally misconfigure your firewall. (This has happened to me once before while working late and I’ve also seen it in the field.)
Only if they're set up properly, which is quite the gamble. I was recently in a hotel and I listed all the chromecast devices throughout the entire hotel. I could see what everyone was watching and if I was a lesser person I could have controlled their TVs or changed what they were watching.
What about devices like those Chromecasts, which don't even have firewalls? The only real solution would be to bring your own hardware firewall / access point and connect it as a client off the hotel wifi. Who is really going to do that?
You can have IPv6 firewalls emulate the behavior of NAT so they block unsolicited inbound traffic while allowing outbound traffic. If you get a /48 from your ISP you could rotate to a new IP address every second for the rest of your life.
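As a sketch of what "emulate NAT's behavior" looks like in practice, here's a minimal nftables ruleset (the table and chain names are my own):

```shell
# Drop unsolicited inbound, allow replies to outbound -- the same
# effective policy NAT is usually credited with.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
# ICMPv6 must stay open or neighbor discovery (IPv6's ARP) breaks:
nft add rule inet filter input meta l4proto ipv6-icmp accept
```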
Right, but if you’re messing around as a naive learner it’s easy to accidentally disable that or completely open up an IP or range due to a bad rule. It’s a lot harder to accidentally enable port forwarding on a NAT.
> I don’t want some of my devices to be publicly addressable at all, even if I mess up something at the firewall while updating the rules. NAT provides this by default.
This feels like a strawman. If you are making the sort of change that accidentally disables your IPv6 firewall completely, you could accidentally make a change that exposed IPv4 devices as well (accidentally enabling DMZ, or setting up port forwarding incorrectly for example).
As someone who has done this while tired, it’s a lot easier to accidentally open extra ports to a publicly routable IP (or overbroad range of IPs) than it is to accidentally enable port forwarding or DMZ.
You could accidentally swap IPs to one that had a port forward, some applications can ask routers to forward ports, etc. I don't know exactly how we'd measure the various potential issues, but they seem incredibly minor compared to the sheer amount of breakage created by widespread NAT.
> Why don't you want every device to have a public IP?
Suddenly, your smart lightbulb is accessible by everyone. Not a great idea.
> With IPv6, if you want a peer-to-peer connection between firewalled peers, you do a quick UDP hole punch and you're done - since everything has a unique IP, you don't even need to worry about remapping port numbers.
There is no guarantee with IPv6 that hole punching works. It _usually_ does like with IPv4.
> Suddenly, your smart lightbulb is accessible by everyone. Not a great idea.
The answer here is kinda that Wi-Fi isn't an appropriate networking protocol for lightbulbs (or most other devices that aren't high-bandwidth) in the first place.
Smart devices that aren't high bandwidth (i.e. basically anything other than cameras) and that don't need to be internet accessible outside of a smart home controller should be using one of Z-Wave/Zigbee/Thread/LoRaWAN depending on requirements, but basically never Wi-Fi.
>> Why don't you want every device to have a public IP?
> Suddenly, your smart lightbulb is accessible by everyone. Not a great idea.
Why would it be "accessible by everyone"? My last ISP had IPv6 and my Asus (with the vendor firmware) didn't allow unsolicited inbound traffic. My printer automatically picked up an IPv6 address via SLAAC and it was not "accessible by everyone" (I tried connecting to it externally).
It's because router defaults have been bad for a long time and NAT accidentally made them better.
I finally have IPv6 at home but I am being very cautious about enabling it because I don't really know what the implications are, and I do not trust the defaults.
>> Why don't you want every device to have a public IP?
> What would be the advantage in it?
Not having to deal with ICE/TURN/STUN. Being able to develop P2P applications without having to build out that infrastructure (anyone remember Skype's "supernodes"?).
It's about being able to run apps that can operate without having an HQ that needs to be phoned home to, which is currently generally necessary with NAT.
> Anyhow. I'm not confused about NAT vs. firewalling. No one who dislikes IPv6 is confused by this.
"No one"; LOL. I've participated in entire sub-threads on HN with people insisting that NAT = security. I've cited well-regarded network educators/commentators and vendors:
That article is making a narrower claim than you're implying. It argues that NAT is not a security mechanism by design and that some forms of NAT provide no protection, which is true.
It also explicitly acknowledges that NAT has side effects that resemble security mechanisms.
In typical deployments, those side effects mean internal hosts are not directly addressable from the public internet unless a mapping already exists. That reduces externally reachable attack surface.
So, the disagreement here is mostly semantic. NAT is not a security control in the design sense, but it does have security-relevant effects in practice.
I personally do consider NAT as part of a security strategy. It's sometimes nice to have.
Both of those articles are actually wrong. They say "if an unknown packet arrives from the outside interface, it’s dropped" and "While it is true that stateful ingress IPv4 NAT will reject externally initiated TCP traffic" respectively, but this is in fact not true for NAT, which you can see for yourself just by testing it. (It's true for a firewall, but not for NAT.)
The biggest security-relevant effects of NAT are negative. It makes people think they're protected when they aren't, and when used with port forwarding rules it reduces the search space needed to find accessible servers.
I agree it can be a useful tool in your toolbox sometimes, but a security tool it is not.
> Why don't you want every device to have a public IP?
Big companies would abuse that beyond belief. Back around the late 90s ISPs wanted to have everyone pay per device on their local networks. NAT was part of what saved us from that.
IMO, IPv6 should have given more consideration to the notation. Sure, hex is "better in every way" except when people need to use it. If we could just send the IPv6 designers back in time, they could have made everyone use integer addresses.
# IPv4 - you can ping this
ping 16843009
# IPv6 - if they hadn't broke it :-(
ping 50129923160737025685877875977879068433
# IPv7 - what could have been :-(
ping 19310386531895462985913581418294584302690104794478241438464910045744047689
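For what it's worth, those decimal forms really are just the address's integer value - a quick check with Python's stdlib:

```python
import ipaddress

# 1.1.1.1 is 0x01010101, i.e. 16843009 as a plain integer
assert int(ipaddress.IPv4Address("1.1.1.1")) == 16843009

# IPv6 addresses round-trip through integers the same way
n = 50129923160737025685877875977879068433
assert int(ipaddress.IPv6Address(n)) == n
print(ipaddress.IPv6Address(n))  # prints the usual hex notation
```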
> Back around the late 90s ISPs wanted to have everyone pay per device on their local networks. NAT was part of what saved us from that.
But with IPv6 a single device may have multiple addresses, some of which it just changes randomly. So this idea that they'll then know how many devices you have and be able to pay per device isn't really feasible in IPv6.
A single /64 being assigned to your home gives you over 18 quintillion addresses to choose from.
If the ISP really wanted to limit devices, they'd rely on only allowing their own routers and looking at MAC addresses, but even then one can just put their own router behind it and boom, it's a single device on the ISP's LAN.
NAT is arguably a very broken solution. IPv4 isn't meant to be doing address translation, period. NAT creates all sorts of issues because in the end you're still pretending all communications are end to end, just with a proxy. We had to invent STUN and all sorts of hole-punching techniques just to make things work decently, but they are lacking and have lots of issues we can't fix without changing IPv4. I do see why some people may like it, but it isn't a security measure, and there are like a billion different ways to have better, more reliable security with IPv6. The "I don't want my devices to have public, discoverable IPs" argument is moot when you have literally billions of addresses assigned to you: with the /48 your ISP is supposed to assign you, you could have 4 billion devices connected, each one with a set of 281 trillion unique addresses. You could randomly pick an IP per TCP/UDP connection and not exhaust them in _centuries_. The whole argument is kind of moot IMHO; we have ways to do privacy on top of IPv6 that don't require fucking up your network stack and having rendezvous servers set things up.
We may also argue that NAT basically forces you to rely on cloud services - even doing a basic peer-to-peer VoIP call is a poor experience as soon as you have 2 layers of NAT. We had to move to centralised services because IPv4 made hosting your own content extremely hard, causing little interest in symmetrical DSL/fiber, leading to less interest in ensuring peer-to-peer connections between consumers are fast enough, which led to the rise of the cloud and so on. I truly believe that the Internet would be way different today if people could have just accessed their computers from anywhere back in the '00s without having to know networking.
And the worst part about CGNAT is that you have two bad solutions:
Either EIM/EIF (preferably with hairpinning), where you can practically do direct connections but you have to limit users to a really low number of "connections", breaking power users.
Or EDM/EDF where users have a higher number of "connections" but it's completely impossible to do direct connections (at least not in any video/voice calling system).
Maybe you don't need the addresses, but there are other advantages. If we made the move, I suspect we could give you the experience you want and the one I want. I personally do want to host my own services. My phone is configured to send my pictures to Google and my personal NAS. Centralized services mean you have to trust that provider. These days I don't. I intend to leave centralized services so I know my content isn't training AI or the doorbell isn't spying on me or my neighbors. But, no instead we should force everyone to share the same IP addresses and run less efficient routing.
I mean, so many reasons. Not the least of which is that carrier-grade NAT is out. And that alone implies so much cost savings, performance increase, and home user flexibility.
I'm struggling to assume good faith on your question, since it's so strange. I feel like I need to start from scratch explaining the internet, since asking this question reveals a lack of knowledge about everything networking.
I don't have CGNAT; I chose a proper ISP. Opening a hole in my ipv6 firewall or forwarding a port in my ipv4 firewall is effectively the same thing: I define the policy (allow traffic arriving on $address on tcp/1234 to this server on vlan 12) and it goes live.
Away from home, like I am at the moment, I vpn all my traffic back home, to work, or to a mullvad endpoint. Neither the hotel wifi nor tethering off my phone gives me a working ipv6 address (anything other than an fe80::) anyway.
All my workflows work on ipv4 only. Some workflows (especially around the corporate laptop) don't work on ipv6 only - maybe that's a zscaler thing, maybe it's a windows thing.
As such the only choice is ipv4 with ipv6 as a nice to have, or ipv4 only.
Personally I prefer the smaller attack surface of a single network protocol.
Sounds like IPv6 is a good solution for people who choose ISPs with CGNAT. It doesn't matter to me whether I VPN home via my IPv6 endpoint or my IPv4 endpoint; I expose a very minimal set of services.
I guess if I wanted to host more than 4 servers on the same port at home it would be handy, as my ISP will only allow me 4 public IPs without paying for more. I don't host anything other than my WireGuard endpoint and some UDP forwards, which I specifically redirect to where I want them to go (desktop, laptop, server) - another great feature of NAT, but yes, NAT66 can do that too.
But where's the killer feature of IPv6? Is it just avoiding CGNAT on poor ISPs?
I'm not sure what that long story is supposed to convey. Cool story, bro.
> Sounds like ipv6 is a good solution for people who choose ISPs with CGNat.
I mean… this is just "not even wrong".
> Is it just CGNat on poor ISPs?
I already said no to this.
Look, like I said, you appear to be unaware of so much about the Internet - running an ISP, running a service provider, corporate networks, ISP-customer relationships, small businesses, viable BGP policies, cloud economics, etc. - that it's hard to know where to even start. And while HN is great for some things, HN comments are just not suitable for something that is shaped more like a course or internship. This can't even be described as "gaps" in your knowledge.
I'm put off by your confidence without the knowledge, and of course also by your implication that if you have CGNAT then you should have just worked a little harder to not be so poor, paid for a better ISP, or moved to a more expensive place where other ISP options exist. That of course ignores that this doesn't scale to the population at all, and that extra address bits are very relevant to scaling.
I don't directly deal with public peering, I leave that to my colleagues; my only practical BGP knowledge is on private ASes.
Your shitty ISP doesn't give you IPv4 access? That's fine. IPv4 address blocks cost $20 an address and are cheaper today in real terms than in 2016; they've been coming down in nominal terms for years.
IPv6 makes sense at a global scale, but it still makes no sense for many individuals with a good ISP, mainly because of how it was implemented: too much stuff still relies on IPv4. If you have to run IPv4 anyway, why run IPv6?
I have no services I use that are ipv6 only
I have services that are IPv4-only, so I have to run NAT64
I want a stateful firewall because it's not 1999
I want to hand off to multiple consumer ISPs using PBR, not running BGP, so I need to use NAT66 (changing IPs isn't good enough; I want to round-robin based on various rules: send traffic to Dropbox via one ISP, send UDP via another, etc.)
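For what it's worth, that kind of policy routing can be sketched on Linux with fwmarks and per-ISP routing tables. Everything here (interface names, gateway addresses, the Dropbox-ish prefix, table numbers) is purely illustrative:

```
# mark traffic classes in nftables (table/chain assumed to already exist)
nft add rule inet mangle output ip daddr 162.125.0.0/16 meta mark set 1  # Dropbox-bound
nft add rule inet mangle output meta l4proto udp meta mark set 2         # all UDP

# one routing table per consumer ISP
ip route add default via 192.168.1.1 dev isp1 table 101
ip route add default via 192.168.2.1 dev isp2 table 102

# policy rules: pick the table by mark
ip rule add fwmark 1 lookup 101
ip rule add fwmark 2 lookup 102
```

With NAT66/NAT44 on each uplink, the source address gets rewritten to match whichever ISP the packet leaves through, which is exactly why NAT66 is needed in this setup.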
I have software which doesn't work on ipv6 on a client, so I have to run CLAT on the device
But not all my local devices can run CLAT, I thus have to run dual stack to use ipv6 successfully.
Thus, as I'm running IPv4 anyway, and running NAT, there is no benefit over running IPv4 only. IPv6 adds more things to go wrong (NAT64/DNS64) but offers no benefits.
Even without the ipv6 client requirement I still need to run both NAT64 and NAT66. I have an ipv6 only network at home which I put phones on. It works, but there's no benefit other than keeping awareness of ipv6.
Now sure, the reason IPv4 addresses are cheap is that other people are moving to IPv6 (especially mobile) and relying on 464XLAT gateways, with the CLAT in the CPE and the NAT64 at the ISP level. That's great.
But that doesn't change the equation for someone with a choice of ISPs, as they can choose an ISP which provides them with static ipv4 addresses.
I recently changed ISPs and have IPv6 for the first time. I mostly felt the same way, but have learned to get over it. Some things took some getting used to.
An "ip address show" is messy with so many addresses.
Those public IPs are randomized on most devices: a stable address is also created, but it goes mostly unused. The randomly generated addresses aren't useful for inbound connections for long. I don't think you could brute-force scan that kind of address space, and the address used to connect to the Internet will be different in a few hours.
Having a public address doesn't worry me. At home I have a firewall at the edge. It is set to block everything incoming. Hosts have firewalls too. They also block everything. Back in the day, my PC got a real public IP too.
NAT really is nice for keeping internal/external separate mentally.
I'm lucky enough my current ISP does not rotate my IPv6 range. This, ironically, means I no longer need dynamic DNS. My IPv4 address changes daily.
A residential account usually gets a /56, what are you talking about? Nowhere near a /48! (I'm just being funny here...)
There are reasons to need direct connectivity that aren't hosting a server. Voice and video calls no longer need TURN/STUN. A bunch of workarounds required for online gaming become unnecessary. Be creative.
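To illustrate the kind of machinery that direct connectivity makes unnecessary: STUN exists purely so a host behind NAT can learn its public address and port. A minimal sketch of constructing a STUN Binding Request per RFC 5389 in Python (building the message only; picking a server and actually sending it is left out):

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389
BINDING_REQUEST = 0x0001

def stun_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request with no attributes."""
    txn_id = os.urandom(12)  # 96-bit random transaction ID
    # header: message type, message length (0), magic cookie
    return struct.pack("!HHI", BINDING_REQUEST, 0, STUN_MAGIC_COOKIE) + txn_id

msg = stun_binding_request()
print(len(msg), msg[:2].hex())  # 20 0001
```

A server replies with an XOR-MAPPED-ADDRESS attribute containing the public address/port the request arrived from; with end-to-end addressing, the host already knows that.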
I'm not confused about the NAT / firewall distinction, but it might be nice if my ISP didn't have a constant, precise idea of exactly how many connected devices I owned. Can that be _inferred_ with IPv4? Yes, but it's fuzzier.
The ISP still doesn't know how many devices are connected, because a lot of those devices are using randomized and rotating IPs for their outbound connections.
Okay, but why does this matter? They're your ISP: they already have your address and credit card number, and in the common case a technician has been in your home and supplied the router.
This vague, theoretical problem is being used to defend a status quo that has led to near-complete centralization of Internet traffic, because NAT makes P2P connectivity so difficult.
On Linux, I think the defaults are left up to the distros so there is a chance of a privacy footgun there. Hopefully most distros follow the example set by Apple and Microsoft (a sentence I never thought I would write...)
All desktop/mobile OSes today use "stable privacy addresses" for inbound traffic (only relevant if you are hosting something long-term) and "temporary addresses" for outbound traffic and P2P (video/voice calls, multiplayer games...) that change quickly (old ones stay assigned so long-lived connections don't break, but aren't used for new ones).
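On Linux, the knob for this is a set of per-interface sysctls (the kernel's historical misspelling "prefered" is intentional below). A sketch of a privacy-friendly setup, with lifetimes matching the RFC 4941 defaults; the file name is hypothetical:

```
# /etc/sysctl.d/40-ipv6-privacy.conf
# use_tempaddr = 2: generate temporary addresses AND prefer them for outbound
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
# prefer a fresh temporary address daily; keep old ones valid for a week
net.ipv6.conf.default.temp_prefered_lft = 86400
net.ipv6.conf.default.temp_valid_lft = 604800
```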
NAT only matters insofar as you don't technically need a firewall to block incoming traffic: if a packet fails the NAT lookup, you know to drop it.
But from a security standpoint you can do the same connection tracking without NAT, for the same result. That's just a firewall at that point.
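That "just a firewall" is only a few lines of stateful ruleset. A minimal nftables sketch for a router (interface names are illustrative), giving NAT-equivalent inbound behavior with no address translation:

```
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept   # replies to outbound flows
        iifname "lan0" accept                 # inside hosts may initiate anything
        # unsolicited inbound traffic hits the drop policy,
        # same outcome as a failed NAT lookup
    }
}
```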
NAT is a horrible, HORRIBLE hack that makes everything in networking much more complicated. IP networking is very elegant when everyone is using globally unique addresses and an ugly mess when carrier-grade NAT is used.
NAT demonstrably does not work fine. We have piles of ugly hacks (STUN, etc) that exist only because NAT does. If you really want to keep NAT then nothing stops you from running it on IPv6, but the rest of us shouldn't suffer because of your network design goals.
Only because most people don't know how NAT is hurting them, and because corporations have spent incredible resources on hacking around the problem for when peer to peer is required (essentially only for VoIP latency optimization and gaming).
NAT hurts peer to peer applications much more than cloud services, which are client-server by nature and as such indeed don't care that only outgoing connections are possible.
Even in a NAT-less world, the common advice is to use a firewall rule that disallows incoming connections by default. (And I'd certainly be worried if typical home routers were configured otherwise.) So either way, you'd need the average person to mess with their router configuration, if they want to allow incoming P2P connections without hole-punching tricks. At best, the lack of NAT might save you an address-discovery step.
> the common advice is to use a firewall rule that disallows incoming connections by default.
That's good advice! But firewall hole punching is also significantly easier (and guaranteed to work) compared to NAT hole punching. Address discovery is part of it, but there are various ways to implement a NAT (some inherently un-hole-punch-able) and only really one sane way to do a firewall.
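The firewall case really is just "both sides transmit first": each peer's outbound packet creates the state that lets the other's packet in. A toy sketch of that simultaneous-send step in Python (both "peers" live on localhost here; in reality each would learn the other's address and port from a rendezvous server):

```python
import socket

def make_socket() -> socket.socket:
    # Bind to an OS-chosen ephemeral port; in the real case this is the
    # port a rendezvous server would report to the other peer.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    s.settimeout(2)
    return s

a, b = make_socket(), make_socket()
addr_a, addr_b = a.getsockname(), b.getsockname()

# Both sides send first: the outgoing packet opens the state entry in a
# stateful firewall, so the peer's packet is then accepted as a "reply".
a.sendto(b"punch-from-a", addr_b)
b.sendto(b"punch-from-b", addr_a)

msg_b, _ = b.recvfrom(1024)  # what b received (a's packet)
msg_a, _ = a.recvfrom(1024)  # what a received (b's packet)
print(msg_a, msg_b)
a.close()
b.close()
```

Against a real NAT the same dance only works if the mapping is endpoint-independent, which is the point above: a plain stateful firewall always behaves this predictably, while NATs vary.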
> you'd need the average person to mess with their router configuration,
At least with IPv6, that firewall is likely to exist in the CPE, which sophisticated users can then ideally open ports in (or which can implement UPnP/NAT-PMP or whatever the current name for the "open this port now!!" protocol of the decade is); for CG-NAT, it's often outright impossible.
UPnP has covered a huge percentage of use cases that actual users care about, and those who it doesn't cover are often able to do their own customization.
I used to have it enabled long ago. It's insecure: random cheap devices will open up ports with UPnP without the user noticing. It doesn't work that well either, because hosts will conflict on ports. P2P applications have better ways to establish connectivity.