I have been doing "DNS prefetching" since before the term existed.
I do non-recursive lookups and store the data I need in custom zone files or the HOSTS file. I get faster lookups than any "solution" from any third party.
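The idea above can be sketched as a hosts-file fragment. The hostnames and addresses here are illustrative placeholders (documentation-range IPs), standing in for answers a user might have gathered from earlier non-recursive lookups:

```
# /etc/hosts -- locally "prefetched" answers; no resolver round trip needed.
# Addresses below are examples (203.0.113.0/24 is reserved for documentation).
203.0.113.10   news.example.com
203.0.113.11   cdn.example.com
```

Entries here are consulted before any DNS query is sent (per the `hosts:` order in nsswitch.conf on most Unix-like systems), which is what makes this the oldest form of prefetching.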
It is sad how much control is taken from the user, always with the stated goal of "making the web faster".
In many cases they are making it slower. The irony of this blog post by a CDN about TLD latency is that some CDNs actually cause DNS-related delay by requiring excessive numbers of queries to resolve names, e.g., Akamai.
Users have the option to choose for themselves the IP address they want to use for a given resource. If they find that the connection is slow, then they can switch to another one. Same idea as choosing a mirror when downloading open source software. Some users might want this selection done for them automatically; others might not.
> And then there are browsers that have an internal stub resolver.
You mean internal caching resolver? Every application has an internal stub resolver, even if it's just using getaddrinfo, which builds and sends DNS packets to the recursive, caching resolvers specified by /etc/resolv.conf or equivalent system setting. But getaddrinfo is blocking, and various non-portable extensions (e.g. glibc getaddrinfo_a, OpenBSD getaddrinfo_async) are integration headaches, so it's common for many applications to include their own async stub resolver. What sucks is if an internal stub resolver doesn't obey the system settings.
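The blocking behavior described above is easy to demonstrate. This is a minimal sketch using Python's `socket` module, which wraps the same C `getaddrinfo` call: the lookup stalls the caller, and the common workaround is exactly the kind of thread-offloaded async wrapper applications end up rolling themselves:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve_blocking(host, port=None):
    # socket.getaddrinfo wraps the C getaddrinfo: the stub resolver builds a
    # query, sends it to the recursive resolver from /etc/resolv.conf (or
    # equivalent), and BLOCKS the calling thread until an answer arrives.
    return [ai[4][0] for ai in socket.getaddrinfo(host, port)]

# A common workaround: push the blocking call onto a thread pool so an
# application's event loop is not stalled waiting on DNS.
_pool = ThreadPoolExecutor(max_workers=4)

def resolve_async(host):
    # Returns a Future immediately; the lookup runs on a worker thread.
    return _pool.submit(resolve_blocking, host)

future = resolve_async("localhost")
print(future.result())  # addresses for localhost, e.g. 127.0.0.1 and/or ::1
```

This is the portable fallback when extensions like glibc's `getaddrinfo_a` aren't available, and it inherits the system resolver settings for free, which the home-grown stub resolvers criticized above often do not.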
As a user, I prefer gethostbyname to getaddrinfo. The text-only browser I use actually has --without-getaddrinfo as a compile-time option, so I know I am not alone in this preference. The best "stub resolvers" are programs like dnsq, dq, drill, etc. They do not do any "resolution"; they just send queries according to the user's specification.
As a user, I expect that the application interfacing with the resolver routines provided by the OS will respect the configuration settings I make in resolv.conf. Having to audit every application for how it handles DNS resolution is a headache.
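For reference, these are the kinds of settings at stake. A typical resolv.conf (the nameserver address here is a placeholder from the documentation range) that a well-behaved stub resolver should honor:

```
# /etc/resolv.conf -- settings every stub resolver is expected to respect
nameserver 192.0.2.53      # example recursive resolver (documentation range)
search example.internal    # suffix appended to unqualified names
options timeout:2 attempts:2
```

An application that ships its own resolver and ignores this file silently bypasses the user's choice of recursive server, search domains, and timeout policy.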
> As a user, I prefer gethostbyname to getaddrinfo
On many systems (e.g. OpenBSD) they're implemented with the exact same code. glibc is something of an outlier given its insanely complex implementations interacting with Red Hat's backward compatibility promises. Many of the code paths are the same[1], but getaddrinfo permits stuff like parallel A and AAAA lookups, and minor tweaks in behavior (e.g. timing, record ordering) often break somebody's [broken] application, so I'm not surprised some people have stuck to gethostbyname, which effectively disables or short-circuits a lot of the optimization and feature code.
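The "same code paths" claim is observable from userspace. A small sketch via Python's `socket` wrappers around the two C APIs: when you pin getaddrinfo to IPv4 (to match gethostbyname's IPv4-only view), both interfaces return the same answer on a typical system:

```python
import socket

host = "localhost"

# Legacy API: IPv4 only, single-answer view of the world.
legacy = socket.gethostbyname(host)

# Modern API: pass AF_INET so the comparison is apples-to-apples;
# without the family hint it may also return AAAA (IPv6) results.
modern = [ai[4][0] for ai in socket.getaddrinfo(host, None, socket.AF_INET)]

# Both calls go through the same underlying resolution machinery,
# so the legacy answer shows up in the modern result set.
print(legacy, modern)
```

The behavioral differences the comment mentions (parallel A/AAAA queries, answer ordering) only appear once you drop the `AF_INET` hint, which is exactly the feature code that sticking to gethostbyname short-circuits.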
But, yeah, browsers in particular do all sorts of crazy things, even before DoH, that were problematic.
[1] As a temporary hack to quickly address a glibc getaddrinfo CVE without having to upgrade the ancient version of glibc in the firmware (on tens of thousands of deployed systems), I [shamefully] wrote a simple getaddrinfo stub that used glibc's gethostbyname interfaces, and dynamically loaded it as a shared library system-wide. It worked because, while most of the same code paths were called, the buffer overflow was only reached when using getaddrinfo directly.

Hopefully that company has since upgraded the version of glibc in their firmware. But at the time it made sense: the hack was proposed, written, tested, and queued for deployment before the people responsible for maintaining glibc could even schedule a meeting to discuss the process of patching and testing, which wasn't normally included in firmware upgrades; nobody could remember the last time they had pushed out rebuilt binaries. glibc was so old because everybody was focused on switching Linux distributions, which of course took years to accomplish rather than months.
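The shape of that shim can be sketched without reproducing it. This is a hypothetical Python analogue (the function name and tuple layout are illustrative, not the original code): build getaddrinfo-shaped results using only the legacy gethostbyname machinery, accepting its limitations (IPv4 only, no AAAA records):

```python
import socket

def getaddrinfo_via_gethostbyname(host, port):
    # Illustrative analogue of the shim described above: answer
    # getaddrinfo-style queries through the legacy gethostbyname path,
    # thereby never entering the code that contained the overflow.
    # gethostbyname_ex returns (canonical_name, aliases, ipv4_addresses).
    _, _, addrs = socket.gethostbyname_ex(host)
    # Emit (family, type, proto, canonname, sockaddr) tuples, the same
    # shape socket.getaddrinfo produces, but IPv4/TCP only.
    return [(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP,
             "", (addr, port)) for addr in addrs]

print(getaddrinfo_via_gethostbyname("localhost", 80))
```

The real hack was a C shared library interposed system-wide (e.g. via the dynamic loader), but the trade-off is the same as here: callers lose getaddrinfo-only features in exchange for routing around the vulnerable path without rebuilding anything.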
And then there are browsers that have an internal stub resolver. Horrible
https://www.reddit.com/r/chrome/comments/bgh8th/chrome_73_di...
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-...
https://www.chromium.org/developers/design-documents/dns-pre...
https://www.ghacks.net/2019/04/23/missing-chromes-use-a-pred...
https://www.ghacks.net/2013/04/27/firefox-prefetching-what-y...
https://www.ghacks.net/2010/04/16/google-chrome-dns-fetching...