No idea if modules themselves are a failure or not, but if C++ wants to keep fighting for developer mindshare, it has to make something resembling modules work and figure out package management.
Yes, you have CPM, vcpkg and Conan, but those are not really standard and there is friction involved in getting them to work.
I emphatically agree. C++ needs a standard build system that doesn’t suck ass. Most people would agree it needs a package manager although I think that is actually debatable.
Neither of those things require modules as currently defined.
That is not even half realistic. Are you going to port all the code out there (autotools, CMake, SCons, Meson, Bazel, waf...) to a "true" build system?
The idea alone is crazy. What Conan does is much more sensible: provide a layer independent of the build system (plus a way to consume packages and, if you want, some predefined "profiles" such as debug, etc.), leave it half-open for extensions, and let existing tools talk through that communication protocol.
That is much more realistic, and you have a much better chance of ending up with a full ecosystem to consume.
Also, no one needs to port a full build system or move away from perfectly working build systems.
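For illustration, a minimal Conan 2.x sketch of that layer (the package name and version are just placeholders), in a conanfile.txt:

    [requires]
    fmt/10.2.1

    [generators]
    CMakeDeps
    CMakeToolchain

A predefined configuration such as debug is then just a setting or profile picked at install time (conan install . -s build_type=Debug --build=missing), and the existing build system only consumes the files Conan generates; nothing has to be ported to a new build system.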
Much like contracts--yes, C++ needs something modules-like, but the actual design as standardized is not usable.
Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.
It has the developer mindshare of game engines, games and VFX industry standards, CUDA, SYCL, ROCm, HIP, the Khronos APIs, game console SDKs, HFT, HPC, research labs like CERN, Fermilab, ...
Ah, and the two major compiler frameworks that all those C++ wannabe replacements use as their backend.
In my experience it seems to be an issue caused by optimizations in legacy code that relied on dlopen to implement a plugin system, or to help with startup, since you could lazy-load said plugins on demand and start faster.
If you forgo the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?
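For context, the dlopen-based plugin loading being described looks roughly like this (the plugin filename and entry-point symbol are made up for illustration); relying on this pattern is what keeps a binary from going fully static:

    // host.cpp -- lazily load a plugin at runtime instead of linking it in
    #include <dlfcn.h>
    #include <cstdio>

    using plugin_init_fn = int (*)();   // hypothetical plugin entry point

    int main() {
        // RTLD_LAZY: resolve symbols on first use, which is the startup win
        void *handle = dlopen("./libmyplugin.so", RTLD_LAZY);
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        auto init = reinterpret_cast<plugin_init_fn>(dlsym(handle, "plugin_init"));
        if (init) init();

        dlclose(handle);
        return 0;
    }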
It becomes tricky when you need to use system DLLs like X11 or GL/Vulkan (so you need the 'hacks' described in the article to work around that). The problem is that those system DLLs then bring a dynamically linked glibc into the process, so suddenly you have two C stdlibs running side by side, and the question is whether this works just fine or causes subtle breakage under the hood (this is e.g. the reason why musl doesn't support dlopen from statically linked binaries).
E.g. in my experience: command line tools are fine to link statically with MUSL, but as soon as you need a window and 3D rendering it's not worth the hassle.
X11 actually has a stable wire protocol so you don't strictly need any dynamic libraries for that - it's just that no one bothers because if you want X11 then you most likely also want GPU access where you do need to load hardware-specific libraries.
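To make the GPU part concrete (a sketch, not necessarily the article's specific workaround; it assumes the Vulkan headers and a system loader are present): the usual approach is to pull in the system loader at runtime and bootstrap everything through vkGetInstanceProcAddr, so nothing Vulkan-related is linked at build time:

    // Load the system Vulkan loader at runtime and bootstrap from it.
    #include <dlfcn.h>
    #include <vulkan/vulkan.h>
    #include <cstdio>

    int main() {
        void *vk = dlopen("libvulkan.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!vk) { std::fprintf(stderr, "no Vulkan loader: %s\n", dlerror()); return 1; }

        auto getProc = reinterpret_cast<PFN_vkGetInstanceProcAddr>(
            dlsym(vk, "vkGetInstanceProcAddr"));
        // Every other entry point is then fetched through getProc(...),
        // so the binary itself never links against libvulkan at build time.
        auto createInstance = reinterpret_cast<PFN_vkCreateInstance>(
            getProc(nullptr, "vkCreateInstance"));
        std::printf("vkCreateInstance %s\n", createInstance ? "found" : "missing");

        dlclose(vk);
        return 0;
    }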
TL;DR: on the TLS parts, quite a lot, up to 2x slower on certain paths. Amusingly, OpenSSL 1.1 was much faster.
libcrypto tends to be quite solid, though over the years other libraries have collected weird SIMD optimizations that let them beat OpenSSL by healthy margins.
The whole 3.0 rewrite is a massive regression in every way possible: they deprecated the old engines and replaced them with providers, which are not that much easier to work with as a developer (I hope providers are at least easier for the maintainers to handle), and the library is a lot more runtime-dynamic (for some reason). This has resulted in mutex explosion and massive performance regressions in every facet. HAProxy has an amusing article on the topic.
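For reference (a sketch, not taken from the comment or the HAProxy article), this is roughly what the provider model looks like in OpenSSL 3.x; explicit provider loading plus runtime algorithm fetching is where much of the extra dynamism lives:

    // OpenSSL 3.x: engines are deprecated; algorithms come from "providers".
    #include <openssl/provider.h>
    #include <openssl/evp.h>
    #include <cstdio>

    int main() {
        // Explicitly load providers into the default library context.
        OSSL_PROVIDER *deflt  = OSSL_PROVIDER_load(nullptr, "default");
        OSSL_PROVIDER *legacy = OSSL_PROVIDER_load(nullptr, "legacy"); // old ciphers, may be absent

        // Algorithms are now "fetched" at runtime from whatever providers
        // are loaded, instead of being resolved statically.
        EVP_MD *sha256 = EVP_MD_fetch(nullptr, "SHA2-256", nullptr);
        std::printf("SHA2-256 %s\n", sha256 ? "available" : "missing");

        EVP_MD_free(sha256);
        if (legacy) OSSL_PROVIDER_unload(legacy);
        OSSL_PROVIDER_unload(deflt);
        return 0;
    }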
DeepSeek is GOATed for me because of this. If I ask it whether "X" is a dumb idea, it is very polite in telling me that X is dumb if the AI knows of a better way to do the task.
I'm partial to the tone of Kimi K2: terse, blunt, sometimes even dismissive. It does not require "advanced techniques" to avoid the psychosis-inducing tone of Claude/ChatGPT.
Me neither. Docker is the platform-agnostic way to deploy stuff, and if I maintained software it would be ideal: I can ship my environment to your environment. Reproducing that yourself would take ages, or alternatively I would also need to maintain a lot of complex scripts long-term that may break in weird ways.
This is a familiarity problem. I've never used NixOS, and all your posts telling me how simple it is sound like super daunting challenges to me versus just updating a Dockerfile or a one-liner in Compose that I'm already familiar with. I suspect it's the inverse for you.
In the real world, unless you are writing a very specialized system intended to run only on Linux 6.0 and newer, it just is not realistic, and you will need some sort of abstraction layer that at the very least also supports poll, to be portable across all POSIX and POSIX-like systems. Then if you want your thing to also run on Windows, IOCP rides in too...
I used 6.0 because 5.8-5.9 is roughly when io_uring became interesting to use for most use cases with zero copies, prepared buffers and other goodies, and 6.0 is roughly when people finally started being able to craft benchmarks where io_uring implementations beat epoll.
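As a rough sketch of the Linux-only path such an abstraction layer would have to wrap alongside poll/epoll/kqueue/IOCP, here is a minimal liburing read (assumes liburing is installed, link with -luring; error handling omitted):

    // Minimal io_uring read with liburing (Linux-only).
    #include <liburing.h>
    #include <fcntl.h>
    #include <cstdio>

    int main() {
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);                  // small SQ/CQ rings

        int fd = open("/etc/hostname", O_RDONLY);
        char buf[256];

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  // queue one async read
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                    // block until it completes
        std::printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }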