Wow, this is claiming 1.2x to 2x faster than gold at 10% of the code size. That is a remarkable achievement. Gold was already faster than GNU ld.
I am really surprised that there was this much room for optimization. Ian Lance Taylor, who wrote Gold, is a really smart guy, and speed was one of its primary goals.
Linkers are mind-bogglingly slow. I don't understand why they are so slow.
lld is still slow, it is just less slow than the other linkers.
This is not to disparage anyone working on linkers or to say they are not smart. I think they just don't tend to be performance-oriented programmers, and culturally some ingrained acceptance has developed of how much time it is okay for a linker to take.
Linkers have to copy a large amount of data from object files to an output file, and that is inevitably slow. But LLD is not that bad if you pass -O0 (which disables string merging, a computationally expensive step). For example, LLD takes 6.7 seconds to link clang with debug info on my machine.
$ time ld.lld <omit> -O0 -o bin/clang-4.0
real 0m6.689s
Copying the output file to another file takes 2.6 seconds.
$ time cp bin/clang-4.0 foo
real 0m2.657s
So, LLD is only 2.5x slower than cp in this case. That's not too bad, given that the linker has many more things to do than the cp command.
Is your test mostly waiting on a mechanical hard drive? If so, then "2.5x slower than cp" could mean "a very large amount slower than cp" once you remove that overhead.
While I have not done your specific test, I know that for the executable sizes I deal with in everyday work (around 10 MB), the amount of time I wait for linking is woefully disproportionate.
For comparison, an optimized release build of Clang 5.0 trunk (from a few days ago, built with Clang 5.0 trunk from like a week ago) with assertions enabled is 87 MB -- with the LLVM libraries linked in, but dynamically linked to system libraries, on my Fedora 25 machine.
A debug build of clang is normally hundreds of megabytes (~600 MB IIRC, and normal linkers go bonkers dealing with it), so if LLD is actually only 2.5x slower than 'cp' at -O0, that's quite good I think.
The next question is how much memory LLD uses vs the competition in this test...
It'd be interesting to see the numbers for a ramdisk cp/link as well. With SSDs being so much faster than mechanical disks, people sometimes forget that RAM is faster still.
Linking is also very easy to do incrementally, but for some reason incremental linking is not popular in the Unix world. GNU ld and Apple ld64 can't do it. GNU gold can do it, but only if you pass a special linker flag, which typical build systems don't. LLD can't do it, despite being the spiffy new thing.
So you end up with big projects where most of the time taken by incremental debug builds is spent linking - relinking the same object files to each other over and over and over. Awful. I don't use Windows, but I hear Visual Studio does the right thing and links debug builds incrementally by default. Wish the rest of the world would catch on.
Every incremental linking technique I'm aware of involves overwriting the output file and does not guarantee that identical input files and command line lead to identical (bit-exact) output files.
Incremental linking is not so easy under that constraint, since the output depends on the previous output file (which may not even be there).
(and considering the previous output file to be an "input file" follows the letter of the requirement but not the spirit; the idea is that the program invocation is a "pure function" of the inputs, which enables caching and eliminates a source of unpredictable behavior)
We have had to reject certain parallelization strategies in LLD as well because even though the result would always be a semantically identical executable, it would not be bit-identical.
See e.g. the discussions surrounding parallel string merging:
https://reviews.llvm.org/D27146 <-- fastest technique, but non-deterministic output
https://reviews.llvm.org/D27152 <-- slower but deterministic technique
https://reviews.llvm.org/D27155 <-- really cool technique that relies on a linearly probed hash table (and sorting just runs of full buckets instead of the entire array) to guarantee deterministic output despite concurrent hash table insertion.
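For anyone curious how that last approach can be deterministic, here's a toy sketch (my own illustration, not the code from the review): with linear probing, which buckets end up occupied doesn't depend on insertion order, only which key landed where within a run does, so sorting each maximal run of full buckets canonicalizes the table no matter how the threads interleaved.

    // Toy illustration (not LLD's code): canonicalize a linearly probed table
    // after concurrent insertion by sorting each maximal run of full buckets.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      // Pretend this is the table after parallel insertion; "" marks an empty bucket.
      std::vector<std::string> table = {"", "baz", "foo", "bar", "", "", "quux", ""};

      for (size_t i = 0; i < table.size();) {
        if (table[i].empty()) { ++i; continue; }
        size_t runEnd = i;
        while (runEnd < table.size() && !table[runEnd].empty())
          ++runEnd;
        std::sort(table.begin() + i, table.begin() + runEnd); // canonicalize this run
        i = runEnd;
      }

      for (const auto &s : table)
        if (!s.empty())
          std::cout << s << '\n'; // same output for any thread interleaving
    }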
As I said in a different reply, I think nondeterminism is an acceptable sacrifice for development builds, which is where incremental linking would be most useful. That said, it's definitely possible to get some speedup from incrementality while keeping the output deterministic; you'd have to move symbols around, which of course requires relocating everything that points to them, but (with the help of a cache file that stores where the relocations ended up in the output binary) this could probably be performed significantly more quickly than re-reading all the .o files and doing name lookups. But admittedly this would significantly reduce the benefit.
I agree. It's definitely possible. It's just that the actual benefit is far from reducing link time to "O(changes in the input)" and it would introduce significant complexity into the linker (and keeping LLD simple and easy to follow is a high priority). It's definitely an open research area.
> That said, it's definitely possible to get some speedup from incrementality while keeping the output deterministic; you'd have to move symbols around, which of course requires relocating everything that points to them, but (with the help of a cache file that stores where the relocations ended up in the output binary) this could probably be performed significantly more quickly than re-reading all the .o files and doing name lookups. But admittedly this would significantly reduce the benefit.
Yeah. It's not clear if that would be better in practice than a conservative padding scheme + a patching-based approach.
"move symbols around, which of course requires relocating everything that points to them" sounds a lot like what the linker already spends most of its time doing (in its fastest mode).
In its fastest mode, LLD actually spends most of its time memcpy'ing into the output file and applying relocations. This happens after symbol resolution and does not touch the input .o files except to read the data being copied into the output file. The information needed for applying the relocations is read with a bare minimum of pointer chasing (only 2 serially dependent cache misses last I looked) and does not do any hash table lookup into the symbol table nor does it look at any symbol name string.
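For readers who haven't looked inside a linker, here is a rough sketch of what that hot path boils down to for one x86-64 relocation type (the struct and names are mine, not LLD's):

    // Rough simplification of the "memcpy + relocate" step described above:
    // copy a section's bytes into the mapped output, then patch each
    // relocation in place. Shown for R_X86_64_PC32, where value = S + A - P.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct Reloc {
      uint64_t offset;    // where in the section to patch
      int64_t addend;     // A
      uint64_t symbolVA;  // S: the symbol's already-resolved virtual address
    };

    void writeSection(uint8_t *out, uint64_t sectionVA, const uint8_t *data,
                      size_t size, const std::vector<Reloc> &relocs) {
      std::memcpy(out, data, size);        // bulk copy into the output file
      for (const Reloc &r : relocs) {
        uint64_t p = sectionVA + r.offset; // P: address of the patched field
        int32_t value = static_cast<int32_t>(r.symbolVA + r.addend - p);
        std::memcpy(out + r.offset, &value, sizeof(value));
      }
    }

Note that nothing here touches symbol names or hash tables; by this point every symbol address is just a number.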
> It's just that the actual benefit is far from reducing link time to "O(changes in the input)"
Not sure exactly what you mean by this. If you give up determinism, it can be O(changes) - except for time spent statting the input files which, at least in theory, should be possible to avoid by getting the info from the build system somehow. I can understand if LLD doesn't want to trade off determinism, but I personally think it should :)
One practical problem I can think of is ensuring that the binary isn't still running when the linker tries to overwrite bits of it. Windows denies file writes in that case anyway… On Unix that's traditionally the job of ETXTBSY, which I think Linux supports, but xnu doesn't. I guess it should be possible to fake it with APFS snapshots.
> In its fastest mode, LLD actually spends most of its time memcpy'ing into the output file and applying relocations. This happens after symbol resolution and does not touch the input .o files except to read the data being copied into the output file.
Interesting. What is this mode? How does it work if it's not incremental and it doesn't read the symbols at all?
> Not sure exactly what you mean by this. If you give up determinism, it can be O(changes) - except for time spent statting the input files which, at least in theory, should be possible to avoid by getting the info from the build system somehow. I can understand if LLD doesn't want to trade off determinism, but I personally think it should :)
Not quite. For example, a change in the symbols in a single object file can cause different archive members to be fetched for archives later on the command line. A link can be constructed where that would be O(all inputs) changes due to a change in a single file.
Even though a practical link won't hit that pathological case, you still have to do the appropriate checking to ensure that it doesn't happen, which is an annoying transitive-closure/reachability type problem.
(If you need a refresher on archive semantics, see the description here: http://llvm.org/devmtg/2016-03/Presentations/EuroLLVM%202016...
Even with the ELF LLD using the Windows link.exe archive semantics, which are in practice compatible with traditional Unix archive semantics, the problem still remains.)
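To make the transitive nature concrete, here is a toy model of that member-selection loop (hypothetical types, nothing like LLD's actual implementation): a member is fetched only when it defines a currently-undefined symbol, and each fetch can introduce new undefined symbols, so a one-symbol change in an early object can cascade through later archives.

    // Toy model of archive member selection: keep fetching members that
    // define a needed symbol until no more progress is made.
    #include <set>
    #include <string>
    #include <vector>

    struct Member {
      std::vector<std::string> defines;  // symbols this member defines
      std::vector<std::string> needs;    // undefined symbols it references
      bool fetched = false;
    };

    void resolveArchive(std::set<std::string> &undefined,
                        std::vector<Member> &archive) {
      std::set<std::string> defined;  // symbols defined by fetched members
      bool progress = true;
      while (progress) {
        progress = false;
        for (Member &m : archive) {
          if (m.fetched) continue;
          bool wanted = false;
          for (const std::string &d : m.defines)
            if (undefined.count(d)) { wanted = true; break; }
          if (!wanted) continue;
          m.fetched = true;
          progress = true;
          for (const std::string &d : m.defines) {
            defined.insert(d);
            undefined.erase(d);
          }
          // Fetching a member can add new undefined symbols, which may pull
          // in further members on the next pass.
          for (const std::string &n : m.needs)
            if (!defined.count(n)) undefined.insert(n);
        }
      }
    }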
In practice, with the current archive semantics, any change to symbol resolution would likely be best served by bailing out from an incremental link in order to ensure correct output.
Note: some common things that one does during development actually do change the symbol table. E.g. printf debugging is going to add calls to printf where there were none. (and I think "better printf debugging" is one of the main use cases for faster link times). Or if you use C++ streams, then while printf-debugging you may have had `output_stream << "foo: " << foo << "\n"` where `foo` is a string, but then if you change to also output `bar` which is an int, you're still changing the symbol table of the object file (due to different overloads).
> Interesting. What is this mode? How does it work if it's not incremental and it doesn't read the symbols at all?
Compared to the default, mostly it just skips string merging, which is what the linker spends most of its time on otherwise for typical debug links (debug info contains tons of identical strings; e.g. file names of common headers). [1]
To clarify, there are two separate things:
- the fastest mode, which is mostly about skipping string merging. It's just like the default linking mode; it just skips some optional work that is expensive.
- the part of the profile where the linker spends most of its time in its fastest mode (memcpy + relocate); for example, I've measured this at 60% of the profile. This happens after symbol resolution and some preprocessing of the relocations.
Sorry for any confusion.
[1] The linker has "-O<n>" flags (totally different from the "-O<n>" family of flags passed to the compiler). Basically, higher -O numbers (from -O0 to -O3, just like the compiler, confusingly) cause the linker to do more "fancy stuff" like string deduplication, string tail merging, and identical code folding. Mostly these things just reduce binary size somewhat, at a fairly significant link-time cost vs "just spit out a working binary".
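As a rough sketch of what that deduplication pass is doing (my own illustration, not LLD's implementation), every string from the mergeable input sections is hashed and looked up so that duplicates share one copy in the output; that per-string hashing and lookup is exactly the work -O0 skips:

    // Toy string-section merging: deduplicate strings from all inputs into one
    // output string table, remembering each unique string's output offset so
    // that references can be redirected to it.
    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct MergedStrings {
      std::vector<char> outputSection;  // concatenated unique strings
      std::unordered_map<std::string, uint64_t> offsets;

      // Returns the offset of `s` in the merged output, adding it if new.
      uint64_t add(const std::string &s) {
        auto it = offsets.find(s);
        if (it != offsets.end())
          return it->second;            // duplicate: reuse the existing copy
        uint64_t off = outputSection.size();
        outputSection.insert(outputSection.end(), s.begin(), s.end());
        outputSection.push_back('\0');
        offsets.emplace(s, off);
        return off;
      }
    };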
ld64 links clang (a multi-million-line project) in roughly 2s on my laptop. Do we need incremental linking?
MSVC incremental link is really not a model I would take as an example: the final binary is not the same as the one you get from a clean build, which is not a world I would want to live in.
First of all, that doesn't include linking debug info (dsymutil), does it? That's usually bigger than the executable itself. You can get away without linking it, but that just means the debugger has to read a ton of object files at startup. dsymutil isn't part of the linker, but AFAIK there's no reason it couldn't/shouldn't be.
Anyway, an incremental link should take a small fraction of a second and be O(1) all the way up to something like chromium. Well, there's the need to stat the input files to check for changes, but that's something the build system also has to do, so ideally there should be some sort of coordination.
It's true that the output of an incremental link will generally be nondeterministic, unless you add a slow postprocessing step. After all, the whole point is to take advantage of the fact that most of the desired content is already in the output binary. Ideally you never have to touch that content, even just to do a disk copy; you should be able to just patch in the new bytes in some free space in the binary, and mark the old region as free. But of course the ordering of symbols in the binary then depends on what was there before.
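To make the "patch into free space" idea concrete, here is a tiny sketch of the bookkeeping I have in mind (purely illustrative; no existing linker that I know of works this way): the output file keeps a free list, a recompiled function is written into any hole big enough, and its old bytes become a new hole.

    // Illustrative free-space bookkeeping for the patching idea above.
    #include <cstdint>
    #include <optional>
    #include <vector>

    struct Hole { uint64_t offset; uint64_t size; };

    struct FreeSpace {
      std::vector<Hole> holes;

      // First-fit allocation of `size` bytes; returns a file offset, if any fits.
      std::optional<uint64_t> allocate(uint64_t size) {
        for (Hole &h : holes) {
          if (h.size >= size) {
            uint64_t off = h.offset;
            h.offset += size;
            h.size -= size;
            return off;
          }
        }
        return std::nullopt;  // nothing fits: append, or fall back to a full link
      }

      // The old copy of a replaced function becomes reusable free space.
      void release(uint64_t offset, uint64_t size) {
        holes.push_back({offset, size});
        // A real implementation would coalesce adjacent holes.
      }
    };

Of course, as noted, the resulting symbol layout then depends on what was in the file before.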
I don't know why that's particularly problematic; incremental builds are mainly useful during development, not for making releases, which is where reproducible builds are desirable.
> I don't know why that's particularly problematic; incremental builds are mainly useful during development, not for making releases, which is where reproducible builds are desirable.
I don't know about you, but not having to chase bugs that only show up in an incremental build (or, symmetrically, not having bugs hidden by the incremental build that would show up in a clean build) is what makes "reproducible builds desirable" in my day-to-day development...
I would design any incremental system to provide this guarantee from the beginning.
I would expect it to be extremely rare for the order of symbols within a binary to hide or expose bugs in the application. It's not like compiler optimizations where semi-random inlining decisions can allow for further optimizations, with a cascading effect on whether undefined behavior gets noticed or what kind of code gets generated, etc. Linkers are much simpler and lower level than that, in the absence of LTO.
Anyway, many people already develop without optimizations and release with them, which is far more likely to result in heisenbugs even if it's technically a deterministic process. For that matter, I'm only proposing to use incremental linking in debug builds, so most of the time you'd only end up with nondeterminism if you were already going to get an output binary substantially different from a release-mode one. The only exception is if you have optimizations enabled in debug builds.
I am pretty familiar with object file formats, and I don't get how this "atom" stuff works.
The site says that "atoms" are an improvement over simple section interlacing. But I don't get how you are going to make this leap without changing the object file format. Linkers work on the section level because that is how object files work. Object files have sections, not atoms. Compilers emit sections as their basic, atomic unit of output. Within a section, the code will assume that all offsets referring to other parts of the section will be stable, so you can't chop a section apart without breaking the code.
How does the new linker work in terms of this new "atom" abstraction without changing the underlying object file format?
"The atom model is not the best model for some architectures The atom model makes sense only for Mach-O, but it’s used everywhere. I guess that we originally expected that we would be able to model the linker’s behavior beautifully using the atom model because the atom model seemed like a superset of the section model. Although it can, it turned out that it’s not necessarily natural and efficient model for ELF or PE/COFF on which section-based linking is expected."
But maybe they require you to create special versions of object files, where even references internal to each library are referenced as if they lived in a different object file? Is that even possible?
> But maybe they require you to create special versions of object files, where even references internal to each library are referenced as if they lived in a different object file? Is that even possible?
The extra information that is needed for an ELF linker (any ELF linker; nothing LLD specific) to operate on functions and global data objects in a fine-grained manner is enabled by -ffunction-sections/-fdata-sections.
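As a concrete example (assuming GCC/Clang's usual section-naming convention, which I believe is .text.<mangled name> and .data.<name>), a file like this:

    // two_funcs.cc -- built with -ffunction-sections -fdata-sections, each
    // function and each global should get its own section, so a linker can
    // place, fold, or drop them individually:
    //
    //   foo()     -> .text._Z3foov   (instead of sharing one .text)
    //   bar()     -> .text._Z3barv
    //   g_counter -> .data.g_counter (instead of sharing one .data)
    int g_counter = 1;

    int foo() { return ++g_counter; }
    int bar() { return foo() + 1; }

gives the linker roughly the per-function granularity that the atom model assumes.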
If you are familiar with object file formats in general, you may know that this is exactly how MachO works: it is based on atoms.
If you want to map ELF to the atom model, you somehow need to build with -ffunction-sections so that the compiler emits one function per section (and similarly with -fdata-sections), or model it by mapping one section of the object to an atom.
Hmm, in my experience with Mach-O, I have never come across atoms. For example, in this file format reference, atoms are not mentioned -- only the more traditional segment/section hierarchy: https://github.com/aidansteele/osx-abi-macho-file-format-ref...
What am I missing?
I do note that on OS X, -ffunction-sections appears to do nothing.
Sorry, I was wrong to characterize the atom as a core part of the MachO object format when it is only a core part of how ld64 works. The compiler does follow some conventions that ld64 takes advantage of, though. Other than the ld64 source code (available on opensource.apple.com), I can only point to some design documentation in the source repo: https://opensource.apple.com/source/ld64/ld64-253.6/doc/desi...
Actually no: `lld` is not a single linker; there has been a split between the ELF/COFF folks and the MachO ones. The page you're linking to is about the MachO project, which is not very actively developed.
No, it is just that nobody is currently working on it. Last I talked with the Apple folks, they were just busy with other stuff.
Patches are definitely welcome for MachO improvements in LLD (as always in LLVM!). You should be aware though that the Apple folks feel strongly that the original "atom" linker design is the one that they want to use. If you want to start a MachO linker based on the COFF/ELF design (which has been very successful) you will want to ping the llvm-dev mailing list first to talk with the Apple folks (CC Lang Hames and Jim Grosbach).
http://lld.llvm.org/NewLLD.html