Generators don't have to emit portable code. You document which compilers the output requires, and that's something you can change with any given release of your generator. The generated code then uses whatever works with those compilers. If you use the output with some other compiler, that's undefined behavior w.r.t. the generator's documentation; you are on your own. "Whatever works" could even be something undocumented that works de facto.
I think I may end up coming full circle on Virgil. Circa 2005, Virgil I compiled to C, which avr-gcc then compiled for AVR. I did that because who the heck wants to write an AVR backend? Circa 2009 I wrote a whole new compiler for Virgil III, and since then it has JVM, x86, x86-64, wasm, wasm-gc, and (incomplete) arm64 backends.
I like compiler backends, but truth be told, I grow weary of compiler backends.
I have considered generating LLVM IR but it's too quirky and unstable. Given the Virgil wasm backend already has a shadow stack, it should now be possible for me to go back to square one and generate C code, but manage roots on the stack for a precise GC.
Having done this for a dozen experiments/toys, I fully agree with most of the post. It would be nice if the musttail attribute were reliable across the big 3 compilers, but it's not something that can be relied on yet (luckily Clang seems to be fairly reliable on Windows these days).
2 additional points:
1: The article mentions DWARF; even without it you can use #line directives to map line numbers in your generated code back to the source (and this goes a very long way when debugging). The other part is local variables and their contents.
For variables one can get a good distance by using a C++ subset instead (a subset that doesn't hurt compile times, so avoid any std:: namespaced includes) with e.g. "root/gc/smart" pointer wrappers (depending on language semantics). The variables will then show up in a debugger alongside your #line directives, so "sane" name mangling of output variables is needed.
2: The real sore point of C as a backend is GC. The best GCs are intertwined with the regular stack frame, so normal stack-walking routines also give everything needed for accurate GC (required for any moving GC design, even if more naive generational collectors are possible without it).
Now, if you want accurate, reasonably fast, portable stack scanning, the sanest way currently is to maintain a shadow stack: you pass a prev-frame pointer in calls, where the prev-frame pointer points to the end of a flat array that is prepended by a magic pointer and the caller's prev-frame pointer (forming a linked list, at the cost of a few writes and one extra argument, with no cleanup cost).
Sadly, this performant linked shadow stack will obfuscate all your pointers for debugging, since they need to be clumped into one array instead of living in multiple named variables (and it rules out complex on-stack objects).
Hopefully, one can use the new C++ reflection support for shadow-stacks without breaking compile times, but that's another story.
> ... [pointers] need to be clumped into one array ...
You could put each stack frame into a struct, and have the first field be a pointer to a const static stack-map data structure or function that enumerates the pointers within the frame.
BTW, the passed pointer to this struct could also be used to implement access to the calling function's variables, for when you have nested functions and closures.
Related to shadow stacks, I've had trouble convincing the C optimizer that no one else is aliasing my heap-allocated helper stacks. Supposedly there ought to be a way to tell it using restrict annotations, but those are quite fiddly: they only work for function parameters and can be dismissed for many reasons. Does anyone know of a compiler that successfully used restrict pointers in their generated code? I'd love to be pointed towards something that works.
Static inline functions can sometimes serve as an optimisation barrier to compilers. It's very annoying. I've run into a lot of cases when targeting C as a compilation target where swapping something out into an always-inline function results in worse code generation, because compilers sadly have bugs.
There's also the issue in that the following two things don't have the same semantics in C:
    float v = a * b + c;

vs

    static inline float get_thing(float a, float b) {
        return a * b;
    }
    float v = get_thing(a, b) + c;
This is just a C-ism (floating-point contraction) that can make extracting things into always-inlined functions a big net performance negative. The C spec mandates the difference, sadly: contraction is only allowed within a single expression, so the call boundary forces a*b to be rounded separately.
uintptr_t's don't actually have the same aliasing semantics as pointers either. E.g. if you write:
    void my_func(strong_type1* a, strong_type2* b);
the compiler can assume a != b, and we can pull the underlying type out. However, if you write:
    void my_func(some_type_that_has_a_uintptr_t1 ap, some_type_that_has_a_uintptr_t2 bp) {
        float* a = get(ap);
        float* b = get(bp);
    }
a could equal b. Semantically the uintptr_t version doesn't provide any aliasing guarantees. That may or may not be what you want depending on your higher-level language semantics, but it's worth keeping the distinction in mind, because the compiler won't be able to optimise as well.
Compiler bugs and standards warts suck, but you know what sucks more? Workarounds for compiler bugs and edge cases that become pessimizing folk wisdom that we can dispel only after decades, if ever. It took about that long to convince the old guard of various projects that we could have inline functions instead of macros. I don't want to spook them into renewed skepticism.
> And finally, source-level debugging is gnarly. You would like to be able to embed DWARF information corresponding to the code you residualize; I don’t know how to do that when generating C.
I think emitting something like
    #line 12 "source.wasm"
for each line of your source before the generated code for that line does something that GDB recognizes well enough.
Java JIT compilers perform function inlining across virtual call boundaries; this is why JIT'd Java can sometimes outperform the same code in C or C++. Couple it with escape analysis, which turns short-lived heap allocations into stack allocations (avoiding GC).
Oftentimes virtual functions are implemented in C via function pointers to provide an interface (such as the filesystem code in the Linux kernel); just like C++ vtable lookups, these calls cannot be inlined at compile time.
What I wonder is whether code generated in C can be JIT-optimized by WASM runtimes with similar automatic inlining.
Has anyone defined a strict subset of C to be used as a target for compilers? Or, ideally, a more regular and simpler language, since writing a C compiler itself is fraught with pitfalls.
Sounds like why LLVM was created (and derivatives like MLIR and NaCl)? Its IR is intended to be C-like, except that everything is well-defined, and it is substantially more expressive than C.
I’ve done something similar back in my intern days. We had a Haskell-based C AST library supporting the subset of C we generated, and an accompanying pretty-printing library for emitting C code with good formatting by default. It really was a reasonable approach, giving good high-level abstraction power and good optimizations.
The lifetimes argument is extremely sound: this is information which you need from the developer, and not something that is easy to get when generating from a language which does not itself have lifetimes. It's an especially bad fit for the GC case he describes.
> not something that is easy to get when generating from a language which does not itself have lifetimes
Not easy, but there are compilers that do it.
Lobster [0] started out with automatic reference counting. It has inferred static typing, specialising functions based on type, reminiscent of how JavaScript JIT compilers do it. Then the type-inference engine was expanded to also specialise functions based on the ownership/borrowing type of their arguments. RC is still done for variables that don't fit into the ownership system, but the RC ops executed overall were greatly reduced. The trade-off is increased code size.
I have read a few older papers about eliding reference-counting ops which seem to achieve similar elisions, except that those had not been expressed in terms of ownership/borrowing.
I think newer versions of the Swift compiler also infer lifetimes to some extent.
When emitting Rust you could now also use reference counting smart pointers, even with cycle detection [1].
Personally I'm interested in how ownership information could be used to optimise tracing GC.
I was also reading through Lobster's memory-management docs, which (I think) currently implement "borrow first" semantics to do away with a lot of runtime reference-counting logic; I think that's a very practical approach. I also wonder whether reference-counting overhead ever becomes large enough that some languages should never consider RC at all?
Tangentially, I was experimenting with a runtime library to expose such "borrow-first" semantics: such "lents" can be easily copied onto a new thread's stack to access shared memory, and are not involved in RC. Race-condition detection helps to share memory without any explicit move to a new thread. It seems to work well for simpler data structures like sequences/vectors/strings/dictionaries, but I haven't figured out a proper way to handle recursive/dynamic data structures!
I mean, the argument boils down to "the language I'm compiling FROM doesn't have the same safeguards as rust". So obviously, the fault lies there. If he'd just compile FROM rust, he could then compile TO rust without running into those limitations. A rust-to-rust compiler (written in rust) would surely be ideal.
I'd be willing to sell you a rust to rust compiler. In fact, I'll even generalize it to do all sorts of other languages too at no extra charge. I just need a good name...maybe rsync?
Snark aside, the output targets of compilers typically need to be unsafe languages: the point of a high-level compiler is generally to verify difficult proofs, then emit constructs consistent with those proof results but simplified, so that they can no longer be verified yet can run fast, since the proofs aren't needed at runtime anymore. (Incidentally, this is both a strength and a weakness of C. Since it gives the compiler very little ability to do proofs, the output is generally close to the input; other languages typically have much more useful compilers because they do much more proof work at compile time to make runtime faster, while C makes the programmer specify exactly what must be done and leaves the proof of correctness up to the programmer.)
This is weird. As soon as I thought about the subject the relevant article showed up on HN.
I was thinking about how to embed a custom high-level language into my backend application written in C++. Each individual script would compile to a native shared lib, loadable on demand, so that performance stays high. For this I was contemplating exactly this approach: compile the custom high-level language, with its very limited feature set, to plain C, then have the compiler that ships with Linux finish the job.
"static inline", the best way of getting people doing bindings in other languages to dislike your library (macros are just as bad, FWIW).
I really wish someone on the C language/compiler/linker level took a real look at the problem and actually tried to solve it in a way that isn't a pain to deal with for people that integrate with the code.