
>LLMs are not deterministic, so they are not compilers.

"Deterministic" is not the the right constraint to introduce here. Plenty of software is non-deterministic (such as LLMs! But also, consensus protocols, request routing architecture, GPU kernels, etc) so why not compilers?

What a compiler needs is not determinism, but semantic closure. A system is semantically closed if the meanings of its outputs are fully defined within the system, correctness can be evaluated internally, and errors are decidable. LLMs are semantically open. A semantically closed compiler will never output nonsense, even if its output is nondeterministic. But two runs of a (semantically closed) nondeterministic compiler may produce two correct programs, one faster on one CPU and the other faster on another. Or such a compiler can be useful for enhancing security, e.g. emitting programs that behave identically but resist fingerprinting.

Nondeterminism simply means the compiler selects any element of an equivalence class. Semantic closure ensures the equivalence class is well‑defined.
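
A toy sketch of that distinction (not any real compiler): the code generator is free to pick any member of the equivalence class nondeterministically, but it can check membership internally, so it never emits nonsense.

    import random

    # Three idioms for "double x", all members of one equivalence class
    # under the source semantics.
    CANDIDATES = [
        lambda x: x * 2,
        lambda x: x << 1,
        lambda x: x + x,
    ]

    def reference(x):
        return 2 * x

    def codegen():
        # Nondeterministic choice, but only emitted after an internal check
        # against the reference semantics (here: exhaustive over a small
        # domain; a real compiler would use proofs or verified rewrite rules).
        impl = random.choice(CANDIDATES)
        assert all(impl(x) == reference(x) for x in range(-1000, 1000))
        return impl

    # Two runs may pick different implementations; both are correct.
    f, g = codegen(), codegen()
    print(f(21), g(21))  # 42 42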





No, deterministic means that given the same inputs (source code, target architecture, optimization level, memory and runtime limits, since an optimizer with more space/time might find better optimizations, etc.) a compiler will produce exactly the same output. This is what reproducible builds are about: tightly controlling the inputs so the same output is produced.
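
Roughly, the check looks like this (hypothetical file names, and it assumes the toolchain embeds no timestamps, random build IDs, or absolute paths):

    import hashlib, subprocess

    def build_and_hash(source, out):
        # Every input is pinned: same compiler, same flags, same source.
        subprocess.run(["cc", "-O2", "-o", out, source], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # A reproducible build: two runs over identical inputs produce
    # bit-identical output, hence identical hashes.
    assert build_and_hash("main.c", "a.out") == build_and_hash("main.c", "b.out")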

That a compiler might pick among different specific implementations in the same equivalence class is exactly what you want a multi-architecture optimizing compiler to do. You don't want it choosing randomly between different optimization choices within an optimization level; that would be non-deterministic at compile time and largely useless, assuming there is at most one most-optimized equivalent. I always want the compiler to xor a register with itself to clear it, rather than explicitly setting it to zero, when that is the faster choice given the inputs/constraints.


Determinism may be required for some compiler use cases, such as reproducible builds, and several replies have pointed that out. My point isn't that determinism is unimportant, but that it isn't intrinsic to compilation itself.

There are legitimate compiler use cases (e.g. search-based optimization, superoptimization, diversification) where reproducibility is not the main constraint. It's worth leaving conceptual space for those use cases rather than treating deterministic output as a defining property of all compilers.


Given the same inputs, search-based optimization, superoptimization, or diversification should still be predictable and deterministic, even if it produces something that is initially unanticipated. It makes no sense that a given superoptimization search would produce different output (deciding that some other method is now more optimized than another) if the initial input and state are exactly the same. It is either the most optimal given the inputs and the state or it is not.

You are attempting to hedge and leave room for a non-deterministic compiler, presumably to argue that something like vibe-compilation is valuable. However, you've offered no real use cases for a non-deterministic compiler, and I assert that such a tool would largely be useless in the real world. There is already a huge gap between requirements gathering, the expression of those requirements, and their conversion into software. Adding even more randomness at the layer of translating high level programming languages into low level machine code would be a gross regression.


Don't LLMs create the same outputs based on the same inputs if the temperature is 0? Maybe I'm just misunderstanding.

When they run on deterministic hardware, yes. When they run on some large, parallel, varying-unpredictable-load-dependent-latency hardware, no.

Unfortunately not. Various implementation details, attention kernels among them, are usually non-deterministic in practice. This is one of the better blog posts I'm aware of:

https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
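
The root cause is mundane and easy to see without any ML framework at all: floating-point addition isn't associative, so the order a parallel reduction happens to run in can shift a logit by a hair, which is enough to flip an argmax between two near-tied tokens even at temperature 0. A rough sketch:

    import random

    random.seed(0)
    vals = [random.uniform(-1, 1) for _ in range(100_000)]

    # Same numbers, two summation orders: the results usually differ by a
    # tiny amount because float addition is not associative.
    a = sum(vals)
    b = sum(reversed(vals))
    print(a == b, abs(a - b))

    # If two candidate tokens' scores are within that tiny gap, greedy
    # (temperature-0) decoding can pick different tokens on different runs.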


i don't think there's anything that makes it essential that llms are non-deterministic though

if you rewrote the math to be all fixed-point precision on big ints, i think you would still get the useful LLM results?

if somebody really wanted to make a compiler in an LLM, i don't think that nondeterminism is the problem

i'd really imagine an llm compiler being a set of specs, dependency versions, and test definitions to use though, and you'd introduce essential nondeterminism by changing a version number, even if the only change was the version name from "experimental" to "lts"


They're not inherently non-deterministic, correct. And floating point is deterministic enough, as that blog post is demonstrating.

Perhaps you're comfortable with a compiler that generates different code every time you run it on the same source with the same libraries (and versions) and the same OS.

I am not. To me that describes a debugging fiasco. I don't want "semantic closure," I want correctness and exact repeatability.


I wish these folks would tell me how you would do a reproducible build, or reproducible anything really, with LLMs. Even monkeying with temperature, different runs will still introduce subtle changes that would change the hash.

This reminds me of how you can create fair coins from biased ones and vice versa. You toss your coin repeatedly, and then get the singular "result" in some way by encoding/decoding the sequence. Different sequences might map to the same result, and so comparing results is not the same as comparing the sequences.

Meanwhile, you press the "shuffle" button and code-gen creates different code. But this isn't necessarily the part that's supposed to be reproducible, and it isn't how you'd actually go about comparing the output. Instead, maybe two different rounds of code generation are "equal" if the test suite passes for both. Not precisely the equivalence-class stuff the parent is talking about, but it's a simple way of thinking about it that might be helpful.
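
The coin trick is von Neumann's extractor; a quick sketch of why "compare the extracted result, not the raw sequence" makes sense:

    import random

    def biased_coin(p=0.7):
        return 1 if random.random() < p else 0

    def fair_bit():
        # Toss twice; keep a result only when the two tosses differ.
        # P(heads, tails) == P(tails, heads) whatever the bias is.
        while True:
            a, b = biased_coin(), biased_coin()
            if a != b:
                return a

    # Many different raw toss sequences collapse to the same fair bit,
    # so equality of results is coarser than equality of sequences.
    print([fair_bit() for _ in range(10)])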


There is nothing intrinsic to LLMs that prevents reproducibility. You can run them deterministically without adding noise; it would just be a lot slower to enforce a deterministic order of operations, which takes an already bad idea and makes it worse.

Please tell me how to do this with any of the inference providers or a tool like llama.cpp, and make it work across machines/GPUs. I think you could maybe get close to deterministic output, but you'll always risk having some level of randomness in the output.

It's just arithmetic, and computer arithmetic is deterministic.

On a practical level, existing implementations are nondeterministic because they don't take care to always perform mathematically associative operations in the same order every time. Floating-point arithmetic is not associative, so those reorderings change the output. It's absolutely possible to fix this and perform the operations in the same order every time; implementors just don't bother. It's not very useful, especially when almost everything runs with a non-zero temperature.
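
A small sketch of both halves of that claim: split the same additions across a different number of "workers" and the result wobbles; fix one canonical order and it is exactly reproducible.

    import math, random

    random.seed(1)
    vals = [random.uniform(-1, 1) for _ in range(100_000)]

    def chunked_sum(xs, nchunks):
        # Simulates a parallel reduction: each worker sums a slice, then the
        # partial sums are combined. Changing nchunks regroups the additions,
        # and float addition is not associative.
        size = math.ceil(len(xs) / nchunks)
        return sum(sum(xs[i:i + size]) for i in range(0, len(xs), size))

    print(chunked_sum(vals, 4) == chunked_sum(vals, 7))  # typically False
    print(chunked_sum(vals, 4) == chunked_sum(vals, 4))  # always True: fixed order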

I think the whole nondeterminism thing is overblown anyway. Mathematical nondeterminism and practical nondeterminism aren't the same thing. With a compiler, it's not just that identical input produces identical output. It's also that semantically identical input produces semantically identical output. If I add an extra space somewhere that whitespace isn't significant in the language I'm using, this should not change the output (aside from debug info that includes column numbers, anyway). My deterministic JSON decoder should not only decode the same values across two runs on identical JSON; a change in one value in the input should also produce the same values in the output except for the one that changed.
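
Concretely, with a plain standard-library JSON decoder both properties hold:

    import json

    a = json.loads('{"x": 1, "y": 2}')
    b = json.loads('{ "x": 1,   "y": 2 }')   # extra insignificant whitespace
    c = json.loads('{"x": 1, "y": 3}')       # one value changed

    assert a == b               # semantically identical input, identical output
    assert a["x"] == c["x"]     # untouched values come out the same
    assert a["y"] != c["y"]     # only the changed value differs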

LLMs inherently fail at this regardless of temperature or determinism.


Just because you can't do it with your chosen tools does not mean it cannot be done. I've already granted the premise that it is impractical. Unless there is a framework that already guarantees determinism you'll have to roll your own, which honestly isn't that hard to do. You won't get competitive performance, but that's already being sacrificed for determinism, so you wouldn't get that anyway.

Agree. I'm not sure what circle of software hell the OP is advocating for. We need consistent outputs from our most basic building blocks, not probability distributions over performance. Much software runs congruently across multiple nodes; what a nightmare it would be if you had to balance that by requiring identical hardware.

That is exactly how JIT compilers work: you cannot guarantee 100% identical machine code generation across runs unless you can reproduce the whole universe that led to the same heuristics and decision tree.

Once I create code with an LLM, the code is not going to magically change between runs because it was generated by an LLM unless it did an “#import chaos_monkey”

> What a compiler needs is not determinism, but semantic closure.

No, a compiler needs determinism. The article is quite correct on this point: if you can't trust that the output of a tool will be consistent, you can't use it as a building block. A stochastic compiler is simply not fit for purpose.


Compiler output can be inconsistent and still correct. For any source program there is an infinite number of machine code sequences that preserve the semantic constraints of the source. Correctness is defined semantically, not by consistency.

Kind of. Dynamic compilers are called dynamic exactly because they depend on profiling and heuristics.

What matters is observable execution.


Bitwise identical output from a compiler is important for verification to protect against tampering, supply chain attacks, etc.

it's a useful way to solve those problems, but i don't think that means it's the only way?

Sometimes determinism is exactly what one wants. For avionics software, being able to claim complete equivalence between two builds (minus an expected, manually-inspected timestamp) is used to show that the same software was used / present in both cases, which helps avoid redundant testing, and ensure known-repeatable system setups.
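
A sketch of that kind of equivalence check, with the caveat that the file names and the timestamp's offset here are made up for illustration:

    import hashlib

    TIMESTAMP = slice(0x80, 0x88)   # hypothetical location of the build timestamp

    def hash_minus_timestamp(path):
        data = bytearray(open(path, "rb").read())
        data[TIMESTAMP] = b"\x00" * 8   # blank the one field allowed to differ
        return hashlib.sha256(bytes(data)).hexdigest()

    # Two builds count as the same software if everything outside the
    # manually-inspected timestamp matches bit for bit.
    same = hash_minus_timestamp("build_a.bin") == hash_minus_timestamp("build_b.bin")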


