
> so I had it write a language guide for 0.15.2

Tbh, while it's impressive that it appears to work, that guide looks very tailored to the Zig stdlib subset used in your projects, and also like a lot more work than just fixing the errors manually ;) And for a large code base that would amortise the cost of such a guide, I still wouldn't trust the automatic update without carefully reviewing each change.


> I don't think Rust is "a better C/C++". It's a new kind of beast. Interesting, but very different.

The same can be said about Zig's comptime. It's entirely unlike anything C, C++ or Rust has to offer.

> I expect LLMs to be really good at converting C to Zig.

While it's possible to translate C to Zig code - and you don't need an LLM for that, it's a built-in Zig compiler/build-system feature - the result will be quite different from a project developed in Zig from the ground up, since the translation output won't make use of Zig's unique features (and Zig isn't really unique as a 'C translation target': C can also be translated to unsafe Rust, or even to JavaScript - see early Emscripten versions).

Also, the 'C compatibility' of Zig is implemented via a separate compiler frontend; Rust toolchains could do exactly the same thing by integrating the Clang frontend into the Rust compiler.
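
As a rough illustration (a minimal sketch, assuming libc is linked, e.g. `zig build-exe main.zig -lc`): C headers can be pulled in through that same Clang-based frontend and translated on the fly, which is also what `zig translate-c` exposes as a standalone step:

    // main.zig - calling into C via Zig's builtin C frontend
    const c = @cImport({
        @cInclude("stdio.h");
    });

    pub fn main() void {
        // printf's C return value must be explicitly discarded in Zig
        _ = c.printf("hello from C's printf, called from Zig\n");
    }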


Using the same language for compile-time and run-time programming is compelling, but doing it properly requires using the same approaches that dependently typed languages use. Comptime is a bit half-baked.

It's not just about writing imperative code that runs at compile time; the actually interesting comptime feature in Zig is that "types are comptime values", e.g. you can inspect types and build new types with regular (comptime) code. This is very different from the template/trait systems in C++ and Rust. What Zig's comptime system is missing is the ability to build function bodies at comptime (e.g. some sort of comptime AST builder).
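
A minimal sketch of what that looks like in practice (`Pair` and `fieldCount` are just illustrative names; this is ordinary Zig code, not a separate template sublanguage):

    const std = @import("std");

    // Types are ordinary comptime values: a regular function can
    // take a type as parameter and return a brand-new type...
    fn Pair(comptime T: type) type {
        return struct { first: T, second: T };
    }

    // ...and types can be inspected with regular comptime code.
    fn fieldCount(comptime T: type) usize {
        return std.meta.fields(T).len;
    }

    pub fn main() void {
        const p = Pair(f32){ .first = 1.0, .second = 2.5 };
        // fully evaluated at compile time:
        comptime std.debug.assert(fieldCount(Pair(f32)) == 2);
        std.debug.print("{d} {d}\n", .{ p.first, p.second });
    }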

"You can inspect types and build new types at compile time" is a key affordance of dependently typed languages.

Zig's comptime is an addition. You don't have to use it. And some C-macros may translate quite cleanly to it.

OTOH going from C++ (OO) to Rust (not OO, borrow checker) is a big leap.


Not all C++ is OOP, and Rust does support OOP as per the CS literature - so much so that I had no issues rewriting the Ray Tracing in One Weekend tutorial from C++ into Rust while keeping the same OOP architecture from the tutorial.

I think Rust and Zig really don't overlap much when it comes to target audience. E.g. if you're attracted to Rust, you'll probably find Zig terrible (and the other way around).

Rust will also never replace C or C++ in any meaningful way, at best new code gets written in new languages (and Rust being only one among many, and among languages used for new projects will also be C and C++, just maybe not that often).

I think the era of 'pop star languages' is over, the programming language future is highly diverse (and that's a good thing).


> Rust will also never replace C or C++ in any meaningful way

Not only do I disagree it never will, I think it's already well on its way to doing exactly that.


Is it? Rust has to ditch LLVM to be able to replace C++ - or rewrite LLVM in Rust.

C? Never. I feel like that ship has sailed; it's too primordial and tied to too many system ABIs to ever truly go away. I think we'll see a lot of Rust or Zig replacing certain popular C programs and libraries, but I don't think C will ever go away.

C++ on the other hand? Possibly, though I think that it's just as much because of the own-goals of the C++ standards committee as it is the successes of Rust. I don't really consider Zig a competitor in this space because if you're reaching for C++, you are reaching for a level of abstraction that Zig is unwilling to provide.


> I think Rust and Zig really don't overlap much when it comes to target audience. E.g. if you're attracted to Rust, you'll probably find Zig terrible (and the other way around).

This is ironic, since these two crowds are mostly solving the same type of problems. It's just a Democrats-vs-Republicans type of split; some of it is just for show and philosophical.


> This is ironic, since these two crowds are mostly solving the same type of problems. It's just a Democrats-vs-Republicans type of split; some of it is just for show and philosophical.

This is a painfully shallow framing.

Yes, programming languages solve problems by emitting instructions that a programmable logic chip can use to perform calculations on input, resulting in output. But the scaffolding you use to get there isn't just a matter of philosophical show. Rust, as a first-order decision, will refuse to emit perfectly valid programs because it's unable to prove their correctness. Zig will emit any program it has enough information to emit. People coding in Rust offload much of the effort of understanding and proving that correctness to the compiler. In Zig that relationship is reversed: the compiler offloads that responsibility onto the programmer.

The person you responded to is correct: for some people, Rust solves the difficult and annoying problems; for others, it creates them.

Some people like creating art, some people like creating software. I guess you could frame that as philosophical, but to call it a political show betrays ignorance of the interactions between systems and the predispositions of individuals.


Rust is solving the memory safety problem, Zig is solving the 'idiomatic interop with existing C coding patterns' problem. These couldn't be more different - C-like idiomatic code is generally antithetical to 'safe' modularity since it often relies on tacit global invariants for correct behavior.

Interestingly, Carbon is kinda trying to tackle both at the same time (though starting from C++ in their case) which is a bit of a challenge.


I am not sure how Carbon will go. The Carbon compiler is not ready to be used by the public yet, as I understand it, and the roadmap has not been updated for some time now, it seems.

https://docs.carbon-lang.dev/docs/project/roadmap.html


I hear Carbon get mentioned on rare occasions, and with how rare that is I have to assume it's been completely stagnant. Does it offer anything over C++ in the current year? It seems like C++ interop begets turning your language into C++ with different syntax, in a way that C interop just doesn't.

The GitHub project has some activity at least, and they might be coming with some announcement later this year.

https://github.com/carbon-language/carbon-lang/


An announcement is already planned for NDC Toronto 2026.

> Carbon: graduating from the experiment

https://ndctoronto.com/agenda/carbon-graduating-from-the-exp...

As for it being widely adopted, people keep missing the point that Carbon is mostly for Google themselves, as a means to integrate into existing C++ projects.

They are the very first ones to assert that for green-field projects there are already plenty of safe languages to choose from.


What concerns me is that some aspects of Carbon's design already seem to have serious issues.

In case you are well familiar with, for instance, pattern matching: do you have any opinions on the pattern matching that is currently proposed for Carbon?

https://docs.carbon-lang.dev/docs/design/pattern_matching.ht...


I am not a Google employee, so I don't care where they take Carbon, other than as a technology nerd who had compiler design as one of the areas I majored in.

Regarding the linked pattern matching proposal, it seems alright to me; not everything has to be ML-like.


Are you really OK with runtime "expression patterns"?

    match (0, 1, 2) {
      case (F(), 0, G()) => ...
    }
> Here (F(), 0, G()) is not an expression, but three separate expressions in a tuple pattern. As a result, this code will call F() but not G(), because the mismatch between the middle tuple elements will cause pattern matching to fail before reaching G(). Other than this short-circuiting behavior, a tuple pattern of expression patterns behaves the same as if it were a single expression pattern.

How would that work with exhaustiveness checking? As far as I can tell, they themselves believe that Carbon's exhaustiveness checking will be very poor.

And OK with implicit conversions? Especially when combined with their way of handling templates for pattern matching?


As mentioned, I have no interest in ever using Carbon; the language still isn't 1.0, and a full end-to-end compiler is yet to be made available.

I was more referring to the type of things 90% of developers are likely to build. In most cases that'll be command-line tools, libraries or APIs.

That's the space where Go shines.

IME Zig's breaking changes are quite manageable for a lot of application types, since most of the breakage these days happens in the stdlib and not in the language. And if you just want to read and write files, the high-level file-IO interfaces are nearly identical; they just moved to a different namespace and now require a std.Io pointer to be passed in.

And tbh, I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements. When updating a 3rd-party dependency to a new major version it's also expected that the code needs to be fixed (except in Zig those breaking changes are in the minor versions, but for 0.x that's also expected).

I actually hope that even after 1.x, Zig will have a strategy to keep the stdlib lean by aggressively removing deprecated interfaces (maybe via separate stdlib interface versions, e.g. `const std = @import("std/v1");` - those versions could be slim compatibility wrappers around a single core stdlib implementation).
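
Nothing like this exists today of course; a purely hypothetical sketch of what such a slim `std/v1` shim could look like (just re-exports and small forwarding wrappers over the one core stdlib):

    // std/v1.zig (hypothetical): a thin veneer over the core stdlib
    const core = @import("std");

    // unchanged subsystems are plain re-exports...
    pub const mem = core.mem;
    pub const fmt = core.fmt;

    // ...while renamed or moved v1-era APIs live on as aliases that
    // forward to whatever the core stdlib calls them now
    pub const debug = struct {
        pub const assert = core.debug.assert;
        pub const print = core.debug.print;
    };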


> I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements

Maybe you would, but >95% of serious projects wouldn't. The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace, where low-level languages are commonly used, codebases typically last for over 30 years), and while such changes are manageable early on, they become less so over time.

You say "strict" as if it were out of some kind of stubborn princple, where in fact backward compatibility is one of the things people who write "serious" software want most. Backward compatibility is so popular that at some point it's hard to find any feature that is in high-enough demand to justify breaking it. Even in established languages there's always a group of people who want somethng badly enough they don't mind breaking compatibility for it, but they're almost always a rather small minority. Furthermore, a good record of preserving compatibility in the past makes a language more attractive even for greenfield projects written by people who care about backward compatibility, who, in "serious" software, make up the majority. When you pick a language for such a project, the expectation of how the language will evolve over the next 20 years is a major concern on day one (a startup might not care, but most such software is not written by startups).


> The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace).

Either those applications are actively maintained, or they aren't. Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version (when in doubt: "never change a running system"); old compiler toolchains won't suddenly stop working.

FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial, depending on the complexity of the code base (especially when there's UB lurking in the code, or the code depends on specific compiler bugs to be present - e.g. changing anything in a project setup always comes with risks attached).


> Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version

Of course, but you want to make that as easy as you can. Compatibility is never binary (which is why I hate semantic versioning), but you should strive for the greatest compatibility for the greatest portion of users.

> FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial

I know that well (especially for C++; in C the situation is somewhat different), and the backward compatibility of C++ compilers leaves much to be desired.


You could pin versions, and probably should. However, willful disregard of prior interfaces encourages developers' code to follow suit.

It's not like Clojure or Common Lisp, where decades-old software still runs today, mostly unmodified; any changes needed are mainly due to code written for a different environment or even a different compiler implementation. This is largely because they take breaking user code much more seriously. A lot of code written in these languages seems to have a similar timelessness, too. Software can be "done".


I would also add that Rust manages this very well. Editions let you do breaking changes without actually breaking any code, since any package (crate) needs to specify the edition it uses. So when in 30 years you're writing code in Rust 2055, you can still import a crate that hasn't been updated since 2015 :)

Unfortunately editions don't allow breaking changes in the standard library, because Rust code written in different editions must be allowed to interoperate freely even within a single build. The resulting constraint is roughly similar to that of never ever breaking ABI in C++.

> The resulting constraint is roughly similar to that of never ever breaking ABI in C++.

No, not even remotely. ABI-stability in C++ means that C++ is stuck with suboptimal implementations of stdlib functions, whereas Rust only stabilizes the exposed interface without stabilizing implementation details.

> Unfortunately editions don't allow breaking changes in the standard library

Surprisingly, this isn't true in practice either. The only thing that Rust needs to guarantee here is that once a specific symbol is exported from the stdlib, that symbol needs to be exported forever. But this still gives an immense amount of flexibility. For example, a new edition could "remove" a deprecated function by completely disallowing any use of a given symbol, while still allowing code on an older edition to access that symbol. Likewise, it's possible to "swap out" a deprecated item for a new item by atomically moving the deprecated item to a new namespace and making the existing item an alias to that new location, then in the new edition you can change the alias to point to the new item instead while leaving the old item accessible (people are exploring this possibility for making non-poisoning mutexes the default in the next edition).


Only because Rust is a source-only language for distribution.

One business domain that Rust currently doesn't have an answer for is selling commercial SDKs with binary libraries, which is exactly the kind of customer that gets pissed off when C and C++ compilers break ABIs.

Microsoft mentions this among the adoption issues they are having with Rust (see talks from Victor Ciura), and while they can work around it with DLLs and COM/WinRT, it isn't optimal; after all, Rust's safety gets reduced to the OS ABI for DLLs and COM.


I'm not expecting to convince you of this position, but I find it to be a feature, not a bug, that Rust is inherently hostile to companies whose business models rely on tossing closed-source proprietary blobs over the wall. I'm fairly certain that Andrew Kelley would say the same thing about Zig. Give me the source or GTFO.

In the end it's a matter of which industries the Rust community sees as relevant for gaining adoption, and in which ones the community is happy for Rust to never take off.

Do you know one industry that very much likes tossing closed-source proprietary blobs over the wall?

Game studios, and everyone that works in the games industry providing tooling for AAA studios.


> Game studios, and everyone that works in the games industry providing tooling for AAA studios.

You know what else is common in the games industry? C# and NDAs.

C# means that game development is no longer a C/C++ monoculture, and if someone can make their engine or middleware usable with C# through an API shim, Native AOT, or some other integration, there are similar paths forward for using Rust, Zig, or whatever else.

NDAs mean that making source available isn't as much of a concern. Quite a bit of the modern game development stack is actually source-available, especially when you're talking about game engines.


Do you know what C# has and Rust doesn't? A binary distribution package for libraries with a defined ABI.

> I'm fairly certain that Andrew Kelley would say the same thing about Zig. Give me the source or GTFO.

Thus it will never even be considered outside the tech bubble.


Compiler vendors are free to choose what ABI stability their C++ implementations provide.

The ISO C++ standard is silent on what the ABI actually looks like; the ABI not being broken in most C and C++ compilers is a consequence of the customers of those compilers not being happy about breakages.


> Compiler vendors are free to choose what ABI stability their C++ implementations provide.

In theory. In practice the standards committee, consisting of compiler vendors and some of their users, shapes the standard, and thus the standard just so happens to conspire to avoid ABI breakages.

This is in part why Google bowed out of C++ standardization years ago.


I know, but still, go try to push for ABI breaks on Android.

Sure, but considering that Zig is a modern C alternative, one cannot afford to forget that C has been successful also because it stayed small and consistent for so long.

The entire C, C ABI and standard lib specs, combined, are probably less words than the Promise spec from ECMAScript 262.

A small language that stays consistent and predictable lets developers evolve best practices, patterns, design choices and tooling around it. C has achieved all of that.

No evolving language has anywhere near that freedom.

I don't want an ever-evolving Zig either, for what it's worth. And I like Zig.

I don't think any developer can resolve all of the design tensions a programming language has; you can't make it ergonomic on your own.

But a small, modern, stable C would still be welcome, besides Odin.


I'm pretty sure the point of aggressively evolving now is to basically not have to evolve it at some point in the future?

Besides Odin? Does Odin give you most of this?

Seeing that the author of Blade (kvark) isn't exactly a 3D-API newbie and also worked on WebGPU, I really wonder if a switch to wgpu will actually have the desired long-term effect. A WebGPU implementation isn't exactly slim either, especially when all that's needed is just a very small 3D-API wrapper specialized for text rendering.

Cross-API graphics abstractions are almost always a bad idea even if it's just wrapping modern DX12 and Vulkan, and always are when Metal comes into the mix.

Kvark was leading the engineering effort for wgpu while he was at Mozilla.

But he was doing that on his work time and did so collaborating with other Mozilla engineers, whereas AFAIK blade has been more of a personal side project.


WebGPU has some surprising performance problems (although I only checked Google's Dawn library, not Rust's wgpu), and the amount of code that's pulled into the project is massive. A well-made Metal renderer which only implements the needed features will easily be 100x smaller (in terms of linecount) and most likely faster.

There is also the issue that it is designed with JavaScript and the browser sandbox in mind, and is thus at the wrong abstraction level for native graphics middleware.

I am still curious how much uptake WebGPU will end up having on Android, or if Java/Kotlin folks will keep targeting OpenGL ES.


For a text editor it's definitely good enough, if not extreme overkill.

Other than that, the one big downside of WebGPU is the rigid binding model via baked BindGroup objects. This is both inflexible and slow when any sort of 'dynamism' is needed, because you end up creating and destroying BindGroup objects in the hot path.

Vulkan's binding model will really only be fixed properly with the very new VK_EXT_descriptor_heap extension (https://docs.vulkan.org/features/latest/features/proposals/V...).


The modern Vulkan binding model is relatively fine. Your entire program has a single descriptor set containing an array of images that you reference by index. Buffers are never bound and instead referenced by device address.

Do you think Vulkan will become "nice" to use, could it ever be as ergonomic as Metal is supposed to be?

Apparently "joy to use" is one of the new core goals of Khronos for Vulkan. Whether they succeed remains to be seen, but at least they acknowledge now that a developer hostile API is a serious problem for adoption.

The big advantage of Metal is that you can pick your abstraction level. At the highest level it's convenient like D3D11, at the lowest level it's explicit like D3D12 or Vulkan.


Python had already exploded in popularity in the early 2000s, and for all sorts of things (like cross-platform shell scripting or as scripting/plugin system for native applications).

> GPUs, from my understanding, have lost the majority of fixed-function units as they’ve become more programmable.

That would be nice but unfortunately doesn't match reality; there are even new fixed-function units added from time to time (e.g. for raytracing).

Texture sampling units also seem to be critical for performance and probably won't go away for a while.

It should be possible to hide a lot of the fixed-function magic behind high level GPU instructions (e.g. for sampling a texture), but GPU vendors still don't agree about details like how the texture and sampler properties are managed on the GPU (see: https://www.gfxstrand.net/faith/blog/2022/08/descriptors-are...).

I.e. the problem isn't in the software but in the differing hardware designs. GPU vendors don't seem to like the idea of harmonizing their GPU architectures, and they're also not fans of creating a common ISA as a compatibility shim (as is common for CPUs). Instead, the 3D API, driver and high-level shader bytecode (e.g. SPIRV) form this common interface, and that's how we landed in the current situation with all its downsides (most of the reasons are probably not even technical, but legal/strategic - patents and stuff).


Thanks for the link to the post. I also watched her talk posted elsewhere in these comments. We’re lucky to have people like her doing the hard work for free software.

> most of the reasons are probably not even technical, but legal/strategic - patents and stuff

I think fighting for specified, interoperable interfaces is important, and we must be vigilant against forces that undermine this, either knowingly or through ignorance.


> it's not a coincidence that metal is much easier to program for

Tbf, Metal also works on non-Apple GPUs, and with only minimal additional hints for managing resources in non-unified memory.

