Doing First Grade Math in Rust's Type System (fprasx.github.io)
122 points by lukastyrychtr on Jan 13, 2024 | 41 comments


I'm uncomfortable with programs that rely on complicated compile-time evaluation within the type system.

For complicated runtime behavior, we have various tools for stepping through the evaluations as they unfold. E.g., debuggers, print statements, etc.

But I haven't come across anything similar for these type-based evaluations. Usually the best I get is a compiler warning or error, or even worse, an unexpected successful compilation.

Are there any efforts to help with this, e.g. for C++ compilation?


Zig's comptime feature is really well designed IMO. It uses an interpreter to let you run arbitrary Zig at compile-time, and you can use it to do code generation, custom types, etc. It's pretty cool.


For Lisp macros, there is a macrostep plugin for Emacs that expands a macro within the text of your source code one step at a time until there's nothing left to be expanded. This is temporary, so after you exit the macrostep mode, the source code viewer goes back to normal.

Lisp is dynamically typed, so it's not the same as C++ templates or the Rust type system, but to me, it seems like there isn't a reason in principle why you couldn't make a stepper for pure C++ template metaprogramming, like this Peano arithmetic implementation: https://gist.github.com/vladris/2dc4f5a47d003da002a70f0c0184...

Particularly for "using" declarations, like this (taken from the gist that I linked):

  // A few numbers...
  using one = succ<zero>;
  using two = succ<one>;
  using three = succ<two>;
  using four = succ<three>;
  using five = succ<four>;
Also, the clangd language server expands C preprocessor macros, but that's less useful because the C preprocessor doesn't allow recursive macros.


I think that behind this sentiment is an assumption that software development is mostly applications and CRUD services.

If the target system is a space rocket, a medical system, etc., one really wants to have guarantees.


The code for multiplication is correct, but the explanation is not:

> Here, we're using the fact that x * y = x + (x - 1) * y. We recursively calculate the second term, and then add it to the first factor!

It should say

    x*y = y + (x-1)*y.
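
For concreteness, here is a minimal sketch of that corrected rule at the type level, in the spirit of the article (the trait and type names here are illustrative, not necessarily the post's):

    use std::marker::PhantomData;

    struct Zero;
    struct Succ<N>(PhantomData<N>);

    // Type-level addition, defined by recursion on the left operand.
    trait Add<Rhs> { type Sum; }
    impl<Rhs> Add<Rhs> for Zero { type Sum = Rhs; }
    impl<N: Add<Rhs>, Rhs> Add<Rhs> for Succ<N> { type Sum = Succ<N::Sum>; }

    // Type-level multiplication: succ(x) * y = y + x * y,
    // i.e. x * y = y + (x - 1) * y for x >= 1.
    trait Mul<Rhs> { type Product; }
    impl<Rhs> Mul<Rhs> for Zero { type Product = Zero; }
    impl<N, Rhs> Mul<Rhs> for Succ<N>
    where
        N: Mul<Rhs>,
        Rhs: Add<N::Product>,
    {
        type Product = <Rhs as Add<N::Product>>::Sum;
    }

    type One = Succ<Zero>;
    type Two = Succ<One>;
    type Three = Succ<Two>;
    type Six = Succ<Succ<Succ<Three>>>;

    fn same_type<T>(_: T, _: T) {}

    // Fails to compile unless 2 * 3 normalizes to 6.
    #[allow(dead_code)]
    fn check(product: <Two as Mul<Three>>::Product, six: Six) {
        same_type(product, six);
    }

    fn main() {
        // Nothing to run: the arithmetic happens entirely at compile time.
    }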


Nice write-up.

I'm the author of the mentioned peano and typenum libraries, if anyone has any questions.


The problem with this Peano arithmetic thing is that it only goes up to 88 https://upload.wikimedia.org/wikipedia/commons/b/bd/D274.jpg


If you build rustc with MIDI support enabled, it goes up to 100.


mine goes to 11


Can someone give an example of a useful application of this technique? Honest question, if it is just for fun that is fine with me.


We used methods akin to this back in 2019 for creating low-level HALs for insanely complicated SoCs (think experimental switch fabric controllers). We wanted the compiler to reject any program that attempted to access the hardware incorrectly, because the dev-test cycle was slow, the hardware was very fiddly, and debugging was miserable. The register definitions were all generated from the supplier data sheet (a Clojure program we wrote parsed a PDF and then emitted Rust).

We wrote a blog post about the Rust type-level programming part of the approach: https://blog.auxon.io/2019/10/25/type-level-registers/
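
For readers who haven't seen the pattern, here is a minimal, hypothetical sketch of the general type-state idea (not the actual API from the post above): method availability depends on a zero-sized state parameter, so misuse is a compile error rather than a runtime fault.

    use std::marker::PhantomData;

    struct Disabled;
    struct Enabled;

    // A made-up peripheral; the state lives only in the type.
    struct Uart<State> {
        _state: PhantomData<State>,
    }

    impl Uart<Disabled> {
        fn new() -> Self {
            Uart { _state: PhantomData }
        }
        fn enable(self) -> Uart<Enabled> {
            // ...write the enable bit to the control register here...
            Uart { _state: PhantomData }
        }
    }

    impl Uart<Enabled> {
        fn write_byte(&mut self, _byte: u8) {
            // ...write to the data register here...
        }
    }

    fn main() {
        let uart: Uart<Disabled> = Uart::new();
        // uart.write_byte(0x42); // rejected: write_byte only exists on Uart<Enabled>
        let mut uart = uart.enable();
        uart.write_byte(0x42);
    }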


Before const generics, you needed this kind of "type" to use integer values in generics, for example for fixed-length vectors/matrices/... (and const generics still have limitations).

https://doc.rust-lang.org/reference/items/generics.html#cons...

They came with 1.51: https://blog.rust-lang.org/2021/03/25/Rust-1.51.0.html
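
As a quick illustration of what const generics buy you today, a minimal sketch (names are illustrative) where mismatched dimensions simply fail to type-check:

    struct Matrix<const R: usize, const C: usize> {
        data: [[f64; C]; R],
    }

    impl<const R: usize, const C: usize> Matrix<R, C> {
        fn zeros() -> Self {
            Matrix { data: [[0.0; C]; R] }
        }

        // Element-wise addition is only defined for matrices of the same shape.
        fn add(&self, other: &Self) -> Self {
            let mut out = Self::zeros();
            for r in 0..R {
                for c in 0..C {
                    out.data[r][c] = self.data[r][c] + other.data[r][c];
                }
            }
            out
        }
    }

    fn main() {
        let a = Matrix::<2, 3>::zeros();
        let b = Matrix::<2, 3>::zeros();
        let _sum = a.add(&b);            // OK: both are 2x3
        // let c = Matrix::<3, 2>::zeros();
        // a.add(&c);                    // rejected at compile time: 3x2 != 2x3
    }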


You still need them for many cases the current const generics implementation doesn't support.


This technique is good for job interviews, when they ask me to solve leetcode problems. I was inspired by the blog:

https://aphyr.com/posts/342-typing-the-technical-interview

and related to Rust, the solution

https://github.com/insou22/typing-the-technical-interview-ru...

Or Typescript https://www.richard-towers.com/2023/03/11/typescripting-the-...


As others mentioned, nowadays const generics replace this technique for most problems. This technique however is still useful if you need something like `[T; A+B]` (an array whose length is the sum of two generic constants), which is not yet possible with const generics.
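
For illustration, this is roughly what that would look like on nightly under the incomplete generic_const_exprs feature (a hedged sketch only; it does not compile on stable today, and the exact incantation may change):

    #![feature(generic_const_exprs)] // nightly only, incomplete feature

    // Concatenate two fixed-size arrays; the output length is a const expression.
    fn append<T: Copy + Default, const N: usize, const M: usize>(
        a: [T; N],
        b: [T; M],
    ) -> [T; N + M]
    where
        [(); N + M]:, // makes the generic const expression well-formed
    {
        let mut out = [T::default(); N + M];
        out[..N].copy_from_slice(&a);
        out[N..].copy_from_slice(&b);
        out
    }

    fn main() {
        let joined = append([1, 2, 3], [4, 5]);
        assert_eq!(joined, [1, 2, 3, 4, 5]);
    }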


How do you prove to the type-checker that e.g. `append(v: [T; N], w: [T; M]) -> [T; N + M]`? Or, to make it easier, just `append(v: [T; N], x: T) -> [T; N + 1]`?


You implement the function recursively and let the compiler derive a proof by induction.
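
Concretely, a minimal sketch of that induction with a type-level cons list (a true length-indexed array isn't expressible on stable Rust, so here the "length" is the shape of the list itself; all names are illustrative):

    struct Nil;
    struct Cons<T, Rest>(T, Rest);

    trait Append<Other> {
        type Output;
        fn append(self, other: Other) -> Self::Output;
    }

    // Base case: appending to the empty list gives the other list.
    impl<Other> Append<Other> for Nil {
        type Output = Other;
        fn append(self, other: Other) -> Other {
            other
        }
    }

    // Inductive step: Cons(head, tail) ++ other = Cons(head, tail ++ other).
    // The compiler unfolds this impl once per element, which is exactly the
    // "proof by induction" that the output length is N + M.
    impl<T, Rest: Append<Other>, Other> Append<Other> for Cons<T, Rest> {
        type Output = Cons<T, Rest::Output>;
        fn append(self, other: Other) -> Self::Output {
            Cons(self.0, self.1.append(other))
        }
    }

    fn main() {
        let left = Cons(1u8, Cons(2u8, Nil));
        let right = Cons(3u8, Nil);
        // The annotation only type-checks because 2 + 1 elements come out.
        let _three_long: Cons<u8, Cons<u8, Cons<u8, Nil>>> = left.append(right);
    }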


Not really in the programming mainstream, but in research-oriented languages with dependent typing like Lean or Idris, constraining your "implementation search space" with types and leveraging the type system to prove properties about your program is commonplace.


While not nearly as involved, I've seen similar-ish techniques used for FFTs and similar algorithms, in order to precompute constants.

Here[1] is an example in C++ but given the article it seems you could do something similar in Rust.

[1]: https://github.com/tmolteno/template-fft/blob/master/fft_rea...


You can do that (as you would in modern C++) with compile-time "constants": https://doc.rust-lang.org/reference/const_eval.html
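
A minimal sketch of that, assuming you just want a table baked in at compile time (no type-level machinery involved):

    // A lookup table computed at compile time with an ordinary const fn.
    const fn squares<const N: usize>() -> [u64; N] {
        let mut table = [0u64; N];
        let mut i = 0;
        while i < N {
            table[i] = (i as u64) * (i as u64);
            i += 1;
        }
        table
    }

    // Evaluated by the compiler; the table is baked into the binary.
    const SQUARES: [u64; 16] = squares::<16>();

    fn main() {
        println!("{}", SQUARES[7]); // 49
    }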


Yes, but then you wouldn't be abusing the type system, so where's the fun in that? :P


There's a train company using these to prove the microcontrollers and train network can't be in bad states. Check YouTube.


Something like a typed heterogeneous list is often built in a similar way.

A simpler version is probably a non-empty list, where you always have the guarantee of at least one element.
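
A minimal sketch of the non-empty list version (illustrative names): the guarantee is structural, so there is no Option when you ask for the first element.

    struct NonEmpty<T> {
        head: T,
        tail: Vec<T>,
    }

    impl<T> NonEmpty<T> {
        fn new(head: T) -> Self {
            NonEmpty { head, tail: Vec::new() }
        }
        fn push(&mut self, item: T) {
            self.tail.push(item);
        }
        // Unlike Vec::first, this cannot fail, so no Option is needed.
        fn first(&self) -> &T {
            &self.head
        }
    }

    fn main() {
        let mut xs = NonEmpty::new(1);
        xs.push(2);
        println!("{}", xs.first()); // always succeeds: 1
    }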


Theorem provers that use dependent typing, like Coq.


Life critical mathematics: you could verify calculations in the type system which are too risky to allow to fail at runtime.


Reading this I realize that I know nothing about Rust's type system, despite using the language daily.


When I use Rust, I feel like I'm just a dude mopping the floors of the nuclear silo.


Imagine doing C++. It's like being the new guy who has to manually take Geiger readings of the warhead.


Basic modern C++ is easier than Rust at this point. It’s not as safe (std::shared_ptr is not your friend), but it’s easier to get going vs dealing with lifetimes and Rustic syntax. I feel like at this point Rust has inherited all the syntax diarrhea of C++, and it makes me averse to using it. I don’t want to wrestle type system crap when I’m trying to wrestle app features. Some things I love. A few things I hate. Those things I hate make me use another language.

A lot of Rust is inferred through use, so why bother with the extra cognitive load of having to unwrap or box or ‘a or, worse, ‘static. Zig isn’t better in this area either, requiring allocators for everything. Go is great, but it’s a bit too simple and not fast enough for Rust-like workloads.

This is me with a sad face, because while I applaud the effort to build modern languages and tooling, we still haven’t addressed the issue of syntax elitism and shame-by-code-review because someone didn’t know about macro meta. I just want to code, have my program work, and not have to master 4th-degree black belt syntax grammar to do it. I think any language or runtime that allows one to write 4th-degree black belt syntax grammar should be shunned in favor of simpler terms. Imagine the quality of software we would produce with a simpler type system. This was the goal of Go, but it still needs work on language features for me (I really, really want operator overloading on structs using + - * and / instead of methods).

C++ is by far the most Wild West, the most gotcha, the most “do as I say but not as I do”, the most “yes, except if” language ever. From C++98 to today, it’s just gotten more and more bloated with alternative pathways and acid pits.

I’m hopeful for Rust. Equally so for Zig. My opinions mean shit, and in the end the Rust team will evolve Rust beyond what they originally intended, capturing dev attention and pushing Rustic concepts outwards, as they all have for the last 15 years.


> It’s not as safe (std::shared_ptr is not your friend), but it’s easier to get going vs dealing with lifetimes and Rustic syntax.

Rust can be easy enough to get going if you know to use .clone(), Rc<RefCell<>> and the like. Yes it's a bit heavy on boilerplate but that's about it. Also the boilerplate is helpful since it can guide a refactoring effort to make the code a bit more efficient. With Rust, "clean" code is usually fast and safe code which is not as true in other languages.
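
For anyone unfamiliar, a minimal sketch of that escape hatch: shared, mutable state via reference counting and runtime borrow checks, with no lifetime annotations required.

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // One shared, mutable vector; each handle is a cheap Rc clone.
        let shared = Rc::new(RefCell::new(Vec::new()));

        let for_reader = Rc::clone(&shared);
        let for_writer = Rc::clone(&shared);

        for_writer.borrow_mut().push(42);
        println!("{:?}", for_reader.borrow()); // [42]
    }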


Less divergent code does not make better code. Rc<RefCell<>> is a prime example of "Do as we say, not as we do" in Rust that has made its way in from C++. I thought the borrow-checker was gospel and yet, here we are on the cusp of circumventing that with dynamic memory allocation checking (or lack thereof, just that it's the same size).


Rust has always supported some variety of Rc<…> because the borrow checker cannot account for every possible way of writing safe code. It's not something that has "made its way in" from the outside, it's just as inherent to the language as anything else in it.


If you like type-level math, I really recommend the Idris 2 language. It uses this same system for numbers, and is somewhat-magically able to identify isomorphic structures like linked lists.


"is much faster and the one you should probably use in practice" Where is this useful in practice? Exercises maybe. Like doing planks for example. I have never needed to plank "in practice" other than to exercise.


One practical use case is for dimensional analysis. A type is then a k-tuple of type-level integers where k is the number of primitive units, e.g. (kg, m, s). Then m^2 / kg would be the tuple (-1, 2, 0). Multiplying two quantities adds the tuples elementwise. For example, multiplying the above type by a quantity of type kg / m / s (corresponding to (1, -1, -1)) would give (0, 1, -1), which is the correct m / s.

Addition would only be allowed if the types are equal.
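
Here is a minimal sketch with const generics (illustrative names): addition only type-checks when the exponent tuples match. Multiplying quantities, which has to add exponents in the output type, would need the unstable generic_const_exprs feature or type-level integers like typenum's, which is where the article's technique comes in.

    // Exponents for (kg, m, s) carried in the type.
    #[derive(Debug, Clone, Copy)]
    struct Quantity<const KG: i32, const M: i32, const S: i32>(f64);

    #[allow(dead_code)]
    type Mass = Quantity<1, 0, 0>;
    type Velocity = Quantity<0, 1, -1>;

    impl<const KG: i32, const M: i32, const S: i32> std::ops::Add for Quantity<KG, M, S> {
        type Output = Self;
        // Only compiles when both operands have identical exponents.
        fn add(self, rhs: Self) -> Self {
            Quantity(self.0 + rhs.0)
        }
    }

    fn main() {
        let v = Velocity(3.0) + Velocity(4.0); // OK: m/s + m/s
        // let bad = Velocity(3.0) + Mass(1.0); // rejected: (0, 1, -1) != (1, 0, 0)
        println!("{:?}", v);
    }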


This doesn't work with dimensionless numbers, as they would need to have e.g. the type `(0, 0, 0)` for multiplication but `(-1, 2, 0)` or `(0, 1, -1)` for addition.

Edit: I've just realised that that's actually correct ;)


This could be fine, because in science we rarely add dimensionless numbers. Usually they are a scaling factor (i.e. multiplicative).


As I've already said, nowadays there are const generics which do (almost) the same. But before them, these were the only way to use integer values in generics.

They came with 1.51: https://blog.rust-lang.org/2021/03/25/Rust-1.51.0.html


Like Object Oriented Programming, Type State Programming is a design paradigm. Using it in appropriate scenarios is very powerful. I am a wannabe Rustacean.


> Like doing planks for example. I have never needed to plank "in practice" other than to exercise.

You need it every day to, you know, hold that meat on your carcass.


That's just Prolog!



