
One very common way to have decidable dependent types and avoid the paradox is a type hierarchy. I.e., there is not just one star but a countable series of them, *_1, *_2, *_3, ..., and the rule then becomes that *_i is of type *_(i+1), and that, in forall A, B, if A is of type *_i and B is of type *_j, then forall A, B is of type *_(max(i, j) + 1).
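As a concrete illustration (in Lean 4 syntax, where the thread's *_i corresponds to Lean's `Type i`), the "*_i is of type *_(i+1)" part of the hierarchy looks like this:

```lean
-- Each universe is a member of the next one up:
#check Type 0    -- Type : Type 1
#check Type 1    -- Type 1 : Type 2

-- A forall that quantifies over a whole universe lands in a
-- higher universe, which is what blocks the paradox: the type
-- below cannot be instantiated at itself.
#check ∀ (α : Type 0), α → α    -- : Type 1
```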


I'm no expert myself, but is this the same as Russell's hierarchy of types? This is from a quick Google AI search answer:

    Bertrand Russell developed type theory to avoid the paradoxes, like his own, that arose from naive set theory's unrestricted use of predicates and collections. His solution, outlined in the 1908 article "Mathematical logic as based on the theory of types" and later expanded in Principia Mathematica (1910–1913), created a hierarchy of types to prevent self-referential paradoxes by ensuring that an entity could not be defined in terms of itself. He proposed a system where variables have specific types, and entities of a given type can only be built from entities of a lower type.


I don't know that much about PM, but from what I read I have the impression that, for the purposes of paradox avoidance, it is exactly the same mechanism, though PM in the end is quite different, and the lowest universe of PM is much smaller than that of practical type theories.


This is correct but just delays the problem. It is still impossible to type level-generic functions (i.e., functions that work uniformly across all universe levels).

The fundamental reality is that no type theory offers the ability to type everything.
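For what it's worth, Lean-style universe polymorphism gets close, but only schematically: the level variable lives in the metatheory, not inside the theory, so this doesn't contradict the point above.

```lean
universe u

-- `id'` works at every universe level, but `u` is a schematic
-- parameter resolved per use site; there is no single internal
-- type `∀ u, ...` that this definition inhabits.
def id' {α : Type u} (a : α) : α := a

#check id' 3      -- instantiated at u = 0 (Nat : Type)
#check id' Nat    -- instantiated at u = 1 (Type : Type 1)
```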


>if in forall A, B, A is of type *_i and B is of type *_j, forall A, B is of type *_(max(i, j) + 1).

Minor correction: there is no +1 in the forall rule; the product lives in *_(max(i, j)).
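You can see the corrected rule directly in Lean (a small sketch; `A → B` stands in for the forall, since it's the non-dependent case):

```lean
universe i j
variable (A : Type i) (B : Type j)

-- The product of types in `Type i` and `Type j` lands at
-- `max i j`, with no bump to `max i j + 1`:
#check A → B    -- : Type (max i j)
```

The +1 only appears when the forall quantifies over a universe itself (e.g. `∀ (α : Type i), ...`), which is the case the first comment may have had in mind.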


Ah, is that what Lean does with its type universes?


Yes, it is.




