One might assume he is using Haskell or similar, and believes that because the compiler accepts his code it must therefore be correct. That, of course, is a fallacy. A particularly apt quote from Donald Knuth applies here: "Beware of bugs in the above code; I have only proved it correct, not tried it."
If correctness can't be demonstrated by mathematical proof, and it can't be demonstrated by a comprehensive test suite, how can it be demonstrated? What definition of 'correct' are you using?
Haskell is not quite sufficient to connect mathematical proof to your code. Coq/Agda/Idris come much closer to this idea, though, and if you can write a spec in one of those systems, then a proof really is sufficient.
You can express "this code performs at most n primitive steps from this list". Theoretically it should be possible to express any performance requirement in that form, but I doubt it's practical for e.g. numeric computation. (It's usable for "no page render requires more than 4 db roundtrips" or similar, though.)
I'd bet you could ensure that some kind of tolerance and stability checks were performed and then carry the results around. Ultimately whenever computation relies on dynamic input it's necessary to have a hard-coded "tolerance failure" check.
Theoretically you could embed enough analysis in there to prove theorems about the actual code you're running, too, but I don't think many analysts care about theorem proving; it's more of an algebraist's toy.
That you did. I certainly didn't mean it as a takedown! I just saw it as a subtlety worth stressing, and one about which I was eager to learn more if there was more to learn :)
Perhaps mathematical proof + comprehensive test suite (including property/fuzz testing) + (time used without bugs showing up * number of people using it)
This does not prevent you from proving the program correct by hand. Regrettably that is rarely done, but it could in principle be done regardless of the language's type system. Of course, that would require you to write a specification of the features you want to implement, which might be messy depending on your problem domain.
I'm not sure what this has to do with me. Again, I never said you can't prove programs correct. I simply stated how I work: I don't run my code nearly as often as I would when working in C.
Imagine if just compiling your code told you all the things you get from running a good suite of unit tests. And you didn't even have to write the tests, just add a few annotation hints to help the automatic test generator occasionally. That's what a strongly typed language can get you, impossible as it sounds.
"Strong typing" is an ambiguous term, I'm going to need something more specific. As far as I'm aware, "strong typing" is used as a spectrum, in which case C is relatively strong because the compiler will reject parameters that don't match a function signature's data type unless explicitly cast.
Nonetheless, unit tests aren't a panacea. If all you do is just run your unit tests, watch them pass and proclaim "Well, everything works! No need to do any acceptance, integration, system, fuzz or other form of testing! Not even gonna run it, let's just ship!", then that's quite stupid, to put it mildly.
> "Strong typing" is an ambiguous term, I'm going to need something more specific. As far as I'm aware, "strong typing" is used as a spectrum, in which case C is relatively strong because the compiler will reject parameters that don't match a function signature's data type unless explicitly cast.
C has some unfortunate integer conversion rules and a style that often encourages the use of (unsafe) void pointers. Perhaps more importantly, the only way to implement polymorphism is with casting or unions, both of which are very much unsafe.
Still, you can write safe code in C, at least once you build an object system. Use structs rather than typedefs, avoid casts, stick to memory-safe constructs, and write wrapper functions so that allocation and deallocation are done safely. It's actually possible to reach a safety level similar to something like Haskell's, but it requires a lot of tedious hand coding, because the language lacks the abstraction facilities that would let you write these helpers generically. Eventually you end up greenspunning.
> Nonetheless, unit tests aren't a panacea. If all you do is just run your unit tests, watch them pass and proclaim "Well, everything works! No need to do any acceptance, integration, system, fuzz or other form of testing! Not even gonna run it, let's just ship!", then that's quite stupid, to put it mildly.
Sure. But I think it's reasonable for your day-to-day code loop to be edit / run unit tests, and to do the fancier testing or manual testing maybe a few times a week.
In a language like Haskell (actually I use Scala) you can get to the same point with compilation. It won't just happen without working at it, you need to think about what invariants are important and then encode them into your types. And you wouldn't want to release/deploy without executing the program or running some tests. But you can get to the point where in day-to-day coding an edit/compile cycle is enough.
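To make "encode them into your types" concrete, here is a minimal sketch, in TypeScript rather than Scala or Haskell, of an invariant (list non-emptiness) carried in a type. The names `NonEmpty`, `fromArray`, and `head` are invented for this example.

```typescript
// An invariant carried in a type (invented names): a list the type
// system knows is non-empty, via a branded intersection type.
type NonEmpty<A> = A[] & { readonly __nonEmpty: true };

// The only way to obtain a NonEmpty<A> is through this check...
function fromArray<A>(xs: A[]): NonEmpty<A> | null {
  return xs.length > 0 ? (xs as NonEmpty<A>) : null;
}

// ...so `head` needs no runtime emptiness check of its own.
function head<A>(xs: NonEmpty<A>): A {
  return xs[0];
}

const ys = fromArray([1, 2, 3]);
if (ys !== null) {
  console.log(head(ys)); // 1
}
```

Once a value has the `NonEmpty<A>` type, every downstream function can rely on the invariant without re-checking it, which is what makes an edit/compile cycle carry some of the weight of a test run.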
C is strongly typed, it just has too few types to enable the really useful benefits of static type checking. If typedef actually let you define new types instead of just defining a shorthand for an existing type, it would be a much safer language.
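TypeScript's plain `type` aliases have exactly the weakness described here for C's typedef: they are shorthands, not new types. A sketch (with invented names) of the difference a genuinely new type makes:

```typescript
// TypeScript's `type` keyword behaves like C's typedef: a shorthand
// for an existing type, not a new type. (Names here are invented.)
type UserId = number;
type OrderId = number;

const orderId: OrderId = 42;
const alias: UserId = orderId; // compiles: the aliases are interchangeable
console.log(alias); // 42

// Branding makes a genuinely distinct type; the __brand field exists
// only at the type level.
type BrandedUserId = number & { readonly __brand: "UserId" };

function brandUserId(n: number): BrandedUserId {
  return n as BrandedUserId;
}

function lookupUser(id: BrandedUserId): string {
  return `user-${id}`;
}

// lookupUser(orderId);  // rejected: a plain number is not a BrandedUserId
console.log(lookupUser(brandUserId(42))); // "user-42"
```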
"Actually, the reality of C is much more complex: C has two different type systems. There is one type system (with types such as int, double and even function types) at the level of the program, and a different type system, defined solely by lengths of bitstrings, at the level of execution. … As a result, it isn't even meaningful to talk about soundness for C, because the static types and dynamic type representations simply don't agree."
Shriram Krishnamurthi, "Programming Languages: Application and Interpretation" (2003), p. 263
That's an outdated version of a bad book. That statement is completely false. Notice how, as Racket's type system improved, he continued to update the book to be more and more pro-types and removed more and more of the deliberate FUD like that?
>There is one type system (with types such as int, double and even function types) at the level of the program
Yes, that is the only type system.
>and a different type system, defined solely by lengths of bitstrings, at the level of execution
That is not a type system. By definition type systems are at the program level. Data classification occurs at the level of execution, and such a system exists in virtually every language whether it has a type system or not.
If you have ever used the phrases “strong typing” or “weak typing”, define them.
That’s what I thought. But thank you for playing.
Indeed, the phrases are not only poorly defined, they are also wrong, because the problem is not with the "strength" of the type checker but rather with the nature of the run-time system that backs it.
Concerning "strong typing", yeah I completely agree. I've wanted to talk about types and programming languages for some time now, but I've had trouble because the word "type" means very different things depending on what programming languages you are familiar with.
I think the thing that people are so excited about for recent type systems is that they have some basis in type theory. So maybe we can call these type systems "mathematically typed" or something along those lines?
I'm not sure if I agree about the highly subjective "power level" claim, but then who's to say.
More specifically, it's quite certain that types are good at solidly ruling out some kinds of errors and tests at catching a different kind. In other words, they are optimized along orthogonal axes and combine well.
Yes, it does verify behavior... "up to" how well your types describe your program's behavior. That is, you get a free proof that your program does at least what it says it does in its types.
If, for example, I give one of my Haskell functions the type "f :: [a] -> a", I know that it a) does not have side effects, and b) must pick some element from the list to hand me back, i.e. it cannot just construct or cast "something" to an 'a' and give me that. (There is a caveat around explicit "unsafeXXX" usage, but that's mostly beside the point.)
Both of those properties constrain behavior and are impossible to prove using unit testing.
It seems my confusion was that 'a' represents a generic in Haskell. Since the parameter is generic the function can't perform any type-specific operations on the value (e.g., creation or modification), and since it doesn't accept a function to apply to the values in the list, the only thing it can do is return one of the values it was given.
Yes, exactly. There's no way to construct a value of an arbitrary type -- outside of non-termination[1] (which doesn't really produce any values) or abuses of unsafeXXX functions. I think I perhaps should have been more explicit in my example. It could also have been specified as
f :: forall a. [a] -> a
which would make it more explicit that the function has to work for any given 'a'.
[1] ... which I think I should have actually specified. The function type given does not preclude non-termination in Haskell. (You can specify such things in Idris/Coq/Agda, but the effort of implementing f can be much higher if you have to prove totality.)
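The `[a] -> a` discussion can be sketched in TypeScript, with the caveat that TypeScript enforces neither purity nor totality, so this only illustrates the genericity argument, not the side-effect guarantee:

```typescript
// A TypeScript rendering of the Haskell type `f :: [a] -> a`. With A
// fully generic, the body cannot manufacture an A (short of an `any`
// cast), so returning an element of xs is essentially its only option.
// Like the Haskell version, it is partial: nothing in the type rules
// out failure on the empty list.
function f<A>(xs: A[]): A {
  if (xs.length === 0) {
    throw new Error("empty list");
  }
  return xs[0];
}

console.log(f([10, 20, 30])); // 10
console.log(f(["a", "b"]));   // "a"
```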
If said 'a' is constrained by the Num typeclass (you can kind of think of it like an interface), then you can perform Num-specific operations on it, such as adding or subtracting (averaging would additionally need division from Fractional or Integral).
It verifies all the behavior that you tell it to. Consider the simple example used to help teach people to use type systems effectively: tic tac toe. You can write an implementation of tic tac toe in which the behavior of the game is type checked. You can even do so in java, although it is certainly more effort than in a more powerful language.
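As a sketch of what "type checking the behavior of tic tac toe" can mean, here is a TypeScript fragment (invented names; only the player-alternation rule is captured, and occupied-cell/win checks are omitted):

```typescript
// A fragment of type-checked tic-tac-toe (invented names): the type
// of `move` flips the player, so the compiler enforces alternation.
type Player = "X" | "O";
type Cell = Player | null;
type Board = Cell[]; // 9 cells

type Next<P extends Player> = P extends "X" ? "O" : "X";

interface Turn<P extends Player> {
  readonly board: Board;
  move(cell: number): Turn<Next<P>>;
}

function turn<P extends Player>(player: P, board: Board): Turn<P> {
  return {
    board,
    move(cell: number) {
      const next = board.slice();
      next[cell] = player;
      // The cast is internal; callers only ever see alternating types.
      return turn(player === "X" ? "O" : "X", next) as unknown as Turn<Next<P>>;
    },
  };
}

const start = turn("X", Array(9).fill(null));
const after = start.move(4).move(0); // X takes the center, then O a corner
console.log(after.board[4], after.board[0]); // "X" "O"
```

A richer encoding could also track occupied cells or game-over states in the types; the point is only that rules of the game, not just data shapes, can be made compiler-checked.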
'So what is "strong typing"? This appears to be a meaningless phrase, and people often use it in a nonsensical fashion. To some it seems to mean "The language has a type checker". To others it means "The language is sound" (that is, the type checker and run-time system are related). To most, it just seems to mean, "A language like Pascal, C or Java, related in a way I can't quite make precise". If someone uses this phrase, be sure to ask them to define it for you. (For amusement, watch them squirm.)'
Shriram Krishnamurthi, "Programming Languages: Application and Interpretation" (2003), p. 263
Is that an opinion you share with the author, and are you referencing a really poor book in an attempt at an argument from authority? Strongly typed is well defined, even when a Racket enthusiast dishonestly tries to pretend otherwise in his Racket marketing book. That nonsense was removed from the book, was it not?
The term was defined by Barbara Liskov in 1974: "whenever an object is passed from a calling function to a called function, its type must be compatible with the type declared in the called function". Very clear, very simple, no nonsensical or meaningless problems at all.
It's absolutely possible for certain values of "power of your type system", "power of your program's types", and "extensiveness of your unit testing system". A few points to elaborate this.
1. Types absolutely can capture (trivially often) invariants which can never be satisfactorily covered by unit tests. This is the difference between proof and evidence.
2. Unit tests are far easier to write for many kinds of program behavior. Even though you can encode essentially anything you want into sufficiently powerful type systems, their proofs can be too challenging to practically build.
3. Type checking works very differently from testing. It's not red-green-refactor. It's more like "describe your problem in types and then slowly discover the only possible program which is correct". The feedback cycle here is very fast, but sometimes more inhospitable. Major research focus is on making this better, though. I'd say some of the most advanced HCI work is being done in dependently typed proof systems... but it's targeted only at people who understand and use dependently typed proof systems to begin with.
4. Unit tests often duplicate functionality which is trivially and automatically inferred by a type system. This is why, for certain invariants, types are "free" and unit tests are both incomplete and a big violation of DRY.
So, essentially, the unit tests which get written for you automatically are the trivial ones which are inferable properties. A dynlang usually does not have sufficient structure to enable this inference so you have to cover it with unit tests or contracts. More advanced, un-inferable properties can be handled and checked on compilation as well, but typically it takes some effort to build this stuff. Interestingly, though, you usually do it with heavy automation driven by the (partial) inference that the compiler provides—it becomes a conversation between you and the compiler as you develop the program which satisfies the type specification you gave.
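One concrete shape this takes is the phantom-type state machine: an un-inferable property (here, "no read after close") encoded so the compiler checks it on every path, where unit tests could only sample paths. A minimal TypeScript sketch with invented names:

```typescript
// A phantom-typed state machine (invented names): the handle's
// open/closed state lives in a type parameter, so "read after close"
// fails to compile instead of needing a test case.
type HandleState = "open" | "closed";

interface Handle<S extends HandleState> {
  readonly contents: string;
  readonly state: S;
}

function openHandle(contents: string): Handle<"open"> {
  return { contents, state: "open" };
}

function readHandle(h: Handle<"open">): string {
  return h.contents;
}

function closeHandle(h: Handle<"open">): Handle<"closed"> {
  return { contents: h.contents, state: "closed" };
}

const h = openHandle("hello");
console.log(readHandle(h)); // "hello"
const shut = closeHandle(h);
// readHandle(shut); // compile error: Handle<"closed"> is not Handle<"open">
```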
I've seen this attitude a couple of times before and I think it's a perfectly valid reaction to have towards these statements because you're correct. It is impossible. However, type systems heavily steeped in type theory get around it being impossible by cheating. For example ...
Consider the halting problem. It's pretty common knowledge that given a program and some input there's no general algorithm that tells you if that program halts when the programming language is turing complete. However, we know that every simply typed lambda calculus program halts if it passes the type checker, so did we defeat the halting problem? No we cheated because simply typed lambda calculus isn't turing complete.
The problem with type theory is that sometimes it excludes good programs that we want to run just because we don't know how to prove that the program is good (or maybe it's even unprovable). So we try to develop new type theories that will allow us to prove more and more good programs are in fact good.
Does quickcheck run at compile time for Haskell? (I've dabbled with Haskell, and I've read about Quickcheck in other languages besides Haskell. In those other languages, Quickcheck definitely does not "happen" at compile time ... does it for Haskell?)
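(For context: QuickCheck properties are executed at test time over generated inputs, not at compile time, in Haskell as elsewhere. A hand-rolled TypeScript sketch of the mechanism, without real QuickCheck features like shrinking or type-driven generators:)

```typescript
// A hand-rolled QuickCheck-style property check: generate random
// inputs at test time and look for a counterexample.
function forAll<A>(
  gen: () => A,
  property: (a: A) => boolean,
  runs = 100
): { ok: boolean; counterexample?: A } {
  for (let i = 0; i < runs; i++) {
    const a = gen();
    if (!property(a)) return { ok: false, counterexample: a };
  }
  return { ok: true };
}

const randomIntArray = (): number[] =>
  Array.from({ length: Math.floor(Math.random() * 10) }, () =>
    Math.floor(Math.random() * 100)
  );

// Property: reversing a list twice gives back the original list.
const result = forAll(randomIntArray, (xs) => {
  const twice = xs.slice().reverse().reverse();
  return twice.length === xs.length && twice.every((x, i) => x === xs[i]);
});
console.log(result.ok); // true
```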
Idunno I write JavaScript primarily and I can go days without running a project. It's a way that I enjoy working. I kind of thought that everyone works this way. When I run my code, I'll usually have a few typos and syntax oversights, but other than that it usually works fine and tests out. Writing in a concise functional style really helps.
To some degree, if you can't write code without running it, it's a sign that you don't really understand the code you are writing.
Ehhh, it's all pretty simple logic. There's nothing different about "non trivial" stuff, other than maybe the opinion that the devs have of themselves.
Exactly what I said. I rarely run my code. When your compiler catches 99% of your errors as soon as you reload your repl, there's far less need to be constantly running your program to see what it is going to do. I know what it is going to do, the code tells me. I only run it when I need to, which is when I am doing something unsafe, when I have made a lot of big changes and want to double check them, or when I need it to run to produce new output for some other program to process.
I'm not sure what you mean by this, because it sounds horribly wrong.