Theriac25's comments

I'd like to inform you that you have bad taste and your post doesn't have anything to do with the article.


He doesn't have to have the private key, only a private key matching a certificate signed by any of the hundreds (thousands, counting intermediate CAs?) of CAs trusted by his browser.


He has to have the private key that matches the certificate he's presenting.

He's presenting the CloudFlare-obtained cert (which the site offers up on request), so the lack of a warning means he's got that private key.

Getting another CA-signed certificate, naming 'www.cloudflarechallenge.com' and matching another private key, would itself be an impressive compromise, though not the challenge CloudFlare made or what he's demonstrating.


See here how to verify that Indutny indeed snatched the private key from Cloudflare’s server: http://dankaminsky.com/2014/04/12/bloody-cert-certified/


CAs will verify that you at least have control over hostmaster@ or an email listed in the WHOIS info for the domain before issuing certs.


>In C you could have made some 'safe buffer'

No, you couldn't have. C is a fundamentally unsafe language, and people like you should stop pretending it isn't.


People (all people) are fundamentally unsafe, not languages. It may be harder to write safe code in some languages than others, but that arguably doesn't make the language unsafe. Most certainly though, it does not make it "fundamentally" unsafe.


Browse through the National Vulnerability Database, and write down the language of each bug.

Discounting bad PHP, a huge majority of bugs are in C/C++ code. And "past popularity" isn't a good explanation, because a lot of the software is fairly recent.

So, the following isn't an opinion. It's simply a matter of observable fact. If you write in C/C++, you are far more likely to introduce security vulnerabilities than in other languages; therefore, unless there's a pressing reason to use these languages, don't.


You are making the claim that there would not be an increase in representation of other languages if they were more represented in the wild. I don't know how you could prove that.

That is, I don't think this exercise really shows what you think it does. Consider, if 90% of the software out there is in c/c++, and you had equal representation of language to vulnerability, then you would expect 90% of the vulnerabilities to be in c/c++. This would not mean that you are more likely to write bugs in those languages. In fact, unless I misunderstand, it would simply mean you are just as likely to have bugs there as otherwise.

Right?


You missed the point, and your characterization of my claim is way off.

1. A huge number of vulnerabilities affect C/C++ programs, almost all of which are memory based.

2. Memory-managed languages take care of this for you.

3. Therefore, C/C++ shouldn't be a default choice in domains where a managed language does just as well (a separate question).

Everything else is tangential.

But here are some reasons why your second paragraph is dangerously wrong (and why my claim was not as you characterize):

1. It only applies if we assume a uniform distribution of security effort across languages.

2. It only applies if we assume that c/c++ is being used for the same class of applications as programs written in other languages, or that the applications have similar attack surfaces.

3. It only applies if we assume a uniform distribution of security effort over all code regardless of age.

and so on...

And also, 90% of code -- especially relatively recent code -- is not written in c/c++.


Look, I don't even really disagree with your hypothesis. I just question whether counting all of the CVEs and "discounting bad PHP, a huge majority of bugs are in C/C++ code" proves much, other than that the majority of attacked applications are c/c++. (Well, likely they are php, but we are tossing that for some reason.)

Especially if you follow that with the claim of "If you write in C/C++, you are far more likely to introduce security vulnerabilities than in other languages." In order for that claim to stick, you have to show not just that there are more CVEs against c/c++ than other languages, but that there is the same effort spent in attacking non c/c++ programs. Right? (Or, am I misreading a claim on that?)

Sure, if you reduce that to "memory vulnerabilities", it is a true statement. However, you did not make that reduction. As you point out in your counter #2, there are plenty of other vulnerabilities out there. What makes you think people are more likely to avoid those than they are memory vulnerabilities in c/c++?

As for the 90% of code not being c/c++, what is the point? Unless you can show that they receive the same level of attack as the c/c++, you cannot really use that to claim that they are inherently more secure. Worse, your throwing of php under the bus just shows that recent languages don't do enough to prevent security mistakes.

Heaven help you if you throw in XSS and friends. Suddenly one of the darlings of the tech industry at the moment, javascript, is rocketing to the top of the list for security blunders.


> It's simply a matter of observable fact. If you write in C/C++, you are far more likely to introduce security vulnerabilities than in other languages; therefore, unless there's a pressing reason to use these languages, don't.

And in the case of crypto code, one could argue that there is a very good reason to use C/C++: to prevent timing attacks.
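As a sketch of what that low-level control buys you: crypto code in C/C++ can compare secrets without early exit, so an attacker can't measure how many leading bytes matched. This is a minimal hypothetical helper, not taken from any particular library:

```cpp
#include <cstddef>
#include <cstdint>

// Compare two equal-length buffers without branching on the data.
// The loop always runs to completion and only OR-accumulates the
// differences, so the running time does not depend on where (or
// whether) the buffers first differ.
bool constant_time_equal(const uint8_t* a, const uint8_t* b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; ++i) {
        diff |= static_cast<uint8_t>(a[i] ^ b[i]);
    }
    return diff == 0;
}
```

A naive `memcmp`-style loop that returns at the first mismatch leaks exactly the timing signal these attacks exploit.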


How hard would it be to develop a fast memory safe language that eliminates timing attacks?


That seems like a great idea.

Timing attacks work when code tries to be fast, taking shortcuts where possible. There's a reason we talk big-O instead of big-Theta: we gladly accept when an algorithm finishes early. This is, in this case, highly undesirable. Shortcuts are great, everywhere but crypto; not unlike recursion being great, unless you're writing embedded code.

A language where every expression takes constant-time to compute seems like a solution. No short-circuiting, no variable runtimes. Don't use sorting algorithms like insertion/quicksort where best-case & worst-case are very different; use selection/merge where best-case = worst-case = average-case. Use constant-time hash functions. Caching is a very prevalent (and somewhat invisible) shortcut that needs to be solved (how can a language disable caching?).
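One building block for "no short-circuiting, no variable runtimes" is a branchless select: choosing between two values without a data-dependent branch the CPU could speed up or slow down. A hypothetical sketch of the usual mask trick:

```cpp
#include <cstdint>

// Return a if choose is nonzero, else b, with no data-dependent
// branch. The comparison yields 0 or 1; negating it produces a mask
// of all-zeros or all-ones, which selects between the two inputs.
uint32_t ct_select(uint32_t choose, uint32_t a, uint32_t b) {
    uint32_t mask = static_cast<uint32_t>(-static_cast<int32_t>(choose != 0));
    return (a & mask) | (b & ~mask);
}
```

A hypothetical constant-time language would effectively compile every `if` over secret data down to something like this.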

Obviously this isn't fast but you can't have both.


> Obviously this isn't fast but you can't have both.

Merge sort is still pretty fast. It isn't the fastest. But safe > fast when it comes to reliability and security.


That's a very good question. I think it's possible that the recent revelations will at least spur some interest in researching this. It seems like we need it badly.


You can imagine a language that's memory safe. In fact, these languages exist today.


Your question is essentially "why is there a difference between theory and practice?".


in theory there aren't


That would be because GNU decided that man pages weren't good enough for them and chose to use info pages instead.


Doesn't seem to support backquotes? For example

    for x in `ls ~/foo`; do echo $x; done
doesn't yield anything remotely interesting.


New-style subshells aren't supported (yet) either:

    for x in $(ls ~/foo); do echo $x; done


> this is typically the end game for capitalism and globalisation

Say what?


tl;dr if you don't know what you're doing, you don't know what you're doing.


Game programmers are weird.


I'm not sure if it's game programming or C++ that makes him say what he's saying. I can't speak to his level of competence, but in writing my own game library (mostly for education, but also out of discomfort with tools like Unity) I have found myself ambivalent about visibility and find myself making a lot more stuff public than I otherwise would due to the (wonderful, why oh why don't more languages steal this) constness in C++. Constness can often replace private visibility for data--i.e., I don't need getters or setters. A field may be immutable once the object is instantiated, in which case I make it a const field, or it may be mutable, in which case the object's constness will handle it for me. (There's still the case where I need to perform transformations on a piece of data either on the get or the set, of course, and it's there that private usability is still of use.)
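The "const fields instead of getters and setters" idea might look like this (`Sprite` here is a made-up illustration, not the commenter's actual class):

```cpp
#include <string>
#include <utility>

// A const data member can be safely public: it is fixed at
// construction, so no getter is needed to protect it. Mutable
// state can also stay public, because passing the object as
// const Sprite& freezes it for the callee anyway.
struct Sprite {
    const std::string texture_name;  // immutable after construction
    int x = 0;                       // mutable position, still public

    explicit Sprite(std::string name) : texture_name(std::move(name)) {}
};

// Callers holding a const reference cannot modify x either.
int peek_x(const Sprite& s) { return s.x; }
```

This is the sense in which constness replaces private visibility for plain data: the access decision moves from the class definition to each use site.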

I came to C++ through a circuitous route and a lot of my early C++ is very Java- and Scala-influenced - private vars all over, getters, setters. I find my code becoming much more "public-first" as I get better at what I'm doing. That said, I'm uncomfortable with the idea of public being the default visibility; I think I do prefer to have to make that decision consciously while designing my APIs.


Constness is a really useful concept especially since it has been extended to also mean thread-safe (at least in the standard library), but has a bunch of design flaws that make it very difficult to use correctly.

1. It is not the default. Everything should be const by default and should be marked mutable when necessary. This even seems to be consensus in the committee and has been done for newer language features (lambdas).

2. It is not "deep" when pointers are used. This one is actually inherited by C, but still is really painful. This is fixed in D.

3. const_cast is legal C++ in most circumstances (the only exception being an object declared const). This invalidates most assumptions optimizing compilers could possibly make and takes away a lot of usefulness because there is always someone that is going to const_cast stuff around.

4. It becomes very hard to use with polymorphic types, because const-ness is part of the function signature and affects override behavior. Can you really safely say something about const-ness and thread-safety of all classes you could possibly derive? I often find I cannot, and when I really need runtime polymorphism I will usually end up with lots of const-less member functions.
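Point 2, the shallowness of const through pointers, is easy to demonstrate with a made-up linked node type:

```cpp
// const applies to the object itself, not to what its pointer
// members point at: inside demo, n->value is protected, but the
// Node reached through n->next is freely mutable.
struct Node {
    int value;
    Node* next;
};

void demo(const Node* n) {
    // n->value = 1;         // error: n points to a const Node
    if (n->next) {
        n->next->value = 42;  // compiles: const does not propagate
    }                         // through the next pointer
}
```

In D, `const(Node*)` is transitive, so the equivalent assignment through `next` would be rejected.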


1. Agreed.

2. Agreed. I like D, or I want to anyway. Main problem is the garbage collector; I'm writing a game that I want to port to mobile and I don't really trust a garbage collector in a low-perf environment. Rust is sort of interesting for that reason but is a few years from being mature enough to consider I think.

3. Also agreed for projects in the large, but for the most part the C++ I write (for an idea of the project size: my utility game library is ~14KLOC over about 200 compilation units; the game will probably be around half of both) is all my code, and so I have certain assurances. The only one I'd be hurting is me. And I am averse to that. =)

4. This I'm not really so sure about. I have never really run into this issue - my base classes are generally close to all-virtual whenever possible and only expose a fairly limited set of methods. I have run into what you describe--the deepest nesting of polymorphism I have in my current project is in my drawing code, where a DrawSource is basically a time -> (rect, textureID) mapping and a Sprite is a timekeeper for DrawSource that performs a little matrix manipulation before invocation and has a list of child Sprites. I solved this by pulling everything I didn't need out of Sprite entirely and presenting a very simple interface. (I'm liberated a bit in that this is all my code, though, so I generally know what types I have bouncing around.)

In any case, I'll take flawed C++ const over "new ImmutableCollectionBecauseWeDontHaveConst<T>(someList)" when I can. I like Scala quite a bit but this mutable/immutable division drives me up the wall.


Java.

