Hacker News | qwertyboy's comments

Absolutely. A century has around 36,500 days. If you compose a list of 100 people who died in that century, the chance of hitting the same day more than once is almost 13%. As a matter of fact, even 28 people will get you over 1%.
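The figures above are the usual birthday-paradox calculation, just with ~36,525 days instead of 365. A quick sketch (assuming death dates are uniformly distributed, which is an idealization):

```python
from math import prod

def collision_prob(n, days=36525):
    """Probability that at least two of n uniformly random
    dates in a century land on the same day."""
    no_collision = prod(1 - k / days for k in range(n))
    return 1 - no_collision

print(f"{collision_prob(100):.1%}")  # ~12.7%
print(f"{collision_prob(28):.1%}")   # ~1.0%
```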


"Worse is better" is an incomplete sentence.

Different folks complete it in different ways. Some say that worse is better than complex. Others claim it's better than slow. Others still that it's better than incompatible. The point of the posted link is that worse is better than nothing. One can hardly argue with that.


> The point of the posted link is that worse is better than nothing.

No, that's inaccurate.

For instance, one example the article used to support WiB was that x86 (a CISC architecture) beat out RISC architectures. There were many RISC architectures: MIPS, SPARC, DEC Alpha, PA-RISC. So it's not a case of worse is better than nothing. x86 won against real competition, because it took advantage of evolutionary pressures.


The article contains several examples for the different interpretations. CISC vs. RISC is an example of "worse is better than incompatible". Unix vs. lisp-machine is an example of "worse is better than complex". But the bottom line, for this specific author, is that worse is better than nothing.


You guys do realize that most `find` implementations support both the `-ls` and the `-name` flags, right?

    find ~/Documents -name somefile -ls


Egads! It was so obvious that I kept calling it GNUSpeak in my head, enjoying the cleverness of it all, and only now, following your comment, noticed it's actually GNUSpeech...


The existence of "finance tech entrepreneurs", as you politely call them, and the fact that they are indeed very noisy, does not mean there isn't a community of programmers who actually write code and try to design new decentralized technology.


From reading the article (and not enjoying it very much - see msravi's comment about its handwaviness), the question it's failing to answer isn't "why is light faster than X" but rather "why is the speed of light as it is".


In a lot of ways, that question is just about as meaningful as "why are there electrons?" Looked at from the right angle, you can say that there is only one speed at which everything always moves through space-time; the only difference, really, among speeds is how much of that speed extends timewards from the moving thing's perspective. (What space-time itself gets up to, and what it is apart from something we experience, is a different set of questions altogether.) The idea that we ought to be able to understand everything in principle is a horse long out of the barn.


It's definitely an interesting question. Something that limits everything, moves uniformly no matter the point of view, doesn't interact with itself but does with everything else - it's a singular feature of physics.

If I understand you right, one can wonder things like, maybe 'light' is really 'time' in some sense, not moving uniformly but instead the actual clock that drives the universe. Stuff like that is, at the least, interesting fodder for bull sessions.


The question you might ask is why doesn't EVERYTHING travel at the speed of light. The speed of light is one in some units. We just happened to develop the meter and the second before we could measure the speed of light. These units are totally arbitrary.


Everything does travel at the speed of light. It's the only speed that exists. When you aren't moving relative to the space around you, you are traveling at the speed of light along the time axis. As your speed increases through space, your speed along the time axis decreases to compensate, ensuring that your total speed in space-time is the speed of light.
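The picture above can be sketched numerically. Note this Euclidean combination of the two components is an informal illustration of the idea, not the textbook treatment (which uses the Minkowski metric and four-velocity); the time-axis "speed" here is c·dτ/dt = c·√(1 − v²/c²):

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def spacetime_speed(v):
    """Informal picture: speed through time (c * dtau/dt) and speed
    through space, combined; the total always comes out to c."""
    time_speed = C * sqrt(1 - (v / C) ** 2)
    return sqrt(time_speed ** 2 + v ** 2)

for v in (0.0, 0.5 * C, 0.99 * C):
    print(spacetime_speed(v) / C)  # 1.0 every time
```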


I think Feynman's point is that to be a good scientist, you must keep in mind the difference between the map and the territory.

You perform observations. You think of explanations ("why"s) and try to fit them in a model. You make testable predictions using your model. You test them. If the experiments support your model, then it's a good and useful model. People will use it to achieve cool and terrible things. But it's still just a model. And you have to be ready at a moment's notice, as soon as the empirical data demands it, to drop your model like a hot potato and start looking for a better one.

To do otherwise - to believe that you already know the "why" - is to abandon the scientific method. "Knowing" is the opposite of "learning", and the antonym of scientific progress.


By your description, I would say what Feynman suggests is abandoning the model altogether. Whenever I hear physicists say "we don't understand it and it doesn't matter", or speak of "spooky action at a distance", you can't say they've built a "model" of anything that can be proved or disproved by any "better" model.

They are, in fact, computing numbers, just like Mayan priests, without even trying to put a "god" or a "magic number theory" behind it (as mentioned in Feynman's speech).

PS: this kind of debate reminds me a bit of the one between Chomsky and Norvig, with Norvig saying that numbers and results are all that matter, and Chomsky arguing this isn't even science.


It sounds like you're confused by quantum mechanics and entanglement.

We never really know "why" in that sense, that's a question best left to philosophers, and that's what doesn't matter for us --perpetually asking "why" is not productive. In general, asking questions that cannot be falsified/validated experimentally isn't useful in science, hence they don't really matter to us.

A physical (or scientific) theory is such that you get more than you put into it. Newton's F=ma and inverse square law didn't just explain the motion of Venus; they explained an extremely wide range of phenomena, predicted new testable phenomena, and gave rise to thermodynamics, heat engines and fluid dynamics among many many other things. Maxwell's equations uncovered the link between magnetism, electricity and light (things that apparently have nothing to do with each other --but they do, and the speed of light is related to permittivity and permeability), and eventually gave rise to special relativity. Quantum mechanics predicted --among many many other things-- anti-particles, superfluidity, superconductors. Mayan priests didn't have this. Physical theories do. "Spooky action at a distance" is also an example of this: it is falsifiable, and its existence is experimentally confirmed. Nobody is saying we don't understand it or it doesn't matter. It is just a part of reality, and of (non-relativistic) quantum mechanics.

That model you're referring to is called quantum mechanics, and it has been refined by quantum field theory.

You can't prove a scientific theory either way. There are physical laws that work within a certain domain. They just agree with observations. Until we observe something strange that requires a more refined theory, which however reproduces the old theory within the old domain (because it actually worked). For example, when the speed of light is much greater than any speed, you recover Newton's laws from special relativity and general relativity. When the action is large in comparison to the Planck constant, quantum mechanics turns into classical mechanics. When the mass density is small, general relativity becomes Newtonian gravity. And so on.
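The correspondence limits described above are easy to check numerically. As one example (my own illustration, not from the comment), relativistic kinetic energy (γ − 1)mc² collapses to the Newtonian ½mv² when v ≪ c:

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def relativistic_ke(m, v):
    """Kinetic energy (gamma - 1) * m * c^2 from special relativity."""
    gamma = 1 / sqrt(1 - (v / C) ** 2)
    return (gamma - 1) * m * C ** 2

def newtonian_ke(m, v):
    return 0.5 * m * v ** 2

# 1 kg at 1 km/s: v/c ~ 3e-6, so the two should agree closely
m, v = 1.0, 1000.0
print(relativistic_ke(m, v), newtonian_ke(m, v))
```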

General relativity and quantum field theory will eventually be replaced by something that will (hopefully) explain what's going on inside a black hole, what is dark matter/energy, and so on.


Ok, so I've got another question: why didn't general relativity raise the same kind of debate about its "interpretation"?

It does have its share of counterintuitive predictions (the twin paradox), new concepts that are difficult to grasp (the relation between acceleration and clock time), yet I've never heard a physicist start a general relativity course by saying things like "you won't understand it, and neither do I" (which is what Feynman did in this video, and he isn't the first professor I've seen do this).


GR's conceptual model is fairly clear. It's completely unintuitive and hard to understand. But - so far at least - it's not open to multiple competing interpretations.

QM doesn't have an agreed conceptual model at all.

Stuff happens, and you can predict it statistically with a lot of accuracy. But the math doesn't reduce to a physical explanation that makes sense and everyone agrees on.

No one knows if a wave function is a physical thing, or if there's some other physical process which defines the wave function, or exactly how a statistical process with spatial and temporal indeterminacy gets turned into a physical observation.

These are all complete unknowns. And you can't say you understand something when you have equations that work, but no idea how or why they work.

This matters because when a scientific revolution happens the conceptual model everyone uses is transformed. The math tags along behind as a proof of consistency and accuracy, but it's not the primary driver of change.

If you don't have a conceptual model, you're stuck.


> In general, asking questions that cannot be falsified/validated experimentally isn't useful in science, hence they don't really matter to us.

Does this integral diverge? What does "measurement" used in the Born rule mean? Is this algorithm used in quantum theory internally consistent? Does this result of computation violate relativity theory?

Answers to these questions are not experimentally verifiable, yet they are very important, hence asking such questions is very useful in science and they do matter.

> Quantum mechanics predicted --among many many other things-- anti-particles, superfluidity, superconductors.

Superconductivity was first discovered in 1911, before quantum theory was even formulated, and it was not predicted. The first theory to explain superconductivity was the Ginzburg-Landau phenomenological theory and it was published in 1950.

Superfluidity was discovered in 1937 by Pyotr Kapitza, again not predicted. The first theories of it were Tisza's and Landau's two-fluid models, published in 1940 and 1941.

> "Spooky action at distance" is also an example of this, and it is something falsifiable, and its existence is experimentally confirmed.

It is generally agreed that neither quantum theory nor the measured correlations of light prove any action at a distance. If there were such an action, we could use it for super-luminal communication.


While your post borders on trolling and doesn't change my point, I'll bite this time.

> Does this integral diverge? What does "measurement" used in the Born rule mean? Is this algorithm used in quantum theory internally consistent? Does this result of computation violate relativity theory?

1) What integral? 2) Unless you're trying to play the philosopher, the current consensus on the word "measurement" is "whatever registers in your measurement device". 3) There is no "algorithm" used in "quantum theory". 4) I don't know what computation you're talking about.

Anyway, I think everybody understands what a falsifiable prediction is.

> Superconductivity was first discovered in 1911, before quantum theory was even formulated, and it was not predicted. The first theory to explain superconductivity was the Ginzburg-Landau phenomenological theory and it was published in 1950.

> Superfluidity was discovered in 1937 by Pyotr Kapitza, again not predicted. The first theories of it were Tisza's and Landau's two-fluid models, published in 1940 and 1941.

Gosh, if you're going to nitpick, read it as "explained".

Superconductivity, superfluidity, energy quantization, the constancy of the speed of light, gravity, viscosity - all were known experimentally before. No one had any clue whatsoever about what was going on. Realizing the honey in your jar spills differently than your coffee and coming up with a general law that yields the Navier-Stokes equations are not the same thing.

If you're saying that the person who fell down first discovered gravity and hence Newton's law of gravity doesn't predict anything, then sure, go ahead and say that quantum mechanics doesn't predict superconductivity because there was a guy who measured that the resistance of some material is mysteriously 0 at certain conditions.

Gravity is more than us falling down, and superconductivity is more than just having 0 resistance.

What is your point anyway? That QM doesn't predict anything? Or that if one aspect of a physical phenomenon has been observed before, nothing is allowed to predict it?

Quantum field theory does predict all of these and more.

And if you're looking for fresh phenomena (that no one has ever dreamt of) first predicted by a theory, then it narrows down the list (time dilation, antimatter, entanglement, worm holes etc.), but it still doesn't change the point I made above.

"Ginzburg-Landau phenomenological theory" Dear Wikipedia reader; it doesn't really matter anyway, but do you know what that is? GL theory is a general framework for critical phenomena --you expand your free energy in terms of an order parameter, something that is finite but suddenly vanishes beyond a phase transition (yes, it was first invented for type-I superconductors). Do you understand what a phenomenological theory is? It means they didn't know what was actually going on inside a superconductor. The theory that actually explains superconductivity and mentions Cooper pairs is the BCS theory.

> It is generally agreed upon that neither quantum theory nor measured correlations of light prove any action at distance. If there was such an action, we could use it for super-luminal communication.

"Spooky action at a distance" means entanglement (in Einstein's words, which is what the OP is talking about). And no, it's not really action at a distance; entanglement does not violate causality.


You don't have to understand a model for it to bear fruit.

I'm sure you could find plenty of traders on Wall Street with models of the market who would say "we don't understand and it doesn't matter" as the money flows into their account.


Capability matters, and so long as you aren't exploring avenues that could expand your capabilities, you have something more to do.


Boy, do I have good news for you: http://conkeror.org/PDFViewer

Conkeror is so much Firefox that you can easily fit the former with most things that work with the latter, including the built-in PDF extension. As a matter of fact, the Conkeror "binary" is just a shell script that launches Firefox or xul-runner with the proper config (something like: firefox -app /usr/share/conkeror/application.ini).

Also worth noting is that Conkeror was the original inspiration for vimperator (and thus for uzbl, dwb, luakit and the likes - all very splendid and worthwhile projects), and that this message is written on an editor spawned by conkeror (which, surprisingly enough, happens to be vim).


It is trivial today to sample audio at a rate and sensitivity that greatly surpass both the density of the ferromagnetic material in the tapes and the fidelity of the original recording equipment.

Even "analogue" sound depends on the resolution of the underlying technology, so no, there doesn't have to be any data loss.

And that's just on the philosophical level, about which our limited ears care very little :)
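To put a rough number on the headroom claimed above: the standard ideal-quantizer formula gives a 16-bit sample depth a theoretical signal-to-noise ratio of about 98 dB, comfortably past the roughly 60-70 dB typically quoted for analogue tape (both figures are ballpark, not from the comment):

```python
def quantization_snr_db(bits):
    """Theoretical SNR of an ideal n-bit quantizer for a
    full-scale sine wave: 6.02*n + 1.76 dB."""
    return 6.02 * bits + 1.76

print(quantization_snr_db(16))  # ~98 dB
print(quantization_snr_db(24))  # ~146 dB
```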


First of all, if your business wants me to memorize stuff that has nothing to do with my job so that we can both engage in a game of pretend, I don't want to work there. Who knows what other stupid shit you'll make me do.

But what really alarmed me was the following sentence:

> Don't complain about a system when you can game that system to your advantage!

This is not only morally dubious, it can be downright dangerous. A broken system is unstable, and can quickly turn against you. If you see a fire in your neighbour's yard you should do something about it, even if he is a total asshole.

