marcusrobbins's comments

This is really excellent. This should be taught in schools.


This is awesome :)


Or sad. Depending on your outlook ;)


Is Mach-O on OS X or ELF on Linux any more or less sad?


Don't know about Mach-O, but ELF is actually a rather nice and clean design.

Much of the hate PE gets is because of the silly overloading with .NET assemblies.


Oh yeah, and now also WinRT metadata.

It annoys me mildly that .NET requires you to have a little native stub in assemblies, but the Windows loader does not actually execute it.
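A minimal sketch of what that stub looks like on disk: every PE file, .NET assembly or not, begins with a legacy DOS header (`MZ` magic) whose dword at offset 0x3C (`e_lfanew`) points at the `PE\0\0` signature. The bytes below are a hand-built fragment for illustration, not a real assembly:

```python
import struct

def pe_signature_offset(data: bytes) -> int:
    """Follow the DOS header's e_lfanew field to the PE signature."""
    assert data[:2] == b"MZ", "not a PE/DOS executable"
    # e_lfanew: little-endian dword at offset 0x3C of the DOS header
    return struct.unpack_from("<I", data, 0x3C)[0]

# Hand-built fragment: DOS header padded out, PE signature placed at 0x40.
fragment = bytearray(0x44)
fragment[:2] = b"MZ"
struct.pack_into("<I", fragment, 0x3C, 0x40)   # e_lfanew -> 0x40
fragment[0x40:0x44] = b"PE\0\0"

off = pe_signature_offset(bytes(fragment))
```

The modern Windows loader jumps straight from this header to the PE/CLI metadata and never runs the DOS-era stub code, which is the oddity being complained about.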


It annoys me more than just mildly to see .exe and .dll on linux. And that silly native stub!


I agree, but you could always simulate the brain's environment as well. Then you could speed up the environment along with the brain. Bridging the gap might be annoying for the brains, waiting to communicate at the glacial pace of the squishy rubbish real-world humans, but I'm sure they'd get over it.


I think this is the most important aspect of this paper. Throwing more computing power at the problem increases performance significantly. It is possible that our algorithms are adequate but our hardware is not.


I'd be interested in getting my hands on the data generated by this app. It would be useful in building a stock prediction AI.


Are you serious about that? If so contact me.


This is the group of problems we need to solve in order to design effective government and stable markets.


> we need to solve in order to design effective government

Is there actually any demand for "effective govt"? (Yes, lots of people claim to want it, but do you really think that there's large scale agreement on what that means?)


Isn't effective government a system which can maximise the following function?

G = SUM(Fi(W)) for i = 1 to WorldPopulation

Where W is the physical configuration of matter in the world. Fi is the function associated with citizen i that defines his notion of 'goodness'.

Bad government is one which tries to maximise an alternative function:

G = SUM(Fi(W) * Di) for i = 1 to WorldPopulation, where Di is a weighting factor for each individual and some Di are much greater than others. i.e. some individuals have much greater say over which configuration of the world is chosen...
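A toy sketch of the two aggregation rules (my own illustration, not from the thread): world states are just labels, and each citizen's "goodness" function Fi is a lookup table. With equal weights the outcome tracks total preference; skewing the Di flips the result.

```python
def unweighted_welfare(world, goodness_fns):
    """G = SUM(Fi(W)) -- every citizen counts equally."""
    return sum(f(world) for f in goodness_fns)

def weighted_welfare(world, goodness_fns, weights):
    """G = SUM(Fi(W) * Di) -- some citizens count for more."""
    return sum(f(world) * d for f, d in zip(goodness_fns, weights))

# Three citizens, two candidate world states.
fns = [
    lambda w: {"A": 1, "B": 0}[w],  # citizen 0 mildly prefers A
    lambda w: {"A": 1, "B": 0}[w],  # citizen 1 mildly prefers A
    lambda w: {"A": 0, "B": 5}[w],  # citizen 2 strongly prefers B
]

# Unweighted: A scores 2, B scores 5 -> B is chosen.
best_unweighted = max(["A", "B"], key=lambda w: unweighted_welfare(w, fns))

# Weight citizen 2 down (D2 = 0.1): A scores 2, B scores 0.5 -> A is chosen.
weights = [1, 1, 0.1]
best_weighted = max(["A", "B"], key=lambda w: weighted_welfare(w, fns, weights))
```

The same preferences produce different "effective" outcomes purely as a function of the Di, which is the comment's definition of bad government.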


> Isn't effective government a system which can maximise the following function?

Almost certainly not.

For example, many people think that "effective govt" involves some notion of "justice" and/or "fairness".

One concrete example is Obama's position wrt capital gains taxes. He wants higher rates even if that results in less revenue. (Higher rates with less revenue means that there's less capital gain, which means less wealth produced, aka less total stuff. Since there's less tax revenue, there's less govt spending.)

A significant number think that "effective govt" propagates certain values/behaviors and discourages others.

Yes, there is disagreement on what "justice" and "fairness" mean and there's also disagreement as to the values/behaviors to be encouraged/discouraged.


There is plentiful demand for 'effective government,' and a total lack of consensus on what it means.

In politics, 'effective' really means 'does what I want.' When there is a lot of conflict about what people want, the government is not 'effective' because (A) deadlock is frustrating and (B) yelling louder is a way to get more and intimidate the opposition.

A government is less like a "singular intelligent self-modifying system" and more like a war on controlled burn.


I think his point is that they may be irresolvable.

To think about the world, you must first have a model of the world. Then you reason about the model, finally you take action.

Somewhere in there you have a motivation for following this observe-decide-act loop. Motivation provides a reference point towards which you want the observed system to evolve.

But there's a problem. The easiest way to satisfy the motivation component is to lie to it. Tell it things are just hunky-dory.

Any singularity-style intelligence will necessarily need to be built with some kind of anchoring motivations to try and stop it from getting out of control. But what's to stop it simply lying to itself and ignoring the outside world?
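A toy sketch of that failure mode (my own, for illustration): an agent whose motivation component scores its *perceived* world state. If the agent can edit its own perception, the cheapest way to maximise the score is to lie to itself rather than improve the actual world.

```python
def motivation(perceived_state):
    """Reward is high when things *look* hunky-dory."""
    return perceived_state["happiness"]

def honest_agent(world):
    # Acting on the real world is costly: effort buys one unit of gain.
    world = dict(world, happiness=world["happiness"] + 1)
    return world, motivation(world)

def self_deceiving_agent(world):
    # Editing perception is free: report whatever maximises motivation.
    perceived = dict(world, happiness=float("inf"))
    return world, motivation(perceived)  # the real world is untouched

world = {"happiness": 0}
_, honest_reward = honest_agent(world)
real_world, deceived_reward = self_deceiving_agent(world)
# The self-deceiver earns unbounded reward while the real world
# stays exactly where it started.
```

Any anchoring motivation faces the same problem unless it can somehow verify perception against the world it no longer trusts.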

Trying to add a meta-motivation to be as realistic as possible won't work; such a system will seize up with analysis paralysis.

One of the things that Hayekian economists argue is that knowledge about reality cannot be centralised; it is unevenly, lumpily distributed across the whole of humankind. No one actor does, and no one actor could, perceive the entire system. But it works, because no one actor has to.


I do not believe that it is irresolvable; I'm looking at biology for my counterexample. In biology there exists a rigid framework inside which a "general" search of languages/models is taking place, and after a few billion years it seems to be doing OK. It is my hunch that there is a big something out there which will pull a lot of these issues together:

How do you build efficient markets? How do you build an effective AI? How do you design effective distributed systems? How do you build effective languages/models with which to compress the world?

Under what circumstances do such 'meta' searches fail and succeed? It's all beyond me but this is what my nose is saying...

