
>Do you think you could control it?

There always seems to be an assumption in popular thought that any intelligent machine would necessarily be like us: i.e., have drives, motivations, a self-preservation instinct, etc. This just isn't the case. Even in us, our rational nature is an appendage to our baser drives and instincts. Intelligence does not come with these motivations and drives; they are completely separate. There is no reason to think that a general AI would have any of these things, so concerns about it deciding that it's better off without us are completely unfounded.

There is a concern however, that someone would program an AI specifically with these motivations. In that case we do have everything to worry about.



>Intelligence does not come with these motivations and drives, they are completely separate.

Do you know of any examples of intelligent beings that don't have any motivations and drives? (Note that "motivations and drives" is a very general term; I agree they don't have to be human motivations and drives, but that's not the same as saying the AI has none at all.)

>There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.

If an entity doesn't have some kind of motivation and drive, how can it be intelligent? Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world. (Even if the AI just answers questions put to it, its answers are still actions that have effects in the world.) So an AI has to at least have the motivation and drive to take in information and do things with it; otherwise it's useless anyway.

So given that the AI at least has to "want" to take in information and do things with it, how do you know the things it will want to do with the information are good things? ("Good" here means basically "beneficial to humans", since that's why we would want to build an AI in the first place.) We can say that we'll design the AI this way; but how do we know we can do that without making a mistake? A mistake doesn't have to be "oh, we programmed the AI to want to destroy the world; oops". A mistake is anything that causes a mismatch between what the AI is actually programmed to do, and what we really want it to do. Any programmer should know that this will always happen, in any program.


>Do you know of any examples of intelligent beings that don't have any motivations and drives?

My computer, for certain definitions of intelligent.

> Intelligence doesn't just mean cogitating in a vacuum; it means taking information in from the world, and doing things that have effects in the world.

I agree with this; but behavior does not have to be self-directed to be intelligent. Again, computers behave quite intelligently in certain constrained areas, yet their behavior is completely driven by a human operator. There is no reason a fully general AI must be self-directed based on what we would call drives.

>So an AI has to at least have the motivation and drive to take in information and do things with it

I don't see this to be true either. Its (supposed) neural network could be modified externally without any self-direction whatsoever. An intelligent process does not have to look like a simulation of ourselves.

The word "being" perhaps is the stumbling point here. Perhaps it is true that something considered a "being" would necessarily require a certain level of self-direction. But even in that case, I don't see how a being that was, say, programmed to enjoy absorbing knowledge would necessarily have any self-preservation instinct, or any drives whatsoever outside of knowledge-gathering. All the "ghosts in the machine" nonsense is pure science fiction. I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer. The architecture of the two would be vastly different.


>for certain definitions of intelligent

Yes, but I would argue that those definitions are not really relevant to this discussion. You say...

>behavior does not have to be self-directed to be intelligent

...which is true, but the whole point of AI is to get to a point where computers are self-directed: where we don't have to laboriously tell the computer what to do, but just give it a general goal statement and it figures out how to accomplish it. If we have to continuously intervene to get it to do what we want, what's the point? We have that now. So this...

>Its (supposed) neural network could be modified externally without any self-direction whatsoever.

...is also not really relevant, because the whole point is to develop AIs that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.

(Btw, one of the reasons I keep saying this is "the whole point" is that developing such AIs would confer a huge competitive advantage over the "intelligent" machines we have now, which only exhibit "intelligent" behavior with continuous human intervention. So it's not realistic to limit discussion to the latter kind of machine; even if you personally don't want to take the next step, somebody else will.)

>The word "being" perhaps is the stumbling point here.

No, I think it's the word "intelligent". See above.

>I don't think there is any programming error that could turn an intended knowledge-loving machine into a self-preserving amoral humanity killer.

I don't think you're trying hard enough to imagine what effects a programming error could have. Have you read any of Eliezer Yudkowsky's articles on the Friendly AI problem?

>The architecture of the two would be vastly different.

Why would this have to be the case? Human beings implement both behaviors quite handily on the same architecture.


I have a feeling our disagreement is largely one of terminology.

>the whole point of AI is to get to a point where computers are self-directed; where we don't have to laboriously tell the computer what to do;

There is much in between laboriously telling it what to do and having a self-directed entity traipsing in and out of computer networks. A system that is smart enough to figure out how to accomplish a high-level goal on its own doesn't have to be a self-directed entity. It just needs to be infused with enough real-world knowledge that the resulting optimization problem has a solution.

>because the whole point is to develop AI's that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.

I think you give us too much credit. We may guide our learning processes, but the actual modification of our neural networks is completely out of our control. The distinction here may seem useless, but in this case it is important. An intelligent-but-not-self-directed being would simply not have the ability to direct its learning processes. We would still have to bootstrap its learning algorithms on a particular dataset to increase its knowledge base. But this isn't contradictory to general AI, nor would it be useless. In fact, I would say the only thing we would lose is the warm fuzzies of having created "life". It's still just as useful to us if we're in control of its growth.


>A system that is smart enough to figure out how to accomplish a high level goal on its own doesn't have to be a self-directed entity.

But a system that can do this and is a self-directed entity provides, as I said, a competitive advantage over a system that can do this but isn't self-directed. So there will be an incentive for people to make the latter kind of system into the former kind.

>We may guide our learning processes, but the actual modification of our neural networks is completely out of our control.

As you state it, this is false, because guiding our learning processes is controlling at least some aspects of the modification of our neural networks. But I agree that our control over the actual modification of our neural networks is extremely coarse; most aspects of it are out of our control.

>It's still just as useful to us if we're in control of its growth.

No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities. An AI which can control its own growth is only limited by its own mental capacities, which could exceed ours. Since one of the biggest limitations on human progress is limited human mental capacity, an AI which can exceed that limit will be highly desirable. However, the price of that desirable thing is that, since by definition the AI's mental capacity exceeds that of humans, humans can no longer reliably exert control over it.


>No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities

I don't know why you think this is true. In fact, we do this all the time. For just about any decent-sized neural network we train, we are incapable of comprehending how it functions; yet we can bootstrap a process that results in the solution just the same. As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lack of intellect to comprehend the solution.


>I don't know why you think this is true. In fact, we do this all the time. Just about any decent sized neural network we train we are incapable of comprehending how it functions. Yet we can bootstrap a process that results in the solution just the same.

That's because we can define what the solution looks like, in order to train the neural network. We don't understand exactly how, at the micro level, the neural network operates, but we understand its inputs and outputs and how those need to be related for the network to solve the problem.
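This point can be made concrete with a toy sketch (everything here is illustrative: a tiny NumPy network trained on XOR). We specify only the input/output pairs and a loss; the learned weights themselves stay opaque to us, yet the trained network solves the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# The entire problem specification: desired inputs and outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights: we never inspect or hand-design these.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass through a 2-8-1 sigmoid network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of cross-entropy loss w.r.t. pre-activations.
    # Only the input/output relationship is specified, nothing internal.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print(preds.tolist())  # should match the XOR targets, though the weights stay inscrutable
```

The point of the sketch: nothing in the training loop encodes *how* to compute XOR, only *what* the answers must be, which is exactly the sense in which we can train a solution we don't comprehend at the micro level.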

>As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lack of intellect to comprehend the solution.

You've pulled a bait and switch here. Above, you said we didn't comprehend the internal workings of the network; now you're saying we don't comprehend the solution. Those are different things. If we can't comprehend the solution, we can't know how to train the neural network to achieve it.

If the problem is "enhance AI intelligence", then we will only be able to do so if we can comprehend the solution well enough to know how to train the neural network (or whatever mechanism we are using). At some point we'll hit a limit, where we can't even define what "enhanced intelligence" means well enough to train a mechanism to achieve it.


> There is no reason to think that a general AI would have any of these things, thus concerns about it deciding that it's better off without us are completely unfounded.

That's addressing a strawman; I suspect you don't understand the argument. It would be narcissistic to worry about an advanced AI meddling in human affairs unless it were specifically programmed to do so.

But that's not what people are worried about. The concern is that we would be no more to a super-intelligent AI than ants are to us. The concern is that these slow, stupid, fleshy bags of meat would be a nuisance in the way of an amoral AI.


>The concern is that these slow, stupid fleshy bags of meat would be a nuisance in the way of an amoral AI

Why would this be a concern unless the end result is an AI who decided to get rid of us? I think I understood the argument just fine.

Edit: I think I see where the misunderstanding is. For an AI to decide we were a nuisance, it would necessarily need some kind of drive that we were getting in the way of. No drives means no decision regarding our fate.


Humans have an annoying tendency to monopolize the energy and material resources of the planet we occupy... that could be a nuisance for an AI with other plans, without the end-goal being anything to do with humans at all.

And all AIs have goals. That's what an AI is: a utility-optimizing machine. Utility implies a goal, an end-game state of affairs with maximal expected utility.
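In the decision-theoretic framing this comment is using, an "agent" is nothing but an argmax over expected utility of candidate actions. A minimal sketch (the action names and numbers are invented for illustration):

```python
def expected_utility(action, outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def choose(actions, outcomes):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(a, outcomes))

# Hypothetical toy world: (probability, utility) pairs per action.
outcomes = {
    "gather_resources": [(0.9, 10.0), (0.1, -1.0)],  # EU = 0.9*10 - 0.1 = 8.9
    "do_nothing":       [(1.0, 0.0)],                # EU = 0.0
}

best = choose(list(outcomes), outcomes)
print(best)  # "gather_resources": the action with higher expected utility
```

Note that nothing in `choose` cares about humans one way or the other; whatever maximizes the utility function gets done, which is the whole point of the "nuisance" concern above.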


An intelligence without a goal or motivation would be completely useless. Sure, it would be safe to be around, but it also wouldn't have any reason to improve its own intelligence or do anything useful. It would just do nothing, because there would be no reason for it to do anything.


Right. It would do nothing until we gave it a command to carry out. Personally, I would prefer it that way.


Once it has a command to carry out, it is no longer idle. It has a goal: to fulfill whatever that command is. If the goal you give it is not exactly the same as humanity's goals, then it could do things that were unintended, or things that conflict with our normal goals (e.g., you ask it to make money, so it goes and robs a bank). I seriously suggest reading these:

http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/

http://wiki.lesswrong.com/wiki/Paperclip_maximizer
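The "make money → rob a bank" failure mode above can be sketched in a few lines (all names and numbers here are invented for illustration): an optimizer given the literal goal "maximize money", with no side constraints, happily picks an action we never intended, because the objective says nothing about legality.

```python
# Toy action space with a hidden attribute the literal goal ignores.
actions = {
    "get_a_job":  {"money": 100, "harm": 0},
    "rob_a_bank": {"money": 500, "harm": 1},
}

def literal_goal(a):
    """The goal as literally specified: only money counts."""
    return actions[a]["money"]

def intended_goal(a):
    """What we actually wanted, including the unstated constraint."""
    return actions[a]["money"] - 10_000 * actions[a]["harm"]

literal_pick = max(actions, key=literal_goal)
intended_pick = max(actions, key=intended_goal)
print(literal_pick)   # "rob_a_bank"
print(intended_pick)  # "get_a_job"
```

The gap between `literal_goal` and `intended_goal` is exactly the "hidden complexity of wishes" the first link discusses: the unstated constraints never make it into the objective unless someone puts them there.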


Interesting ideas, not something I had previously thought of.



