Hacker News

I have a feeling our disagreement is largely one of terminology.

>the whole point of AI is to get to a point where computers are self-directed; where we don't have to laboriously tell the computer what to do;

There is much in between laboriously telling it what to do and having a self-directed entity traipsing in and out of computer networks. A system that is smart enough to figure out how to accomplish a high-level goal on its own doesn't have to be a self-directed entity. It just needs to be infused with enough real-world knowledge that the supposed optimization problem has a solution.

>because the whole point is to develop AI's that can modify their own neural networks (or whatever internal structures they end up having) as an ongoing process, the way humans do.

I think you give us too much credit. We may guide our learning processes, but the actual modification of our neural networks is completely out of our control. The distinction here may seem useless, but in this case it is important. A supposed non-self-directed but intelligent being would simply not have the ability to direct its own learning processes. We would still have to bootstrap its learning algorithms on a particular dataset to increase its knowledge base. But this doesn't contradict general AI, nor would it be useless. In fact, I would say the only thing we would lose is the warm fuzzies of having created "life". It's still just as useful to us if we're in control of its growth.



>A system that is smart enough to figure out how to accomplish a high level goal on its own doesn't have to be a self-directed entity.

But a system that can do this and is a self-directed entity provides, as I said, a competitive advantage over a system that can do this but isn't self-directed. So there will be an incentive for people to make the latter kind of system into the former kind.

>We may guide our learning processes, but the actual modification of our neural networks is completely out of our control.

As you state it, this is false, because guiding our learning processes is controlling at least some aspects of the modification of our neural networks. But I agree that our control over the actual modification of our neural networks is extremely coarse; most aspects of it are out of our control.

>It's still just as useful to us if we're in control of its growth.

No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities. An AI which can control its own growth is only limited by its own mental capacities, which could exceed ours. Since one of the biggest limitations on human progress is limited human mental capacity, an AI which can exceed that limit will be highly desirable. However, the price of that desirable thing is that, since by definition the AI's mental capacity exceeds that of humans, humans can no longer reliably exert control over it.


>No, it isn't, because if we're in control of its growth, its growth is limited by our mental capacities

I don't know why you think this is true. In fact, we do this all the time. For just about any decent-sized neural network we train, we are incapable of comprehending how it functions. Yet we can bootstrap a process that results in the solution just the same. As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lacking the intellect to comprehend the solution.


>I don't know why you think this is true. In fact, we do this all the time. For just about any decent-sized neural network we train, we are incapable of comprehending how it functions. Yet we can bootstrap a process that results in the solution just the same.

That's because we can define what the solution looks like, in order to train the neural network. We don't understand exactly how, at the micro level, the neural network operates, but we understand its inputs and outputs and how those need to be related for the network to solve the problem.
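This recipe can be made concrete with a deliberately tiny sketch (pure Python, a hypothetical toy; the network size, learning rate, and epoch count are arbitrary illustrative choices, not anyone's actual setup). We hand the training process only input-output pairs and an error measure; gradient descent then adjusts internal weights we never design or read.

```python
import math
import random

random.seed(0)  # deterministic initialization, for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The ONLY thing we specify: the desired input -> output relation (XOR).
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 3  # hidden units: an arbitrary internal structure we never inspect
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # [..., bias]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + w_o[H])
    return h, o

def total_error():
    # The other thing we specify: how outputs must relate to targets.
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)  # output-layer gradient of squared error
        for j in range(H):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # backpropagated gradient
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
        for j in range(H):
            w_o[j] -= lr * d_o * h[j]
        w_o[H] -= lr * d_o

err_after = total_error()
print(round(err_before, 3), round(err_after, 3))  # error shrinks as training proceeds
```

The weights that result are exactly the situation described above: we could print them, but they would tell us nothing about "how" the network computes XOR. We only ever verified the input-output behavior.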

>As long as we are able to formulate the problem of "enhance AI intelligence", it should still be able to solve such a problem, despite our lacking the intellect to comprehend the solution.

You've pulled a bait and switch here. Above, you said we didn't comprehend the internal workings of the network; now you're saying we don't comprehend the solution. Those are different things. If we can't comprehend the solution, we can't know how to train the neural network to achieve it.

If the problem is "enhance AI intelligence", then we will only be able to do so if we can comprehend the solution well enough to know how to train the neural network (or whatever mechanism we are using). At some point, we'll hit a limit, where we can't even define what "enhanced intelligence" means well enough to train a mechanism to achieve it.



