much as how we can’t imagine the color triangle.

I still don’t see the threat, though. Any urge to threaten humanity, if it’s there at all, will have to be programmed in. If we want to create AI that *doesn’t* threaten humanity, the capability of threatening humanity will simply be absent.

Its maximal parameters will be set by the programmers. I don’t even worry about swarm computing (little computers that connect with others).

If some *idiot* releases something too powerful to contain… well then THAT might be a problem. But if someone is intelligent enough to create something capable of that level of destruction, it seems likely that containment of disastrous consequences will be programmed into it in a fashion that the AI *can’t* think about, much as how we can’t imagine the color triangle.
