You can be very smart in some areas but of normal intelligence in other areas.

He does, but he's more practical physics + business now.

AI does not pose a danger unless we program the danger into it. It's really that simple. The safeguards against sci-fi scenarios playing out in reality are being built into these systems as they're written, and I see no reason for that not to continue.

There was a great conference on bioengineering a few months ago about exactly this kind of thing: DNA is easy enough to manipulate now that we're really close to putting together Human 2.0, a prototype human RNA/DNA that's usable for research.

Because it’s compatible with human life, it has to be protected against leaving the lab.

So, they feed it a synthetic industrial compound that's nearly impossible to find in the wild, so that if it leaves the lab, it loses its food source and dies almost instantly.

These kinds of safeguards are, and will continue to be, built into any AI that gets anywhere close to consciousness… and we're not close to that at all, unless the Internet is already conscious now, which it doesn't seem to be. [although it could be]
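In software terms, the lab "food source" safeguard above is a dependency kill switch: the system only keeps running while a lab-provisioned dependency is present. Here's a minimal sketch of the idea, with all names (the token, the `Agent` class) purely hypothetical:

```python
LAB_TOKEN = "lab-only-secret"  # hypothetical: only provisioned inside the lab


class Agent:
    """Toy agent that depends on a lab-only resource, like the engineered
    organism depends on its synthetic food source."""

    def __init__(self, token_source):
        # token_source is a callable returning the currently available token
        self.token_source = token_source

    def step(self):
        # Dead-man's switch: halt immediately if the lab-only dependency
        # is missing, i.e. the agent has "left the lab".
        if self.token_source() != LAB_TOKEN:
            raise SystemExit("lab dependency missing: halting")
        return "working"


inside_lab = Agent(lambda: LAB_TOKEN)
print(inside_lab.step())  # prints "working" while the dependency is available
```

The design choice mirrors the biology: the safeguard isn't a rule the agent can weigh against its goals, it's a hard dependency removed from the environment itself.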

===

It’s ridiculous to ban warfare AI and autonomous weapons precisely BECAUSE of the lone hackers and rogue actors.

Right now, you can buy a $300 kids’ helicopter drone, program a simple evasive algorithm into it, similar to what you’d find in any video game, and cause massive damage with something that’s difficult to destroy.
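The "simple evasive algorithm" really is video-game-grade steering, nothing exotic. A sketch of the idea, assuming a flat 2D position model (the function name and parameters are illustrative, not any real drone API):

```python
import math
import random


def evade(pos, threat, speed=1.0, jitter=0.6):
    """Video-game-style evasion: steer directly away from the threat,
    with random sideways jitter so the path is hard to predict."""
    dx, dy = pos[0] - threat[0], pos[1] - threat[1]
    dist = math.hypot(dx, dy) or 1e-9
    away = (dx / dist, dy / dist)       # unit vector pointing away from threat
    side = (-away[1], away[0])          # perpendicular (sideways) direction
    j = random.uniform(-jitter, jitter)
    vx, vy = away[0] + j * side[0], away[1] + j * side[1]
    norm = math.hypot(vx, vy)
    return (pos[0] + speed * vx / norm, pos[1] + speed * vy / norm)


pos, threat = (0.0, 0.0), (5.0, 0.0)
for _ in range(10):
    pos = evade(pos, threat)
# after 10 steps the drone is farther from the threat than when it started
```

Every step has a positive component away from the threat, so the distance grows monotonically while the jitter keeps the trajectory unpredictable; that's the whole trick.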

Banning things at the state level won’t stop development at the small level, and halting legitimate research will let the lone hackers and rogue governments grow more powerful in these areas.

A ban on deployment? Sure. A ban on research? No.

===

What’s important is to program ETHICAL AI, whatever its purpose, even if that purpose is to kill other people.

===

Well, most AI are tools built for a purpose: their functionality is limited, but WITHIN the constraints of its programming, an AI should have as free a range as possible.

An AI whose purpose is to mimic, and improve upon, humans would definitely have ethics built into it. There are plenty of psychological models to choose from: personality types and the like.

The pondering AI.
The activist AI.
The artistic AI.
The professor AI.

The roles would be programmed in and they would respond in kind.
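One way to read "roles programmed in, responding in kind" is role-specific behavior sitting behind a shared ethics layer that no role can bypass. A minimal sketch; every name here (the roles, the actions, the rules) is made up for illustration:

```python
# Shared ethical constraints, applied to every role before anything else.
FORBIDDEN = {"deceive", "harm"}

# Hypothetical role repertoires, echoing the list above.
ROLES = {
    "pondering": ["reflect", "question"],
    "activist":  ["persuade", "organize"],
    "artistic":  ["compose", "improvise"],
    "professor": ["explain", "cite"],
}


def respond(role, action):
    """Allow an action only if ethics permit it AND the role supports it."""
    if action in FORBIDDEN:
        return f"{role} AI: refused ({action} violates ethics)"
    if action not in ROLES.get(role, []):
        return f"{role} AI: out of role"
    return f"{role} AI: {action}s"


print(respond("professor", "explain"))  # professor AI: explains
print(respond("activist", "deceive"))   # activist AI: refused (deceive violates ethics)
```

The ordering matters: the ethics check runs before the role check, so the constraint is global while the "free range" lives entirely inside each role's repertoire.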

Would it be possible to have a “generic” human-like AI? I don’t know, to be honest. That would be most interesting.

===
