I love Stephen Hawking, but honestly, he’s riding his popularity wave and using his platform to espouse other ideas he believes in. Nothing wrong with that, but he’s turned somewhat into a science politician.
Will robots take over? Sure it’s possible.
But – people like Stephen Hawking will always come along… watchdog groups will always come along – and make sure they don’t.
There will always be someone to pull the plug. We won’t be destroyed. Machines can only get as smart as their creators are theoretically capable of making them. They’re hindered by the flaws in thought that created them. They may advance “what humanity is capable of today” by sheer speed, but not what humanity is theoretically capable of thinking.
I’m not worried.
Kenneth Udut Besides, pure logic is a mythological creation of humans. Biologically, logic requires the amygdala to emotionally “push” an answer one way or the other. Replacing emotions with statistics *may* work, and in that case, the emotions are embedded in the system itself anyway.
Psychologically, it seems the fear of a takeover speaks to the speakers’ own present fears: their lives are built upon fragile systems that they each control.
They control it through people believing in them. Through people listening to them speak. Through people publishing their words. Through people seeing them as authorities.
An AI that, say, controls public opinion and cannot be swayed by eloquent formulas, fascinating ideas, or well-worded talks and convincing arguments that play on trust-in-authority…
…that system might render Musk and Hawking irrelevant. Irrelevance = death.
That’s my 25-cent psycho-babblanalysis, anyway.