It’s been an ongoing worry since at least Asimov though. All those Sci-Fi stories through the decades were precisely about the same kind of thing. Remember AI is old technology now, and fears of AI have been going on ever since first-gen AI in the early 60s. That’s the source of these stories.
In the 80s, there was the 5th Gen computer project in Japan: parallel computing. MASSIVE project.
Unfortunately, it flopped (funding, fears, economy, etc.), which was a shame, because we ended up getting stuck at 4th gen computers for a long time after that.
===
5th Gen (AI) has been the big promise. I was ready for it in 1990, learning about neural networks, parallel computing, chaos and complexity theory…
I was SURE it was coming. But nope: AI winter instead. Focus shifted to processor speed. Then in the 00s, it shifted again, away from processor speed (since we’d more or less hit that limit) and toward 3D video/matrix technologies and networking speeds.
Finally we’re getting back to 5th gen again but there’s still a long way to go.
===
It was the weirdest thing for me too:
Where did all the mainframes go?
That was the strangest thing. I mean, they still exist and are made, obviously… but the focus entirely shifted. It was a good thing of course: getting smaller computers in the hands of consumers, the smartphones being the culmination, really… and whatever neat stuff comes next.
But now that it’s in the hands of many, where’s the beef at? The raw power?
===
Of course, but SPEED-wise. Remember, though: those supercomputers were built for vector processing (parallel processing), which is a very different task from what iPhones, Androids, and desktop PCs do.
Right now the ‘thing’ is taking low-powered but speedy equipment and networking it, distributing the work over the Internet or LANs, to replicate some of the parallel-processing functionality of those vector supercomputers.
So it’s not really a direct comparison.
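Roughly, the contrast looks like this. Here’s a toy Python sketch (my own illustration, just NumPy plus the standard multiprocessing module, not any particular real system): the same dot product done once as a single wide vector-style operation, and then chopped into chunks and farmed out to a small pool of workers.

```python
import numpy as np
from multiprocessing import Pool

def partial_dot(chunk):
    # Each worker crunches its own slice independently -- the networked/modular style.
    return float(np.dot(chunk, chunk))

if __name__ == "__main__":
    data = np.random.rand(1_000_000)

    # "Vector machine" style: one wide operation over the whole array at once.
    vector_result = float(np.dot(data, data))

    # "Distributed" style: split the array, hand the pieces to separate processes,
    # then combine the partial answers -- same math, very different hardware story.
    chunks = np.array_split(data, 4)
    with Pool(processes=4) as pool:
        distributed_result = sum(pool.map(partial_dot, chunks))

    print(vector_result, distributed_result)   # the two results agree (up to rounding)
```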
===
The GPU performs similarly to the vector CPUs of the Cray, true.
I mean, we’re finally making some advances in the area. Using the GPU for vector crunching (“all at once”) started off with hackers who wondered “what if we could…”, and it’s been progressing from there.
I don’t think there’s yet a computer that runs solely on a GPU, but it’ll come. When it does, we’ll have our pocket Cray.
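The “all at once” part is easy to see in code. A minimal sketch, assuming a CUDA-capable GPU and the CuPy library (just one of several ways to hand vector work to a GPU):

```python
import numpy as np
import cupy as cp   # assumes a CUDA-capable GPU with CuPy installed

n = 10_000_000
a = np.random.rand(n).astype(np.float32)   # ordinary host-side data
b = np.random.rand(n).astype(np.float32)

# A scalar CPU loop would walk these one element at a time; the GPU applies the
# same multiply-add across the whole vector at once -- the same spirit as the
# Cray's vector units, just sitting in a consumer graphics card.
a_gpu = cp.asarray(a)            # copy to device memory
b_gpu = cp.asarray(b)
c_gpu = a_gpu * b_gpu + 1.0      # one element-wise operation over millions of lanes
c = cp.asnumpy(c_gpu)            # copy the result back to the host

print(c[:5])
```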
===
But my point is: we’ve gotten smaller and more energy-efficient, and that stuff is fantastic, of course. We could plug all these things together in a nice server network and perform some amazing feats that were impossible years ago.
It’s modular – it’s almost swarm-like.
But: as development stopped on that style of mainframe, we don’t know what would be possible now.
We could be at 1024-bit vector processing by now; we were at 64-bit vector processing in the early 1980s.
But we’re not. Development stopped with the death of 5th gen computing, and the industry’s focus shifted to the consumer market.
Now it’s like we’re starting up again. I’m not putting down what we’ve achieved in the past 25 years… but really, we could’ve been further along in the “massive power” dept.
===
I know – we’re making progress again, thanks to GPU hacking.
I’m glad. But still, there was a black hole in the middle of it all… before GPUs were powerful enough to be worth using to crunch your database, but after the supercomputers stopped.
It was a hole in time that got filled with other activities. Good ones, but still, it seemed like lost time to me.
It wasn’t called AI winter for nothing. I enjoyed my Internet but was disappointed that all we had now were toys to work with. At least we’re starting to catch up a little.
===
I lived through it. It sucked. In the 80s, when I was a teenager, expert systems were all the rage. Computers were ready to replace doctors. “Future tech” shows showed me, as a kid, all this great promise of the future.
I could taste it. I’d see prototypes on “Beyond 2000” and other shows.
I went to college in 1990/91, ready to learn neural networking and stuff; I ended up learning it from a friend ’cause I couldn’t get into the classes.
AI, robotics, all exciting.
Then: nothing.
No robots. No AI. Nobody talked about it. It was all AOL. Internet. Great: get text from A to B. Oh look, more wires. More people getting on now. Marvelous. Oh! Computers got a higher clock rate; well, that’s cool. Hey, graphics are improving. I like that. Hey, companies are getting online…
No robots. No AI.
1990-ish-early 2000s. I gave up.
One day, a stupid vacuum cleaner comes out that’s a robot. I’m like, “wtf? robots are dead news”.
More robot stuff.
Google starts using the phrase AI. Hadn’t heard that shit in a decade. This was around 2002-5. It was coming back.
I mean you can Wikipedia it if you like: it’s in there too.
But as someone who was ready to get on board with it at 17 yrs old, and watched it disappear from the news and the planet it seemed… it was a long long winter.
Things are much much better now.
===
Of course. Expert systems were a 70s/80s product. Work still continues on OpenCyc (I think it’s called) and such; Amazon’s Mechanical Turk and that CAPTCHA thing were a part of it too.
But they were cumbersome and painstaking like you said.
Machine learning is far better, although semi-supervised learning still seems to fare better than unsupervised, and unsupervised keeps improving. It’s exciting: it’s like watching a baby finally being born after a long time in the womb.
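As a rough illustration of that semi-supervised vs. unsupervised gap, here’s a toy sketch on scikit-learn’s bundled digits data (the split sizes and model choices are just mine, picked for brevity):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)

# Unsupervised: no labels at all, just whatever structure KMeans can find.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
# Score it generously by mapping each cluster to its most common true digit.
cluster_to_digit = {c: np.bincount(y[clusters == c]).argmax() for c in range(10)}
unsup_preds = np.array([cluster_to_digit[c] for c in clusters])
print("unsupervised accuracy:", accuracy_score(y, unsup_preds))

# Semi-supervised: same data, but about 5% of the labels are revealed up front.
rng = np.random.default_rng(0)
y_partial = np.full_like(y, -1)                        # -1 marks "unlabeled"
known = rng.choice(len(y), size=len(y) // 20, replace=False)
y_partial[known] = y[known]
semi = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
print("semi-supervised accuracy:", accuracy_score(y, semi.transduction_))
```

In runs like this, the small handful of revealed labels tends to buy a noticeable bump over pure clustering, which is the point.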
===
Last year, on May 30th, 2015, I set an unsupervised AI out to recognize my face. It did pretty well. I gave it 10,000 tries. I set it to Pokémon music ’cause the baby was REALLY trying its best, and someday it’ll catch all the nuances.
https://www.youtube.com/watch?v=LxQuFUPlkkQ
===
[It never did catch my face after 10,000 tries (you only see a few of them in the video: it didn’t output every one, since graphics systems can’t handle that kind of speedy output just yet), but it got close.]
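The setup above isn’t spelled out, so take this only as a heavily hedged sketch of the general idea rather than the actual experiment: an unsupervised model given 10,000 tries to reconstruct a face image from raw pixels, with no labels anywhere and the reconstruction error shrinking as it goes.

```python
# Heavily hedged sketch: the model, data, and training details here are all
# stand-ins, not the setup used in the video above.
import numpy as np

rng = np.random.default_rng(0)
face = rng.random((32, 32))            # placeholder for a grayscale face crop
x = face.reshape(1, -1)                # one flattened, unlabeled example

n_in, n_hidden = x.shape[1], 32
W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.01, (n_hidden, n_in))
lr = 0.1

for step in range(10_000):             # the "10,000 tries"
    h = np.tanh(x @ W1)                # encode
    x_hat = h @ W2                     # decode: the model's guess at the face
    err = x_hat - x
    # Plain gradient descent on the squared reconstruction error.
    W2 -= lr * (h.T @ err) / n_in
    dh = (err @ W2.T) * (1.0 - h ** 2)
    W1 -= lr * (x.T @ dh) / n_in
    if step % 2000 == 0:
        print(step, float((err ** 2).mean()))   # the error should shrink over time
```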
===