This marvelous little article goes through the distinctions and history with some decent clarity. When it included the GPU's important role in the practical realization of neural networks and mentioned how neural networking was shunned by the AI community before that, I knew it was a solid article.
I remember when neural networks were shunned in AI. CPUs just weren’t powerful enough, and they STILL aren’t. Parallel computing on its own wasn’t enough: stacking CPUs didn’t cut it, and neither did networking machines over a LAN or the internet.
I remember those dark times, as I’ve been rooting for neural nets’ day in the sun since 1990/91. Imagine learning about something, seeing ALL of its promise, and reading all around how impractical, wrong, and difficult it is, and not to bother – from the very professionals who SHOULD have been involved in it?
So, I was _really_ glad when GPUs were brought in. Their matrix transformation capabilities were just what was needed to help realize this phase of AI. We needed chips PHYSICALLY designed to handle those kinds of transformations for this work to take place. Software alone was not enough.
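To make the “matrix transformation” point concrete: the core of a neural network layer is a weight-matrix-times-input-vector operation, and every output cell can be computed independently, which is exactly what GPU hardware parallelizes. A minimal pure-Python sketch (toy values, no GPU, just to show the operation):

```python
# The core operation a GPU parallelizes for neural nets: a weight matrix
# applied to an input vector, followed by a nonlinearity. Each output
# element is an independent dot product, so matrix hardware is a natural fit.
def layer(weights, inputs, bias):
    pre = [sum(w * x for w, x in zip(row, inputs)) + b
           for row, b in zip(weights, bias)]
    return [max(0.0, p) for p in pre]   # ReLU activation

weights = [[0.2, -0.1, 0.5],
           [0.7,  0.3, -0.2]]
bias = [0.0, 0.1]
out = layer(weights, [1.0, 2.0, 3.0], bias)
```

Stacks of exactly this operation are what “deep” learning deepens; the weights and activation here are made-up toy values.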
In US culture, the word “artificial” has some strong negative connotations that aren’t always present in the Englishes of other countries. I have a sense that this is what’s being referred to in the OP.
On another note: this piece I just found might be useful, on the distinctions between Artificial Intelligence, Machine Learning, and Deep Learning.
Complexity theory, until recently, has had a _very_ rocky road to acceptance. An “IF AND ONLY IF” “A NOT B” “EXCLUSIVE OR” mindset was prevalent in computing across the board, not just in computer science but in other sciences as well.
They were seen as toys because the only neural simulations (and these were the artificial neurons of the 1940s conceptualization, not brain neurons) were small-scale, and a lot of the reweighting was manual.
“You’re just tweaking to get the result you want”
and a lot of people did that.
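That manual reweighting looked roughly like this (a toy illustration, not any historical code): a single 1940s-style threshold neuron whose weights you tweak by hand until the output matches what you wanted.

```python
# Toy illustration of a McCulloch-Pitts-style threshold neuron with
# hand-picked weights -- the kind of small, manually "tweaked" simulation
# that got early neural nets dismissed as toys.
def neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Weights "tweaked to get the result you want": here, logical AND.
weights = [0.6, 0.6]
threshold = 1.0
outputs = [neuron([a, b], weights, threshold) for a in (0, 1) for b in (0, 1)]
```

With only two weights and a threshold set by hand, the “you’re just tweaking” criticism is hard to argue with; it took learning rules plus scale to change that.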
But unsupervised machine learning progress did pave the way for deep learning. Bayesian methods were the closest the machine learning people generally got to neural networking.
Self-organizing systems can be annoying because the results aren’t always calculable in advance, so I suspect that bias was against it as well.
So it wasn’t just speed going against it. Proponents of what became known as deep learning sometimes spoke with a kind of “religiousness” about it.
This isn’t uncommon in computer science either. Perl and LISP are two languages that come to mind that brought out a lot of religiousness in the people involved in them.
In more popular culture, the rise of the “cult of Kek” and the religiosity of meme culture is a similar phenomenon: something people are deeply devoted to activates whatever “religious centers” of the brain there are.
It starts out ironic and becomes real.
Logic (in the form it takes in internet discussion groups, with ‘fallacy’ replacing ‘sin’) has become a modern religious form as well, with some people claiming that logic is the very foundation of the Universe itself.
So, this somewhat cultish element among neural networking proponents was a turn-off to the more computer-engineering-minded, because for them all wires had to be traceable, every address had to be subject to inspection at any time, and a TRACE should be logical and correlate well to the program.
Any trace of “woo” would get the logical thinker’s ire.
[I made a pun in there without realizing it: “any ‘TRACE’ of woo”… OK, I need more coffee… I’m imagining going through a trace, seeing a confusing memory dump, printing it out, and circling a section with “woo”.]
Only people involved in virtual machines really do this anymore : https://en.wikipedia.org/wiki/Instruction_set_simulator
I think it was only once all computers became virtual machines, and a useful dialog with the machine level became a lost cause, that we in computer science were better able to think in terms of what became deep learning.
We now accept the “magic of the machine” without having to know how all of it works as it’s working. So, I think a human/subcultural (computer science) mental shift had to take place. The programmers themselves became more abstracted from the machines they were using, allowing them to think in further abstractions from the machine.
Well, negative feedback is a basic form of self-awareness. Self-correcting systems are a form of artificial intelligence – a way for the circuit itself to know what it’s “thinking” and make corrections based upon feedback of its own systems upon the environment and the environment upon the systems…
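A minimal sketch of that negative-feedback idea (a thermostat-style proportional controller; all values here are made up for illustration):

```python
# Negative feedback as "self-correction": the system measures its own
# output, compares it to a target, and adjusts by a fraction of the error.
def regulate(value, target, gain=0.5, steps=20):
    for _ in range(steps):
        error = target - value    # the system "senses" its own state
        value += gain * error     # and corrects toward the target
    return value

settled = regulate(value=10.0, target=20.0)
```

With a gain of 0.5 the remaining error halves each step, so after 20 steps the system has essentially settled on the target; that convergence-by-self-measurement is the whole trick.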
Exactly. I think a generation of people who were (for these intents and purposes) raised on computers they couldn’t understand at a 1:1 correspondence – between, let’s say, an area of memory and a software glitch – created the environment that allowed the majority of computer scientists to think in the kinds of abstracted ways needed for a greater acceptance of complexity theory and deep learning.
For example, in the days before the OS was abstracted away from… sorry – abstracted from – the hardware, I had a job at a carpet cleaning company for a short time.
One of their computers had this weird glitch where it could freeze at one particular point in time during one particular activity.
They knew I was a “whiz kid” so they asked me to look at it.
I was puzzled. It could happen while running Geoworks, a graphical UI that looked similar to Windows 3.1 but could run on older machines.
The pattern would be IDENTICAL every time.
Most people didn’t have the internet – and this was even before the AOL boom of the mid-90s, so there wasn’t even AOL to help me.
Then it hit me: It had to be the graphics card with a bad memory chip in it.
It wasn’t a software problem. I knew it wasn’t system memory. I knew it wasn’t a hard drive problem. There wasn’t much left.
So, they drove me to the computer store and I bought them a replacement video card. [it was a Hercules monochrome card – this is how long ago].
Not that this is impossible now, but with more levels of abstraction between network, programs, OS, and hardware, the route isn’t always as direct as it was. But of course things are far better now.
[those early days were dark times. Hard to imagine disconnected computers anymore]
Of course not. But there was a direct correspondence between what you see on the screen and the hardware below. I only had to use logic to find the answer.
Things are better now although video cards still love to overheat.
Do you know what sucks about being raised with this kind of thinking?
I look at my task manager 100 times a day to see what processes are running, just for a moment. If my laptop makes a strange noise, or the internet has a momentary slowdown, my brain automatically goes through the whole system (as much of it as I know) to try to determine if it’s DNS, the router, my WiFi chip, the Waterfox memory leak, the video I’m rendering, or some old adware that I didn’t fully remove reasserting itself through a cookie I temporarily enabled for one site and now affecting others…
and I can’t help but automatically go through this mental process. It only lasts a few seconds, but it’s annoying.
It took me a LONG TIME to learn to ignore winsock and its processes. I don’t think about it anymore. I don’t WANT to be hyper-aware of the goings-on of the layers, not at these speeds.
Best thing to come my way was the iPhone. I couldn’t really hack it (unless I wanted to jailbreak it and we need our contract to stay valid so I can run this business).
It forced me to stop. Trust the system. Let the computer-thing do the computer things. When an app crashes? It doesn’t tell me anything. It just elegantly freezes and goes away, and I start it again.
No error messages. No debugging. No anything.
I *should* be on Linux. It suits my personality better. But I’d probably have another monitor set aside just running a constant trace of the system’s processes, and I’d be in debug mode even when things are working well and get nothing done.
On a nerdy note, I love how the iPhone allocates memory and uses it. I wanted to find a more technical description for programmers and found it just now:
“The virtual memory system in iOS does not use a backing store and instead relies on the cooperation of applications to remove strong references to objects. When the number of free pages dips below the computed threshold, the system releases unmodified pages whenever possible but may also send the currently running application a low-memory notification. If your application receives this notification, heed the warning. Upon receiving it, your application must remove strong references to as many objects as possible. For example, you can use the warnings to clear out data caches that can be recreated later. ”
On an ideological level, this suits me. You have the RAM you have and that’s what you have so make it work. Cooperate with the other apps and let them cooperate with you. It’s a political system in an OS/hardware combination.
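The cooperative pattern the docs describe can be sketched like this (Python used purely for illustration; the class and method names here are hypothetical, not Apple’s API):

```python
# Hypothetical illustration of cooperative memory management as described
# in the iOS docs: no backing store, so when memory runs low the system
# notifies the app, and the app drops caches it can recreate later.
class App:
    def __init__(self):
        self.cache = {}    # recreatable data, e.g. decoded thumbnails

    def fetch(self, key):
        if key not in self.cache:
            self.cache[key] = f"data-for-{key}"   # expensive recreation
        return self.cache[key]

    def on_low_memory(self):
        # Heed the warning: release references so pages can be reclaimed.
        self.cache.clear()

app = App()
app.fetch("a")
app.fetch("b")
app.on_low_memory()    # system signals low memory; app cooperates
```

The data survives only as the ability to recreate it, which is exactly the “cooperate with the other apps” deal: you give back what you can rebuild.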
Installing devices on windows systems sucks. While I like how iOS hides information from the user, I *don’t* like how Windows does it.
I haven’t worked with Macs since OS-9 but I’ve heard it’s far better now since they left Classic Mac OS.
Self-awareness. “Knowing what you’re thinking”. It’s awareness of your system processes at its basic level.
At a fundamental level, it is. That doesn’t mean it’s making sentences or ponders great questions of the universe or is equivalent to humans or dogs or even plants.
It’s a system designed to look at itself and modify its behavior based upon what input is received.
Self-awareness is the result of a systems-check.
OK | NOT-OK
There are subprocesses that don’t enter conscious awareness except in the form of emotions and other non-grammatical forms of communication. Others do spit out results in grammatical forms.
Why do you feel bad? It’s the result of a systems-check.
Why do you feel happy? It’s the result of a systems-check.
You don’t get all of the details because our subconscious processes take care of that for us and spit out the results in some form.
The more you are aware of your own systems, the better your interpretations of the results of the systems checks are.
Why do babies cry? They can’t interpret their systems-checks properly yet.
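The systems-check metaphor can itself be put in code (everything here is a hypothetical illustration of the metaphor, not a claim about how brains work): subprocesses report detailed state, but only a coarse OK / NOT-OK summary surfaces.

```python
# "Systems-check" metaphor: subsystems report detailed state, but only
# a coarse summary reaches "conscious" level; the details stay below.
def systems_check(subsystems):
    failures = [name for name, ok in subsystems.items() if not ok]
    summary = "OK" if not failures else "NOT-OK"
    return summary, failures   # summary = the "feeling"; failures = the why

summary, details = systems_check({"sleep": True, "hunger": False, "pain": True})
```

The better you can read `details` rather than just `summary`, the better your interpretation of the check – which is the point about babies above.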
Of course I’m using metaphors, and all metaphors break down at some point, but they’re a useful aid.
The self isn’t an illusion. It’s a junction area.
Our cognition is embodied. Our “body-space” has been mapped out successfully with fMRI. Did you know Heidegger was correct? The body-space extends to the hammer we’re using; it becomes a part of us as we’re using it.
The car you drive cognitively *is* an extension of you. It is part of you as you drive it.
This computer is part of me as well while I’m using it. That’s just how we work.
This talk of illusion is nonsensical to me. The world is what it is and it is also how we perceive it. The perception is not the illusion; the perception *is* the reality of the world as we go through it and make decisions in it.
Our senses and cognition are perfect for our usage but for objectivity, we compare notes and figure out commonalities in order to get a ‘generic’ view of reality that seems to hold true enough for most human needs.
Are you shifting towards a “and that’s why we have no free will because there are constraints upon our conscious cognitive processes?”
Tell me where the line between reality and illusion is drawn in the body. Here’s a knife.
And who determined the laws of physics? Others being lied to by their brains? It’s turtles all the way down in your way of thinking.
did its magic and /i/ disappeared. Where did /i/ go? How did /i/ come back? State system.
a) a feedback system (I got that covered)
b) intention (define please)
c) experience (state memory and/or hysteresis loops, which would be another electro/mechanical form of experience)
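Hysteresis as a primitive form of “experience” can be sketched as a Schmitt trigger: the output depends on prior state, not just the current input, so the same input can produce different outputs depending on history.

```python
# Hysteresis loop (Schmitt trigger): between the two thresholds, the old
# state persists -- a simple electro/mechanical "memory" of past input.
def schmitt_trigger(signal, low=0.3, high=0.7, state=False):
    states = []
    for x in signal:
        if x >= high:
            state = True      # switch on only above the high threshold
        elif x <= low:
            state = False     # switch off only below the low threshold
        # for low < x < high, the previous state carries over
        states.append(state)
    return states

out = schmitt_trigger([0.5, 0.8, 0.5, 0.2, 0.5])
```

Note that the input 0.5 appears three times and yields False, True, then False: the system’s response is shaped by what it has already been through, which is the sense of “experience” meant above.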
It can also be inaccurate.
They make errors, can be prone to fads and other collectively subjective mistakes, following collective smoke & mirrors down rabbit holes.
But hopefully overall, amongst all the lies, a few truths emerge. That’s why I like the sciences.