
I think with his new job, this site may be getting neglected. Conceptually, it’s this:

Slow learning and fast learning.
Evolution is slow learning. DNA is faster learning. Neurological interaction, with itself and with the environment, is the fastest learning.

All the layers interact with and update each other; each interacts with the environment, and the environment with them.

So basically, at any given moment, everything you’ve learned is used in decision-making and in forming new connections, whether evolutionary, genetic, or neurological. It’s all a dance, as it were, traversing these levels.

This allows for an agile intelligence that is continually capable of adaptation, and whose adaptation is continual.
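
As a toy sketch of that nesting, here’s what it might look like in code. Everything below is my own illustration, not anyone’s actual model: a drifting target number stands in for the environment, a (start, rate) pair stands in for a genome, and a simple error-driven loop stands in for neural adaptation.

import random

# A toy model of three learning speeds. The "environment" is just a
# drifting target number; fitness is closeness to it.

def neural_adaptation(start, target, steps=100, rate=0.1):
    # Fastest layer: within-lifetime, error-driven adjustment.
    value = start
    for _ in range(steps):
        value += rate * (target - value)
    return value

def lifetime_fitness(genome, environment):
    # Middle layer: the genome fixes the starting point and learning
    # rate that the fast layer then works with.
    start, rate = genome
    final = neural_adaptation(start, environment, rate=rate)
    return -abs(final - environment)  # closer is fitter

def evolve(generations=50, population=20):
    # Slowest layer: selection over genomes while the environment drifts.
    pop = [(random.uniform(-1, 1), random.uniform(0.01, 0.5))
           for _ in range(population)]
    for g in range(generations):
        environment = random.gauss(g * 0.1, 0.2)  # the world keeps moving
        pop.sort(key=lambda genome: lifetime_fitness(genome, environment),
                 reverse=True)
        survivors = pop[:population // 2]
        # Offspring inherit with mutation: the slow layer updating itself.
        pop = survivors + [(s + random.gauss(0, 0.05),
                            r + random.gauss(0, 0.01))
                           for s, r in survivors]
    return pop[0]

print(evolve())

The only point of the sketch is the nesting: what the slow layer learns becomes the starting conditions for the faster layer inside it.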

====

An AI computer would be a “box that can learn”.

====

This is a basic overview. It’s missing a point or two, but it’s a good start. https://www.youtube.com/watch?v=WIzsz03X8qc

====

Having worked on and off with AI as a hobby through the decades, I’ve found certain aspects never sat right with me: preloading huge databases first, assuming certain thinking patterns are superior to others, agility with a lack of memory, static tables used for bootstrapping that can’t be updated. Stuff like that.

====

Now, for an extreme in the “mapping known neurology directly to AI” area, you have Jeff Hawkins.

A very inspirational talk for me. It compelled me to pause a quarter of the way through, and I spent a year learning everything I could about spatiotemporal databases, sparse databases, the six cell layers of the human neocortex and what they do, computational theories of brain function, etc.

I think his approach is invaluable, as it uses a direct one-to-one correspondence with what we understand of our brains at a structural level.

It’s a completely different approach to AI; it also has merit, and it was an important step in my learning.

https://www.youtube.com/watch?v=4y43qwS8fl4
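
To give a taste of what I was studying, here’s a toy sketch of the sparse distributed representation (SDR) idea at the heart of Hawkins’s work. The sizes (2048 bits, about 2% active) are typical figures from the HTM literature; the code itself is my own illustration, not his algorithms.

import random

# Toy SDR: a wide binary vector with ~2% of bits active, represented
# as a set of active bit indices. Similar inputs share active bits.

WIDTH = 2048   # total bits (a typical HTM figure)
ACTIVE = 40    # ~2% sparsity

def random_sdr():
    return set(random.sample(range(WIDTH), ACTIVE))

def overlap(a, b):
    # Count of shared active bits; the HTM notion of similarity.
    return len(a & b)

a = random_sdr()
b = random_sdr()

# Corrupt a copy of `a` slightly: a noisy input should still match well.
noisy_a = set(list(a)[:ACTIVE - 5]) | set(random.sample(range(WIDTH), 5))

print("unrelated pair:", overlap(a, b))        # near 0 for random SDRs
print("noisy copy:", overlap(a, noisy_a))      # close to ACTIVE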
====
What they both have in common: they each use biology as a basis.

Most AI uses an ancient view of the “neuron”, developed in the 1940s, with weights and such. Useful? Oh yes. But it’s limited.
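
That 1940s view is the McCulloch-Pitts neuron (1943): inputs times fixed weights, summed, pushed through a hard threshold. A quick sketch shows how little there is to it:

# Inputs times fixed weights, summed, pushed through a hard threshold.
# All this unit will ever "know" lives in the static weights.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The classic demo: an AND gate from a single thresholded unit.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1.0, 1.0], 2.0))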
====
But I usually “back the wrong racehorse”. That is, I rarely pick what’s popular: I bet on Gopher over the WWW, Minix over Linux, user data as a priority over system data, etc.

The direction AI will continue to take will be sourced from here, which I think has an incorrect philosophical approach, building on itself rather than revisiting old assumptions.

But that’s where the money and institutional support are, and we’ll still make progress, even if far more slowly than I’d like to see.

https://openai.com/

====
I can’t complain, though. OpenAI has connections to chip manufacturers, and it’s where we’ll find consumer product development branching from, allowing you or me to easily work on novel projects.

https://blog.openai.com/block-sparse-gpu-kernels/
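
Roughly, the block-sparse idea: carve a big weight matrix into fixed-size blocks, let most blocks be identically zero, and only ever compute on the nonzero ones. Here’s a plain NumPy sketch of the concept; it’s illustrative only, not OpenAI’s GPU implementation.

import numpy as np

# A 16x16 weight matrix carved into 4x4 blocks; most blocks are zero
# and never stored, so the multiply only touches the nonzero ones.

BLOCK = 4
rng = np.random.default_rng(0)

mask = rng.random((4, 4)) < 0.25   # which blocks carry weights (~25%)
blocks = {(i, j): rng.standard_normal((BLOCK, BLOCK))
          for i in range(4) for j in range(4) if mask[i, j]}

def block_sparse_matvec(blocks, x):
    # y = W @ x, skipping every all-zero block entirely.
    y = np.zeros(16)
    for (i, j), w in blocks.items():
        y[i * BLOCK:(i + 1) * BLOCK] += w @ x[j * BLOCK:(j + 1) * BLOCK]
    return y

x = rng.standard_normal(16)
print(block_sparse_matvec(blocks, x))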
