Find another machine that can abstract itself from machines in such a way as to no longer be a machine; that decides what is and isn’t a machine; that creates concepts to form theories of categories and encourages other machines to believe some machines over still other machines, based upon competing theories created and built upon by dozens of generations of machines, such as logic, math, physics, religion, aesthetics, language, tools, and other smaller thinking-like machines, all in a fashion that’s arbitrary to itself and to the world around it?

I’ll wait.

=====

How about this:
Humans UNIQUELY make metaphors to their own creations, continually attempt to limit their own capacities by abstracting themselves to fit into such physical systems, and will stay WITH broken metaphors long after their utility has expired?

=====

It’s interesting, but too simplistic a model.
=====
Don’t take offense. That article is the WOW of the 19th-century scientists who declared that we were at the pinnacle of knowledge, right at the cusp of understanding and controlling it all. You don’t see the model the article is following? Of course I read it.
=====
I read quickly, chunking into central concepts and associating them with existing concepts for speedy analysis. Been doing it forever. THIS does not contain sufficient information to draw conclusions from, so I had to read it. But when you post a summary with a link, I use that.
=====
 No.
Sentence 1 is: My method.
Sentence 2 is: What I did in this case, which was to read the article.
Sentence 3 is what I WOULD HAVE done if you had said more than, “An interesting read”.
=======
Bob Grant is rightly amazed by:
feedback loops;
the progress of the past 60 years at the molecular level in biology;
the progress in AI at approximating things humans do better than standard programming methods usually can.
But: the neural model used in AI is flawed in almost all cases, as it is based on a simplified 1940s neuron model that is helpful in pedagogy and pragmatic to treat as a “computer style” neuron-like thing, but is not a biological neuron. Neural networks are doing great things with 60-year-old metaphors, but they are not up to date with neurology.
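For reference, here is roughly what that 1940s unit amounts to (a McCulloch-Pitts-style threshold neuron): a weighted sum pushed through a hard threshold, and nothing more. A minimal sketch in Python; the function name and numbers are mine, purely for illustration:

```python
# Sketch of the simplified 1940s neuron: weighted inputs are summed and
# pushed through a hard threshold. Helpful pedagogy, but not biology.

def threshold_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted input sum reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: an AND gate built from a single threshold unit.
print(threshold_neuron([1, 1], [1.0, 1.0], 2.0))  # -> 1
print(threshold_neuron([1, 0], [1.0, 1.0], 2.0))  # -> 0
```

Everything interesting about a biological neuron, its chemistry, geometry, and timing, is absent from that unit.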
======
You see, there’s hype. A LOT of hype around connectionism. I know because I bought into it for 29 years, and I STILL love it, but I also know what’s “public relations” vs. fact.
=====
First off, parallelism for vector computations is WOEFULLY inadequate. What we find is more akin to fast task switching, with VERY little actual concurrency. This makes a big difference: how does a brain deal with concurrency without running into the queuing lock-ups that computer programs usually hit, unless they’re programmed in Erlang or something? Vector computation has had great improvements with GPUs, but GPUs are built around triangles and matrix math, which is fine for modeling Euclidean forms. But for concurrent parallel computations with leaky integrators, sinusoidal pulses, emotional swaying, implicit knowledge, and a phonological loop that no one can find but we all know is there?
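To give “leaky integrators” some shape: the simplest step beyond the threshold unit is a leaky integrate-and-fire neuron, in which the potential continuously decays toward rest between inputs and a spike fires on each threshold crossing. A sketch only; the constants are arbitrary stand-ins, not biological values:

```python
# Sketch of a leaky integrate-and-fire neuron: the potential leaks toward
# rest between inputs, and a spike fires on each threshold crossing.
# All constants are arbitrary, chosen only for illustration.

def simulate_lif(currents, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the spike times produced by a sequence of input currents."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(currents):
        # Euler step of the leaky dynamics: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:              # threshold crossing
            spike_times.append(step * dt)
            v = v_reset                # reset, then keep integrating
    return spike_times

# A constant drive yields a regular firing rate set by the leak.
print(simulate_lif([1.5] * 100))
```

Even this is still a cartoon, which is the point: it sits much closer to the 1940s unit than to a real neuron.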
=====
On TOP of this, the brain is MERELY a weird outcropping of a complex nervous system, which interacts with its own biome with ITS own complexities, not to mention the “invisible stuff humans know,” like social pressures that we feel but can’t see with our eyes. Quantum computing won’t help with this, not directly. Most of the quantum logic models are simply refactorings of classical logic gates, with VERY little additional functionality provided by the simultaneous vector computations they MIGHT be capable of en masse, and at what energy cost? You see how much cooling they take? Gotta split the HYPE from what’s fact.
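To make the “refactorings of classical logic gates” point concrete, a small illustration (assuming numpy, with qubit states written as amplitude vectors): the quantum X gate is classical NOT rewritten as a matrix, and it is gates like Hadamard, which create superposition, that carry the genuinely non-classical part:

```python
import numpy as np

# Qubit states as amplitude vectors: |0> = [1, 0], |1> = [0, 1].
ket0 = np.array([1.0, 0.0])

# The X gate is classical NOT refactored into matrix form.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(X @ ket0)   # [0. 1.] -- |0> flips to |1>, exactly what NOT does

# Hadamard is the non-classical part: it puts |0> into an equal
# superposition of |0> and |1>, which no classical gate expresses.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
print(H @ ket0)   # [0.70710678 0.70710678]
```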
=====
Hype brings the funding. So do articles like this.
But I don’t got the $$$, so I can “afford to” dissect it.
====
There is, and I want it to continue. I’m part of this “we,” and that’s why I’m critiquing.
=====
My point is: advocate, but save the hype for the doubters. I know what’s possible, but “merely technical hurdles” is what led to Planck and Einstein. It’s what went from Gödel to Church/Turing to these here computers we’re on. That’s why I caution against cockiness. New discoveries await in the technical hurdles, and they will blow all this away.

=====
 I guard against “In 20 years, humans will…” because it’s wrong almost 100% of the time.
=====
Much of neuroscience is “pin and label” at the moment: much work on modeling theories to fit theories, but not much work on algorithmic improvements. Algorithms usually fall in the realm of computer scientists, mathematicians, and logicians. If they’re inspired by actual biology, they will of course be abstracted (for computer science, math, and logic are all “CONSTRUCTED” by abstraction).

But how far away is the abstraction?

Functioning but incorrect models lead to a pragmatic illusion.

See: AI demos in the 1960s, ’80s, and ’00s.

=======
Another example of a pragmatic illusion is found in world religions: working explanations, but incorrect.
=====
