That was an excellent read. While I don't disregard the potential of a computational model of the brain being an actualizable brain (and I'm a fan of connectionism (of greater sophistication than the simple 1940s models, which we've gotten better at surpassing since the early 90s) as a potential cause of measurable macro-processes (hidden Markov models and such), and less so of the symbol-passing models of Turing-era/1950s AI)…
…Penrose does seem to give a somewhat reasonable, if difficult to work with, causal account via the effects of quantum gravity on microtubules… although I still feel it may be premature to declare it victorious because, as a model of chaos, it's bound to be difficult to attain any level of provability for physical systems without extensive simulations of both the mathematical and the physical kind.
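(For anyone who hasn't bumped into them: a hidden Markov model is about the simplest version of that idea, a measurable macro-sequence generated by micro-states you never observe directly and can only infer. A minimal sketch, with toy numbers of my own rather than anything from the article:)

```python
import numpy as np

# Toy HMM: two hidden "micro-states" we never observe directly,
# emitting a stream of coarse, measurable observations (0 or 1).
A = np.array([[0.9, 0.1],   # transition probs: P(next state | current state)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # emission probs: P(observation | hidden state)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])   # initial distribution over hidden states

def forward(obs):
    """Forward algorithm: likelihood of an observation sequence,
    marginalizing over all possible hidden-state trajectories."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Likelihood of the macro-level sequence 0, 0, 1, 1
print(forward([0, 0, 1, 1]))
```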
Even then, it's useful to continue with standard symbol-passing algorithmic computational models for macro-processes, because they're far easier to work with than chaotic systems… so long as we remember that the finger pointing at the moon isn't the moon.
[The computer is like a brain; the brain is not like a computer. It's easy to mistake the model for the actual… and too often over the past few decades the lesser has become the greater, and we've forgotten that the analogy has been flipped.]
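(On the "chaos is hard to attain provability for" point: even the simplest chaotic systems make long-run prediction hopeless without exact initial data, which is exactly why extensive simulation, let alone proof, gets so hard. A toy illustration using the logistic map, my example and not Penrose's:)

```python
# Sensitivity to initial conditions in the logistic map, a standard
# toy chaotic system. Two trajectories starting 1e-10 apart diverge
# until they are effectively uncorrelated.
r = 4.0                     # fully chaotic regime
x, y = 0.2, 0.2 + 1e-10     # nearly identical starting points

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x - y| = {abs(x - y):.3e}")
```

(By around step 50 the gap is of order 1, i.e., the trajectories have nothing to do with each other anymore, despite agreeing to ten decimal places at the start.)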
How does this bear upon third-wave AI? (Are we even at a third wave? Hm, we're probably still in the second wave, continuing the work from the late 80s.)
I don't know. But one thing I do suspect is that with the provability of consciousness (which was, I think, the main thrust of the article), we may be going about things backwards as well. Necessarily so, for now… but at some point the question of "What's in the box that we can't touch or know?" ought to be addressed with more than a dismissive "the box is empty, because… [this model seems to do a decent job of replicating what may be in the box without requiring any subjectivism whatsoever]."
In short: empathy. The issue with the Turing test is one of empathy. Ours. Do we feel it is human-like enough? We have to address the use of our own empathetic processes in determining consciousness, and _not_ attempt to eliminate them altogether by suggesting they don't exist in the first place (that strikes me as a cheater's way out), but rather make use of those same empathetic processes and use THAT as a clue in determining consciousness in a non-human, human-created device.
Of course, this raises the issue: what of fakers? Sociopaths? That is the fear of AI: that it will be sociopathic. Well, how do we determine such things in humans? Is there _truly_ such a thing as a lack of empathy, or is it misplaced empathy?
Anyway, these are a few ramblings. Thoughts, dismissals, whatever: all are welcome. In any case, this article inspired them… and I'm reminded that I should edit what I write sometimes instead of blasting out "first thoughts". But such is how I do.
===
I really should read more. I based my thoughts solely on this article, as I haven't read anything of Penrose's in a very long time. I should, though: he is brilliant, although I remember a few things he said back in the early 90s that I found disagreeable… I just don't remember what. Overall, though, he always left me with a mostly positive vibe.
===