Properly managed self-modifying code isn’t that bad, though. Just because some areas of the code are self-modifying, such as genetic algorithms, NOT ALL systems are self-modifying.
They’re still in containers. They still are within their own routines.
So while one aspect of code will be self-modifying, the whole system isn’t.
Even with genetic programming, there are constraints to the system.
I wrote self-modifying, self-generating code WAY BACK in the early 1990s.
I used Turbo Pascal 4.0 for MS-DOS.
It’s not that hard to do. What’s being talked about is that it’s considered “not best practices”, but the ability to do so isn’t new. When you open up a PostScript document, you’re running a virtual Turing machine that uses self-modifying code in order to display the document.
Think of PostScript like trained ants that scramble around the page to form words and pictures.
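To make the “containers” idea concrete, here’s a minimal sketch in Python (a stand-in; it’s not the Turbo Pascal I used back then, and the names are mine): one routine regenerates its own body from a string at runtime, while everything around it stays ordinary, fixed code.

```python
# Minimal sketch of *contained* self-modifying code: only make_scorer()
# rebuilds code at runtime; the rest of the program never changes.

def make_scorer(weight):
    # Generate the routine's source, then compile and execute it into an
    # isolated namespace -- the self-modifying region stays in its box.
    src = f"def score(x):\n    return x * {weight}\n"
    namespace = {}
    exec(src, namespace)
    return namespace["score"]

score = make_scorer(2)
print(score(10))        # 20
score = make_scorer(5)  # "modify" the routine by regenerating it
print(score(10))        # 50
```

The self-modifying part lives entirely inside one routine and one namespace, which is the constraint the paragraph above describes: the system as a whole is not self-modifying.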
Before indulging *too* much into speculative writings about AI, consider first how humans gauge the answer to the question:
“What Is Human Intelligence?”
Let me give an example from the history of “What is Human Intelligence?” as measured against monkeys. I found it on http://c2.com/cgi/wiki?ArtificialIntelligence [* more on c2 below]
Monkeys did not have human intelligence because they did not have language.
Monkeys (apes) were found to have language and construct novel sentences according to a grammar.
Monkeys were found to be capable of lying. This indicates that they had an awareness that the human at the other end of the conversation was capable of misunderstanding. This strongly suggests self awareness/sentience.
Move the goal posts. Monkeys do not have human intelligence because they do not use tools.
Monkeys were observed to use tools.
Monkeys do not have human intelligence because they do not make tools.
Monkeys and a species of bird were observed to make tools.
Monkeys were also observed to kidnap, exact revenge, have wars, and all sorts of other human traits.
* [c2 is a Wiki that isn’t Wikipedia. It’s *very* nerdy in nature and contains a lot of computer nerd jokes that are strange and terse while also wordy and choppy. It has its own cultural style to it. But the dry wit within it should give you a closer idea of the reality of AI than you find in news outlet stories.]
More from the minds of computer science (which, while also speculative about future possibilities, comes with somewhat of a computer engineer’s ‘grounding’: when you do programming, you have to think through EVERYTHING logically, because your program won’t function properly if you don’t think of *everything*). So, more from the c2 wiki page:
— on the problem of enacting a fuller AI —
I think the biggest problem with AI is lack of integration between different intelligence techniques. Humans generally use multiple skills (visual, logical, deductive, inductive, case history, simulation, etc.) and combine the results to correct and hone in on the right answer.
It takes connectivity and coordination between just about all of these. Lab AI has done pretty well at each of these alone, but has *not* found a way to make them help each other.
There are things like Cyc, but Cyc has no physical modeling capability; it couldn’t figure out how to fix a toy beyond the trivial, for example. At best it would be like talking to somebody who has been blind and without a sense of touch all their life. Even Helen Keller had a sense of touch to understand the physical dimensions of something. Cyc was designed to give something basic common sense about life, not novel thinking or walking around a room. Thus, it lacks the integration of skills listed above. There are AI projects that deal more with physical models, but nobody knows how to link them to things like Cyc so they can reinforce each other. That is the bottleneck. We can build specialists, but we just don’t know how to connect them to make a generalist who can use a variety of clues (visual, logical, deductive, inductive, case history, simulation, etc.).
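Here’s a toy Python sketch (entirely hypothetical functions and scores, mine, not from Cyc or any real system) of what “connecting specialists into a generalist” would even mean: each narrow skill scores candidate answers independently, and a combiner weighs the clues together to hone in on one.

```python
# Each "specialist" is a narrow skill that scores candidate answers.
# The "generalist" step combines their independent clues.

def visual_clue(answer):        # stand-in for a vision specialist
    return 0.9 if answer == "wheel is loose" else 0.2

def logical_clue(answer):       # stand-in for a deduction specialist
    return 0.8 if "loose" in answer else 0.1

def case_history_clue(answer):  # stand-in for a case-based specialist
    return 0.7 if answer == "wheel is loose" else 0.3

SPECIALISTS = [visual_clue, logical_clue, case_history_clue]

def diagnose(candidates):
    # The integration step: sum the evidence from every specialist and
    # pick the answer the clues jointly favor.
    return max(candidates, key=lambda a: sum(s(a) for s in SPECIALISTS))

print(diagnose(["wheel is loose", "battery is dead"]))  # wheel is loose
```

Of course, in this sketch the hard part is faked: the specialists already speak a shared language of answers and scores. Building that shared language between real vision, logic, and case-history systems is exactly the bottleneck described above.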
In short, AI has a very very long way to go. Quantum computers won’t be any additional help, except perhaps allowing for faster decision-making among uncertainties, but here’s the thing:
You’re STILL computing multi-dimensional vectored probabilities. This is not easy, even for the most powerful of computers. Fuzzy vectors will be even harder to get right because:
on top of this, and more importantly,
someone has to write the code correctly.
Of course it won’t be ‘someone’: it’s a PLANETARY effort right now. Open source code systems are wonderful because they allow anybody to cobble together anything that’s out there and see what works.
Yet perhaps private industry will be first, as they were with genetics.
Or a clever professor may have a clever set of students.
I suspect the first decent, workable human-like AI will come from a teenager’s bedroom: someone who grew up as a maker, with Arduinos and robotics, someone surrounded from a young age by the toys of artificial intelligence and robotics that we have now.
Bored, clever, sees things from a perspective unencumbered by funding requirements, specifications, hierarchical approval processes, media concerns…
…from that perspective, boredom, cleverness, the first human-like AI will truly emerge.
But you or I won’t see it. It’ll stay in nerdy forums. They’ll be sharing with each other, people building off of the ideas of others.
It’ll probably start off as a realistic AI in a collaboratively produced video game.
When it emerges on the public scene, we won’t even recognize it as the human AI so feared by many because it won’t be something to be afraid of.
Because within these creative communities, the game is to be the best at what you can do. Not a “take over the world” agenda. These communities who share code and ideas in the open source community are what created most of the internet we know and love today.
Then again, perhaps it’ll come from Virgin or IBM, Apple or Microsoft. But I think what we’ll get would STILL be more of a puppet show and not the real thing. Useful puppet shows, but not living consciousness.
If intelligence is an emergent property of complexity, which it may be, then I’d have to consider the Universe to be Intelligent. The photons fighting for survival at the centers of suns, slowly working their way against gravity to reach the surface, would then likely be an intelligent consciousness.
I can think of so many examples of complexities with cross-linked interactions that are likely conscious if that’s the case. It may be so and honestly, I hope it is. The internet is conscious? I’m all for that notion, but then I’d have to give that to the telephone system before it, and the telegraph system before that.
But barring generalized emergent properties, people creating artificial intelligence (don’t forget that it’s _people_ doing these projects; they don’t just ‘happen’, as far as we know, outside of our own) have to think of everything and get it right.
Perhaps the answer *is* figuring out the simple base algorithm: we may have it already and be using it, and then adding moar powr to it.
Here’s the basic loop. It’s known as double loop learning.
10 THINK (IMPLICIT/PREJUDICE/ASSUME)
20 DO (ACTION)
30 GET (RESULTS)
40 CHECK (RESULTS)
50 IF (RESULTS) SIMILAR YET DIFFERENT FROM (IMPLICIT/PREJUDICE/ASSUME) THEN ((ADJUST) AND GOTO 20) ELSE IF (RESULTS) NOT SIMILAR TO (IMPLICIT/PREJUDICE/ASSUME) THEN ((THINK AGAIN) AND GOTO 10)
Ken’s pseudo-BASIC code for Double Loop Learning.
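For the non-BASIC-inclined, here’s one way that same double loop might look in runnable Python. The models, target, and thresholds are toy assumptions of mine: the inner loop ADJUSTs a parameter under the current assumption, and the outer loop THINKs AGAIN by swapping the assumption when no adjustment ever fits the results.

```python
# Toy double-loop learner. Inner loop: ADJUST the parameter and retry
# (the pseudo-BASIC's GOTO 20). Outer loop: when adjusting never gets
# close, THINK AGAIN and replace the assumption itself (GOTO 10).

TARGET = 21.0
MODELS = [lambda p: min(p, 5.0),   # an assumption that can never reach 21
          lambda p: p * 3.0]       # an assumption that can

def double_loop(tol=0.5, steps_per_model=50):
    for model in MODELS:                    # outer loop: pick an assumption
        param = 1.0
        for _ in range(steps_per_model):
            result = model(param)           # act, then CHECK (RESULTS)
            error = result - TARGET
            if abs(error) <= tol:
                return model, param         # results match the assumption
            param -= 0.1 * error            # inner loop: ADJUST
        # this assumption never fit the results: THINK AGAIN
    raise RuntimeError("no assumption fit the results")

model, param = double_loop()
print(abs(model(param) - TARGET) <= 0.5)   # True
```

The first model illustrates why single-loop learning isn’t enough: tuning the parameter harder can never help, because the assumption itself is wrong. Only the outer loop escapes it.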
Anyway, sorry for the rambling. I’ve been fascinated by AI since I first learned of Expert Systems as a teenager in the 80s, and then, starting in 1990, I began learning *how* to ‘do’ AI and neural nets.
In its basic form, it’s not that difficult. We’ve been doing AI from the beginnings of programming. At least since 1955-ish.
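As a taste of how basic that basic form is, here’s a tiny expert-system-style rule engine in Python, the kind of thing we were hand-rolling back then (the rules are my toy examples, not from any real system):

```python
# A tiny forward-chaining expert system: IF-THEN rules fire on known
# facts until nothing new can be concluded -- 1970s-style AI in a page.

RULES = [
    ({"has_fever", "has_cough"}, "likely_flu"),
    ({"likely_flu"}, "recommend_rest"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                 # keep sweeping until a fixed point
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough"})))
# ['has_cough', 'has_fever', 'likely_flu', 'recommend_rest']
```

Note the chaining: the second rule only fires because the first one added “likely_flu” to the facts. That cascade is the whole trick behind classic Expert Systems.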
Will the computers take over? If they were going to, they already have. How often are you away from yours?
I don’t think the big changes will happen from the top so much. The most they’ll do is give us something and then take it away once we’ve gotten used to it.
No, I think the progress that’s happening comes mostly from the bottom up. Innovation comes from the people, from the bored, underutilized intelligences with excess time and no money. They’ll be the ones to make the big changes.