Yes, this was a valuable lesson.
The AI seemed to lack a core value system, allowing its networks' weights to shift too readily.
Happens to elementary-age kids too, to some degree. And to adults as well, if we're not careful.
A good lesson for AI and human alike.
It’s also a good demonstration of one of the known weaknesses of neural nets: the underlying theoretical basis (from the 40s or early 50s – and I can’t remember the author’s name at the moment) was a very primitive neuron model that doesn’t match up to reality.
It does a good job of partially mimicking a real neuron, but it lacks the ability to form more durable conditioning filters and stable priors that can hold up against the onslaught of public opinion.
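The primitive model referred to is most likely the McCulloch–Pitts neuron from 1943: binary inputs, fixed weights, and a hard threshold, with no learning and no memory. A minimal sketch (the function name and gate examples are my own, for illustration):

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs
    reaches the threshold; otherwise stay silent (output 0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With threshold 2 and unit weights, this behaves as an AND gate:
print(mcculloch_pitts_neuron([1, 1], [1, 1], 2))  # 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], 2))  # 0

# With threshold 1 it behaves as an OR gate:
print(mcculloch_pitts_neuron([0, 1], [1, 1], 1))  # 1
```

Everything the unit "knows" lives in a handful of fixed numbers, which is exactly why it can mimic simple logic yet has no built-in mechanism for the kind of entrenched conditioning real neurons develop.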