Future AI will only be able to write itself if we make CURRENT AI capable of writing itself properly down the line.

I want *programmers* to do the job _right_ , both to avoid problems and to produce the *best possible* AI. I’m thinking of TODAY leading to TOMORROW, not of tomorrow as an isolated “what if” scenario.

Example: say I have a team of programmers and we’re designing AI. We have to think 50, 100, 500 years into the future, and we’d better do it _right_.

If we do it wrong, well, _then_ there may be problems with whatever that AI produces later on.

