I tend to believe in kernels, that is, concentrated forms / algorithms / emergent structures (simple, to complex, to boundary/interface, to a newer simple), but the kernels don’t hold all of the information.

So kernels are lossy (lossy compression). BUT if the original can be recreated from the lossy kernel (example: redundancy removed from language, then restored), then:

the information was NOT LOST BUT DISTRIBUTED!

Ask: where did the ability to restore the lossy information come from? Answer: it was distributed, not lost.
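A toy sketch of the idea (my own illustration, not from the post): strip the vowels out of words, the "lossy kernel", then restore them using a shared vocabulary. The vocabulary is where the "lost" information was distributed; recovery works only because both sides hold it.

```python
# Toy illustration: lossy compression that is recoverable because the
# missing information lives elsewhere (a shared vocabulary).

VOCAB = ["kernel", "information", "distributed", "lossy", "compression"]

def compress(word):
    # Lossy step: drop interior vowels, keeping the first letter.
    return word[0] + "".join(c for c in word[1:] if c not in "aeiou")

def restore(kernel, vocab):
    # Recovery step: the vocabulary supplies what the kernel lacks.
    # Succeeds only when exactly one word compresses to this kernel.
    matches = [w for w in vocab if compress(w) == kernel]
    return matches[0] if len(matches) == 1 else None

for w in VOCAB:
    assert restore(compress(w), VOCAB) == w  # round-trip succeeds
```

The kernel alone ("krnl") cannot be inverted; kernel plus vocabulary can. In that sense the information was distributed between the two, not destroyed.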
