Music. Outside to in, Inside to Out. EmBodIed?
So, you can take a completed piece and compress and label it in various ways, since it's a fixed, finished work with a start and an end.
But you can also create a piece of music using algorithms that generate from the same features you'd use to compress and label a finished work, like genre, style, and so on, and use those elements to create something new.
Unexpected events are captured in the completed piece.
But what of the piece that's generating according to rules of style, genre, or whatever else? Can it handle the unexpected and keep generating a piece that might later be dissected as well?
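A toy sketch of what this might look like, with all names and details my own assumptions: here the "rules of style" are just a first-order Markov model learned from a finished melody (the "compressed and labeled" work), and an unexpected state gets folded back into the model so generation can continue rather than halt.

```python
import random
from collections import defaultdict

def learn_style(melody):
    """Compress a finished melody into a first-order Markov model:
    for each note, which notes were heard to follow it."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, rng):
    """Generate a new piece from the learned style. When an unexpected
    state appears (no learned continuation), invent a transition and
    keep going instead of stopping."""
    piece = [start]
    for _ in range(length - 1):
        current = piece[-1]
        options = transitions.get(current)
        if not options:
            # Unexpected: this note has no learned continuation.
            # Absorb it by learning a new transition on the fly.
            fallback = rng.choice(list(transitions.keys()))
            transitions[current].append(fallback)
            options = transitions[current]
        piece.append(rng.choice(options))
    return piece

rng = random.Random(0)  # seeded for repeatability in this sketch
style = learn_style(["C", "E", "G", "E", "C", "G", "C"])
new_piece = generate(style, "C", 8, rng)
print(new_piece)
```

The generated piece stays inside the learned vocabulary, so it could later be "dissected" with the same labels that built it, which is the loop the note above is circling.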
Can I tie this to embodied cognition?! Someone must have done this already. Will look.
Good good, more confirmation that I wasn't totally nuts here. As it stands this applies directly to only a limited slice of music overall, _but_ very similar techniques are used to build AI that can identify songs, used manually to deconstruct and reconstruct pieces, used by choirs and marching bands to rescore pieces, and used by EDM and other remix artists to create novel combinations, so I'd say it's useful.