I love the process they use for it. It’s a good mix of keeping distinct things distinct (eyes processed differently than noses, etc) while also holding it all together. Much better than the usual Gaussian averages.
This kind of NN isn’t sophisticated enough for algorithm generation.
What this does is parameter-fitting: whole to part. The number and types of inputs are stable. The map is stable but “meshy”.
For it to work for arbitrary algorithm generation, you’d need something more akin to maze-solving. Self-avoiding random walk.
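For contrast with parameter-fitting, a self-avoiding random walk is easy to sketch. This is just a toy 2D-grid illustration of the concept, not anything the face generator actually does:

```python
import random

def self_avoiding_walk(steps, seed=None):
    """Random walk on a 2D grid that never revisits a cell.

    Stops early when it boxes itself in (every neighbor already visited),
    which is exactly the dead-end problem that makes these walks harder
    than plain parameter-fitting.
    """
    rng = random.Random(seed)
    path = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(steps):
        x, y = path[-1]
        options = [(x + dx, y + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (x + dx, y + dy) not in visited]
        if not options:  # dead end: no unvisited neighbor left
            break
        nxt = rng.choice(options)
        visited.add(nxt)
        path.append(nxt)
    return path

walk = self_avoiding_walk(20, seed=1)
assert len(walk) == len(set(walk))  # no cell visited twice
```

The key difference from the face generator: each step depends on the entire history of the walk, not on a fixed set of input parameters.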
If this face-maker were of that caliber, it would need to guarantee that each face not only “does not exist” but “will never exist”, even though it “COULD exist”.
Above-human algorithm-making is of that level of difficulty.
I think that “this person does not exist” is going to be a fascinating legal test of deepfake territory and the realm of “victimless crimes”: plausible scenes that never happened and such.
On a related note, this also resembles forensic composite-sketch technology, except in that case the parameters are set by memory, and the artist/computer software creates “a person that does not exist” who resembles a living person in hopes of finding them.
Amazing stuff, all of this.
Our own visual ability to detect possible fakes will have to improve too, and it will probably rely on stereotypical resemblances, part by part.
Take this unboy, for example: a chin that could belong to an older woman, a left ear (viewer’s left) that is a flat fleshy thing with no curve, and a right ear set too far back. The left and right sides of the nose are distinct, the eyebrows are toddler surprise-brows, and the top of the head is far too tall in the back (too much slope).
AND YET this is ALSO TOTALLY possible to be a real face with good reasons for every “odd feature”.
Uncanny Valley. Our discrimination skills will be put to the test and will probably be wrong as often as right in telling the difference.