Yeah. I really liked the thick/thin metaphor as well. Metaphors carry through to other assumptions about things, just as ours do. DARPA (or IARPA?) about 5 or so years ago had an open call for a theorist/programmer to work on a system to capture, extract, and interpret Middle Eastern metaphors in order to improve intelligence analysis. I envied whoever got the project — someone did, and as of now it's been completed for a couple of years. I would love to see a paper from it. I know I won't, but had I been in a different situation with a slightly different skill set (machine learning + AI isn't something I do, but I could have), I'd have pounced on that project. It seemed fascinating.
While I can't see in 3D (I don't have enough stereo-vision cells for it, according to a specialist test I did long, long ago involving converging red and green lights — which I couldn't converge) — I can see outlines, which never ceases to fascinate me. Knowing it's a function of vision makes it all the more intriguing. Reality is quite participatory.
Possibly. I can triangulate distance well, though. Give me a basketball and I'll stand a quarter of the way past half court and sink the ball one-handed 7 out of 10 times. But it's a fixed goal.