This one I had written down in an old 3×5 notebook. I can’t verify its accuracy or its source, but my note was from 2001, so it’s likely older than that. Still, it’s a starting point.
What Each Human Sense Processes
eyes – 10,000,000 bits per second
skin – 1,000,000 bits per second
ears – 100,000 bits per second
smell – 100,000 bits per second
taste – 1,000 bits per second
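Taking the notebook’s numbers at face value (they’re from my old note, not a verified source), a quick back-of-the-envelope pass puts them in more familiar units:

```python
# Sensory bandwidth figures as written in the old notebook
# (unverified -- just summing them up and converting units).
senses_bps = {
    "eyes": 10_000_000,
    "skin": 1_000_000,
    "ears": 100_000,
    "smell": 100_000,
    "taste": 1_000,
}

total_bps = sum(senses_bps.values())
print(f"total: {total_bps:,} bits/s")                    # 11,201,000 bits/s
print(f"       ~{total_bps / 8 / 1_000_000:.1f} MB/s")   # ~1.4 MB/s
print(f"eyes share: {senses_bps['eyes'] / total_bps:.0%}")  # 89%
```

By these figures, vision alone is about 89% of the total, which is why the “eyes” line dwarfs everything else.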
Last measurement I saw of conscious subvocalizing speed was 400-800 wpm. [how fast your brain thinks in words]
To compare: speaking voice is 100-400 wpm (the fastest auctioneers and rappers can hit 400 wpm). My typing speed averages 110 wpm; I can do short bursts up to 150 wpm, and once saw a meter hit 200 wpm for a few seconds before it dropped back down.
The 400-800 wpm “thinking speed” is from back in 2001 research I did (I have it in one of my little notebooks), so I don’t know what’s been updated (if anything) since then.
The main problem with each of these, though, is the word “bit”.
A ‘bit’ is a binary unit of information: a signal is either active or not, and which state it’s in is what conveys the information. If the signal is active continuously, that’s a “run condition” (no information carried), and if it’s continuously inactive, the line is dead.
Also, I got pulled away before finishing the last thought:
bits are processing units for _binary_ information.
But these “bits” traveling along the human nervous system aren’t true bits, because each one carries more than one bit’s worth of information.
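One way to make that last point concrete, in Shannon’s terms: a symbol that can take N equally likely distinguishable states carries log₂(N) bits, so anything with more than two distinguishable states carries more than one bit per symbol. (The “16 distinguishable firing rates” below is purely an illustrative assumption, not a measured figure about neurons.)

```python
import math

# Shannon information per symbol: a signal with n equally likely,
# distinguishable states carries log2(n) bits per symbol.
def bits_per_symbol(n_states):
    return math.log2(n_states)

print(bits_per_symbol(2))   # 1.0 -> a true binary "bit": active or not
print(bits_per_symbol(16))  # 4.0 -> e.g. 16 distinguishable firing rates
```

So if a nerve signal can vary in rate or timing in distinguishable ways, calling each event a “bit” undercounts what it carries.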
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
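Here’s a minimal sketch of why that kind of heuristic has the needed information available at all, using a toy 2-D projectile. The launch numbers are made up and this is my illustration, not McBeath’s actual model: for a fielder standing exactly where the ball will land, the tangent of the ball’s elevation angle rises at a perfectly constant rate; stand too deep and it decelerates, too shallow and it accelerates. So “run until the optical motion looks right” needs no trajectory prediction.

```python
import math

g = 9.81
h0, vx, vy = 1.0, 20.0, 18.0   # made-up launch height (m) and velocity (m/s)

# where and when the ball lands (solve h0 + vy*t - g/2*t^2 = 0)
t_land = (vy + math.sqrt(vy**2 + 2 * g * h0)) / g
x_land = vx * t_land

def optical_accel(observer_x):
    """Mean second difference of tan(elevation) seen from observer_x."""
    dt = 0.01
    tans = []
    t = 0.0
    while t < 0.8 * t_land:    # stop before the geometry degenerates
        y = h0 + vy * t - 0.5 * g * t**2
        tans.append(y / (observer_x - vx * t))
        t += dt
    second = [tans[i+1] - 2 * tans[i] + tans[i-1]
              for i in range(1, len(tans) - 1)]
    return sum(second) / len(second)

print(f"at the landing spot: {optical_accel(x_land):+.6f}")       # ~0
print(f"10 m too deep      : {optical_accel(x_land + 10):+.6f}")  # negative
print(f"10 m too shallow   : {optical_accel(x_land - 10):+.6f}")  # positive
```

The sign of that optical acceleration directly tells the fielder which way to run, which is the whole point: the world supplies the signal, no internal ball-flight model required.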
That *could* be true, except that balls don’t end up in the same place every time you try to catch them, unlike typing on a computer keyboard, where the hands and keyboard stay in the same place long enough to build muscle memory.
Then again, this model *does* have you place your body in the optimum position for catching it, which would confirm your idea.
Very true. Yet, isn’t it a refinement of a skillset we gain young?
We start playing catch with babies.
We play catch with dogs and dolphins.
Oh I agree. One thing I remember from piano lessons, “perfect practice makes perfect”. The more times you practice correctly, the better you will do in performance.
I’m going to switch contexts and see what *successful* models have been used to ‘teach’ robots to catch. There are three competing models I’ve seen, so I’m curious which one actually works with the least computing power.
Oh this is no help at all:
They taught this robotic arm to catch using neural network approximations of trajectory prediction … WITH 18 CAMERAS.
We don’t have 18 eyes.
UGH. Great for robots but terrible for analogizing to humans.