What Does Each Human Sense Process?

I had written this one down in an old 3×5 notebook. I can’t verify its accuracy or its source, but my note was from 2001, so the data is likely older than that. Still, it’s a starting point.

What Does Each Human Sense Process?
eyes – 10,000,000 bits per second
skin – 1,000,000 bits per second
ears – 100,000 bits per second
smell – 100,000 bits per second
taste – 1,000 bits per second
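
Taking those notebook numbers at face value, here’s a quick sketch of what they imply about relative throughput. Nothing below is new data; it’s just arithmetic on the figures above:

```python
# Relative sensory throughput, using the (unverified) notebook
# figures above. Nothing here beyond arithmetic on those numbers.

BANDWIDTH_BPS = {
    "eyes": 10_000_000,
    "skin": 1_000_000,
    "ears": 100_000,
    "smell": 100_000,
    "taste": 1_000,
}

total = sum(BANDWIDTH_BPS.values())
for sense, bps in BANDWIDTH_BPS.items():
    print(f"{sense:>5}: {bps:>10,} bits/s  ({100 * bps / total:5.2f}% of total)")
print(f"total: {total:>10,} bits/s")
# -> vision alone accounts for roughly 89% of the total on these figures
```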

====

The last measurement I saw of conscious subvocalization speed (how fast your brain “thinks” in words) was 400-800 wpm.

For comparison: speaking voice runs 100-400 wpm (the fastest auctioneers and rappers can hit 400 wpm). My typing speed averages 110 wpm, although I can do short bursts up to 150 wpm, and I once saw a meter hit 200 wpm for just a few seconds… but it dropped back down after that.

The 400-800 wpm “thinking speed” figure is from research I did back in 2001 (I have it in one of my little notebooks), so I don’t know what’s been updated (if anything) since then.
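
For a rough bridge to the sensory table above, here’s a back-of-envelope conversion of wpm into bits per second. The 5 characters per word and 8 bits per character are naive assumptions of mine (Shannon estimated real English at roughly 1 bit per character), so treat this as a loose upper bound:

```python
# Back-of-envelope: convert words-per-minute into bits-per-second.
# Assumptions (mine, not from the notebook): ~5 characters per word,
# 8 bits per character if stored naively as ASCII. Real English text
# carries far fewer bits per character, so these are upper bounds.

CHARS_PER_WORD = 5
BITS_PER_CHAR = 8

def wpm_to_bps(wpm: int) -> float:
    return wpm * CHARS_PER_WORD * BITS_PER_CHAR / 60

for label, wpm in [("speech (fast)", 400),
                   ("subvocal 'thinking', low", 400),
                   ("subvocal 'thinking', high", 800)]:
    print(f"{label:>26}: {wpm:4d} wpm  ~ {wpm_to_bps(wpm):6.0f} bits/s")
# Even the naive 800-wpm figure (~533 bits/s) is tiny next to the
# 10,000,000 bits/s the eyes were credited with above.
```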

—–

The main problem with each of these figures, though, is the word “bit”.

A ‘bit’ is a binary unit of information: active or not. Whether it is active or not is what conveys the information. If it’s active continuously, that’s a “run condition” (no information carried), and if it’s continuously inactive, the line is dead.


Also, I got pulled away before finishing the last thought:

bits are units of _binary_ information.

But these “bits” traveling along the human nervous system are not true bits, since each one carries more than one bit’s worth of information.

====

 

A bit in the brain is created at the synapse, the gap between two or more neurons. For the sake of simplicity, let’s put our synapse between only two neurons: the axon of one and the dendrite of the other. The cell body depolarizes, sending an action potential – a tiny electrical charge – out into the synaptic junction. Before the dendrite of the other neuron receives this charge, which will become the bit, it is coded by neurotransmitters, by synaptic stimulation of the dendrite called “long-term potentiation” (LTP), and by the ratio of different peptide molecules, which change the voltage, duration, and morphology (shape) of the pulse.
 
Just to simplify this scenario, let’s say our synaptic signal is affected by only 5 different neurotransmitters, and each neurotransmitter can have only 70 different effects apiece. Already we have 5 × 70 = 350 different types of bits, compared to a computer bit’s two. Throw in a couple more variables and you could be talking about many thousands of different types of bits.
 
https://www.quora.com/What-acts-as-the-brains-bits
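
Taking that scenario at face value, you can put a number on how much more information one of these richer “bits” could carry: a symbol with N distinguishable states carries log2(N) bits. A quick sketch:

```python
# Back-of-envelope: information per "symbol" grows with the number of
# distinguishable states, as log2(N). Using the toy scenario above
# (5 neurotransmitters x 70 effects = 350 states), purely illustrative.

import math

states = {"computer bit": 2, "synaptic 'bit' (toy model)": 5 * 70}

for name, n in states.items():
    print(f"{name:>26}: {n:3d} states -> {math.log2(n):.2f} bits per symbol")
# -> about 8.45 bits per synaptic event under these toy assumptions
```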
====
Oh this rant is priceless! In it, the author wants us to get rid of the IP (Information Processing) metaphor for the brain. He feels it distorts our way of viewing the human brain and gives compelling reasons why we should stop thinking of it this way. The article generated a LOT of anger in a forum, which in a way proves his point: the metaphor is *so much* a part of how we think about the brain that it’s difficult to see alternative explanations (such as the one by Michael McBeath, related to embodied cognition) that could turn out to be more ‘true’.
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
===

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
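
For what it’s worth, here’s a toy sketch of the geometry that makes this strategy work. The 1-D version of the observation is usually credited to Seville Chapman (1968): if a fielder stands exactly where a parabolic fly ball will land, the tangent of the gaze elevation angle rises *linearly* with time; stand anywhere else and it visibly curves. So “move until the image’s rise looks steady” gets you to the ball with no trajectory model at all. The projectile numbers and the ground-level eye height below are my own simplifying assumptions, not from the McBeath paper:

```python
# Toy check of the geometry behind the "constant visual relationship"
# strategy. Simple projectile, fielder's eye at ground level -- both
# simplifying assumptions of mine. Chapman's observation: standing at
# the landing spot makes tan(elevation angle) rise linearly in time.

G = 9.81                  # gravity (m/s^2)
VX, VY = 18.0, 20.0       # launch velocity components (m/s), assumed

T_FLIGHT = 2 * VY / G     # time until the ball returns to the ground
X_LAND = VX * T_FLIGHT    # where the ball lands

def tan_elevation(t: float, fielder_x: float) -> float:
    """Tangent of the gaze angle from the fielder up to the ball."""
    ball_x = VX * t
    ball_y = VY * t - 0.5 * G * t * t
    return ball_y / (fielder_x - ball_x)

for fielder_x in (X_LAND, X_LAND - 5.0, X_LAND + 5.0):
    # Sample tan(theta) at four evenly spaced moments mid-flight.
    samples = [tan_elevation(f * T_FLIGHT, fielder_x)
               for f in (0.2, 0.4, 0.6, 0.8)]
    # Second differences near zero mean the rise is linear, i.e.
    # the fielder is standing where the ball will come down.
    curvature = [samples[i - 1] - 2 * samples[i] + samples[i + 1]
                 for i in (1, 2)]
    print(f"fielder at {fielder_x:5.1f} m (ball lands at {X_LAND:.1f} m): "
          f"second differences {curvature[0]:+.4f}, {curvature[1]:+.4f}")
# -> zeros only for the fielder standing at the landing point
```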

That *could* be true, except balls don’t end up in the same place every time you try to catch them, unlike typing on a computer keyboard, where the hands and keyboard stay in the same place long enough to build muscle memory.

===

Then again, this model *does* have you place your body in the optimum position for catching it, which would confirm your idea.

===

Very true. Yet, isn’t it a refinement of a skillset we gain young?

We start playing catch with babies.

We play catch with dogs and dolphins.

====

Oh I agree. One thing I remember from piano lessons: “perfect practice makes perfect”. The more times you practice correctly, the better you will do in performance.

====

I’m going to switch the context and see what *successful* models have been used to ‘teach’ robots to catch. There are three competing models I’ve seen, so I’m curious which actually works with the least computing power.

=====

Oh this is no help at all:

They taught this robotic arm to catch using neural network approximations of trajectory prediction … WITH 18 CAMERAS.

We don’t have 18 eyes.

UGH. Great for robots but terrible for analogizing to humans.

=====

 

Ok. This one is monocular… so better in that sense. But now I have to see what assumptions are built into the programming.
http://ieeexplore.ieee.org/abstract/document/7018920/
====
