50ms tube. Interpolation of two frames of film (20 frames per second) into the synthesis of motion. Sound also starts at 20 cycles per second; below that we hear the individual beats. 50ms. Good, I’m not alone in noticing that. Good explanation here.
Walter Scott Murch (b. 1943) is widely recognised as one of the leading authorities in the field of film editing, as well as one of the few film editors equally active in both picture and sound. [Listener: Christopher Sykes] TRANSCRIPT: He limited himself to this particular analysis, and it’s a very interesting article if you read it; it appeared in 1976 in ‘Scientific American’. But it made me think that perhaps this was also responsible for ‘the phi effect’ or ‘the persistence of vision’, the illusion in which we show you a frame, a still frame, and then we replace it with another. The first one, the instant you see it, is stored in this cache of some kind, this RAM memory, in great detail, waiting for anything that might happen within the next fifty milliseconds that it would be good to compare this information with. And sure enough, if something comes along to compare it with, the brain synthesises something out of the information that’s in this tube, this fifty-millisecond-long tube. There are two things in that tube, and it’s good to compare them and extract some meaning from that. Well, it turns out that these two things are slightly different, because in the second one there is motion displacement, and as a result the mind constructs something that doesn’t exist in either of these things, but is the mysterious sum of them, on another conceptual level, which is motion. In very much the same way, we construct three dimensions out of the differences between the left eye and the right eye. As I look at you with my left eye, I see some image, and the window behind you is at some displacement. If I look with my other eye, the window behind you is slightly differently displaced. So the two images, which are perfectly flat, are compared by the mind, and some other meaning is extracted from them, which is the world in three dimensions.
In some strange way, it’s similar to the synthesis of red, the colour red, from the different signals from the three cones in our retina. And you can take this idea and run with it, as I have over the years. Our hearing, human hearing, kicks in at twenty cycles per second, which is a very low note, but we still perceive it as a note, as music. Below that, we no longer hear music; we hear the individual pulses out of which the sound is made. But starting at about twenty, we hear it as a note; I’m not able to produce it, but it’s a note several octaves below that. Up until we run out of hearing at twenty thousand cycles per second. Well, the question is: why does hearing kick in at twenty cycles per second? Well, I’m suggesting it’s because twenty cycles per second is fifty milliseconds. So the distance between the peaks of the waves is fifty milliseconds. If it’s lower than twenty cycles, then the peak of one wave is already out of this [cache] tunnel or tube before the next one comes in, so we can’t synthesise the two things and extract meaning out of them, and instead we hear each individual wave of sound. In the same way, if film is slowed down below the threshold of motion, we see individual frames, we start to see flicker, which is the equivalent of hearing the thumps of the frames.
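The arithmetic behind the argument can be sketched in a few lines: at 20 events per second, successive wave peaks (or film frames) arrive exactly 50 ms apart, so two of them can coexist in the hypothesised 50 ms "tube" and be compared. The 50 ms window constant and the function names below are assumptions taken from the passage, not measured values or an established API.

```python
WINDOW_MS = 50.0  # hypothesised length of the perceptual "tube" (assumption from the passage)

def interval_ms(rate_hz: float) -> float:
    """Spacing between successive peaks or frames, in milliseconds."""
    return 1000.0 / rate_hz

def fuses(rate_hz: float) -> bool:
    """True if successive events land within the window, so the mind
    can compare them and synthesise motion (film) or pitch (sound)."""
    return interval_ms(rate_hz) <= WINDOW_MS

print(interval_ms(20))  # 50.0 ms: right at the threshold
print(fuses(20))        # True: 20 fps film fuses into motion, 20 Hz into a note
print(fuses(16))        # False: below ~20 Hz we hear individual pulses, see flicker
```

On this sketch, anything slower than 20 per second gives an interval longer than 50 ms, so the first event has already left the tube before the second arrives, which is the passage's explanation for both audible pulses and visible flicker.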