FLICKER-FUSION FREQUENCY FOR ACOUSTIC SIGNALS

Ruben Tikidji-Hamburyan (KRINC, Rostov-on-Don, RF, and LSU, New Orleans, USA), Witali Dunin-Barkowski (NIISI RAS, Moscow, RF)

It is generally considered that the sensory inflow to the nervous system is cut into fragments, which are then bound to each other on the basis of their features to yield sensory perception [1]. This idea is supported by the flicker-fusion of visual images, in which the perception of continuous movement can be produced by a series of fixed visual frames. The critical frequency at which such a series fuses into a temporally continuous visual percept is well known (8-12 Hz for oscilloscope sweeps and higher frequencies for more complex visual stimuli). It is natural to suggest that the same fusion phenomenon should be present in other sensory modalities, and here we test this idea for audio signals. Of course, an acoustic signal cannot literally be stopped in time, since it is dynamic in principle. Nevertheless, we report an initial attempt to obtain a static (in a definite sense) acoustic signal, together with an experimental identification of the acoustic flicker-fusion frequency.

In the first type of experiment we cut the electrically recorded acoustic signal into fragments of equal duration and simulated a full stop in time by time-reversing the signal within each fragment. The transformed signal was presented to human subjects for subjective analysis, and we determined the lowest fragmentation frequency at which the content of the recorded speech can be understood. As the acoustic signal we used virtually noiseless Russian speech, analyzed by native Russian-speaking subjects. Up to 7.5 Hz, no message can be extracted from the transformed audio signal. At 10 Hz, about half of the speech information can be restored. At 12.5 Hz and higher frequencies, practically all words can be understood. In the second type of experiment we did not reverse the fragments; we still cut the audio signal into fragments, but inserted between them either periods of silence (of the same or double the fragment duration) or segments of noise with the same spectrum as the speech signal. At 12.5 Hz, with silent intervals of up to twice the cutting period, the subjects can understand the speech, which is perceived as stuttering. If noise is inserted between the fragments, the signal cannot be recognized. However, filling every interval between the original fragments with the same particular waveform (also generated as a sample of noise) yields an understandable signal.

These findings support the idea that, in processing auditory information, the brain transforms it into discrete fragments. The flicker-fusion frequency revealed in our studies points to the alpha rhythm as a possible rhythm involved in auditory signal fragmentation in the brain.
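For illustration, the two signal manipulations described above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the code used in the study: the recording is assumed to be a mono NumPy float array with a given sample rate, the function and parameter names are ours, and plain white noise scaled to the signal amplitude stands in for the spectrum-matched noise used in the experiments.

# Sketch of the stimulus manipulations; illustrative only, not the authors' code.
import numpy as np

def cut_into_fragments(signal, fs, cut_freq_hz):
    """Split the waveform into equal-duration fragments (cut_freq_hz fragments per second)."""
    frag_len = int(round(fs / cut_freq_hz))
    n_frags = len(signal) // frag_len
    return [signal[i * frag_len:(i + 1) * frag_len] for i in range(n_frags)]

def reverse_fragments(signal, fs, cut_freq_hz):
    """Experiment 1: time-reverse each fragment while keeping the fragment order."""
    frags = cut_into_fragments(signal, fs, cut_freq_hz)
    return np.concatenate([f[::-1] for f in frags])

def insert_between_fragments(signal, fs, cut_freq_hz, filler='silence', gap_factor=1.0):
    """Experiment 2: keep fragments unchanged, insert gaps between them.

    filler: 'silence', 'noise' (fresh noise in every gap) or 'fixed_noise'
            (one noise waveform reused in every gap).
    gap_factor: gap duration relative to the fragment duration (e.g. 1.0 or 2.0).
    """
    frags = cut_into_fragments(signal, fs, cut_freq_hz)
    gap_len = int(round(gap_factor * fs / cut_freq_hz))
    rng = np.random.default_rng(0)
    # White noise scaled to the signal's RMS; the study used spectrum-matched noise.
    fixed_gap = rng.standard_normal(gap_len) * signal.std()
    out = []
    for frag in frags:
        out.append(frag)
        if filler == 'silence':
            out.append(np.zeros(gap_len))
        elif filler == 'noise':
            out.append(rng.standard_normal(gap_len) * signal.std())
        else:  # 'fixed_noise'
            out.append(fixed_gap)
    return np.concatenate(out)

For example, reverse_fragments(speech, fs, 12.5) would correspond to the 12.5 Hz reversed-fragment condition, and insert_between_fragments(speech, fs, 12.5, filler='silence', gap_factor=2.0) to the condition with silent gaps of double the fragment duration.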

The work was supported by Russian Foundation for Basic Research grant no. 10-07-00206 to W.L.D.B.

Reference

[1] Buzsáki G. Rhythms of the Brain. Oxford University Press, 2006, 448 p.

 

Preferred presentation format: Poster
Topic: General neuroinformatics
