In what format is it better to listen to music, and why everything is subjective
Anonymous

We have already mentioned that the concepts of "quality sound" and "quality equipment" are very relative. Why is there no perfect musical instrument?


Most of the audio content played today is digital, encoded in one of the lossy compression formats.

For compressed sound, the concept of a psychoacoustic model is very important: the ideas of scientists and engineers about how a person perceives sound. The ear only receives acoustic waves; the brain processes the signals. It is the work of the brain that makes it possible to tell which side a sound comes from and with what lag the waves reach each ear. It is the brain that lets us distinguish musical intervals and pauses. And like any other work, this one requires training: the brain accumulates templates, correlates new information with them, and processes it based on what has already been stored.
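To make that "lag between the ears" concrete, here is a minimal sketch using the standard far-field approximation of the interaural time difference; the 18 cm ear spacing is an assumed typical value, not a figure from this article:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees Celsius
EAR_DISTANCE = 0.18      # m, assumed typical distance between the ears

def interaural_time_difference(azimuth_deg: float) -> float:
    """Far-field approximation of the arrival lag between the two ears
    for a source at the given azimuth (0 degrees = straight ahead)."""
    return EAR_DISTANCE * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for angle in (0, 30, 60, 90):
    lag_us = interaural_time_difference(angle) * 1e6
    print(f"{angle:3d} deg -> {lag_us:5.0f} microseconds")
```

The lags involved are only hundreds of microseconds, which is exactly why locating a sound is a job for the brain rather than for the ear alone.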

And hearing itself is not so simple. Officially, the human-audible range spans 16 Hz to 20 kHz. However, the ear, like other organs, ages, and by the age of 60 hearing sensitivity roughly halves. It is therefore generally accepted that the average adult cannot perceive sound above 16 kHz. Still, frequencies below 16 Hz and above 16 kHz are perceived by the tissues of the ear (yes, here touch plays the role, not hearing). In addition, keep in mind that it is not enough to hear: you also need to be aware of what you hear. A person cannot perceive all the components of a sound equally at the same time. The ear picks up sound with special cells. There are many of them, and each is tuned to sound waves in a certain range. The cells are thus divided into groups, each operating within its own band. There are about 24 such bands, and within each of them a person recognizes only the overall picture, distinguishing a limited number of tones (sounds or notes). Hearing is therefore discrete: a person can distinguish only about 250 tones at a time.
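These roughly 24 ranges are known in psychoacoustics as critical bands, often expressed on the Bark scale. A small sketch using Zwicker's published approximation shows how the audible frequency range maps onto about 24 bands:

```python
import math

def bark(frequency_hz: float) -> float:
    """Zwicker's approximation of the Bark critical-band scale:
    maps a frequency in Hz to a critical-band number (~0..24)."""
    return (13.0 * math.atan(0.00076 * frequency_hz)
            + 3.5 * math.atan((frequency_hz / 7500.0) ** 2))

for f in (100, 500, 1000, 4000, 16000):
    print(f"{f:6d} Hz -> critical band {bark(f):5.1f} of ~24")
```

Note how non-linear the mapping is: the first kilohertz of the spectrum occupies about a third of all bands, while the entire top octave fits into the last few.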

And even that is not guaranteed, because it takes training. Moreover, the number of cells registering acoustic waves differs from person to person. Worse still, within a single person that number differs between the right and left ears, as does the perception of the left and right ears in general.

Hearing is non-linear. Each sound frequency is perceived only above a certain volume. This leads to several interesting quirks. A propagating wave is not heard until its amplitude (the sound volume) reaches a certain value and activates the corresponding cells. Then the silence is replaced by a sharp and rather distinct sound, after which a person can also hear slightly quieter sounds. In addition, the lower the volume level, the lower the resolution of hearing: the number of distinguishable sounds decreases. On the other hand, when the volume is lowered, high frequencies are perceived better, and when it is raised, low frequencies are. And they do not complement but replace each other, even if the person does not realize it.
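The volume at which a frequency first becomes audible is described by the threshold-in-quiet curve. Here is a minimal sketch using Terhardt's well-known approximation, the same curve many codec psychoacoustic models start from:

```python
import math

def threshold_in_quiet_db(frequency_hz: float) -> float:
    """Terhardt's approximation of the absolute threshold of hearing,
    in dB SPL: the quietest level at which a pure tone becomes audible."""
    khz = frequency_hz / 1000.0
    return (3.64 * khz ** -0.8
            - 6.5 * math.exp(-0.6 * (khz - 3.3) ** 2)
            + 1e-3 * khz ** 4)

for f in (50, 200, 1000, 3500, 12000):
    print(f"{f:6d} Hz -> audible above ~{threshold_in_quiet_db(f):6.1f} dB SPL")
```

The curve dips to its minimum around 3 to 4 kHz, where hearing is most sensitive, and climbs steeply at both ends of the spectrum, which is what the "quirks" above describe.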

Another small remark: due to all these features of the auditory apparatus, a person practically does not perceive sounds below 100 Hz directly. More precisely, they can be felt, low frequencies are sensed through the skin, but not heard, at a more or less reasonable volume, of course. What makes them audible is that the acoustic waves are reflected in the auditory canal, forming secondary waves, and it is these that a person actually hears.

Strictly speaking, when music plays, a person does not perceive some sounds at all, concentrating attention on others. Notice that when a musician starts a solo, especially with the volume turned up, attention switches to it almost completely. But it can also work the other way around: if the listener loves drums, both instruments will sound at almost the same level, yet only one of them, and the general sound stage, will be clearly audible. In the science called psychoacoustics, such phenomena are called masking. One example of masking part of the perceived sound is external noise leaking in from outside the headphones.
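A rough illustration of simultaneous masking: Schroeder's classic spreading function estimates how far a loud tone raises the audibility threshold in neighboring critical bands. This is a simplified sketch; the 80 dB masker level is an assumption, and the tonality-dependent offsets that real codecs subtract are omitted:

```python
import math

def spreading_db(delta_bark: float) -> float:
    """Schroeder's spreading function: how much a masker's influence
    drops (in dB) at a distance of delta_bark critical bands."""
    d = delta_bark + 0.474
    return 15.81 + 7.5 * d - 17.5 * math.sqrt(1.0 + d * d)

masker_level_db = 80.0  # assumed level of one loud tone
for dz in (-2, -1, 0, 1, 2, 3):
    # Simplified masked threshold: real models subtract a further offset.
    threshold = masker_level_db + spreading_db(dz)
    print(f"{dz:+d} Bark away -> sounds below ~{threshold:5.1f} dB are hidden")
```

The slopes are asymmetric: masking spreads much more readily upward in frequency than downward, which lossy encoders exploit when deciding which details can be thrown away.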

Interestingly, the type of acoustics also plays a role in listening to music. From the point of view of physics, different designs give different perception and different sound artifacts. Earbuds and in-ear headphones, for example, can be treated as a so-called point source, since they produce an almost point-like, undistributed sound picture. On-ear headphones and any other larger systems already distribute sound in space. Both ways of propagation allow sound waves to superimpose on one another, mixing and distorting.

Thanks to the extensive work that has been done, modern psychoacoustic models assess human hearing quite accurately and keep improving. In fact, despite the assurances of music lovers, musicians, and audiophiles, for the average, untrained ear, MP3 at maximum quality already operates near the limits of perception.
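For reference, a minimal sketch of producing such a "maximum quality" MP3 at the format's 320 kbit/s ceiling, here by calling the ffmpeg command-line tool from Python; the file names are placeholders, and ffmpeg with the libmp3lame encoder is assumed to be installed:

```python
import subprocess

# Encode a WAV file to MP3 at the format's maximum constant bitrate
# (320 kbit/s). File names are placeholders; ffmpeg with libmp3lame
# must be installed and available on PATH.
subprocess.run(
    ["ffmpeg", "-i", "input.wav",
     "-codec:a", "libmp3lame", "-b:a", "320k",
     "output.mp3"],
    check=True,
)
```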

There are exceptions; there cannot fail to be. But they are not always easy to notice in blind listening, and they stem not from the mechanics of hearing but from the brain's algorithms for processing sound information. Here only personal factors play a role. All this explains why we love different headphone models and why the numerical characteristics of audio cannot unambiguously determine sound quality.
