Frequency range of sound

The range of acoustic vibrations that produce the sensation of sound when they reach the organ of hearing is limited in frequency. On average, a person between 12 and 25 years old hears frequencies from 20 Hz to 20 kHz. With age, nerve endings in the cochlea of the inner ear die off, so the upper limit of audible frequencies falls significantly.

The region from 20 Hz to 20 kHz is usually called the audio range, and the frequencies lying in this region are called audio frequencies.

Oscillations below 20 Hz are called infrasonic, and vibrations with a frequency above 20,000 Hz are called ultrasonic.

These frequencies are not perceived by our ears. Infrasound of sufficient power, however, can affect the emotional state of the listener. In nature infrasound is rare, but it has been recorded before earthquakes, during hurricanes, and in thunder. Animals are more sensitive to infrasound, which explains their anxiety before such cataclysms. Animals also use ultrasound to orient themselves in space: bats and dolphins, for example, move in poor visibility by emitting ultrasonic signals, and the reflections of these signals indicate the presence or absence of obstacles along the route. Because the wavelength of ultrasound is very short, even the smallest obstacles (such as power wires) do not escape the animals' attention.

For physical reasons it is almost impossible to record and play back infrasound; this partly explains the advantage of listening to music live rather than on a recording. Generation of ultrasonic frequencies is used to influence animals, for example to repel rodents.

Our ears are capable of distinguishing frequencies within the audible range. There are people with an absolute ear for music; they are able to distinguish frequencies, naming them on a musical scale - by notes.

A musical notation is a sequence of precisely recorded sounds, each of which has a specific frequency, measured in hertz (Hz).

The spacing between notes follows a strict frequency relationship; for now it is enough to know that a difference of one octave corresponds to a doubling of frequency.

Note "A" of the first octave (A-1) = 440 Hz
Note "A" of the second octave (A-2) = 880 Hz
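The octave-doubling rule can be sketched numerically. A minimal Python sketch, assuming the equal-tempered convention with the "A" of the first octave at 440 Hz (the `note_frequency` helper is a hypothetical name, not from the text):

```python
A4 = 440.0  # "A" of the first octave, the usual tuning reference

def note_frequency(semitones_from_a4: int) -> float:
    """Frequency of the note `semitones_from_a4` semitones above A4.
    One octave = 12 semitones = a doubling of frequency."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(note_frequency(0))    # 440.0 Hz (A of the first octave)
print(note_frequency(12))   # 880.0 Hz (A of the second octave, one octave up)
print(note_frequency(-12))  # 220.0 Hz (one octave down)
```

Because the exponent is n/12, every step of 12 semitones multiplies the frequency by exactly 2.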

People with perfect pitch can detect changes in pitch quite accurately and can say, in terms of the notation system, whether the frequency has risen or fallen. To determine frequencies in hertz, however, you need an instrument - a spectrum analyzer.

In everyday life it is enough to use fixed reference values and to describe changes in pitch in terms of notes; this suffices to tell whether a sound has risen or fallen (musicians, for example, use the musical notation system to record such changes). Professional work with sound, however, may require precise numerical values in hertz (or wavelengths in meters), which must be determined with instruments.

Types of sounds.

All sounds existing in nature are divided into two groups: musical sounds and noise sounds. The main role in music is played by musical sounds, although noise sounds are also used (in particular, almost all percussion instruments produce noise sounds).

Noise sounds do not have a clearly defined pitch, for example, crackling, creaking, knocking, thunder, rustling, etc.

Such instruments include almost all percussion instruments: the triangle, snare drum, various types of cymbals, bass drum, etc. There is a certain amount of convention in this, which should not be forgotten. For example, a percussion instrument such as the wood block produces a sound with a fairly clearly defined pitch, yet it is still classified as a noise instrument. It is therefore more reliable to distinguish noise instruments by whether or not a melody can be performed on them.

Musical sounds are sounds that have a certain pitch that can be measured with absolute accuracy. Any musical sound can be repeated with the voice or on any instrument.

Man is truly the most intelligent of the animals inhabiting the planet. However, our intellect often costs us superiority in such abilities as perceiving the environment through smell, hearing and other senses; most animals are far ahead of us where the auditory range is concerned. The human hearing range is the range of frequencies that the human ear can perceive. Let us try to understand how the human ear works with respect to sound perception.

Human hearing range under normal conditions

On average, the human ear can detect and distinguish sound waves in the range of 20 Hz to 20 kHz (20,000 Hz). However, as a person ages, the auditory range narrows; in particular, its upper limit falls. In older people it is usually much lower than in young people, while infants and children have the widest hearing range. Auditory perception of high frequencies begins to deteriorate from about the age of eight.

Human hearing under ideal conditions

In the laboratory, a person's hearing range is determined using an audiometer, which emits sound waves of different frequencies through appropriately tuned headphones. Under such ideal conditions the human ear can detect frequencies from about 12 Hz to 20 kHz.


Hearing range in men and women

There is a significant difference between the hearing range of men and women. It has been found that women are more sensitive to high frequencies compared to men. The perception of low frequencies is at more or less the same level in men and women.

Various scales to indicate hearing range

Although the frequency scale is the most common for describing the human hearing range, it is also often characterized in pascals (Pa) and decibels (dB). Measuring in pascals is considered inconvenient, however, because this unit involves working with very large numbers. A sound pressure of one micropascal corresponds to a displacement of the vibrating air of about one-tenth the diameter of a hydrogen atom; the sound waves in the human ear involve far larger pressures, making it awkward to express the range of human hearing in pascals.

The softest sound the human ear can detect is approximately 20 µPa. The decibel scale is easier to use because it is a logarithmic scale referenced directly to the pascal scale: it takes 20 µPa as its reference point of 0 dB and then compresses the pressure scale, so that 20 million µPa corresponds to only 120 dB. The range of the human ear thus turns out to be 0-120 dB.
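The pascal-to-decibel compression described above is just a base-10 logarithm. A minimal sketch (the `spl_db` helper is a hypothetical name):

```python
import math

P_REF = 20e-6  # 20 µPa, the reference pressure taken as 0 dB

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB for a sound pressure given in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # threshold of hearing -> 0.0 dB
print(spl_db(20.0))   # 20 Pa = 20 million µPa -> 120.0 dB
```

A millionfold increase in pressure thus collapses into a convenient 120 dB span.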

The hearing range varies significantly from person to person. Therefore, to detect hearing loss, it is best to measure the range of audible sounds against a reference scale rather than against a generalized standardized scale. Tests can be carried out with specialized hearing-diagnostic instruments, which make it possible to determine the extent of hearing loss accurately and diagnose its causes.

Psychoacoustics, a field of science on the border between physics and psychology, studies the auditory sensations produced when a physical stimulus - sound - acts on the ear. A large amount of data has been accumulated on human reactions to auditory stimuli; without it, it is difficult to form a correct understanding of how audio transmission systems operate. Let us consider the most important features of human sound perception.
A person senses changes in sound pressure occurring at frequencies of 20-20,000 Hz. Sounds with frequencies below 40 Hz are relatively rare in music and do not occur in spoken language. At very high frequencies musical perception disappears and a vague sound sensation arises that depends on the individual listener and his age. With age, hearing sensitivity decreases, above all at the upper frequencies of the audio range.
But it would be wrong to conclude from this that a wide transmitted frequency band is unimportant for older people. Experiments have shown that even people who can barely perceive signals above 12 kHz very easily recognize the lack of high frequencies in a musical transmission.

Frequency characteristics of auditory sensations

The range of sounds audible to humans, 20-20,000 Hz, is limited in intensity by two thresholds: the threshold of audibility below and the threshold of pain above.
The threshold of audibility is the minimum sound pressure (more precisely, the minimum pressure increment relative to the ambient level) that produces an audible sensation. Hearing is most sensitive at frequencies of 1000-5000 Hz, where the threshold is lowest (a sound pressure of about 2·10⁻⁵ Pa). Toward lower and higher sound frequencies, hearing sensitivity drops sharply.
The threshold of pain determines the upper limit of perceivable sound energy and corresponds approximately to a sound intensity of 10 W/m², or 130 dB (for a reference signal at 1000 Hz).
As sound pressure increases, so does the intensity of the sound, and the auditory sensation grows in discrete steps, each corresponding to the intensity discrimination threshold. The number of these steps at middle frequencies is approximately 250; at low and high frequencies it decreases, averaging about 150 over the frequency range.

Since the intensity range spans 130 dB, the elementary step in sensation averages 0.8 dB over the amplitude range, corresponding to a change in sound intensity of about 1.2 times. At low listening levels these steps reach 2-3 dB; at high levels they shrink to 0.5 dB (a factor of about 1.1). An increase in the power of the amplification path of less than about 1.44 times is practically undetectable by the human ear, and at the lower sound pressures a loudspeaker develops, even doubling the power of the output stage may not produce a noticeable result.
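The correspondence between decibel steps and intensity ratios quoted above follows from the definition of the decibel for intensity (power-like) quantities. A quick illustrative check:

```python
def intensity_ratio(delta_db: float) -> float:
    """Intensity (power) ratio corresponding to a level change of delta_db dB."""
    return 10 ** (delta_db / 10)

print(round(intensity_ratio(0.8), 2))   # average discrimination step -> ~1.2x
print(round(intensity_ratio(0.5), 2))   # step at high levels -> ~1.1x
print(round(intensity_ratio(1.58), 2))  # ~1.44x, the smallest power increase
                                        # the text calls practically detectable
```

Each additional decibel multiplies intensity by the same factor, which is why the ear's roughly equal subjective steps map onto roughly equal dB increments.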

Subjective sound characteristics

Sound transmission quality is judged by auditory perception. Therefore, correct technical requirements for a sound transmission path, or for its individual links, can be determined only by studying the relationships between the subjectively perceived sensation of sound and its objective characteristics: pitch, loudness, and timbre.
The concept of pitch implies a subjective assessment of the perception of sound across the frequency range. Sound is usually characterized not by frequency, but by pitch.
A tone is a signal of a certain pitch that has a discrete spectrum (musical sounds, vowel sounds of speech). A signal that has a wide continuous spectrum, all frequency components of which have the same average power, is called white noise.

A gradual increase in the frequency of sound vibrations from 20 to 20,000 Hz is perceived as a gradual change in tone from the lowest (bass) to the highest.
The accuracy with which a person determines the pitch of a sound by ear depends on the acuity, musicality and training of his hearing. It should be noted that pitch depends to some extent on sound intensity: at high levels, sounds of greater intensity seem lower than weaker ones.
The human ear can clearly distinguish two tones that are close in pitch. For example, in the frequency range of approximately 2000 Hz, a person can distinguish between two tones that differ from each other in frequency by 3-6 Hz.
The subjective scale of sound perception in frequency is close to the logarithmic law. Therefore, doubling the vibration frequency (regardless of the initial frequency) is always perceived as the same change in pitch. The height interval corresponding to a 2-fold change in frequency is called an octave. The range of frequencies perceived by humans is 20-20,000 Hz, which covers approximately ten octaves.
An octave is a fairly large interval of pitch change; the ear distinguishes considerably smaller ones. Thus, within the ten octaves perceived by the ear, more than a thousand gradations of pitch can be distinguished. Music uses smaller intervals called semitones, which correspond to a frequency change of approximately 1.059 times.
An octave is divided into half-octaves and one-third octaves. For the latter, the following series of frequencies is standardized: 1; 1.25; 1.6; 2; 2.5; 3.15; 4; 5; 6.3; 8; 10 - these are the boundaries of the one-third octaves. If these frequencies are placed at equal distances along the frequency axis, a logarithmic scale results, and for this reason the frequency characteristics of sound transmission devices are plotted on a logarithmic scale.
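The standardized series just quoted matches the base-10 preferred-number convention: each boundary is 10^(n/10), rounded to a convenient nominal value, and ten such steps span exactly one decade. A small illustrative check (the rounding tolerance is an assumption, not a standard figure):

```python
# Nominal one-third-octave boundaries as listed in the text,
# compared against the exact preferred-number series 10**(n/10).
nominal = [1, 1.25, 1.6, 2, 2.5, 3.15, 4, 5, 6.3, 8, 10]
exact = [10 ** (n / 10) for n in range(11)]

for nom, ex in zip(nominal, exact):
    print(f"nominal {nom:<5} exact {ex:.3f}")
```

Every nominal value lies within about 1% of the exact logarithmic step, which is why the series looks evenly spaced on a logarithmic frequency axis.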
The loudness of a transmission depends not only on sound intensity but also on spectral composition, the conditions of perception, and the duration of exposure. Two tones of middle and low frequency with the same intensity (or the same sound pressure) are not perceived by a person as equally loud. The concept of loudness level in phons was therefore introduced to designate sounds of equal loudness: the loudness level of a sound, in phons, is the sound pressure level, in decibels, of an equally loud pure tone at 1000 Hz. Thus at 1000 Hz the loudness level in phons and the level in decibels coincide, while at other frequencies sounds may seem louder or quieter at the same sound pressure.
The experience of sound engineers in recording and editing musical works shows that in order to better detect sound defects that may arise during work, the volume level during control listening should be maintained high, approximately corresponding to the volume level in the hall.
With prolonged exposure to intense sound, hearing sensitivity gradually decreases, and the more so, the higher the volume. This decrease in sensitivity is the reaction of hearing to overload, i.e. its natural adaptation; after a break in listening, sensitivity is restored. In addition, when perceiving high-level signals the auditory apparatus introduces its own, so-called subjective, distortions (evidence of the nonlinearity of hearing). At a signal level of 100 dB, for instance, the first and second subjective harmonics reach levels of 85 and 70 dB.
High volume levels and long exposure to them cause irreversible changes in the auditory organ. It has been noted that among young people hearing thresholds have risen sharply in recent years; the reason is a passion for pop music, with its characteristically high volume levels.
Loudness level is measured with an electroacoustic device - a sound level meter. The sound being measured is first converted into electrical oscillations by a microphone; after amplification, these oscillations are read on a meter calibrated in decibels. So that the readings correspond as closely as possible to the subjective perception of loudness, the device is equipped with special filters that adjust its sensitivity at different frequencies to match the frequency response of hearing.
Another important characteristic of sound is timbre. The ability of hearing to distinguish timbre allows us to perceive signals with a wide variety of shades; thanks to its characteristic shades, the sound of each instrument and voice becomes colorful and easily recognizable.
Timbre, being a subjective reflection of the complexity of the perceived sound, has no quantitative assessment and is characterized by qualitative terms (beautiful, soft, juicy, etc.). When transmitting a signal along an electroacoustic path, the resulting distortions primarily affect the timbre of the reproduced sound. The condition for the correct transmission of the timbre of musical sounds is the undistorted transmission of the signal spectrum. The signal spectrum is the collection of sinusoidal components of a complex sound.
The simplest spectrum is that of a so-called pure tone; it contains only one frequency. The sound of a musical instrument is more interesting: its spectrum consists of the fundamental frequency and several "admixed" frequencies called overtones (higher tones). Overtones are multiples of the fundamental frequency and are usually smaller in amplitude.
The timbre of the sound depends on the distribution of intensity over overtones. The sounds of different musical instruments vary in timbre.
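As a sketch of how the distribution of intensity over overtones shapes timbre while leaving pitch unchanged, here is a minimal Python example; the `harmonic_wave` helper and the two amplitude lists are invented for illustration, not taken from any real instrument:

```python
import math

def harmonic_wave(t: float, fundamental_hz: float, amplitudes: list) -> float:
    """Sample a complex tone at time t: a fundamental plus overtones whose
    frequencies are integer multiples of the fundamental. The list of
    per-harmonic `amplitudes` is what determines the timbre."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * fundamental_hz * t)
               for k, a in enumerate(amplitudes))

# Two hypothetical instruments playing the same 440 Hz pitch: the pitch is
# identical, only the energy distribution over overtones differs.
bright = [1.0, 0.8, 0.6, 0.4]   # strong overtones -> richer, sharper timbre
mellow = [1.0, 0.2, 0.05, 0.0]  # weak overtones  -> softer timbre
sample = harmonic_wave(0.0003, 440.0, bright)
```

Both waveforms repeat with the period of the 440 Hz fundamental, so the ear hears the same note; only the "color" differs.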
More complex still is the spectrum of a combination of musical sounds, called a chord; it contains several fundamental frequencies together with their overtones.
Differences in timbre are due mainly to the low- and mid-frequency components of the signal, so a large variety of timbres is associated with signals lying in the lower part of the frequency range. Signals in its upper part increasingly lose their timbral coloring as frequency rises, because their harmonic components gradually pass beyond the limit of audible frequencies. This can be explained by the fact that up to 20 or more harmonics actively participate in forming the timbre of low sounds, only 8-10 for mid-range sounds and 2-3 for high ones; the rest are either weak or fall outside the audible range. High sounds are therefore, as a rule, poorer in timbre.
Almost all natural sound sources, including sources of musical sounds, exhibit a specific dependence of timbre on loudness level. Hearing is adapted to this dependence too: it naturally judges the intensity of a source by the color of its sound. Louder sounds are usually harsher.

Musical sound sources

The sound quality of electroacoustic systems is strongly influenced by a number of factors characterizing the primary sound sources.
The acoustic parameters of musical sources depend on the composition of the performers (orchestra, ensemble, group, soloist) and the type of music (symphonic, folk, pop, etc.).

The origin and formation of sound on each musical instrument has its own specifics associated with the acoustic characteristics of sound production in a particular musical instrument.
An important element of musical sound is the attack - the specific transient during which the stable characteristics of a sound (loudness, timbre, pitch) are established. Any musical sound passes through three stages: a beginning, a middle and an end, with both the initial and final stages having a certain duration. The initial stage is called the attack, and its length varies: for plucked instruments, percussion and some wind instruments it lasts 0-20 ms, for the bassoon 20-60 ms. The attack is not merely a rise in loudness from zero to a steady value; it can be accompanied by similar changes in pitch and timbre. Moreover, an instrument's attack characteristics differ in different parts of its range and with different playing styles; in the richness of possible expressive methods of attack, the violin is the most perfect instrument.
One characteristic of any musical instrument is its frequency range. Besides the fundamental frequencies, each instrument is characterized by additional components - overtones (or, as is customary in electroacoustics, higher harmonics) - which determine its specific timbre.
It is known that sound energy is unevenly distributed across the entire spectrum of sound frequencies emitted by a source.
Most instruments exhibit a reinforcement of fundamental frequencies and of individual overtones in certain (one or more) relatively narrow frequency bands - formants - that differ from instrument to instrument. Resonant frequencies of the formant region (in hertz) are: tuba 100-200, horn 200-400, trombone 300-900, trumpet 800-1750, saxophone 350-900, oboe 800-1500, bassoon 300-900, clarinet 250-600.
Another characteristic property of musical instruments is the strength of their sound, determined by the amplitude (swing) of their sounding body or air column (a greater amplitude corresponds to a stronger sound, and vice versa). Peak acoustic power values (in watts) are: large orchestra 70, bass drum 25, timpani 20, snare drum 12, trombone 6, piano 0.4, trumpet and saxophone 0.3, tuba 0.2, double bass 0.16, piccolo 0.08, clarinet, horn and triangle 0.05.
The ratio of the sound power extracted from an instrument when played “fortissimo” to the power of sound when played “pianissimo” is usually called the dynamic range of the sound of musical instruments.
The dynamic range of a musical sound source depends on the type of performing group and the nature of the performance.
Let us consider the dynamic range of individual sound sources. The dynamic range of individual musical instruments and ensembles (orchestras and choirs of various compositions), and of voices, is understood as the ratio of the maximum sound pressure created by the source to the minimum, expressed in decibels.
In practice, when determining the dynamic range of a sound source, one usually operates only on sound pressure levels, calculating or measuring their corresponding difference. For example, if the maximum sound level of an orchestra is 90 and the minimum is 50 dB, then the dynamic range is said to be 90 - 50 = 40 dB. In this case, 90 and 50 dB are sound pressure levels relative to zero acoustic level.
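The orchestra example above reduces to simple dB arithmetic, whether one starts from levels or directly from pressures. An illustrative sketch (the function names are hypothetical):

```python
import math

def dynamic_range_db(level_max_db: float, level_min_db: float) -> float:
    """Dynamic range as a difference of sound pressure levels in dB."""
    return level_max_db - level_min_db

def dynamic_range_from_pressures(p_max: float, p_min: float) -> float:
    """The same quantity computed directly from sound pressures in pascals."""
    return 20 * math.log10(p_max / p_min)

print(dynamic_range_db(90, 50))  # the orchestra example -> 40 dB
```

Because both levels are referenced to the same zero acoustic level, the reference cancels and only the difference remains.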
The dynamic range of a given sound source is not constant. It depends on the nature of the work being performed and on the acoustic conditions of the room in which the performance takes place. Reverberation expands the dynamic range, which typically reaches its maximum in rooms of large volume and minimal sound absorption. For almost all instruments and human voices, the dynamic range is uneven across the sound registers: for example, the loudness level of a vocalist's lowest sound at forte equals that of his highest sound at piano.

The dynamic range of a particular musical program is expressed in the same way as for individual sound sources, but the maximum sound pressure is noted with a dynamic ff (fortissimo) tone, and the minimum with a pp (pianissimo).

The highest loudness, marked in scores as fff (forte-fortissimo), corresponds to an acoustic sound pressure level of approximately 110 dB, and the lowest, marked ppp (piano-pianissimo), to approximately 40 dB.
It should be noted that the dynamic nuances of performance in music are relative and their relationship with the corresponding sound pressure levels is to some extent conditional. The dynamic range of a particular musical program depends on the nature of the composition. Thus, the dynamic range of classical works by Haydn, Mozart, Vivaldi rarely exceeds 30-35 dB. The dynamic range of pop music usually does not exceed 40 dB, while that of dance and jazz music is only about 20 dB. Most works for orchestra of Russian folk instruments also have a small dynamic range (25-30 dB). This is also true for a brass band. However, the maximum sound level of a brass band in a room can reach a fairly high level (up to 110 dB).

Masking effect

The subjective assessment of loudness depends on the conditions in which the listener perceives the sound. In real conditions an acoustic signal does not exist in absolute silence: extraneous noise acts on the hearing at the same time, hampering perception and masking the main signal to some extent. The masking of a pure sine tone by extraneous noise is measured by the number of decibels by which the threshold of audibility of the masked signal rises above its threshold of perception in silence.
Experiments on the masking of one sound signal by another show that a tone of any frequency is masked much more effectively by lower tones than by higher ones. For example, if two tuning forks (1200 and 440 Hz) emit sounds of the same intensity, we stop hearing the first tone: it is masked by the second (if we damp the vibration of the second tuning fork, we hear the first again).
If two complex sound signals, each consisting of a certain spectrum of audio frequencies, sound simultaneously, mutual masking occurs; the effect is strongest when the main energy of both signals lies in the same region of the audio range. Thus, when an orchestral piece is transmitted, masking by the accompaniment may render the soloist's part poorly intelligible and indistinct.
Achieving clarity or, as they say, “transparency” of sound in the sound transmission of orchestras or pop ensembles becomes very difficult if an instrument or individual groups of orchestra instruments play in one or similar registers at the same time.
When recording an orchestra, the sound director must take these masking effects into account. At rehearsals, with the help of the conductor, he establishes a balance between the sound strength of the instruments within each group and between the groups of the whole orchestra. Clarity of the main melodic lines and of individual parts is achieved in such cases by placing microphones close to the performers, by the sound engineer's deliberate emphasis of the instruments most important at a given point in the work, and by other special sound engineering techniques.
The phenomenon of masking is opposed by the psychophysiological ability of the hearing organs to single out from the general mass of sounds one or more that carry the most important information. For example, when an orchestra is playing, the conductor notices the slightest inaccuracies in the performance of a part on any instrument.
Masking can significantly affect the quality of signal transmission. Clear perception of a received sound is possible if its intensity appreciably exceeds the level of the interference components lying in the same band; with uniform interference the signal excess should be 10-15 dB. This feature of auditory perception finds practical application, for example, in assessing the electroacoustic characteristics of recording media: if the signal-to-noise ratio of an analog record is 60 dB, the dynamic range of the recorded program can be no more than 45-48 dB.
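The arithmetic behind that record example is simply the medium's signal-to-noise ratio minus the excess the quietest passages must keep over the noise floor. A sketch under that reading of the text (the function name is hypothetical):

```python
def max_program_dynamic_range(snr_db: float, required_excess_db: float) -> float:
    """Usable program dynamic range for a medium with the given
    signal-to-noise ratio: the quietest passages must still exceed the
    noise floor by `required_excess_db` (10-15 dB per the masking data)."""
    return snr_db - required_excess_db

print(max_program_dynamic_range(60, 12))  # -> 48 dB
print(max_program_dynamic_range(60, 15))  # -> 45 dB
```

With a 60 dB medium this yields exactly the 45-48 dB range quoted above.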

Temporal characteristics of auditory perception

The auditory apparatus, like any other oscillatory system, is inertial. When a sound ceases, the auditory sensation does not vanish immediately but fades gradually to zero. The time during which the loudness level decreases by 8-10 phons is called the time constant of hearing. This constant depends on a number of circumstances, including the parameters of the perceived sound. If two short sound pulses with identical frequency composition and level reach the listener, and one of them is delayed by no more than about 50 ms, they are perceived as one; at larger delays the two pulses are perceived separately, and an echo arises.
This property of hearing is taken into account in the design of certain signal processing devices, such as electronic delay lines and reverberators.
Owing to another special property of hearing, the perceived loudness of a short sound pulse depends not only on its level but also on the duration of its action on the ear. A short sound lasting only 10-12 ms is perceived as quieter than a sound of the same level lasting, say, 150-400 ms; perceived loudness is thus the result of averaging the energy of the sound wave over a certain interval. Hearing is also inertial with respect to nonlinear distortions: it does not perceive them if the sound pulse lasts less than 10-20 ms. This is why the level indicators of consumer sound-recording equipment average the instantaneous signal values over a period chosen to match the temporal characteristics of the hearing organs.

Spatial representation of sound

One important human ability is determining the direction to a sound source. This ability, called the binaural effect, rests on the fact that a person has two ears. Experimental data show that two mechanisms are involved: one for high-frequency tones and one for low-frequency tones.

Sound travels a shorter distance to the ear facing the source than to the other ear; as a result, the pressure of the sound waves in the two ear canals differs in phase and amplitude. The amplitude differences are significant only at high frequencies, where the sound wavelength becomes comparable to the size of the head. When the amplitude difference exceeds a threshold of about 1 dB, the source seems to lie on the side where the amplitude is greater, and the angle of deviation of the source from the midline (line of symmetry) is approximately proportional to the logarithm of the amplitude ratio.
In determining the direction to sources with frequencies below 1500-2000 Hz, phase differences are decisive. The sound seems to come from the side on which the phase-leading wave reaches the ear, and the angle of deviation from the midline is proportional to the difference in arrival times of the sound waves at the two ears. A trained person can notice a phase difference corresponding to a time difference of about 100 µs.
The ability to determine the direction of sound in the vertical plane is much less developed (about 10 times). This physiological feature is associated with the orientation of the hearing organs in the horizontal plane.
A specific feature of human spatial perception is that the hearing organs can sense a total, integral localization created by artificial means. For example, two speakers are installed along the front of a room 2-3 m apart, and the listener sits strictly on the axis of symmetry of the pair, at the same distance from each. If two sounds identical in phase, frequency and intensity are emitted through the speakers, the listener cannot separate them: his sensations suggest a single, apparent (virtual) sound source located exactly in the center, on the axis of symmetry.
If we now reduce the volume of one speaker, the apparent source moves toward the louder one. The illusion of a moving sound source can be produced not only by changing signal levels but also by artificially delaying one sound relative to the other; in that case the apparent source shifts toward the speaker emitting the earlier signal.
To illustrate integral localization, an example: with the speakers 2 m apart and the listener 2 m from the front line, shifting the source 40 cm to the left or right requires two signals differing in intensity level by 5 dB or in arrival time by 0.3 ms. With a level difference of 10 dB or a delay of 0.6 ms, the source "moves" 70 cm from the center.
Thus, by changing the sound pressure produced by the speakers, one creates the illusion of a moving sound source. This phenomenon is called summing localization, and a two-channel stereophonic sound transmission system is built on it.
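The way a level difference between two speakers shifts the virtual source can be sketched with a common panning convention. The sine/cosine ("constant-power") law below is one standard engineering choice, not something taken from the text:

```python
import math

def pan_gains(pan: float):
    """Constant-power panning: `pan` runs from -1 (full left) to +1 (full
    right). Feeding one signal to two speakers with these gains shifts the
    apparent (virtual) source between them."""
    theta = (pan + 1) * math.pi / 4  # maps pan to 0..pi/2
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0.0)                 # centered: equal gains
delta_db = 20 * math.log10(right / left)
print(round(delta_db, 1))                    # 0.0 dB -> source in the center
```

Increasing `pan` raises the right gain and lowers the left, producing exactly the kind of interchannel level difference (several dB per tens of centimeters of apparent shift) described in the example above.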
Two microphones are installed in the primary room, each feeding its own channel; the secondary room has two loudspeakers. The microphones are placed a certain distance apart along a line parallel to the movement of the sound source. As the source moves, different sound pressures act on the microphones and the sound waves arrive at different times because of the unequal distances from the source to the microphones. This difference creates a summing localization effect in the secondary room, so that the apparent source is localized at a point in space between the two loudspeakers.
A binaural sound transmission system should also be mentioned. In this system, called an "artificial head" system, two separate microphones are placed in the primary room, spaced apart by the distance between a person's ears. Each microphone has an independent transmission channel, whose output in the secondary room feeds an earphone for the left or right ear. With identical channels, such a system accurately reproduces the binaural effect created near the ears of the "artificial head" in the primary room. The need to wear headphones, and to wear them for a long time, is its disadvantage.
The hearing organ determines the distance to a sound source by a number of indirect cues, with some error. The subjective assessment of distance changes under the influence of various factors depending on whether the source is near or far. It has been found that for small distances (up to 3 m) the subjective assessment is almost linearly related to the change in volume of a source moving in depth. An additional cue for a complex signal is its timbre, which becomes increasingly "heavy" as the source approaches the listener, because the low overtones are intensified more than the high-register overtones as the volume level rises.
For average distances of 3-10 m, moving the source away from the listener is accompanied by a proportional decrease in volume, and this change applies equally to the fundamental frequency and the harmonic components. As a result, the high-frequency part of the spectrum is relatively strengthened and the timbre becomes brighter.
As the distance increases further, energy losses in the air grow in proportion to the square of the frequency. The increased loss of high-register overtones reduces timbral brightness. Thus, the subjective assessment of distance is associated with changes in volume and timbre.
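The frequency-squared dependence just described can be sketched with a simplified attenuation model. The constant below is illustrative only (real atmospheric absorption also depends on temperature and humidity); what matters is the quadratic scaling with frequency:

```python
# Minimal sketch of air absorption growing with the square of frequency.
# K_ABSORPTION is an illustrative constant, not a measured value.

K_ABSORPTION = 1e-10  # dB per metre per Hz^2, illustrative only

def air_loss_db(freq_hz: float, distance_m: float) -> float:
    """Extra attenuation (dB) of a spectral component over a path."""
    return K_ABSORPTION * freq_hz ** 2 * distance_m

# Under this model a 10 kHz overtone loses 100x more level per metre
# than a 1 kHz component, which is why distant sources sound duller:
ratio = air_loss_db(10_000, 50) / air_loss_db(1_000, 50)
print(ratio)  # 100.0
```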
Indoors, the first-reflection signals, delayed relative to the direct sound by 20-40 ms, are perceived by the hearing organ as coming from different directions. At the same time, their increasing delay creates the impression of a significant distance to the points from which these reflections arrive. Thus, the delay time indicates the relative distance of secondary sources or, equivalently, the size of the room.
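The link between reflection delay and room size is simple kinematics: the delay times the speed of sound gives the extra path length the reflection travelled. A short sketch (the function name is ours):

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def extra_path_m(delay_ms: float) -> float:
    """Path-length difference (m) implied by a reflection delay (ms)."""
    return SPEED_OF_SOUND * delay_ms / 1000.0

# The 20-40 ms first reflections mentioned in the text correspond to
# reflection paths roughly 7-14 m longer than the direct path:
print(extra_path_m(20.0))  # about 6.9 m
print(extra_path_m(40.0))  # about 13.7 m
```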

Some features of the subjective perception of stereophonic broadcasts

A stereophonic sound transmission system has a number of significant features compared to a conventional monophonic one.
The quality that distinguishes stereophonic sound, its spatial volume, i.e. natural acoustic perspective, can be assessed by additional indicators that are meaningless in monophonic transmission. These include: the angle of audibility, i.e. the angle at which the listener perceives the stereophonic sound picture; stereo resolution, i.e. the subjectively determined localization of individual elements of the sound image at certain points in space within the angle of audibility; and acoustic atmosphere, i.e. the effect of giving the listener a feeling of presence in the primary room where the transmitted sound event occurs.

On the role of room acoustics

Rich sound is achieved not only with sound-reproduction equipment. Even with fairly good equipment, sound quality may be poor if the listening room lacks certain properties. It is known that in a closed room a sound phenomenon called reverberation occurs. Depending on its duration, reverberation can improve or worsen sound quality.

A person in a room perceives not only direct sound waves created directly by the sound source, but also waves reflected by the ceiling and walls of the room. Reflected waves are heard for some time after the sound source has stopped.
It is sometimes believed that reflected signals play only a negative role, interfering with the perception of the main signal. This idea is incorrect. Part of the energy of the early reflected echo signals, reaching the ears with short delays, reinforces the main signal and enriches its sound. In contrast, later reflected echoes, whose delay exceeds a certain critical value, form a sound background that makes it difficult to perceive the main signal.
The listening room should not have a long reverberation time. Living rooms, as a rule, have little reverberation because of their limited size and the presence of sound-absorbing surfaces: upholstered furniture, carpets, curtains, etc.
Obstacles of different nature and properties are characterized by a sound absorption coefficient, which is the ratio of the absorbed energy to the total energy of the incident sound wave.

To increase the sound-absorbing properties of a carpet (and reduce noise in a living room), it is advisable to hang it not flush against the wall but with a 30-50 mm gap.

Below 20 Hz and above 20 kHz lie, respectively, the regions of infrasound and ultrasound, inaudible to humans. Curves located between the pain-threshold curve and the hearing-threshold curve are called equal-loudness curves and reflect the difference in human perception of sound at different frequencies.

Since sound waves are an oscillatory process, the sound intensity and sound pressure at a point in the sound field vary in time sinusoidally. The characteristic quantities are their root-mean-square (RMS) values. The dependence of the RMS values of the sinusoidal noise components, or of their corresponding levels in decibels, on frequency is called the frequency spectrum of the noise (or simply the spectrum). Spectra are obtained using a set of electrical filters, each passing the signal in a certain frequency band (bandwidth).
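The RMS value mentioned above is the square root of the mean of the squared samples; for a pure sinusoid of amplitude A it equals A/√2. A minimal sketch (function name is ours):

```python
import math

def rms(samples):
    """Root-mean-square value of a sampled signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full period of a unit-amplitude sine, finely sampled:
n = 10_000
period = [math.sin(2 * math.pi * i / n) for i in range(n)]
period_rms = rms(period)
print(period_rms)  # approaches 1/sqrt(2), about 0.707
```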

To obtain the frequency characteristics of noise, the audio-frequency range is divided into bands with a certain ratio of boundary frequencies (Fig. 2).

An octave band is a frequency band in which the upper limit frequency f_u equals twice the lower limit frequency f_l, i.e. f_u/f_l = 2. For example, on the musical scale a sound with frequency f = 262 Hz is "do" of the first octave, and a sound with f = 262 × 2 = 524 Hz is "do" of the second octave. "A" of the first octave is 440 Hz, "A" of the second is 880 Hz. Most often the sound range is divided into octaves, or octave bands. An octave band is characterized by its geometric mean frequency

f_gm = √(f_l · f_u)

In some cases (detailed study of noise sources, assessment of sound-insulation efficiency), division into half-octave bands (f_u/f_l = √2 ≈ 1.41) and third-octave bands (f_u/f_l = 2^(1/3) ≈ 1.26) is used.
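The band relations above are easy to check numerically: given a lower edge and the band ratio, the upper edge and the geometric-mean center frequency follow directly. A short sketch (function name and the 707 Hz example are ours):

```python
def band_edges(f_lower: float, ratio: float):
    """Return (upper edge, geometric-mean center) of a frequency band
    with the given f_u/f_l ratio."""
    f_upper = f_lower * ratio
    f_center = (f_lower * f_upper) ** 0.5
    return f_upper, f_center

# Octave band starting at 707 Hz: upper edge 1414 Hz, center close to 1 kHz
octave_upper, octave_center = band_edges(707.0, 2.0)
print(octave_upper, octave_center)

# Third-octave bands use the ratio 2**(1/3), approximately 1.26
third_ratio = 2 ** (1 / 3)
print(third_ratio)
```

Note that the geometric mean, not the arithmetic mean, is used, so that the center sits symmetrically between the edges on a logarithmic frequency axis.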

3. Measurement of industrial noise

Sound is characterized by its intensity I [W/m²] and sound pressure p [Pa]. In addition, any noise source is characterized by its sound power W [W], the total sound energy emitted by the source into the surrounding space per unit time.

Taking into account the logarithmic dependence of sensation on the energy of the stimulus (the Weber-Fechner law), and for the sake of unified units and convenient numbers, it is customary to use not the values of intensity, sound pressure and power themselves, but their logarithmic levels:

L_I = 10 lg(I/I0),   L_p = 20 lg(p/p0),   L_W = 10 lg(W/W0),

where I is the sound intensity at a given point; I0 is the intensity corresponding to the hearing threshold, equal to 10⁻¹² W/m²; p is the sound pressure at a given point in space; p0 is the threshold sound pressure, equal to 2×10⁻⁵ Pa; W is the sound power of the source; and W0 is the threshold sound power, equal to 10⁻¹² W.

At normal atmospheric pressure

L_I = L_p = L.

The sound pressure level L_p (often denoted simply L) is used when measuring noise to assess its effect on humans. The intensity level L_I is used in acoustic calculations of rooms.
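The logarithmic levels defined above can be computed directly from the threshold values given in the text. A minimal sketch (function names are ours):

```python
import math

I0 = 1e-12   # W/m^2, hearing-threshold intensity
P0 = 2e-5    # Pa, threshold sound pressure
W0 = 1e-12   # W, threshold sound power

def intensity_level(i: float) -> float:
    """L_I = 10 lg(I/I0), in dB."""
    return 10 * math.log10(i / I0)

def pressure_level(p: float) -> float:
    """L_p = 20 lg(p/p0), in dB."""
    return 20 * math.log10(p / P0)

def power_level(w: float) -> float:
    """L_W = 10 lg(W/W0), in dB."""
    return 10 * math.log10(w / W0)

# A sound pressure of 2 Pa sits 100 dB above the hearing threshold:
print(pressure_level(2.0))   # 100.0
# The threshold values themselves correspond to 0 dB by construction:
print(intensity_level(1e-12))  # 0.0
```

The factor 20 (rather than 10) for pressure reflects that intensity is proportional to the square of pressure, so both formulas give the same level for the same sound.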

When assessing and standardizing noise, a special quantity called the sound level is also used. The sound level is the overall noise level measured on the A-scale of a sound-level meter. Modern sound-level meters usually provide two sensitivity characteristics, "A" and "C" (see figure). The "C" characteristic is almost linear over the entire measured range and is used to study the noise spectrum. The "A" characteristic simulates the sensitivity curve of the human ear. The unit of sound level is dB(A); thus a level in dB(A) corresponds to the subjective perception of noise by a person.
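The "A" characteristic mentioned above is defined analytically in the IEC 61672 standard; the sketch below implements that standard frequency-response formula, normalized so the correction is 0 dB at 1 kHz:

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting correction in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # +2.00 dB offset normalizes the curve to 0 dB at 1000 Hz
    return 20 * math.log10(ra) + 2.00

print(a_weighting_db(1000.0))  # approximately 0 dB
print(a_weighting_db(100.0))   # strongly negative: the ear is less
                               # sensitive to low frequencies
```

This is why a low-frequency rumble with the same physical sound pressure as a 1 kHz tone reads much lower in dB(A).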

Study of the effectiveness of noise protection using sound insulation and sound absorption methods

Goal of the work: to become familiar with the methodology and instruments for measuring noise of different natures and time characteristics, with its hygienic assessment and standardization, and with protection methods based on sound insulation and sound absorption.

Theoretical part

Noise is defined as any sound that is undesirable for humans. From a physical point of view, noise is a chaotic combination of sounds of various frequencies and intensities (strengths) arising from mechanical vibrations in solid, liquid and gaseous media.

Noise as an acoustic process is characterized from both physical and physiological standpoints. Physically, it is a phenomenon associated with the wave-like propagation of vibrations of the particles of an elastic medium. Physiologically, it is characterized by the sensation caused by the impact of sound waves on the organ of hearing.



Sound frequency range

The ear perceives vibrations of the environment in the frequency range from 16 to 20,000 Hz. Maximum hearing sensitivity occurs at frequencies of 1-3 kHz. Sounds of equal energy but different frequency are perceived as differing in loudness. Noise with a frequency of 1000 Hz is taken as the reference when assessing loudness. The lowest sound pressure perceived as sound at a frequency of 1000 Hz is called the hearing threshold. A sound pressure of 200 Pa causes a sensation of pain in the hearing organs and is called the pain threshold.

A change of 10 dB is perceived as roughly a doubling of loudness. Sound pressure levels at a frequency of 1000 Hz are taken as loudness levels. The unit of loudness level is the phon.
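The two thresholds just given span the full dynamic range of hearing; expressing their ratio in decibels with the level formula from earlier in the text gives the familiar 140 dB figure:

```python
import math

P_HEARING = 2e-5   # Pa, hearing threshold at 1000 Hz
P_PAIN = 200.0     # Pa, pain threshold

# Dynamic range of human hearing in decibels: L = 20 lg(p/p0)
dynamic_range_db = 20 * math.log10(P_PAIN / P_HEARING)
print(dynamic_range_db)  # 140.0
```

The pressure ratio is 10 million to one, which is precisely why the logarithmic decibel scale is used instead of raw pascals.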
