Tuesday, January 1, 2008

Miking a Violin

White, Fig. 14.1
“Basic technique:
1. Place a mic slightly above and about 2 meters in front of the violin;
2. Place a ribbon mic over the player’s [right] shoulder;
3. Position a mic [5 cm to 15 cm] underneath the violin;
4. Position a mic behind the violinist so that the head and body of the violinist are partially obstructing the direct path between the mic and the instrument.”
  —  Owsinski, p. 150.
To this, consider adding #5: a ‘near-coincident’ pair: two cardioid-pattern mics spaced about 15 cm apart horizontally (roughly the spacing of your ears), with their axes angled away from each other at about 120 degrees. This can improve depth and phase imaging without adding reverb confusion. You can set the pair up on a normal mic stand or mic boom with a Sabra Som ST4 Mic Bar or equivalent. Depending on the orientation of the two mics with respect to the instrument, the physical (acoustic) time delay associated with the 15 cm between them (i.e., between the centers of the two mics’ cardioid capsules) will be as much as about 0.5 msec at normal chamber-music concert-hall temperatures and humidity levels.
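That ~0.5 msec figure follows directly from the speed of sound; here is a quick sanity check (the function name and temperature formula are mine, using the standard dry-air approximation):

```python
def max_intermic_delay(spacing_m=0.15, temp_c=20.0):
    """Worst-case arrival-time difference, in seconds, between two mics
    spaced `spacing_m` apart, for a source lying on the axis joining them.
    Speed of sound in dry air is roughly 331.3 + 0.606*T m/s (T in Celsius)."""
    c = 331.3 + 0.606 * temp_c  # ~343 m/s at 20 degrees C
    return spacing_m / c

# 15 cm at room temperature gives roughly 0.44 msec
delay = max_intermic_delay()
```

For off-axis sources the delay is smaller (it scales with the cosine of the source angle), so 0.5 msec is the practical upper bound for a 15 cm bar.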

Some might think that adding more mics like this is too complex, too many degrees-of-freedom, too many choices in mixing, too much bother.

But I think it’s not overly complex. And there’s physiology and acoustics underlying a rationale for doing it. The results can be worth the effort if you’ve got the time to do it. In fact, in comparison to so many other aspects of sound engineering that have changed dramatically over the decades, it’s a bit surprising that miking practice has changed so little.

To understand how ‘near-coincident mic pair’ and other mic-array techniques work, it helps to consider the binaural physiology of hearing, specifically the psychoacoustical phenomenon known as the ‘precedence effect’: in a reverberant room, the neural response to reflected signals is ‘inhibited’ for a period ranging from hundreds of microseconds to a few milliseconds after the direct signal reaches the listener.

Multi-microphone digital processing schemes have of course been used for years in connection with adaptive noise canceling (ANC) and other sound engineering techniques to remove noise and reverberation distortion. The individual microphone signals are divided into frequency bands whose corresponding outputs are co-phased (delayed differences are compensated) and summed. The gain of each band is set according to the degree of correlation between microphone signals in that band. ANC operations are equivalent to a time-varying linear filter whose properties depend on the short-term spectra of the two (or more) input channels. This approach can help improve coloration (early echoes that contribute spectral distortion) and reverberant tails (late echoes). But ANC isn’t what I’m talking about.
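As a rough sketch of that band-wise idea (not of any particular ANC product), the per-band gain can be derived from the magnitude-squared coherence between the two mic signals; the function below is illustrative only, and the names are mine:

```python
import numpy as np

def correlation_gated_gains(x1, x2, n_fft=512, hop=256):
    """Toy version of the band-wise scheme described above: split two
    mic signals into frequency bands via framed FFTs, then compute a
    per-band gain from the inter-mic coherence (high where the signals
    correlate, low where they do not). Returns the gains, not a
    resynthesized signal."""
    def stft(x):
        frames = [x[i:i + n_fft] * np.hanning(n_fft)
                  for i in range(0, len(x) - n_fft, hop)]
        return np.fft.rfft(np.array(frames), axis=1)
    X1, X2 = stft(x1), stft(x2)
    # magnitude-squared coherence per band, averaged over frames
    cross = np.mean(X1 * np.conj(X2), axis=0)
    p1 = np.mean(np.abs(X1) ** 2, axis=0)
    p2 = np.mean(np.abs(X2) ** 2, axis=0)
    return np.abs(cross) ** 2 / (p1 * p2 + 1e-12)  # values in [0, 1]
```

A real ANC chain would also co-phase the bands before summing; this fragment shows only the correlation-based gating step.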

Human binaural hearing’s ability to localize a sound despite delayed reflections that would otherwise interfere with localization has been termed the ‘precedence effect’ or the ‘first wavefront’ effect. Under the precedence effect, binaural hearing bases its judgments of localization and pitch predominantly on inter-aural cues carried by the earlier, direct sound, and this contributes substantially to our perception of the depth and timbre of stringed instruments. In general, the precedence effect operates on pairs of coherent acoustic wavefronts whose arrival times at the ear differ by anywhere from under 1 millisecond up to about 10 milliseconds. Conventional wisdom is that the effect arises from both ipsilateral (same-side) and contralateral (opposite-side) neural inhibition in the conduction pathways for each ear. Colburn and Durlach (1978) gave a good, early summary of this.

Sayers and Cherry (1957) were among the first to quantitatively describe binaural hearing in terms of interaural correlation, using a running crosscorrelation function:
Ψ(t, τ) = ∫_{t−T}^{t} x_l(t′) · x_r(t′ − τ) dt′
where x_l and x_r are the left and right signals respectively, τ is the time delay between the signals, and T is the length of the short running-integration window ending at time t.
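A discrete-time sketch of such a running crosscorrelation, evaluated over the most recent samples (the function name and sign convention are mine; a positive lag here means the right-channel event precedes the left):

```python
import numpy as np

def running_xcorr(xl, xr, lags, window):
    """Running cross-correlation Psi(t, k) at the final sample
    t = len(xl) - 1: sum over the last `window` samples of
    xl[n] * xr[n - k] for each integer lag k in `lags`."""
    t = len(xl) - 1
    out = []
    for k in lags:
        s = 0.0
        for n in range(t - window + 1, t + 1):
            if 0 <= n - k < len(xr):  # stay inside the right channel
                s += xl[n] * xr[n - k]
        out.append(s)
    return np.array(out)
```

The lag at which Ψ peaks is the model's estimate of the interaural delay.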

Lindemann also used a model of binaural hearing based on a running crosscorrelation function. Lindemann’s model, which provides a quantitative basis for the precedence effect, proposed two criteria associated with accurately perceiving the lateral displacement of auditory events: the location of the centroid, and the location of the maximum. Both criteria rely on the information in the running, inhibited crosscorrelation function Ψ.
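The two read-outs are easy to state concretely. Given Ψ sampled over a set of candidate lags, a Lindemann-style model can report both the lag of the maximum and the centroid lag (a minimal sketch; the function name and the positive-clipping choice are mine):

```python
import numpy as np

def lateralization_estimates(psi, lags):
    """Two estimates of lateral displacement from a (possibly inhibited)
    crosscorrelation function psi over candidate interaural lags:
    the lag of the maximum, and the centroid lag."""
    psi = np.asarray(psi, dtype=float)
    lags = np.asarray(lags, dtype=float)
    peak = lags[np.argmax(psi)]
    w = np.clip(psi, 0.0, None)  # weight by positive correlation mass only
    centroid = float(np.sum(w * lags) / (np.sum(w) + 1e-12))
    return peak, centroid
```

For a clean, single-source Ψ the two criteria agree; they diverge when reflections or multiple sources broaden or split the correlation peak, which is part of what makes the inhibition term in Lindemann's model interesting.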

While violin and other string sounds contain continuous excitations, they aren’t flute-like: they are not uniform excitations of pure sinusoids. That is why somewhat esoteric miking techniques might be helpful for strings while offering little noticeable advantage for some other instruments.

Basically, I suggest that this natural correlation-and-inhibition mechanism can be profitably leveraged by thoughtfully placing ‘near-coincident’ dual mics to supplement the more conventional schemes described in the blockquote above. Use your normal mics just as you routinely do, but add a ‘near-coincident’ dual-mic pair as well. For recording, the dual-mic signals should ideally be sampled at high rates (tens of kHz and up), consistent with capturing the sub-millisecond timing of the precedence effect.

(In terms of post-processing and mixing, a set of parallel independent delay lines with a 0.5 millisec-or-better resolution of delay times may be helpful to tweak the high-bandwidth-sampled multi-mic signals—but I have not experimented with that myself.)
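The delay-line idea in the parenthetical above can be sketched very simply with integer-sample delays; at a 96 kHz sample rate one sample is about 0.01 msec, comfortably finer than the 0.5 msec resolution mentioned. This is an untested illustration, and the function name and parameters are mine:

```python
import numpy as np

def mix_with_delays(tracks, delays_s, gains, sr=96000):
    """Sum multiple mic tracks, each routed through its own
    integer-sample delay line with its own gain."""
    n = max(len(t) + int(round(d * sr)) for t, d in zip(tracks, delays_s))
    out = np.zeros(n)
    for track, d, g in zip(tracks, delays_s, gains):
        k = int(round(d * sr))  # delay in whole samples
        out[k:k + len(track)] += g * np.asarray(track, dtype=float)
    return out
```

A real mixing chain would use fractional-delay interpolation rather than whole-sample shifts, but this shows the structure: parallel independent delays, then a sum.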

Just a thought… Try it and see what you think. Happy New Year.



