Front/Back panning

I have a sound sample of a singing bowl and I want to add this to a stereo track. I want the listener to perceive the sound as if the real bowl was placed somewhere in the room.
Ardour has a panner for left/right, but how can I position the sound front/back? Can this even be done (relatively easily) with a stereo track?

Googling yields a lot of very confusing information…

front - delay reverb, back - filters maybe

IIRC, the source is the dear old classic “The Mixing Engineer’s Handbook” by Bobby Owsinski:

left / right: panning
front / back: loudness
bottom / top: (filtering the) frequencies
size of the sound source: reverb / effects

:sunglasses:
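To make the first two rows of that list concrete, here is a minimal offline sketch in Python/numpy. The function name place_mono, the 48 kHz sample rate, and the gain law are illustrative choices for the sketch, not anything Ardour-specific.

```python
import numpy as np

SR = 48000  # assumed sample rate

def place_mono(x, pan=0.0, distance=1.0):
    """Crude stereo placement of a mono signal.

    pan:      -1.0 (hard left) .. +1.0 (hard right)
    distance: 1.0 = "up front"; larger values push the source back
              via simple inverse-distance attenuation (the loudness cue).
    """
    # Constant-power pan law: roughly equal loudness across the sweep.
    theta = (pan + 1.0) * np.pi / 4.0            # 0 .. pi/2
    left_gain, right_gain = np.cos(theta), np.sin(theta)

    # Front/back as plain level: about -6 dB per doubling of distance.
    level = 1.0 / max(distance, 1.0)

    return np.column_stack((x * left_gain * level,
                            x * right_gain * level))

# Stand-in for the singing-bowl sample: a decaying 440 Hz tone.
t = np.arange(SR * 2) / SR
bowl = np.sin(2 * np.pi * 440 * t) * np.exp(-t * 1.5)

near_left = place_mono(bowl, pan=-0.5, distance=1.0)   # close, slightly left
far_right = place_mono(bowl, pan=+0.7, distance=3.0)   # farther back, right
```

Level alone gives only a weak sense of depth, which is where the filtering and reverb rows of the list, and the replies below, come in.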


I’ve experimented with plugins that can achieve this:

Panagement
https://www.auburnsounds.com/products/Panagement.html
Fun, but it had a tendency to crash when I tested the free version.

IEM Plug-in Suite

This is a suite of ambisonics plugins. Might be a bit too much if you are working on regular music.
This demo is really helpful: How to make Ambisonic/Binaural 360 Audio with Ardour - YouTube

If you are using Linux you might find them as packages on your system (iem-plugin-suite-vst).


You have to filter the sound in a way that mimics the changes to a source’s timbre when it is behind you.

The problem is that the exact change varies slightly from person to person, and is very ambiguous even if you match very closely for one person’s head shape, amount of hair, size of ears, etc.

How do you know whether the sound is just a little duller than you were expecting, or whether it is a brighter source located behind you? In a real room with a real object producing sound behind you, you rotate your head left and right slightly, and that gives you additional cues that your brain can use to disambiguate the actual position.

When you do that with the sound coming from speakers in front of you, your brain detects that the changes in sound do not match what years of experience have taught you to expect, and the illusion tends to fall apart.


This sounds very much like: It cannot be done with just headphones or static monitors.

That is incorrect.

HRTFs (head-related transfer functions) are entirely about placing sound around you by emulating the cues provided by your natural hearing. Specifically, there are three ways we place sound in the space around us:

Amplitude
Timing
Timbre

Specifically, we have a large and (hopefully) dense mass of material between our two ear canals. As a result, Amplitude and Timing will differ depending on which side the sound comes from. This is also part of why lower frequencies, which are good at diffracting around our heads, are more difficult for us to locate in space, and why you are generally safe going mono with bass frequencies.
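As a rough illustration of the Amplitude and Timing cues, here is a broadband caricature in Python/numpy. The Woodworth-style delay estimate, the 6 dB head-shadow figure, and the function name itd_ild_pan are assumptions for the sketch; a real HRTF makes both cues frequency dependent.

```python
import numpy as np

SR = 48000            # assumed sample rate
HEAD_RADIUS = 0.0875  # metres, rough average head radius
SPEED_OF_SOUND = 343.0

def itd_ild_pan(x, azimuth_deg):
    """Very rough interaural time/level difference panning of a mono signal.

    azimuth_deg: 0 = straight ahead, +90 = fully to the right.
    This is a broadband caricature: real heads delay and attenuate
    differently at every frequency (which is what an HRTF captures).
    """
    az = np.radians(azimuth_deg)

    # Woodworth-style ITD estimate: extra path length around a rigid sphere.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * SR))                     # samples

    # Crude ILD: up to ~6 dB of head shadow at 90 degrees.
    shadow = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)

    near = x
    far = np.concatenate((np.zeros(delay), x))[: len(x)] * shadow

    if azimuth_deg >= 0:          # source on the right: left ear is "far"
        left, right = far, near
    else:
        left, right = near, far
    return np.column_stack((left, right))
```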

But the pinnae of our ears also affect the sound, changing the timbre in ways we don’t always even recognize, because our brains compensate for it. Sounds coming from the front get focused into the ear canal by the pinnae and have a brighter sound, whereas sounds from the rear are partially blocked by the pinnae, which strips away some of the high-frequency content. This happens in consistent ways that our brains recognize and account for, and it helps us place sounds behind or in front of us.
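A toy version of that rear high-frequency shadow, again in Python: the 4 kHz crossover and the amount of cut are guesses rather than measured pinna data, and the one-pole “shelf” is only a stand-in for the much more intricate filtering real pinnae apply.

```python
import numpy as np
from scipy.signal import lfilter

SR = 48000  # assumed sample rate

def rear_shadow(x, cutoff_hz=4000.0, hf_cut_db=-6.0):
    """Dull the top end the way the pinnae shade sounds arriving from behind.

    Implemented as a first-order 'shelf': split the signal with a one-pole
    low-pass and attenuate only the part above the cutoff.
    """
    alpha = np.exp(-2.0 * np.pi * cutoff_hz / SR)      # one-pole coefficient
    lows = lfilter([1.0 - alpha], [1.0, -alpha], x)    # content below cutoff
    highs = x - lows                                   # content above cutoff
    return lows + highs * 10 ** (hf_cut_db / 20.0)

# "Behind the listener": same level, same pan, just a duller top end, e.g.
# behind = rear_shadow(bowl, cutoff_hz=4000.0, hf_cut_db=-8.0)
```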

Combined with the ability to place sounds left/right using the above cues, this allows us to place sounds in a 2D space around us fairly easily.

Now to the OP’s original question. Along with HRTFs, which typically only work on headphones because they require specific timings as well as direct, unimpeded access to our ear canals (or at the very least are far less effective in open space), there are other ways we place things as well. As people have commented, volume is obviously one: louder sounds read as closer rather than farther away. Timbre matters too: we hear a bit more bass in closer sounds, for instance. Along with this, direct vs. reverberant sound is another cue: the more reverb we hear, the farther away we naturally tend to place the sound.

So judicious use of two processes really helps here. Compression can be used to bring things to the forefront by increasing the apparent volume even more than just turning up the fader, and by reshaping the timbre depending on the frequency content, while reverb pulls things into the background by decreasing the direct-to-reverberant ratio. Combined with some judicious EQ work, this provides the second dimension of a 2D soundstage: increase the bass slightly for sounds you want closer, decrease it slightly for those farther back, and adjust the high-frequency response to suggest whether the source is facing you directly or not. You can get very detailed with this if you really choose to, for example by modifying the sound of the reverb relative to the direct sound to push things even farther back, but you hit a law of diminishing returns very quickly, so I don’t really recommend going that far for most things.
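Here is a rough sketch of those distance cues applied offline to a mono source with numpy/scipy. The comb-filter toy_reverb is only a placeholder for a real reverb plugin, and every constant (crossover, wet level, bass trim) is a guess to be tuned by ear.

```python
import numpy as np
from scipy.signal import lfilter

SR = 48000  # assumed sample rate

def toy_reverb(x, delay_ms=47.0, feedback=0.55):
    """Stand-in for a real reverb: a single feedback comb-filter tail."""
    d = int(SR * delay_ms / 1000.0)
    a = np.zeros(d + 1)
    a[0], a[d] = 1.0, -feedback        # y[n] = x[n] + feedback * y[n-d]
    return lfilter([1.0], a, x)

def push_back(x, distance=1.0):
    """Apply the three distance cues from the post above.

    distance 1.0 = close and dry; larger values lower the level,
    thin out the bass a little, and raise the reverberant share.
    """
    d = max(distance, 1.0)

    # 1. Loudness: simple inverse-distance attenuation of the direct sound.
    dry = x / d

    # 2. Timbre: nearby sources carry a bit more bass, so trim some lows
    #    as the source moves away (one-pole low-pass split around 250 Hz).
    alpha = np.exp(-2.0 * np.pi * 250.0 / SR)
    lows = lfilter([1.0 - alpha], [1.0, -alpha], dry)
    dry = dry - lows * min(0.3, 0.1 * (d - 1.0))

    # 3. Direct vs. reverberant ratio: the wet level falls off more slowly
    #    than the dry level, so distant sources sound mostly reverberant.
    wet = toy_reverb(x) * (0.2 / np.sqrt(d))

    return dry + wet
```

In Ardour you would reach for a fader, an EQ, and a reverb send instead of code; the sketch is only meant to show which control maps to which cue.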

     Seablade

I will say that I may have interpreted the original question incorrectly. If the question is about distance in front of the listener, then techniques such as Seablade pointed out can be not only effective but pretty robust.
I had originally interpreted the question about “front/back” as asking about placing sounds behind the listener using only stereo speakers. I have heard that demonstrated before, but that is what I was referring to as a non-robust illusion: making a sound appear behind the listener using only two front speakers. If the question was about closer or farther away, but always in front, then play around with early reflections, reverb blend, and high-frequency roll-off; you should be able to get some semblance of distance layering that is reasonably good for any listener.
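In the same spirit as the earlier snippets, a minimal early-reflection sketch in Python/numpy. The tap times and gains are arbitrary rather than a room model, and early_reflections is just an illustrative name.

```python
import numpy as np

SR = 48000  # assumed sample rate

def early_reflections(x, distance=1.0):
    """Add a handful of discrete early reflections to a mono signal.

    Close sources keep a clear gap between the direct sound and the first
    reflection; as distance grows, the reflections arrive sooner relative
    to the (quieter) direct sound and sit closer to it in level.
    """
    d = max(distance, 1.0)
    taps_ms = [13.0, 19.0, 29.0, 41.0]       # arbitrary, non-harmonic delays
    out = x / d                              # attenuated direct sound

    for i, ms in enumerate(taps_ms):
        delay = int(SR * ms / (1000.0 * np.sqrt(d)))   # earlier when far
        gain = (0.3 * 0.7 ** i) / np.sqrt(d)           # decays less than direct
        out[delay:] += x[: len(x) - delay] * gain
    return out
```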

It is much more convincing when good HRTF practices are put into play and the result is listened to over headphones. That is what the section I wrote above on the role of the pinnae in hearing addresses. The problem is that the cues get confused when listening over open-air speakers, since you have additional reflections, incorrect timings, etc., so the illusion is fairly poor at best in many cases unless you listen over headphones (a closed and controlled environment).

    Seablade

It may depend on the specific person, but even that can fall apart pretty easily when you turn your head and the image doesn’t do the “right” thing to match your head movements. For some people it just sounds like the room is spinning with them; for others, their brain just does the equivalent of saying “nope, this doesn’t make sense, I’m ignoring everything.”
I’ve heard that head-tracking dynamic systems like the ones from SMYTH Research are really cool, because they rotate the HRTF-processed audio as you turn your head so that the auditory locations of the sources stay put. But that is way out of my price range to own, and I’ve never known anyone with one who would let me try it, so I just have to take other people’s word on how cool it is.

Yea, I was going to comment that there are head-tracking solutions for headphones as well, and they are coming down in price, primarily due to gaming. In fact, some gaming sound solutions apply HRTFs to stereo headphones to create 3D spaces, and they are also looking into tracking solutions because of VR, etc.

   Seablade

Thanks for all your valuable input. I’ve got some reading and experimenting to do…

@ccaudle You did understand my question correctly: I meant front and back.

Imagine yourself lying on the floor, surrounded by singing bowls. Someone then strikes the bowls, one at a time, slowly and randomly. It is this effect that I’m after.
Being able to exactly locate the actual placement of each bowl is not important, but it is important that you feel surrounded.
