Handling stereo amp sims

This is pretty newbie. What’s the standard practice for using an amp sim like ToneLib GFX or Guitarix… output to a stereo track, or route left & right to separate mono tracks? Or some kind of summing? I keep getting pounded on the head that my guitar tracks should be mono. Thanks!

Are you recording the guitar signal, then processing, or recording the output of the processor?
If you are recording the raw guitar signal, that is mono: record to a mono track, put the stereo processor on the track, and route it to a stereo output bus.

If you are recording the output of the processor (i.e. amp sim) and it has stereo outputs, then record to a stereo track.


I’m doing a re-amp type of setup: when I track, I capture a dry DI signal (mono) and a wet (Guitarix, stereo) track (mostly for performance reasons), and then I re-process the DI track later (the subject of the question here). So most of my question is about standard practice; in my studies I rarely see a stereo track used for guitar. Thanks!

So many people, so many approaches.

I put an amp sim plugin on two separate DI tracks panned hard left/right, but with no IR processor module. Then I route those tracks to a stereo bus and apply the IR there. I use the LSP Impulse Responses plugin for that.

BTW, I recently made some IRs (24, to be precise) from my speakers (a V30 and an Eminence Legend) and microphones (Shure SM57 + Audio-Technica MB2k). I can share them; let me know if you’re interested.


Thank you! I want to hear how people handle their processing. And I definitely appreciate the offer of the IRs, but I have so many already, that I am drowning in IRs :slight_smile: . I can barely work through my current collection adequately. But that’s very generous! I’m sure someone else will take you up on that.

For my guitar recordings I applied a wiring hack to my Gibson so I am able to record both pickups at once into a stereo wave track. :slight_smile:
Then I split the left and right channels (the neck and bridge pickup signals, respectively) into separate mono busses, where I apply different Guitarix plugins:

  • for clean sounds I started to like “Gx Studio Preamp Mono” quite a lot a while ago
  • for distorted sounds I always have a “GxAmplifier-X” which I sometimes prepend with a “GxMXR Distortion”
  • for lead sounds I usually put a “GxTubeScreamer” before the GxAmplifier.
  • in every case I try to achieve a “warm” sound from the neck pickup and an “aggressive” sound from the bridge pickup path

This way I can do a recording session once and decide later which of the two pickup signals (plus plugins) sounds better for the mix. In practice, however, I have always ended up using a mix of both so far, which I do as follows:

  • I use the faders of the mono busses to get the right balance of both sounds (warm vs. aggressive).
  • pan the two mono busses hard left and right and route them to a stereo bus
  • there I usually apply an LSP parametric EQ to do some further sound shaping (usually the low end and the area around 4kHz needs some cutting…)
  • then I use the panner of this stereo bus (in “azimuth and width” mode) to put the sum of both signals wherever I want it to be. Usually I only use a little width here, something around 20%. So if I have e.g. two rhythm guitar tracks, I end up having two of these stereo busses, where each of them already gets some stereo sound “for free”. After moving the azimuths to the left and right respectively, I finally get the “full” stereo sound from both tracks.
  • finally, I have another separate stereo bus running “gx_zita_rev1_stereo” where I use post fader aux sends from the guitar stereo busses to it.
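For anyone curious what a width-then-pan stage like that does to the samples, here is a minimal NumPy sketch in mid/side terms. This is not Ardour’s actual panner (its pan law differs), and the sample values are made up:

```python
import numpy as np

def narrow_and_pan(left, right, width=0.2, azimuth=0.0):
    """Shrink the stereo width via mid/side scaling, then apply a
    simple balance-style pan. width: 0.0 = mono, 1.0 = unchanged.
    azimuth: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0 * width   # scale the side (difference) signal
    l_out, r_out = mid + side, mid - side
    # balance-style pan: attenuate only the opposite channel
    if azimuth > 0:
        l_out = l_out * (1.0 - azimuth)
    elif azimuth < 0:
        r_out = r_out * (1.0 + azimuth)
    return l_out, r_out

# width=0 collapses the pair to mono: both channels become the mid signal
l, r = narrow_and_pan(np.array([1.0, 0.0]), np.array([0.0, 1.0]), width=0.0)
```

With width=1.0 and azimuth=0.0 the function passes the signals through unchanged, which is a quick sanity check on the mid/side math.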



Makes me think of those Rickenbackers with two outputs :slight_smile:

Interesting thread, thanks all for sharing so far.

Currently I’m recording for a hypothetical two man band situation whereby I play baritone guitar (and sing) and a drummer does percussion. This band doesn’t exist yet but I hope it will after moving house later this year :upside_down_face:

Anyway, long story short, for ‘live’ I add a submarine pickup to the lowest two strings of my baritone (A and D) and send this through a Boss OC3 then an NUX bass amp/cab modeller. The main out gets split into stereo and goes through two separate, different NUX guitar amp/cab modellers. It seems to work really well.

For recording, I use old hardware with an old underpowered laptop, so I mimic the above setup like this:

Baritone submarine pickup → Boss OC3 → left input on audio interface → audio track with a bass amp/cab sim (Audio Assault duality for example).

Main baritone pickup → right input on audio interface → 2 separate audio tracks in Ardour, panned appropriately. One of them has Airwindows SampleDelay at the start of the plugin chain to deal with phase / stereo image, and each track has very different amp/cab sims on it (usually some Guitarix LV2 stompboxes and an amp / tone stack going into an Audio Assault cab IR via the x42 IR loader).
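The essence of that SampleDelay trick is just a whole-sample delay at the head of one track’s chain. A toy NumPy sketch (a real plugin offers more, e.g. millisecond units; this only shows the core idea):

```python
import numpy as np

def sample_delay(signal, n):
    """Delay a signal by n whole samples, zero-padding the start and
    keeping the original length (a crude stand-in for a sample-delay
    plugin used to nudge one of two duplicated tracks)."""
    return np.concatenate([np.zeros(n), np.asarray(signal, dtype=float)])[:len(signal)]

x = np.arange(5.0)       # toy "track": 0, 1, 2, 3, 4
y = sample_delay(x, 2)   # becomes 0, 0, 0, 1, 2
```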

From there, the bass is routed straight to the master while the baritone tracks go through a bus.


You’re getting beat over the head because guitar tracks are mono.

Guitar output is mono. The amp is mono. The mic is mono. The channel on a hardware console is mono. How can a stereo track be a “sim”?

For performance reasons, use a real amp. You can still use a DI box and/or take a DI from the effects send.

If you want more than one mono channel, do another take. Then you can pan & route them to a stereo bus.

And if you really want to have some fun, mix 2 or 3 different mono cabinet mic IRs for each track before bussing.
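Since convolution is linear, you can pre-mix those IRs into one composite response and get exactly the same result as convolving with each and summing afterwards. A quick NumPy sketch with made-up IR data:

```python
import numpy as np

def blend_irs(irs, gains):
    """Mix several mono impulse responses into one composite IR by
    zero-padding each to the longest length and summing with gains."""
    n = max(len(ir) for ir in irs)
    out = np.zeros(n)
    for ir, g in zip(irs, gains):
        ir = np.asarray(ir, dtype=float)
        out[:len(ir)] += g * ir
    return out

# toy IRs standing in for two different mic captures of a cab
ir_a, ir_b = [1.0, 0.3], [0.5, -0.2, 0.1]
combo = blend_irs([ir_a, ir_b], [0.7, 0.3])
```

One practical upside: convolving once with the blended IR is cheaper at mix time than running two or three IR loader instances per track.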

Good luck


This was more along the lines of the push-back that I was looking for. :slight_smile: Not that I don’t value everyone else’s input so far as well!

Not so fast, there are amp simulators which include stereo chorus and flanger effects, stereo delay, stereo reverb, etc.
If you are recording a physical amp with a single physical microphone, that is a mono source, so sure, a mono track makes most sense. But the original question was about recording an amp simulator with stereo outputs, which makes it a stereo source. If you are recording a stereo source then a stereo track is the most natural fit.

OP did not say a stereo track was a sim, he said he was using Guitarix amplifier simulator to process a DI guitar signal.

I have stereo channels on my hardware console, that is where I connect the stereo output from my Line6 hardware amp simulator. :slight_smile:


Gallien-Krueger 250ML, Roland JC-120 and others disagree with you.

I don’t want to get too far off-course. My main question was about the point that mc888 did make, and which matches what I’ve found: mixing engineers are going to want/expect mono tracks. I’m not educated enough yet to know the “problems” with handling stereo tracks… I’m trying to make my work jibe with standard practice so I can learn from it. And honestly, I don’t want to be a mix or mastering engineer; I want to pass my stuff off to someone else, and I don’t want to hand them a session and have them say “aw, man” :slight_smile:

If that question is really “what track configuration would someone running ProTools in a commercial operation expect to get from me” then you will have to ask the person you are working with.
Since you asked in an Ardour forum I assumed you were asking what makes sense to do in Ardour.

Whenever you are working with someone else the best practice would be to communicate with the person up front, before you spend a lot of time recording, and find out what they prefer.

Even after talking to someone before you start you should still document carefully what you have done and include the documentation with the files you send over so the person you are working with does not have to guess about your naming convention, track configuration, file format, etc. and does not have to rely on memory for what was discussed previously.

There are no technical problems; anyone who has experience with orchestra or big band recording should expect to get a combination of stereo and mono tracks. The problems are caused either by lack of experience or by using overly limited software.


In this case I would take the dry DI into a mono track, and the Guitarix output (presumably wired into Ardour using Jack/Pipewire) into a stereo track.

As @ccaudle says, there’s no “standard” way to do things; it depends on what you are recording. In general, guitars tend to be mono, but there are various permutations.

For instance, you might be recording a mono guitar amp with 2 microphones (a fairly common practice), in which case you would get two mono tracks containing the guitar recording, which you may choose to treat as mono by blending them, or to treat as stereo by panning them, etc.

Personally, I would record these as two separate mono tracks in Ardour.

If you are recording through a guitar processor, like the Line 6 Helix, Boss GT1000, etc. then you will get a stereo input because these units have stereo capabilities. In some cases you may get a dry signal too. In my experience, this is also on a stereo pair, so would connect to a stereo track (I have a Boss GT-001 unit which presents in this way), even though the dry track is, essentially, mono with the same audio on both channels. The Boss Katana USB recording also presents this way.

In the case of an amp sim, you may get a stereo output, and I would treat it as a stereo track.

In the case of re-amping a mono DI, depending on the plugin used, this may result in a mono or stereo output. In the latter case, you would end up with a track with mono input and stereo output which is perfectly valid. If you bounce this to a new track, that track would be stereo (input and output).
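To make the mono-in/stereo-out case concrete, here is one trivial, purely hypothetical way a plugin can produce two different channels from one mono input (a Haas-style spread; real amp sims do far more than this):

```python
import numpy as np

def mono_to_stereo(x, spread=3):
    """Toy mono-in/stereo-out process: pass the dry signal to the
    left channel and a slightly delayed copy to the right, which
    widens the image (not what any specific sim actually does)."""
    x = np.asarray(x, dtype=float)
    right = np.concatenate([np.zeros(spread), x])[:len(x)]
    return x, right

left, right = mono_to_stereo(np.arange(6.0), spread=2)
```

The point is simply that the two output channels differ, so bouncing the result necessarily gives you a stereo file even though the source was a mono DI.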

My personal view is that it’s probably best to treat mono inputs (to Ardour) as mono and stereo inputs as stereo. Then everything generally “just works” in logical ways.

If I went to a studio and had to deal with a recording engineer, I would expect exactly the same.




Not so fast, there are amp simulators which include stereo chorus and flanger effects, stereo delay, stereo reverb, etc.

Effects are not an amp.

I have stereo channels on my hardware console, that is where I connect the stereo output from my Line6 hardware amp simulator.

Any amp with built-in stereo effects will require stereo output because it’s an effects unit.

Gallien-Krueger 250ML, Roland JC-120 and others disagree with you.

Those have embedded stereo effects. Most amps have an effects loop, not built-in effects.

Let the mixing engineer decide how fancy the stereo effects get. Keep the material simple. Too many effects and stereo thingies may not fit in the final mix. I’d prefer a good-sounding mono track, as long as the stereo effect is not an essential part of the sound. Sure, a fully blown-up Guitarix stack (or whatever) sounds great by itself, but often it buries other instruments. After all, what counts is the performance anyway ;).
I’d go with John Frusciante, who said he likes to record on analog tape not because of the sound, but because of the decisions it forces you to make. All the great suggestions made here are, I think, more about mixing than tracking.


IMO this kind of conflicts with your previous statement and I wonder if you misunderstand what he meant by this.

In the analogue tape days, the ability to reprocess and cut, slice and dice tracks was limited, so you had to make decisions up front: which amplifier, which microphone(s) and placement, what pre-amps, even down to which room acoustics. This often included “what effects to use”.

The idea is that you should be deliberate about what you are recording and know up front what sound you want to achieve: if you don’t have an idea, then you shouldn’t be tracking yet.

Back then it simply was not viable to make most of those decisions at the mix stage: “Fix it in the mix” was a last resort, not a way of working. Of course there are notable exceptions but, in general, the objective was to capture the sound you want at the tracking stage as much as possible.

This contrasts with the modern DAW approach where you can record a track dry and, through the power of endless plugins, conjure up any amp, mic, pre-amp, and effect you want. And if you make mistakes, comping and even beat-by-beat time correction is entirely possible without excessively degrading the audio.

I’m not saying either approach is wrong, or necessarily better than the other. But a lot of artists seem to have come to the conclusion that the difference between analogue and digital is often more about techniques and production than about the media itself. A huge part of that is the workflow, which is why Harrison have their particular take on workflow in Mixbus.

For more on this, I suggest “Zen and the Art of Mixing” by Mixerman.

For my own part, I have dabbled with the analogue world in the distant past (yes, I have spliced magnetic tape with a razor and chinagraph pencil as tools) but have more experience with the current digital DAW world.

When I track, I prefer to do it mindfully and with tones and effects decided up front. After all, they are often a key part of the performance, so you benefit from having them there even if it’s for monitoring.

But I also tend to track dry where I can, because you never know how useful that dry track is going to be!



If you have only 24 tracks, you tend not to use a lot of tracks for just one guitar… that’s what I meant. This can be very helpful for the mixing engineer, because it avoids dealing with all those crazy effects that only sound good when you solo the track ;). As I said, unless the effect is essential to the specific sound, and in most cases it isn’t. :wink: