Audio interface passthrough vs. round trip through converters (Linux/USB)

I have been reading with interest the recent discussions about Linux audio interfaces on this forum. I’m trying to improve my tracking chain (aren’t we all?), capture and conversion included, and I’m trying to isolate the weakest link in it. I suspect it might be the conversion.

I have two inexpensive boxes, a Yamaha/Steinberg UR22mkII and a MOTU M4. I ran some tests this weekend to A/B what I believe is the effect of the interfaces’ A/D and D/A converters. I have a single line in (mic’ed acoustic guitar into an external preamp) passing through the box to my headphones via the ‘mix’ knob on the audio interface. In Linux, I have JACK connecting the single line input to the two output channels. When I turn the hardware mix knob to the left (hardware monitoring of the input), I hear a certain clarity and detail to my picking and strumming; when I turn it to the right (software/DAW monitoring of the input), I hear less clarity and detail, and it feels ‘blurry’.
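In case the routing matters: the software path is just a bare JACK passthrough, no plugins or DAW processing, roughly `jack_connect system:capture_1 system:playback_1` and `jack_connect system:capture_1 system:playback_2` (those are the default ALSA/JACK port names on my machine; yours may be named differently).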

I realize this isn’t incredibly scientific, but I did enlist my kid to randomly pick a side, and most of the time I could guess which side it was.

Both interfaces seem to colour the sound, and somewhat in the same way.

Perhaps one of you experts could sanity check this for me? It seems like this is an OK perceptual way of checking a conversion round trip, and I don’t think the latency, or me physically holding/playing the guitar during the listening test, has a substantial effect here. I feel like I have a very nice sound going in (don’t we all?), but am losing some of it in the recording.

On a related note, has anyone out there had success with higher-end conversion devices on Linux/USB? I see claims of USB class compliance, but I’m wary of that (e.g. the MOTU M4 needed a quirk in the Linux kernel driver, and from forum reports things like the Lynx HILO claim class compliance but don’t actually work on Linux). Has anyone tried a Dangerous Convert AD+, Prism Lyra, or Grace M900?

Blair

You need to make sure that the volume of the A/B samples is exactly the same. Even a slight difference will make the louder one sound better because of how our hearing works. It might be easier to arrange this with a line input where you could drive audio from a CD or something like that.

If your tests aren’t double blind, they aren’t worth even discussing.

I’d even consider a ban on discussions of non-double-blind comparisons in these forums, but that seems a bit heavy handed.


For what it is worth:
I also own a cheap interface, and for recording - mostly vocals and guitars - I always use the interface’s direct monitoring.
With the hardware I own, I was not able to bring latency down enough to make DAW monitoring a pleasant experience for me.

Are you sure that isn’t because of the delay between hearing the acoustic output from the instrument and, several milliseconds later, the returned audio from the software monitoring? You did not say what buffer size you are using, but software monitoring will introduce at least a few milliseconds of latency into the monitoring path, which can produce comb filtering artifacts that sound kind of weird.
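For a rough sense of scale (a sketch only, assuming 48 kHz and the typical two JACK periods; the real round trip also adds converter and USB latency):

```python
# rough software-monitoring buffering estimate (illustrative only; the real
# round-trip latency also includes converter and USB transfer overhead)
sample_rate = 48000   # Hz, assumed
periods = 2           # typical JACK/ALSA setting, assumed
for frames in (16, 64, 128, 256, 1024):
    latency_ms = frames * periods / sample_rate * 1000
    print(f"{frames:5d} frames/period -> ~{latency_ms:.1f} ms of buffering")
```

Even the 16-frame setting leaves a fraction of a millisecond of buffering, and at 256 frames you’re already past 10 ms.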

I’m not sure how broad a meaning you convey with the term “conversion” but in general using a fairly strict definition the A/D and D/A converters are not the weak link in anything made in the last ten years. The microphone/instrument amplifiers, power supply noise filtering, and general analog circuitry layout are an order of magnitude more significant than the specific A/D or D/A converter device used. Of course since most devices are integrated with all that in one small box maybe you mean the entire audio interface device when you say “conversion.”
tl;dr: Your particular device may or may not be high quality; the method you used has no chance of properly showing that.

@Mikael:

Perceptually they appear to be at nearly the same volume in my headphones. Because this is purely the hardware route vs. a no-processing trip through the computer, I would ideally expect exactly the same level/signal to come out of either path.

I would consider an interface to be defective if it didn’t produce a near match in level between these two audio paths, no?

I’ll try comparing the levels out of the line out though, as it seems very similar to what’s going on with the headphone out and it is something I can more easily measure. Any suggestions on matching the levels here at the headphones without test gear? Mic my cans?

@Paul:

I’m trying to figure out what double-blind would look like here. It seems hard to set up such a test without building a mechanical arm to randomly pick a side and rotate the physical control. Ignoring the issues with me producing the input signal, this seems like a decent single-blind test with very little opportunity for information about the choice to leak (kid in a separate room, not visible, simply saying ‘k’ when ready). I guess we could de-humanize that communication channel a bit so I don’t subliminally read the inflections of his k’s?
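One way to take the spoken ‘k’ out of it might be a tiny script that picks the side, shows it only to him, and scores my guesses, something along these lines (purely a sketch, nothing I’ve actually run; the trial count and prompts are made up):

```python
# blinded listening-trial sketch: the helper reads which side to set on the
# mix knob, the listener only answers a neutral prompt; nothing here talks
# to the audio interface itself
import random

TRIALS = 10
correct = 0
for trial in range(1, TRIALS + 1):
    side = random.choice(["hardware", "software"])
    # helper-only step: read it, set the knob, press Enter when done
    input(f"[helper] trial {trial}: set the mix knob to {side}, then press Enter")
    print("\n" * 50)   # crude scroll-away so the listener never sees the side
    guess = input("[listener] hardware or software? ").strip().lower()
    correct += (guess == side)
print(f"{correct}/{TRIALS} correct")
```

That keeps the randomization and the scoring away from both of us, at the cost of needing the screen out of my sight.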

@Chris:

Yes, I am talking about the entire effect of the recording/playback of the interface here. My mental model of the chains I’m comparing is: line_in -> headphone_amp vs. line_in -> a/d -> usb_out -> usb_in -> d/a -> headphone_amp. I assume usb_out -> usb_in is exactly the same digital signal, only shifted in time. I believe this is accurate, no? But the whole inner block seems to colour / blur the signal some. I suppose even then it could be mostly one side of the conversion, hopefully the d/a?

I tried a variety of buffer sizes, from 16 to 1024. As mentioned in my original post, I was worried about simultaneously producing the physical input signal and evaluating the physical output signal, but I would assume the effect size is small and that it would be a wash given other latencies in the system.

@all:

Thanks for your thoughts; I’m mostly convinced, though, that the answer is to spend money. Maybe I’ll try finding a good representative mono source signal to factor out that issue, and maybe I’ll tighten up my experimental methodology.

I’d still welcome anyone’s suggestions on Linux-compatible audio interfaces, or even separate ADC and DAC units with ‘transparent’ conversion.

Blair

I think you can get to single blind easily by having someone else operate the equipment. Wear headphones and don’t face them, and you’re close enough to double blind that I’ll listen to your experiences :slight_smile:

As Chris said, probably any interface nowadays has transparent AD/DA conversion, since analog/digital conversion is a problem that was solved decades ago and even good AD/DA chips are cheap.

I’ve never had a problem like yours and I’ve used expensive and cheap interfaces (Pro Tools at work, Behringer, Alesis, Presonus at home). If you really can hear a difference between direct headphone out and sound coming back from the computer then I guess your device is faulty.

Just buy a new interface and get peace of mind. You don’t need to worry about the AD/DA chip of the interface; the mic preamps are probably the weakest link here.

If you want to test your interface you could create a frequency sweep from 20 Hz to 20 kHz, play it from an external device into your audio interface’s line input, and record it to the computer. Then you can examine the recording for any dips in the frequency spectrum. You could do the same for your headphone output by recording it with another computer.
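Generating the sweep is easy enough in Python if you have numpy/scipy around (a sketch; the file names and sample rate are placeholders):

```python
# minimal sketch: write a 10-second 20 Hz - 20 kHz logarithmic sweep to a WAV
# file, then (commented out) look at the spectrum of the loopback recording
import numpy as np
from scipy.signal import chirp
from scipy.io import wavfile

fs = 48000                           # sample rate, assumed
t = np.arange(0, 10.0, 1 / fs)
sweep = 0.5 * chirp(t, f0=20, t1=10.0, f1=20000, method="logarithmic")
wavfile.write("sweep.wav", fs, (sweep * 32767).astype(np.int16))

# after recording the loopback into "recorded.wav" at the same rate:
# fs_rec, rec = wavfile.read("recorded.wav")
# spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(rec / 32768.0)) + 1e-12)
# freqs = np.fft.rfftfreq(len(rec), 1 / fs_rec)
# ...then plot spectrum_db against freqs and look for dips
```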

It is easy to trick oneself into hearing a difference between two audio samples when there is none; that’s why double-blind tests are used. I think in cases where the difference is subtle it would be better not to trust one’s ears but to just measure the frequency response.

I tend to think ‘transparent’ is more of a marketing term than an engineering term. It’s the marketing department’s way of selling nothing. A transparent device does nothing to the signal. The engineering term is ‘linear time-invariant’ but market research may have shown that the engineering term doesn’t sell interfaces.

Strangely when musicians get a linear time-invariant device they then complain that it sounds ‘harsh’ or ‘sterile’ and proceed to mangle their signal with non-linear emulations of analogue gear.

So in your specific case – the two signals are different. The signal that has gone through the computer has passed through two extra filters (the A/D’s anti-aliasing filter and the D/A’s reconstruction filter). You might be hearing that. You could try a higher sample rate and see if you notice. The MOTU M4 has an ESS A/D chip which is also found in more expensive interfaces.

You may notice, but your listeners won’t. :slight_smile:


Careful listening experiments have shown that two otherwise identical signals which differ in amplitude, with one being as little as 0.1dB higher amplitude, are detected not as different in loudness, but as differing very slightly in timbre, with the slightly higher amplitude signal being described as slightly more “bright” or “open.” For a consumer line level signal of around 2V RMS maximum level that would be a difference of just barely over 20mV at full signal level, correspondingly lower difference at lower signal levels.
Most careful experimenters aim for plenty of margin to make sure that level differences do not trigger any false positive results when looking for differences, so try to match within 0.05dB.
To do that you will need to play a signal through your interface (single frequency full amplitude is easiest to measure) through the direct path and DAW monitor path and verify that the ratio between the amplitudes through the two paths is no more than 1.0058.
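To put numbers on those targets (a quick sketch; the two RMS readings at the end are placeholders):

```python
# convert dB differences to amplitude ratios, and a measured pair back to dB
import math

def db_to_ratio(db):
    return 10 ** (db / 20)

print(db_to_ratio(0.05))   # ~1.0058, the suggested matching target
print(db_to_ratio(0.10))   # ~1.0116, i.e. roughly 23 mV on a 2 V RMS signal

# given RMS measurements of the two paths (placeholder values):
direct_rms, daw_rms = 1.000, 1.004
print(20 * math.log10(daw_rms / direct_rms))   # ~0.035 dB, inside the 0.05 dB target
```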

Spoiler alert: Most interfaces cannot match gain that tightly between any two paths without trimming.


Maybe I’m misreading your post, but your perception will be hugely warped by the fact that you are operating the sound source (the guitar) while listening (have I interpreted that right?).

Do you have a looper pedal? You could put that (or some other source) between the preamp and your interface, let it loop a sample of your playing, while you listen and your son operates the mix control. Sure, the looper has its own AD/DA cycle, but signal degradation is cumulative, so if the effect is real you’ll still hear it.

Ultimately, I am pretty sure that 90% of the best albums ever created were made on gear that has worse specs than what you’re using today.

Also, air is such a poor medium for conducting sound that it astonishes me that we spend so much time agonising over the rest of the signal chain :rofl:


I recommend watching Julian Krause’s “Noise Compared” video on YouTube. IIRC the MOTU did pretty well, while the Steinberg, not so much.

Thanks all, I spent this past week with an API A2D unit to try out that unit’s onboard converters via S/PDIF to a PCI card. I couldn’t tell the difference between that and the M4 converters fed from the line out. The signals summed together with one side inverted came out relatively faint. I also sanity checked looping the track via D/A then A/D on the M4, and that was roughly identical.
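For anyone curious, the ‘summed with one side inverted’ check was essentially this (a sketch; it assumes the two captures are mono, at the same rate, and already trimmed to the same start point - the file names are made up):

```python
# null test: subtract one capture from the other and see how far down the
# residual sits relative to the source
import numpy as np
from scipy.io import wavfile

fs_a, a = wavfile.read("api_a2d.wav")
fs_b, b = wavfile.read("motu_m4.wav")
a = a.astype(np.float64) / 32768.0
b = b.astype(np.float64) / 32768.0
n = min(len(a), len(b))
residual = a[:n] - b[:n]                 # i.e. summed with one side inverted

rms = lambda x: np.sqrt(np.mean(x ** 2))
print("residual below source:", 20 * np.log10(rms(residual) / rms(a[:n])), "dB")
```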

I quite like the sound of the API preamps. I am also happy with my MOTU M4 unit.

I am convinced now that the capture is accurate, and that I need to adjust my process for mic placement while tracking myself, and that my fun tickets are probably better spent on instruments, mics, and preamps. Thanks for talking me down off that one!

Blair
