Classical chamber music recorded on Soundcraft

I’d like to share a couple of tracks I recorded last week as a “byproduct” of a live stream of my friends’ concert.
With concert halls closed to audiences, live streaming became the “next big thing”, so I decided to jump on the bandwagon and started a small audio-production business. Although I already have enough gear for the job, I was somewhat worried about live streaming through a device connected to a computer (latency, xruns…)… So I bought an analog console: the Soundcraft Signature 12MTK. This is a little review of the device.
First: it works beautifully with Linux (Debian testing on a five-year-old Dell Inspiron, Intel i3, 4 GB of RAM). It is recognized by ALSA and Ardour, but one must pay attention to the routing, which is a bit complicated but very powerful, since anything can be routed anywhere, both in Ardour and on the Soundcraft. It takes a little getting used to.
At first I was concerned about the quality of the preamps (for classical music, at least), or at least about their coloration (the “signature” sound); I guess everyone is a bit sceptical about cheap gear (it costs around 400€). There is no reason for concern: the preamps sound beautiful, they have enough gain, and the noise is unnoticeable. And it is such a joy to do a monitor mix using faders and knobs (as opposed to DSP mixers, if those work with Linux at all).

So these are my tracks: the first is a duo for violin and clarinet by D. Milhaud, and the second is a quintet for clarinet and strings by A. Grgin. It was recorded in a very noisy cinema theater using a pair of spaced omnis (Rode NT5 with omni capsules, 50 cm separation) and a NOS pair (Line Audio CM3 at 20 cm) placed on the same bar (a Faulkner array?). Very little processing was added: some stereo balancing, very little reverb (zita-rev), the x42 stereo limiter, and an HP filter in the duo, in an attempt to combat LF noise.


Thanks for sharing (both music and mixer review)


Sounds great, and thanks for the console review! I’m curious where the sound went once it left the mixer: did you go straight into the camera, and then from the camera into OBS or something similar?

I did my first livestream concert last weekend (on Zoom, for a folk music festival). We got lots of compliments on the sound (I used a QSC Touchmix digital mixer, the Touchmix 8, which only has analog outputs, no USB). The converters in my camera are not very good, so instead of going directly into the camera I went out to an interface and from that into the computer. Not an optimal solution, since the sound got converted twice (A/D and D/A in the Touchmix, then A/D again in the interface), but it still sounded great. The other issue, though, was sync. Since video is more processor-intensive than audio, I expected the video to lag behind the audio and was prepared to add some delay to the audio out on the Touchmix (it can apply delay to the main and aux outs). But in fact it was just the opposite: the audio was lagging behind the video.
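For anyone chasing the same problem: a quick way to get a delay figure to dial into a mixer is to record a test clap, count how many video frames separate the visible clap from the audible one, and convert that to milliseconds. A minimal sketch, with made-up numbers (the frame rate and offset below are assumptions, not from the posts above):

```python
# Hypothetical measurement: clap on camera, then count the frame offset
# between the visible clap and the audible one in a test recording.
fps = 30            # stream frame rate (assumed)
offset_frames = 4   # measured offset in video frames (assumed)

# Delay to apply to whichever signal arrives early.
delay_ms = offset_frames / fps * 1000
print(f"apply about {delay_ms:.0f} ms of delay")  # prints "apply about 133 ms of delay"
```

Note that this only helps when the early signal is the one your mixer can delay; as described above, the Touchmix can delay its audio outs, which is no use when the audio is the late one.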

In retrospect I should have used OBS and established sync there, but instead I used the camera as a webcam directly into Zoom and chose the interface as the audio source. Live and learn. We tested our setup with remote participants multiple times before the festival and sync wasn’t a problem, but it was noticeably off during the festival itself. The Touchmix performed admirably, and I believe its operating system actually runs on Linux, because the firmware updates are tar.gz files. Coincidentally, I also used Line Audio CM3s for our concert: a pair in ORTF, hung over us about two meters in front, out of the shot. The sound was excellent, and several people found the stereo imaging remarkable, including the Zoom technician at the festival.

Anyway, I’m curious about where you sent the audio from the console during the live stream.

Bhurley, thank you for sharing your experience.
For this particular gig I was hired as the audio guy only; my task was to mix the sound to the best of my abilities and pass the master-bus stereo analog signal through XLR cables to a video mixer of some sort. I didn’t see which mixer it was, because we used the theater’s infrastructure (built-in cabling), which let us work separately. I was given the stage manager’s desk just left of the stage, behind the curtain, while the video crew operated from the central console in the middle of the audience. These two places are permanently connected, with 16-channel XLR boxes on each end. So I believe the concert was streamed from a single device.
The recording I shared is a “postproduction” mix of the separate tracks I recorded in Ardour. While not strictly necessary, it helped a lot to have the computer hooked to the analog mixer through USB, both for the backup recording and, more importantly, for the metering, especially since I am a long-time Ardour user and familiar with Ardour’s meters.

This is the full livestream, if you are interested in comparison between Ardour mix and live analog mix (through video mixer’s converters):

Milhaud duo starts at 31:20 and the Grgin’s encore at 1:17:40


Thanks for the explanation! If we do more of these I may invest in an ATEM mini, which would allow me to mix the audio and video without putting any strain on the resources of the computer that’s sending the stream.


The ATEM Mini is a decent little kit, especially for the price; however, keep a few things in mind:

While the audio section is capable in terms of channel processing, the majority of channels come in as audio embedded in the HDMI inputs. The only analog audio inputs are 1/8" jacks, so chances are you will want a different mixer with XLR inputs feeding into it (the ATEM Mini’s analog inputs can be switched between mic and line level, and provide plug-in power for 1/8" lavs and camera mics). There is also no reverb or other effects, so you have the channel processing (EQ and dynamics), but that is about it. No buses, no real routing, no effects.

That being said, in the right circumstances it is great. Just keep in mind the limitations.


EDIT: And the Touchmix could be a great mixer for smaller concerts in that setup, for the record. I am looking at proposing that to a client for a similar situation involving an ATEM Mini. And yes, I believe it runs Linux; QSC has several devices that do, including their entire QSYS processing line.

Thanks for the caveats about the ATEM. I was thinking of using it in conjunction with the Touchmix, which would give me all the FX, compression, etc. that I need. I’ve used the Touchmix to do live sound for years; it’s an excellent little console with lots of useful capabilities. My only reservations with the ATEM are specific to the camera I use. OBS has the specific capabilities I need, but getting audio from OBS to Zoom is a little complex, and I haven’t been able to get it to work yet despite trying the several methods detailed in various online sources and YouTube tutorials.

Yeah, when I did a ‘Zoom’ production last year, I split the audio so one copy went to OBS and another to Zoom. Not sure how easy it would be to route one to the other, sadly. If only there were an audio daemon available that would make this easier if software supported it; we could call it ‘plug’ or something like that :)


PS: Yes, that was a joke in the last sentence, in case I needed to clarify that for anyone reading.

Great stuff. Did you use zita-rev1 as a JACK plugin, LADSPA rev-plugin or Faust? Settings? Also, how much of the peaks were shaved with the x42 limiter? What was your target loudness?

Have you tried @x42’s new true peak function on his limiter yet? It’s my favorite all-around limiter at this point…

Thank you for listening!
I used the zita-rev1 LADSPA plugin with as little delay as possible (20 ms), crossover at 525 Hz (the default), RT-low 1.8 s, RT-mid 2 s, damping at 7 kHz, no EQ, and output at 100%, because it sat on a dedicated bus and the amount of reverb is controlled by the bus fader (easier for automation). The concert was recorded with two pairs of microphones: spaced omnis for the general sound and airiness, and NOS cardioids (well… subcardioids) for focus. The send to the reverb bus came from the omnis only, while the cardioids remained in the role of focusing the sound (I ended up using very little of the latter in the mix because, for some reason, they tended to favour the clarinet over the strings dynamically). Another good thing about using two pairs of sonically different microphones is that I could mix a desirable sound colour without using EQ: the omnis have LF extension and quite a lift at 9 kHz, while the CM3s are exactly the opposite, flat in the HF and rolling off from 200 Hz downwards.
Very little is shaved by the x42 limiter, less than 2 dB. I like to preserve the dynamic range of the performance as much as possible, and at the same time I try to avoid raising the noise floor from the audience (and, in this case, the backstage machinery, an air conditioner or something very annoying, luckily below the frequency range of the clarinet, so I high-passed it in the Milhaud). As a result the loudness turned out to be around -18 LUFS for the more dynamic movements and up to -16 LUFS for some of the more “cheerful” movements.
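Since the thread is trading dB and LUFS figures, the arithmetic connecting them to fader gain is worth spelling out. A minimal sketch with made-up numbers (the -21 LUFS meter reading is hypothetical; only the -18 LUFS target comes from the post above):

```python
import math

measured_lufs = -21.0   # hypothetical loudness meter reading
target_lufs = -18.0     # target from the post above

# LUFS differences are plain dB differences, so the trim is a subtraction;
# the equivalent linear gain multiplier is 10^(dB/20).
gain_db = target_lufs - measured_lufs
gain_linear = 10 ** (gain_db / 20)

print(f"trim: {gain_db:+.1f} dB (x{gain_linear:.2f} linear)")  # prints "trim: +3.0 dB (x1.41 linear)"
```

The same 10^(dB/20) relationship is what a bus fader applies, which is why controlling the reverb amount from the bus fader (as described above for zita-rev1) is just a gain trim on the wet signal.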
I have yet to try the new x42 limiter, but the old one has worked perfectly fine for me: I always set the threshold to -1 dB, and most of the time I apply an LP filter at around 18 kHz (and an HP filter at the lowest frequency of the lowest instrument) with a gentle slope. If I understand correctly, true-peak clipping is more likely to occur at very high frequencies. So the true peak rarely “breaches” the digital peak limit, and when it does it is usually by 0.1-0.2 dB; a performer must do something really wild to cause a problem with this.
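The intuition above, that inter-sample (“true”) peaks are mostly a high-frequency problem, can be shown numerically: a sine at a quarter of the sample rate, landing between sample instants, has sample peaks about 3 dB below the peak of the underlying waveform. A small self-contained sketch (pure Python, synthetic signal, no audio I/O; the dense grid stands in for the continuous waveform):

```python
import math

fs = 48000            # sample rate
f = 12000             # fs/4: a very high audio frequency
phase = math.pi / 4   # worst case: sample instants straddle the crest

# The samples a converter would capture.
samples = [math.sin(2 * math.pi * f * n / fs + phase) for n in range(48)]
sample_peak = max(abs(s) for s in samples)

# 8x-denser sampling of the same sine, approximating the analog waveform
# (this is what a true-peak meter estimates by oversampling).
dense = [math.sin(2 * math.pi * f * n / (8 * fs) + phase) for n in range(8 * 48)]
true_peak = max(abs(s) for s in dense)

print(f"sample peak: {20 * math.log10(sample_peak):+.2f} dBFS")  # about -3.01 dBFS
print(f"true peak:   {20 * math.log10(true_peak):+.2f} dBFS")    # about +0.00 dBFS
```

At lower frequencies there are many samples per cycle, so one of them lands near the crest and the two readings nearly agree; that is consistent with low-passing at 18 kHz and leaving 1 dB of headroom being enough in practice.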