Binary "rollover" on Ardour recording

I am having some strange “clipping” in my recording. It actually looks like binary rollover: the peaks of the waveform that would normally be clipped reappear at the “other side”. The recording was done with Ardour 5.10, and the distorted waveforms are visible within Ardour if you zoom in far enough. Here is the related post, with a screenshot, on the Audacity forum:

http://forum.audacityteam.org/viewtopic.php?f=28&t=96250#p329479

My setup:
XUbuntu 16.04 64bit
Ardour 5.10
Recording format: 16-bit integer WAV
Hardware: PreSonus FireBox using jackd / FFADO

This is not a known behaviour in Ardour. On the other hand, recording to 16-bit integer WAV is something that probably very, very few Ardour users have ever bothered to do (that’s an export format, not a good choice for native recording). If you can reliably reproduce it, please file a bug report at http://tracker.ardour.org/ so that we can take it through the normal bug workflow.

I did some testing today, and it does look like a bug in Ardour: the rollover clipping happens when “16bit INT wav” is chosen as the sample format AND the source of the recording track is not the recording hardware but another Ardour element; in my test (and in the original setup) it was an audio bus. The bus fader let the signal be amplified beyond full scale, and the inverted clipping occurred. However, I cannot reproduce the problem when I use 32-bit float as the sample format. So it looks as if somewhere in the chain (Ardour bus → output → 16-bit audio track input → recording) there is an incorrect format conversion.
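
To illustrate what that failure mode looks like, here is a minimal numpy sketch (my own illustration of integer wraparound, not Ardour’s actual code): converting an over-range float signal to 16-bit integers with a plain cast wraps around, while a correct conversion clamps first.

```python
import numpy as np

# Hypothetical float signal, amplified past full scale by a bus fader
signal = np.array([0.5, 0.9, 1.2, 1.5], dtype=np.float32)

FULL_SCALE = 32767  # largest positive signed 16-bit sample

# Buggy path: scale and cast directly; samples above 1.0 overflow the
# 16-bit range and reappear with the opposite sign ("rollover")
wrapped = (signal * FULL_SCALE).astype(np.int64).astype(np.int16)

# Correct path: clamp to the representable range before casting
clamped = np.clip(signal * FULL_SCALE, -32768, 32767).astype(np.int16)

print(wrapped)  # [ 16383  29490 -26216 -16386]  <- wraps to the "other side"
print(clamped)  # [ 16383  29490  32767  32767]  <- ordinary hard clipping
```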

Bug reported: http://tracker.ardour.org/view.php?id=7412

I routinely record in 16 bits because it consumes less disk space than 24 or 32 bits. Many USB audio interfaces won’t give you more than 16 bits, so there is no benefit in recording at a higher bit depth.

Imho there is some hype around higher bit depths. 16 bits is still excellent quality; 24 might be better on paper, but few people can really distinguish between 16 and 24 bits in a blind listening test. How you position your microphone affects your sound quality much more than the difference between 16 and 24 bits. Moving 20 cm away from the sweet spot of your speakers also has a far more dramatic effect on sound quality than the bit depth.

Please don’t mock 16 bits, it is still excellent quality :slight_smile:

@mhartzel

The difference between 16 and 24 bit isn’t really about listening; it is much more about the processing and editing after recording: giving yourself plenty of headroom to prevent clipping without having to worry as much about the noise floor when editing later.

For listening purposes you are correct: there isn’t very much difference for most things, which is why CDs at 16/44.1 became such a standard.

Thanks for your comment seablade, I appreciate it. I know this is an issue of personal taste, so I don’t expect people to agree with my view :slight_smile:

Some of my sound-engineer colleagues and I use 16 bits for routine television work after finding no added benefit in using 24 bits. I mixed in 24 bits for a long time and returned to 16 bits because I could not find any difference in the mixing process or in the sound quality of the finished product. 24 bits gives more headroom only if you use lower signal levels when recording.

The only technical differences between 16 and 24 bits are that 24 bits has a bigger dynamic range, a lower noise floor, and a smaller quantization step size.

You only get the lower noise floor if your sound sources are noiseless and your listening environment is noiseless too, which is very difficult to achieve. It also means listening at sound pressure levels exceeding 100 dB. At any lower listening volume you won’t benefit from the lower noise floor or bigger dynamic range of 24 bits: the noise floor sinks below the noise of your listening room, and you can’t hear the difference between the noise floors of 16 and 24 bits.
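
To put rough numbers on that argument (a back-of-the-envelope sketch, assuming around 30 dB SPL of ambient noise in a quiet room, not a measurement):

```python
import math

ROOM_NOISE_SPL = 30  # assumed ambient noise of a quiet listening room, dB SPL

for bits in (16, 24):
    # Theoretical dynamic range of N-bit fixed point: 20*log10(2**N),
    # i.e. roughly 6.02 dB per bit
    dyn_range = 20 * math.log10(2 ** bits)
    # The format's own noise floor only clears the room noise once your
    # playback peaks exceed (room noise + dynamic range)
    peak_needed = ROOM_NOISE_SPL + dyn_range
    print(f"{bits}-bit: ~{dyn_range:.0f} dB range, "
          f"floor audible only with peaks above ~{peak_needed:.0f} dB SPL")
# 16-bit: ~96 dB range, floor audible only with peaks above ~126 dB SPL
# 24-bit: ~144 dB range, floor audible only with peaks above ~174 dB SPL
```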

The quantization step size is fairly meaningless in practice, because the DA converter’s output reconstruction filter smooths the voltage curve of the audio. The DA output does not have any voltage steps; the voltage is as smooth as analogue audio. The steps only exist in the digital storage form of the audio.

This is my personal experience about the issue, others probably disagree :slight_smile:

On my phone right now so I will have to respond more later, but the short version is: record material with lots of dynamic range and you should hear a significant difference. I am talking live concert recordings, sound effects/nature recordings, etc.

Heavily compressed electric guitar in the studio, maybe not so much. A full live orchestral experience, a bit more.

I agree, and I would record a classical music concert in 24 or 32 bits because of the high dynamic range of the music. But the noise floor of the recording is really dictated by the self-noise of the microphones (and how many of them you have), the noise of the room, and the audience. With those noise sources you won’t ever get even close to the theoretical noise floor of 24 bits.

Noise levels in nature (wind, etc.) tend to be so high that I don’t think there is much benefit from 24 bits there either.

To put my point another way: in the late 1980s, when analogue recording was at its peak, you needed hundreds of thousands of euros’ worth of multitrack recorders and equipment to record music. Now you can match that recording quality, and even beat the noise floor of an 80s studio, with a 400-euro laptop and a 120-euro 16-bit sound card.

I think we have been a bit blinded by the constant progress of technology. What we now have in cheap form exceeds the quality of an expensive 80s studio, and many really good records were made in that era.

Of course 24 bits is better than 16, 32 is better than 24, and 64 is better still. But after a certain point the increase in quality is probably not perceivable to listeners anymore. In my opinion 16 bits is plenty; The Beatles and The Rolling Stones didn’t have that quality when they made their classic records :slight_smile:

You are welcome to disagree, this is just how I feel about it :slight_smile:

@mhartzel Considering the Beatles’ middle material (A Hard Day’s Night through Yellow Submarine) was recorded four-track on either Studer or BTR machines at 15 ips on half-inch tape, they are very quiet recordings. Because of the lack of tracks, they had to overdub from one tape machine to another, introducing more noise with each generation, and so on. I have heard bootlegs of the originals before overdubbing, and they sound flipping AMAZING! Way less tape hiss than 1/4" cassette cartridges with dbx or Dolby noise reduction (running at what, 3.25 ips?). “…all the Beatles’ multi-tracks are 15 ips, CCIR EQ, with no noise reduction. EMI had their own brand of recording tape, which was called… EMITAPE.”

EMI built most of their equipment in house (including tape recorders and mixers); I remember when the broadcast industry was much the same. Anyway, I am one of those “few” people who can hear the difference between 16 and 24 bit. Even at 52, I can still hear 18 kHz. I can hear the artifacts in 160 kbps MP3s (even when encoded with LAME, which is arguably the best MP3 encoder). Yes indeed, there is a difference! I just got home from a recording session at a friend’s house; he recorded about two hours of our band’s live performance in Mixbus. After going into the control room to listen to it, I asked him why all the vocals were only on the left channel (and why I could not hear my guitar). It turned out the piezo driver in the right JBL had failed and he hadn’t even noticed. WOW!

At the TV station where I work, I make sure all the news editors are set to record at 24-bit (32-bit float) in FCP, because these kids don’t watch, nor care about, their audio levels. It made a huge difference in distorted audio when they finally exported their packages to our playback server. Sure, it uses more disk space, but gigabytes are so cheap these days, who cares?

At 24-bit (fixed point) you are already effectively limited by the thermal noise of the components, so there is no real benefit to capturing the audio at any higher resolution. There is, however, a benefit to encoding that data inside the DAW at higher bit depths or in floating-point formats, in order to retain as much of that information as possible during subsequent processing. 32-bit float is generally ‘good enough’ in this respect, and an entirely reasonable compromise given that most general-purpose CPUs only provide 32- or 64-bit float, and there are / were some performance gains to be had with 32-bit over 64-bit float; 40-bit float is generally considered a better minimum in dedicated processors.

It’s interesting to note that some original analogue recordings now being (re)released at 192 kHz / 24 (or more) bits were effectively captured on what would be 11- or 12-bit machines in digital terms, but at least now you can hear the tape hiss with more ‘transparency’ and ‘clarity’.
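
As a small illustration of the floating-point processing benefit (my own sketch, not DAW internals): an intermediate gain stage that overshoots full scale is recoverable in 32-bit float, but permanently destroyed in 16-bit integer.

```python
import numpy as np

x = np.array([0.25, 0.5, 0.75], dtype=np.float32)

hot = x * 4.0  # intermediate stage overshoots full scale: [1.0, 2.0, 3.0]

# Float path: samples above 1.0 are simply stored; undoing the gain
# recovers the signal exactly
restored_float = hot * 0.25

# 16-bit integer path: over-range samples must be clipped on the way in,
# so the damage survives the later gain reduction
hot_int = np.clip(hot * 32767, -32768, 32767).astype(np.int16)
restored_int = hot_int.astype(np.float32) / 32767 * 0.25

print(restored_float)  # [0.25 0.5  0.75]  <- intact
print(restored_int)    # [0.25 0.25 0.25]  <- clipped for good
```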

Thanks for your comments guys :slight_smile:

Indeed, using higher bit depths for files on disk during production is really just a convenience to prevent mistakes. Disk space is cheap; re-doing a bounce is not.

Regarding “classical music” recordings: even 16 bits is more than sufficient dynamic range / S/N for individual mics or for the final master. But many of those recordings are massive multi-track sessions, and the 16-bit noise floor can add up across a few hundred channels, as the sketch below suggests.
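
A back-of-the-envelope sketch of how that adds up, assuming the noise on each track is uncorrelated (so it sums in power, about 10*log10(N) dB for N tracks):

```python
import math

FLOOR_16BIT = -96.3  # theoretical 16-bit noise floor, dBFS

for n in (1, 16, 100, 256):
    rise = 10 * math.log10(n)  # power sum of n uncorrelated noise sources
    print(f"{n:>3} tracks: floor up ~{rise:4.1f} dB -> ~{FLOOR_16BIT + rise:.0f} dBFS")
# 256 tracks lift a -96 dBFS floor to roughly -72 dBFS
```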

Since it hasn’t been mentioned yet: https://xiph.org/video/vid2.shtml is an outstanding introduction to digital signal behaviour, including bit depth. One can’t link to it often enough.