There are discussions in certain recording-engineer forums regarding the precision of the mixed master signal and its influence on the sound compared to high-end analog mixers. E.g. some statements point to the floating-point precision, which is reportedly not sufficient to be competitive with the analog technology.
So my question is: Does it make a significant difference if Ardour is running as a 32-bit or 64-bit application? Does it use data types of different lengths on different platforms?
My guess would be no. But I would disagree with the opinion you mention anyway. I don't have any numbers at hand at the moment, but generally speaking, 32-bit allows an SNR far superior to any analog equipment (notably, including A/D converters).
If you are right and 64-bit is only for the paranoid, while 32-bit offers all the SNR one would ever need, isn't using a 64-bit-compiled version of Ardour just a big fat memory hog?
We all know that plenty of RAM is essential. Does running it natively in 64-bit on an AMD64 machine bring any performance benefits? Is it possible to use 32-bit floats in 64-bit mode and get some throughput advantage or something? I know very little about these machine details, as you can guess; I've been out of the whole thing since the Z80…
Ardour and all other Jack applications speak 32-bit floating-point audio. It doesn't matter whether you're on x86 or x86_64, the data is the same… although x86_64 has greater performance potential (not due to being 64-bit, but due to having more general-purpose registers).
At the volume levels most Jack applications operate at (i.e. < 1.0), the SNR of 32-bit floats is substantially better than that of any A/D or D/A converter which can currently be built. Plus, there is roughly 100 dB of additional headroom available to prevent clipping.
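For a rough sense of the numbers, a back-of-the-envelope sketch: IEEE 754 single precision carries an effective 24-bit significand (23 stored bits plus an implicit leading 1), so the quantization noise floor near full scale works out as follows.

```python
import math

# IEEE 754 single precision: 23 stored significand bits plus an
# implicit leading 1 gives a 24-bit effective significand, so the
# quantization step near full scale is about 2**-24 of the signal.
effective_bits = 24
snr_db = 20 * math.log10(2 ** effective_bits)
print(f"SNR of 32-bit floats near full scale: {snr_db:.1f} dB")
```

That comes to roughly 144 dB, comfortably beyond what any real converter delivers; and since the exponent scales the quantization step along with the signal, quiet passages keep the same relative accuracy rather than sinking toward a fixed noise floor.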
As such, there is no reason why Jack's 32-bit floats should be a problem.
There are, however, audio processing operations which are more sensitive to noise, for example an IIR filter's inner loop. I would expect that any audio processor module which needed additional precision would make use of it. 64-bit floats work fine on 32-bit CPUs (it's integer data types that the bitness of the CPU refers to). Such modules can then truncate off the extra precision when they transmit their output back to Jack.
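As a sketch of that pattern (hypothetical code, not Ardour's), here is a one-pole IIR low-pass that accepts and returns 32-bit floats, as a Jack port would, while keeping its recursive state in 64-bit to limit accumulated rounding error:

```python
import numpy as np

def one_pole_lowpass(x, a=0.999):
    """One-pole IIR low-pass: y[n] = (1 - a) * x[n] + a * y[n-1].

    Input and output are 32-bit floats (the Jack sample format),
    but the recursive state is a 64-bit double, since rounding
    error in the feedback path accumulates over many samples.
    """
    y = np.empty(len(x), dtype=np.float32)
    state = 0.0  # Python float = IEEE 754 double
    for n, sample in enumerate(np.asarray(x, dtype=np.float64)):
        state = (1.0 - a) * sample + a * state
        y[n] = np.float32(state)  # truncate back to 32 bits on output
    return y

# Feeding DC at full scale, the output settles toward 1.0:
out = one_pole_lowpass(np.ones(10000, dtype=np.float32))
print(out.dtype, float(out[-1]))
```

The same shape, double accumulator inside, 32-bit samples at the boundary, applies to biquads, resonant filters, and anything else with a long feedback path.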
The whole 64-bit vs. 32-bit argument is much more of an issue for embedded systems that use fixed-point or integer data types… it's a bit trickier to get a good dynamic range vs. SNR trade-off on non-floating-point systems.
Double-precision floating-point values take up 80 bits, not 64, thus rendering the 32/64 word size even less significant. But yes, the greatly increased number of registers is a real win for x86_64.
Yes, the x86 FPU works natively with 80-bit floating-point values… but that's not a double.
Per the IEEE 754 standard, a double is 64 bits… if you use doubles in your code, the FPU will work internally with 80-bit values, but when they get copied out to memory they will be standard 64-bit doubles.
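The in-memory format is easy to verify. A Python float is an IEEE 754 binary64, so packing one always yields exactly 8 bytes, whatever width the FPU used internally:

```python
import struct

# Pack the same value as a double ('d') and a single ('f').
as_double = struct.pack('d', 1.0 / 3.0)
as_single = struct.pack('f', 1.0 / 3.0)
print(len(as_double) * 8, "bits")  # 64
print(len(as_single) * 8, "bits")  # 32

# Round-tripping through 32 bits discards significand bits,
# so the value no longer compares equal to the 64-bit original:
print(struct.unpack('f', as_single)[0] == 1.0 / 3.0)  # False
```

The same truncation happens in hardware whenever an 80-bit x87 register is spilled to a 64-bit memory slot, which is exactly where the "interesting situations" below come from.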
This can lead to some interesting situations with code which isn't very numerically stable. You can solve this by using long doubles (which on x86 with GCC are 80 bits; I think some other compilers call the type a doubledouble), by putting the FPU in 64-bit mode, or by using SSE2 rather than the x87 FPU (which on the P4 you want to do for performance reasons anyway).
Off topic… I know.