Disk Read Failures

I’ve recorded about 45 minutes’ worth of tracks. While playing them back, about halfway through everything stops and a window appears stating:

“The disk system on your computer was not able to keep up with Ardour. Specifically, it failed to read data from disk quickly enough to keep up with playback.”

Sometimes, I’m able to play the tracks all the way through and other times I get this problem. How do I solve this?

For some reason, your disk system is incapable of providing Ardour data at the rate Ardour needs to output it to your soundcard. This is a complicated matter which sometimes needs answers to a lot of questions.

Is DMA enabled on the disk? If not, or if you are unsure, study hdparm. Google is your friend.

How many tracks are there? In other words, is your disk really fast enough? (It probably is, unless it’s really old or you are doing something very exotic like tens or hundreds of tracks full of regions.)

How fast is the disk? This relates to the previous question.

Which filesystem is that disk on? Some people insist on using vfat partitions for data so that the files are accessible from both Linux and Windows on dual-boot systems. vfat is about the worst filesystem ever invented and is especially crap as a filesystem for DAW sessions.

Did those recordings fill that filesystem? If yes, you might have succeeded in fragmenting the data badly. This is very difficult to achieve on a proper filesystem (in other words: not vfat), but it is possible.

Are there other processes accessing the disk? Multiple drive-intensive tasks running at the same time seriously affect the performance of the disk.

Is your swap partition on the same drive and is your system swapping? Another variation of the previous situation.
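Several of these questions can be answered from a root shell. A sketch only: the device name /dev/hde and the session path are placeholders, so substitute your own.

```shell
# Check whether DMA is enabled (requires root); you want
# "using_dma = 1 (on)" in the output.
hdparm /dev/hde | grep using_dma

# Check which filesystem the session drive uses (the "Type" column).
df -T /path/to/sessions

# Check how full that filesystem is.
df -h /path/to/sessions
```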

Whew! That’s a lot of questions you are posing to a newbie.

Here is a synopsis of my hdparm readings:

How fast is the disk?

  1. /dev/hde:

Timing buffer-cache reads: 128 MB in 3.04 seconds = 42.12 MB/sec
Timing buffered disk reads: 64 MB in 2.86 seconds = 22.38 MB/sec

  2. /dev/hdf:

Timing buffer-cache reads: 128 MB in 2.35 seconds = 54.42 MB/sec
Timing buffered disk reads: 64 MB in 4.31 seconds = 14.85 MB/sec

  3. /dev/hdg:

Timing buffer-cache reads: 128 MB in 2.26 seconds = 56.54 MB/sec
Timing buffered disk reads: 64 MB in 2.37 seconds = 26.99 MB/sec

Is DMA enabled on the disk?

UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5

UDMA modes: udma0 udma1 *udma2 udma3 udma4

UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5

On which filesystem is that disk on?

Ext2 filesystem

How many tracks are there?

Two stereo tracks. Just straight forward recordings with a few automated gain movements. Nothing complex.

Are there other processes accessing the disk?

I don’t believe so. The screensaver is off and so are all the power management settings.

Is your swap partition on the same drive and is your system swapping?

The swap partition is on the same drive, but how would I know if it is swapping?

After reviewing these disk speed readings, I wonder if the problem is with /dev/hdf. I’m running JACK and Ardour on /dev/hde, but the files are being stored on /dev/hdf, an old 18GB WD drive I’m using to store my mixes.

/dev/hde and /dev/hdg are recent 80GB and 160GB WD drives, respectively.

I am posing a lot of questions, but that is merely because you are asking a very difficult question. And I believe it’s better to give newbies the tools to begin learning the system than to just ask “easy” questions, which results in zero learning. :)

Regarding swap:

Swapping occurs when there is not enough memory in the system to keep all processes in physical RAM. Swapping is actually very clever for rarely used applications. Say you open a word processor, keep it open and start to do DAW work. The word processor can be safely (and quickly) swapped out of memory as you are not using it. Once you have done your DAW work and go back to writing, the word processor gets swapped back in and everything is just groovy.

But the problem with swapping is that if your system has so little memory that the actively used applications don’t all fit in physical memory (RAM), the system has to constantly swap processes out to and in from the disk. This is slow and causes a lot of disk I/O, which affects performance.

So, regarding swap, the answer to your question “how would I know if it is swapping” is: are your applications working more sluggishly than normal, and is your hard drive almost constantly at work? Contributing factors: how many applications you are actively using, and how much RAM you have.
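For a more direct answer than “does it feel sluggish”, you can watch the kernel’s swap counters. A sketch, assuming the standard procps vmstat, where the si/so columns (fields 7 and 8) are pages swapped in/out per second:

```shell
# Sample memory stats once per second, five times; skip the two
# header lines, then flag any non-zero swap-in (si) or swap-out (so)
# activity seen during the sampling window.
vmstat 1 5 | awk 'NR > 2 && ($7 > 0 || $8 > 0) { hit = 1 }
                  END { print (hit ? "swapping" : "not swapping") }'
```

If this prints “swapping” while you are recording, the swap partition sharing the drive is a likely culprit.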

Regarding disk speed.

This is my (SATA) disk at work:
Timing cached reads: 4356 MB in 2.00 seconds = 2178.36 MB/sec
Timing buffered disk reads: 152 MB in 3.01 seconds = 50.54 MB/sec

Is hdf the disk you are using for recording? Its performance is quite weak. You might have DMA disabled. Could you paste the output of “hdparm /dev/hdf” (or whatever drive you are recording onto)? If it shows “using_dma = 0 (off)”, then that’s most likely the issue.

Even if DMA is enabled, a quick glance at the options hdparm offers you is a good idea: http://www.linuxdevcenter.com/pub/a/linux/2000/06/29/hdparm.html
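If DMA does turn out to be off, the fix from that article is a one-liner. Run it as root, substitute your own device, and test playback afterwards; -d1 is generally safe on drives that advertise UDMA modes.

```shell
# Turn DMA on for the recording drive.
hdparm -d1 /dev/hdf

# Verify that it stuck; you want "using_dma = 1 (on)".
hdparm /dev/hdf | grep using_dma
```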

Also, which distro are you using? Some distros omit setting drive parameters to sane values by default. This is usually quite easily fixed via extra packages, but for that you have to refer to your distribution’s documentation.

the link above looks very useful, but it makes me a little scared to play with that stuff…
on my machine it looks this way:

murija2:~# hdparm /dev/hda

multcount = 16 (on)
IO_support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 65535/16/63, sectors = 78140160, start = 0

can someone recommend what should be optimized here?


Thanks sampo! I believe the DMA feature is enabled, but I will run hdparm to check.

multcount = 16 (on) (JACK/ARDOUR APPLICATION)
IO_support = 1 (32-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 77545/16/63, sectors = 78165360, start = 0

multcount = 16 (on)
IO_support = 1 (32-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 34960/16/63, sectors = 35239680, start = 0

multcount = 16 (on)
IO_support = 1 (32-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 19457/255/63, sectors = 312581808, start = 0

I’m using RED HAT 9.0 with the 2.4.26-1.ll.rh90.ccrma low latency kernel I downloaded from Planet CCRMA. I pretty much followed the steps laid out by Planet CCRMA in regards to tuning up the hard drive with hdparm.


To nowhiskey: you should look at enabling 32-bit IO support; that is quite safe on most drives. Also, fiddling with the readahead count might make a difference. DMA is enabled for you, and that’s the single most important setting. See the CCRMA tuning guide funkathon posted.
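Concretely, that could look like the following. Run as root, make one change at a time, and re-benchmark after each; the readahead value of 1024 sectors is just an example to experiment with, not a recommendation.

```shell
hdparm -c1 /dev/hda      # enable 32-bit I/O support
hdparm -a1024 /dev/hda   # try a larger readahead (in 512-byte sectors)
hdparm -Tt /dev/hda      # re-measure to see whether it actually helped
```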

To funkathon: does the problem still persist? Your system seems adequately configured. What kind of computer is this? How much ram?

Btw, the system you are running is quite old; you might want to consider trying a newer distribution like Ubuntu Edgy or FC6, where most of this stuff works out of the box.

To Sampo:

I began using Ardour with the ardour-0.9beta16.1 release and just kept upgrading with each release. I’m on the Ardour 0.99.2 release using the same Red Hat 9.0 kernel I started off with. This is a self-built computer, my first born: an AMD K6-2 500MHz CPU with 512MB PC100 SDRAM.

Hmmm! By the way, I have a dual-boot Linux system with UBUNTU 6.06 Dapper Drake. I’m not too familiar with Debian-based Linux systems, but I guess this is a good time to get acquainted. I have UBUNTU and RED HAT sharing /dev/hde. What I find strange is that when I ran hdparm under UBUNTU, the disk readings were remarkably faster than when I had RED HAT loaded.

The improved performance might simply be because Ubuntu uses a newer kernel, which is more optimized than the 2.4.x series. Or maybe Ubuntu passes different parameters to the hard drives, which work better for you. There is also 6.10 (Edgy) available, which you might want to consider over 6.06.

Your system seems adequate for DAW use, but you will probably need to control your plugin use (don’t use too many at the same time, and freeze tracks once you are satisfied with the parameters).

Also, 512MB of memory is enough for Ardour, but it is a bit limited in the sense that when you run Ardour, other big applications (like OpenOffice) will most likely get swapped out to disk. This means that “activating” one (say, switching to its window) can cause swapping, which can cause problems if you are, say, recording at the same time.

Oh, and there is 0.99.3 available, which includes some relatively important fixes compared to 0.99.2. Full release notes are available at http://ardour.org/node/190 . For some strange reason, certain distributions (Debian, for example) have not packaged 0.99.3 and keep distributing 0.99.2. You might want to consider building 0.99.3 yourself. Instructions on how to do that can be found at http://ardour.org/building .

hi again,
I am a little confused now, trying to change the default settings on my demudi/debian box.

First, when changing to 32-bit mode I get values which are about 4MB/sec worse than in 16-bit mode, something like:

murija2:~# hdparm -Tt /dev/hda

Timing cached reads: 1428 MB in 2.00 seconds = 713.20 MB/sec
Timing buffered disk reads: 88 MB in 3.06 seconds = 28.77 MB/sec

in 16bit mode, it is like:

murija2:~# hdparm -Tt /dev/hda

Timing cached reads: 1632 MB in 2.00 seconds = 815.71 MB/sec
Timing buffered disk reads: 104 MB in 3.05 seconds = 34.12 MB/sec

So is it better for me to keep the default settings?

Another thing: I don’t know how to keep the changed settings across a reboot; after rebooting, the settings are back to the defaults.
I already tried asking in the AGNULA forum, but so far no answer there.
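On a Debian-based system like DeMuDi, hdparm settings are usually made persistent via a boot-time config file rather than by re-running the commands by hand. A sketch, assuming the Debian hdparm package, whose init script reads /etc/hdparm.conf at boot (option names and file locations may vary between releases, so check your package’s documentation):

```shell
# Append a per-drive stanza to /etc/hdparm.conf (as root); the init
# script re-applies these settings on every reboot.
cat >> /etc/hdparm.conf <<'EOF'
/dev/hda {
    dma = on
    io32_support = 1
}
EOF
```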

hdparm -Tt measures only raw throughput. This is not an absolute measure of hard drive performance in DAW use. For good DAW use, the disk needs performance, but also autonomy: the drive controller chip should be able to work with as little interruption to the CPU as possible.

There used to be a good tool to measure how much disk usage affects low-latency audio performance, but the tester (http://www.gardena.net/benno/linux/audio/) was written for much older systems than what we have today. It uses the OSS-style /dev/dsp interface to access the sound card and is meant for use on 2.4.x kernels. It produces very strange numbers on my 2.6.x system.

If anybody is willing and able to port that tester to use either JACK or ALSA and be compatible with 2.6.x, I (and probably the community) would bow down to thank you!

Thanks Sampo! I’m no longer getting that error message. Basically, I decided to use /dev/hdg as my storage drive for Ardour sessions instead of /dev/hdf, based on the hdparm readings above. I’ll take your advice and try UBUNTU in the future.

By the way, I’m having problems upgrading from Ardour 0.99.2 to 0.99.3. I will post the error message I’m getting in my next post.

OK, now I’ve caught the “tuning bug”… :)

When I look at hdparm output:

Timing cached reads: 4356 MB in 2.00 seconds = 2178.36 MB/sec
Timing buffered disk reads: 152 MB in 3.01 seconds = 50.54 MB/sec

Which of those (very disparate) values is the one that matters? Cached or buffered?

I gotta get home and do some tests…

From “man hdparm”, flag -T:

This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

flag -t:

This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead.

Essentially, -T tells you about system throughput, while -t tells you how fast the disk reads data. Note that hdparm corrects the results of -t with the results from -T, so you get more meaningful results for -t when you run it together with -T.

I hope this helps but I had problems with the following processes:

  • updatedb – check crontab entries
  • selinux – if enabled
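To see whether updatedb (or slocate) is scheduled behind your back, a quick look at the cron directories helps. A sketch only: the paths and script names are typical for Red Hat and Debian systems of this era, but yours may differ.

```shell
# Find disk-thrashing daily jobs among the scheduled scripts.
ls /etc/cron.daily/ | grep -i -e updatedb -e slocate

# To disable one temporarily, remove its execute bit, e.g.:
# chmod -x /etc/cron.daily/slocate.cron
```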


I’ve started running into the same errors - disk can’t keep up. This on an AMD dual core system with 2GB ram, with Ardour on a 7200 Seagate 320GB SATA 2 drive and recordings on 2 X 7200 Seagate 320GB SATA 2 in a RAID 0 array!

I’m simply recording a drum sequence from Hydrogen onto 4 stereo tracks in Ardour - no processing or automation - and can’t make it 20 seconds into the song. Very frustrating, as I experienced this in 0.99 but hadn’t experienced it in 2.0 for the past 2 months or so. I’m running Gutsy Gibbon with the rt kernel.

Tried recording the same thing to my non-RAID drive and it worked just fine (?!?). Why should the slower (by half) drive keep up better than the RAID array?

RAID array:

~$ sudo hdparm -Tt /dev/md0

Timing cached reads: 1544 MB in 2.00 seconds = 772.49 MB/sec
Timing buffered disk reads: 442 MB in 3.00 seconds = 147.32 MB/sec

Other (non-RAID) drive:

~$ sudo hdparm -Tt /dev/sda

Timing cached reads: 1640 MB in 2.00 seconds = 820.90 MB/sec
Timing buffered disk reads: 224 MB in 3.01 seconds = 74.48 MB/sec

I don’t have anything else running except the usual assortment of background processes - any suggestions?

OK, I noticed each track had a ton of takes. I removed them all except the last ones, and now I’m no longer getting that error. I still find it goofy that the error should appear at all, since presumably Ardour only needs to read the selected takes during playback or recording, not the unused ones. Is this a potential bug?

I find this problem very strange, as when one uses 96kHz/24-bit for his sessions, it takes 96000 × 3 = ~288 Kbytes per second per track. That would mean as little as ~2.3 Mbytes/sec when you record eight tracks simultaneously, which is less than 1/5 of what hdparm shows your disk is capable of… Even if you use ~20 tracks or so, your system should be able to handle that if the disk has no other use than the DAW job, am I right? So I have been asking myself: why the heck is my system telling me it is unable to keep up with Ardour when I use as little as eight mono tracks and two or three plugins?
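Checking that arithmetic in the shell (24-bit audio is 3 bytes per sample; figures assume mono tracks at 96kHz):

```shell
rate=96000    # samples per second
bytes=3       # bytes per 24-bit sample
tracks=8
per_track=$((rate * bytes))       # bytes/sec for one mono track
total=$((per_track * tracks))     # bytes/sec for eight tracks
echo "$per_track bytes/sec per track, $total bytes/sec total"
# prints: 288000 bytes/sec per track, 2304000 bytes/sec total
```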

I set up a performance monitor in my tray, which showed 100% processor load every time I started playback or recording. Later on, I discovered that there was a plugin (either a delay or an EQ, I’m not quite sure right now) which caused my system to overload, even when playback had been stopped! Everything would just freeze; my mouse would take 20 seconds to show any response to being moved, and nothing would make my system work again except, surprisingly, playback: when I pressed the spacebar in that situation, Ardour started playing, and even though the system load was horrible, the system at least responded. I would then hit Alt+F2, wait for a while, and type “killall ardour” and Enter. The processor load monitor would go down like a stone in water in as little as two seconds! Ardour would go down too, of course, which is the only drawback of the above method :P So probably my disk has as little autonomy as a disk can have, as it must have been generating a lot of processor interrupts to cause such an overload. (I haven’t been using that plugin since.)

I later used Reaper on W2K and ran into virtually the same problem: with around 12-15 tracks (and yes, around the same number of plugins, plus a master compressor etc.) the processor would load up to a hundred percent and yes, the system would slow down significantly, but playback would continue, which I find essential. The number of tracks and plugins in use was about twice as large as with Ardour, but the sample rate was set to 44100Hz, so I wouldn’t say there was any significant change in the amount of data processed. The same machine, just to make it clear; I use Ext2, Ext3 and ReiserFS for all my Linux work and storage, and NTFS under M$, of course. No dual boot, though: I replaced Debian with the W2K platform, partly to try and fiddle with Reaper a bit, and partly because there are still so few good LADSPA plugins and so many free VSTs which I already know from when I used Cubase and which do not work on Linux yet (I never figured out how to get the Wine/FST thing to work for me; yes, I am lame ;P).

So, the issue remains the same. There is a problem between the disks and Ardour; no matter where it lies, it is sure to become a headline issue once users run into the need to try the unlimited track count of Ardour. Other systems, such as the above-mentioned W2K/Reaper combination, of course have the same problem. That, and my experiences with other platforms I came across (i.e. Gentoo/Ardour 2, MacOS X/ProTools, WXP/Cubase SX3, …), persuaded me that the problem is not software-dependent, and that DAW work (unless you process one mono vocal recording for, say, transcribing it onto a sheet of paper) is god damn hardware-intensive: no matter what OS/DAW you use and how much you pay for it, the main brake on your system will always be the hardware. Sad.

Here’s what I’ve found over the last few weeks relating to this kind of thing:

Someone mentioned Hydrogen. Hydrogen can have a really bad effect on system performance, especially if your samples are very long. It’s not really a bug in Hydrogen, though; it genuinely has a huge amount of work to do. You may not realise this, but if your drum samples overlap (i.e. the next ‘hit’ is played before the previous one ends), Hydrogen will not stop the previous one but will effectively play two copies of the same sample simultaneously. This helps to make it sound much more realistic and is in general a good thing. In one drumkit, where I had recorded the complete ambience of all my cymbals, the samples were 30 seconds long and I could quickly max out both my disk and CPU with one bar of ride cymbal, so I had to trade off between recording purity and CPU usage (and to be honest you really can’t tell the difference now that I’ve shortened them all). So you need to be aware of that first. Also, it’s a good idea to use WAV, not FLAC, and to make sure your samples in Hydrogen are at the same sample rate your sound card uses; otherwise Hydrogen has to resample everything.

Secondly, LADSPA plugins. I don’t know if there is a problem in Ardour, but I suspect there isn’t and that LADSPA is just not very good. I look forward to the adoption of LV2 by everybody. I’ve had crashes, system lockups, CPU load reaching 100% with two plugins running; you name it, LADSPA has caused it. Some LADSPA plugins are just buggy (the TAP plugins I’ve found to be the main culprits) and some are just inefficient, I suspect. I use the CMT and Steve Harris plugins now almost exclusively and have found far fewer problems since. I’ve also found that things become massively more reliable when I stop using the RPMs from my distro and compile the plugins myself. People using other distros have also reported that this helps, so I think it must be a general LADSPA thing, but I’m at a loss to explain it.

Finally, your hardware… I’ve been running DAWs on various systems since the acronym was coined. There is no substitute for carefully chosen, precisely tuned hardware. If you think you can just buy a bog-standard grey box and turn it into Abbey Road you are, I’m afraid, very much mistaken. I’ve always had the best results with Intel stuff: not just the processors but the chipsets and all the ancillaries. Even the BIOS on your motherboard can make a huge difference, especially if it isn’t very intelligent at assigning interrupts etc. I’ve always found that Windows/Linux/MacOS is a less important factor than the hardware it’s running on. Under Linux, I’ve found that the kernel keeps getting better in this area too: I got better performance with 2.6.24 than I had with 2.6.23, for instance… but 2.6.25rc8 doesn’t work at all on my motherboard.

In short then, it’s a lottery. I’ve basically learned over the years that it’s going to take 6 months with a new system before I’ve found its limitations (unless it’s a really bad one :-)) and tuned it as best I can. Once you’ve done that, for heaven’s sake leave it alone :-).