The Edirol would not cause a disk system error like the one being reported. Only the disk failing to keep up with Ardour’s read/write requests would cause this. It is much more likely that random processes are causing the drive heads to move to other areas of the disk, so it cannot return and write out what Ardour is requesting quickly enough to keep up. This is why it is a MUCH better idea to write to another disk altogether than to read or write from the OS disk.
The larger figure is meaningless, but 54MB/s is what I’d expect from a laptop disk. Fast SSDs are in the 200MB/s range. These are best-case read speeds; in the real world, any seeking will significantly reduce performance on rotational disks.
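For reference, those two figures typically come from something like the following (the numbers here are illustrative, not from the OP’s machine):

    $ sudo hdparm -tT /dev/sda
    /dev/sda:
     Timing cached reads:   3000 MB in  2.00 seconds = 1500.00 MB/sec
     Timing buffered disk reads:  162 MB in  3.01 seconds =  53.82 MB/sec

The first line is the “larger figure” (it mostly measures RAM and the drive’s cache); the second is the sustained contiguous read from the platters.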
I’ve fought this particular situation before; here is a list of things you might try, in no particular order:
Use an external drive, eSATA would be best, followed by FW800, FW400, USB 2.0
Use the realtime kernel and rtirq
Make sure the CPU governor has all CPUs set to maximum performance (see the example commands after this list)
If you have a big session try consolidating some of the tracks to avoid seeking (or get an SSD)
Delete or deactivate (not mute) any tracks you don’t need
Turn off plugins when recording
If you have formatted the disk with XFS, use xfs_fsr to defragment the disk (this is a good reason for using XFS or any filesystem that allows you to defragment it)
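A rough sketch of the commands involved in a couple of the points above (device names and mount points are examples, adjust them for your system):

    # set every CPU to the "performance" governor
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance | sudo tee "$g"
    done

    # check XFS fragmentation (read-only), then defragment
    sudo xfs_db -r -c frag /dev/sdb1
    sudo xfs_fsr -v /mnt/audio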
Some of the above are about maximising CPU, but I’ve seen Ardour produce this error when the CPU gets so overloaded that it can’t service the disk. This is more likely if your external disk is USB 2.0, which needs more CPU than eSATA or Firewire.
Just for the sake of clarity, one more time, the read and write numbers are completely false in terms of measuring hard drive performance for this purpose.
I realized I didn’t initially clarify that I meant for this purpose. As I mentioned above, you need to look at the minimum read and write numbers, and given that many other processes are reading from and writing to the disk at the same time, you will get substantially lower minimums the moment another area of the disk gets accessed.
I have to disagree that the speed figures are completely meaningless. The cached read speed is the speed of a contiguous data read from the cache on the hard disk, not the disk itself, so it’s not really useful here. The buffered disk read speed is for contiguous data reads from the disk and is at least indicative of the performance that can be expected in a recording system, even if it isn’t an exact measurement.
In practice a recording system will be unable to achieve those speeds continuously because the heads spend a lot of time moving back and forth to different areas of the disk (unless you’re using an SSD) which will slow it down considerably, but I still believe that a higher reading from hdparm will indicate a higher obtainable read/write speed in a recording situation.
Back to the OP’s problem, I’d be a little surprised if a disk giving the hdparm figures quoted is the bottleneck here, unless the head positioning servo is faulty.
Another thing: how full is the file system? Use df to check. If there isn’t a lot of free space it can get badly fragmented, which will really slow things down.
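For example (device name and numbers are made up):

    $ df -h /mnt/audio
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdb1       298G  281G   17G  95% /mnt/audio

Anything in the high 90s for Use% on an audio disk is asking for fragmentation trouble.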
The buffered disk read speed is for contiguous data reads from the disk and is at least indicative of the performance that can be expected in a recording system.
And it has nothing to do with write speed either, which is what is needed for recording data to disk, and which is supposedly where the OP is having problems.
In practice a recording system will be unable to achieve those speeds continuously because the heads spend a lot of time moving back and forth to different areas of the disk (unless you're using an SSD) which will slow it down considerably, but I still believe that a higher reading from hdparm will indicate a higher read/write speed in a recording situation.
It will indicate higher performance in some regard, but it will not be linear in any way, so it is effectively meaningless.
Back to the OP's problem, I'd be a little surprised if a disk giving the hdparm figures quoted is the bottleneck here, unless the head positioning servo is faulty.
You already mentioned what is likely the problem: if the heads are going back and forth over different areas of the drive, which happens MUCH more frequently when running the OS and swap off the same drive, then performance is going to take a nosedive very quickly, and not linearly either. You will be getting a fraction of the performance as a result. This is why a single drive for the OS and data is not usually preferred. Can it be done? Certainly; in fact I did it for a long time, but I wouldn’t depend on it without some custom tweaking, as was already mentioned above. It will be unreliable, sometimes working great and sometimes seemingly not working at all, and all it takes is that one time in a recording session to screw things up.
As I mentioned above, increase that setting to an appropriate amount that balances not getting that error against seek time in the session. Following Nick’s suggestions is preferable in most cases.
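If memory serves, the setting lives in ~/.ardour2/ardour.rc and looks something like this (the value is seconds of buffering per track, 5 being the default; treat this as a sketch and check your own config file):

    <Option name="track-buffer-seconds" value="10"/>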
Another thing: how full is the file system? Use df to check. If there isn't a lot of free space it can get badly fragmented, which will really slow things down.
if the heads are going back and forth over different areas of the drive, which happens MUCH more frequently when running the OS and swap off the same drive, then performance is going to take a nosedive very quickly, and not linearly either. You will be getting a fraction of the performance as a result. This is why a single drive for the OS and data is not usually preferred.
To add to Nick’s comments, it can be beneficial to copy all of the data to another drive and then back again every once in a while if you’re using ext3. I do this whenever I notice fsck reporting more than a few percent of discontiguous data on my audio file system (which is on a different disk than the OS). Heavy editing in particular can cause a lot of fragmentation even if there’s plenty of space left on the disk.
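A sketch of that procedure, assuming the audio filesystem is /dev/sdb1 mounted at /mnt/audio and a spare drive is at /mnt/spare (double-check the copy before deleting anything):

    # with the filesystem unmounted, check fragmentation; the summary line
    # reports something like "1234/56789 files (3.2% non-contiguous)"
    sudo e2fsck -fn /dev/sdb1

    # copy everything off, wipe, and copy it back
    rsync -a /mnt/audio/ /mnt/spare/audio-copy/
    rm -rf /mnt/audio/*
    rsync -a /mnt/spare/audio-copy/ /mnt/audio/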
Heavy editing in particular can cause a lot of fragmentation even if there’s plenty of space left on the disk
Editing in Ardour doesn’t cause any disk fragmentation whatsoever. Nothing gets repeatedly written to disk except the session file, the history file and (probably) some backup session files.
I stand corrected. I should probably have said “heavy editing, then re-recording the edited sections to new tracks while simultaneously trying to do multiple overdubs and generally getting the session in a mess”, which is the self-inflicted scenario that usually leads to fragmentation problems on my system.
For a long time I have felt that this disk “… not fast enough …” message/behavior is just a weak point in Ardour’s recording buffer algorithm. If enough RAM is available, it shouldn’t happen with a halfway capable notebook hard disk; Ardour should buffer to the extra RAM. I have unlimited RAM in limits.conf, a rt-kernel, swap off. Every other day I get this message when doing a 10-track recording with RME’s Multiface.
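For reference, the relevant lines in my /etc/security/limits.conf look roughly like this (assuming your user is in the audio group):

    @audio  -  rtprio   99
    @audio  -  memlock  unlimited
    @audio  -  nice     -19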
Maybe the message itself isn’t pointing in the right direction in every case.
For playback only, the seek time of the hard disk should become more relevant (the previously mentioned “track-buffer-seconds” parameter).
Anyway, Ardour is one of the greatest programs I can run.
@itsgeorg: For a long time I have felt that this disk “… not fast enough …” message/behavior is just a weak point in Ardour’s recording buffer algorithm. If enough RAM is available, it shouldn’t happen with a halfway capable notebook hard disk; Ardour should buffer to the extra RAM. I have unlimited RAM in limits.conf, a rt-kernel, swap off. Every other day I get this message when doing a 10-track recording with RME’s Multiface.
There’s no reason for any contemporary system to ever get this message unless some part of it is not working correctly. You suggest that Ardour should just use more RAM - this is not really feasible. The place where we realize that things are too slow is in a realtime thread. This thread cannot safely allocate big chunks of memory. This is why recording (and playback) happens via two BIG lock-free ringbuffers. These buffers are 5 seconds per track by default (one for playback, one for capture). When you see this message it means that your system was unable to keep data flowing to/from these buffers for 5 seconds. That is ridiculous behaviour. If it occurs, the realtime thread isn’t in a position to do anything about it, including allocating more memory.
Moreover, the message covers both capture and playback. If it occurs for the playback buffer, there’s nothing that allocating more RAM there and then can do. The data needed to be delivered for playback is missing: game over.
As Seablade indicated you can choose to increase the amount of buffering in RAM if you wish. But really, a system that can’t keep data flowing reasonably smoothly with 5 seconds worth of buffering in each direction is probably a system that shouldn’t be used for multitrack audio.
I will increase the buffers to 10 seconds, which would not hurt here at all; 5 seconds doesn’t seem to me such a sure bet with all the reasonable system activity going on.
The more that mediocre/universal systems can do recording, the better.
Phew, okay, this issue seems to be drawing more attention than I thought.
Thanks for all the advice!
@Peder
Like Seablade already mentioned, I think the Edirol device itself is not the problem. I had a look at the instructions on how to set up firewire in Ubuntu Studio (the link you posted) and found that firewire devices are not given maximum priority by default. Maybe this is something that contributes to the problem. Still, it is quite strange that the recording sessions with internally generated signals from VLC went through without any glitches. Maybe the firewire connection adds a lot of CPU load on top, and that, combined with the hard drive issue, generates the error message?
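(For what it’s worth, raising the firewire IRQ priority with rtirq seems to be done via something like the following; the exact file location and entry name depend on the distribution and kernel firewire stack, so treat this as a sketch:)

    # /etc/default/rtirq (or /etc/rtirq.conf on some systems)
    RTIRQ_NAME_LIST="rtc firewire snd usb"

    # then restart the service
    sudo /etc/init.d/rtirq restart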
@Seablade
The figures hdparm provided might not be meaningful, as you suggested, but since the actual performance of the disk doesn’t seem to be the problem in the first place, I don’t mind that much.
It’s good to know that having separate disks for the OS and for audio production is recommended. I didn’t know that this causes that much of a problem. (I thought that using separate partitions would already do the trick…)
If I were to use an external drive for recording via FW400, could the firewire bus be a bottleneck performance-wise?
@jrigg
I’m not sure about the firewire chipset. Device manager tells me the following:
R5C832 IEEE 1394 Controller
Ricoh Co Ltd (Lenovo)
I already thought about getting a firewire ExpressCard so I can use a 6-pin FW cable (the notebook default is only a 4-pin connection, so I have to use the external AC adapter for the Edirol device, which is quite annoying). Could this also provide an improvement in this case?
As I’ve learned now, my system in its present configuration is obviously not the one of choice for audio production (well okay, I already suspected that this would be the case).
Still, there has to be a way to get Ardour running at least a bit more smoothly on this system. The way Audacity handles multitrack recording flawlessly even on my setup (without any tweaking beforehand) shows that the task is technically feasible here. (Or is there some fundamental difference between Ardour and Audacity when it comes to plain recording tasks?)
Special thanks to Nick for your composition of useful suggestions. I’ll follow your suggestions and tell you later if it worked.
Yes, using a firewire drive and a firewire audio interface will cause a bottleneck on your laptop’s firewire bus.
Getting an express card for an extra firewire device would be the ticket.
This is the set up I use on my MBP.
SIIG Express Firewire
I connect my M-Audio 410 to the express card, and I connect my external hard drive to the firewire connection on the MBP.
The drive you write audio to should be a 7200 RPM drive, no lower.
If I were to use an external drive for recording via FW400, could the firewire bus be a bottleneck performance-wise?
If they are on the same bus, yes it will be. As phillip8888 pointed out, the better solution would be to separate them, putting the hard drive on its own FW800 bus, or using eSATA for the hard drive.
Or of course you could not use firewire for audio, but I think that is less likely given you already bought the interface ;)
For the record, Paul and I do disagree on some level about what constitutes an acceptable system in this regard, and we have discussed it at length before. That being said, the basic premise of his point is perfectly valid: if you are having issues with writing for a period of 5 seconds, you really shouldn’t be using that setup for multitracking. That is absolutely correct. Still, I do often get forced to use less-than-ideal setups, for instance my laptop, which has become my primary machine as of late. That is why the option I posted above exists: so that I can compensate for that in some way and at least increase my chances of a successful recording…
I recorded a session yesterday and it worked without glitches for over 30 minutes (which is quite a success!!).
All I’ve changed so far is adding the “noatime” parameter for the mounted drives.
I guess I’ll change that buffer thingy in the Ardour config as well, just to be on the safe side, but it seems like the “noatime” option did the trick for now.
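(For anyone finding this thread later, the change was along these lines; the device and mount point here are examples, not my actual ones:)

    # /etc/fstab entry for the audio partition, with noatime added to the options
    /dev/sdb1  /mnt/audio  ext3  defaults,noatime  0  2

    # or apply it without rebooting:
    sudo mount -o remount,noatime /mnt/audio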
Sure, you’re right: for professional use the system I’m using right now would certainly be unacceptable. But since I’m doing this as a hobby I don’t want to spend hundreds more euros on hardware (on top of what I already had to spend on the interface and a mixing console). So if there is a way to get things running with the hardware I have, then things are perfectly fine, even if it’s not the best possible way of doing this.
Glad it is working for you. I wouldn’t change the buffer settings unless you need to; if the missing noatime parameter was all it was, then I’m glad it turned out to be a relatively simple fix.