I get this warning when I start Ardour on Linux Mint DE (Debian Edition) and Linux Mint 21:
WARNING: Your system has a limit set for reserving memory. This could cause Ardour to run out of memory before the system limit is reached.
You can view the memory limit with 'ulimit -l' and usually change it in /etc/security/limits.conf.
The terminal says:
stefan@studio10:~$ ulimit -l
2031484
I have 16 GB of RAM in my studio desktop. What do I have to adjust so that Ardour works with optimized memory use?
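As a starting point, you can inspect both the soft and the hard locked-memory limit from a terminal (a quick sketch; values are reported in KiB and vary per system):

```shell
# Locked-memory limits, reported in KiB ("unlimited" means no cap)
ulimit -S -l   # soft limit: what Ardour actually gets
ulimit -H -l   # hard limit: the ceiling the soft limit can be raised to
```

Plain `ulimit -l` shows the soft limit, which is the one the warning is about.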
#Each line describes a limit for a user in the form:
#<domain> <type> <item> <value>
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
# - NOTE: group and wildcard limits are not applied to root.
# To apply a limit to the root user, <domain> must be
# the literal username root.
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#<domain> <type> <item> <value>
#* soft core 0
#root hard core 100000
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
#@student - maxlogins 4
# End of file
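For reference, the usual tweak on Debian-based systems is a memlock entry for the audio group in /etc/security/limits.conf (or a drop-in file under /etc/security/limits.d/). This sketch assumes your user is a member of the "audio" group; the change takes effect at the next login:

```
# /etc/security/limits.conf (assumes your user is in the "audio" group)
#<domain>   <type>   <item>     <value>
@audio      -        memlock    unlimited
```

Using `-` for the type sets both the soft and the hard limit at once.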
Yes, although the checking is (I believe) overly strict. If there is any limit on locked memory, that message will be displayed. I have memlock limited to 16 GiB on my machine, which is enough to hold 397 track hours in memory (i.e. one track of 397 hours, or 48 tracks of 8 hours, or 96 tracks of 4 hours, etc.).
Personally I feel that that is plenty, so I just ignore the “warning” that I have a limit on the amount of memory which can be locked.
Your current limit is 2,031,484 KiB, or almost 2 GiB. That is plenty; don’t worry about changing it if you don’t mind just ignoring that warning when it pops up.
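To sanity-check that conversion (1 GiB = 1024 × 1024 KiB), the arithmetic can be done in the shell:

```shell
# Convert the reported ulimit value from KiB to GiB
awk 'BEGIN { printf "%.2f GiB\n", 2031484 / 1024 / 1024 }'
# prints "1.94 GiB"
```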
Thank you very much. In LMDE 5 the alert is gone. In LM21 I don’t know whether I clicked “never show this warning again”. If I did, does anyone know where to re-enable it?
In LM21 I had problems with huge SF2 collections (more than 500 MB); Ardour crashed. I think that was the solution. I have 16 GB, and if only 2 GB were reserved for Ardour, that is not optimal performance.
I added your tweak also in Linux Mint 21. Thanks for it.
Loading the CrisisGeneralMidi SoundFont into ACE Fluidsynth requires 1.6 GB alone. Now add a second instance of the plugin and the total goes up to 3.2 GB. That does not include Ardour itself (250+ MB) and other realtime-critical address space, none of which should be paged (which can happen even if you have 32 GB or more of physical RAM).
Ardour can use as much memory as needed. Memory-locking only prevents memory used by Ardour from being paged (swapped out to disk).
When overall free physical RAM is low, the hard disk can be used as additional memory. Disk reads and writes are very slow (compared to RAM), and the time required to access the disk is unpredictable. While memory is being paged from/to disk, the application is unresponsive (it waits for data).
If a low-latency audio application has to wait for data to be paged in from disk, the result is audible dropouts. So the goal is to prevent any data from being paged out to disk by locking the memory into RAM.
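One way to see whether locking actually happened is to read a process's VmLck line from /proc. This is a sketch; the pgrep pattern "ardour" is an assumption and may need adjusting to match your installed binary name:

```shell
# Show how much memory a running Ardour process currently has locked (kB).
# The "ardour" pattern is an assumption; adjust it for your install.
pid=$(pgrep -n ardour)
grep VmLck "/proc/$pid/status"
```

A non-zero VmLck value means the kernel is keeping those pages resident in RAM.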
OK, I have to raise my hand and admit to making assumptions about work flow that don’t apply to everyone. I tend to think in terms of audio recording and forget that there are people who are willing to throw GB at their sample instruments as well.
The easiest fix is to just set ulimit -l to unlimited, as recommended.
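For the current shell session (and any program started from it), the soft limit can be raised like this; note it only succeeds if the hard limit allows it, otherwise the persistent limits.conf route is needed:

```shell
# Try to lift the soft locked-memory limit for this session only.
# Fails harmlessly if the hard limit is lower than "unlimited".
ulimit -l unlimited 2>/dev/null || echo "hard limit too low; edit /etc/security/limits.conf"
ulimit -l   # verify the current soft limit
```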
The old-school conservative approach was to not allow unlimited allocation of anything, in case a buggy program was started and attempted to consume all memory, all processing cycles, all disk space, etc. While doing some research to determine what would happen to a running system if a process actually did attempt to lock all physical memory, I did not find an answer to that question, but I did find that even Oracle recommends setting ulimit -l to unlimited. In my experience, people who are paying for an Oracle database get very upset if their database server crashes for any reason, so I suppose that if Oracle considers that setting safe enough for enterprise-critical database installs, it should be fine to set ulimit -l to unlimited on your audio workstation as well.