How to Solve the “Too Many Open Files” Error on Linux

 


On Linux computers, system resources are shared among the users. Try to use more than your fair share and you’ll hit an upper limit. You might also bottleneck other users or processes.

Shared System Resources

Among its other gazillion jobs, the kernel of a Linux computer is always busy watching who is using how much of the finite system resources, such as RAM and CPU cycles. A multi-user system requires constant attention to make sure people and processes aren’t using more of any given system resource than is appropriate.

It’s not fair, for example, for somebody to hog so much CPU time that the computer feels slow for everyone else. Even if you’re the only person who uses your Linux computer, there are limits set for the resources your processes can use. After all, you’re still just another user.

Some system resources are well-known and obvious, like RAM, CPU cycles, and hard drive space. But there are many, many more resources that are monitored, and for which each user (or each user-owned process) has a set upper limit. One of these is the number of files a process can have open at once.

If you’ve ever seen the “Too many open files” error message in a terminal window or found it in your system logs, it means that the upper limit has been hit and the process isn’t being permitted to open any more files.

It’s Not Just Files You’ve Opened

There’s a system-wide limit to the number of open files that Linux can handle. It’s a very large number, as we’ll see, but there is still a limit. Each user process has an allocation that it can use. They each get a small share of the system total allocated to them.

What actually gets allocated is a number of file handles. Each file that is opened requires a handle. Even with fairly generous allocations, system-wide file handles can get used up faster than you might first imagine.

Linux abstracts almost everything so that it appears as if it is a file. Sometimes they’ll be just that, plain old files. But other actions, such as opening a directory, use a file handle too. Linux uses block special files as a kind of driver for hardware devices. Character special files are very similar, but they’re more often used with devices that have a concept of throughput, such as pipes and serial ports.

Block special files handle blocks of data at a time and character special files handle each character individually. Both of these special files can only be accessed by using file handles. Libraries used by a program use a file handle, streams use file handles, and network connections use file handles.
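
You can spot these special files with a long listing in the “/dev” directory, where a leading “b” marks a block device and a leading “c” marks a character device. The device names vary from machine to machine; “/dev/sda” and “/dev/tty” are just common examples.

ls -l /dev/sda /dev/tty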

Abstracting all of these different requirements so that they appear as files simplifies interfacing with them and allows things like piping and streams to work.

You can see that, behind the scenes, Linux is opening files and using file handles just to run itself, never mind your user processes. The count of open files isn’t just the number of files you’ve opened. Almost everything in the operating system is using file handles.
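
If you want a rough feel for how many open files that adds up to, you can count the lines of lsof output. Note that lsof reports some files more than once, so treat the figure as a ballpark rather than an exact count of file handles.

sudo lsof | wc -l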

File Handle Limits

The system-wide maximum number of file handles can be seen with this command.

cat /proc/sys/fs/file-max

Finding the system maximum for open files

This returns a preposterously large number of 9.2 quintillion. That’s the theoretical system maximum. It’s the largest possible value you can hold in a 64-bit signed integer. Whether your poor computer could actually cope with that many files open at once is another matter altogether.
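
The kernel also keeps a running count of how many file handles are actually allocated right now. You can see it in the “/proc/sys/fs/file-nr” file: the first number is the handles currently in use, the second is allocated-but-unused handles, and the third is the same maximum we saw above.

cat /proc/sys/fs/file-nr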

At the user level, there isn’t an explicit value for the maximum number of open files you can have. But we can roughly work it out. To find the maximum number of files that one of your processes can open, we can use the ulimit command with the -n (open files) option.

ulimit -n

Finding how many files a process can open

And to find the maximum number of processes a user can have, we’ll use ulimit with the -u (user processes) option.

ulimit -u

Finding the number of processes a user can have

Multiplying 1024 by 7640 gives us 7,823,360. Of course, many of those processes will already be used by your desktop environment and other background processes. So that’s another theoretical maximum, and one you’ll never realistically achieve.
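
To do that sum with the values from your own machine, you can let the shell do the arithmetic for you (this works in bash, where ulimit is a shell builtin):

echo $(( $(ulimit -n) * $(ulimit -u) ))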

The important figure is the number of files a process can open. By default, this is 1024. It’s worth noting that opening the same file 1024 times concurrently is the same as opening 1024 different files concurrently. Once you’ve used up all of your file handles, you’re done.

It’s possible to adjust the number of files a process can open. There are actually two values to consider when you’re adjusting this number. One is the value it is currently set to, or that you’re trying to set it to. This is called the soft limit. There’s a hard limit too, and this is the highest value that you can raise the soft limit to.

The way to think of this is that the soft limit really is the “current value” and the hard limit is the highest value the current value can reach. A regular, non-root user can raise their soft limit to any value up to their hard limit. The root user can increase their hard limit.

To see the current soft and hard limits, use ulimit with the -S (soft) and -H (hard) options, together with the -n (open files) option.

ulimit -Sn
ulimit -Hn

Finding the soft and hard limit for process file handles

To create a situation where we can see the soft limit being enforced, we created a program that repeatedly opens files until it fails. It then waits for a keystroke before relinquishing all of the file handles it used. The program is called open-files.
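
We haven’t reproduced the source of that program here, but a minimal bash stand-in behaves in much the same way. This sketch assumes bash 4.1 or later for the {fd} automatic file descriptor allocation, and it simply reopens the same file over and over, which counts against the limit just as opening different files would:

#!/bin/bash
# Keep opening files until the per-process limit is hit, then wait for a keypress.
count=0
# In bash's default (non-POSIX) mode, a failed exec redirection returns an
# error instead of exiting the shell, and that is what ends this loop.
while exec {fd}< /etc/hostname; do
    count=$(( count + 1 ))
done
echo "Opened $count files before hitting the limit. PID: $$"
read -r -p "Press Enter to release the file handles and exit..."

The count it reports will differ slightly from the figures below, because the shell holds a few descriptors open for its own use.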

./open-files

The open-files program hitting the soft limit of 1024

It opens 1021 files and fails as it tries to open file 1022.

1024 minus 1021 is 3. What happened to the other three file handles? They were used for the STDIN, STDOUT, and STDERR streams. They are created automatically for each process. These always have file descriptor values of 0, 1, and 2.
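
You can see those descriptors for yourself by listing a process’s file descriptor directory under “/proc”. For the shell you’re typing in, for example:

ls -l /proc/$$/fd

Descriptors 0, 1, and 2 will be listed, possibly along with a few others the shell keeps open for its own housekeeping.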

RELATED: How to Use the Linux lsof Command

We can see these using the lsof command with the -p (process) option and the process ID of the open-files program. Handily, it prints its process ID to the terminal window.

lsof -p 11038

The stdin, stdout, and stderr streams and file handles in the lsof command output

Of course, in a real-world situation, you might not know which process has just gobbled up all of the file handles. To start your investigation, you could use this sequence of piped commands. It will tell you the fifteen most prolific users of file handles on your computer.

lsof | awk '{ print $1 " " $2; }' | sort -rn | uniq -c | sort -rn | head -15

Seeing the processes that use the most file handles

To see more or fewer entries, adjust the -15 parameter to the head command. Once you’ve identified the process, you need to figure out whether it has gone rogue and is opening too many files because it is out of control, or whether it really needs those files. If it does need them, you need to increase its file handle limit.
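
It’s also worth checking which limit actually applies to the process you’ve identified, because a long-running program may have been started under different limits than your current shell. Substituting the process ID you found for 11038:

grep "Max open files" /proc/11038/limits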

Increasing the Soft Limit

If we increase the soft limit and run our program again, we should see it open more files. We’ll use the ulimit command and the -n (open files) option with a numeric value of 2048. This will be the new soft limit.

ulimit -n 2048

Setting a new file handle soft limit for processes

This time we successfully opened 2045 files. As expected, that is three fewer than 2048, because of the file handles used for STDIN, STDOUT, and STDERR.
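
If you only need the higher limit for one command, you can set it in a throwaway subshell instead, which leaves the limit in your interactive shell untouched. Substitute your own program for ./open-files here:

( ulimit -n 2048; ./open-files )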

Making Permanent Changes

Increasing the soft limit only affects the current shell. Open a new terminal window and check the soft limit. You’ll see it’s the old default value. But there is a way to globally set a new default value for the maximum number of open files a process can have that is persistent and survives reboots.

Outdated advice often recommends you edit files such as “/etc/sysctl.conf” and “/etc/security/limits.conf.” However, on systemd-based distributions, these edits don’t work consistently, especially for graphical login sessions.

The method shown here is the way to do this on systemd-based distributions. There are two files we need to work with. The first is the “/etc/systemd/system.conf” file. We’ll need to use sudo.

sudo gedit /etc/systemd/system.conf

Editing the system.conf file

Search for the line that contains the string “DefaultLimitNOFILE.” Remove the hash “#” from the start of the line, and edit the first number to whatever you want your new soft limit for processes to be. We chose 4096. The second number on that line is the hard limit. We didn’t adjust this.
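
Once edited, the line should look something like the example below. The 4096 is our new soft limit; the number after the colon is the hard limit, which we left at whatever our distribution shipped with (524288 is a common default, but yours may differ).

DefaultLimitNOFILE=4096:524288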

The DefaultLimitNOFILE value in the system.conf file

Save the file and close the editor.

We need to repeat that operation on the “/etc/systemd/user.conf” file.

sudo gedit /etc/systemd/user.conf

Editing the user.conf file

Make the same adjustments to the line containing the string “DefaultLimitNOFILE.”

The DefaultLimitNOFILE value in the user.conf file

Save the file and close the editor. You must either reboot your computer or use the systemctl command with the daemon-reexec option so that systemd is re-executed and ingests the new settings.

sudo systemctl daemon-reexec

Restarting systemd

Opening a terminal window and checking the new limit should show the new value you set. In our case that was 4096.

ulimit -n

Checking the new soft limit with ulimit -n
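
If the value hasn’t changed, double-check that the “DefaultLimitNOFILE” lines really were uncommented and edited in both files:

grep DefaultLimitNOFILE /etc/systemd/system.conf /etc/systemd/user.conf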

We can test that this is a live, operational value by rerunning our file-greedy program.

./open-files

Checking the new soft limit with the open-files program

The program fails to open file number 4094, which means that 4093 files were opened. That’s our expected value, 3 fewer than 4096.

Everything Is a File

That’s why Linux is so dependent on file handles. Now, if you start to run out of them, you’ll know how to increase your quota.

RELATED: What Are stdin, stdout, and stderr on Linux?
