Reading the 1022nd file in Linux

I came across this problem again, haha.

I was running a program that needs 8000+ files open at the same time, but it failed while reading the 1022nd file, just like before. This time I decided to fix it permanently.

Add this to /etc/security/limits.conf:

*      soft    nofile   9000
*      hard    nofile   9000

Log out and back in for the new limits to take effect (they are applied by PAM at login). Alternatively, if you only need a temporary change in the current shell, use:

ulimit -n 9000
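To check what limits are actually in effect in your shell, you can query the soft and hard values separately (a quick sanity check, nothing more):

```shell
# Soft limit: the value a process actually runs into when opening files
ulimit -Sn
# Hard limit: the ceiling an unprivileged process may raise the soft limit to
ulimit -Hn
```

Note that `ulimit -n 9000` only affects the current shell and its children; an unprivileged user can raise the soft limit only up to the hard limit, and raising the hard limit itself requires root.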

I have seen a post about this problem before, and it said CentOS has a default limit of 1024 file descriptors per process. Three of those are taken at startup by the standard streams, stdin, stdout, and stderr (descriptors 0, 1, and 2), leaving 1021 for regular files, which is exactly why the failure showed up at the 1022nd file.
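On Linux you can see those pre-opened descriptors for yourself via /proc (a small sketch; the exact listing depends on what else the process has open):

```shell
# List the file descriptors open in the ls process itself.
# 0, 1, and 2 are stdin, stdout, and stderr; the extra descriptor
# you will usually see is the one ls opened to read this directory.
ls /proc/self/fd
```

Any file your script opens starts counting from descriptor 3 onward, so the usable budget is always the nofile limit minus the descriptors already open.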
