I came across this problem again, haha.
I was running a program that needs to open 8000+ files at the same time, but it failed while reading the 1022nd file, just like before. This time I decided to fix the problem permanently.
Add these lines to /etc/security/limits.conf:
* soft nofile 9000
* hard nofile 9000
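Note that limits.conf is only read at login, so the new values apply to sessions started after you log out and back in. To sanity-check from a fresh shell (these are standard bash builtins):

ulimit -Sn    # current soft limit; should now print 9000
ulimit -Hn    # current hard limit; should also print 9000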
And you are all set. Alternatively, if you only need a temporary bump for the current shell session, use this:
ulimit -n 9000
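Since ulimit only affects the current shell and whatever it launches, another option is to raise the limit at the top of the script itself. A minimal sketch, where process_files stands in for whatever program actually opens the 8000+ files:

#!/bin/bash
# Raise the soft limit for this shell and its children.
# This cannot go above the hard limit from limits.conf.
ulimit -n 9000
./process_files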
I have seen a post about this problem before, which said that by default CentOS limits each process to 1024 open file descriptors. Three of those are taken as soon as the process starts, by the standard streams: stdin (0), stdout (1), and stderr (2). That leaves 1021 descriptors for regular files, which is exactly why things fell apart at the 1022nd file.
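On Linux you can check this yourself by listing the descriptors a process already holds; the exact output depends on the shell, but 0, 1, and 2 are always there:

ls /proc/$$/fd    # for an interactive bash, typically prints: 0 1 2 255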