DaanDeMeyer / reproc

A cross-platform (C99/C++11) process library

run fails when ulimit is set over 1024^2

danpf opened this issue

I usually don't use more than 1024^2 file descriptors, but I have set ulimit -n unlimited in the past to overcome some problems on computers with lots of CPUs.

If ulimit -n unlimited is set on macOS:

this causes the call to get_max_fd() to return INT_MAX, and the process then fails at this check:

if (max_fd > MAX_FD_LIMIT) {

This appears to happen only on macOS. I'm guessing the problem is that macOS doesn't report the actual limit, while on Linux the call actually returns 1024^2.
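To make the failure mode concrete, here is a minimal, self-contained sketch (not reproc's actual implementation) of how a limit query along the lines of get_max_fd() can end up at INT_MAX under ulimit -n unlimited; the MAX_FD_LIMIT value below is an illustrative stand-in for reproc's internal threshold:

```c
/* Illustrative sketch only: queries the open-file limit the way a helper
 * like get_max_fd() typically would. With `ulimit -n unlimited`, the
 * reported limit can be effectively unbounded, which then trips a
 * hard-coded sanity check like the MAX_FD_LIMIT comparison above. */
#include <limits.h>
#include <stdio.h>
#include <sys/resource.h>

#define MAX_FD_LIMIT (1024 * 1024) /* illustrative value, not reproc's constant */

int main(void) {
  struct rlimit limit;
  if (getrlimit(RLIMIT_NOFILE, &limit) < 0) {
    return 1;
  }

  /* RLIM_INFINITY (or a very large soft limit) becomes a huge fd count. */
  long max_fd = limit.rlim_cur == RLIM_INFINITY ? INT_MAX : (long) limit.rlim_cur;
  printf("max fd reported: %ld\n", max_fd);

  if (max_fd > MAX_FD_LIMIT) {
    /* This is the branch that makes the whole run fail. */
    fprintf(stderr, "limit too high, refusing to iterate over all fds\n");
    return 1;
  }

  return 0;
}
```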

original issue:
mamba-org/mamba#1758

I don't really know what the best solution to this would be, as I'm not a macOS expert... :/

We ran into this as well. The issue with the current handling is that it is not at all clear that it is reproc itself failing rather than the spawned process.

We currently work around this by setting a lower rlimit whenever we detect a limit that reproc will refuse to use (sketched below). This is far from ideal. I wonder if the code here could be restructured to remove that arbitrary limit inside reproc, e.g., by using close_range(2).
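A minimal sketch of that workaround, under the assumption that clamping only the soft RLIMIT_NOFILE is enough to stay below reproc's threshold; the helper name and the cap value are illustrative, not part of reproc's API:

```c
/* Sketch of the rlimit workaround described above (illustrative only:
 * the cap and the helper name are assumptions). Lowering the soft
 * RLIMIT_NOFILE before calling into reproc keeps its internal max-fd
 * check from triggering. */
#include <sys/resource.h>

static int clamp_nofile_limit(rlim_t max_allowed) {
  struct rlimit limit;

  if (getrlimit(RLIMIT_NOFILE, &limit) < 0) {
    return -1;
  }

  if (limit.rlim_cur == RLIM_INFINITY || limit.rlim_cur > max_allowed) {
    limit.rlim_cur = max_allowed; /* only the soft limit needs to change */
    if (setrlimit(RLIMIT_NOFILE, &limit) < 0) {
      return -1;
    }
  }

  return 0;
}

/* Usage: call clamp_nofile_limit(1024 * 1024) before the first reproc call. */
```

As far as I know, close_range(2) is available on Linux (since 5.9) and FreeBSD but not on macOS, so the macOS path would still need a different mechanism even if reproc adopted it on Linux.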