Node.js EMFILE: too many open files
Verified against Node.js docs: errors, libuv source: src/unix/stream.c, POSIX getrlimit(2) man page · Updated May 2026
Quick fix
Your process has hit the OS file descriptor limit. As a quick unblock, raise ulimit -n to 65536 in your shell. The real fix is to find the FD leak: log lsof -p PID counts over time, then close the file/socket/stream that keeps growing.
# Check current limit and usage
ulimit -n
lsof -p "$(pgrep -d, -f 'node ')" | wc -l
# Raise for current shell (macOS/Linux)
ulimit -n 65536
# Then restart your Node process

What causes this error
Every TCP socket, file handle, and pipe in Node holds a file descriptor (FD). The kernel limits how many a single process can hold. When fs.open, net.connect, or http.request tries to allocate one past the limit, the syscall returns EMFILE and Node throws. Common causes: forgetting to close streams, fanning out fs.readFile over thousands of paths at once, or under-tuned production limits.
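A minimal way to see the failure mode, as an illustrative sketch (the path and loop count are arbitrary):

import fs from 'node:fs/promises'

// every handle stays open, so the process marches toward the
// nofile limit and fs.open eventually rejects with code 'EMFILE'
const handles = []
for (let i = 0; i < 1_000_000; i++) {
  handles.push(await fs.open('/etc/hosts', 'r')) // never closed
}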
How to fix it
Step 1: Confirm it is EMFILE, not ENFILE
EMFILE = per-process limit. ENFILE = system-wide limit (rare on modern kernels). Check err.code. The fixes are different.
stream.on('error', (err) => {
  if (err.code === 'EMFILE') console.error('Per-process FD limit hit')
  if (err.code === 'ENFILE') console.error('System-wide FD limit hit')
})
Step 2: Raise the per-process limit
Default ulimit -n on macOS is 256, absurdly low. On Linux it's usually 1024. Bump to at least 65536 for any service handling concurrent connections. In systemd units, set LimitNOFILE=65536. In Docker, --ulimit nofile=65536:65536.
# Permanent (Linux) - edit /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

# systemd service
[Service]
LimitNOFILE=65536
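If the service depends on a raised limit, it can be worth failing loudly when the limit didn't apply. A minimal sketch using Node's diagnostic report (userLimits is populated on POSIX platforms; treat that as an assumption to verify on your Node version):

// warn at boot if the soft nofile limit is lower than expected
const { open_files } = process.report.getReport().userLimits
if (typeof open_files.soft === 'number' && open_files.soft < 65536) {
  console.warn(`nofile soft limit is ${open_files.soft}; check LimitNOFILE / --ulimit`)
}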
Step 3: Find the FD leak with lsof
Snapshot the lsof count every 30 seconds. The leaked resource is whatever count keeps climbing. Sockets to a specific host? Open files in a tmp dir? Pipes from child processes?
while sleep 30; do
  echo "$(date) $(lsof -p $PID 2>/dev/null | wc -l) FDs"
done
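On Linux you can also count descriptors from inside the process, with no lsof dependency; a sketch assuming /proc is mounted:

import fs from 'node:fs'

// each entry in /proc/self/fd is one open descriptor (Linux only)
setInterval(() => {
  const count = fs.readdirSync('/proc/self/fd').length
  console.log(`${new Date().toISOString()} ${count} FDs`)
}, 30_000).unref() // unref: don't let the timer keep the process alive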
Step 4: Close streams in finally or use pipeline
Read streams that error mid-flight don't always emit close. Use stream.pipeline (which always destroys both ends on error) or finished() to guarantee cleanup.
import fs from 'node:fs'
import { pipeline } from 'node:stream/promises'

await pipeline(
  fs.createReadStream(src),
  transform,
  fs.createWriteStream(dst)
) // both streams destroyed on success or error
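When pipeline doesn't fit, for example because another API consumes the stream, finished() from node:stream/promises waits for teardown. A sketch with an explicit destroy() in finally, since finished() itself does not destroy the stream:

import fs from 'node:fs'
import { finished } from 'node:stream/promises'

const rs = fs.createReadStream(src) // src as in the pipeline example
try {
  rs.resume()         // stand-in for a real consumer
  await finished(rs)  // resolves on end/close, rejects on error
} finally {
  rs.destroy()        // guarantee the underlying FD is released
}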
Step 5: Cap concurrency on file fan-out
Promise.all(files.map(fs.readFile)) opens N FDs at once. With 5000 files and ulimit 1024, EMFILE is guaranteed. Use a concurrency limiter like p-limit or process in batches.
import fs from 'node:fs'
import pLimit from 'p-limit'

const limit = pLimit(50)
await Promise.all(
  files.map(f => limit(() => fs.promises.readFile(f)))
)
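If adding a dependency is off the table, fixed-size batches give the same bound; a minimal sketch (readInBatches is a hypothetical helper):

import fs from 'node:fs/promises'

// at most batchSize reads in flight at any moment
async function readInBatches(files, batchSize = 50) {
  const results = []
  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize)
    results.push(...await Promise.all(batch.map(f => fs.readFile(f))))
  }
  return results
}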
Step 6: For HTTP clients, reuse a keep-alive Agent
Each new http.request without a shared Agent opens a fresh socket. Under load, these accumulate. Use a global Agent with keepAlive: true and a maxSockets cap.
import https from 'node:https'

// one shared keep-alive Agent for the whole process
const agent = new https.Agent({ keepAlive: true, maxSockets: 100 })
https.get(url, { agent }, (res) => res.resume()) // reuses sockets
// note: the built-in fetch ignores `agent`; it takes an undici dispatcher instead
Why EMFILE happens at the runtime level
Every Node I/O resource (file, socket, pipe, FIFO, eventfd) is a file descriptor managed by the kernel. The RLIMIT_NOFILE resource limit (set via setrlimit(2)) caps how many a process can hold; on Linux the default soft limit is 1024 and the hard limit is whatever the system allows. When libuv calls open(2), socket(2), or pipe(2) past this cap, the kernel returns -1 with errno EMFILE (errno 24), and libuv translates this into the JS-visible error. Note that a TCP socket you have close(2)d no longer counts against the limit, even while the kernel holds it in TIME_WAIT for 60-120 seconds; TIME_WAIT pressure exhausts ephemeral ports, not FDs. Sockets you never close, however, hold their descriptors indefinitely.
Common debug mistakes for EMFILE
- Awaiting Promise.all over thousands of fs.readFile calls without a concurrency limiter, opening every file simultaneously.
- Forgetting to call response.body.cancel() or stream.destroy() in error paths, leaking sockets per failed request (see the sketch after this list).
- Making HTTP requests in a loop without a shared keep-alive Agent or connection pool, then wondering why FD usage grows linearly with request count.
- Setting ulimit -n in one shell but starting the service via systemd or pm2 which uses its own limits.
- Trusting graceful-fs to mask the leak in production without ever fixing the underlying unclosed handles.
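For the error-path leak in the second item, a minimal sketch with the WHATWG fetch API (getJson and url are hypothetical):

async function getJson(url) {
  const res = await fetch(url)
  try {
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    return await res.json()
  } catch (err) {
    // release the socket if the body was never (fully) consumed;
    // cancel() rejects when the body is locked, hence the swallow
    await res.body?.cancel().catch(() => {})
    throw err
  }
}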
When EMFILE signals a deeper problem
Repeated EMFILE under realistic load means the application has unbounded resource fan-out: it pretends external systems have infinite throughput. The architectural fix is to introduce explicit backpressure: bounded queues for outgoing requests, semaphores around fs operations, and connection pools with maxSockets sized to actual downstream capacity. Frameworks like undici expose Pool and Dispatcher abstractions for this. Without bounded concurrency the same code will hit ENOMEM, ETIMEDOUT, or downstream rate limits next; EMFILE is just the first wall the OS happens to enforce.
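A bounded-pool sketch with undici (the origin and path are placeholders):

import { Pool } from 'undici'

// at most 50 sockets to this origin; further requests queue behind them
const pool = new Pool('https://api.example.com', { connections: 50 })

const { statusCode, body } = await pool.request({ path: '/v1/items', method: 'GET' })
const data = await body.json() // always consume (or destroy) the body to free the socket
console.log(statusCode, data)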
Frequently asked questions
Why is the macOS default file descriptor limit so low?
macOS inherits the BSD default OPEN_MAX of 256, then layers a per-session ulimit on top. For desktop use this is fine; for server workloads it's catastrophic. Bump it in ~/.zshrc (ulimit -n 65536) and confirm against sysctl kern.maxfiles, which sets the system-wide ceiling. macOS Sonoma and later raised internal limits but not the user-visible default. Production servers should never run on macOS; use Linux, where 1024 is the default and 65536 is a one-line config change.
Does keep-alive in HTTP clients reduce EMFILE risk?
Yes, dramatically. Without keep-alive, each request opens a fresh TCP socket, so under load many short-lived sockets are open concurrently, each holding an FD; the churn also piles closed sockets into TIME_WAIT, which burns ephemeral ports even though those FDs are released at close. With keep-alive, sockets are reused for subsequent requests to the same host, capping FD usage at the Agent's maxSockets value. Always set maxSockets explicitly in production; the default Infinity will exhaust FDs under load before the application backpressures.
Can libuv work around EMFILE automatically?
Partially. libuv catches EMFILE during accept(2) for TCP listeners and applies a 'reserve a spare FD' trick: it pre-opens an FD, releases it when EMFILE hits, accepts and immediately closes the incoming connection, then reopens the spare. This keeps the listener from getting stuck, though each connection handled this way is still dropped, just cleanly. For fs operations there's no such workaround; EMFILE propagates straight to your error handler. The graceful-fs npm package patches fs.open to retry on EMFILE, which is useful but masks the underlying leak.
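A sketch of the retry idea graceful-fs implements; openWithRetry and the backoff numbers are hypothetical:

import fs from 'node:fs/promises'
import { setTimeout as sleep } from 'node:timers/promises'

async function openWithRetry(path, flags, attempts = 5) {
  for (let i = 0; ; i++) {
    try {
      return await fs.open(path, flags) // resolves to a FileHandle
    } catch (err) {
      if (err.code !== 'EMFILE' || i >= attempts) throw err
      await sleep(2 ** i * 10) // back off and hope another FD closes
    }
  }
}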
How do I confirm the FD limit is actually raised in production?
Inside the Node process, read the userLimits section of process.report.getReport(), or read /proc/PID/limits at runtime. Check both soft and hard limits. Containers often inherit the container runtime's limits, not the host's; in Kubernetes, nofile ulimits typically come from the container runtime's configuration rather than the pod spec. The /proc file shows live values: open it directly with fs.readFile to log limits at boot.
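For the procfs route, a short boot-time log (Linux only):

import fs from 'node:fs/promises'

// the kernel's live view of this process's limits
const limits = await fs.readFile('/proc/self/limits', 'utf8')
console.log(limits.split('\n').find((l) => l.startsWith('Max open files')))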