
Is Node.js Really Single-Threaded?

How Node.js Handles Multiple Requests with a Single Thread


When developers first hear that Node.js is single-threaded, they usually have one of two reactions: either they're confused, or they're skeptical.

How can a single thread handle thousands of simultaneous users?
That's a fair question, and the answer changes the way you think about concurrent systems.

What a Thread Actually Is

Before we can talk about single-threaded, we need to be clear on what a thread is.

A process is a running program. When you start Node.js, your OS creates a process for it: memory space, file handles, all of it. That process is isolated; other processes can't mess with its memory directly.

A thread lives inside a process. It's a unit of execution: a sequence of instructions the CPU can run. One process can have many threads, all sharing the same memory space, all potentially running at the same time on different CPU cores.

Traditional web servers like Apache create a new thread (or even a new process) per incoming request.

One user connects β†’ one thread. A hundred users β†’ a hundred threads. That model is simple and intuitive.

Node.js says one thread. For everything.

That sounds like a weakness. Here's why it isn't, at least not for most web applications.

Event Loop

The single thread in Node.js isn't doing what you think it's doing most of the time. It's not running your code nonstop. It's mostly checking.

The event loop runs in a continuous cycle. Each pass through the loop (called a "tick") looks like this:

  1. Are there any timers that have expired? Run their callbacks.

  2. Are there any pending I/O callbacks ready? Run them.

  3. Are there any setImmediate callbacks queued? Run them.

  4. Any close events? Handle them.

  5. Nothing ready? Check again.

The entire secret of Node.js performance lives in step 5. When there's nothing ready, the thread doesn't spin-wait, burning CPU. It goes into an efficient waiting state, and the OS wakes it up the moment something is ready.

This is called I/O multiplexing, and Node.js gets it essentially for free by sitting on top of libuv, which uses epoll on Linux, kqueue on macOS, and IOCP on Windows under the hood.

These are OS-level mechanisms specifically designed for efficient I/O monitoring.

What Actually Happens When Multiple Requests Hit Node.js

Let's walk through three simultaneous requests arriving at a Node.js server.

Request A arrives. Your route handler fires. It calls db.query(...) with a callback. Node registers that callback and hands the DB work off to libuv. Your handler returns. Control goes back to the event loop.

Microseconds later, Request B arrives. Same thing: handler fires, calls an API, callback registered, handler returns. The event loop keeps spinning.

Immediately after, Request C arrives. Maybe this one doesn't need I/O; it just reads from an in-memory cache and returns a response. The handler runs to completion in a single tick. Response sent. Done.

Now the event loop keeps checking. Eventually, the DB responds for Request A. The callback fires. You format the data, call res.send(). Done. Then the API responds for Request B. Its callback fires. Response sent.

From the outside, from the perspective of the three clients who connected, all three requests were handled at the same time. From Node's perspective, there was only ever one thing running at any given instant. But because the expensive work was delegated, the thread was almost never actually waiting.

Timeline (single thread):

t=0ms   Request A arrives β†’ DB query fired β†’ handler exits β†’ event loop
t=1ms   Request B arrives β†’ API call fired β†’ handler exits β†’ event loop
t=2ms   Request C arrives β†’ cache lookup β†’ response sent β†’ event loop
t=45ms  DB responds β†’ Request A callback β†’ response sent
t=120ms API responds β†’ Request B callback β†’ response sent

One thread. Three requests handled. No waiting.
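That timeline can be simulated on a single thread, using setTimeout as a stand-in for DB and API latency (the handler names, delays, and log messages here are illustrative, not a real server):

```javascript
// Simulate three "requests" arriving back to back on one thread
const log = [];

function requestA() {
  // Slow DB work handed to a timer (stand-in for libuv-managed I/O)
  setTimeout(() => log.push('A: response sent'), 45);
  log.push('A: query fired, handler exits');
}

function requestB() {
  setTimeout(() => log.push('B: response sent'), 120);
  log.push('B: api call fired, handler exits');
}

function requestC() {
  // No I/O needed: respond within the same tick
  log.push('C: response sent');
}

requestA();
requestB();
requestC();
// At this point the thread is free; A and B's responses
// arrive later, as their timers fire.
```

All three handlers exit almost immediately; C responds first, then A at ~45 ms, then B at ~120 ms, exactly as in the timeline above.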

Thread vs Process: One More Thing Worth Knowing

Threads within a process share memory. This is efficient (no copying data) but risky: one thread can corrupt another's memory. Debugging race conditions in multi-threaded apps is notoriously painful.

Processes have separate memory. Safer, but communicating between them requires explicit mechanisms (IPC, message passing, shared files).

Node.js sidesteps the thread safety problem entirely by using one JS thread. No shared mutable state between concurrent handlers. No race conditions in your JS code.

When you do want parallelism in Node.js (for CPU-heavy work), you use Worker Threads or the Cluster module (which spawns multiple Node processes, each with its own event loop). These are explicit, opt-in forms of parallelism.

Why Node.js Scales So Well (The Actual Reason)

It has a very low per-connection cost.

In a thread-per-request model, each connection holds a thread. Threads typically reserve between 1MB and 8MB of stack space by default. Ten thousand concurrent connections means 10–80 GB of reserved stack space, before you've done any actual work.

In Node.js, an open connection waiting for data is just an entry in the event loop β€” a file descriptor and a callback reference. A few hundred bytes. Ten thousand concurrent open connections might cost you 50MB total.

This is the real reason LinkedIn went from 30 servers to 3 with Node.js. It wasn't that Node's code ran faster. It's that each idle connection cost almost nothing, so one machine could hold far more of them.

The technical term for this is the C10K problem: handling ten thousand concurrent connections. It was a serious challenge in the early 2000s. Node.js's architecture basically solves it by default.

The Actual Limitation

If you block the event loop, everything stops.

Not slows down. Stops. Every single client waiting on your server will be frozen until your blocking code finishes, because there's only one thread.

This can happen with:

  • A for loop iterating over 10 million items

  • Synchronous file reads (fs.readFileSync) in a hot path

  • Heavy JSON parsing of a massive payload

  • crypto.pbkdf2Sync with many iterations

  • Any CPU-intensive computation running to completion without yielding

app.get('/danger', (req, res) => {
  let sum = 0;
  for (let i = 0; i < 1_000_000_000; i++) {
    sum += i; // 1 billion iterations on the JS thread
  }
  res.send({ sum }); // every other user waited for this
});

In a multi-threaded server, this would only block one thread. In Node.js, it blocks everything.

The fix: offload CPU work to Worker Threads, or use setImmediate to yield control back to the event loop between chunks of work (process.nextTick won't help here; its callbacks run before the loop continues, so recursive nextTick calls can actually starve I/O). But the key lesson is: Node's single-threaded model rewards I/O-heavy workloads and punishes CPU-heavy ones.
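A sketch of the chunking approach, applied to the summation example (the chunk size and totals are arbitrary, chosen small here so the example is quick to run):

```javascript
// Sum 0..n-1 in chunks, yielding to the event loop between chunks
function sumInChunks(n, chunkSize, done) {
  let sum = 0;
  let i = 0;

  function nextChunk() {
    const end = Math.min(i + chunkSize, n);
    for (; i < end; i++) sum += i;

    if (i < n) {
      // Schedule the next chunk for a later tick so pending
      // I/O callbacks and other requests get a turn in between.
      setImmediate(nextChunk);
    } else {
      done(sum);
    }
  }

  nextChunk();
}

let total;
sumInChunks(10, 3, (sum) => {
  total = sum;
});
```

The total is the same as the blocking version would compute; the difference is that the event loop gets to breathe between chunks instead of freezing until the whole loop finishes.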

Know this going in. Build accordingly.

Moral of the Story

Single-threaded doesn't mean limited. It means focused. Node.js made a deliberate bet: most web applications are I/O-bound, not CPU-bound. If that's true for your application, and it usually is, then a single well-managed thread with non-blocking I/O outperforms a pool of threads, most of which are just waiting.

Understand the event loop. Respect the single thread. Don't block it with CPU work. That's the entire contract.

I'm currently deep-diving into JavaScript, building projects and exploring the internals of the web. If you're on a similar journey or just love talking about JavaScript, let's stay in touch!

Keep coding and keep building.

Node JS

Part 4 of 6

Sharing my Node JS journey: what I learn and have experienced.
