Clustering Node.js apps
Yes, Node is fast enough. But it runs your JavaScript on a single thread, so in most cases we can make it much faster (on a multi-core system, of course).
There is nice documentation about clustering in the official Node.js docs.
In a nutshell, the master process simply distributes incoming connections among the forked worker processes. The cluster module also emits some useful events for monitoring workers and re-forking them in case of trouble.
Let's create an extra-simple web server:
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'content-type': 'text/plain'});
  res.end('Hello World!');
}).listen(3000);
To benchmark it I'll use wrk with the same invocation as the benchmark from the incredible Iron web framework for Rust (wrk -t12 -c900 -d10s). On my quad-core VPS the results aren't bad:
Running 10s test @ http://127.0.0.1:3000/
12 threads and 900 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 140.61ms 69.12ms 785.85ms 88.44%
Req/Sec 398.78 247.27 1.28k 62.36%
46600 requests in 10.03s, 6.93MB read
Requests/sec: 4646.52
Transfer/sec: 707.87KB
Hmm, 4646.52 requests per second.
Now, let's speed it up!
var http = require('http'),
    cluster = require('cluster'),
    coresnum = require('os').cpus().length;

if (cluster.isMaster) {
  // the master process runs first and creates as many forks as there are cores/threads in the system
  for (var i = 0; i < coresnum; i++) {
    cluster.fork();
  }
  cluster.on('exit', function (worker, code, signal) {
    console.log('Someone is dead, PID: ' + worker.process.pid + ', signal: ' + (signal || code));
    cluster.fork();
  });
} else {
  // this branch runs in every fork
  http.createServer(function (req, res) {
    res.writeHead(200, {'content-type': 'text/plain'});
    res.end('Hello World!');
  }).listen(3000);
}
The handler attached to the 'exit' event re-forks any worker that is killed, crashes, or otherwise terminates.
Let's kill one of the forks:
$ ps ax | grep node
17064 pts/3 Sl+ 0:00 node cluster.js
17069 pts/3 Sl+ 0:00 /usr/local/bin/node --debug-port=5859 /tmp/cluster.js
17074 pts/3 Sl+ 0:00 /usr/local/bin/node --debug-port=5860 /tmp/cluster.js
17075 pts/3 Sl+ 0:00 /usr/local/bin/node --debug-port=5861 /tmp/cluster.js
17080 pts/3 Sl+ 0:00 /usr/local/bin/node --debug-port=5862 /tmp/cluster.js
17092 pts/2 S+ 0:00 grep node
$ kill 17074
Nice:
Someone is dead, PID: 17074, signal: SIGTERM
Run wrk again:
$ wrk -t12 -c900 -d10s http://127.0.0.1:3000/
Running 10s test @ http://127.0.0.1:3000/
12 threads and 900 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 62.34ms 144.61ms 2.00s 96.96%
Req/Sec 1.74k 763.11 8.61k 85.76%
178799 requests in 10.10s, 26.60MB read
Socket errors: connect 0, read 0, write 0, timeout 413
Requests/sec: 17709.63
Transfer/sec: 2.63MB
Not bad: now we get 17709.63 requests per second instead of 4646.52.
How about Iron?
extern crate iron;

use iron::prelude::*;
use iron::status;

fn main() {
    Iron::new(|_: &mut Request| {
        Ok(Response::with((status::Ok, "Hello world!")))
    }).http("0.0.0.0:3000").unwrap();
}
Wrk again:
$ wrk -t12 -c900 -d10s http://127.0.0.1:3000/
Running 10s test @ http://127.0.0.1:3000/
12 threads and 900 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.03ms 0.95ms 20.98ms 96.66%
Req/Sec 2.59k 563.94 4.06k 58.67%
77369 requests in 10.05s, 8.41MB read
Requests/sec: 7698.92
Transfer/sec: 857.11KB
Considering that Rust is a compiled language and Iron is multithreaded by default, I think the Node.js results are impressive :)