17

Reading a book on Single Page Web Applications I came to a paragraph that got me thinking:

Node.js is non-blocking and event-driven. In a nutshell, this means a single Node.js instance on modest hardware can handle tens or hundreds of thousands of concurrent open connections, such as those used in real-time messaging, which is often a highly desired feature of modern SPAs.

I have seen the Raspberry Pi used as a Rails server, so how about Node.js?

How do I set up my Raspberry Pi to serve a Node.js application?
Has anyone tried it? Are there tips & tricks, gotchas, or limitations to consider?


Edit: To avoid misunderstandings or off-topic discussion, let's please keep the focus on the Raspberry Pi, in the Node.js context:

  1. How suited is the Raspberry Pi to serve Node applications?
  2. If that is the case, how can one fine-tune the Raspberry Pi for best results?
Marius Butuc
  • Because the book is all about single-page applications, Node.js had to make an appearance there. Yes, it is possible to serve everything up with Node, but I doubt it will ever be done in any production environment, as it can get very complex and unfriendly – Piotr Kula Jan 23 '13 at 16:35
  • It is weird how you updated your question to ask 2 specific questions to avoid confusion and then marked an answer for installing Node.js, which was not the question. Your original question was how to set it up, plus any advice. Why did I even bother. LOL :) – Piotr Kula Jan 24 '13 at 08:39
  • Looks like when the choice was made you were still editing; the choice of answer can be changed just like the answers themselves, so thanks for pointing that out. :) – Marius Butuc Jan 24 '13 at 13:35

4 Answers

16

Getting Node.js on a Raspberry Pi

You can either:

  1. Compile Node.js yourself (as ppumkin already pointed out); this takes about 2 hours on a Raspberry Pi.
  2. Or you can download the pre-built binary v0.8.17.

Performance

I did a quick performance test (to give a rough first impression):

  1. My Raspberry Pi is overclocked (Turbo) with default memory_split (64)

  2. Tests were performed over my local network (802.11g Wi-Fi).

  3. I used the standard "Hello World" example from the Node.js website, modified to listen on all interfaces so that the benchmark can reach it over the LAN:

    var http = require('http');
    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello World\n');
    }).listen(1337);
    console.log('Server running at http://127.0.0.1:1337/');
    
    
  4. Apache Bench settings: ab -r -n 10000 -c 100 http://192.168.0.116:1337/

So these tests are not representative of a normal web application (both in terms of the network connection and the length/complexity of the transferred content).

Results

Server Software:        node.js/0.8.17
Server Hostname:        192.168.0.116
Server Port:            1337

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      100
Time taken for tests:   53.824 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1130791 bytes
HTML transferred:       120084 bytes
Requests per second:    185.79 [#/sec] (mean)
Time per request:       538.238 [ms] (mean)
Time per request:       5.382 [ms] (mean, across all concurrent requests)
Transfer rate:          20.52 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2  178 405.2     40    4975
Processing:     7  342 1136.4     50   31533
Waiting:        6  274 1047.6     48   31533
Total:         11  520 1238.7     94   31581

Percentage of the requests served within a certain time (ms)
  50%     94
  66%    112
  75%    303
  80%    714
  90%   1491
  95%   2499
  98%   3722
  99%   5040
 100%  31581 (longest request)

For a comparison, I also installed nginx on my Raspberry Pi and ran the same test with the default "Welcome to nginx!" HTML file:

Server Software:        nginx/1.2.1
Server Hostname:        192.168.0.116
Server Port:            80

Document Path:          /
Document Length:        151 bytes

Concurrency Level:      100
Time taken for tests:   46.959 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      3610361 bytes
HTML transferred:       1510151 bytes
Requests per second:    212.95 [#/sec] (mean)
Time per request:       469.590 [ms] (mean)
Time per request:       4.696 [ms] (mean, across all concurrent requests)
Transfer rate:          75.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2  162 407.8     40    4999
Processing:     5  256 979.8     45   29130
Waiting:        5  256 979.8     45   29130
Total:         32  418 1078.6     88   30477

Percentage of the requests served within a certain time (ms)
  50%     88
  66%     97
  75%    105
  80%    258
  90%   1064
  95%   2382
  98%   3412
  99%   4145
 100%  30477 (longest request)

Optimizing Raspberry Pi settings

Use raspi-config to change the following settings:

  1. Set the memory_split for the GPU to 16 (lowest value)
  2. Set overclocking mode to "Turbo" for fastest RAM/CPU settings
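
If you prefer to configure this by hand, the same choices can be made in /boot/config.txt. The values below are my recollection of the "Turbo" preset for the original Pi; verify them against what raspi-config writes on your board:

```ini
# /boot/config.txt -- equivalent of the raspi-config choices above
gpu_mem=16        # smallest GPU split, leaving the most RAM for Node.js
arm_freq=1000     # "Turbo" overclock preset
core_freq=500
sdram_freq=600
over_voltage=6
```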
Frederic
10

Web Servers

Node.js can be used as a web server replacement on the Pi, and you can create standalone or single-page web applications with ease.

But just for your information, in most real-world applications it is recommended to use servers like the modern nginx, the lightweight lighttpd, or the chunky but fully featured Apache, and then script Node.js to complement the site.

Obviously the possibilities are endless and everything depends on what you would like to achieve.

Raspberry Pi?

The Raspberry Pi can run any of those web servers. It can also run Node without any serious complications and is really fast without any complicated tweaking.

The Raspberry Pi is very capable, but it is best to set the memory split to give the GPU the least memory and leave the most RAM for the system. Forget about using an IDE and just do everything via SSH. If you really need some more juice, put a heat sink on the BCM chip and overclock it as far as you feel safe. Another option would be to use multiple Pis as a cluster to help with load balancing. You can start digging around here about clustering.

But do you really need to use node.js?

Node.js was intended to be used when you start to get (or anticipate) hundreds and thousands of requests that require small chunks of data to be stored in a DB, cached, or read back with minimal server overhead. You drive it using JS on the client, but Node.js itself is driven by C/C++. So what happens if you need a custom module or a specific change in the base code?

In an application that serves web pages, Node.js does not usually outperform Apache, for example, on single requests. The non-blocking nature of Node.js is great if you have thousands of requests per second for most of the day; this is where Apache would block and crash.
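
A tiny sketch of that non-blocking point: synchronous work stalls every pending callback on the event loop, while deferred work is interleaved once the call stack is free:

```javascript
// Demonstrates that synchronous work blocks the event loop:
// the deferred callbacks cannot run until the busy-wait finishes.
var order = [];

setImmediate(function () { order.push('deferred-A'); });
setImmediate(function () { order.push('deferred-B'); });

var until = Date.now() + 20;
while (Date.now() < until) {} // busy-wait ~20 ms, blocking the loop
order.push('sync');

setImmediate(function () {
  console.log(order.join(',')); // prints "sync,deferred-A,deferred-B"
});
```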

A real world example

eBay: during an auction there is a countdown over the last 30 seconds, with several people who used to refresh the page vigorously while increasing their bids. This is where Node.js shines, because today you need not refresh any more: the client-side JS sends Ajax requests to Node.js very often (every 300~600 ms) from all clients, providing a "real live auction" experience. eBay does not run solely off Node.js, but on very complex load-balanced server farms.

To build and install Node.js on the Pi*:

Obviously there is nothing wrong with using Node.js instead of the others, and how better to learn Node than on a neat little device like the Pi? You can compile the code yourself like this:

$ sudo apt-get install git-core build-essential libssl-dev
$ mkdir ~/nodeDL && cd ~/nodeDL
$ git clone https://github.com/joyent/node.git
$ cd node
$ git checkout v0.6.15   # the most recent stable version at time of writing

Update: later versions of Node (the current version is v0.8.18) can be built without the special steps below.

Next, we need to tell the compiler to use the armv6 architecture for the compilation:

$ export CCFLAGS='-march=armv6'
$ export CXXFLAGS='-march=armv6'
and then edit deps/v8/SConstruct around the line 82 mark, to add “-march=armv6”:
'all': {
   'CCFLAGS':      ['$DIALECTFLAGS', '$WARNINGFLAGS', '-march=armv6'],
   'CXXFLAGS':     ['-fno-rtti', '-fno-exceptions', '-march=armv6'],
 },

Then comment out lines starting around the 157 mark, to remove the vfp3 and simulator parts. Since this is a JSON-like object, remember to remove the comma on the CPPDEFINES line!

'armeabi:softfp' : {
   'CPPDEFINES' : ['USE_EABI_HARDFLOAT=0']
  # 'vfp3:on': {
  #   'CPPDEFINES' : ['CAN_USE_VFP_INSTRUCTIONS']
  # },
  # 'simulator:none': {
  #   'CCFLAGS':     ['-mfloat-abi=softfp'],
  # }
 },

Then the usual configure, make, make install process. NB: I had to manually specify the location of the OpenSSL libpath:

$ ./configure --openssl-libpath=/usr/lib/ssl
$ make             # compile node (this took 103 minutes!)
$ sudo make install

That's it; you should now have a working Node.js install!

$ node -v should show you the version number
$ npm -v should show you the version of the Node Package Manager

* References and original article

But as pointed out in other answers, you can simply download a pre-compiled binary that will just work.

Conclusions

A good piece of Pi isn't bad. You can run just about anything on the Pi; just don't expect production-level performance.

Piotr Kula
  • Well ... you are right in that you most probably want to combine Node.js with an additional "front-end" web server such as Nginx. But what do you mean by "you NEED a C/C++ developer"? As long as you do not want to work on Node.js core or write platform-dependent modules, you do not need C/C++ at all. JavaScript is enough for the common Node.js app developer. Where did I get you wrong? – Golo Roden Jan 23 '13 at 19:42
  • All I meant by that was that node.js is written in C/C++ - When I was doing research on node.js I came across many sites that demonstrated how to expand on the library. But that required pure knowledge in C/C++ - For most purposes you won't need to - But if you ever land up in a situation like that then node.js is the wrong solution. As it happened to be in my case. – Piotr Kula Jan 23 '13 at 19:48
  • My question **is Raspberry Pi-focused** --- How suited is Raspberry Pi to serve Node applications? If that is the case, how can I fine tune the RPi for best results? --- and **not Node-focused** --- How good or bad is Node? But thanks for your opinion; I'll edit the initial question to make it more clear. – Marius Butuc Jan 23 '13 at 20:16
  • Yea I answered question 1- The Pi can handle node.js plus a full LAMP stack too! How to fine tune it? That is open to discussion. Please be more specific what parameters you want to fine tune? I also expanded on what I feel can help with performance. – Piotr Kula Jan 23 '13 at 21:01
  • I will upvote if you merge your two answers into this one. – Jivings Jan 23 '13 at 21:05
  • I very much doubt that someone using the pi for a web server really cares about stacking node with another server. Saying that node "is not a web server replacement" is *just plain FALSE*. Node.js IS A WEB SERVER. Stacking it will just waste a bunch of RAM for absolutely zero gain. – goldilocks Jan 23 '13 at 21:26
  • Show me a commercial webapp that runs on node, and only node? I mean entirely node. – Piotr Kula Jan 23 '13 at 21:29
  • Show me a commercial webapp that runs on the Raspberry Pi and then we can talk logic. This is obviously *NOT* for some large scale commercial project. Why waste your time and resources on completely unnecessary things? – goldilocks Jan 23 '13 at 21:30
  • Exactly, as a hobbyist solution you can do what you like with it. Educationally enlightening, but by no means a turnkey solution. When learning about these things you need to know about the final applications too. It was just a remark to let the mind wander and think about this. – Piotr Kula Jan 23 '13 at 21:32
  • It has nothing to do with hobbyist vs. whatever-you-want-to-call-yourself. It has to do with *appropriate solutions to problems*. Someone could be developing a commercial application to run on the pi, using node, but (!) not for some high traffic purpose (since the pi is the wrong choice for that, period). In this case, node.js is a great choice and stacking it with nginx pointless. – goldilocks Jan 23 '13 at 21:35
  • But the sole purpose of the Pi and the charity is for educational purposes! A commercial company will go and make their own PCB, and if they want to use Node.js to turn a light on or off, good for them. If they cannot make their own PCB then they will employ people with Raspberry Pi experience who learn from sites like this and similar. But you just said it's not about commercial web apps 2 comments before, and in the recent comment you are talking about developing commercial apps?! I would not replace Apache2 with node only. Why? Ask a new question why and stop hogging the comments please :) – Piotr Kula Jan 23 '13 at 21:44
  • Just ran across this and thought of you: http://learn.adafruit.com/webide/overview a commercial app which runs on the pi using node (and only node) as a server, in exactly the context I mentioned. BTW, I am sure there are also full blown internet sites that use node this way, but obviously *there is no way to know that client side*. Your assessment of node.js, its use value and purposes, etc is *just way, way off base* and inaccurate, period. – goldilocks Feb 10 '13 at 15:20
1

Q: How suited is the Raspberry Pi to serve Node applications?

A: Very well suited. :) No doubt about it.

Q: If that is the case, how can one fine tune the Raspberry Pi for best results?

A: Don't! Focus on writing well-designed Node applications; optimizing your application's code is the way to go.

Always use a proxy server, for example nginx, for one reason: Node.js is still in its infancy (compared with Apache), so you can assume there are security issues still to be discovered.
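
For illustration, a minimal nginx reverse-proxy block in front of the Node app might look like this (the port and upstream address are illustrative; adjust to your setup):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:1337;   # the Node.js app behind nginx
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```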

Terradon
0

Just a big comment here for comparison against an old desktop computer (an AMD FX-8150 at 2.1 GHz), using the same code as Frederic's answer:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

A single core of the FX-8150 at 2.1 GHz can do this:

 latency: {
    average: 21.03,
    mean: 21.03,
    stddev: 5.72,
    min: 1,
    max: 215,
    p0_001: 0,
    p0_01: 0,
    p0_1: 1,
    p1: 17,
    p2_5: 18,
    p10: 18,
    p25: 19,
    p50: 20,
    p75: 21,
    p90: 23,
    p97_5: 31,
    p99: 44,
    p99_9: 85,
    p99_99: 196,
    p99_999: 203,
    totalCount: 224747
  },
  requests: {
    average: 11237.5,
    mean: 11237.5,
    stddev: 1436.76,
    min: 5529,
    max: 12300,
    total: 224747,
    p0_001: 5531,
    p0_01: 5531,
    p0_1: 5531,
    p1: 5531,
    p2_5: 5531,
    p10: 9871,
    p25: 11119,
    p50: 11735,
    p75: 11895,
    p90: 12015,
    p97_5: 12303,
    p99: 12303,
    p99_9: 12303,
    p99_99: 12303,
    p99_999: 12303,
    sent: 224989
  },

That is about 12k requests per second. Now with queueing:

var http = require('http');
let cmdQueue = [];

// Drain the queue roughly once per millisecond, batching responses.
function proc() {
  while (cmdQueue.length > 0) {
    let cmd = cmdQueue.pop();
    cmd.res.writeHead(200, {'Content-Type': 'text/plain'});
    cmd.res.end('Hello World\n');
  }
  setTimeout(proc, 1);
}
proc();

http.createServer(function (req, res) {
  setTimeout(function () {
    cmdQueue.push({res: res, req: req});
  }, 0);
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

benchmark:

  latency: {
    average: 19.19,
    mean: 19.19,
    stddev: 6.51,
    min: 2,
    max: 234,
    p0_001: 3,
    p0_01: 4,
    p0_1: 7,
    p1: 9,
    p2_5: 10,
    p10: 13,
    p25: 15,
    p50: 19,
    p75: 22,
    p90: 25,
    p97_5: 30,
    p99: 37,
    p99_9: 95,
    p99_99: 180,
    p99_999: 208,
    totalCount: 245669
  },
  requests: {
    average: 12283.5,
    mean: 12283.5,
    stddev: 1268.9,
    min: 7178,
    max: 13067,
    total: 245669,
    p0_001: 7179,
    p0_01: 7179,
    p0_1: 7179,
    p1: 7179,
    p2_5: 7179,
    p10: 11479,
    p25: 12199,
    p50: 12527,
    p75: 12935,
    p90: 13055,
    p97_5: 13071,
    p99: 13071,
    p99_9: 13071,
    p99_99: 13071,
    p99_999: 13071,
    sent: 245911
  },

13k requests per second.

I used the autocannon module for Node.js load testing:

const autocannon = require('autocannon')

autocannon({
  url: ["http://127.0.0.1:1337/"],
  connections: 242,
  pipelining: 1,
  duration: 20,
  workers: 22
}, console.log)

All 22 worker threads were running on the same host machine. So it is true that a modest CPU (the FX-8150) can handle tens of thousands of requests per second.

To compare with ApacheBench (same settings as Frederic's answer):

ab -r -n 10000 -c 100 http://127.0.0.1:1337/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            1337

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      100
Time taken for tests:   2.134 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1130000 bytes
HTML transferred:       120000 bytes
Requests per second:    4686.83 [#/sec] (mean)
Time per request:       21.336 [ms] (mean)
Time per request:       0.213 [ms] (mean, across all concurrent requests)
Transfer rate:          517.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.3      2       7
Processing:     7   19   6.2     17      53
Waiting:        5   14   4.1     13      38
Total:          9   21   6.5     19      53

Percentage of the requests served within a certain time (ms)
  50%     19
  66%     21
  75%     24
  80%     26
  90%     31
  95%     35
  98%     40
  99%     43
 100%     53 (longest request)

Let's increase asynchronicity:

var http = require('http');
let cmdQueue = [];

function proc() {
  if (cmdQueue.length > 0) {
    setTimeout(function () {
      while (cmdQueue.length > 0) {
        let cmd = cmdQueue.pop();
        cmd.res.writeHead(200, {'Content-Type': 'text/plain'});
        cmd.res.end('Hello World\n');
      }
      setTimeout(proc, 1);
    }, 1);
  } else {
    setTimeout(proc, 1);
  }
}
proc();

http.createServer(function (req, res) {
  setTimeout(function () {
    cmdQueue.push({res: res, req: req});
  }, 0);
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

result:

ab -r -n 10000 -c 100 http://127.0.0.1:1337/
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            1337

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      100
Time taken for tests:   1.806 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1130000 bytes
HTML transferred:       120000 bytes
Requests per second:    5537.69 [#/sec] (mean)
Time per request:       18.058 [ms] (mean)
Time per request:       0.181 [ms] (mean, across all concurrent requests)
Transfer rate:          611.09 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.4      2       9
Processing:     4   15   4.2     15      34
Waiting:        2   10   3.6      9      23
Total:          6   18   4.2     18      34

Percentage of the requests served within a certain time (ms)
  50%     18
  66%     20
  75%     21
  80%     22
  90%     24
  95%     25
  98%     27
  99%     29
 100%     34 (longest request)

And with the autocannon load tester:

  latency: {
    average: 20.78,
    mean: 20.78,
    stddev: 7.38,
    min: 1,
    max: 138,
    p0_001: 1,
    p0_01: 1,
    p0_1: 2,
    p1: 5,
    p2_5: 9,
    p10: 11,
    p25: 15,
    p50: 21,
    p75: 27,
    p90: 30,
    p97_5: 33,
    p99: 34,
    p99_9: 54,
    p99_99: 97,
    p99_999: 124,
    totalCount: 227311
  },
  requests: {
    average: 11366.4,
    mean: 11366.4,
    stddev: 538.05,
    min: 10818,
    max: 13429,
    total: 227311,
    p0_001: 10823,
    p0_01: 10823,
    p0_1: 10823,
    p1: 10823,
    p2_5: 10823,
    p10: 10863,
    p25: 10959,
    p50: 11375,
    p75: 11415,
    p90: 11495,
    p97_5: 13431,
    p99: 13431,
    p99_9: 13431,
    p99_99: 13431,
    p99_999: 13431,
    sent: 227553
  },

A bit more requests per second, but only half the worst-case latency, which is better for the client experience.