Author Archives: SaltwaterC

Inlining the PEM encoded files in node.js

Multi-line strings in JavaScript are a bitch, at least until ES6. The canonical example for a node.js HTTPS server is:

// curl -k https://localhost:8000/
var https = require('https');
var fs = require('fs');
 
var options = {
  key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
  cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
};
 
https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end("hello world\n");
}).listen(8000);

All fine and dandy, as the sync operation doesn’t penalize the event loop: it is part of the server startup cost. However, jslint yells about using sync operations. As the code is part of the boilerplate for testing http-get, refactoring didn’t make enough sense. Making jslint STFU is usually the last option. The content of the files never changes, therefore it doesn’t make any sense to read them from disk either. Inlining is the obvious option.

I couldn’t find any online tool to play with, therefore I fired up a PHP REPL and used my PCRE-fu to solve this one. The solution doesn’t look pretty, but it gets the job done:

php > var_dump(preg_replace('/\n/', '\n\\' . "\n", file_get_contents('server.key')));
string(932) "-----BEGIN RSA PRIVATE KEY-----\n\
MIICXAIBAAKBgQCvZg+myk7tW/BLin070Sy23xysNS/e9e5W+fYLmjYe1WW9BEWQ\n\
iDp2V7dpkGfNIuYFTLjwOdNQwEaiqbu5C1/4zk21BreIZY6SiyX8aB3kyDKlAA9w\n\
PvUYgoAD/HlEg9J3A2GHiL/z//xAwNmAs0vVr7k841SesMOlbZSe69DazwIDAQAB\n\
AoGAG+HLhyYN2emNj1Sah9G+m+tnsXBbBcRueOEPXdTL2abun1d4f3tIX9udymgs\n\
OA3eJuWFWJq4ntOR5vW4Y7gNL0p2k3oxdB+DWfwQAaUoV5tb9UQy6n7Q/+sJeTuM\n\
J8EGqkr4kEq+DAt2KzWry9V6MABpkedAOBW/9Yco3ilWLnECQQDlgbC5CM2hv8eG\n\
P0xJXb1tgEg//7hlIo9kx0sdkko1E4/1QEHe6VWMhfyDXsfb+b71aw0wL7bbiEEl\n\
RO994t/NAkEAw6Vjxk/4BpwWRo9c/HJ8Fr0os3nB7qwvFIvYckGSCl+sxv69pSlD\n\
P6g7M4b4swBfTR06vMYSGVjMcaIR9icxCwJAI6c7EfOpJjiJwXQx4K/cTpeAIdkT\n\
BzsQNaK0K5rfRlGMqpfZ48wxywvBh5MAz06D+NIxkUvIR2BqZmTII7FL/QJBAJ+w\n\
OwP++b7LYBMvqQIUn9wfgT0cwIIC4Fqw2nZHtt/ov6mc+0X3rAAlXEzuecgBIchb\n\
dznloZg2toh5dJep3YkCQAIY4EYUA1QRD8KWRJ2tz0LKb2BUriArTf1fglWBjv2z\n\
wdkSgf5QYY1Wz8M14rqgajU5fySN7nRDFz/wFRskcgY=\n\
-----END RSA PRIVATE KEY-----\n\
"
php > var_dump(preg_replace('/\n/', '\n\\' . "\n", file_get_contents('server.cert')));
string(892) "-----BEGIN CERTIFICATE-----\n\
MIICRTCCAa4CCQDTefadG9Mw0TANBgkqhkiG9w0BAQUFADBmMQswCQYDVQQGEwJS\n\
TzEOMAwGA1UECBMFU2liaXUxDjAMBgNVBAcTBVNpYml1MSEwHwYDVQQKExhJbnRl\n\
cm5ldCBXaWRnaXRzIFB0eSBMdGQxFDASBgNVBAMTC1N0ZWZhbiBSdXN1MCAXDTEx\n\
MDgwMTE0MjU0N1oYDzIxMTEwNzA4MTQyNTQ3WjBmMQswCQYDVQQGEwJSTzEOMAwG\n\
A1UECBMFU2liaXUxDjAMBgNVBAcTBVNpYml1MSEwHwYDVQQKExhJbnRlcm5ldCBX\n\
aWRnaXRzIFB0eSBMdGQxFDASBgNVBAMTC1N0ZWZhbiBSdXN1MIGfMA0GCSqGSIb3\n\
DQEBAQUAA4GNADCBiQKBgQCvZg+myk7tW/BLin070Sy23xysNS/e9e5W+fYLmjYe\n\
1WW9BEWQiDp2V7dpkGfNIuYFTLjwOdNQwEaiqbu5C1/4zk21BreIZY6SiyX8aB3k\n\
yDKlAA9wPvUYgoAD/HlEg9J3A2GHiL/z//xAwNmAs0vVr7k841SesMOlbZSe69Da\n\
zwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBACgdP59N5IvN3yCD7atszTBoeOoK5rEz\n\
5+X8hhcO+H1sEY2bTZK9SP8ctyuHD0Ft8X0vRO7tdt8Tmo6UFD6ysa/q3l0VVMVY\n\
abnKQzWbLt+MHkfPrEJmQfSe2XntEKgUJWrhRCwPomFkXb4LciLjjgYWQSI2G0ez\n\
BfxB907vgNqP\n\
-----END CERTIFICATE-----\n\
"
php >

This gave me usable multi-line strings that don’t break the PEM encoding.
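For illustration, the escaped output drops straight into the options object as a line-continuation string. The key material below is heavily truncated; the full strings are the REPL output above:

var options = {
  key: "-----BEGIN RSA PRIVATE KEY-----\n\
MIICXAIBAAKBgQCvZg+myk7tW/BLin070Sy23xysNS/e9e5W+fYLmjYe1WW9BEWQ\n\
[... the remaining key lines, exactly as produced above ...]\n\
-----END RSA PRIVATE KEY-----\n",
  cert: "-----BEGIN CERTIFICATE-----\n\
[... the certificate lines, exactly as produced above ...]\n\
-----END CERTIFICATE-----\n"
};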

Update: a shell one-liner with Perl

cat certificate.pem | perl -p -e 's/\n/\\n\\\n/'
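For completeness, the same transform can be done with node.js itself; a minimal sketch, assuming a hypothetical inline.js helper:

// inline.js (hypothetical): print a PEM file as a JavaScript line-continuation string
var fs = require('fs');
var pem = fs.readFileSync(process.argv[2], 'utf8');
// replace every newline with the literal characters \n\ followed by a real newline
console.log(pem.replace(/\n/g, '\\n\\\n'));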

Doing what Dropbox is doing and doing it wrong

Let’s take a couple of examples. I switched from an older machine recently, therefore I need to set up all my stuff. As I don’t like to depend on a single service, for redundancy’s sake I also keep a backup for Dropbox.

SpiderOak – backs up stuff, uses client-side encryption, has optional sync between your machines. So far, so good. In the latest OS X client, at least, the possibility to paste the password is missing. Thanks, I’ll use my password manager instead, with services that don’t do such a braindead thing. Seriously, there’s a thing that improves the security of password authentication. It is called two-factor authentication. Dropbox has it. Google has it. In fact, any decent service has it. Disabling the possibility to paste the password, not so much.

Google Drive – you wouldn’t think I’m letting Google off the hook this time. As I don’t trust these sync services with my data, I always do client-side encryption. Dropbox doesn’t choke on it, SpiderOak doesn’t choke on it. Google Drive must be a special kind of breed, as it chokes on my encrypted files with “Upload Error – An unknown issue has occurred”. Gee, let me fix the error message for you: “your piece of shit encrypted files aren’t of any use to us, there’s no personal info there”. Was it that difficult? Thanks, but the market is full of alternatives. Seriously Google, you could do better than this “not being evil” thing.

Async frameworks “Hello World” showdown

This is not intended to be a proper comparison between these frameworks. However, since the “Hello World” test is the lowest common denominator, it is a pretty clear indicator that an application can’t exceed these numbers in performance. Also, what Guillermo did not understand from my comment is the fact that 1000 requests at a concurrency of 10 is way too few to get a proper picture of a “Hello World” showdown.
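For reference, the node.js flavour of such a server is nothing fancier than this; a sketch, while the exact sources for every framework are in the gist linked below:

// minimal node.js "Hello World" HTTP server of the kind being benchmarked
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World!\n');
}).listen(8124, '127.0.0.1'); // the port number is arbitrary here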

Tested frameworks:

  • node.js – v0.6.17
  • vert.x – v1.0 final + OpenJDK 7 installed from the Ubuntu repository – using the JavaScript bindings
  • luanode – built from the master branch using the Ubuntu provided lua dependencies
  • luvit – built from the master branch
  • react – cloned the master branch

I also wanted to test node.native, but it kept crashing on me. You can see that it is a pretty old issue. I didn’t have the patience to make the v0.1.0 branch work with the previously used code. But I’d like to give it a run for its money.

The system used for the testing is a modest Athlon II X2 240e (2.8GHz) with 4GB of DDR2-800, running the latest Kubuntu 12.04 LTS amd64. Since ab pretty much takes a CPU core for itself, the frameworks ran as a single process that occupied a single CPU core. I tried running a node.js HTTP server wrapped with the cluster module, as well as passing -instances 2 to the vertx framework, but the results were pretty much the same, therefore using just a single CPU core is a fair comparison.
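The cluster wrapping mentioned above is roughly the standard node 0.6 recipe; a sketch, not the exact test code:

// fork one worker per CPU core; each worker runs the same Hello World server
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World!\n');
  }).listen(8124);
}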

The ab command that I used to hammer the Hello World! output:

ab -r -k -n 1000000 -c 1000 http://127.0.0.1:{port_name}/

The command was run at least a couple of times before saving the results, just to make sure that everything was properly warmed up.

The averages graph:

The test sources and the full ab output are available in this gist. There’s interesting output in the results.txt file for the stats nerds.

PS: I have the impression (but did not test) that vert.x may be a little bit faster, but ab is the actual bottleneck.

Update: added React (node.php) to the comparison. Too lazy to plot another graph, but at 1573.40 req/s it is hardly a match even for luanode. Used PHP 5.3.10 from the Ubuntu repositories.

Update: added another React (node.php) run to the comparison, this time with a custom build of PHP 5.4.3. It managed to get 3727.49 req/s.

Poor man’s tail recursion in node.js

If you find yourself in the situation of doing recursion over a large-enough input in node.js, you may encounter this:

node.js:201
        throw e; // process.nextTick error, or 'error' event on first tick
              ^
RangeError: Maximum call stack size exceeded

Oops, I smashed the stack. You may reproduce it with something like this:

var foo = []
 
for (var i = 0; i < 1000000; i++) {
    foo.push(i)
}
 
var recur = function (bar) {
    if (bar.length > 0) {
        var baz = bar.pop()
        // do something with baz
        recur(bar)
    } else {
        // end of recursion, do your stuff
    }
}
 
recur(foo)

“Thanks, that’s very thoughtful. But you’re not helping.” Bear with me. The solution is the obvious tail call elimination. But JavaScript doesn’t have that optimization.

However, you may wrap the tail call in order to run the above recur() function on a fresh stack. The proper recur() implementation is:

var recur = function (bar) {
    if (bar.length > 0) {
        var baz = bar.pop()
        // do something with baz
        process.nextTick(function () {
            recur(bar)
        })
    } else {
        // end of recursion, do your stuff
    }
}

Warning: please read this carefully. I gave you the solution for recurring over such a large input, but the performance is poor. Using process.nextTick (or a timer function such as setTimeout for that matter, which is even slower) is an expensive operation. I didn’t test where the actual bottleneck is (epoll itself under Linux, libuv | libev, etc).

time node recur.js
node recur.js  1.36s user 0.28s system 101% cpu 1.610 total

The cost of this method is high, therefore don’t attempt this in a web application: it kills the event loop. For instance, I don’t use node for writing web applications. It is a difficult task, while the cost of the event loop itself isn’t as negligible as you may think. It is useful as long as the CPU time is negligible compared to the time spent doing IO. Therefore, please don’t include me in the group of people that think of node as the hammer for all the problems you throw at it.

If you’re wondering why I don’t simply iterate over the object, the answer is simple: because that “do something with baz” involves some async IO that would kill the second data provider. Sequential calls ensure that everybody in the architecture stays happy. Besides, I don’t actually use bar.pop(), but something like bar.splice(0, 5000) for packing more data into fewer remote calls and fewer events. bar.shift() in a situation like this is as slow as molasses in January. In an async framework, the order of the items from a TODO list is not relevant, therefore use the fastest way.
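For clarity, the batched variant I’m describing is roughly this sketch, with 5000 being just the chunk size mentioned above:

// rough sketch of the batched variant: grab a chunk per tick instead of one item
var recur = function (bar) {
    if (bar.length > 0) {
        var chunk = bar.splice(0, 5000)
        // do the async IO for the whole chunk, then schedule the next batch
        process.nextTick(function () {
            recur(bar)
        })
    } else {
        // end of recursion, do your stuff
    }
}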

If you’re still wondering why I use a solution like this: the above technique is part of the application’s start-up cost. The application fetches all the required data into RAM. Having the application kill the event loop for 20-30 seconds before hitting the Internet pipe is negligible for a process that runs for hours or days. Only after the application hits the Internet can I say that node is in use for the stuff where it shines. I know, before this, I listed all the wrong reasons for using node as a tool.

Computing file hashes with node.js

Since node.js has the shiny crypto module which binds some stuff to the openssl library, people might be tempted to compute file hashes with node.js. The crypto manual page even shows how to compute the SHA1 of a given file (mimicking sha1sum). Should people do this? The answer is: NO. Some may say because it blocks the event loop. I say: because it is as slow as molasses in January, at least compared to dedicated tools.

Let’s have a look:

var filename = process.argv[2];
var crypto = require('crypto');
var fs = require('fs');
 
var shasum = crypto.createHash('sha256');
 
var s = fs.ReadStream(filename);
s.on('data', function(d) {
  shasum.update(d);
});
 
s.on('end', function() {
  var d = shasum.digest('hex');
  console.log(d + '  ' + filename);
});

time node hash.js ubuntu-10.04.3-desktop-i386.iso
208fb66dddda345aa264f7c85d011d6aeaa5588075eea6eee645fd5307ef3cac ubuntu-10.04.3-desktop-i386.iso
node hash.js ubuntu-10.04.3-desktop-i386.iso 28.92s user 0.80s system 100% cpu 29.661 total

time sha256sum ubuntu-10.04.3-desktop-i386.iso
208fb66dddda345aa264f7c85d011d6aeaa5588075eea6eee645fd5307ef3cac ubuntu-10.04.3-desktop-i386.iso
sha256sum ubuntu-10.04.3-desktop-i386.iso 4.86s user 0.21s system 99% cpu 5.093 total

time openssl dgst -sha256 ubuntu-10.04.3-desktop-i386.iso
SHA256(ubuntu-10.04.3-desktop-i386.iso)= 208fb66dddda345aa264f7c85d011d6aeaa5588075eea6eee645fd5307ef3cac
openssl dgst -sha256 ubuntu-10.04.3-desktop-i386.iso 4.40s user 0.17s system 100% cpu 4.567 total

Edit: to sum up for those with little patience:

node hash.js – 29.661s
sha256sum – 5.093s
openssl dgst -sha256 – 4.567s

/Edit

That’s a ~6.5X speed boost just by invoking openssl instead of binding to its library. node.js does something terribly wrong somewhere, since the file I/O is not to blame for the slowness:

var filename = process.argv[2];
var fs = require('fs');
 
var s = fs.ReadStream(filename);
s.on('data', function(d) {
 
});
 
s.on('end', function() {
 
});

time node read.js ubuntu-10.04.3-desktop-i386.iso
node read.js ubuntu-10.04.3-desktop-i386.iso 0.62s user 0.60s system 106% cpu 1.148 total

This little example that I hacked together shows that using child_process.exec is pretty fine:

var exec = require('child_process').exec;
exec('/usr/bin/env openssl dgst -sha256 ' + process.argv[2], function (err, stdout, stderr) {
	if (err) {
		process.stderr.write(err.message);
	} else {
		console.log(stdout.substr(-65, 64));
	}
});

time node hash2.js ubuntu-10.04.3-desktop-i386.iso
208fb66dddda345aa264f7c85d011d6aeaa5588075eea6eee645fd5307ef3cac
node hash2.js ubuntu-10.04.3-desktop-i386.iso 4.44s user 0.19s system 100% cpu 4.630 total

So you can have your cake and eat it too. The guys with the philosophy got this one right.
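One caveat with exec: the file name goes through a shell, so spaces or shell metacharacters in it would break the command. A minimal sketch of the same idea with child_process.execFile, which invokes openssl directly:

// same idea with execFile: arguments are passed directly, no shell involved
var execFile = require('child_process').execFile;

execFile('openssl', ['dgst', '-sha256', process.argv[2]], function (err, stdout, stderr) {
	if (err) {
		process.stderr.write(err.message);
	} else {
		// openssl prints "SHA256(file)= <64 hex chars>\n"
		console.log(stdout.substr(-65, 64));
	}
});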