Category Archives: Web

Getting HTTPS certificate information into the shell

Due to the Heartbleed SNAFU, I needed a quick way to get the information from a certificate deployed on a remote machine. Since I rarely leave the comfort of my terminal, I simply dumped a new function into the shell’s ~/.*rc file.

Here it is:
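Something along these lines, assuming the openssl CLI is available (the -servername flag is only there so that SNI hosts return the proper certificate):

    get_cert() {
        local host="$1"
        local port="${2:-443}"  # default to 443 when no second argument is given
        echo | openssl s_client -connect "${host}:${port}" -servername "${host}" 2>/dev/null \
            | openssl x509 -noout -text
    }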

Defaults to port 443 if the second argument is unspecified. Example:

get_cert google.com
 
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            59:fa:65:0e:26:a1:67:3d
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, O=Google Inc, CN=Google Internet Authority G2
        Validity
            Not Before: May  7 12:15:37 2014 GMT
            Not After : Aug  5 00:00:00 2014 GMT
        Subject: C=US, ST=California, L=Mountain View, O=Google Inc, CN=*.google.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (2048 bit)
                Modulus (2048 bit):
                    00:c2:9a:87:d1:79:0a:10:28:64:f3:d7:12:48:93:
                    13:24:c9:05:9e:1b:94:0d:b1:d6:02:54:27:e2:a4:
                    87:45:ab:f8:17:19:db:0d:b0:a9:80:34:a1:2a:5e:
                    98:a7:85:a6:66:2b:69:5c:85:16:fd:43:9f:6f:40:
                    f2:36:d8:47:4f:16:cd:ef:f4:67:75:c2:07:89:fa:
                    37:c8:c3:08:37:0b:ec:e3:61:48:86:86:bd:7b:5d:
                    cc:10:96:9d:be:07:e1:c3:e2:c1:23:04:fa:a6:93:
                    99:b5:42:7c:55:ab:91:6b:8a:d4:bf:8d:23:df:9d:
                    4d:96:a7:31:e0:f7:04:39:db:66:d3:d5:64:36:1f:
                    ef:71:af:df:0b:86:d9:6a:fc:12:c4:8c:94:fe:91:
                    6f:d4:6c:c0:f8:ec:68:b0:7e:fd:71:42:43:42:34:
                    ad:a2:fb:3e:12:98:68:d1:b4:23:e6:7d:8a:75:9c:
                    c1:82:bb:95:55:28:15:50:9e:d0:49:21:b7:6f:ce:
                    44:c1:3d:9a:e3:05:28:96:0c:53:44:68:6b:63:b4:
                    f1:4a:79:3d:09:7d:f0:96:e6:78:95:36:aa:89:40:
                    d8:9c:60:f9:b1:1c:23:d5:6c:b7:2d:f4:e4:00:ff:
                    7b:9b:f2:02:43:c1:d2:e1:95:2c:a1:41:d4:88:72:
                    0b:69
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Alternative Name:
                DNS:*.google.com, DNS:*.android.com, DNS:*.appengine.google.com, DNS:*.cloud.google.com, DNS:*.google-analytics.com, DNS:*.google.ca, DNS:*.google.cl, DNS:*.google.co.in, DNS:*.google.co.jp, DNS:*.google.co.uk, DNS:*.google.com.ar, DNS:*.google.com.au, DNS:*.google.com.br, DNS:*.google.com.co, DNS:*.google.com.mx, DNS:*.google.com.tr, DNS:*.google.com.vn, DNS:*.google.de, DNS:*.google.es, DNS:*.google.fr, DNS:*.google.hu, DNS:*.google.it, DNS:*.google.nl, DNS:*.google.pl, DNS:*.google.pt, DNS:*.googleapis.cn, DNS:*.googlecommerce.com, DNS:*.googlevideo.com, DNS:*.gstatic.com, DNS:*.gvt1.com, DNS:*.urchin.com, DNS:*.url.google.com, DNS:*.youtube-nocookie.com, DNS:*.youtube.com, DNS:*.youtubeeducation.com, DNS:*.ytimg.com, DNS:android.com, DNS:g.co, DNS:goo.gl, DNS:google-analytics.com, DNS:google.com, DNS:googlecommerce.com, DNS:urchin.com, DNS:youtu.be, DNS:youtube.com, DNS:youtubeeducation.com
            Authority Information Access:
                CA Issuers - URI:http://pki.google.com/GIAG2.crt
                OCSP - URI:http://clients1.google.com/ocsp
 
            X509v3 Subject Key Identifier:
                2B:56:D4:98:8E:81:28:99:CD:17:89:09:21:EB:3B:8B:EF:7E:19:A0
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                keyid:4A:DD:06:16:1B:BC:F6:68:B5:76:F5:81:B6:BB:62:1A:BA:5A:81:2F
 
            X509v3 Certificate Policies:
                Policy: 1.3.6.1.4.1.11129.2.5.1
 
            X509v3 CRL Distribution Points:
                URI:http://pki.google.com/GIAG2.crl
 
    Signature Algorithm: sha1WithRSAEncryption
        27:35:81:4e:df:79:e9:c7:9c:c1:5b:9c:35:4e:67:00:de:38:
        cb:a0:2f:58:91:61:11:a3:cf:ae:49:63:84:76:74:20:43:35:
        7c:e7:82:3e:7f:43:c8:94:71:9d:33:72:cc:3c:3e:0f:97:00:
        ef:08:65:7c:cc:e3:32:ca:16:b4:fb:73:7c:43:b2:eb:47:2d:
        3b:b9:b2:c8:4b:1a:ca:77:d0:65:55:fc:1c:76:7e:6d:86:d9:
        2f:e5:7b:63:5c:5f:77:9e:75:39:4f:36:0a:c2:8a:35:dc:f7:
        02:cb:03:c6:17:bb:2f:03:a1:de:12:c9:ab:03:ce:6d:17:0e:
        cb:78:24:e4:36:13:0e:e7:12:e5:e2:84:42:cc:d3:aa:32:b7:
        78:07:ae:24:23:69:be:90:9c:d3:38:51:05:5a:69:05:be:e3:
        8d:7b:ae:2b:37:c1:35:3a:b5:51:1f:46:fe:10:a7:ce:af:d2:
        b6:a5:8f:13:a5:57:03:63:25:0e:bc:6e:c7:e5:7b:22:4e:ff:
        67:9d:15:30:93:21:c1:08:03:6f:ab:5a:33:d8:41:c2:2a:8e:
        5a:a9:67:26:6e:6a:c1:46:8e:50:e7:4e:c7:51:66:eb:0b:ef:
        9d:c9:6a:d2:7f:a9:25:89:c2:28:aa:e5:fd:e4:74:9b:95:32:
        5d:15:ed:d0

Use the cache, Luke, Part 2: don’t put all your eggs into the memcached buck … basket

This is the second part of a series called Use the cache, Luke. If you missed the first part, here it is: From memcached to Membase memcached buckets. Meanwhile, the AWS ElastiCache service proved to have better network latency than our own rolled-out Membase setup, so the migration was easily done by simply switching the memcached config. No vendor lock-in.

However, it took me a while to write this second part.

[Embedded video]

Please have a look at the above video. Besides the general common-sense guidelines about how to scale your stuff, and the typical Postgres advice, there’s a general rule: cache, cache, and then cache some more.

However, too much caching in memcache (whatever the implementation) may kill the application at some point. The application may not be database dependent, but it becomes cache dependent, and anything that affects the cache can hit your database like a sledgehammer. Of course, you can always scale that DB instance vertically, or scale horizontally by adding read-only replicas, but the not-so-fun part is that it costs a lot just to keep resources provisioned in order to survive a cache failure.

The second option is to have a short-lived failover cache on the application server: something like five minutes, while the distributed cache in memcache may last for hours. That is enough to keep the database from being hit by live traffic, while you don’t have to provision a really large database instance. Of course, it won’t work for stuff that needs to be “real time”, but it works for data that doesn’t change with each request.

There are a lot of options for a failover cache since there’s no distributed setup to think about. It may be a local memcached daemon, something like PHP’s APC API, or, the fastest option: file-based caching. Now you may think I’m insane, but memcached still has the IPC penalty, especially over TCP, while if you’re a PHP user, APC doesn’t perform as expected.

I say file-based caching, not disk-based caching, as the kernel does a pretty good job at “eating your RAM” with its disk caching. It takes more effort to implement, since the cache management logic has to live in the application itself (you don’t get LRU, expiration, etc. by default), but for failover purposes it is good enough to be worth the effort. In fact, it ran for a few days on the failover cache without any measurable impact.

The next part of not putting all your eggs in the same basket is: cache everywhere you can. For example, by using the nginx FastCGI cache, we could shave 40% off our CPU load. Nothing experimental about this last part; it has been in production for the last 18 months. If you get it right, it can be a really valuable addition to a web stack. However, a lot of testing is required before pushing the changes to production, as we hit a lot of weird bugs in edge cases. The rule of thumb is: if you get the cache key right, then most of the issues are gone before going live.
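For reference, a bare-bones sketch of the nginx side; the zone name, cache path, socket path and lifetimes below are placeholders rather than our production values:

    # http context: where the cache lives and how big the key zone is
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=appcache:64m inactive=60m;

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;

            fastcgi_cache appcache;
            # the cache key is where most of the edge cases hide
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 5m;
        }
    }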

In fact, by adding the cache control headers from the application itself, we could push relatively short-lived pages to the CDN edges, shaving off a lot of latency for repeated requests as there’s no round trip from the hosting data center to the CDN edge. Yes, it’s the latency, stupid. The dynamic acceleration that CDNs provide is nice; leveraging the HTTP caching capabilities is nicer. Having the application in a data center closer to the client is desirable, but unless your target market is too distributed to be served from a bunch of machines in the same geo location, it doesn’t make any sense to deploy into a new data center, which adds its fair share of complexity when scaling the data layer.
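A quick way to check what the edge actually serves is to look at the response headers; the URL below is a placeholder and the exact edge headers (Age, X-Cache and friends) vary from CDN to CDN:

    curl -sI https://www.example.com/some-page | grep -iE '^(cache-control|expires|age|x-cache)'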

Use the cache, Luke, Part 1: from memcached to Membase memcached buckets

I start with a quote:

Matt Ingenthron said internally at Membase Inc they view Memcached as a rabbit. It is fast, but it is pretty dumb and procreates quickly. Before you know it, it will be running wild all over your system.

But this post isn’t about switching from a volatile cache to a persistent solution. It is about removing the dumb part from the memcached setup.

We started with memcached as this is the first step. The setup had its quirks since AWS EC2 doesn’t provide a fixed addressing method by default, while the memcached client for PHP still has issues with timeouts. Therefore, the fallback was the plain memcache client.

The fixed addressing issue was resolved by deploying Elastic IPs with a little trick for the internal network, as explained by Eric Hammond. This might be unfeasible for large enough deployments, but it wasn’t our case. Amazon has introduced ElastiCache since then, which removes this limitation, but having a bunch of t1.micros with reservation is still way cheaper. Which makes me wonder why they won’t introduce machine addresses that internally resolve to the internal address. They have this technology for a lot of their services, but it is simply unavailable for plain EC2 instances.

Back to the memcached issues. A Membase cluster that provides a memcached bucket is a nice drop-in replacement, if you lower your memory allocation a little bit: Membase still has some overhead over memcached as its services tend to occupy more RAM. The great thing is that the cluster requires fewer machines with fixed addressing. We use a couple for high-availability reasons, but this is not the rule; the rest have the EC2-provided dynamic addresses. If a machine happens to go down, another one can take its place.

But there still is the client issue. memcached for PHP is dumb. memcache for PHP is even dumber. Neither can actually speak the Membase goodies. This is where Moxi (Memcached Proxy) kicks in. For memcached buckets, Moxi can discover machines newly added to the Membase cluster without any client configuration, and without any Moxi server configuration, as the config is streamed to the servers via the machines that have the fixed addresses. With plain memcached, every time there was a change we needed to redeploy the application, and the memcached cluster was basically nullified till it was refilled. That doesn’t happen with Moxi + Membase. Since there is no “smart client” for PHP which includes the Moxi logic, we use client-side Moxi in order to reduce the network round trips. There still is local communication over the loopback interface, but the latency is far smaller than doing server-side Moxi. Basically, the memcache for PHP client connects to 127.0.0.1:11211, aka where Moxi lives, then the request hits the appropriate Membase server that holds our cached data. Moxi also uses the binary protocol and SASL authentication, which are unsupported by the memcache for PHP client.

The last of the goodies about the Membase cluster: it actually has an interface. I may not be a UI fan, I live most of my time in /bin/bash, but I am a stats junkie. The Membase web console can give you realtime info about how the cluster is doing. With plain memcached you’re left in the dust, wrapping up your own interface or calling stats over plain TCP. Which is wrong on so many levels.
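For the record, “calling stats over plain TCP” amounts to something like this, assuming nc is installed and a memcached-compatible daemon listens on the default port:

    printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211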

PS: v2.0 will be called Couchbase for political reasons. But currently the stable release is still called Membase.

Why I sometimes hate RFCs

Every time there’s a debate about the format of something that floats around the Internets, people go to the RFCs in order to figure out who’s right and who’s wrong. Which may be a great thing in theory. In practice, the rocket scientists that wrote those papers might squeeze a lot of confusion into a single page of text, as the G-WAN manual states.

Today’s case was a debate about the Expires header timestamps as defined by the HTTP/1.1 specs (RFC 2616). If you read section 14.21 regarding the Expires header, you can see the following statement:

The format is an absolute date and time as defined by HTTP-date in section 3.3.1; it MUST be in RFC 1123 date format:

Expires = “Expires” “:” HTTP-date

I made a newb mistake in thinking that RFC 1123 dates are legal Expires timestamps. Actually, by proofreading section 3.3.1 of RFC 2616 you may deduce the following: the dates used by the HTTP/1.1 protocol are not simply dates in the RFC 1123 format; the actual format is a subset of RFC 1123. The debate started around the GMT specification, which in HTTP/1.1 contexts actually means UTC, but must be spelled GMT anyway. Even more, +0000, which is a valid timezone specifier as defined by RFC 1123, is not valid for Expires timestamps. Some caches accept +0000 as a valid timezone specifier for HTTP timestamps, but some of them don’t.
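To make the distinction concrete, GNU date can emit both flavours; only the first one is a valid HTTP-date for Expires:

    # rfc1123-date as required by HTTP/1.1: always GMT
    date -u '+%a, %d %b %Y %H:%M:%S GMT'
    # e.g. Wed, 07 May 2014 12:15:37 GMT

    # numeric timezone: legal per RFC 1123, but not for Expires
    date -u -R
    # e.g. Wed, 07 May 2014 12:15:37 +0000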

It isn’t that the RFCs are broken per se, but the language they use can be very confusing sometimes.

nginx + PHP-FPM for highly loaded virtual private servers

VPSes that use OS-level virtualization have some challenges that you won’t find in setups that use full OS virtualization or paravirtualization. Stuff like OpenVZ doesn’t provide a method for tuning OS parameters, which may impair certain setups such as the one mentioned in the article title. All the stuff mentioned in my previous article applies here as well, except the sysctl tuning part; it would be redundant to mention those tips again.

However, the thing that applies differently is the fact that the maximum connection backlog on an OpenVZ VPS is typically 128, therefore you cannot increase the listen.backlog parameter of PHP-FPM over that value; the kernel denies you that. Even more, due to the low-memory setups of a lot of VPSes out there in the wild, you may be forced to use a low value for pm.max_children, which translates into a shorter process respawn cycle. A process that is respawning can’t handle incoming connections, therefore the heavy lifting is done by the kernel backlog. Because of this, the concurrency may not increase linearly with the number of available processes in the FPM pool.
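Two quick checks on the guest make the limit visible:

    # the hard cap on the listen backlog; typically 128 on OpenVZ guests
    sysctl net.core.somaxconn
    # for TCP listeners, the Send-Q column of ss shows the backlog each socket was actually granted
    ss -ltn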

Since the backlog is kept by the kernel, it is common to all services, therefore it may affect nginx as well if it cannot accept() a connection for some reason. Load balancing over UNIX sockets doesn’t increase the possible concurrency level; it simply adds useless complexity. Using TCP sockets may increase the reliability of the setup, at the cost of added latency and used-up system resources, but it would still fail in the end. Some measures need to be taken.

I gave some thought to using HAProxy as a connection manager, since HAProxy has a nice maxconn feature that passes connections to an upstream only up to a defined number. But that would add another layer of complexity. It has its benefits, but it also means another point of failure. Having more services process the same request pipeline is clearly suboptimal if the services composing the pipeline don’t add some clear value to the setup, the way a proxy cache does.

Then I thought about node.js, but implementing FastCGI on top of node seems a hackish solution at best. One of the Orchestra guys did it, but I wouldn’t go into production with a node.js HTTP server for serving static objects and FastCGI responses, no matter how “cool” this may sound. Then the revelation hit me: Ryan Dahl, the author of node.js, wrote an nginx module that adds the maxconn feature to nginx: nginx-ey-balancer. Hopefully, someday it will make it into upstream nginx.

The module adds some boilerplate to the configuration, since it requires a fastcgi_pass to an upstream instead of a direct fastcgi_pass to a UDS path, but other than that, it works as advertised. Although the module isn’t actively maintained, or at least this is how things look from the outside, the v0.8.32 patch works even with nginx v1.0.2. Having nginx act as the connection manager, instead of sending all the connections straight to the FPM upstream, has clear benefits from the concurrency point of view. It is recommended to set max_connections to the value of net.core.somaxconn; that guarantees that no connection gets dropped because the FPM pool processes are respawning due to a short cycle policy.
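The boilerplate in question looks roughly like this (a sketch; the socket path is a placeholder and 128 stands in for the typical net.core.somaxconn value):

    # nginx-ey-balancer wants a named upstream; max_connections caps the number of
    # connections nginx opens to PHP-FPM, the surplus waits inside nginx
    upstream php_fpm {
        server unix:/var/run/php-fpm.sock;
        max_connections 128;
    }

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass php_fpm;
        }
    }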

By using this module, nginx could easily handle around 4000 concurrent connections with a 4 worker process setup, but increasing the number of workers does not increase the possible concurrency linearly. Anyway, at that concurrency level, the issues caused by the IPC between nginx and PHP-FPM would most probably be your last concern. This setup simply removes an IPC limit which is ridiculously small most of the time.