Category Archives: Web

Getting HTTPS certificate information into the shell

Due to the Heartbleed SNAFU, I needed a quick solution for getting the information out of a certificate deployed on a remote machine. As I rarely leave the comfort of my terminal, I simply dumped, as always, a new function into the shell's ~/.*rc file.

Here it is:
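A minimal sketch of what such a function can look like, assuming the openssl command line tools are available (cert_info is a placeholder name, not necessarily the original one):

    # Print certificate details for a remote host.
    # Usage: cert_info <host> [port]
    cert_info() {
        local host="$1"
        local port="${2:-443}"
        echo | openssl s_client -connect "${host}:${port}" -servername "${host}" 2>/dev/null \
            | openssl x509 -noout -subject -issuer -dates -fingerprint
    }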

Defaults to port 443 if the second argument is unspecified. Example:
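With the hypothetical cert_info function from the sketch above:

    cert_info example.com          # queries example.com:443
    cert_info example.com 8443     # explicit port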

 

Use the cache, Luke, Part 2: don’t put all your eggs into the memcached buck … basket

This is the second part of a series called: Use the cache, Luke. If you missed the first part, here it is: From memcached to Membase memcached buckets. Meanwhile, the AWS ElastiCache service proved to have better network latency than the Membase setup we rolled out ourselves, so the migration was done simply by switching the memcached config. No vendor lock-in.

However, it took me a while to write this second part.

[Embedded video]

Please have a look at the above video. Besides the general common sense guidelines about how to scale your stuff and the Postgres-specific stuff, there's a general rule: cache, cache, and then cache some more.

However, too much caching in memcache (whatever the implementation) may kill the application at some point. The application may no longer be database dependent, but it becomes cache dependent, and anything that affects the cache may have the effect of a sledgehammer on your database. Of course, you can always scale that DB instance vertically, or scale horizontally by adding read-only replicas, but the not-so-fun part is that it costs a lot just to keep the resources provisioned in order to survive a cache failure.

The second option is to have a short-lived failover cache on the application server. Something like five minutes, while the entries in the distributed memcache may last for hours. Enough to keep the database from being hit by live traffic, while you don't have to provision a really large database instance. Of course, it won't work for stuff that needs to be "real time", but it works for data that doesn't change with each request.

There are a lot of options for a failover cache since there's no distributed setup to think about. It may be a local memcached daemon, something like PHP's APC API, or, the fastest option: file-based caching. Now you may think that I'm insane, but memcached still carries the IPC penalty, especially over TCP, while APC, if you're a PHP user, doesn't perform as expected.

I say file-based caching, not disk-based caching, as the kernel does a pretty good job of "eating your RAM" with the disk caching stuff. It takes more effort to implement since the cache management logic must be built into the application itself, and you don't get things like LRU or expiration by default, but for failover purposes it is good enough to be worth the effort. In fact, the application ran for a few days on the failover cache alone without any measurable impact.
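A minimal sketch of how such a failover read/write pair can look in PHP, assuming the memcache extension for the distributed layer; the cache path, TTLs, and function names below are illustrative, not the actual implementation:

    <?php
    // Hypothetical failover lookup: try memcache first, then the local file copy.
    function cache_get_with_failover(Memcache $mc, $key, $ttl = 300) {
        $value = $mc->get($key);
        if ($value !== false) {
            return $value;
        }
        // Fall back to the short-lived file cache on the application server.
        $file = '/var/cache/app/' . md5($key) . '.cache';
        if (is_file($file) && (time() - filemtime($file)) < $ttl) {
            return unserialize(file_get_contents($file));
        }
        return false; // both caches missed: the caller hits the database
    }

    function cache_set_with_failover(Memcache $mc, $key, $value, $ttl = 3600) {
        $mc->set($key, $value, 0, $ttl);             // hours-long distributed entry
        $file = '/var/cache/app/' . md5($key) . '.cache';
        file_put_contents($file, serialize($value)); // short-lived local copy
    }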

The next part of not putting all of your eggs into the same basket is: cache everywhere you can. For example, by using the nginx FastCGI cache, we could shave 40% off our CPU load. Nothing experimental about this last part: it has been in production for the last 18 months. If you get it right, it can be a really valuable addition to a web stack. However, a lot of testing is required before pushing the changes to production. We hit a lot of weird bugs for edge cases. The rule of thumb is: if you get the cache key right, then most of the issues are gone before going live.
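For reference, a trimmed-down sketch of an nginx FastCGI cache setup; the zone name, cache path, socket path, and validity times are illustrative, not our production values:

    # http {} context: define the cache storage and a key that keeps
    # scheme, method, host and URI apart, so variants don't collide.
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=fcgi_cache:64m
                       inactive=10m max_size=1g;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    # server {} context: enable the cache in front of PHP-FPM.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_cache fcgi_cache;
        fastcgi_cache_valid 200 301 5m;
        fastcgi_cache_use_stale error timeout updating;
    }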

In fact, by adding the cache control headers from the application itself, we could push relatively short-lived pages to the CDN edges, shaving off a lot of latency for repeated requests, as there's no round trip from the hosting data center to the CDN edge. Yes, it's the latency, stupid. The dynamic acceleration that CDNs provide is nice. Leveraging the HTTP caching capabilities is nicer. Having the application in a data center closer to the client is desirable, but unless your target market is distributed enough to justify more than a bunch of machines in the same geo location, it doesn't make any sense to deploy into a new data center, which adds its fair share of complexity when scaling the data layer.
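For illustration, emitting the caching headers from PHP is one header() call apiece; the five-minute max-age is just an example figure:

    <?php
    // Let shared caches (CDN edges, reverse proxies) keep this page for 5 minutes.
    header('Cache-Control: public, max-age=300');
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 300) . ' GMT');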

Use the cache, Luke, Part 1: from memcached to Membase memcached buckets

I start with a quote:

Matt Ingenthron said internally at Membase Inc they view Memcached as a rabbit. It is fast, but it is pretty dumb and procreates quickly. Before you know it, it will be running wild all over your system.

But this post isn’t about switching from a volatile cache to a persistent solution. It is about removing the dumb part from the memcached setup.

We started with memcached, as that's the first step. The setup had its quirks: AWS EC2 doesn't provide a fixed addressing method by default, while the memcached client for PHP still has issues with timeouts, so the fallback was the plain memcache client.

The fixed addressing issue was resolved by deploying Elastic IPs with a little trick for the internal network, as explained by Eric Hammond. This might be unfeasible for large enough deployments, but that wasn't our case. Amazon has introduced ElastiCache since then, which removes this limitation, but having a bunch of t1.micros with reservations is still much cheaper. Which makes me wonder why they won't introduce fixed machine addresses that resolve to the internal address from inside EC2. They have this technology for a lot of their services, but it is simply unavailable for plain EC2 instances.

Back to the memcached issues. Having a Membase cluster that provides a memcached bucket is a nice drop-in replacement, if you lower your memory allocation a little. Membase has some overhead over plain memcached, as its services tend to occupy more RAM. The great thing is that the cluster requires fewer machines with fixed addressing. We use a couple for high availability reasons, but this is not the rule. The rest have the EC2-provided dynamic addresses. If a machine happens to go down, another one can take its place.

But there still is the client issue. memcached for PHP is dumb. memcache for PHP is even dumber. Neither of them can actually speak the Membase goodies. This is the part where Moxi (Memcached Proxy) kicks in. For memcached buckets, Moxi can discover machines newly added to the Membase cluster without any client configuration. Without any Moxi server configuration either, as the config is streamed to the servers via the machines that have the fixed addresses. With plain memcached, every time there was a change we needed to redeploy the application, and the memcached cluster was basically nullified until it was refilled. That doesn't happen with Moxi + Membase. Since there is no "smart client" for PHP which includes the Moxi logic, we use client-side Moxi in order to reduce the network round trips. There still is local communication over the loopback interface, but the latency is far smaller than doing server-side Moxi. Basically, the memcache for PHP client connects to 127.0.0.1:11211, aka where Moxi lives, then the request hits the appropriate Membase server that holds our cached data. Moxi also uses the binary protocol and SASL authentication, which are unsupported by the memcache for PHP client.
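From the application's point of view nothing changes: it keeps speaking plain memcached to the local Moxi. A minimal sketch with the memcache extension (key and TTL are illustrative):

    <?php
    // The application talks to the local Moxi as if it were a plain memcached daemon.
    $mc = new Memcache();
    $mc->addServer('127.0.0.1', 11211);   // client-side Moxi on the loopback interface

    $mc->set('some:key', 'some value', 0, 3600);
    $value = $mc->get('some:key');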

The last of the goodies about the Membase cluster: it actually has an interface. I may not be a UI fan, I live most of my time in /bin/bash, but I am a stats junkie. The Membase web console can give you real-time info about how the cluster is doing. With plain memcached you're left in the dust, wrapping up your own interface or calling stats over plain TCP. Which is wrong on so many levels.

PS: v2.0 will be called Couchbase for political reasons. But currently the stable release is still called Membase.

Why sometimes I hate RFCs

Every time there's a debate about the format of something that floats around the Internets, people go to the RFCs in order to figure out who's right and who's wrong. Which may be a great thing in theory. In practice, the rocket scientists who wrote those papers can squeeze a lot of confusion into a single page of text, as the G-WAN manual states.

Today's case was a debate about the Expires header timestamps as defined by the HTTP/1.1 specs (RFC 2616). If you read section 14.21, which covers the Expires header, you can see the following statement:

The format is an absolute date and time as defined by HTTP-date in section 3.3.1; it MUST be in RFC 1123 date format:

Expires = "Expires" ":" HTTP-date

I made a newbie mistake in thinking that RFC 1123 dates are legal Expires timestamps. Actually, by proofreading section 3.3.1 of RFC 2616, you may deduce the following: the dates used by the HTTP/1.1 protocol are not the dates in the RFC 1123 format; the actual format is a subset of RFC 1123. The debate started around the GMT specification, which in HTTP/1.1 contexts is actually UTC, but it must be spelled GMT anyway. Even more, +0000, which is a valid timezone specifier as defined by RFC 1123, is not valid for Expires timestamps. Although some caches accept +0000 as a valid timezone specifier for HTTP timestamps, some of them don't.
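To make it concrete, using the sample date from RFC 2616 itself:

    Expires: Sun, 06 Nov 1994 08:49:37 GMT      <- valid HTTP-date (the RFC 1123 subset)
    Expires: Sun, 06 Nov 1994 08:49:37 +0000    <- valid RFC 1123 date, but not a valid HTTP-date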

It isn’t that the RFCs are broken per se, but the language they use can be very confusing sometimes.

nginx + PHP-FPM for highly loaded virtual private servers

The VPSes that use OS-level virtualization have some challenges that you won't find in setups using full OS virtualization / paravirtualization. Stuff like OpenVZ doesn't give you a way to tune OS parameters, which may impair certain setups such as the one mentioned in the article title. All the stuff mentioned in my previous article applies here as well, except the sysctl tuning part. It would be redundant to mention those tips again.

However, the thing that applies differently is the fact that the maximum connection backlog on an OpenVZ VPS is typically 128, therefore you can not increase the listen.backlog parameter of PHP-FPM over that value. The kernel denies you that. Even more, due to the low memory of a lot of VPSes out there in the wild, you may be forced to use a low value for pm.max_children, which translates into a shorter respawn cycle for the pool processes. A process that is respawning can't handle incoming connections, therefore the heavy lifting is done by the kernel backlog. Because of this, the concurrency may not increase linearly with the number of available processes in the FPM pool.
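For reference, the relevant PHP-FPM pool directives; the values are illustrative for a low-memory OpenVZ VPS, not tuning advice:

    ; php-fpm pool configuration (illustrative values)
    [www]
    listen = 127.0.0.1:9000
    ; anything above net.core.somaxconn (typically 128 on OpenVZ) is capped by the kernel
    listen.backlog = 128
    pm = dynamic
    ; low-memory VPS: few workers means a short respawn cycle under load
    pm.max_children = 4
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3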

Since the backlog is kept by the kernel, it is shared by all services, therefore it may affect nginx as well if it can not accept() a connection for some reason. Load balancing over UNIX sockets doesn't increase the possible concurrency level. It simply adds useless complexity. Using TCP sockets may increase the reliability of the setup, at the cost of added latency and used-up system resources, but it would fail in the end. Some measures need to be taken.

I gave some thought to using HAProxy as a connection manager, since HAProxy has a nice maxconn feature that passes connections to an upstream only up to a defined number. But that would add another layer of complexity. It has its benefits, but it also means another point of failure. Having more services process the same request pipeline is clearly suboptimal if those services don't add some clear value to the setup, the way a proxy cache does, for example.

Then I thought about node.js, but implementing FastCGI on top of node seems a hackish solution at best. One of the Orchestra guys did it, but I wouldn't go into production with a node.js HTTP server for static objects and FastCGI response serving, no matter how "cool" this may sound. Then the revelation hit me: Ryan Dahl, the author of node.js, wrote an nginx module that adds the maxconn feature to nginx: nginx-ey-balancer. Hopefully, someday it will make it into upstream.

The module adds some boilerplate to the configuration since it requires a fastcgi_pass to an upstream instead of a direct fastcgi_pass to a UDS path, but other than that, it works as advertised. Although the module isn't actively maintained, or at least this is how things look from the outside, the v0.8.32 patch works even for nginx v1.0.2. Having nginx act as the connection manager instead of sending all the connections straight to the FPM upstream has clear benefits from the concurrency point of view. It is recommended to set max_connections to the value of net.core.somaxconn. That guarantees that no connection gets dropped while the FPM pool processes are respawning due to a short cycle policy.
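A sketch of how the configuration changes with the module applied; the upstream name and socket path are illustrative, and max_connections is the directive added by the nginx-ey-balancer patch:

    upstream php_backend {
        server unix:/var/run/php-fpm.sock;
        # added by the nginx-ey-balancer patch: queue requests inside nginx
        # and keep at most this many in-flight connections to the upstream
        max_connections 128;
    }

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass php_backend;   # upstream instead of a direct UDS path
        }
    }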

By using this module, nginx could easily handle around 4000 concurrent connections with a 4 worker process setup, but increasing the number of workers does not increase the possible concurrency linearly. Anyway, at that concurrency level, the issues caused by the IPC between nginx and PHP-FPM would most probably be your last concern. This setup simply removes an IPC limit which is ridiculously small most of the time.