nginx + PHP-FPM for heavily loaded virtual private servers

VPSes that use OS-level virtualization come with challenges you won’t find in setups built on full virtualization or paravirtualization. Technologies like OpenVZ don’t give you a way to tune kernel parameters, which may impair setups such as the one in the article title. Everything from my previous article applies here as well, except for the sysctl tuning part; it would be redundant to repeat those tips.

One thing that does apply differently: the maximum connection backlog on an OpenVZ VPS is typically 128, so you cannot raise the listen.backlog parameter of PHP-FPM above that value; the kernel denies it. Moreover, due to the low memory of many VPSes out in the wild, you may be forced to use a low value for pm.max_children, which translates into a shorter process respawn cycle. A process that is respawning can’t handle incoming connections, so the heavy lifting is done by the kernel backlog. Because of this, the concurrency may not increase linearly with the number of processes available in the FPM pool.
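
As a rough illustration, a PHP-FPM pool on such a constrained VPS would look something like the sketch below; the socket path and the worker counts are assumptions you would adjust for your own memory budget:

    ; hypothetical pool configuration for a low memory OpenVZ VPS
    [www]
    listen = /var/run/php-fpm.sock
    ; the kernel caps the backlog at net.core.somaxconn, typically 128 on OpenVZ
    listen.backlog = 128
    pm = dynamic
    ; low memory forces a small pool, hence the short respawn cycle
    pm.max_children = 4
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3
    ; recycle workers after a fixed number of requests to keep memory in check
    pm.max_requests = 500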

Since the backlog is kept by the kernel, the same limit applies to all services, so it may affect nginx as well if it cannot accept() a connection for some reason. Load balancing over UNIX sockets doesn’t raise the achievable concurrency; it simply adds useless complexity. Using TCP sockets may make the setup somewhat more reliable, at the cost of added latency and extra system resources, but it would still fail in the end. Some measures need to be taken.

I gave some thought to using HAProxy as a connection manager, since HAProxy has a nice maxconn feature that passes connections to an upstream only up to a defined number. But that would add another layer of complexity. It has its benefits, but it also means another point of failure. Having more services in the same request pipeline is clearly suboptimal if those services don’t add clear value to the setup, the way a proxy cache does, for example.
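
For reference, this is roughly how that maxconn idea would look in HAProxy; the addresses and the limit are assumptions, and, as said above, I didn’t go down this road:

    # hypothetical haproxy.cfg fragment: queue FastCGI connections in front of PHP-FPM
    listen fpm
        mode tcp
        bind 127.0.0.1:9001
        # never push more than 128 concurrent connections to the FPM pool;
        # the excess waits in HAProxy's queue instead of being dropped
        server fpm1 127.0.0.1:9000 maxconn 128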

Then I thought about node.js, but implementing FastCGI on top of node seems a hackish solution at best. One of the Orchestra guys did it, but I wouldn’t go into production with a node.js HTTP server serving static objects and FastCGI responses, no matter how “cool” that may sound. Then the revelation hit me: Ryan Dahl, the author of node.js, wrote an nginx module that adds the maxconn feature to nginx: nginx-ey-balancer. Hopefully, someday it will make it into upstream.

The module adds some boilerplate to the configuration, since it requires a fastcgi_pass to an upstream instead of a fastcgi_pass straight to a UDS path, but other than that it works as advertised. Although the module doesn’t appear to be actively maintained, at least from the outside, the v0.8.32 patch works even with nginx v1.0.2. Having nginx act as the connection manager, instead of sending all the connections straight to the FPM upstream, has clear benefits from the concurrency point of view. It is recommended to set max_connections to the value of net.core.somaxconn. That guarantees that no connection gets dropped because the FPM pool processes are respawning due to a short cycle policy.
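
As a minimal sketch, assuming the module is compiled in and the socket path from the pool above, the relevant part of the nginx configuration would look something like this:

    # named upstream required by nginx-ey-balancer; the socket path is an assumption
    upstream php_fpm {
        server unix:/var/run/php-fpm.sock;
        # queue everything above this limit inside nginx;
        # match it to net.core.somaxconn (typically 128 on OpenVZ)
        max_connections 128;
    }

    server {
        listen 80;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # pass to the named upstream instead of a UDS path directly
            fastcgi_pass php_fpm;
        }
    }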

By using this module, nginx could easily handle around 4000 concurrent connections with a 4 worker process setup, though increasing the number of workers does not increase the possible concurrency linearly. Anyway, at that concurrency level, the issues caused by the IPC between nginx and PHP-FPM would most probably be your last concern. This setup simply removes an IPC limit which is ridiculously small most of the time.
