nginx + PHP-FPM for heavily loaded virtual private servers

VPSes based on OS-level virtualization come with challenges that you won’t find in setups using full virtualization or paravirtualization. Containers like OpenVZ don’t give you a way to tune kernel parameters, which can hurt setups such as the one in the article title. Everything from my previous article applies here as well, except for the sysctl tuning part, so I won’t repeat those tips.

What does apply differently is that the maximum connection backlog on an OpenVZ VPS is typically 128, so you cannot raise PHP-FPM’s listen.backlog parameter above that value; the kernel denies you that. Moreover, given the low-memory VPSes common in the wild, you may be forced to use a low value for pm.max_children, which translates into a shorter process respawn cycle. A process that is respawning can’t handle incoming connections, so the heavy lifting is done by the kernel backlog. As a consequence, concurrency may not grow linearly with the number of processes available in the FPM pool.
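
For reference, here is a minimal pool sketch for such a low-memory container; the socket path and worker counts are illustrative starting points, not recommendations:

    ; illustrative PHP-FPM pool for a low-memory OpenVZ VPS
    [www]
    listen = /var/run/php-fpm.sock
    ; anything above net.core.somaxconn (128 on a typical
    ; OpenVZ container) has no effect here
    listen.backlog = 128
    pm = dynamic
    ; only a few workers fit into a small container,
    ; hence the short respawn cycle
    pm.max_children = 4
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3
    ; recycle each worker after this many requests to contain memory usage
    pm.max_requests = 500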

Since the backlog is kept by the kernel, it is shared by all services, so it may affect nginx as well whenever it can’t accept() a connection for some reason. Load balancing over multiple UNIX sockets doesn’t raise the achievable concurrency; it simply adds useless complexity. Using TCP sockets may make the setup somewhat more reliable, at the cost of added latency and extra system resources, but it would still fail in the end. Other measures need to be taken.

I gave some thought to using HAProxy as a connection manager, since HAProxy has a nice maxconn feature that passes connections to an upstream only up to a defined limit, queuing the rest. But that would add another layer of complexity: it has its benefits, but it also means another point of failure. Having more services process the same request pipeline is clearly suboptimal unless each of them adds some clear value to the setup, for example the way a proxy cache does.
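
For the record, the idea would have looked something like this hypothetical, untested snippet, where HAProxy queues everything beyond maxconn in its own buffers instead of the kernel backlog:

    # hypothetical sketch, not a tested production config
    defaults
        mode tcp
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    listen php-fpm
        bind 127.0.0.1:9001
        # feed the FPM pool at most 4 connections at a time;
        # anything beyond that waits in HAProxy's queue
        server fpm1 127.0.0.1:9000 maxconn 4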

Then I thought about node.js, but implementing FastCGI on top of node seems a hackish solution at best. One of the Orchestra guys did it, but I wouldn’t go into production with a node.js HTTP server serving static objects and FastCGI responses, no matter how “cool” that may sound. Then the revelation hit me: Ryan Dahl, the author of node.js, wrote an nginx module that adds the maxconn feature to nginx: nginx-ey-balancer. Hopefully, someday it will make it into upstream nginx.

The module adds some boilerplate to the configuration, since it requires fastcgi_pass to point at an upstream block instead of directly at a UNIX socket path, but other than that it works as advertised. Although the module doesn’t seem to be actively maintained, or at least that’s how things look from the outside, the v0.8.32 patch works even against nginx v1.0.2. Having nginx act as the connection manager, instead of sending all the connections straight to the FPM upstream, has clear benefits from the concurrency point of view. I recommend setting max_connections to the value of net.core.somaxconn; that guarantees no connection gets dropped while the FPM pool processes are respawning due to a short recycle policy.
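
The configuration ends up looking something like this, where the paths and names are mere examples, and max_connections is the directive provided by the module:

    # fastcgi_pass must reference an upstream block
    # for max_connections to kick in
    upstream fpm_pool {
        server unix:/var/run/php-fpm.sock;
        # keep the excess queued inside nginx; 128 matches net.core.somaxconn
        max_connections 128;
    }

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass fpm_pool;
        }
    }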

With this module, nginx easily handled around 4000 concurrent connections in a 4 worker process setup, although increasing the number of workers does not increase the achievable concurrency linearly. Anyway, at that concurrency level, the IPC between nginx and PHP-FPM would most probably be the last of your concerns. This setup simply removes an IPC limit which is ridiculously small most of the time.

4 thoughts on “nginx + PHP-FPM for heavily loaded virtual private servers”

  1. Ronny

     Hey SaltwaterC, I have a question.

     I run 50+ websites on a small, cheap VPS and I’m experiencing some problems when my traffic peaks.

     Do you know if it is possible to run nginx in combination with DirectAdmin? I benchmarked nginx vs Apache and the performance is much higher. Sadly, I haven’t found a solution for it; it seems nginx is incompatible with DirectAdmin.

     Have you got any experience with this? Or do you know any methods to increase the performance? (CentOS 5, OpenVZ, 256 MB RAM)

  2. SaltwaterC Post author

    Hello,

     Unfortunately I don’t have any relevant experience with any control panel. My toolkit has stuff like: bash, ssh, apt. Theoretically, with enough knowledge about the control panel in use, one could deploy nginx in front of Apache in order to lower the memory usage, along the lines of the sketch below.
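
     Nothing DirectAdmin-specific in the sketch, just nginx taking port 80 and proxying to Apache moved onto an alternate port; the port and paths are illustrative:

         # nginx serves the static files and proxies the rest to Apache
         server {
             listen 80;
             server_name example.com;
             root /var/www/html;

             # static objects served directly, keeping Apache children free
             location ~* \.(css|js|jpg|jpeg|png|gif|ico)$ {
                 expires 7d;
             }

             location / {
                 proxy_pass http://127.0.0.1:8080;
                 proxy_set_header Host $host;
                 proxy_set_header X-Real-IP $remote_addr;
             }
         }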

     You can shorten the lifespan of an Apache child in order to force more frequent process recycling. You can disable the keep-alives (or make them really short) in order to free the process pool for serving new requests. Strip out all the modules that neither Apache nor PHP actually uses. You could use a threaded Apache MPM and talk to PHP over FastCGI (the preferred approach of decent shared hosts). These are a few pieces of advice off the top of my head about Apache administration; the list is far from complete. The first tweaks would look roughly like the snippet below.
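
     This targets the prefork MPM that CentOS 5 ships by default; the numbers are starting points for a 256 MB box, not recommendations:

         # keep-alives off so children return to the pool right away
         KeepAlive Off

         <IfModule prefork.c>
             StartServers          2
             MinSpareServers       2
             MaxSpareServers       4
             # cap the pool so Apache alone can't exhaust the memory
             MaxClients           10
             # recycle children often to contain memory growth
             MaxRequestsPerChild 500
         </IfModule>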

     Under OpenVZ you could also tweak the process / thread stacks, which are pretty large by default. OpenVZ accounts the stack as used memory while most processes may use little of it. You can see a lot of savings here, especially if you have a MySQL server running on the same machine. The thread stacks of MySQL are very expensive under OpenVZ.
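
     For MySQL that means something along the lines of this my.cnf fragment; 128K is about as low as thread_stack goes, so test it against your workload first:

         [mysqld]
         # OpenVZ accounts each thread's full stack, so shrinking it
         # from the 192K/256K default adds up quickly
         thread_stack = 128K
         # fewer cached threads also means fewer allocated stacks
         thread_cache_size = 4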

     You could also buy a bigger VPS. For example, I have a VPS Start from IntoVPS: $10/mo, 512 MB of RAM. A lot of bang for the buck. http://www.intovps.com/
