nginx + PHP-FPM for highly loaded websites

The title of the post is quite obvious. Apache2 + mod_php5 lost the so-called “crown” quite a while ago. The first eye opener was something that got me pretty annoyed back in 2008: a dual quad-core machine, 4 GiB of RAM, RAID 5 over SAS @ 10k RPM. On paper, the server was looking pretty good. The only show stopper: Apache2 + mod_php5, which choked at 350 concurrent clients. What? No more free RAM? WTF? LAMP is not my cup of tea for obvious reasons. LEMP seems more appropriate, and I have since ditched Apache from all the production servers.

Since then, I keep telling people that Apache2 + mod_php5 + prefork MPM is a memory hog, and most of that comes from the brain-dead client connection management that Apache does. One process per connection is a model that ought to be killed. Probably the best bang for the buck is using nginx as a front end that serves the static objects and buffers the response from Apache in order to spoon-feed the slow clients. But here’s the kicker: for virtual hosting setups, Apache/prefork/PHP is pretty dull to configure safely, i.e. to isolate the virtual hosts from one another. Packing more applications together on a production cluster is an obvious way of doing server consolidation.

There’s mod_fcgid/FastCGI … but nginx supports FastCGI as well. Therefore, cutting out the middle man was the obvious solution. By using this setup, you won’t lose … much. PHP-FPM was the obvious choice, as the default FastCGI manager that used to ship as the sole option from the PHP upstream is pretty dumb. Instead of having a dedicated PHP service for each virtual host, one can have a dedicated process pool for each virtual host. Believe me, this method is much easier to maintain.
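
To make this concrete, here is a minimal sketch of two isolated pools, one per virtual host; the pool names, socket paths, and system users below are made up for the sake of the example and are assumed to already exist:

; hypothetical pool for the example.com virtual host
[example_com]
user = vhost-example-com
group = vhost-example-com
listen = /var/run/php-fpm-example.com.sock
pm = static
pm.max_children = 4

; a second, fully isolated pool for another virtual host
[example_org]
user = vhost-example-org
group = vhost-example-org
listen = /var/run/php-fpm-example.org.sock
pm = static
pm.max_children = 4

Each pool runs under its own user, so a compromised application in one virtual host cannot poke at the files of another.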

While nginx comes pretty well tuned by default, except for the open file cache settings, PHP-FPM needs some essential settings tuned. nginx has a predictable memory usage. I can say that most of the time the memory used by nginx is negligible compared to the memory used by the PHP subsystem. PHP-FPM, on the other hand, has a predictable memory consumption only if used properly. The process respawn feature ought to be used in order to keep the memory leaks under control. pm.max_requests is the option that you need to tune properly. One may use 10000 served requests before respawning, but under very tight memory constraints I even had to use 10 (very low memory VPS).

The dynamic process manager (pm = dynamic) may be used to spawn an adaptive number of processes based on the server load, capped by pm.max_children, but IMHO that might overcommit the system resources in the worst case scenario. Having a fixed number of processes per pool (pm = static) is preferred. Usually I make a rough estimate of the memory consumption in order to keep all the runtime in RAM. Thrashing the memory is not something you would want on a loaded web server. For everything else, there’s Master … cough! munin.
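
As a rough sketch of this kind of pool sizing (the numbers are illustrative, not a recommendation; the right values depend on the per-process memory footprint of your application):

; fixed-size pool: memory usage stays predictable, no adaptive spawning
pm = static
pm.max_children = 8
; respawn a worker after it has served this many requests, to contain memory leaks
pm.max_requests = 10000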

A UNIX domain socket (UDS) is preferred over TCP for the inter-process communication. Unlike a TCP socket, a UDS doesn’t carry the TCP overhead, which is there even for connections over the loopback interface. For small payloads, a UDS might give a 30% performance boost due to lower latency than the TCP stack. Remember: you can’t beat latency. For larger payloads, the TCP latency has a lower impact, but it’s still there. Another nice thing about UDS is the namespace. A UDS uses the filesystem for defining a new listening socket. Under Linux, classic permissions / ACLs may be used to restrict which users can read from / write to the UDS. BSD systems may be more permissive with this kind of stuff. A TCP socket requires a numerical port that can’t be used for anything else, and managing this kind of setting from an application that generates the configuration files is more difficult. For a UDS, the socket name can be derived from the host name. As I said in a previous article, I don’t write the configuration files by hand. Having a decent way of keeping the IPC namespace is always a plus.
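
As an illustration of the above (socket path, user, and group are assumptions derived from a hypothetical virtual host), the PHP-FPM pool declares the UDS and restricts who may talk to it:

; PHP-FPM pool: listen on a UDS named after the virtual host
listen = /var/run/php-fpm-example.com.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660

while on the nginx side the FastCGI upstream points at the same socket:

# nginx: pass PHP requests to the pool's UDS
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm-example.com.sock;
}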

Another thing that you should take care of is the listen.backlog option of PHP-FPM. Using TCP sockets seems to be more reliable for PHP-FPM. However, that’s untrue if you dig deep enough. TCP starts to fail at around 500 concurrent connections, while over UDS this happens way sooner: with a 1-process pool, you can serve at most 129 concurrent clients. 129 is not a random number. PHP-FPM can keep 128 connections in its backlog while the 129th connection is handled by the active process. The default listen.backlog for Linux is 128, although the PHP-FPM documentation may state -1, i.e. the maximum value allowed by the system. Taking a peek at the PHP-FPM source code reveals this (sapi/fpm/fpm/fpm_sockets.h):

/*
  On FreeBSD and OpenBSD, backlog negative values are truncated to SOMAXCONN
*/
#if (__FreeBSD__) || (__OpenBSD__)
#define FPM_BACKLOG_DEFAULT -1
#else
#define FPM_BACKLOG_DEFAULT 128
#endif

The default configuration file that is distributed with the PHP-FPM source tree states that the value is 128 for Linux. The php.net statement that it defaults to -1 gave me a lot of grief, as I thought the manual wouldn’t feed me rubbish instead of usable information. However, since PHP 5.3.5 you may debug the configuration by using the -t flag of the php-fpm binary. You can use it like:

php-fpm -tt -y /path/to/php-fpm.conf

The double t flag is not a mistake. If you’re using NOTICE as the debug level, the double t testing level prints the internal values of the PHP-FPM configuration:

php-fpm -tt -y /etc/php-fpm/php-fpm.conf
[08-May-2011 20:53:26] NOTICE: [General]
[08-May-2011 20:53:26] NOTICE:  pid = /var/run/php-fpm.pid
[08-May-2011 20:53:26] NOTICE:  daemonize = yes
[08-May-2011 20:53:26] NOTICE:  error_log = /var/log/php-fpm.log
[08-May-2011 20:53:26] NOTICE:  log_level = NOTICE
[08-May-2011 20:53:26] NOTICE:  process_control_timeout = 0s
[08-May-2011 20:53:26] NOTICE:  emergency_restart_interval = 0s
[08-May-2011 20:53:26] NOTICE:  emergency_restart_threshold = 0
[08-May-2011 20:53:26] NOTICE:
[08-May-2011 20:53:26] NOTICE: [www]
[08-May-2011 20:53:26] NOTICE:  prefix = undefined
[08-May-2011 20:53:26] NOTICE:  user = www-data
[08-May-2011 20:53:26] NOTICE:  group = www-data
[08-May-2011 20:53:26] NOTICE:  chroot = undefined
[08-May-2011 20:53:26] NOTICE:  chdir = undefined
[08-May-2011 20:53:26] NOTICE:  listen = /var/run/php-fpm.sock
[08-May-2011 20:53:26] NOTICE:  listen.backlog = -1
[08-May-2011 20:53:26] NOTICE:  listen.owner = undefined
[08-May-2011 20:53:26] NOTICE:  listen.group = undefined
[08-May-2011 20:53:26] NOTICE:  listen.mode = undefined
[08-May-2011 20:53:26] NOTICE:  listen.allowed_clients = undefined
[08-May-2011 20:53:26] NOTICE:  pm = static
[08-May-2011 20:53:26] NOTICE:  pm.max_children = 1
[08-May-2011 20:53:26] NOTICE:  pm.max_requests = 0
[08-May-2011 20:53:26] NOTICE:  pm.start_servers = 0
[08-May-2011 20:53:26] NOTICE:  pm.min_spare_servers = 0
[08-May-2011 20:53:26] NOTICE:  pm.max_spare_servers = 0
[08-May-2011 20:53:26] NOTICE:  pm.status_path = undefined
[08-May-2011 20:53:26] NOTICE:  ping.path = undefined
[08-May-2011 20:53:26] NOTICE:  ping.response = undefined
[08-May-2011 20:53:26] NOTICE:  catch_workers_output = no
[08-May-2011 20:53:26] NOTICE:  request_terminate_timeout = 0s
[08-May-2011 20:53:26] NOTICE:  request_slowlog_timeout = 0s
[08-May-2011 20:53:26] NOTICE:  slowlog = undefined
[08-May-2011 20:53:26] NOTICE:  rlimit_files = 0
[08-May-2011 20:53:26] NOTICE:  rlimit_core = 0
[08-May-2011 20:53:26] NOTICE:
[08-May-2011 20:53:26] NOTICE: configuration file /etc/php-fpm/php-fpm.conf test is successful

This stuff is not documented properly. I discovered it by spending a nice afternoon at work reading the PHP-FPM sources. It saved me some hours of debugging the internal state of PHP-FPM by other means. Maybe, for the first time, saying “undocumented feature” doesn’t sound like marketing crap implying “undiscovered bug”.

You may use listen.backlog = -1 to let the system decide, or you may set your own limit. -1 is a valid value, as the listen(2) man page suggests. I am planning to open a new issue, as -1 would be a more appropriate default for Linux as well. However, please keep in mind that a high backlog value may be silently truncated by the Linux kernel. For example, under Ubuntu Server this limit is … 128. The same listen(2) manual page states that the backlog value is capped at SOMAXCONN. While the Linux kernel sources are not exactly toilet reading, I could find the exact implementation of the listen syscall (net/socket.c):

/*
 *      Perform a listen. Basically, we allow the protocol to do anything
 *      necessary for a listen, and if that works, we mark the socket as
 *      ready for listening.
 */
 
SYSCALL_DEFINE2(listen, int, fd, int, backlog)
{
        struct socket *sock;
        int err, fput_needed;
        int somaxconn;
 
        sock = sockfd_lookup_light(fd, &err, &fput_needed);
        if (sock) {
                somaxconn = sock_net(sock->sk)->core.sysctl_somaxconn;
                if ((unsigned)backlog > somaxconn)
                        backlog = somaxconn;
 
                err = security_socket_listen(sock, backlog);
                if (!err)
                        err = sock->ops->listen(sock, backlog);
 
                fput_light(sock->file, fput_needed);
        }
        return err;
}

In plain English: the backlog value cannot be higher than net.core.somaxconn. In order to be able to queue more idle connections in the kernel backlog, you ought to increase the SOMAXCONN value:

root@localhost:~# sysctl net.core.somaxconn=1024

The sysctl utility, however, modifies this value only until the system is rebooted. In order to make it persistent, you have to define it in a new file under /etc/sysctl.d/. Or at least, using sysctl.d is recommended, as it keeps the configuration more structured. I used /etc/sysctl.d/10-unix.conf:

net.core.somaxconn=1024

for having 1024 queued connections per listening UDS, plus the active connections, whose number equals the size of the process pool. Remember that you need to restart the PHP-FPM daemon for the new backlog setting to take effect. You may increase the limit as your usage model sees fit. Since nginx doesn’t queue any FastCGI connections, you need to be very careful about this setting. All the requests go straight to the kernel backlog. If there’s no more room for new connections, a 502 response is returned to the client. I can safely assume that you would like to avoid this.
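
For reference, a hedged sketch of the pool side of this setting (the value mirrors the somaxconn example above and is not a recommendation):

; in the pool definition; values above net.core.somaxconn get truncated by the kernel
listen.backlog = 1024

followed by a restart of PHP-FPM so that the listening socket is created again with the new backlog.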

Another thing that you should take care of, regarding the number of idle connections to the PHP upstream, is the fact that nginx opens a file descriptor for each UDS connection. If you increase the SOMAXCONN limit too much without increasing the number of allowed file descriptors per process, you will run into 502 errors as well. By default, a process may open up to 1024 file descriptors. Usually I increase this limit by adding a ulimit -n $fd_value to the init script of a certain service instead of raising the limit system-wide.
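
A minimal sketch of what I mean, assuming a Debian-style init script for nginx (the path and the value are illustrative):

# near the top of /etc/init.d/nginx, before the daemon is started:
# raise the per-process file descriptor limit for this service only
ulimit -n 4096

The same trick applies to the PHP-FPM init script if its pools keep many connections open.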

You may want to buffer the FastCGI response in nginx as well. Buffering the response doesn’t tie up the upstream PHP process for longer than needed. As nginx properly does the spoon-feeding of slow clients, the system is free to process more requests from the queue. fastcgi_buffer_size and fastcgi_buffers are the pair of options that you need to tune in order to fit your application usage model.
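
As a starting point (illustrative values, to be adjusted to the typical response size of your application), the nginx side may look like:

# inside the server / location block that talks to PHP-FPM
fastcgi_buffer_size 16k;    # buffer for the first part of the response (the headers)
fastcgi_buffers 64 16k;     # up to 64 buffers of 16 KiB each, per connection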

Update (Aug 24, 2011):

Increasing the SO_SNDBUF also helps. Writes to the socket won’t block, as it becomes the kernel’s job to stream the data to the clients. For a server with enough memory, nginx is then free to do something else. The socket(7) man page comes to the rescue in order to demystify the SO_SNDBUF concept. Basically, net.core.wmem_max is the one to blame when writes to the socket block. By default net.core.wmem_max is 128k, which is very small for a busy server. If the server has a fat network pipe available, then you can get some more hints here: Linux Tuning. It may not be the case for most EC2 scenarios where the networking is shared; there, smaller buffers will do just fine. But it may be the case if you’re playing, like me, with toys that have dual 1G network interfaces.
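
A hedged example of raising the write buffer cap, in the same /etc/sysctl.d/10-unix.conf file used earlier (the value is illustrative):

# allow applications to request larger per-socket send buffers
net.core.wmem_max=1048576

followed by sysctl -p /etc/sysctl.d/10-unix.conf (or a reboot) for the new value to be picked up.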

Comments

  1. Enkrs

    Thank you for the great article. I just updated my nginx-php configs accordingly. One note though: setting the backlog to -1 didn’t work on my Debian squeeze:

    WARNING: [pool www] listen.backlog(-1) was too low for the ondemand process manager. I updated it for you to 128.

    I just updated the config with a value close to the adjusted somaxconn.

    Works perfectly and I don’t see the occasional 502s anymore. Thank you so much.

  2. SaltwaterC (post author)

    This is odd. According to the documentation, -1 should be treated as 0, which means letting the kernel decide. However, I do have some squeeze machines that didn’t report this warning. All I can see in the logs are notices due to the daily service reload after log rotation.
