Apache, PHP-FPM, chroot jails, MediaWiki, MySQL, and so on - Ansuz - mskala's home page

Wed 22 Jan 2020 by mskala Tags used: linux, software, reference

These are some notes on configuring Apache httpd to run large PHP applications via PHP-FPM in separate chroot jails. I recently had occasion to do that, and I had to find bits and pieces of information about it in many different places around the Net, so I'm compiling these notes both for my own future use and for anyone who's contemplating a similar project. There are a number of subtle details needed to do things like get TeX working (needed for MediaWiki math), configure process-pool policy, and so on. I'm not going to go into much detail on why someone would want to do this, nor background systems administration concepts like "What is a chroot jail?".

I didn't want to take my whole server down for an extended period; it's running some important Web sites. So I made it a goal to do the whole change-over to the new setup in small, testable steps. Keeping the config files in Subversion helped with that by allowing me to easily keep track of which changes I'd made and how to roll them back where needed.

Taking inventory

I started by going through my existing server configuration, and in particular finding all the PHP scripts in use. I had Apache on a host named "bokan" running a total of 11 name-based virtual hosts, all from various branches of a single Web root directory. Six of these were not actually using any PHP - those virtual hosts handle stuff like the redirect from north-coast-synthesis.com to the preferred URL without hyphens. The remaining virtual servers were using PHP as follows:

  • audio.northcoastsynthesis.com - running a small script (which I wrote) to generate the index pages and RSS feed.
  • video.northcoastsynthesis.com - much like the audio server, running one small PHP script to generate the index pages and RSS feed.
  • edifyingfellowship.org - running a MediaWiki instance (dependencies: TeX and MariaDB) which answers at the main URL; also small scripts I wrote for Tarot card reading and astrological charts (dependencies: TeX and Swiss Ephemeris); and a symlink into the Web space of files.northcoastsynthesis.com below, to allow the Matomo installation there to also answer on edifyingfellowship.org URLs.
  • files.northcoastsynthesis.com - server for "backend" stuff supporting my storefront at northcoastsynthesis.com; the storefront itself is not on my own server. The big PHP thing on this virtual host is a Matomo instance, which depends on MariaDB, but it also has many small scripts I wrote for various purposes like RSS generation and Web log commenting, some of which (such as the newsletter subscription form) handle sensitive customer data and write into flat-file databases. This virtual host also runs the new chord database, which is a CPU hog and depends on a Lilypond installation.
  • An OSCommerce instance, currently not open to the public; part of my ongoing experiments toward someday self-hosting my e-commerce. Dependency: MariaDB.

In general, I want to segregate anything large from the rest of the system, especially if it's something large that I didn't write myself, because I don't trust other people's PHP code very much. MediaWiki, in particular, is a big application I didn't write; it's constantly under attack by spam robots; it's not even very mission-critical for me anyway; and if it gets compromised, I want as many obstacles as possible between the attackers and more sensitive parts of my system. Matomo is another big PHP application I didn't write, and it actually is mission-critical; it should be in its own space. The chord database is not a big target, and I did write it, but because it's a CPU hog with a large dependency, it makes sense to separate it off so I can give it different processing priority. OSCommerce would be mission-critical if I were using it more than experimentally, and it's also a big application I didn't write and a big target for attack. So, putting each of those applications into what will become its own chroot jail and then dividing all the rest into two more jails for the "Edifying Fellowship" and "North Coast Synthesis" stuff, I end up with a list of six jails:

  1. wiki - the MediaWiki instance
  2. matomo - the Matomo instance
  3. edifying - stuff on edifyingfellowship.org other than MediaWiki
  4. miscphp - stuff on public subdomains of northcoastsynthesis.com other than Matomo and the chord database
  5. chords - the chord database
  6. oscommerce - the OSCommerce instance

Each of these would eventually become its own PHP-FPM process pool, running under its own Unix UID and GID, in its own chroot jail.

My configuration before I started this project had PHP active throughout the Web space, handled by mod_php, running all scripts under UID and GID "apache" with access to the entire filesystem. That was activated by including the mod_php.conf file (which I think I didn't edit, or only minimally edited, from its default) in my httpd.conf. The active lines of that file were as follows.

LoadModule php7_module lib64/httpd/modules/libphp7.so
<FilesMatch "\.php$">
SetHandler application/x-httpd-php
</FilesMatch>

My plan for the project ran more or less as follows.

  • Disable PHP on the virtual hosts that aren't using it.
  • Start running the PHP-FPM server in its default configuration with one process pool that runs scripts under UID and GID "apache" (just like the existing mod_php configuration).
  • Set up the modules and other config on Apache so that it can make reverse-proxy requests to the PHP-FPM server.
  • For one chunk of Web space at a time, change the Apache config to send requests for PHP files to the PHP-FPM server instead of mod_php.
  • Disable mod_php and make sure that all PHP scripts (now running under PHP-FPM) still work.
  • Create the six pools that I actually want to use - but at this point they still all use UID and GID "apache" and have access to the entire filesystem.
  • For one pool at a time, configure Apache to send requests to the appropriate pool instead of the "apache" pool.
  • Remove the "apache" pool from PHP-FPM. Make sure that all scripts still work.
  • Create new Unix users and groups for the pools; set permissions appropriately and switch each pool to use its own UID and GID.
  • Rearrange the data files used by some of my self-written PHP scripts so that (a) as few as possible of the data files live in the Apache server's Web space, and (b) to the extent possible, all the data files used by a given pool/jail live in a single "data" directory for that pool/jail (this reduces the number of bind mounts needed later).
  • Set up each chroot jail to contain all the things needed by the corresponding pool, and to the extent possible, nothing else. Instruct PHP-FPM to run the pool processes chrooted into that jail.

Disabling PHP per vhost

I already had an Apache config file for each virtual host, so blocking any PHP support on the ones that shouldn't have it just meant adding an appropriate "Files" section to each one. This locks out any service of *.php files, not just PHP interpretation of them. That's what I want: if a PHP file somehow manages to make its way into file space served by one of these virtual hosts, I don't want the server to serve the uninterpreted source code (which could contain sensitive stuff like database passwords) to site visitors.

In the final configuration, there also won't be anything globally telling Apache to interpret *.php files, but at this point in the project the existing global mod_php interpretation still exists and is overridden by the per-vhost Files directive. See the Apache documentation on "How the sections are merged" for important though confusing information about the precedence order of configuration directives. The relevant point here is that I'm adding a directive to a Files section inside a virtual host; and that overrides the server-global FilesMatch that currently is sending PHP files to mod_php.

# north-coast-synthesis.com
<VirtualHost *:80>
ServerName north-coast-synthesis.com
# ... other lines omitted
<Files "*.php">
Require all denied
</Files>
</VirtualHost>

<VirtualHost *:443>
SSLEngine On
ServerName north-coast-synthesis.com
# ... other lines omitted
<Files "*.php">
Require all denied
</Files>
</VirtualHost>

Note that with this and all similar config file changes, it's necessary to restart the relevant server after modifying the file (or at least instruct the server to reload its config, if it supports that). The change will not take effect just because you modified the file.

Starting PHP-FPM

PHP-FPM is a server of its own that needs to run like other daemons. My Linux distribution (Slackware) came with an rc.php-fpm file which I added to my rc.local so it would run on boot. I put it in rc.local so that I could easily add the other scripts that set up the chroot jails (to be written later) before PHP-FPM starts up.

The PHP-FPM server automatically reads all *.conf files in /etc/php-fpm.d. It comes with a file called www.conf.default, which doesn't actually get loaded because it's not a *.conf file, but which documents the format and options. Working from that, I constructed the following apache.conf file, intended to run my PHP scripts the same way they were running under mod_php, the better to minimize any transition issues.

[apache]
prefix = /srv/php-fpm
user = apache
group = apache
listen = /srv/php-fpm/sockets/apache
listen.owner = apache
listen.group = apache
listen.mode = 0600
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 500

The [apache] header says this is for a process pool named "apache". The "prefix" refers to a directory I created to hold all my php-fpm-specific stuff; it will have two subdirectories for Unix sockets (one socket per pool) and the chroot jails when those come along. The "user" and "group" settings represent the Unix UID and GID under which the pool processes, and therefore the PHP scripts, will run; PHP scripts run setuid/setgid to these values. I set them both equal to "apache" so that, for the moment, PHP scripts will see no difference from my existing mod_php configuration, which runs everything inside the Apache server process.

Then I configure the location of the Unix-domain socket that Apache will use to talk to this pool. I initially set the permissions to 0600, allowing only the Web server's UID to talk to the pool; later I ended up making my sockets 0660 (adding group access) so that another unprivileged process elsewhere on the system could hit them to extract status information. The "listen.owner" and "listen.group" settings give the Unix UID and GID of the listening socket. Together with the permissions, they must be set so that the Apache server (which in my config runs under UID/GID "apache") can connect to the socket. Note that although in this case they are both "apache" and they match "user" and "group", in my final configuration each pool will have its own "user" and "group", none of which will be "apache", but all pools will use "apache" for "listen.owner" and "listen.group".

The remaining settings relate to controlling processes in the pool, and they are the defaults except that I turned on the "pm.max_requests" feature with the example value of 500 so that each pool process will die and be replaced after serving 500 requests. That seems like a sensible thing to do against the possibility of memory or other resource leaks.

Note that I elected to use Unix-domain sockets for communication between Apache and PHP-FPM. Unix-domain sockets seem like the best option when, as in my config, Apache and PHP-FPM are on the same host. I'm more confident of security if it's done this way and it may also be slightly more efficient. The other option, which is necessary in a larger installation with PHP-FPM pools that are on separate hosts from Apache, is to use TCP sockets and assign the pools port numbers (traditionally starting at port 9000). I did end up assigning TCP port numbers to my pools as well but those numbers, described later, are used only internally to the Apache configuration files for distinguishing which pool is which. The servers do not actually listen or connect to the TCP ports.
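For reference, the TCP alternative would replace the "listen" lines of the pool file with something like the following sketch (the address and port are illustrative, not taken from my config):

```ini
; TCP alternative - not what I use. The listen.owner/listen.group/
; listen.mode settings apply only to Unix-domain sockets; for TCP,
; listen.allowed_clients restricts which hosts may connect instead.
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
```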

Set up Apache to do reverse proxying

The basic architecture here is that Apache answers HTTP and HTTPS requests. It does SSL, access control, geolocation, and URL rewriting, and it serves all static files. When it recognizes a request as going to a PHP file, Apache makes a request of its own to the PHP-FPM server. This forwarding of the request to another server is one case of the more general activity called "reverse proxying," which in turn is a sibling concept to "forward proxying." To make it work, Apache needs to load a module for proxying in general (it supports both forward and reverse) and also a specific proxy-related module for every protocol that will be used on outgoing proxy connections. In this case the only outgoing-connection module needed is for the FastCGI protocol, which is the protocol spoken between Apache and PHP-FPM when it forwards the connection over the Unix socket.

Forward proxying is a security nightmare and if one doesn't actually need that, it is important to make sure it is turned off.

I created a new config file of my own called http-php-fpm.conf and put it with my other add-on httpd config files; I added a line to httpd.conf to load it right below the line that loads mod_php.conf. Note that for the moment, mod_php remains enabled. Here are the contents of the http-php-fpm.conf file.

# config for Apache proxying to PHP-FPM on bokan
# Matthew Skala
# mskala@northcoastsynthesis.com
# load the dynamic modules for proxying and FastCGI
LoadModule proxy_module lib64/httpd/modules/mod_proxy.so
LoadModule proxy_fcgi_module lib64/httpd/modules/mod_proxy_fcgi.so
# make sure we don't do forward proxying
ProxyRequests off

It just loads the two needed modules, and turns off the forward-proxying feature (although it should be off by default anyway). Turning on reverse proxy will be done locally for the places where PHP scripts need to run.

Switching Web space to PHP-FPM

For each virtual server, at the root of the Web space, I added a Files section in the .htaccess file. I don't fully understand the Apache precedence system. I want this directive to override the global FilesMatch section which is currently sending all *.php files to mod_php, and I think that Files sections inside .htaccess files (which are "directory" context) are merged before (cannot override) global FilesMatch sections. But this directive is inside an If, and those are merged after other things. Anyway, it seems to work as desired: putting this section in the .htaccess appears to override the global mod_php handler for the directory in which it appears, and any subdirectories.

<Files "*.php">
<If "-f %{REQUEST_FILENAME}">
SetHandler "proxy:unix:/srv/php-fpm/sockets/apache|fcgi://localhost:9000/"
</If>
</Files>

There are several different ways to tell Apache to use the reverse proxy. This one seemed to work for me. I briefly explored using mod_rewrite and the [P] flag. That is supposedly less efficient but allows for more elaborate translation of incoming to proxied URLs; I wanted to try it in order to get better access to the PHP-FPM status pages under access control, but I was never able to get a mod_rewrite rule to send a request to the proxy at all. It may be that because at this point there is a global SetHandler being used to activate mod_php, only SetHandler can properly override that to activate the reverse proxy.

The If condition says that this proxying will only be done when the request corresponds to an actual *.php file in the Web space. That is the same way the mod_php installation works, and it may guard against any possible silliness involving requests sent to the proxy for things that are not actually PHP code. But it probably also made it that much harder for me to ever get the PHP-FPM status pages to be served through Apache; I eventually gave up on doing that.

The argument to SetHandler includes an fcgi:// URL pointing at port 9000 on localhost, but PHP-FPM will not really listen to that port. It actually listens on the Unix socket named earlier in the argument string. The fcgi:// URL is a syntactic requirement of Apache; and it uses the port number (possibly also the hostname of "localhost") to distinguish between different "workers" in the reverse-proxy configuration. When I later configured other PHP-FPM pools, I would choose a different port number for each (9001, 9002, etc.) so that Apache could configure them separately - even though, as described below, I ended up choosing not to do specific per-worker configuration in Apache.
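For example, once other pools existed, the handler section for a directory served by a different pool would differ in both the socket path and the fake port number (the pool name and port here are illustrative of the pattern):

```apacheconf
<Files "*.php">
<If "-f %{REQUEST_FILENAME}">
# same socket-plus-fake-port pattern, different pool
SetHandler "proxy:unix:/srv/php-fpm/sockets/matomo|fcgi://localhost:9002/"
</If>
</Files>
```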

Disabling mod_php

Once I had all my Web space either configured to block access to *.php files, or configured to use the reverse proxy to send requests for *.php files to PHP-FPM, I tested all my scripts. At this point the scripts were supposed to be running in the same environment as they had been under mod_php, only inside PHP-FPM instead of inside Apache. They all seemed to work, so the next step was to remove mod_php from my configuration. That meant just commenting out the include line for mod_php.conf in my httpd.conf and restarting the server.

Then I tested all the scripts again to make sure that they could still run - that is, none had secretly been depending on mod_php after I thought I had cut them all over to PHP-FPM.

In the matter of connection reuse

The FPM in PHP-FPM is for "FastCGI Process Manager" and one of its main functions is to manage processes: it starts and stops backend processes in order to try to optimize performance and resource use. You want to have some spare processes running so that when a request comes in, it can go straight to an idle process without having to spin up a new one. But you want processes that remain idle for a long time to be reaped so that the memory they use will be available for other purposes - and that is especially important when there are a lot of different process pools sharing the memory of a small server. There are a bunch of options for PHP-FPM controlling how many processes to start, how many spare ones to keep idle, how long to keep extra idle processes around, and so on. There are also several different selectable entire algorithms for this management, and all this stuff can be configured on a per-pool basis.
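As a sketch, the three selectable management algorithms correspond to pool-file settings like these (one "pm" choice per pool; the numeric values are illustrative, not mine):

```ini
; static: a fixed number of processes, always running
pm = static
pm.max_children = 8

; dynamic: scale between bounds, keeping some idle processes warm
;pm = dynamic
;pm.start_servers = 2
;pm.min_spare_servers = 1
;pm.max_spare_servers = 3

; ondemand: start processes only when requests arrive
;pm = ondemand
;pm.process_idle_timeout = 60
```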

Apache's reverse proxy system has to similarly manage virtual connections between Apache and the backend server (in this case PHP-FPM). It can be configured either to set up and tear down such a connection on every request, or to reuse them, with a maximum number of spare connections to keep and timeouts and so on, similar to how PHP-FPM handles processes. Reusing these connections is supposed to improve efficiency because, just as with processes, there's a cost associated with opening a new connection if a request comes in and there is no spare connection for it.

My first thought was that I wanted to turn on connection reuse for Apache's reverse proxy. I spent a few hours figuring out how to do that and eventually decided it was not a good idea after all and I would rather have a configuration with none of this type of connection reuse. I'm documenting it for future reference anyway. Connection reuse can be enabled by adding a section like the following to the Apache configuration. I put it in my http-php-fpm.conf file, to make it global to the whole server. That seems desirable because I don't want to depend on where, elsewhere in the Web space, I might be using any given pool.

<Proxy "fcgi://localhost:9000/">
ProxySet enablereuse=on
</Proxy>

Note that some documentation I saw seemed to say that connection reuse cannot be turned on in the case of a Unix-domain socket. That does not appear to be true, or if it was true at some point, it is no longer true. Connection reuse can certainly be turned on for Unix-domain sockets with the current versions of the software. Whether it's a good idea is a separate question.

To turn on the feature there needs to be a section like that for each pool (at this point only one exists, but I'm adding more later), and the fcgi:// URL including hostname and port number need to match the ones that were used in the SetHandler directive that activates reverse proxying, above. This is true even though the port numbers are not real - no server listens on and no client connects to those TCP ports in my configuration. They are just used by Apache as indices to associate these Proxy sections in the config with the "workers" referred to in the SetHandler directives.

It may be possible to merge the Proxy sections for all pools into a single section by means of some kind of wildcard match. I have few enough pools that I preferred not to explore that, figuring I might want different settings for different pools and it would be better to give each one its own configuration section even if they ended up being identical but for port number.

I decided that I do not actually want connection reuse between Apache and PHP-FPM, because connection reuse seems to defeat PHP-FPM's process management, and having that work is more important. The documentation of mod_proxy_fcgi contains a warning which in hindsight I guess refers to this issue, but it was not clear to me when I first read it. It says:

Enable connection reuse to a FCGI backend like PHP-FPM

Please keep in mind that PHP-FPM (at the time of writing, February 2018) uses a prefork model, namely each of its worker processes can handle one connection at the time. By default mod_proxy (configured with enablereuse=on) allows a connection pool of ThreadsPerChild connections to the backend for each httpd process when using a threaded mpm (like worker or event), so the following use cases should be taken into account:

  • Under HTTP/1.1 load it will likely cause the creation of up to MaxRequestWorkers connections to the FCGI backend.

What actually happens with connection reuse is that Apache opens as many connections as PHP-FPM will allow and never closes them; PHP-FPM starts a process for every connection, up to its configured maximum, and keeps each process alive as long as Apache keeps the connection open. The result is that the pool just sits at its configured maximum number of processes all the time.

It is supposed to be the case that you can give Apache other options along with "enablereuse=on" telling it to expire idle connections after a timeout, but in my experience, it never actually does expire connections no matter what options it is given. Even if all the options worked as advertised, Apache's features for controlling reused connections are less flexible than PHP-FPM's features for controlling the process pools, so given that the concepts of "reused connection" and "pool process" end up being functionally equivalent under connection reuse, it is preferable to have PHP-FPM manage them instead of Apache.
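For the record, those idle-expiry options look something like this ("ttl" is the number of seconds an inactive connection may live, and "max" caps connections per worker); this is a sketch of what the documentation describes, not a configuration I can vouch for, since in my testing the expiry never happened:

```apacheconf
<Proxy "fcgi://localhost:9000/">
ProxySet enablereuse=on ttl=60 max=10
</Proxy>
```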

Not turning on reused connections has a slight performance cost because of the added overhead of setting up and tearing down a connection on every incoming HTTP request, but this overhead is apparently very small when using a Unix-domain socket between Apache and PHP-FPM, as I am. It might be a little more if I were using a TCP socket, especially if doing so across a network. Given that I really wanted to use PHP-FPM's process management and have it work, it seemed okay to pay the small cost of not reusing connections between Apache and PHP-FPM.

Note that a Proxy section like this could also be added to the config to control other reverse proxy worker options besides connection reuse, if desired. I read through the list of such options in the ProxyPass directive documentation and didn't see any I wanted to change. If leaving them all on the defaults, then it's possible to omit the Proxy section from the config, and that is what I ended up doing.

Creating new pools

At this point I had all my PHP scripts running through PHP-FPM and the reverse proxy, but they were still running in a single pool. The next step was to split that pool into six, for what would eventually be my six chroot jails. Just splitting up the pool, without changing anything else, was the first priority because once I finished doing that I'd be able to make config changes on one pool at a time without affecting the others, allowing me to experiment with the less-critical pools first, learn exactly how to get them running, and then proceed to the more-critical pools in less fear of breaking something and taking down important applications.

To just create a pool I had to add a new config file to the php-fpm.d directory. Here's such a file for the matomo pool.

[matomo]
prefix = /srv/php-fpm
user = apache
group = apache
listen = /srv/php-fpm/sockets/matomo
listen.owner = apache
listen.group = apache
listen.mode = 0600
pm = dynamic
pm.max_children = 18
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 500

This is substantially similar to my earlier example for the apache pool. Note that it still uses the apache UID/GID settings; switching to a new Unix user will be done later, because that step is a little dangerous (I need to get the permissions right on all the files the PHP code will touch or my applications will break), so I'm not doing it instantly. Also note that the pool has its own socket name in /srv/php-fpm/sockets.

This pool uses the "dynamic" process management algorithm, which tries to keep the number of idle processes within a certain range, while also obeying a restriction on the absolute maximum total number of processes (idle or not). It always keeps a nonzero minimum number of processes active, so this pool will appear in the process table all the time, and when the first visitor shows up after a period of idleness, there will always be a process ready to take the first HTTP request (implying low latency on that first request). I wanted to keep idle processes on "hot" standby like this for the matomo pool because that is the one that runs my analytics tracking script; visitors to my storefront will be hitting that script on every page and I want it to respond as fast as possible.

The settings here specify a maximum of 18 processes total (pm.max_children); a target of between 2 and 4 idle processes at all times (pm.min_spare_servers and pm.max_spare_servers); and to start 3 processes initially (pm.start_servers). Based on my estimates of the memory consumed per process and how much memory I'm willing to spend on Matomo in relation to the capacity of my server, I think those numbers are about right.

I won't dump all my pool configs here but as an example here is the similar file for the chords pool, which has a different configuration.

[chords]
prefix = /srv/php-fpm
user = apache
group = apache
listen = /srv/php-fpm/sockets/chords
listen.owner = apache
listen.group = apache
listen.mode = 0600
process.priority = 20
pm = ondemand
pm.max_children = 6
pm.process_idle_timeout = 300
pm.max_requests = 500

This one uses the "ondemand" process management algorithm, which does not try to keep processes active all the time. Instead, it only spins up a server process in response to an incoming request for PHP execution in the relevant pool. That means less resource consumption when no requests are being made to the pool, but some added latency when requests first come in after an idle period. This algorithm also respects a limit on the maximum number of processes (pm.max_children, in this case 6) and will keep any process that goes idle alive until a timeout expires (pm.process_idle_timeout, in this case 300 seconds) to handle any continued traffic that shows up. These numbers were chosen by again estimating memory consumption, and by guessing the maximum time that might elapse between page loads while someone was browsing the database.

Scripts in the chords pool handle engraving of guitar fret diagrams with Lilypond, an activity that consumes a whole lot of CPU power. I don't want someone (especially, a bot) who may be browsing the chord database to slow down the whole server for higher-priority applications should a request for those come in, so I added a process.priority setting saying that processes in this pool will be "niced" to the lowest-ranking priority available.

Upon creating these files and restarting the PHP-FPM server, PHP-FPM will be ready to run PHP scripts in the new pools, but it remains to tell Apache to actually send script requests there. At the moment it is still all pointing at the apache pool from earlier.

Switching Web space to use the new pools

This is easy because it's the same operation performed earlier, of adding or modifying a SetHandler inside a Files/If section in a directory .htaccess file. The added sections look something like this.

<Files "*.php">
<If "-f %{REQUEST_FILENAME}">
SetHandler "proxy:unix:/srv/php-fpm/sockets/wiki|fcgi://localhost:9001/"
</If>
</Files>

Note that each pool has its own socket path which must be included, and I also assigned each pool a port number even though they are fake, to allow linking with a Proxy section elsewhere (see comments above about Proxy sections and connection reuse). When applying this logic to an entire virtual server's Web space, the new section would replace the earlier section I'd created to point it at the apache pool. But my desired configuration also involved having some subdirectories with different pool assignment from their parents, so those got new sections in their own .htaccess files. With these sections, children override parents, but inherit their parents' configuration if there is no override.

I did these a few directories at a time, with testing to make sure that nothing broke and that when I hit URLs in "ondemand" pools, the relevant processes actually did start (indicating that the requests really were going to the new pools). When I thought I had all my PHP files directed to new pools, I deleted the configuration file for the old apache pool, restarted the PHP-FPM server, and tested everything again to make sure there hadn't been anything missed.

New Unix users for the pools

Each pool/jail will have its own Unix user and group. I created a directory /srv/php-fpm/chroot to contain a subdirectory for each pool/jail; that will be both the chroot location and the home directory for the Unix user. The command for creating these looks like "useradd -d /srv/php-fpm/chroot/wiki -s /sbin/nologin -U wiki", where -d is the option to designate the home directory, -s sets the login shell and /sbin/nologin is a fake shell that prevents logins, and -U tells useradd to also create a group of the same name and add the new user to that group.
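Since there are six of these to create, a small shell loop can generate the commands; this sketch just prints them for review rather than executing them (run the output as root once it looks right):

```shell
# Print the useradd invocations for all six pool users/groups.
# -d: home directory (also the future chroot), -s: no-login shell,
# -U: create a matching group and put the new user in it.
pools="wiki matomo edifying miscphp chords oscommerce"
for pool in $pools; do
  printf 'useradd -d /srv/php-fpm/chroot/%s -s /sbin/nologin -U %s\n' \
    "$pool" "$pool"
done
```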

Changing the PHP-FPM configuration to run each process pool under the appropriate new UID/GID values is easy (just change the "user" and "group" settings for the pool; not the "listen.owner" and "listen.group"), but before restarting the server into that configuration it's important to make sure the permissions on the Web space are appropriate.

The Web server, running as UID and GID "apache", needs to be able to read everything that it will statically serve. It needs to be able to find all the files that it will be telling PHP-FPM to execute, whether it can read them or not. PHP-FPM, running under the relevant pool's UID and GID, needs to be able to read everything that it will execute. PHP scripts may have a need to write to files or directories, depending on the script and how it interacts with the Web space, and any scripts must put appropriate permissions on any new files they create. Ideally, no other file access should be possible.

In practice, what I did was set my unprivileged admin account ("mskala") to be in each of the per-pool groups. Then I made the Web space, as a rule, 0644 permissions with owner mskala and group apache. That's appropriate for static files to be served by the Apache server: it can read them, actually everybody can read them (which is okay because these are files meant to be publicly served on the Web), but only the unprivileged admin account can write them. Where it was necessary for PHP scripts to write to a file or directory, I changed the group to the appropriate pool/jail group and added group write. New files created by PHP scripts would normally end up owned by the pool/jail user.

I could lock this down even tighter by removing world-read and changing groups as appropriate to make sure that the Web server cannot read any files it's not meant to serve as static files, and that the PHP scripts cannot access any files they don't need to touch. But I only really did that for a few sensitive static files that are access-controlled by Apache and shouldn't be visible to scripts or the world. For files that are meant to be served in public Web space anyway, there's little benefit in carefully preventing the scripts from reading them; and having Apache able to read a script is only a problem if we think that our existing configuration of "Never serve a script as a Web document, they should only be accessible through the proxy" will fail. Normally, any sensitive data would be outside the Web space, not served by Apache, and unavailable to PHP by reason of the chroot jailing we're about to implement.

There are a lot of different ways scripts interact with the filesystem, including stuff like cron jobs outside my Apache/PHP installations that write files into Web space to be accessed by Apache and PHP. So it took some testing and debugging to make sure that everything would still work once the PHP scripts were running under their new UIDs. However, world read on most files made the configuration pretty forgiving even if some file ownerships ended up not being what I'd really intended; it's only write permissions that need to be really carefully tested, because those are locked down much more tightly.

Organizing the data file space

In my original inventory I found that a lot of my home-written PHP scripts were using flat file databases stashed in different locations around the system: some actually in Web space with .htaccess files preventing Apache from serving them, some in private directories elsewhere, a few reading files directly out of mskala's home directory, and so on. In order to simplify the construction of chroot jails, I created a single directory named like /srv/www/data/miscphp for all the out-of-Web-space data files needed by each jail that needed such files at all. I moved the data files to these directories and updated the scripts to point to the new locations.

Note when doing this that often an out-of-Web-space data file is the point of communication between a PHP script running in Web space, and something else, such as a cron job, that runs outside of Web space. It is necessary to update both if the location changes.

This policy wasn't absolute. In a few cases, I have relevant read-only data files under Subversion control along with the scripts that use them and then it's really advantageous to keep the data alongside the scripts, in Web space despite the fact that these files meet the other criteria for being moved out of Web space. But for data files already outside of Web space and scattered in semi-random places elsewhere in the filesystem, it made sense to centralize them in the new "data" directories instead of having to extend a lot of tentacles from the chroot jails to touch all the scattered files.

Setting up chroot jails

Each chroot jail ought to contain all, and only, the files that PHP-FPM, the scripts it will run in the relevant pool, and any other programs they invoke, will need to touch. Ideally, the jail should contain nothing sensitive. One thing that helps here is that a lot of the shared libraries and other resources needed by PHP-FPM are loaded before it goes chroot, and they remain accessible to the process after the chroot call, so they don't need to be included in the jail. Only files that will be opened after jailing need to be placed inside it.

Some thoughts on jails:

  • I have a script to build each jail and I edit what's in the jail by editing the script, instead of making changes directly to the jail. This way, I have a record (and the script is Subversion-controlled, too) of what's supposed to be inside.
  • The jail-creation script starts by destroying the existing jail, if there is one, and starting fresh. This way if the inmate processes riot and trash the jail (through compromise or misconfiguration) there's less danger of stuff from the trashed jail being carried over to the new one - although there can still be contagion through bind mounts, to the extent those are not read-only.
  • Large chunks of file space that need to be mapped into the jail are mapped via bind mounts, but isolated files are copied if the jail doesn't need to be able to write them for unjailed processes to see.

I'll go step by step through my jail-creation script for the miscphp jail, which is one of the simplest ones, and then talk about other issues specific to certain applications and features. This jail contains miscellaneous scripts that run on subdomains of northcoastsynthesis.com. Some of them need access to flat-file databases, and one (the backend for the IDSgrep Online kanji search page) needs to run an external program with some special library and data file dependencies of its own. But the scripts in this jail don't need to make outgoing network connections, connect to MariaDB, or similar.

First I set some variables that will be used in other parts of the file. These make it easier to cut and paste sections of code among my chroot-setup scripts, although there are enough differences and special needs per jail that it may not be practical to really abstract everything more thoroughly.

# config vars
# (values reconstructed from the description above; the WEBROOT path
# is illustrative)
CHROOTBASE=/srv/php-fpm/chroot
JAILNAME=miscphp
WEBROOT=/srv/www/htdocs/miscphp
WEBDATA=/srv/www/data/miscphp

The bind mounts represent windows from the jail into the general filesystem. It's important that they should not be active when we recursively destroy the jail, because we don't want to recurse into the general filesystem and destroy the stuff that these mounts are pointing at. So I test for a known file inside each mount and abort if it is present after the mount is supposed to be removed. Note that these umount lines will issue a harmless error message if the script is run when the mounts already don't exist, such as at startup.

# remove old bind mounts
umount $CHROOTBASE/$JAILNAME$WEBDATA
umount $CHROOTBASE/$JAILNAME$WEBROOT
umount $CHROOTBASE/$JAILNAME/srv/www/htdocs/audio
umount $CHROOTBASE/$JAILNAME/srv/www/htdocs/video
# failsafe: don't proceed if the old bind mounts still exist!
if test -f $CHROOTBASE/$JAILNAME$WEBDATA/mailing-list/subscribers ; then echo "Failed unmounting bind mounts for $JAILNAME" ; exit 1 ; fi
if test -f $CHROOTBASE/$JAILNAME$WEBROOT/index.html ; then echo "Failed unmounting bind mounts for $JAILNAME" ; exit 1 ; fi
if test -f $CHROOTBASE/$JAILNAME/srv/www/htdocs/audio/index.php ; then echo "Failed unmounting bind mounts for $JAILNAME" ; exit 1 ; fi
if test -f $CHROOTBASE/$JAILNAME/srv/www/htdocs/video/index.php ; then echo "Failed unmounting bind mounts for $JAILNAME" ; exit 1 ; fi

With the bind mounts removed, destroy the jail. The --one-file-system option to rm should be additional protection against deleting outside the bounds of the jail in case of a failed umount.

# blow away the old chroot jail
rm --one-file-system -rf $CHROOTBASE/$JAILNAME

Next, set up the skeleton of directories into which we will copy and bind-mount stuff. I create the full set of /bin, /usr/bin, and /usr/local/bin because the scripts in this jail use binaries from each and may try to do so with hardcoded paths, so I want each binary to be at the same location inside the jail as outside. In the case of libraries, though, I just put them all in /lib64 inside the jail even if they came from /usr/lib64 or /usr/local/lib64 on the outside, because programs searching for dynamic libraries don't really care and it cuts down on the complexity existing within the jail.

The $WEBDATA directory is my unified directory (created as described above) for flat-file databases used by PHP scripts in this jail. The $WEBROOT path is a mount point for bind-mounting the Web root of the main virtual host serving scripts in this jail. Because this jail actually also serves two other virtual hosts, I also create mount points for bind-mounting those vhosts' Web roots. I create a /tmp because some of my scripts need it; it's also useful as a place to put debug logs during testing; and having a /tmp may also be needed by PHP-FPM itself in at least some configurations, though I'm not certain of that. Finally, the directory /usr/local/share is needed for the dictionaries used by IDSgrep (a command-line program that one of my scripts wants to invoke).

# set up directory structure
mkdir -p $CHROOTBASE/$JAILNAME/bin
mkdir -p $CHROOTBASE/$JAILNAME/lib64
mkdir -p $CHROOTBASE/$JAILNAME/tmp
mkdir -p $CHROOTBASE/$JAILNAME/usr/bin
mkdir -p $CHROOTBASE/$JAILNAME/usr/local/bin
mkdir -p $CHROOTBASE/$JAILNAME/usr/local/share
mkdir -p $CHROOTBASE/$JAILNAME$WEBROOT
mkdir -p $CHROOTBASE/$JAILNAME$WEBDATA
mkdir -p $CHROOTBASE/$JAILNAME/srv/www/htdocs/audio
mkdir -p $CHROOTBASE/$JAILNAME/srv/www/htdocs/video

Next I copy over all libraries needed by different command-line programs that will be run by scripts inside the jail. Note that bash and its needed libraries are required in order to run any other command-line programs. The list of required libraries for a program can be found by running the ldd utility against the program binary, although some very complicated programs with "plugin" kinds of interfaces may possibly also try to load other libraries not revealed by ldd. Most of the libraries in this example are pretty common; exceptions are the two from /usr/local/lib64: libbdd is needed by IDSgrep and libkyotocabinet is needed by my local search engine's backend query program.

It may be appropriate to automate this process more, that is, start with a list of binaries instead of a list of libraries, and have the script automatically process the binaries with ldd and copy over whatever libraries are needed. That would be less error-prone as the list of binaries changes and as software upgrades introduce new version numbers. So far, I haven't explored that idea very far.
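A minimal sketch of that automation, assuming a glibc system where ldd is available; here $JAIL stands in for $CHROOTBASE/$JAILNAME (a scratch directory for demonstration) and the binary list is an example:

```shell
# Sketch of ldd-driven library copying; the binary list is an example.
JAIL=$(mktemp -d)
mkdir -p "$JAIL/lib64"
for bin in /bin/sh /usr/bin/head ; do
  # print the resolved path of each dependency, including the dynamic loader
  ldd "$bin" | awk '$2 == "=>" { print $3 } $1 ~ "^/" { print $1 }' \
  | while read -r lib ; do
      # skip vdso and anything else without a real file behind it
      [ -f "$lib" ] && cp "$lib" "$JAIL/lib64/"
    done
done
ls "$JAIL/lib64"
```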

# copy over libraries
for lib in ld-linux-x86-64.so.2 libc.so.6 libdl.so.2 libm.so.6 \
  libpcre.so.1 libpthread.so.0 libtinfo.so.6 libz.so.1 ; \
do cp /lib64/$lib $CHROOTBASE/$JAILNAME/lib64/ ; \
done
cp /usr/lib64/libstdc++.so.6 $CHROOTBASE/$JAILNAME/lib64/
cp /usr/lib64/libgcc_s.so.1 $CHROOTBASE/$JAILNAME/lib64/
cp /usr/local/lib64/libbdd.so.0 $CHROOTBASE/$JAILNAME/lib64/
cp /usr/local/lib64/libkyotocabinet.so.16 $CHROOTBASE/$JAILNAME/lib64/

Executables invoked by PHP scripts, directly or indirectly, get copied over here. The shell is needed to invoke anything else, and I symlink sh to bash because many programs look for the hardcoded path "/bin/sh". It's necessary to include any binaries invoked by shell scripts, even indirectly - note that some programs wrap their binaries in shell scripts instead of having the user-typed command be the binary directly. And although not shown here, a couple of binaries actually come into this particular jail through the bind mounts instead of being copied at this point.

# copy over executables
cp /bin/bash $CHROOTBASE/$JAILNAME/bin/
ln -s bash $CHROOTBASE/$JAILNAME/bin/sh
cp /usr/bin/head $CHROOTBASE/$JAILNAME/usr/bin/
cp /usr/bin/sort $CHROOTBASE/$JAILNAME/usr/bin/
cp /usr/local/bin/idsgrep $CHROOTBASE/$JAILNAME/usr/local/bin/

For this particular jail I'm also duplicating /usr/local/share/dict, because IDSgrep wants to search it. That wouldn't be necessary in most other jails.

# copy over dictionaries
cp -a /usr/local/share/dict $CHROOTBASE/$JAILNAME/usr/local/share/

At this point - importantly, with none of the bind mounts active - I set basic permissions for everything. These are tightly locked. Files and directories end up owned by root, readable but not writable by the jailed processes through group permissions, world inaccessible - except /tmp, which gets the usual world-everything permissions and sticky bit. Anything that the jailed processes are allowed to write to, except temporary files, is going to come in through the bind mounts.

# set basic ownership and permissions
chmod -R g=u-w,o-rwx $CHROOTBASE/$JAILNAME
chmod a+rwx,o+t $CHROOTBASE/$JAILNAME/tmp

Finally, I do the bind mounts. The $WEBROOT bind mount would probably be required in almost any such jail so that PHP-FPM can read the scripts it's supposed to execute; in this case because there are also two more Web roots of other virtual hosts served by processes in this pool, those need to be mounted too. In this jail, because there is a $WEBDATA directory, that needs a bind mount. There aren't any other bind mounts in this jail; a jail containing a large application with other needs might need others. Note that the list of bind mounts needs to be kept synchronized with the corresponding unmount commands near the top of the file.

# bind mounts
mount -o bind $WEBDATA $CHROOTBASE/$JAILNAME$WEBDATA
mount -o bind $WEBROOT $CHROOTBASE/$JAILNAME$WEBROOT
mount -o bind /srv/www/htdocs/audio \
  $CHROOTBASE/$JAILNAME/srv/www/htdocs/audio
mount -o bind /srv/www/htdocs/video \
  $CHROOTBASE/$JAILNAME/srv/www/htdocs/video

Whenever this script is run to recreate the jail, it's probably necessary to restart PHP-FPM to make sure all pool processes are pointing at the new version and not into the unlinked old one.

Running external software in chroot

Most of my homegrown PHP scripts, and many externally-written applications, want to run external software with the system(), popen(), and similar calls. PHP uses /bin/sh to do this. I copied bash and its library dependencies into each of my chroot jails, and created a symlink from /bin/sh to /bin/bash.

strace in chroot

It can be useful to run things under strace inside the chroot, especially when debugging why something fails to run. The strace program has its own library dependencies and my suggestion is to put separate lines in the script file for copying those over, even if they duplicate libraries used by more permanent inmates, so that these lines can easily be commented out when strace isn't desired. In the final "production" version a jail should probably not contain strace because it's too powerful to make available to potential attackers.

# enable for strace
# for lib in librt.so.1 libdw.so.1 libpthread.so.0 libelf.so.1 \
# libz.so.1 liblzma.so.5 libbz2.so.1 ; \
# do cp /lib64/$lib $CHROOTBASE/$JAILNAME/lib64/ ; \
# done
# cp /usr/bin/strace $CHROOTBASE/$JAILNAME/usr/bin/

Whenever using strace, I would edit my scripts to call the target program under strace and direct its output into (the chroot jail's) /tmp. Running strace with the -ff and -o options allows it to trace into child processes, which is important in the frequent case where whatever's failing is a shell script; plain strace will just trace the shell itself, which is probably operating normally.
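For example, a script could temporarily wrap a failing invocation like this (the wrapped command here is hypothetical); this is a fragment meant to run inside the jail, not a standalone program:

```shell
# wrap the failing command in strace; -ff follows child processes and -o
# names the per-process output files /tmp/trace.<pid>
strace -ff -o /tmp/trace /bin/sh /srv/www/htdocs/somescript.sh
# afterward, inspect the /tmp/trace.<pid> files for failed open()/execve() calls
```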

MariaDB (MySQL) in chroot

PHP applications often want access to MySQL, which is MariaDB on my system. I was able to get this working just by doing a bind mount of /var/run/mysql to allow PHP-FPM to connect to the Unix socket of the MariaDB server. The various libraries, etc., needed for PHP's MySQL bindings are preloaded before the pool processes go chroot, so they don't need to be included in the jail. In a more complicated configuration, such as when connecting to the database server across a network, it might be necessary to make provisions for stuff like DNS resolution as described below.
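Following the pattern of the jail-creation script above, that socket bind mount amounts to a fragment like this (it needs root, and a matching umount line plus failsafe belongs at the top of the script):

```shell
# expose the MariaDB Unix socket directory inside the jail
mkdir -p $CHROOTBASE/$JAILNAME/var/run/mysql
mount -o bind /var/run/mysql $CHROOTBASE/$JAILNAME/var/run/mysql
```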

If PHP scripts invoke external programs and those external programs need to talk to MySQL - including when the external program being invoked is the command-line mysql interface - then it'll probably be necessary to bring some MySQL-related shared libraries into the jail.

TeX in chroot

A really proper MediaWiki installation ought to have access to TeX, which is a huge software system with its own package manager and directory structure. It is used for displaying math in Wiki articles, which may or may not be important on a given Wiki. Since I also need TeX for the astrological chart system, it made sense to go ahead and get it working with MediaWiki too.

My MediaWiki installation uses the "png" method, which according to the MediaWiki docs is now deprecated or something. It was the one that I found easiest to get working at the time of the install and if I ever have to switch to their new recommended one, it sounds like that will involve running yet another server. For this article, though, I'm not going to go into MediaWiki math configuration but only talk about how I got my existing and working configuration to work inside the PHP-FPM chroot jail.

Possibly of interest is this article I wrote some time ago about the minimal TeXLive installation for MediaWiki.

My TeXLive installation in the normal file tree lives in /srv/texlive, symlinked at /usr/local/texlive, which is a hardcoded path where some programs look for it, so that mount point and symlink need to be created during the directory creation step. The /srv/texlive tree also needs to be bind mounted.

mkdir -p $CHROOTBASE/$JAILNAME/srv/texlive
ln -s /srv/texlive $CHROOTBASE/$JAILNAME/usr/local/texlive

There are very few shared library requirements for MediaWiki to invoke TeX (because TeX doesn't really use shared libraries; it does its own thing instead), and MediaWiki invokes TeX through the "texvc" program, which is actually in MediaWiki's own Web space, so that doesn't need to be copied over but comes in with the bind mount of the main MediaWiki Web space. However, the call also goes through some kind of resource-limiting shell script (limit.sh - go read it), so the utilities used by that shell script need to be included.

The command-line TeX programs that need to be invoked should appear in /usr/local/bin, but they should not be copied there. For my MediaWiki installation these are "latex" and "dvipng"; the astro chart application uses several others. They have to be symlinks to the real binaries which live in TeX's own directory structure. This requirement is because when the binaries run, they automatically trace the symlinks to find their real locations and then search surrounding directories for the many, many necessary configuration, macro, precompiled dump, font, cache, and other files that they need.

# copy over executables
cp /bin/bash $CHROOTBASE/$JAILNAME/bin/
ln -s bash $CHROOTBASE/$JAILNAME/bin/sh
for exec in mkdir rmdir sleep timeout ; \
do cp /usr/bin/$exec $CHROOTBASE/$JAILNAME/usr/bin/ ; \
done
# these need to be symlinks for TeX's file searching to work
ln -s /usr/local/texlive/2018/bin/x86_64-linux/latex \
  $CHROOTBASE/$JAILNAME/usr/local/bin/latex
ln -s /usr/local/texlive/2018/bin/x86_64-linux/dvipng \
  $CHROOTBASE/$JAILNAME/usr/local/bin/dvipng

Finally, note that TeX needs /etc/localtime and some associated scripts need /dev/null, so these must be put in the jail.

mkdir -p $CHROOTBASE/$JAILNAME/etc $CHROOTBASE/$JAILNAME/dev
cp /etc/localtime $CHROOTBASE/$JAILNAME/etc/
mknod -m 0666 $CHROOTBASE/$JAILNAME/dev/null c 1 3

Although my MediaWiki installation does not need this, my other pool/jail that invokes TeX actually also requires ghostscript in the jail for ps/pdf conversion, and that calls for some care. Ghostscript is another large application which needs to refer to its own structure of data files, and I ended up bind mounting the /usr/share/ghostscript directory for it to use. Unlike the TeX native programs, ghostscript's command-line programs require a long list of shared library dependencies (mostly from /usr/lib64) and I had to add all of them to my list to copy over. And ghostscript's command-line programs are often really shell scripts that invoke the binaries, or even shell scripts that invoke other shell scripts through two or more levels before getting to the binaries, so it's necessary to chase through all of those to make sure that every needed binary or shell script ends up included in the jail. This process is tedious but not really difficult. I don't include the detailed list of what I copied over because it would be highly dependent on the specifics of how the jailed PHP code uses ghostscript.

DNS in chroot

If programs inside the jail (whether PHP-written or external) need to make outgoing connections or analyse incoming connections in certain ways, then they need to be able to do DNS resolution. I found that it worked to just include the files /etc/resolv.conf and /lib64/libnss_dns.so.2 in the jail. The shared library is apparently loaded late; it doesn't appear in the listing from "ldd", but programs that want to do DNS, including the PHP interpreter itself, will try to load it on demand and will fail if it's not present. Note that the resolv.conf file may change automatically from time to time if you are running in a DHCP environment; if you want the jailed copy to track changes in the external copy, you will need to deal with that in some way (such as a bind mount instead of just copying the file). Mine is not expected to change frequently, and for the moment at least, I'm okay with having jailed DNS break when my nameservers change until I manually refresh it.
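In jail-script terms, that comes down to a fragment like this (it creates an /etc directory in the jail in case one doesn't already exist):

```shell
# minimal DNS support: resolver config plus the lazily-loaded NSS module
mkdir -p $CHROOTBASE/$JAILNAME/etc
cp /etc/resolv.conf $CHROOTBASE/$JAILNAME/etc/
cp /lib64/libnss_dns.so.2 $CHROOTBASE/$JAILNAME/lib64/
```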

I don't know how universal this way of enabling DNS may be; it worked for me, but I've seen a lot of "tutorials" that suggest much more complicated things to do involving running a caching DNS server and making that available to the jailed processes. Information about this topic on the Net is hard to search for because instructions for running a DNS client inside a chroot jail tend to get mixed up with instructions for running a DNS server (like BIND) inside its own chroot jail, which is a different project.

Matomo, being a Web analytics platform, quite likely needs DNS resolution. Any large application that tries to "phone home" or automatically update itself will probably need both DNS and HTTPS (next topic). DNS is also a requirement for sending email from within PHP.

HTTPS clients in chroot

Incoming HTTPS connections are handled by Apache and switching the PHP interpreter doesn't change the configuration for them. But some PHP applications also want to make outgoing HTTPS connections, for instance to update themselves or their data files. That normally entails using DNS (previous section) but they also usually need access to one or more certificate files, used to verify the other end's credentials.

PHP has built-in HTTPS client functionality, but it appears to be disabled globally on my site (allow_url_fopen = 0) and I didn't want to change that given everything seemed to be working fine before the interpreter switch, so I'm not sure what might be needed to make the built-in support work. Instead, my PHP applications that do outgoing HTTPS at all, all seem to be using the CURL library. In addition to the requirements for making DNS work, I was able to get CURL-based HTTPS client functions working just by copying /usr/share/curl/ca-bundle.crt into my jails.

Sending email from within chrooted PHP

PHP includes a built-in function called mail() for sending email messages. People, sometimes including the PHP team themselves, claim that this function sucks and should not be used, but applications do use it. It is designed to work by invoking command-line sendmail, or on many systems, some unworthy newfangled MTA that presents a sendmail-like command line interface for compatibility.

Bringing real sendmail into a chroot jail would be a problem because it has many dependencies, wants to touch a lot of sensitive places in the filesystem, and wants to be setuid root. The usually recommended course of action is to run something called "mini_sendmail", which mimics the command-line interface of real sendmail just far enough to take a message as input and pass the message to the submission port on localhost, where it's assumed one is running a more serious MTA. This piece of software was last updated in 2014 and it has three issues preventing it from working as-is in a chroot jail of the kind I'm describing, at least with MediaWiki.

  • It attempts to find out the currently logged-in user's name with getpwuid(), which depends on a bunch of stuff like /etc/passwd that it would be better not to have inside the jail, and it dies when that fails.
  • It does not correctly implement one specific command-line syntax of the "-f" option (when "-f" and the value for "-f" are in two separate arguments) and that particular syntax is used by MediaWiki. It might be easy to patch MediaWiki to use different syntax that mini_sendmail can understand, but such a patch would be fragile and would possibly need to be repeated for other PHP applications that also use this syntax.
  • It misparses From: headers if they are in the valid syntax "Realname <username@example.com>", changing them into an invalid syntax that will be rejected by SMTP servers, and MediaWiki triggers this issue by usually generating such headers.

There is a fork by Volkan Kucucakar on Github which fixes the first of these problems by adding a command-line option to specify the username. The fork was last updated in 2016. I have submitted a pull request fixing the second issue. But I don't know that that project will ever merge the pull request or be touched again, and I don't really want to encourage people to use Github anyway, so I think that I will probably not submit my patch for the third issue there. Instead, I'll eventually package it up and post it in my own Web space. How fast I do that will depend on how much interest I hear from the community - it's a nontrivial amount of work packaging something like that properly, not necessary for my own use of the software, and only worthwhile if people are going to link to it.

Anyway, patched mini_sendmail is a small binary that can be brought into the chroot jail (renamed to "/usr/bin/sendmail" for compatibility). The jail needs to be configured to support DNS, as above. And the PHP interpreter needs to be told, when it invokes mini_sendmail, to pass the username for the jail so that mini_sendmail won't attempt a getpwuid() and die. That is done by adding a line to the PHP-FPM process pool config file.

php_admin_value[sendmail_path] = "/usr/bin/sendmail -t -i --username=wiki"

The syntax for this command is sensitive. The mini_sendmail program (as another effect of the fact that it's doing its command-line parsing by hand instead of calling a proper library) requires exactly that syntax with the equals sign for the --username= option, and having an equals sign in the string value means that the value assigned to php_admin_value[sendmail_path] must be in quotation marks or PHP-FPM will choke.


These notes cover most of what came up during my project of putting my PHP applications in separate PHP-FPM process pools with chroot jails under Apache. I hope it's of some use. I'll probably update these notes further as I discover other relevant points.
