The Container Tools team at Google is happy to announce the release of the Container Structure Test framework. This framework provides a convenient and powerful way to verify the contents and structure of your containers. We’ve been using this framework at Google to test all of our team’s released containers for over a year now, and we’re excited to finally share it with the public.

The framework supports four types of tests:
- Command Tests - to run a command inside your container image and verify the output or error it produces
- File Existence Tests - to check the existence of a file in a specific location in the image’s filesystem
- File Content Tests - to check the contents and metadata of a file in the filesystem
- A unique Metadata Test - to verify configuration and metadata of the container itself
Command Tests

The Command Tests give the user a way to specify a set of commands to run inside a container, and verify that the output, error, and exit code are as expected. An example configuration looks like this:
```yaml
globalEnvVars:
  - key: "VIRTUAL_ENV"
    value: "/env"
  - key: "PATH"
    value: "/env/bin:$PATH"

commandTests:
  # check that the python binary is in the correct location
  - name: "python installation"
    command: "which"
    args: ["python"]
    expectedOutput: ["/usr/bin/python\n"]

  # setup a virtualenv, and verify the correct python binary is run
  - name: "python in virtualenv"
    setup: [["virtualenv", "/env"]]
    command: "which"
    args: ["python"]
    expectedOutput: ["/env/bin/python\n"]

  # setup a virtualenv, install gunicorn, and verify the installation
  - name: "gunicorn flask"
    setup: [["virtualenv", "/env"], ["pip", "install", "gunicorn", "flask"]]
    command: "which"
    args: ["gunicorn"]
    expectedOutput: ["/env/bin/gunicorn"]
```

Regexes are used to match the expected output and error of each command (or the excluded output/error, if you want to make sure something didn’t happen). Additionally, setup and teardown commands can be run with each individual test, and environment variables can be set for each individual test or globally for the entire test run (shown in the example).
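To make the matching semantics concrete, here is a small Python sketch of how a list of expected and excluded regexes might be applied to a command’s output. `check_output` is a hypothetical helper for illustration, not the framework’s actual code:

```python
import re

def check_output(output, expected=(), excluded=()):
    """Return True iff every 'expected' regex matches the output
    and no 'excluded' regex does (mirroring expectedOutput /
    excludedOutput in the config above)."""
    return (all(re.search(p, output) for p in expected)
            and not any(re.search(p, output) for p in excluded))

# The 'python installation' test above expects "/usr/bin/python\n".
print(check_output("/usr/bin/python\n", expected=[r"/usr/bin/python\n"]))  # True

# An excluded pattern passes only when it does NOT appear in the output.
print(check_output("/usr/local/bin/python\n",
                   excluded=[r"/usr/bin/python"]))  # True
```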
File Tests

File Tests allow users to verify the contents of an image’s filesystem. We can check for the existence of files, as well as examine the contents of individual files or directories. This can be particularly useful for ensuring that scripts, config files, or other runtime artifacts are in the correct places before shipping and running a container.
```yaml
fileExistenceTests:
  # check that the apt-packages text file exists and has the correct permissions
  - name: 'apt-packages'
    path: '/resources/apt-packages.txt'
    shouldExist: true
    permissions: '-rw-rw-r--'
```

Expected permissions and file mode can be specified for each file path in the form of a standard Unix permission string. As with the Command Tests’ excluded output/error, a boolean can be provided to these tests to tell the framework to verify that a file is *not* present in the filesystem.

Additionally, the File Content Tests verify the contents of files and directories in the filesystem. This can be useful for checking package or repository versions, or config file contents, among other things. Following the pattern of the previous tests, regexes are used to specify the expected or excluded contents.
```yaml
fileContentTests:
  # check that the default apt repository is set correctly
  - name: 'apt sources'
    path: '/etc/apt/sources.list'
    expectedContents: ['.*httpredir\.debian\.org/debian jessie main.*']

  # check that the retry policy is correctly specified
  - name: 'retry policy'
    path: '/etc/apt/apt.conf.d/apt-retry'
    expectedContents: ['Acquire::Retries 3;']
```
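To illustrate what these file checks amount to, here is a Python sketch that compares a file’s mode against a Unix permission string and matches its contents against expected regexes. `check_file` is a hypothetical helper for illustration, not the framework’s code; the demo uses a temporary file as a stand-in for a real config file:

```python
import os
import re
import stat
import tempfile

def check_file(path, permissions=None, expected_contents=()):
    """Check a file's Unix permission string (e.g. '-rw-rw-r--')
    and match its contents against a list of regexes."""
    if permissions is not None:
        if stat.filemode(os.stat(path).st_mode) != permissions:
            return False
    with open(path) as f:
        text = f.read()
    return all(re.search(p, text) for p in expected_contents)

# Demo on a temporary stand-in for /etc/apt/apt.conf.d/apt-retry.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write('Acquire::Retries 3;\n')
    path = f.name
os.chmod(path, 0o664)
print(check_file(path, permissions='-rw-rw-r--',
                 expected_contents=[r'Acquire::Retries 3;']))  # True
os.remove(path)
```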
Metadata Test

Unlike the previous tests, which can each be specified any number of times, the Metadata Test is a singleton test that verifies a container’s configuration. This is useful for making sure things specified in the Dockerfile (e.g. entrypoint, exposed ports, mounted volumes, etc.) are manifested correctly in a built container.
```yaml
metadataTest:
  env:
    - key: "VIRTUAL_ENV"
      value: "/env"
  exposedPorts: ["8080", "2345"]
  volumes: ["/test"]
  entrypoint:
  cmd: ["/bin/bash"]
  workdir: ["/app"]
```
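As a rough illustration (not the framework’s actual implementation), the kind of comparison the Metadata Test performs can be sketched in Python against an image config of the sort `docker inspect` returns. The `image_config` dict and `check_metadata` helper below are hypothetical:

```python
# A stand-in for a built image's config, shaped like `docker inspect` output.
image_config = {
    "Env": ["VIRTUAL_ENV=/env", "PATH=/env/bin:/usr/bin"],
    "ExposedPorts": {"8080/tcp": {}, "2345/tcp": {}},
    "Volumes": {"/test": {}},
    "Cmd": ["/bin/bash"],
    "WorkingDir": "/app",
}

def check_metadata(config, env=None, ports=None, workdir=None):
    """Verify that expected env vars, exposed ports, and workdir
    appear in the image config."""
    env = env or {}
    ports = ports or []
    ok = all(f"{k}={v}" in config["Env"] for k, v in env.items())
    ok = ok and all(f"{p}/tcp" in config["ExposedPorts"] for p in ports)
    if workdir is not None:
        ok = ok and config["WorkingDir"] == workdir
    return ok

print(check_metadata(image_config,
                     env={"VIRTUAL_ENV": "/env"},
                     ports=["8080", "2345"],
                     workdir="/app"))  # True
```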
Tiny Images

One interesting case that we’ve put focus on supporting is “tiny images.” We think keeping container sizes small is important, and sometimes the bare minimum in a container image might even exclude a shell. Users might be used to running something like:
`docker run -d "cat /etc/apt/sources.list && grep -rn 'httpredir.debian.org' image"`… but this breaks without a working shell in a container. With the structure test framework, we convert images to in-memory filesystem representations, so no shell is needed to examine the contents of an image!
Dockerless Test Runs

At their core, Docker images are just bundles of tarballs. One of the major use cases for these tests is running in CI systems, and often we can’t guarantee that we’ll have access to a working Docker daemon in these environments. To address this, we created a tar-based test driver, which can handle the execution of all file-related tests through simple tar manipulation. Command tests are currently not supported in this mode, since running commands in a container requires a container runtime.
This means that using the tar driver, we can retrieve images from a remote registry, convert them into filesystems on disk, and verify file contents and metadata all without a working Docker daemon on the host! Our container-diff library is leveraged here to do all the image processing; see our previous blog post for more information.
```
structure-test -test.v -driver tar -image gcr.io/google-appengine/python:latest structure-test-examples/python/python_file_tests.yaml
```
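For intuition, the core idea of the tar driver can be sketched in a few lines of Python: treat a layer as a plain tarball and check file contents directly, with no Docker daemon and no shell inside the image. This is only a conceptual sketch; the tiny in-memory tar below stands in for a real image layer:

```python
import io
import re
import tarfile

# Build a tiny in-memory tarball standing in for one image layer.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"deb http://httpredir.debian.org/debian jessie main\n"
    info = tarfile.TarInfo("etc/apt/sources.list")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Read a file's contents straight out of the tar and apply a regex,
# the way a file content test would -- no container runtime involved.
with tarfile.open(fileobj=buf, mode="r") as tar:
    contents = tar.extractfile("etc/apt/sources.list").read().decode()

print(bool(re.search(r"httpredir\.debian\.org/debian jessie main", contents)))  # True
```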
Running in Bazel

Structure tests can also be run natively through Bazel, using the “container_test” rule. Bazel provides convenient build rules for building Docker images, so the structure tests can be run as part of a build to ensure any newly built images are up to snuff before being released. Check out this example repo for a quick demonstration of how to incorporate these tests into a Bazel build.
We think this framework can be useful for anyone building and deploying their own containers in the wild, and we hope that by increasing the robustness of containers, it can help promote their use everywhere. For more detailed information on the test specifications, check out the documentation in our GitHub repository.
By Nick Kubala, Container Tools team