Category Archives: Systems Administration

Having Docker socket access is (probably) not a great idea

So, what’s the fuss about having access to the Docker socket? Well, by default, it is pretty insecure. How insecure? Very. This isn’t anything new, and others have written about it before, but I’m surprised people are still getting tripped up by it, as it isn’t properly advertised.

This isn’t an issue with Docker for Mac / Docker for Windows simply because the actual Docker installation runs in a virtual machine, so, at worst, you compromise the VM rather than the developer machine. It is an issue for people who develop under Linux or run Docker on servers.

The root of the problem (pun intended) is that the root user inside the container is also the root user on the host machine. Docker is supposed to isolate the process, but the isolation may fail (as it has in the past), or the kernel, which is shared, may have a vulnerability (as has also happened in the past).

While the shared kernel itself is unavoidable (after all, that’s the whole point of containers), the root user within the container being the root user on the host can be worked around with user namespaces. User namespaces have some drawbacks and missing functionality, so Docker, being Docker, chose convenience over security as the default, violating an important security principle (secure defaults).
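For completeness, here is a sketch of the daemon-wide opt-in for the stock dockerd. The `default` value makes the daemon create and use the subordinate UID/GID ranges of a `dockremap` user, so container root maps to an unprivileged host UID:

```json
{
  "userns-remap": "default"
}
```

This goes into /etc/docker/daemon.json and needs a daemon restart; the missing functionality mentioned above (e.g. some volume-permission and host-namespace use cases breaking) is exactly why it isn’t the default.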

The escalation from an unprivileged user to root, given that the user has access to the Docker socket, relies on a trick as old as *nix itself: setting the SUID bit.

In practical terms:

  1. Start a container with a volume mount from a path controlled by the unprivileged user.
  2. Set the SUID bit on a binary inside the container, a binary which is owned by root. Consequently, that’s the same UID 0 as the root user on the host, which is the crux of the matter. That binary needs to be placed into the volume mount path. Note that some binaries (such as sh or bash on newer distributions) are hardened and don’t elevate EUID and EGID to 0 despite SUID being set.
  3. Run the binary on the host with the SUID set and the file owned by root.
  4. …?
  5. Profit. Welcome to EUID 0.

Practical example:

vagrant$ docker run -v $(pwd):/target -it ubuntu:14.04 /bin/bash
root@<container id>:/# cp /bin/sh /target/
root@<container id>:/# chmod +s /target/sh
root@<container id>:/# exit
vagrant$ ./sh
# id
uid=1000(vagrant) gid=1000(vagrant) euid=0(root) egid=0(root) groups=0(root),1000(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
# cat /etc/shadow

The example uses an older image because its sh is not hardened, but you get the gist. Any binary can do damage, e.g. a SUID cat or tee can write arbitrary files with root privileges. With root access inside the container, installing packages from a repository is also possible, e.g. zsh is not hardened even on newer distributions.

For Linux developers, there’s no Docker for Linux. docker-machine still works for creating machines in VirtualBox (or other hypervisors, including remotely in the cloud). However, that has an expiration date, as boot2docker (the backend image for the VirtualBox driver) has been deprecated, and it recommends, wait for it, Docker Desktop (Windows or Mac), or the Linux runtime. Precisely the runtime which has the vulnerable defaults. Triple facepalm.

The stated reason for discontinuing boot2docker is the existing alternatives, but those alternatives either don’t exist for Linux distributions or are simply deprecated as well. With the others being mainly the same idea of a VM (I even maintained one at some point), or docker-machine still depending on boot2docker, I don’t see any easy fix.

Possible solutions:

  • Dust off my old Docker VM (which I still have). I wrote it with performance in mind, but for development purposes. It works cross-platform.
  • Try to build a newer boot2docker release. This may be more complicated as it involves upgrading both Tiny Core Linux and Docker itself, plus a host of VM drivers/additions/tools. For the time being, this seems like too much of a time commitment.

docker-machine supports an alternative release URL for boot2docker (if I’m reading the source code correctly, i.e. apiURL), so with some effort it should work without changing the docker-machine code. Maintaining boot2docker, on the other hand, is the bit that looks time consuming, far more than the 5 minutes it takes to build my Docker VM from scratch.

As I’ve mentioned servers, the main takeaway is simple: don’t give users other than root access to the Docker socket unless user namespaces are employed. If a legitimate use case makes user namespaces a no-op, then the Docker socket needs to be restricted to root only; otherwise, the risk of accidental machine compromise is too great, as it increases the attack surface by a significant margin.
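A quick audit sketch for a given host, assuming the default socket path: any member of the socket’s owning group (usually `docker`) is effectively root on that machine.

```shell
# Sketch: see who can reach the Docker socket. Any member of the socket's
# owning group is root-equivalent on the host.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    ls -l "$SOCK"               # owner, group and mode of the socket
    getent group docker || true # who is in the docker group, if it exists
else
    echo "no Docker socket at $SOCK"
fi
```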

Building a Chef Omnibus package for Raspbian / Raspberry Pi

There are various guides about how to get Chef on a Raspberry Pi, but none I could find about how to build a proper Chef client package. People used to Omnibus packages (Chef, ChefDK) expect a certain consistency when deploying stuff.

I’m using the pi user for the following script under Raspbian Buster:

sudo apt-get install build-essential git ruby bundler
git clone https://github.com/chef/chef.git
cd chef
# checkout the desired Chef release tag, for example
git checkout v15.7.32
cd omnibus
bundle install --without development --path vendor/bundle
sudo mkdir -p /var/cache/omnibus /opt/chef
sudo chown pi:pi /var/cache/omnibus /opt/chef # if building under the pi user
# git is being a bit of a git - use proper values on an actual box, unless it's just
# a build box
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
bundle exec omnibus build chef
# wait for an extreme amount of time...

# check the build results
ls -l pkg
total 32320
-rw-r--r-- 1 root root 33033164 Feb 7 22:07 chef_15.7.32+20200207193316-1_armhf.deb
-rw-r--r-- 1 root root 52348 Feb 7 22:07 chef_15.7.32+20200207193316-1_armhf.deb.metadata.json
-rw-r--r-- 1 root root 6894 Feb 7 22:07 version-manifest.json

dpkg -I pkg/chef_15.7.32+20200207193316-1_armhf.deb
new Debian package, version 2.0.
size 33033164 bytes: control archive=327544 bytes.
298 bytes, 11 lines control
1552093 bytes, 12722 lines md5sums
3190 bytes, 111 lines * postinst #!/bin/sh
1226 bytes, 50 lines * postrm #!/bin/sh
837 bytes, 23 lines * preinst #!/bin/sh
Package: chef
Version: 15.7.32+20200207193316-1
License: Chef EULA
Vendor: Omnibus <[email protected]>
Architecture: armhf
Maintainer: Chef Software, Inc. <[email protected]>
Installed-Size: 121364
Section: misc
Priority: extra
Description: The full stack of chef

In fact, other than the pi user, none of the above steps are Raspbian specific; they work on pretty much all Debian-based distributions. With the exception of the apt-get line, the steps are distribution agnostic, though I had to learn that the hard way.

After a long wait, behold: a Chef deb ready to be installed. The wait may be significantly shorter on a Raspberry Pi 3 or 4B, as Omnibus makes use of all CPU cores.

Emulating a Raspberry Pi

While this may not be necessary, I don’t always have a Raspberry Pi I can kill with package builds (read: stress — I’ve never actually had one fail). Finding the winning combination was quite the challenge. While the build benefits from better storage and more RAM, the CPU speed isn’t impressive; however, speed isn’t the purpose. While it’s possible to do this under native qemu, regardless of host OS, I went the VM route to have more predictable results. The macOS qemu is painful to work with anyway.

Vagrant to the rescue:

Vagrant.configure('2') do |config|
  config.vm.box = 'bento/ubuntu-16.04'
  config.vm.box_check_update = true

  config.vm.provider 'virtualbox' do |vb|
    vb.name = 'ubuntu-pi'
    vb.cpus = 4
    vb.memory = 2048
    vb.customize ['modifyvm', :id, '--nictype1', 'virtio']
  end
end

This is a fairly standard Vagrantfile. The vb.customize bit makes sure the network interface uses virtio. I’ve had issues in the past with wobbly performance using the default NIC type.

The actual setup for chroot-ing into a qemu-user-static container is excellently described on the Debian Wiki. The only change was using Raspbian Buster, the current release at the time. I increased the Raspbian root volume by 4096 MiB.
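Growing the image can be sketched like this; the file name is hypothetical, and the loop-device steps need root, so they’re shown as comments:

```shell
# Sketch: grow a Raspbian image by 4096 MiB. The image name below is
# hypothetical; use whatever release you downloaded.
IMG=raspbian-buster.img
truncate -s +4096M "$IMG"   # extend the file (sparse, so it's instant)
# Then, as root: expose it as a loop device, grow partition 2, grow its fs:
#   losetup -fP "$IMG"
#   parted /dev/loop0 resizepart 2 100%
#   resize2fs /dev/loop0p2
```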

I used systemd-nspawn, then, after chroot-ing, killed the entry in /etc/ as it spams the shell with messages about failing to load a library which is rather useless in this setup.

Then, simply use the Raspbian build script from above, the same one I have used on actual Raspberry Pis.

Using persistent OpenSSH connections

I found that using persistent connections greatly improves productivity when working with SSH. However, finding the appropriate configuration turned out to be a complicated task. I wanted it to be as unobtrusive as possible, to restart the connection when the socket is closed, and to work without blocking timeouts.

After reading the ssh_config man page and some articles, here’s the best thing I came up with:

Host *
	ControlPath ~/.ssh/master-%r@%h:%p
	ControlMaster auto
	ControlPersist 4h
	TCPKeepAlive no
	GSSAPIAuthentication no
	ServerAliveInterval 60
	ServerAliveCountMax 2

The only issue with this configuration is with long host names (e.g. a really long FQDN), as the socket path hits the UNIX_PATH_MAX limit. Unfortunately, the proper solution to this issue isn’t merged upstream.

OS X users who use brew may easily include the patch for the path issue by editing the openssh formula for OpenSSH 6.6p1 with “brew edit openssh”:

  patch do
    url ""
    sha1 "31f6df29ff7ce3bc22ba9bad94abba9389896c26"

With this patch, a value like ~/.ssh/master-%m works for ControlPath. %m is replaced by SHA1(lhost(%l) + rhost(%h) + rport(%p) + ruser(%r)) and it keeps things short and sweet.
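As a sanity check of how short that keeps the path, here’s a sketch that mimics the digest from the description above (the real patch computes it inside ssh itself; the concatenation order follows the text, and the host/port/user values are hypothetical):

```shell
# Sketch: emulate the %m token described above, i.e. SHA1 over
# local host + remote host + remote port + remote user.
digest=$(printf '%s%s%s%s' "$(hostname)" 'very-long-host.example.com' '22' 'git' \
    | sha1sum | cut -d ' ' -f 1)
echo "~/.ssh/master-$digest"   # always 40 hex chars, well under UNIX_PATH_MAX
```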

Getting HTTPS certificate information into the shell

Due to the Heartbleed SNAFU, I needed a quick solution for getting the information from a certificate deployed on a remote machine. As I rarely leave the comfort of my terminal, as always, I simply dumped a new function into the shell’s ~/.*rc file.

Here it is:
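The original function didn’t survive in this archive; what follows is a minimal sketch of the idea, assuming openssl is available (the name certinfo is mine, not the original):

```shell
# Sketch: print certificate details for a remote host.
# Usage: certinfo host [port]; port defaults to 443.
certinfo() {
    local host="$1"
    local port="${2:-443}"
    # s_client fetches the certificate; x509 pretty-prints the useful bits
    echo | openssl s_client -servername "$host" -connect "$host:$port" 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates -fingerprint
}
```

Usage would then look like `certinfo example.com` or `certinfo example.com 8443`.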

It defaults to port 443 if the second argument is unspecified.


portspoof trolling

Marius once told me about portspoof, a service that trolls people running various scanners by feeding them false results. While the idea is good, I’m wary about a service like this, as it’s exactly the kind of service where you wouldn’t want a buffer overflow.

Giving it a run inside a VM, I noticed something odd when using nmap’s service and version detection probes on the lower ports. The results started to look like a pattern, so I increased the port range to 1-50. portspoof is indeed a tool that trolls baddies and pen testers.

Ran it with:

nmap -sV --version-all -p 1-50

Really smooth, guys, really smooth. Sometimes you have to see the big picture.