I decided to play around with PostgreSQL and created a setup where it is running inside Docker and Caddy is taking care of generating Let's Encrypt certificates. I'll show the necessary steps and post the configuration snippets here.
Let's assume PostgreSQL will be reachable via the FQDN db.example.com. I also assume that you have Caddy configured and running on your Linux server.
Caddy usually serves websites. But it also automatically generates SSL certificates via the ACME protocol. In layman's terms: it asks the Let's Encrypt certificate authority to give us an SSL certificate that is considered secure by most current operating systems.
But we can use Caddy for this feature alone: just getting the certificate. It still listens on ports 80 and 443, but always delivers a 404 error if not configured further. In my current webserver setup this is particularly useful, because Caddy is already running and I know how it works. I don't need to install another tool like Certbot.
Given DNS is already configured, you have to tell Caddy to handle the ACME request. This is the corresponding Caddyfile entry:
[...] db.example.com [...]
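For illustration, a minimal site entry could look like this in Caddy v1 syntax (the acme-v01 certificate path used later in this post suggests Caddy v1; adjust to your own Caddyfile layout):

```
# Caddyfile: just registering the hostname is enough for Caddy
# to obtain a Let's Encrypt certificate for it. With no further
# directives, requests will simply get a 404.
db.example.com {
}
```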
That's it so far. Restart Caddy; I do it with sudo systemctl restart caddy. Caddy might take a while, but eventually it will have generated the certificates and will be listening on ports 80 and 443 for the hostname db.example.com. When you access those, you'll receive a 404 error. That's fine, because we're not interested in HTTP(S) traffic at all; we want to secure port 5432, the default port for the PostgreSQL daemon.
The default location for Caddy certificates is /etc/ssl/caddy. Given our hostname of db.example.com, the relevant files can be located here:

/etc/ssl/caddy/acme/acme-v01.api.letsencrypt.org/sites/db.example.com/db.example.com.crt
/etc/ssl/caddy/acme/acme-v01.api.letsencrypt.org/sites/db.example.com/db.example.com.key

That concludes the configuration of Caddy.
Let me start with a version of the docker-compose.yml that I use for the PostgreSQL container:
db:
  image: 'postgres:alpine'
  restart: always
  hostname: 'db.example.com'
  container_name: 'db'
  environment:
    POSTGRES_PASSWORD: "hey-ho-lets-go"
  ports:
    - 0.0.0.0:5432:5432
  volumes:
    - db:/var/lib/postgresql/data
    - /etc/ssl/caddy/acme/acme-v01.api.letsencrypt.org/sites/db.example.com:/var/lib/postgresql/data/ssl
This tells Docker to do the following (brief version):
- Create a new container from the postgres:alpine image and name it db
- Create a volume named db and mount it inside the container at /var/lib/postgresql/data
- Mount a host directory inside the container. This is the location of our SSL files, and it will be mounted inside the PostgreSQL data directory at /var/lib/postgresql/data/ssl
Save the file somewhere and run
docker-compose up -d in that directory. You can see the new volume and container with the following commands:
# docker volume ls
DRIVER    VOLUME NAME
local     db
# docker container ls
NAMES    IMAGE              STATUS              CONTAINER ID
db       postgres:alpine    Up About an hour    ca111ecef743
Enter the container and change a few settings.
# docker exec -ti -u postgres -w /var/lib/postgresql/data db /bin/sh
$ echo "host all all 0.0.0.0/0 trust" >> pg_hba.conf
$ echo "ssl = on" >> postgresql.conf
$ echo "ssl_cert_file = 'ssl/db.example.com.crt'" >> postgresql.conf
$ echo "ssl_key_file = 'ssl/db.example.com.key'" >> postgresql.conf
Lines starting with
$ are inside the container. We did the following:
- Enter the container as the postgres user
- Tell PostgreSQL that connections from all interfaces for all users are okay. Note that trust means no password is required at all; you should consider setting up a dedicated user for external access, changing the second all in that line to that user's name, and using an authentication method like md5 instead
- Tell PostgreSQL to enable SSL and where it can find the certificate and the key
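If you do set up such a dedicated user, the corresponding pg_hba.conf line could look like this (app and appuser are placeholder names for your database and user; md5 requires a password instead of trusting every connection):

```
host  app  appuser  0.0.0.0/0  md5
```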
After that, leave the container again and restart it:
$ exit
# docker restart db
If all went well, you should now be able to connect to your PostgreSQL database from any host, with a valid SSL certificate. I used Postico and it worked flawlessly.
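For example, a libpq connection URI like the following (database and user names are placeholders) tells the client to verify the server certificate against the hostname, provided the client knows the Let's Encrypt root certificate:

```
postgresql://postgres@db.example.com:5432/postgres?sslmode=verify-full
```

You can pass such a URI directly to psql or any other libpq-based client.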
Using Docker and Caddy, I'm able to run a secure PostgreSQL installation and connect from anywhere without worrying (too much) about security issues. I consider the days of snakeoil certificates to be over.
If you want to talk about this drop me an e-mail.