For the purposes of local development, we use Docker container images within a DDEV framework to model a networked architecture with the following agents represented by containers:
- rabbitmq - the message broker, isolated and accessible by the rest of the system.
- packaging-pipeline - the frontend “client”, representing both the Drupal.org packaging pipeline and the Composer client a Drupal site builder would use.
- test-worker - the “template” worker container, which splits into the three signing workers in the architecture:
  - delegated-targets-signer
  - timestamp-signer
  - snapshot-signer
We use Packer to build container images, primarily because it is more flexible for provisioning than standard Dockerfiles, but also because it can be leveraged to build any number of other image “types” (e.g., AWS machine images), if need be.
The Packer JSON configs in build/packer/docker delegate to a series of shell scripts and Ansible playbooks to do the actual provisioning within each container image.
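As a rough sketch of how such a template ties the pieces together, a Packer Docker build can delegate to shell and Ansible provisioners like this. The base image, script name, and playbook path below are illustrative assumptions, not the actual configs (and JSON permits no inline comments to flag them):

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "debian:bullseye",
      "commit": true,
      "changes": ["CMD [\"/usr/bin/supervisord\", \"-n\"]"]
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "../scripts/install-packages.sh" },
    { "type": "ansible", "playbook_file": "../../ansible/worker.yml" }
  ],
  "post-processors": [
    {
      "type": "docker-tag",
      "repository": "registry.gitlab.com/drupal-infrastructure/package-signing/tuf/test-worker"
    }
  ]
}
```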
The scripts in build/packer/scripts consist primarily of apt commands to install the necessary system packages into the containers, as well as some cleanup.
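A minimal sketch of such a script, assuming a hypothetical package list (the real scripts define their own):

```bash
#!/usr/bin/env bash
# Hypothetical provisioning script in the spirit of build/packer/scripts.
set -euo pipefail

apt-get update
# Illustrative package list only; the real scripts install their own set.
DEBIAN_FRONTEND=noninteractive apt-get install -y \
  python3 python3-pip supervisor

# Cleanup: drop apt caches to keep the container image small.
apt-get clean
rm -rf /var/lib/apt/lists/*
```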
The build/ansible folder contains playbooks that trigger tasks within our tuf.workers role to set up system users, directories, and permissions; to place the Python code within the containers; and to configure Supervisor to manage the Celery workers running as a service.
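The tasks in such a role follow a familiar shape. The sketch below is illustrative only, with invented user names, paths, and template names rather than the real contents of tuf.workers:

```yaml
# Illustrative tasks only; the user name, paths, and template are assumptions.
- name: Create an unprivileged system user for the Celery worker
  ansible.builtin.user:
    name: tuf-worker
    system: true

- name: Create the application directory with restricted permissions
  ansible.builtin.file:
    path: /opt/tuf-worker
    state: directory
    owner: tuf-worker
    mode: "0750"

- name: Place the Python worker code within the container
  ansible.builtin.copy:
    src: worker/
    dest: /opt/tuf-worker/
    owner: tuf-worker

- name: Configure Supervisor to manage the Celery worker as a service
  ansible.builtin.template:
    src: celery-worker.conf.j2
    dest: /etc/supervisor/conf.d/celery-worker.conf
```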
Here again, the goal is for these provisioning scripts to be reusable in other contexts. In particular, we can apply these same Ansible and shell provisioning scripts to whatever form the production environment ultimately takes.
We leverage GitLab’s built-in container registry service to house these Docker images: registry.gitlab.com/drupal-infrastructure/package-signing/tuf. These images are in turn pulled into the DDEV environment via Docker Compose configs in .ddev/docker-compose.*.yml, as well as in GitLab CI.
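DDEV merges any .ddev/docker-compose.*.yml file into its generated project configuration, and expects the com.ddev labels so it can manage the container. As an example of what one of these overrides might contain (the file name, service definition, and image tag are assumptions for illustration):

```yaml
# Hypothetical .ddev/docker-compose.rabbitmq.yml; service and tag are illustrative.
services:
  rabbitmq:
    container_name: ddev-${DDEV_SITENAME}-rabbitmq
    image: registry.gitlab.com/drupal-infrastructure/package-signing/tuf/rabbitmq:latest
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
```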
For more details about the structure of these components, see the reference section on Docker container images.