# Rootless Podman + Quadlet Ansible Automation
This repository contains the reusable Ansible automation for deploying stateful applications to Debian hosts using rootless Podman, systemd user Quadlets, per-host Caddy ingress, and containerized Restic for backups and migrations. Caddy and Restic are only run as containers; no host packages are installed beyond Podman and basic tooling.
`automation/` can be used in two ways:

- as a standalone automation repo with its own `ansible.cfg`, inventory, and vars
- as a Git submodule inside a control repo that keeps deployment-specific inventory, vars, secrets, and app definitions outside the reusable automation code
## Quick start
From a parent control repo that vendors this project as automation/:
```sh
ansible-galaxy collection install -r automation/collections/requirements.yml
ansible-vault encrypt group_vars/all/vault.yml
ansible-playbook automation/bootstrap.yml -e "apps_root=$(pwd)/apps"
```
- Keep the active `ansible.cfg` in the directory where you run `ansible-playbook`.
- If you are using a parent control repo, set `roles_path = ./automation/roles` and `collections_paths = ./automation/collections:~/.ansible/collections:/usr/share/ansible/collections`.
- Populate `inventories/production/hosts.yml` with your hosts.
- Map apps per host in `host_vars/<host>.yml` via `assigned_apps`.
- Adjust `group_vars/all/main.yml` for usernames, paths, and images.
- Define apps in `apps/<app>/app.yml`.
- Set `apps_root` to the controller path that contains app folders. In a parent control repo this is typically `$(pwd)/apps`.
- Store secrets and overrides in `group_vars/all/vault.yml` and encrypt it.
- Bootstrap hosts and deploy: `ansible-playbook automation/bootstrap.yml -e "apps_root=$(pwd)/apps"`.
- Deploy apps: `ansible-playbook automation/deploy.yml -e "apps_root=$(pwd)/apps"`.
If you use `automation/` standalone, run the same playbooks from inside this directory and keep your inventory and vars alongside it. A default reusable config lives at `automation/ansible.cfg`; parent repos can override it with their own top-level `ansible.cfg`.
## Recommended config
The default SSH posture is host key checking enabled with OpenSSH `accept-new`:
- new hosts are accepted automatically on first connect
- existing host key changes still fail closed
That keeps first-time provisioning convenient without disabling host authenticity checks entirely.
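One way this posture is commonly expressed in an Ansible config (a sketch; the exact settings in the shipped `automation/ansible.cfg` may differ):

```ini
[defaults]
host_key_checking = True

[ssh_connection]
# accept-new: trust unknown hosts on first connect, but fail if a known key changes
ssh_args = -o StrictHostKeyChecking=accept-new
```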
## What gets installed
- Podman (from native Debian repos, non-root only).
- Rootless Quadlet-managed containers for all apps and Caddy.
- `podman-auto-update.timer` enabled for the podman user so Quadlet containers refresh from their registries.
- Per-app ingress networks (`<app>-ingress-net`) and optional internal networks (`<app>-internal-net`) to isolate apps; Caddy joins only ingress networks for apps on its host.
- All persistent data lives under `/srv/data`.
- Restic runs ad-hoc via `podman run` using `roles/app_runtime/files/restic_wrapper.sh`; no host Restic binary is required.
- SSH hardening via `/etc/ssh/sshd_config.d/99-ansible-hardening.conf`: key-based auth only, limited auth attempts, optional `AllowUsers`. The default keeps root key login allowed (`PermitRootLogin prohibit-password`) so Ansible can still connect; adjust as needed.
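A sketch of what such a drop-in might contain (illustrative values; the options the role actually manages may differ):

```
# /etc/ssh/sshd_config.d/99-ansible-hardening.conf (sketch)
PasswordAuthentication no
KbdInteractiveAuthentication no
MaxAuthTries 3
PermitRootLogin prohibit-password
# AllowUsers podman deploy    # optional: uncomment to restrict logins
```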
## App model
- App definitions live one per file under `<apps_root>/<app>/app.yml`; the folder name is the app name.
- Host assignment is controlled in `host_vars/<host>.yml` via `assigned_apps`.
- Each app may have multiple containers; set `entrypoint: true` on the container Caddy should proxy by default (catch-all).
- Volumes are defined by name plus `container_path`; the host path is derived automatically as `<app_data_base_path>/<app>/<volume name>` unless overridden.
- Each app gets an ingress network (`<app>-ingress-net`) and optional internal network (`<app>-internal-net`) via `pod.ingress_network`/`pod.internal_network`.
- Static-only apps use `type: static`; static files live under `/srv/data/static/<app>/public`.
- Static apps add a default Caddy `file_server` after `ingress.caddy_directives`; set `ingress.disable_default_file_server: true` when your custom directives already handle redirects, routing, or file serving.
- Mixed apps can define `static_paths` to serve filesystem paths from `/srv/data/static/<app>/public` alongside proxies.
- Secret env overrides come from `vault_app_env_overrides` in the vaulted vars file.
- Optional per-container `mounts` can stage files or directories under the app's data path and mount them read-only into the container. Each mount needs `relative_path` (under the app base), `container_path`, and either `content` or `src`. Multiple mounts are supported.
- Default mount sources resolve to `<apps_root>/<app>/files/<relative_path>`; set `src` to override.
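A `mounts` sketch under those rules (file names and content are hypothetical):

```yaml
containers:
  - name: app                           # hypothetical container
    mounts:
      # Inline content staged under the app's data path, mounted read-only
      - relative_path: conf/app.ini
        container_path: /etc/app/app.ini
        content: |
          [server]
          port = 8080
      # Sourced from <apps_root>/<app>/files/extra by default; set src to override
      - relative_path: extra
        container_path: /opt/app/extra
```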
- Optional per-container `proxy_paths` can add path-based reverse proxy routes on the app's domains. This is useful when an app has multiple containers that should share the same domain (e.g., `/api/*` to an API container, and everything else to the web container). `proxy_paths` is a list of objects with:
  - `path`: a Caddy path matcher like `/api/*`
  - `strip_prefix`: optional, defaults to `true` (uses `handle_path` so `/api/foo` becomes `/foo` upstream). Set `false` to preserve the prefix.
  - `service_port`: optional container port Caddy should connect to without publishing it on the host; defaults to `8080`.
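A `proxy_paths` sketch for the two-container case above (container names and images are hypothetical):

```yaml
containers:
  - name: web
    image: example/web:latest           # hypothetical image
    entrypoint: true                    # catch-all for anything not matched below
  - name: api
    image: example/api:latest           # hypothetical image
    proxy_paths:
      - path: /api/*
        strip_prefix: false             # upstream still sees the /api prefix
        service_port: 3000
```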
- Optional per-container `published_ports` can expose a direct host port only when there is no viable alternative through Caddy or the app ingress network (for example, Git SSH). `published_ports` is a list of objects with:
  - `host_port`: required host port to bind
  - `container_port`: required container port to expose
  - `protocol`: optional, defaults to `tcp`
  - `host_ip`: optional bind address, defaults to all interfaces
- Prefer unprivileged ports and keep the exposure app-specific and justified.
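A `published_ports` sketch for the Git SSH case (container name, image, and addresses are illustrative):

```yaml
containers:
  - name: git
    image: example/git:latest           # hypothetical image
    published_ports:
      - host_port: 2222                 # unprivileged host port for Git over SSH
        container_port: 22
        protocol: tcp                   # default
        host_ip: 203.0.113.10           # optional; omit to bind all interfaces
```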
- Optional per-container `healthcheck` documents the app-facing probe Caddy should use for ingress retry and upstream health decisions.
  - This is ingress-side behavior only; it does not define the container runtime health state.
  - Keep it aligned with the endpoint the app exposes on the ingress network.
- Optional per-container `runtime_healthcheck` defines a Podman container healthcheck when the image and app warrant one.
  - This is separate from Caddy health/retry and is optional by design.
  - Use it only when the image has a meaningful probe command or endpoint; many images do not.
  - Treat it as image-specific runtime behavior, not a universal requirement.
- App Quadlet services default to `Restart=on-failure` and `RestartSec=5s`; override per container with `restart_policy` and `restart_sec` only when an app needs different systemd restart behavior.
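A restart override sketch for a hypothetical container that should always restart, but back off longer between attempts:

```yaml
containers:
  - name: worker                        # hypothetical container
    restart_policy: always              # maps to systemd Restart=
    restart_sec: 30                     # maps to systemd RestartSec=
```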
Concise examples:
```yaml
containers:
  - name: vikunja
    image: vikunja/vikunja:latest
    entrypoint: true
    healthcheck:
      path: /health
    runtime_healthcheck:
      command: ["/app/vikunja/vikunja", "doctor"]

  - name: headscale
    image: headscale/headscale:latest
    entrypoint: true
    healthcheck:
      path: /health
    runtime_healthcheck:
      command: ["headscale", "health"]
```
For headscale, the ingress healthcheck is usually enough; add `runtime_healthcheck` only when the selected image exposes a reliable native probe you want Podman to enforce.
Static app example with custom Caddy handling:
```yaml
type: static
ingress:
  main_domain: docs.adityaj.in
  tls: true
  disable_default_file_server: true
  caddy_directives: |
    handle / {
      redir * /guide/ 308
    }
    handle {
      root * /srv/data/static/docs/public
      file_server
    }
```
## Playbooks
- `bootstrap.yml`: prepare hosts and deploy Caddy/apps.
- `deploy.yml`: deploy Caddy and applications.
- `deploy_apps_only.yml`: redeploy only the app Quadlets (skip host prep and Caddy). Pass `-e target_app=<app>` to limit to a single app.
- `manage_services.yml`: restart/stop/start app services on their assigned host (`-e target_app=blog -e target_state=restart`).
- `01_backup_and_stop.yml`: stop an app on its current host and back up its data via the Restic container (`-e target_app=blog`).
- `02_deploy_and_restore.yml`: deploy an app to its mapped host and restore data if a Restic snapshot exists (`-e target_app=blog`). If no backup exists (new app or first deploy), it proceeds with an empty directory.
App deployment roles honor the optional `target_app` variable to constrain deployments to a single app on its mapped host.
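Assuming the `apps_root` convention from the quick start, a single-app migration can chain the two numbered playbooks:

```sh
# Back up "blog" and stop it on its current host...
ansible-playbook automation/01_backup_and_stop.yml -e "target_app=blog" -e "apps_root=$(pwd)/apps"
# ...remap the app in host_vars, then deploy to the new host and restore the snapshot.
ansible-playbook automation/02_deploy_and_restore.yml -e "target_app=blog" -e "apps_root=$(pwd)/apps"
```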
## Testing on Testbench
To deploy to a testbench host instead of production:
- Add the testbench host to `inventories/production/hosts.yml`.
- Update `host_vars/<host>.yml` to point `assigned_apps` at the testbench host.
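A `host_vars` sketch for such a mapping (host and app names are hypothetical):

```yaml
# host_vars/testbench.yml — apps listed here are deployed to this host
assigned_apps:
  - blog
  - docs
```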
## Notes
- Target OS must be Debian; the roles fail early otherwise.
- No host-level Caddy or Restic packages are installed; only Podman and its dependencies.
- Keep `apps/<app>/app.yml` and `host_vars/<host>.yml` under version control as the authoritative mapping for deployments and migrations.