
Vaultwarden

This is NOT a Quadlet yet. However, its installation is handled by Ansible, it works, and it's a container, so it's just a bit of a structural issue on my end.

Overview

Official docs can be found here.

As the docs say Vaultwarden is "An alternative server implementation of the Bitwarden Client API, written in Rust and compatible with official Bitwarden clients [...], perfect for self-hosted deployment where running the official resource-heavy service might not be ideal."
It stores passwords and it is compatible with Bitwarden clients. That's it.

Structure

Reverse Proxy

The reverse proxy (Caddy) handles TLS and domain management.
The Vaultwarden container listens on port 80 by default, but I changed the default to port 1111, which is customizable.
The domain it uses must be changed in vars.yml. See Caddy documentation.
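As a sketch, a matching Caddy site block could look like the following. The domain vault.example.com and the upstream name vaultwarden:1111 are placeholders, not the role's actual config; the real setup lives in the Caddy role.

```caddyfile
vault.example.com {
    # Caddy terminates TLS and forwards plain HTTP to the container
    reverse_proxy vaultwarden:1111
}
```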

Quadlet (or rather Podman)

This is NOT a Quadlet yet. It's just a regular Podman container. Migration will happen as soon as I have time for it, so treat this as a "Podman" section.

Ansible creates the container, so here are the snippets from roles/vaultwarden/tasks/main.yml:

    - name: Run Vaultwarden container
      containers.podman.podman_container:
        name: vaultwarden
        image: "{{ vaultwarden_image }}"
        state: started
        restart_policy: unless-stopped

        user: "{{ uid }}:{{ guid }}"

        volumes:
          - "/{{ parent_dir }}/{{ vaultwarden_dir }}:/data:Z"

        env:
          ROCKET_PORT: "{{ vaultwarden_port }}"
          ROCKET_ADDRESS: "0.0.0.0"
          SIGNUPS_VERIFY: "{{ signups_verify }}"

        network:
          - "{{ podman_network }}"
      when: vaultwarden_bootstrap and not signups_allowed
and
    - name: Run Vaultwarden container
      containers.podman.podman_container:
        name: vaultwarden
        image: "{{ vaultwarden_image }}"
        state: started
        restart_policy: unless-stopped

        user: "{{ uid }}:{{ guid }}"

        volumes:
          - "/{{ parent_dir }}/{{ vaultwarden_dir }}:/data:Z"

        env:
          ROCKET_PORT: "{{ vaultwarden_port }}"
          ROCKET_ADDRESS: "0.0.0.0"
          SIGNUPS_ALLOWED: "{{ signups_allowed }}"
          SIGNUPS_VERIFY: "{{ signups_verify }}"

        network:
          - "{{ podman_network }}"

First boot behaviour

But wait... there are two containers? Not quite.
You see, Vaultwarden lets you forbid new signups with an environment variable.
And since it's exposed over a Cloudflare Tunnel, theoretically anyone with the domain could make an account.
To prevent that, on the first run (or whenever you change the vaultwarden_bootstrap variable in roles/vaultwarden/vars/main.yml) it allows signups; then, after you've made your account (and signalled to Ansible that you're done by pressing Enter), it recreates the container but now forbids account creation:
Account creation error

After your first run, don't forget to set vaultwarden_bootstrap to false, or else it'll stop and wait for your Enter every single time.
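The "press Enter" step described above can be sketched as a pause task. This is a minimal sketch; the task name and prompt wording are my own, not necessarily what the role uses:

```yaml
- name: Wait for first account creation
  ansible.builtin.pause:
    prompt: "Create your account in the Vaultwarden web vault, then press Enter"
  when: vaultwarden_bootstrap
```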

Automation

There might be a way to set it to false automatically; I will look into that.
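One possible approach, sketched only and untested: flip the variable on the control node with lineinfile once the bootstrap run completes. The task name is hypothetical; the file path matches the one this page already uses.

```yaml
- name: Set vaultwarden_bootstrap to false after the first run
  ansible.builtin.lineinfile:
    path: roles/vaultwarden/vars/main.yml
    regexp: '^vaultwarden_bootstrap:'
    line: 'vaultwarden_bootstrap: false'
  delegate_to: localhost
```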

Required variables

  1. Set vaultwarden_domain variable in global vars.yml file.
  2. vaultwarden_bootstrap set to true in roles/vaultwarden/vars/main.yml on your first run. (Afterwards, manually set it to false.)
  3. podman_network variable filled in global vars.yml file.
  4. vaultwarden_port variable filled in global vars.yml file. (I tested 1111; that is confirmed to work.)
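Put together, the relevant part of vars.yml might look like this. All values here are example placeholders, not defaults shipped by the role:

```yaml
# vars.yml (example values only)
vaultwarden_domain: vault.example.com
podman_network: stack-net
vaultwarden_port: 1111
```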

Directory layout

If you leave the defaults, the directory layout will look like this:

/stack/
  vaultwarden-data/
Variable mapping:

  - /stack is parent_dir
  - vaultwarden-data/ is vaultwarden_dir
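To see what the role ends up creating, you can recreate the default layout by hand. This is sketched under a scratch directory instead of the real /stack; the permissions follow the dir_permissions default from the table further down.

```shell
# Recreate the default layout under a scratch dir (stand-in for /stack)
PARENT_DIR="$(mktemp -d)"
VW_DIR="vaultwarden-data"
mkdir -p "$PARENT_DIR/$VW_DIR"
chmod 0700 "$PARENT_DIR/$VW_DIR"   # matches the dir_permissions default
ls -ld "$PARENT_DIR/$VW_DIR"
```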

Customizations

Only change the roles/vaultwarden/defaults/main.yml, roles/vaultwarden/vars/main.yml and vars.yml. Do NOT mess around in the .j2 files unless you know exactly what you are doing.

There are actually no .j2 files yet, but do NOT mess with the tasks/main.yml either.

Make sure your directory vars do NOT start or end with a /

  1. You can change where they store the data by editing roles/vaultwarden/defaults/main.yml and/or by changing the parent directory in vars.yml.
  2. To change whether you can sign up after the first install, or whether e-mail verification is required, change signups_allowed and signups_verify in roles/vaultwarden/defaults/main.yml (see the table below).
  3. To change its port, change vaultwarden_port in vars.yml.
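For example, to keep signups closed but skip e-mail verification, roles/vaultwarden/defaults/main.yml would contain something like this illustrative excerpt (not the full file):

```yaml
signups_allowed: false
signups_verify: false
```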

Container name

You can change the container names, but be careful as container names aren't dynamic so other containers might break.

roles/vaultwarden/defaults/main.yml

| Variable | Default | Purpose |
| --- | --- | --- |
| vaultwarden_image | docker.io/vaultwarden/server:latest | Specify Docker image |
| uid | 65534 | Run container as unprivileged user |
| guid | 65534 | Run container as unprivileged group |
| dir_permissions | 0700 | Set permissions of data directory |
| signups_allowed | false | Controls signups |
| signups_verify | true | Controls e-mail verification |
| vaultwarden_dir | vaultwarden-data | Specify data directory name |

I'm not sure signups_verify fully works here; Vaultwarden does have a SIGNUPS_VERIFY option upstream, but it only has an effect when SMTP is configured. Will look into that.

Dependencies

It really does not have many dependencies, as Ansible handles them anyway. The only thing you have to do is follow the Required variables section.

Backups

Danger

These might not be the correct commands, as I've hastily pulled them from the older README.
These backups are manual, not automated. You must run these commands on the server itself.
Use the restore commands with caution, as they erase your current data; only use them when your data is already gone.

Creating the backup

rsync -a --no-xattrs /<parent_dir>/<vaultwarden_dir>/ /tmp/vaultwarden-data-bak && 7z a -p"password" vaultwarden-backup.7z /tmp/vaultwarden-data-bak && rm -rf /tmp/vaultwarden-data-bak

Warning

These commands erase your old directory's data, so only use them if that data is unsalvageable. Or just run the extract step on its own if you want to.

Restoring the backup

7z x vaultwarden-backup.7z -ppassword -o/<parent_dir>/vaultwarden-restore

Replace your old directory's data

cp -R /<parent_dir>/vaultwarden-restore/. /<parent_dir>/<vaultwarden_dir>/ && rm -rf /<parent_dir>/vaultwarden-restore && chown -R nobody:nobody /<parent_dir>/<vaultwarden_dir>

Bugs

None as far as I know