My Self-Hosting Setup
2024-10-04
I thought it would be nice to share how exactly I set up my self-hosting environment. This guide also doubles as a reminder for myself of the steps to set up my server. If you want to follow it, you should have beginner-to-intermediate Unix/Linux skills, specifically knowing what a file, distribution, firewall, RAID, and IP are, and being able to edit files in the terminal with nano.
For this post, I used Boxes from Flathub to run a VM. Since I use Kinoite, installing VirtualBox or Libvirt is a pain, and Boxes "just works," even though it has its drawbacks. Really wish there was an easier way to run Libvirt or VirtualBox on an OSTree distro.
DNS
If you have a static IP address and don't care about a nice hostname (e.g., because you're just hosting for yourself), you can skip this section.
To ensure your server is reachable when you're out of your home, a dynamically updating hostname is really useful. For my setup, setting up a domain requires these steps:
- Find a domain provider that supports domain delegation (most major services do). Cloudflare is often recommended because they sell domains at cost, with no markup. I currently use IONOS. Note: You can skip this if your dynamic DNS provider lets you buy a domain directly from them, or if you're fine with a subdomain (like mysite.dynv6.com).
- Figure out a domain you like and buy it from your provider. Be aware that some TLDs are more expensive than others.
- Find a dynamic DNS service (dyndns) that supports domain delegation (sometimes called custom domain or bring-your-own-domain). I use dynv6.com, which is completely free and supports unlimited subdomains and custom DNS entries. dynu.com is also a good choice if you want paid support or want to buy the domain directly from them.
- Delegate the domain you bought to your dyndns provider if needed, and wait a few days for it to complete.
- Create a script to update the IP regularly (I'll show an example for dynv6 later in this post).
- Create CNAME entries for subdomains that redirect to your main domain. For example, if you bought example.com, you can create a CNAME entry called dav.example.com that points to example.com, which your server can then use to forward to your CalDAV service (example provided further down as well).
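As a rough sketch, the records end up looking something like this (the dynv6 nameserver names are only illustrative; use whatever your provider's documentation lists):
; at the registrar: point the domain at the dyndns provider's nameservers
example.com.      NS     ns1.dynv6.com.
example.com.      NS     ns2.dynv6.com.
example.com.      NS     ns3.dynv6.com.
; at the dyndns provider: one CNAME per service subdomain
dav.example.com.  CNAME  example.com.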
OS
I use Debian, installed via the netinst image. Debian is rock-solid, very lightweight (less than 400 packages on a minimal install, with some duplicates due to Debian's strict separation policy), up-to-date enough for my use case, where most of my services run in containers, and it has pretty sane defaults.
I use the text installer, but the graphical one is the same, and it's a bit easier if you're not used to curses interfaces.
The installation process should be pretty self-explanatory for anyone with at least intermediate Linux knowledge. If you find the installation confusing, this guide might be too advanced for you. Here are the steps I configure:
- For locale, I like to choose 'Ireland - en_IE.UTF-8', which is as close to an English-European locale as you can get.
- I just use root for my server. You can't skip user creation, but you can create a throwaway user and delete them later.
- For the file system, I choose 'guided partitioning' and select my root disk. Then I remove the swap partition (I prefer swap files because they're more flexible; I show how to create one after the install), increase the size of the root partition to take up the now-free space, and replace ext4 with Btrfs, which I consider a must for a server.
- When asked about the software selection, don't forget to deselect the desktop environment options, and you probably want to add the SSH server to your selection.
Congratulations! After a restart, you now have a working minimal Debian install.
Initial Setup
After you reboot and log in as root, you can delete the user you created during installation:
userdel -r $username
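Since I dropped the swap partition during installation, this is also a good moment to set up a swap file. A minimal sketch, assuming a 4 GiB swap file in a dedicated /swap subvolume (adjust the size to taste; on Btrfs the file must be NOCOW and can't live in a snapshotted subvolume):
# keep the swap file in its own subvolume so snapshots of / still work
btrfs subvolume create /swap
# the file has to be created empty, marked NOCOW, then filled
truncate -s 0 /swap/swapfile
chattr +C /swap/swapfile
dd if=/dev/zero of=/swap/swapfile bs=1M count=4096
chmod 600 /swap/swapfile
mkswap /swap/swapfile
swapon /swap/swapfile
# make it permanent
echo '/swap/swapfile none swap defaults 0 0' >> /etc/fstab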
Before installing packages, I like to disable the automatic installation of 'recommended' dependencies. This forces you to look at the dependencies yourself and check why they're needed, which gives you a deeper understanding of the software you're running. Note: If you don't like doing this, you can simply append --no-install-recommends to apt install to disable it temporarily.
To disable it, save the following in /etc/apt/apt.conf:
APT::Install-Recommends "false";
APT::Install-Suggests "false";
The second line probably isn't needed, as suggested packages are not installed by default anyway, but it doesn't hurt.
The first things I like to install are fish and neovim, but they're personal preferences. The important packages for our setup are curl podman netavark aardvark-dns nftables iptables.
iptables is needed because the version of netavark in Debian 12 doesn't support nftables properly yet, so we need the iptables-to-nftables wrapper commands for now.
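With recommends disabled, the install boils down to:
apt install curl podman netavark aardvark-dns nftables iptables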
Then we also want podman-compose, but the version in Debian 12 is a bit too old for my liking, so let's grab the latest version from their GitHub:
curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/main/podman_compose.py
chmod +x /usr/local/bin/podman-compose
To run this version, I also needed to install the python3-dotenv and python3-yaml packages.
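A quick sanity check that the script runs and finds its dependencies:
podman-compose version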
Next, we want to change some configs. This is my /etc/containers/containers.conf:
[network]
firewall_driver="nftables"
dns_bind_port=54
[engine]
healthcheck_events=false
The healthcheck_events and firewall_driver settings are not supported in the current Podman version in Debian, but I added them anyway. healthcheck_events=false disables healthcheck logs, which I find noisy and unnecessary. firewall_driver="nftables" makes Podman use nftables as a firewall backend; once this is properly supported, we can remove the iptables package. dns_bind_port=54 frees up port 53, so you can run your own DNS servers. Even though it only binds to the Podman interfaces, it still blocks you from binding to 0.0.0.0 in your DNS server. You can use any other port here as well.
In /etc/containers/registries.conf, you want to uncomment and adapt the following line:
unqualified-search-registries = ["docker.io"]
This allows you to leave out registry names from image tags, i.e., use caddy:latest instead of docker.io/caddy:latest. Not strictly necessary, but most guides and examples online leave out the registry, and I'm too lazy to add it myself.
Dyndns Update Script
If you use dynv6 like me, you can use this script to update your domain:
#!/bin/sh
hostname="mydomain.com"
token="MYTOKEN"
old_ipv4="$(dig +short -tA $hostname)"
old_ipv6="$(dig +short -tAAAA $hostname)"
echo "Old ipv4: $old_ipv4"
echo "Old ipv6: $old_ipv6"
new_ipv4="$(curl -s ifconfig.io -4)"
new_ipv6="$(curl -s ifconfig.io -6)"
echo "New ipv4: $new_ipv4"
echo "New ipv6: $new_ipv6"
if [ "$old_ipv4" != "$new_ipv4" ]; then
curl -s "http://dynv6.com/api/update?hostname=$hostname&token=$token&ipv4=$new_ipv4"
else
echo "Didn't update ipv4."
fi
if [ "$old_ipv6" != "$new_ipv6" ]; then
curl -s "http://dynv6.com/api/update?hostname=$hostname&token=$token&ipv6=$new_ipv6&ipv6prefix=auto"
else
echo "Didn't update ipv6."
fi
The script needs dig, which Debian ships in the bind9-dnsutils package. Save the script to /usr/local/bin/update_ip and make it executable by running chmod +x /usr/local/bin/update_ip. Then edit your crontab with crontab -e (Debian will probably ask you to select a default editor first), and paste the following line at the bottom:
*/5 * * * * /usr/local/bin/update_ip
This will check every 5 minutes if your domain needs updating by querying DNS for the currently saved IP, comparing it to your current public IP (queried from ifconfig.io), and then calling the dynv6 API to update your IP if either your IPv4 or IPv6 differs.
NAS Storage
If you don't want to use your server for redundant storage, you can skip this part, but the maintenance section is still relevant if you have a single root disk.
If you've bought a few HDDs for NAS storage, setting them up in a RAID is as simple as:
mkfs.btrfs --metadata raid1c3 --data raid1c3 /dev/sda /dev/sdb /dev/sdc
Replace /dev/sda etc. with your actual disk names, which you can find using lsblk. The raid1c3 option means that everything you store on that filesystem is stored three times, so you can lose two of the disks without problems. You don't need exactly three disks; you can have more, but you can always only lose two with this config. You can also use raid1 and raid1c4 for two- and four-way redundancy, respectively. Note that RAID5 and RAID6 are currently not stable with Btrfs.
You can mount the disks now with mount /dev/sda /mnt. You can use the name of any of the disks here, as it will automatically find the others. To mount automatically at boot, add this line to your /etc/fstab:
UUID=abcd1234-abcd-1234-abcd-abcd1234 /mnt btrfs defaults,compress=zstd,nofail 0 0
compress=zstd enables transparent compression of all your files with minimal CPU impact. nofail means your system can still boot if this mountpoint fails to come up, which might happen if one of your disks fails. You can find the UUID by running btrfs filesystem show /mnt after manually mounting your disks once.
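To keep an eye on the array, two read-only commands are handy:
# shows the RAID profile in use and how much space is left
btrfs filesystem usage /mnt
# per-device I/O and checksum error counters; anything non-zero is a bad sign
btrfs device stats /mnt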
Next, we want to run some maintenance on our filesystem to detect bitrot and disk failures and to optimize the file layout. Btrfs has commands for this, btrfs balance and btrfs scrub, but Debian also includes the btrfsmaintenance package, which runs them automatically from time to time. After installing the package, edit the config file at /etc/default/btrfsmaintenance and update some of the settings to these values:
BTRFS_BALANCE_MOUNTPOINTS=auto
BTRFS_BALANCE_PERIOD=weekly
BTRFS_SCRUB_MOUNTPOINTS=auto
Instead of auto, you can also write /:/mnt or whatever your mountpoints are, but auto will run scrub and balance for all mounted Btrfs filesystems. You can adjust the balance and scrub periods, but a weekly balance and a monthly scrub are good defaults. Leave the other values alone; defrag can break snapshots, and trim isn't needed because Debian comes with a fstrim.service and fstrim.timer that run once a week by default.
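If you don't want to wait for the next scheduled run, you can also start a scrub by hand and check on it later:
# reads every block in the background and verifies checksums against the other copies
btrfs scrub start /mnt
# shows progress and any errors found so far
btrfs scrub status /mnt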
Next, we want some snapshots so we can undo mistakes and access deleted files. For this, we use the snapper package.
After installing Snapper, create a default snapshot config like this:
snapper -c root create-config /
snapper -c nas create-config /mnt
Then change the cleanup policy in the config files in /etc/snapper/configs/. Snapper creates snapshots in three cases by default: once every hour, after doing package updates with apt, and when invoked manually via snapper create. The config files decide when these snapshots are deleted. The "timeline" cleanups are for the hourly automatic snapshots. A config like this:
TIMELINE_LIMIT_HOURLY="1"
TIMELINE_LIMIT_DAILY="1"
TIMELINE_LIMIT_WEEKLY="1"
TIMELINE_LIMIT_MONTHLY="1"
TIMELINE_LIMIT_YEARLY="1"
would mean keeping one snapshot from the last hour, one from the last day, one from the last week, etc. The "number" config relates to the manual and apt-triggered snapshots, with both counting toward the same limit. The snapshots can be found in the .snapshots folder at the top directory of each mount point, e.g., /.snapshots and /mnt/.snapshots.
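As an example of getting a deleted file back (the snapshot number 42 and the file path are made up), list the snapshots of the NAS config and copy the file out of the read-only snapshot tree:
snapper -c nas list
cp -a /mnt/.snapshots/42/snapshot/documents/report.odt /mnt/documents/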
For backing up your data, many providers have custom tools, and many also support rclone (e.g., Hetzner), which makes backups easily scriptable. I personally use the free plan at filen.io and only back up important documents and configs; if my media gets destroyed, it's annoying but not the end of the world. Their CLI client is currently in alpha, and I couldn't get actual syncing to work, but I created this script to manually back up some paths:
#!/bin/sh
upload() {
    podman run --name filenio --rm -it -v ./config:/root/.config/filen-cli -v /mnt:/mnt:ro filen/cli upload "$1" "$1"
}
upload /mnt/somepath
upload /mnt/dir/someotherpath
# etc...
You need to run this script once without the upload "$1" "$1" at the end to trigger a login and create a token that can be used later.
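In other words, run the podman command once interactively without the upload arguments; the token ends up in ./config, and the script can reuse it afterwards:
podman run --name filenio --rm -it -v ./config:/root/.config/filen-cli -v /mnt:/mnt:ro filen/cli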
Containers
Before we start, we'll create a Podman network that allows all of our containers to communicate with each other, as they are isolated by default. After running podman network create container-network --ipv6, you now have a network called "container-network" with IPv6 functionality.
We can now start working on our services. My file structure is to create a directory for all my containers and then a subdirectory for each container/service. The persistent data of the container is saved in that directory as well. For example, my Caddy directory looks like this:
caddy_config/ caddy_data/ caddy.service docker-compose.yml etc-caddy/ isso-config/ isso-data/
docker-compose.yml is the container configuration file (yes, I still call them that even though I use Podman). If you want to copy my layout, configure it like this:
services:
caddy:
image: caddy:latest
ports:
- 80:80
- '[::]:80:80'
- 443:443
- '[::]:443:443'
- 443:443/udp
- '[::]:443:443/udp'
cap_add:
- NET_ADMIN
networks:
- container-network
volumes:
- ./etc-caddy:/etc/caddy
- ./caddy_data:/data
- ./caddy_config:/config
networks:
container-network:
external: true
Without the "[::]" port forwardings, the container will default to listening only on IPv4. Note that the caddy: on the second line is just the name of the container and can be chosen freely.
caddy.service is the systemd service file, so I can autostart the service on boot and see its logs through journalctl. The file looks like this for me:
[Unit]
Description=podman-compose Caddy
After=network-online.target
[Service]
Type=simple
WorkingDirectory=/nas/srv/container/caddy/
ExecStartPre=/usr/local/bin/podman-compose pull
ExecStart=/usr/local/bin/podman-compose up --force-recreate
ExecStop=/usr/local/bin/podman-compose down
[Install]
WantedBy=multi-user.target
You can adapt this easily for other containers by just changing the Description and WorkingDirectory fields. Copy this file to /etc/systemd/system, and then you can control your containers with systemctl!
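For Caddy, that looks like this (assuming you kept the file name caddy.service):
cp caddy.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now caddy.service
# follow the container logs
journalctl -u caddy.service -f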
The other directories in there are mounted inside the container and contain data that needs to be persisted across restarts (isso is used for the comment widget on this page). Note: some services, like Jellyfin, have data that should be persisted (caches for thumbnails and encoded videos) but isn't critical if lost. To reduce the amount of wear on my HDD RAID, I mount those directories on my boot SSD instead, somewhere under the /srv directory. An example is provided further below.
Caddy is the most important part of our setup, as it provides a simple reverse proxy, which is needed to get HTTPS working for other containers, as well as a static file server (e.g., for a homepage). A reverse proxy is necessary because most services don't include HTTPS, and even if they did, configuring certificates for each of them would be a pain. By using HTTP internally and exposing HTTPS via Caddy in a single place, the setup is much more maintainable. Caddy will also automatically request certificates if your config and domains are set up correctly, making setting up HTTPS for another service as simple as adding the following lines to your Caddyfile in ./etc-caddy/Caddyfile:
dav.mypage.com {
reverse_proxy http://radicale:5232
}
Assuming you set up a CNAME record for dav.mypage.com at your dyndns provider, this will automatically forward all requests to Radicale via HTTPS.
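The static file server mentioned above is just as short. A sketch for a homepage, assuming you add a volume like ./www:/srv/www to the Caddy compose file and put your site's files there:
mypage.com {
    root * /srv/www
    file_server
}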
If you've followed along this far, you now have a good starting point for your setup. You can easily add new containers by creating new folders with docker-compose.yml files and integrating them with Caddy if needed (some services obviously don't need HTTP and therefore no reverse proxy, e.g., Syncthing or RustDesk). Below are some example compose files from some of my other containers, so you can get a feel for how they work. Here's a quick reference for what the different keys in the files mean:
- image: Name of the image to run. Can contain a version specifier at the end, or just :latest to always pull the newest version when you restart the container.
- volumes: Maps the filesystem between your host and the container. Can contain :ro at the end to only allow the container to read the files. The left side is the path on your host, the right side is the path in the container. E.g., /srv/cache:/cache will map /srv/cache on your host to /cache inside the container.
- networks: List of networks the container belongs to. Will usually just contain "container-network" and can be left out completely if you don't need the container to talk to other containers.
- user: Selects which user to use inside the container. Most containers will default to root, but others use custom users inside the container, which can cause weird permission issues. I recommend using root here if you see that happen.
- ports: Forwards ports to inside the container. By default, the container can only send outgoing requests but not listen for incoming ones. Like with volumes, the left side is the host, and the right side is the container.
- devices: Gives the container access to your devices, mostly used for passing the GPU to the container with the line '/dev/dri:/dev/dri'.
- cap_add/sysctls: Used for permissions and for configuring some values in your kernel. Usually, you just want to copy whatever the upstream guide says.
Jellyfin
services:
jellyfin:
user: root
devices:
- '/dev/dri:/dev/dri'
volumes:
- './data/config:/config'
- '/srv/jellyfin/cache:/cache'
- '/mnt/media/:/media:ro'
networks:
- container-network
image: 'jellyfin/jellyfin:latest'
networks:
container-network:
external: true
No further config needed here, other than adjusting the path to your media.
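To reach Jellyfin over HTTPS, the matching Caddyfile entry would look something like this (jellyfin.mypage.com is a placeholder subdomain; 8096 is Jellyfin's default HTTP port):
jellyfin.mypage.com {
    reverse_proxy http://jellyfin:8096
}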
Radicale
services:
radicale:
image: tomsquest/docker-radicale:latest
user: 0:0
init: true
read_only: true
volumes:
- "./data:/data"
- "./config:/config:ro"
networks:
- container-network
networks:
container-network:
external: true
You need to configure it by placing a config file "config" in the folder "config" (yes, a bit confusing) with these contents:
[server]
hosts = 0.0.0.0:5232,[::]:5232
[auth]
type = htpasswd
htpasswd_filename = /data/htpasswd
htpasswd_encryption = bcrypt
[rights]
type = owner_only
[storage]
filesystem_folder = /data/collections
Then install apache2-utils with apt, and run htpasswd -B -c data/htpasswd $username inside the container directory to create login information for Radicale. Replace $username with the username you want, and enter the password when prompted. The -c flag creates the htpasswd file; leave it out when adding further users.
AdGuard Home
services:
adguard:
image: adguard/adguardhome
volumes:
- ./data:/opt/adguardhome/work
- ./conf:/opt/adguardhome/conf
networks:
- adguard_default
ports:
- 53:53/tcp # Plain DNS
- "[::]:53:53/tcp" # Plain DNS
- 53:53/udp # Plain DNS
- "[::]:53:53/udp" # Plain DNS
networks:
adguard_default:
driver: bridge
enable_ipv6: true
You can leave out the network stuff from this config, but then it will only work with IPv4, at least with the version of podman and podman-compose I used. You could also add certificates to enable encryption, but that's not really needed when running it in a local network. You can still configure encrypted upstream servers either way.
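Once it's running, you can check from another machine in your network that it answers queries (replace 192.168.1.2 with your server's LAN IP):
dig @192.168.1.2 example.com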
I hope this guide was helpful to you. Leave a comment if you have any constructive criticism!