What are the pros and cons of using named volumes vs bind mounts in Docker for self-hosting?
I've always used host-path bind mounts (which I'd been thinking of as "regular" volumes), and that's what usually appears in official `docker-compose.yml` examples for various apps:
```yaml
volumes:
  - ./myAppDataFolder:/data
```

where `myAppDataFolder/` is in the same folder as the `docker-compose.yml` file.
As a self-hoster I find this neat and tidy; my docker folder has a subfolder for each app. Each app folder has a `docker-compose.yml`, a `.env`, and one or more data folders. I version-control the compose files, and back up the data folders.
However, some apps have `docker-compose.yml` examples using named volumes:
```yaml
services:
  mealie:
    volumes:
      - mealie-data:/app/data/

volumes:
  mealie-data:
```
I had to google the documentation (https://docs.docker.com/engine/storage/volumes/) to find that the volume is actually called `mealie_mealie-data`:
```
$ docker volume ls
DRIVER    VOLUME NAME
...
local     mealie_mealie-data
```
and that it is stored in `/var/lib/docker/volumes/mealie_mealie-data/_data`:
```
$ docker volume inspect mealie_mealie-data
...
"Mountpoint": "/var/lib/docker/volumes/mealie_mealie-data/_data",
...
```
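Worth noting (this comes from the Compose file specification, not anything above): the `mealie_` prefix is the Compose project name, and you can opt out of it by naming the volume explicitly in the top-level `volumes:` section. A minimal sketch:

```yaml
services:
  mealie:
    volumes:
      - mealie-data:/app/data/

volumes:
  mealie-data:
    # Pin the name so Compose doesn't prepend the project name
    name: mealie-data
    # Alternatively, reference a volume you created yourself with
    # `docker volume create`:
    # external: true
```

With `name:` set, `docker volume ls` shows plain `mealie-data` instead of `mealie_mealie-data`.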
I tried googling the why of named volumes, but most answers talked about things that sounded very enterprise-y: Docker Swarm, and how all state should live in "the database" so you shouldn't ever need to touch the actual files backing a container's volume.
So to summarize: Named volumes, why? Or why not? What are your preferences? Given the context that we are self-hosting, and not running huge enterprise clusters.
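Since backing up is the practical difference here, it may help to show the named-volume equivalent of "back up the data folders": the files live under `/var/lib/docker`, so the usual pattern (shown in the Docker volumes documentation linked above) is to archive the volume through a throwaway container. A sketch, assuming the `mealie_mealie-data` volume from the example:

```shell
# Archive the named volume into ./backup/mealie-data.tar.gz
# using a disposable Alpine container (volume mounted read-only).
mkdir -p backup
docker run --rm \
  -v mealie_mealie-data:/data:ro \
  -v "$PWD/backup":/backup \
  alpine tar czf /backup/mealie-data.tar.gz -C /data .
```

Restoring is the same idea in reverse: mount the volume read-write and extract the archive into it.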
I use NFS shares for all of my volumes so they're more portable for future expansion and easier to back up. It uses additional disk space for the cache of course, but I have plenty.
When I add a second server or a dedicated storage device as I expand, this has made the move almost effortless.
How does this work? Where is the additional space for the cache used: on the server or the client?
Or are you saying everything is on one host at the moment, and you use NFS from the host to the Docker container (on the same host)?
Yeah, the system was on a single server at first and eventually expanded to either a Docker Swarm or a Kubernetes cluster. So the single server acts as both a Docker host and an NFS server.
I've had this happen multiple times, so I use this pattern by default. Mostly these are volumes with just config files and other small stuff that's OK to duplicate in the Docker cache. If it's something like large image caches, videos, or other volumes that I know will end up very large, then I probably would have started with storage off the server from the beginning. It saves a significant amount of time not having to reconfigure everything as it expands if I just have a template that I use from the start.
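If you go this route on a single box, the NFS mount doesn't have to live in `/etc/fstab`; Compose can declare it per volume via the `local` driver's NFS options. A sketch, where the server address and export path are placeholders rather than anything from this thread:

```yaml
volumes:
  mealie-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,nfsvers=4"
      device: ":/exports/mealie"
```

Docker mounts the NFS export itself when the first container using the volume starts, which is what makes the later move to a second host nearly zero-effort: only `addr` changes.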