I may have just hacked together the stupidest set of scripts possible, but I should probably explain where I’m coming from.
Over the last few years, I’ve been trying to get better with containers – specifically within Docker. In my youth, I was anti-Virtualization. I felt like it added unnecessary complexity and didn’t yield many (if any) positive results. What can I say? I was young.
Needless to say, I jumped on the bandwagon and learned VMware and Hyper-V because one or the other was definitely going to be wherever I worked.
Fast forward to about 2020 and I was hearing rumblings about containerization. At first, my lizard-brain kicked in and again went, “fad,” but this time I decided it was probably worth some exploring after I finished my work projects, and my home projects, and my passion projects, and anything else I could throw in front of it.
These past few months, I’ve really been trying my best to get a handle on how it worked, why it’s a good option for certain workloads, and why people are choosing it as opposed to other options.
How it Started
To start, I needed something to work on. I tested and rejected Docker Desktop. Not for any specific reason other than I felt it was less useful to put my time and effort into an interface that wasn’t true to what I was going to see in the real world. I took a spare Raspberry Pi and installed docker to run a few workloads, but many “interesting” images weren’t compatible with the ARM processor. Then I decided to just go all in (on a small scale) and got a Beelink N12 Pro (technically I got 4, but only one is used for docker).
Now I’ve got a lovely little headless Ubuntu system running docker. Now what?
My (Nearly) First Container
I’m going to be honest. As much as I love being a keyboard junkie and working frequently on the command line, there are times – particularly when I’m learning a new technology – that I prefer a GUI.

Therefore (after the obligatory “Hello World” container) I installed Portainer CE. I needed this to better understand the interactions between the various parts that docker offered before I could jump right into the CLI. The Portainer container uses a local volume for data storage, so it has “memory” after updates and restarts. This was my first introduction to the idea of persistent data with containers.
Volumes are Important
As important as environment variables, network ports, and container names are, the volumes are where the important data lives. My “server” is running fine, but I’m always worried about a hardware failure.

Copying off the config files or exporting my container definitions (via something like autocompose) were easy by comparison. But the volumes put my work on this learning project on hold for a bit.
Circling back to Volumes
When I got some time (it was a while), I finally circled back to volumes. After poking around a bit, I found something that looked very interesting in the form of vackup (for “volume backup”). This cool little tool became a full-fledged Docker Desktop extension, but I didn’t want/need that. I just wanted to know if it would work for my situation.
I did what any self-respecting home lab admin would do and rolled the dice. Getting the program was easy with a few simple commands provided by the GitHub README.
sudo curl -sSL https://raw.githubusercontent.com/BretFisher/docker-vackup/main/vackup -o /usr/local/bin/vackup && sudo chmod +x /usr/local/bin/vackup
This downloads the executable to my docker host and flips the execution bit on. From the looks of things, all I needed to do was run the utility with a few parameters:
vackup export VOLUME FILE
I gave it a shot with my Beszel container’s associated volume. Since the output TGZ file is always put in the current folder, I changed to a NAS share where I could test.
kevin@docker:~$ cd /mnt/NAS/backups/containers
kevin@docker:/mnt/NAS/backups/containers$ vackup export beszel_data beszel_data.tgz
The screen scrolled by and I had a fresh new gzipped tarball on my NAS. I did a quick check and yep, the data looked like it was there. I even compared a file or two against the actual volume contents and they matched. This looked great!
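The sanity check is easy to repeat: `tar -tzf` lists an archive’s contents without extracting anything. A minimal sketch, using a throwaway tarball as a stand-in for the real export (the file name `beszel.db` here is just a placeholder):

```shell
# List the contents of a gzipped tarball without extracting it.
# A throwaway archive stands in for the real vackup export.
tmp=$(mktemp -d)
echo "sample" > "${tmp}/beszel.db"
tar -czf "${tmp}/beszel_data.tgz" -C "${tmp}" beszel.db
tar -tzf "${tmp}/beszel_data.tgz"   # prints: beszel.db
rm -rf "${tmp}"
```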
The Curveball
It worked for one volume, so obviously it was time to do it for all of them, right?
I ran a quick command to get all my docker volumes (docker volume list --quiet) and then used that to generate a “script” that would backup everything to the NAS. When I say “script” here I really just mean a list of repeated volume backup commands.
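The generated “script” amounted to this pattern (a sketch; the sample volume names below, including `portainer_data`, stand in for the real output of `docker volume list --quiet`):

```shell
# Naive approach: emit one vackup export command per volume name.
# The printf list is a stand-in for `docker volume list --quiet`.
printf '%s\n' beszel_data portainer_data media-nfs-tv |
while read -r v; do
    echo "vackup export ${v} ${v}.tgz"
done
```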
It was running great, and the data was streaming onto my NAS and the screen, she was a-scrolling with files being copied. But it didn’t slow down – or even look like it was going to stop. And then I saw some of the file names coming in and thought to myself “aren’t those the files I have in my Plex Media Server’s library?”
Now, within my Docker setup, I’ve got both traditional container volumes and bind mounts to the underlying file system. I’ve always known this, but, typical of my “get it done now” head, I forgot.
What I really wanted was the volumes that were properly local to the Docker machine – the ones stored in its local file system – not these bind mounts. I &lt;CTRL&gt;&lt;C&gt;’d out of the job and checked my NAS. Sure enough, I had more than a few files that were tens of GB in size. Time to rethink this.
I started by investigating how to determine whether a volume was local or a bind. The output from docker volume list wasn’t much help because what I needed was a little deeper. I finally inspected the docker volume inspect command (see what I did there). This gave me the information I needed, but I still had to dig through the JSON body to extract what I wanted.
{
    "CreatedAt": "2025-05-29T17:58:49Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/media-nfs-tv/_data",
    "Name": "media-nfs-tv",
    "Options": {
        "device": ":/volume1/Media/Television",
        "o": "addr=nas.domain.local,rw,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4",
        "type": "nfs"
    },
    "Scope": "local"
}
This is an example of the types of volumes I was trying to ignore. Now I know I could use something like jq to query for the information I wanted, but I’ve always had to search for the proper syntax. I’m sure it’s easy for people who work with it frequently, but it wasn’t my forte. My forte is PowerShell. And of course, my Ubuntu Docker Server has PowerShell installed.
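For the curious, the check I needed could probably be expressed in jq along these lines (an assumption on my part – I went with PowerShell instead, and the inline JSON here stands in for real docker volume inspect output):

```shell
# Keep only volumes with no "o" mount option string (i.e., not
# NFS/bind-style). The inline JSON objects stand in for the output of
# `docker volume inspect --format json <name>`.
nfs='{"Name":"media-nfs-tv","Driver":"local","Options":{"o":"addr=nas.domain.local","type":"nfs"}}'
plain='{"Name":"beszel_data","Driver":"local","Options":null}'
echo "$nfs"   | jq -r 'select(.Options.o == null) | .Name'   # prints nothing
echo "$plain" | jq -r 'select(.Options.o == null) | .Name'   # prints beszel_data
```

In jq, indexing a null value (`.Options.o` when `Options` is null) simply yields null, so one `select` covers both the “no Options” and “Options without o” cases.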
Using the ConvertFrom-Json cmdlet, I could easily turn this into an object and filter those volumes out.
The Result
A few passes at my filters later, I came up with a convoluted, highly inefficient, but working filter (for my needs).
$volume = "beszel_data"
docker volume inspect --format json $volume | ConvertFrom-Json | Where-Object { -not ( $_.Options ) -or -not ( $_.Options.o ) }
# ^ this returns something
$volume = "media-nfs-tv"
docker volume inspect --format json $volume | ConvertFrom-Json | Where-Object { -not ( $_.Options ) -or -not ( $_.Options.o ) }
# ^ this returns nothing
My Ultimate Result
If you made it this far, then you have my thanks (or commiseration) and probably want to understand what I ultimately ended up doing.
This is my code with limited comments because future-Kevin needs someone to blame for when he doesn’t understand what past-Kevin did.
## backup_docker_volumes.ps1
$volumes = docker volume list --quiet
ForEach ( $volume in $volumes ) {
    $v = docker volume inspect --format json $volume | ConvertFrom-Json
    if ( -not ( $v.Options ) -or -not ( $v.Options.o ) ) {
        #Write-Host "We can backup $( $v.Name )!" -ForegroundColor Green
        Start-Process -FilePath "/usr/local/bin/vackup" -ArgumentList "export $( $v.Name ) $( $v.Name )_$( Get-Date -Format "yyyy-MM-dd" ).tgz" -WorkingDirectory "/mnt/NAS/backups/containers" -Wait
        #break
    }
    else {
        #Write-Host "We can't backup $( $v.Name )!" -ForegroundColor Red
    }
}
Could I have written it entirely in a shell script? Yes. Would I do so in the future? Highly unlikely.
I could potentially improve the logic by running docker volume inspect $( docker volume list --quiet ) at the beginning to get the full list at once, but this works as of today and I don’t want to touch anything at the moment.
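That batch idea would work because docker volume inspect accepts multiple names and returns a single JSON array, so one call could replace the per-volume loop. A sketch, with an inline array standing in for the output of `docker volume inspect $(docker volume list --quiet)`:

```shell
# One inspect call for everything, then filter the resulting array.
# The inline JSON array is a stand-in for real docker output.
all='[{"Name":"beszel_data","Options":null},
      {"Name":"media-nfs-tv","Options":{"o":"addr=nas.domain.local","type":"nfs"}}]'
echo "$all" | jq -r '.[] | select(.Options.o == null) | .Name'   # prints beszel_data
```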
The Future
I may rewrite this if I can find a good PowerShell module that has the support I need. I’m certainly not going to write one. Alternatively, I could look into using the Docker API and making calls with Invoke-RestMethod to get the volume data, but why? This works well enough for my home lab, and that’s all I need for today.