So I’ve been using rootless podman-compose to run my arr stack forever, and I’ve never had this issue before. What seems to be happening is that sometimes, but not always, when a new folder is created or an existing folder’s contents are modified, it seems to be setting the files and their folder’s owner to “52587”, which does not exist. The containers then can’t access those files. I can manually change the ownership back, of course, but the container just overwrites it again. If I specify the user in the compose.yml, it seems to be ignored. It’s happening with a few different containers (all in the same compose.yml); I’ve seen it with both Radarr and NZBGet. The files are on a 12TB drive, and the container mounts and compose.yml are on that same drive, but the OS (Bazzite) is on a separate one.
My thoughts so far on possible causes:
- The podman install is fucked somehow
- The drive itself is fucked
- Bazzite’s weirdness is causing an issue
For #1, podman comes with Bazzite by default, so I’m not entirely sure whether I can rpm-ostree remove and reinstall it, though that might be the next step I try. I’m not terribly good with podman to begin with, so I’m not sure how else to go about troubleshooting it.
For #2, this is entirely possible; the drive is pretty old. But I’m not seeing any errors in the SMART data, and outside of this specific issue I haven’t seen any other problems with it.
For #3, this issue did start maybe a month after I switched from Arch to Bazzite, mostly because I also wanted to use this machine for Sunshine streaming and my Arch install was a mess anyway. I know Arch, though, and this immutable stuff has tripped me up before, so maybe I should go back. Feels like admitting defeat, though, lol.
Any ideas to point me in the right direction would be greatly appreciated. Thanks!
> So I’ve been using rootless podman-compose

> when a new folder is created or an existing folder’s contents are modified, it seems to be setting the files and their folder’s owner to “52587”
Rootless Docker and Podman run their applications within a user namespace. This means most of the user IDs within the container are mapped to a different uid range on the host, often called a subuid. It’s part of how “rootless” mode can allow an unprivileged user to run software that expects to have privileged IDs.
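For illustration, a typical rootless mapping looks something like this (the username and the numbers are made up; check your own host):

```
# Each host user gets a block of subordinate uids
$ cat /etc/subuid
youruser:524288:65536

# The live mapping inside the rootless user namespace:
# container uid 0 -> your own uid (1000 here), uids 1+ -> the subuid block
$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     524288      65536
```

With a mapping like that, a file created by container uid 911 shows up on the host as uid 525198: a uid with no matching user.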
> which does not exist.
Are you sure it doesn’t exist? Have you looked at the ranges defined in /etc/subuid on the host?
My first thought is that the uid numbers you see might be some of your host user’s subuids. If so, they will appear as different uids (perhaps with usernames) within the container. Try launching a shell within the container and examining the same files, to see what their owners appear as there.
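Something along these lines (container name and paths are placeholders for yours):

```
# On the host: note the numeric owner of the affected files
ls -ln /mnt/media/movies

# Inside the container: the same files, seen through the uid mapping
podman exec -it radarr ls -ln /movies

# Or inspect host files from within your user namespace, no container needed
podman unshare ls -ln /mnt/media/movies
```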
If this is what’s happening, it’s normal. As long as the software trying to access the files and the software creating the files are both in the same container, it should be fine. If it doesn’t work, there’s probably another problem in play.
By the way, Podman almost certainly has a way to map certain container uids to host uids of your choice, which can be convenient when you want to share files between containers or between a container and the host.
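For instance, --userns=keep-id keeps your host uid as-is inside the container, and newer Podman can pin it to a specific container uid. A sketch, not tested against your setup (the keep-id:uid= form needs a reasonably recent Podman, 4.3 or so):

```
# Your host uid/gid stay the same inside the container
podman run --rm --userns=keep-id -v ./data:/data:Z alpine id

# Pin your host user to a particular uid/gid inside the container
podman run --rm --userns=keep-id:uid=1000,gid=1000 alpine id
```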
I only have experience with Docker, not Podman, so this may be completely useless, but I’ve found that some containers respect the environment variables
```
environment:
  - PUID=xxx
  - PGID=yyy
```
and some use the user mapping
```
user: xxx:yyy
```
(I guess this needs translating to uidmaps for podman.) Maybe this is an issue here? Although #3 makes it sound like maybe not…

Yeah, I’ve tried the user mapping and it doesn’t seem to do anything. I haven’t put them in the env since this started happening, though, so I can try that. Thanks for the reminder.
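If you do try the env route, linuxserver-style images usually take it like this (a sketch; the image name, uids, and paths are whatever applies on your end):

```
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /mnt/media:/media:Z
```

One caveat under rootless Podman: PUID/PGID pick a uid inside the container, and that uid still gets remapped through /etc/subuid on the host, so container uid 0 (not 1000) is the one that ends up as your own host user.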
Out of curiosity, are the containers that are having issues using non-root users internally? Podman maps your user to root inside the container, so a non-root user can have strange effects.
I had this issue when an image inherited a non-root user upstream.
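You can see the effect directly; the uid 911 below is just an example (it happens to be the default for the linuxserver images):

```
mkdir -p data && chmod 0777 data   # world-writable only for this demo

# Container root maps back to your own host uid, so ownership looks normal
podman run --rm -v ./data:/data:Z alpine touch /data/as-root
ls -ln data/as-root    # owned by your uid

# A non-root container uid lands in your subuid range on the host instead
podman run --rm --user 911 -v ./data:/data:Z alpine touch /data/as-911
ls -ln data/as-911     # owned by a high, "nonexistent" uid from /etc/subuid
```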
This may be helpful: https://www.tutorialworks.com/podman-rootless-volumes/#how-to-allow-a-rootless-podman-container-to-write-to-a-volume
The real trick is to set the “:Z” flag on the volume. This usually solves most problems, but :Z gives the content a private SELinux label, so that same volume can’t then be shared with other running containers (lowercase “:z” is the shared-label variant).
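In compose terms it’s just a suffix on the bind mount (service name and paths are placeholders):

```
services:
  radarr:
    volumes:
      - /mnt/media/movies:/movies:Z        # private label: this container only
      - /mnt/media/downloads:/downloads:z  # shared label: ok across containers
```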