• 1 Post
• 2 Comments
Cake day: July 1st, 2023

  • Yeah, I faced that issue a couple of weeks ago as well, after updating Podman. It no longer allowed me to set per-container UID/GID mappings or UserNS when running in a pod, so since I couldn’t be bothered to troubleshoot, I took the containers out of the pod and now run them as separate containers on the same network. Works just as well.

    You just have to make sure to move the PublishPort entries from the pod quadlet to the gluetun container (for all the containers that route their traffic through gluetun, i.e. that have ‘Network=container:gluetun’ set). This solves the problem and still lets you use UserNS or UID/GID mappings on the containers. No disadvantages so far; you just lose the convenience of stopping/starting all the containers at once through the pod. But I’d rather take that ‘inconvenience’ than spend days troubleshooting how to make it work with a pod again.
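
    As a sketch of what that move looks like (using qBittorrent’s 8080 web UI port as an example; file names like gluetun.container are assumptions implied by the service names):

    ```ini
    # Before: the mapping lived in the pod quadlet (e.g. arr-stack.pod)
    # [Pod]
    # PublishPort=8080:8080

    # After: the mapping moves into the gluetun container quadlet
    # (e.g. gluetun.container), since containers that set
    # Network=container:gluetun share gluetun's network namespace
    [Container]
    ContainerName=gluetun
    PublishPort=8080:8080
    ```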



  • Eldaroth@lemmy.world to Selfhosted@lemmy.world · Arr Podman Quadlets Setup (edited, 3 months ago)

    Nice, I did the move from Docker to Podman a couple of months ago myself. Now running the arr stack, Nextcloud, Immich and some other services as quadlets. File permissions, due to Podman’s rootless nature, were usually the culprit when something wasn’t working properly.
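
    For the permissions case, the usual rootless fix (a sketch; the path is an example) is to chown the volume from inside the user namespace:

    ```shell
    # Rootless Podman maps host UIDs into the container's user namespace.
    # `podman unshare` runs a command inside that namespace, so this chown
    # makes the files appear owned by container UID/GID 1000
    # (the path below is an example, not from the original post):
    podman unshare chown -R 1000:1000 ~/container_volumes/sonarr/conf
    ```
    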

    I can share the quadlet systemd files I use for the arr stack. I deployed it as a pod:

    # arr-stack.pod
    [Unit]
    Description=Arr-stack pod
    
    [Pod]
    PodName=arr-stack
    # Jellyseerr Port Mapping
    PublishPort=8055:5055
    # Sonarr Port Mapping
    PublishPort=8089:8989
    # Radarr Port Mapping
    PublishPort=8078:7878
    # Prowlarr Port Mapping
    PublishPort=8096:9696
    # Flaresolverr Port Mapping
    PublishPort=8091:8191
    # qBittorrent Port Mapping
    PublishPort=8080:8080
    ---
    # gluetun.container
    [Unit]
    Description=Gluetun Container
    
    [Container]
    ContainerName=gluetun
    EnvironmentFile=global.env
    EnvironmentFile=gluetun.env
    Environment=FIREWALL_INPUT_PORTS=8080
    Image=docker.io/qmcgaw/gluetun:v3.40.0
    Pod=arr-stack.pod
    AutoUpdate=registry
    PodmanArgs=--privileged
    AddCapability=NET_ADMIN
    AddDevice=/dev/net/tun:/dev/net/tun
    
    Volume=%h/container_volumes/gluetun/conf:/gluetun:Z,U
    
    Secret=openvpn_user,type=env,target=OPENVPN_USER
    Secret=openvpn_password,type=env,target=OPENVPN_PASSWORD
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    # qbittorrent.container
    [Unit]
    Description=qBittorrent Container
    Requires=gluetun.service
    After=gluetun.service
    
    [Container]
    ContainerName=qbittorrent
    EnvironmentFile=global.env
    Environment=WEBUI_PORT=8080
    Image=lscr.io/linuxserver/qbittorrent:5.1.2
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    Volume=%h/container_volumes/qbittorrent/conf:/config:Z,U
    Volume=%h/Downloads/completed:/downloads:z,U
    Volume=%h/Downloads/incomplete:/incomplete:z,U
    Volume=%h/Downloads/torrents:/torrents:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    # prowlarr.container
    [Unit]
    Description=Prowlarr Container
    Requires=gluetun.service
    After=gluetun.service
    
    [Container]
    ContainerName=prowlarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/prowlarr:2.0.5
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:9696/prowlarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    Volume=%h/container_volumes/prowlarr/conf:/config:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    # flaresolverr.container
    [Unit]
    Description=Flaresolverr Container
    
    [Container]
    ContainerName=flaresolverr
    EnvironmentFile=global.env
    Image=ghcr.io/flaresolverr/flaresolverr:v3.4.0
    AutoUpdate=registry
    Pod=arr-stack.pod
    Network=container:gluetun
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    # radarr.container
    [Unit]
    Description=Radarr Container
    
    [Container]
    ContainerName=radarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/radarr:5.27.5
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:7878/radarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    # Disable SecurityLabels due to SMB share
    SecurityLabelDisable=true
    Volume=%h/container_volumes/radarr/conf:/config:Z,U
    Volume=/mnt/movies:/movies
    Volume=%h/Downloads/completed/radarr:/downloads:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    # sonarr.container
    [Unit]
    Description=Sonarr Container
    
    [Container]
    ContainerName=sonarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/sonarr:4.0.15
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:8989/sonarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    # Disable SecurityLabels due to SMB share
    SecurityLabelDisable=true
    Volume=%h/container_volumes/sonarr/conf:/config:Z,U
    Volume=/mnt/tv:/tv
    Volume=%h/Downloads/completed/sonarr:/downloads:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    # jellyseerr.container
    [Unit]
    Description=Jellyseerr Container
    
    [Container]
    ContainerName=jellyseerr
    EnvironmentFile=global.env
    Image=docker.io/fallenbagel/jellyseerr:2.7.3
    AutoUpdate=registry
    Pod=arr-stack.pod
    Network=container:gluetun
    
    Volume=%h/container_volumes/jellyseerr/conf:/app/config:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
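
    One possible way to wire these up (a sketch, not from the original post): with rootless quadlets the unit files go in ~/.config/containers/systemd/, and the filename determines the generated service name — a pod quadlet named arr-stack.pod yields arr-stack-pod.service, and foo.container yields foo.service.

    ```shell
    # Create the rootless quadlet directory and place the unit files there
    # (file names such as arr-stack.pod and gluetun.container are assumptions
    # implied by the Pod= and Requires= lines above):
    mkdir -p ~/.config/containers/systemd

    # Regenerate the systemd units from the quadlet files:
    systemctl --user daemon-reload

    # Start the whole pod (a .pod quadlet generates <name>-pod.service):
    systemctl --user start arr-stack-pod.service

    # Check an individual container service:
    systemctl --user status gluetun.service
    ```
    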
    

    I run my Podman containers in a VM running AlmaLinux. Works great so far.

    Had the same issue when debugging systemctl errors; journalctl was not very helpful. At one point I just ran podman logs -f <container> in a while loop in another terminal, just to catch the application’s logs. Not the most sophisticated approach, but it works 😄
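
    That loop, roughly (the container name is an example):

    ```shell
    # Re-attach to the container's logs whenever `podman logs -f` exits,
    # e.g. because the container restarted; "qbittorrent" is an example name.
    while true; do
      podman logs -f qbittorrent
      sleep 2
    done
    ```
    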