
Podman in a network - Netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30) #20713

Closed
i-raise-issues opened this issue Nov 18, 2023 · 4 comments
Labels
kind/bug, locked - please file new issue/PR

Comments


i-raise-issues commented Nov 18, 2023

Issue Description

Running Podman with a user-defined network fails:
podman run --rm -d --network=mynetwork -p 8080:80 -v /sys:/sys --name web nginx:latest
However, other podman commands (build, run without the network, etc.) work fine most of the time. The network was created via podman network create mynetwork.

$ podman --log-level=debug run --rm -d --network=mynetwork -p 8080:80 -v /sys:/sys --name web nginx:latest
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run --rm -d --network=mynetwork -p 8080:80 -v /sys:/sys --name web nginx:latest)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/my-user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/my-user/.local/share/containers/storage
DEBU[0000] Using run root /tmp/containers-user-490000123/containers
DEBU[0000] Using static dir /home/my-user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/490000123/libpod/tmp
DEBU[0000] Using volume path /home/my-user/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runj initialization failed: cannot stat OCI runtime runj path: stat /usr/local/bin/runj: permission denied
DEBU[0000] Configured OCI runtime runsc initialization failed: cannot stat OCI runtime runsc path: stat /usr/local/bin/runsc: permission denied
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: cannot stat OCI runtime crun-wasm path: stat /usr/local/bin/crun-wasm: permission denied
DEBU[0000] Configured OCI runtime youki initialization failed: cannot stat OCI runtime youki path: stat /usr/local/bin/youki: permission denied
DEBU[0000] Configured OCI runtime krun initialization failed: cannot stat OCI runtime krun path: stat /usr/local/bin/krun: permission denied
DEBU[0000] Configured OCI runtime ocijail initialization failed: cannot stat OCI runtime ocijail path: stat /usr/local/bin/ocijail: permission denied
DEBU[0000] Configured OCI runtime runc initialization failed: cannot stat OCI runtime runc path: stat /usr/local/bin/runc: permission denied
DEBU[0000] Configured OCI runtime kata initialization failed: cannot stat OCI runtime kata path: stat /usr/local/bin/kata-runtime: permission denied
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 121
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run --rm -d --network=mynetwork -p 8080:80 -v /sys:/sys --name web nginx:latest)
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/my-user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Overriding run root "/run/user/490000123/containers" with "/tmp/containers-user-490000123/containers" from database
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/my-user/.local/share/containers/storage
DEBU[0000] Using run root /tmp/containers-user-490000123/containers
DEBU[0000] Using static dir /home/my-user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/490000123/libpod/tmp
DEBU[0000] Using volume path /home/my-user/.local/share/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime ocijail initialization failed: cannot stat OCI runtime ocijail path: stat /usr/local/bin/ocijail: permission denied
DEBU[0000] Configured OCI runtime runc initialization failed: cannot stat OCI runtime runc path: stat /usr/local/bin/runc: permission denied
DEBU[0000] Configured OCI runtime runj initialization failed: cannot stat OCI runtime runj path: stat /usr/local/bin/runj: permission denied
DEBU[0000] Configured OCI runtime youki initialization failed: cannot stat OCI runtime youki path: stat /usr/local/bin/youki: permission denied
DEBU[0000] Configured OCI runtime runsc initialization failed: cannot stat OCI runtime runsc path: stat /usr/local/bin/runsc: permission denied
DEBU[0000] Configured OCI runtime krun initialization failed: cannot stat OCI runtime krun path: stat /usr/local/bin/krun: permission denied
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: cannot stat OCI runtime crun-wasm path: stat /usr/local/bin/crun-wasm: permission denied
DEBU[0000] Configured OCI runtime kata initialization failed: cannot stat OCI runtime kata path: stat /usr/local/bin/kata-runtime: permission denied
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 121
INFO[0000] Failed to detect the owner for the current cgroup: stat /sys/fs/cgroup/systemd/kubepods.slice/kubepods-podabb864ea_d662_4305_9617_93d3355a468c.slice/cri-containerd-9ef37426dbead1a8000f5d08f79d9c917ae4269549857bb127a7b3743c08735a.scope: no such file or directory
DEBU[0000] Successfully loaded network mynetwork: &{mynetwork 90e5fd8d5123f8ac577a650b9d6e68b21a31a6ec6300569da8bac97475ad69cb bridge podman4 2023-11-20 09:01:02.794307755 +0000 UTC [{{{10.89.3.0 ffffff00}} 10.89.3.1 <nil>}] [] false false true [] map[] map[] map[driver:host-local]}
DEBU[0000] Successfully loaded 2 networks
DEBU[0000] Adding port mapping from 8080 to 80 length 1 protocol ""
DEBU[0000] Pulling image nginx:latest (policy: missing)
DEBU[0000] Looking up image "nginx:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] Trying "docker.io/library/nginx:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Found image "nginx:latest" as "docker.io/library/nginx:latest" in local containers storage
DEBU[0000] Found image "nginx:latest" as "docker.io/library/nginx:latest" in local containers storage ([overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647)
DEBU[0000] exporting opaque data as blob "sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Looking up image "docker.io/library/nginx:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "docker.io/library/nginx:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Found image "docker.io/library/nginx:latest" as "docker.io/library/nginx:latest" in local containers storage
DEBU[0000] Found image "docker.io/library/nginx:latest" as "docker.io/library/nginx:latest" in local containers storage ([overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647)
DEBU[0000] exporting opaque data as blob "sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] User mount /sys:/sys options []
DEBU[0000] Looking up image "nginx:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "docker.io/library/nginx:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Found image "nginx:latest" as "docker.io/library/nginx:latest" in local containers storage
DEBU[0000] Found image "nginx:latest" as "docker.io/library/nginx:latest" in local containers storage ([overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647)
DEBU[0000] exporting opaque data as blob "sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Inspecting image c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
DEBU[0000] exporting opaque data as blob "sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] exporting opaque data as blob "sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Inspecting image c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
DEBU[0000] Inspecting image c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
DEBU[0000] Inspecting image c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
DEBU[0000] Inspecting image c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
DEBU[0000] User mount /proc:/proc options []
DEBU[0000] using systemd mode: false
DEBU[0000] setting container name web
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] Allocated lock 0 for container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203
DEBU[0000] parsed reference into "[overlay@/home/my-user/.local/share/containers/storage+/tmp/containers-user-490000123/containers]@c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] exporting opaque data as blob "sha256:c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647"
DEBU[0000] Created container "154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203"
DEBU[0000] Container "154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203" has work directory "/home/my-user/.local/share/containers/storage/overlay-containers/154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203/userdata"
DEBU[0000] Container "154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203" has run directory "/tmp/containers-user-490000123/containers/overlay-containers/154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203/userdata"
DEBU[0000] overlay: mount_data=lowerdir=/home/my-user/.local/share/containers/storage/overlay/l/U5O3OOOZ7FBWOXYBGRFYZEGVPO:/home/my-user/.local/share/containers/storage/overlay/l/H3MS5VDO4UYRFF3KAPNJQJDHTC:/home/my-user/.local/share/containers/storage/overlay/l/EYOQR6AAXK2H6WCPR75XOLBI5N:/home/my-user/.local/share/containers/storage/overlay/l/45633CMWMO4VOUUVU75DHUI6VH:/home/my-user/.local/share/containers/storage/overlay/l/FYDVJW2KRUBNQ4CCMKP3BQVFL7:/home/my-user/.local/share/containers/storage/overlay/l/PP7MZC4RY4ZYNWN56H42ONNFVO:/home/my-user/.local/share/containers/storage/overlay/l/I4W2YFKJZWQLWBFQL36VN73LSQ,upperdir=/home/my-user/.local/share/containers/storage/overlay/7ae4061ce9e4d7c36f99d3d42f1221cf3405bf2a2856318aa960801d995aa628/diff,workdir=/home/my-user/.local/share/containers/storage/overlay/7ae4061ce9e4d7c36f99d3d42f1221cf3405bf2a2856318aa960801d995aa628/work,,volatile
DEBU[0000] Made network namespace at /run/user/490000123/netns/netns-df1c0980-47d9-fbea-1376-497924113a07 for container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203
DEBU[0000] The path of /etc/resolv.conf in the mount ns is "/etc/resolv.conf"
DEBU[0000] Mounted container "154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203" at "/home/my-user/.local/share/containers/storage/overlay/7ae4061ce9e4d7c36f99d3d42f1221cf3405bf2a2856318aa960801d995aa628/merged"
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
DEBU[0000] Created root filesystem for container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203 at /home/my-user/.local/share/containers/storage/overlay/7ae4061ce9e4d7c36f99d3d42f1221cf3405bf2a2856318aa960801d995aa628/merged
[DEBUG netavark::network::bridge] Setup network mynetwork
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.3.7/24]
[DEBUG netavark::network::bridge] Bridge name: podman4 with IP addresses [10.89.3.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.3.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-741C6CD328516 exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-741C6CD328516 exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman4.route_localnet to 1
DEBU[0000] Unmounted container "154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203"
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Cleaning up container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203 storage is already unmounted, skipping...
DEBU[0000] ExitCode msg: "netavark (exit code 1): sysctl error: io error: read-only file system (os error 30)"
DEBU[0000] Removing container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203
DEBU[0000] Cleaning up container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203 storage is already unmounted, skipping...
DEBU[0000] Removing all exec sessions for container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203
DEBU[0000] Container 154649eed3306b189ef11f3fbcf30f0a81c2e4b3b264d2d0f55e4086fb34b203 storage is already unmounted, skipping...
Error: netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30)
DEBU[0000] Shutting down engines

Steps to reproduce the issue

  1. Create the network
    podman network create mynetwork
  2. Run the container
    podman --log-level=debug run --rm -d --network=mynetwork -p 8080:80 -v /sys:/sys --name web nginx:latest

Describe the results you received

Netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30)

Describe the results you expected

The nginx container to come up successfully.

podman info output

host:
  arch: amd64
  buildahVersion: 1.31.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.7-1.el9_2.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: fab2fef7227d2dc16478d29f1185953f81451702'
  cpuUtilization:
    idlePercent: 98.87
    systemPercent: 0.4
    userPercent: 0.73
  cpus: 96
  databaseBackend: boltdb
  distribution:
    distribution: '"almalinux"'
    version: "9.2"
  eventLogger: file
  freeLocks: 2000
  hostname: my-podman-host
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 490000123
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 490000123
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.10.198-187.748.amzn2.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 17866321920
  memTotal: 32811503616
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
       package: aardvark-dns-1.5.0-2.el9.x86_64
       path: /usr/libexec/podman/aardvark-dns
       version: aardvark-dns 1.5.0
    package: netavark-1.5.0-2.el9.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.5.0
  ociRuntime:
    name: crun
    package: crun-1.8.4-1.el9_2.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.4
      commit: 5a8fa99a5e41facba2eda4af12fa26313918805b
      rundir: /run/user/490000123/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    path: /run/user/490000123/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-3.el9.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 0
  swapTotal: 0
  uptime: 54h 30m 28.00s (Approximately 2.25 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/my-user/.config/containers/storage.conf
  containerStore:
    number: 40
    paused: 0
    running: 0
    stopped: 40
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/my-user/.local/share/containers/storage
  graphRootAllocated: 20957446144
  graphRootUsed: 8626634752
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 48
  runRoot: /tmp/containers-user-490000123/containers
  transientStore: false
  volumePath: /home/my-user/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.1
  Built: 1690975616
  BuiltTime: Wed Aug  2 11:26:56 2023
  GitCommit: ""
  GoVersion: go1.19.10
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.1

Podman in a container

Yes

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Podman 4.6.1 is installed in a Linux container running in Kubernetes (Amazon EKS).
The home directory /home/my-user is mounted via an NFS share.
rootless_storage_path is mounted under $HOME/.local/share/containers/storage, backed by an Amazon EBS Kubernetes PVC.

# /etc/containers/containers.conf
[containers]
netns="slirp4netns"
userns="host"
ipcns="host"
utsns="host"
cgroupns="host"
cgroups="disabled"
log_driver = "k8s-file"
volumes = [
    "/proc:/proc"
]
default_sysctls = []
[engine]
cgroup_manager = "cgroupfs"
events_logger="file"
runtime="crun"

# /etc/containers/storage.conf
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"
rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
additionalimagestores = [
    "/var/lib/shared",
]
pull_options = {enable_partial_images = "false", use_hard_links = "false", ostree_repos=""}

[storage.options.overlay]
mount_program = "/usr/bin/overlayfs"
mountopt = "nodev,fsync=0"
[storage.options.thinpool]
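
Since netavark fails while writing sysctls with a read-only file system error, it may also help to note how /proc and /sys are mounted inside the pod. A minimal check, assuming findmnt (util-linux) is available in the pod image:

$ findmnt -o TARGET,OPTIONS /proc
$ findmnt -o TARGET,OPTIONS /sys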

Additional information


i-raise-issues added the kind/bug label Nov 18, 2023
rhatdan (Member) commented Nov 19, 2023

podman-compose is not a part of the podman project.

If you can figure out the podman command podman-compose is generating we could look at it.

i-raise-issues changed the title from "Podman compose - Netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30)" to "Podman - Netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30)" Nov 20, 2023
i-raise-issues changed the title from "Podman - Netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30)" to "Podman in a network - Netavark (exit code 1): Sysctl error: IO Error: Read-only file system (os error 30)" Nov 20, 2023
i-raise-issues (Author) commented

> podman-compose is not a part of the podman project.
>
> If you can figure out the podman command podman-compose is generating we could look at it.

Thanks for the reply @rhatdan. I've translated the issue into podman only commands and updated the entire Issue.

rhatdan (Member) commented Nov 20, 2023

@Luap99 PTAL

Luap99 (Member) commented Nov 20, 2023

You need to have a read-write /proc mount; see #19991 and containers/netavark#825.
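
A minimal sketch of what that means in this setup, from a shell inside the Kubernetes pod (it assumes findmnt is available and that the pod has enough privileges to remount /proc, which an unprivileged pod typically does not):

# confirm whether /proc is currently mounted read-only
$ findmnt -no OPTIONS /proc
# if it reports "ro", remount it read-write so netavark can write its sysctls
$ mount -o remount,rw /proc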

Luap99 closed this as not planned (won't fix, can't repro, duplicate, stale) Nov 20, 2023
github-actions bot added the locked - please file new issue/PR label Feb 19, 2024
github-actions bot locked as resolved and limited conversation to collaborators Feb 19, 2024