
bridge: support DHCP ipam driver #869

Closed
wants to merge 1 commit

Conversation

@jimparis commented Dec 7, 2023

Fixes #868

@Luap99 (Member) left a comment


Thanks for the contribution. The reason we do not support it right now is that we do not really support a layer 2 bridge setup. Our current code always assumes that we created the bridge, and we will also remove it once all interfaces connected to the bridge are removed.
I assume that in order to use it, you have your external interface connected to the bridge?

Also, this assumption means we always create firewall rules to NAT the traffic, which usually does not make sense for these setups, as you want to talk directly to your gateway without NAT involved. So I think we would need to add proper layer 2 bridge support first, before enabling DHCP for bridge networks.


As far as these changes go, the same code should not be duplicated for bridge; I suggest splitting the common parts out into helper functions.
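
For illustration, the shared part might be factored out roughly like this. This is a minimal sketch of the refactoring idea only; all struct and function names here are assumptions for the example, not netavark's actual API:

```rust
// Illustrative only: the kind of shared helper being suggested here.
// All names are assumptions for this sketch, not netavark's actual API.

/// A DHCP lease as handed back by the dhcp-proxy (simplified).
struct Lease {
    ip: std::net::Ipv4Addr,
    prefix_len: u8,
    gateway: Option<std::net::Ipv4Addr>,
}

/// One subnet entry of the IPAM status block (simplified).
#[derive(Debug)]
struct NetAddress {
    ipnet: String,
    gateway: Option<std::net::Ipv4Addr>,
}

/// Convert a lease into the IPAM representation. Both the macvlan and
/// the bridge driver would call this instead of duplicating the logic.
fn lease_to_net_address(lease: &Lease) -> NetAddress {
    NetAddress {
        ipnet: format!("{}/{}", lease.ip, lease.prefix_len),
        gateway: lease.gateway,
    }
}

fn main() {
    // Example values only.
    let lease = Lease {
        ip: "10.0.103.189".parse().unwrap(),
        prefix_len: 16,
        gateway: Some("10.0.1.1".parse().unwrap()),
    };
    println!("{:?}", lease_to_net_address(&lease));
}
```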

@jimparis (Author) commented Dec 8, 2023

Hi! Thanks for looking.

Thanks for the contribution. The reason we do not support it right now is that we do not really support a layer 2 bridge setup. Our current code always assumes that we created the bridge, and we will also remove it once all interfaces connected to the bridge are removed. I assume that in order to use it, you have your external interface connected to the bridge?

Yes, I already have an existing bridge on the host, which is connected to external physical interfaces as well as VMs running on the host. With this patch, I'm able to treat containers the same as VMs, where they appear on the LAN as if they were their own machine.

Also, this assumption means we always create firewall rules to NAT the traffic, which usually does not make sense for these setups, as you want to talk directly to your gateway without NAT involved. So I think we would need to add proper layer 2 bridge support first, before enabling DHCP for bridge networks.

I don't follow -- there are currently no NAT rules being created, as far as I can see. My containers are directly accessing the LAN with their assigned DHCP addresses:

# podman network inspect systemd-lan
[
     {
          "name": "systemd-lan",
          "id": "74c11da23cedc245b2e4b0fb1f7a74f6268e37b0fbdf8d9b94c94edaba868b1e",
          "driver": "bridge",
          "network_interface": "brint",
          "created": "2023-12-07T23:00:46.698964795-05:00",
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "dhcp"
          }
     }
]

The host is 10.0.1.1, another physical machine on the LAN is 10.0.1.3, and my container got the DHCP address 10.0.103.189:

# wget -q -O - http://10.0.1.3/ip
This is 10.0.1.3 and you are 10.0.1.1
# podman run --rm --network=systemd-lan alpine wget -q -O - http://10.0.1.3/ip
This is 10.0.1.3 and you are 10.0.103.189

So, this appears to be working exactly as I'd hoped. The Netavark debug logs are:

[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network systemd-lan
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses []
[DEBUG netavark::network::bridge] Bridge name: brint with IP addresses []
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[DEBUG netavark::dhcp_proxy::lib] using uds path: /run/podman/nv-proxy.sock
[DEBUG netavark::commands::setup] {
        "systemd-lan": StatusBlock {
            dns_search_domains: Some(
                [],
            ),
            dns_server_ips: Some(
                [],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "36:34:07:5d:5a:c4",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.0.1.1,
                                    ),
                                    ipnet: 10.0.103.189/16,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"

As far as these changes go, the same code should not be duplicated for bridge; I suggest splitting the common parts out into helper functions.

I'm new to Rust and this code, and would prefer if someone more familiar with this could "do it right" :)

@Luap99 (Member) commented Dec 8, 2023

Ah, right. I think this is more of a coincidence, though. Because of the dhcp driver we have no subnets in the ipam struct, so the firewall code sees no subnets and does not add any firewall rules (this is expected, as we also have the ipam driver none where this is wanted).

But given this, I think that would make it acceptable to me; it clearly solves your use case correctly.

src/network/bridge.rs: 3 review threads (outdated, resolved)
@baude (Member) commented Dec 14, 2023

I'll approve as we wait for the comments to be addressed ...

/approve

@jimparis force-pushed the main branch 3 times, most recently from 7e0eed9 to c52a907 on March 10, 2024
@jimparis (Author) commented

Addressed the review comments

@ccakes commented Apr 27, 2024

Just in case it helps, I'm trying to get the exact same use case working and this would solve it 👍

@jimparis (Author) commented

Just in case it helps, I'm trying to get the exact same use case working and this would solve it 👍

Thanks. Have you been able to test this? It's been working fine for me and should hopefully be ready to get merged.

@Luap99 (Member) left a comment


I think functionally this is fine; however, I still have concerns that this is working based on an implementation detail rather than well-defined behaviour.

I think we should expose this as some specific layer 2 setup instead; the reason is our firewall setup and bridge removal. The firewall setup works today with iptables because it doesn't do anything if it finds no subnets. However, we also have nftables and firewalld drivers that might do something in this case.

@mheon WDYT?

@mheon (Member) commented Apr 29, 2024

All of our firewall code should be IP based (I don't think we can reasonably do NAT or other firewall operations without a source subnet). So I think it is a reasonable assumption that any firewall driver we include should only function in the case that a source subnet (v4 or v6, doesn't really matter, as long as there is at least 1) is set on the bridge. Anything lower-level than that isn't really a firewall anymore, but would arguably be an implementation detail of the lower-level bridge driver.

@Luap99 (Member) commented Apr 29, 2024

All of our firewall code should be IP based (I don't think we can reasonably do NAT or other firewall operations without a source subnet). So I think it is a reasonable assumption that any firewall driver we include should only function in the case that a source subnet (v4 or v6, doesn't really matter, as long as there is at least 1) is set on the bridge. Anything lower-level than that isn't really a firewall anymore, but would arguably be an implementation detail of the lower-level bridge driver.

Yes, the code is IP based, but it will still create tables/chains that are then left empty due to having no IPs. I'd rather have a well-defined behavior where we do not call into the firewall code at all in this case, so we know we won't mess anything up.
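
A minimal sketch of that well-defined behavior, assuming illustrative names rather than netavark's actual firewall code: skip the firewall driver entirely when netavark has no managed subnets for the network.

```rust
// Illustrative sketch only; not netavark's actual firewall code.

/// A netavark-managed subnet in CIDR notation, e.g. "10.88.0.0/16".
type Subnet = String;

/// Only call into the firewall driver when netavark itself assigned
/// subnets. With the ipam drivers "dhcp" or "none" this list is empty,
/// so no tables or chains get created at all, instead of being created
/// and then left empty.
fn setup_firewall_if_needed(subnets: &[Subnet]) {
    if subnets.is_empty() {
        // Nothing netavark-managed to NAT or forward;
        // leave the host firewall untouched.
        return;
    }
    for subnet in subnets {
        // ... create the NAT/forward rules for each managed subnet ...
        println!("would add rules for {subnet}");
    }
}
```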

@mheon (Member) commented Apr 29, 2024

Yeah, that's reasonable. There's no reason for the firewall code to be called at all if the interface we're configuring doesn't have an IP.

@mheon (Member) commented Apr 29, 2024

(Well, a Netavark-managed IP at least)

@openshift-ci bot (Contributor) commented Jun 14, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: AkiraMoon305, baude, jimparis

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Support DHCP for bridge driver, like macvlan.

Signed-off-by: Jim Paris <[email protected]>
@jimparis (Author) commented Aug 8, 2024

Rebased and re-tested on main. Where do we stand on this PR? It's still a feature I use and need for my setup.

@mwinters0 commented
I created #1090 to address the underlying concerns and preferred approach, since those aren't DHCP-specific.

@mikesbytes commented
Hello, just throwing my hat in the ring for this being a desirable feature. This is pretty critical for achieving my desired setup of hosting podman containers alongside nspawn containers and VMs, reverse proxying them all at the host level while also maintaining direct connectivity inside the LAN.

I see that #1090 is now merged, and this PR hasn't seen activity for a couple months. What's the status? I would offer to contribute myself but it looks like things are already basically at the finish line.

@Luap99 (Member) commented Dec 2, 2024

Yes, this should be good to rebase, and then add a check that DHCP is only supported with the unmanaged mode.

@xhebox commented Jan 5, 2025

A use case report from me.

I am using OpenWrt as a podman host for hosting apps. I'd like to let OpenWrt create an interface podman0 and take over its management, for better integration.

But this patch did not work as expected.

Sending DHCP requests from podman0 does not seem to broadcast them correctly to dnsmasq: you can use udhcpc inside the container to get an IP address, but not the netavark dhcp-proxy.

I had to set up another veth pair with one end slaved under the podman0 bridge, and let mozim send DHCP requests on the other veth interface instead of the bridge interface, to actually forward the BOOTP/DHCP requests.

@jimparis (Author) commented Jan 6, 2025

Yes, this should be good to rebase, and then add a check that DHCP is only supported with the unmanaged mode.

Sorry, I don't fully understand what #1090 changed and what new check this PR needs. Any chance you could handle updating this PR with what you'd like to see?

@xhebox commented Jan 7, 2025

Yes, this should be good to rebase, and then add a check that DHCP is only supported with the unmanaged mode.

Sorry, I don't fully understand what #1090 changed and what new check this PR needs. Any chance you could handle updating this PR with what you'd like to see?

I believe he means that the dhcp ipam driver for bridge (this PR) can only be used when {"options":{"mode":"unmanaged"}} (#1090) is specified in the network definition. In other words, guard the code of this PR with an if statement like BridgeMode::unmanaged == data.mode; see the sketch below.

#1090 basically assumes that the bridge is created externally and not managed by netavark; the same goes for firewall rules and sysctl settings. netavark just picks up the bridge and attaches veth pairs.
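
For illustration, such a guard might look roughly like this. The enum, field, and error names here are assumptions for the sketch, not netavark's actual identifiers:

```rust
// Illustrative sketch only; names are assumptions, not netavark's
// real identifiers.
#[derive(PartialEq)]
enum BridgeMode {
    Managed,
    Unmanaged,
}

#[derive(PartialEq)]
enum IpamDriver {
    HostLocal,
    Dhcp,
    None,
}

/// Reject the dhcp ipam driver unless the bridge is unmanaged: DHCP
/// replies come from a real server on the external LAN, which a
/// netavark-created (managed) bridge has no uplink to reach.
fn validate_bridge_options(mode: &BridgeMode, ipam: &IpamDriver) -> Result<(), String> {
    if *ipam == IpamDriver::Dhcp && *mode != BridgeMode::Unmanaged {
        return Err(
            "ipam driver dhcp is only supported with the bridge option mode=unmanaged".into(),
        );
    }
    Ok(())
}
```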

@Luap99 (Member) commented Jan 7, 2025

Yes, I can take care of the rebase/update before we do the next release. My goal is to get this done for the 1.14 release, which will most likely be released in February.

@Luap99 (Member) commented Jan 9, 2025

Replaced with #1159

@Luap99 closed this Jan 9, 2025
Successfully merging this pull request may close: bridge doesn't work with DHCP (#868)

9 participants