Docker Networking: macvlan bridge

Docker takes a slightly different approach with its network drivers, which confuses new users who are familiar with the terminology used by other virtualization products. If you are looking for a way to bridge a container into a physical network, you have come to the right place: the macvlan driver connects containers directly into a physical Layer 2 network. If you are looking for a different kind of network connection, refer to my docker network drivers post.

Before I begin, you should check some basics on what macvlan is, why it is a better alternative to a linux bridge and how it compares with ipvlan.
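
To see the mechanism without Docker in the picture, you can create a macvlan interface by hand with iproute2. This is just an illustrative sketch (the names mv0 and 10.0.0.99 are my own placeholders, and it requires root):

```shell
# Create a macvlan subinterface mv0 on top of eth0 in bridge mode
ip link add mv0 link eth0 type macvlan mode bridge
ip link set mv0 up
ip addr add 10.0.0.99/24 dev mv0
# mv0 has its own MAC address, visible on the physical segment
ip link show mv0
# Clean up
ip link del mv0
```

This is essentially what the Docker macvlan driver does for each container endpoint, minus the IPAM bookkeeping.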

Important: As of Docker 1.11, the macvlan network driver is part of Docker’s experimental build and is not available in the production release. You can find more info on how to use the experimental build here. If you are looking for a production-ready solution to connect your containers to a physical Layer 2 network, stick with pipework for the time being.

Last but not least, the macvlan driver requires Linux kernel 3.9 or greater. You can check your kernel version with uname -r. If you’re running RHEL (CentOS, Scientific Linux, …) 6, you’re out of luck; upgrade to RHEL 7. An up-to-date Ubuntu should be fine.
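
If you want to script the check, a small version comparison does the trick (kernel_ok is a helper name of my own choosing):

```shell
# Return success if the given kernel version is at least 3.9,
# the minimum the macvlan driver requires.
kernel_ok() {
  # sort -V orders version strings; if 3.9 sorts first (or ties),
  # the running kernel is new enough
  [ "$(printf '%s\n' "3.9" "$1" | sort -V | head -n1)" = "3.9" ]
}

if kernel_ok "$(uname -r)"; then
  echo "kernel OK for macvlan"
else
  echo "kernel too old for macvlan, upgrade first"
fi
```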

Macvlan Bridge Mode Configuration

While the Linux kernel supports multiple macvlan modes, the Docker macvlan driver only supports macvlan bridge mode. Bridge mode allows you to configure the following topology:

Docker macvlan bridge mode

All the configuration examples are dual-stack, IPv6 included. If you have no desire or means to provide native IPv6 connectivity, simply omit the IPv6 configuration, which I have kindly marked in italics.

First, make sure you’re root or have superuser permissions.

sudo su

By default, Docker comes with several networks preconfigured. List them with:

# docker network ls
NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host

A Docker macvlan network connects container interfaces to a parent physical interface. Check that the physical interface is up and running:

# ip addr | grep -E 'mtu|inet'
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
    inet6 2001:db8:babe:cafe::2/64 scope global
    inet6 fe80::baae:deff:fead:beef/64 scope link
[...]

eth0 is up and running and has both an IPv4 and an IPv6 address, so you’re good to go.

Create a new macvlan network called macvlan0.

docker network create -d macvlan \
    --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
    --subnet=2001:db8:babe:cafe::/64 --gateway=2001:db8:babe:cafe::1 \
    -o parent=eth0 \
    --ipv6 \
    macvlan0

Why do you have to configure both the L3 subnet and the default gateway if macvlan promises to deliver an L2 network? Surely the IP configuration of the containers in the macvlan network is dealt with separately, either with static configuration or by an external DHCP server?

Unfortunately – no. Docker controls IP address assignment for network and endpoint interfaces via IPAM driver(s). Libnetwork ships a default, built-in IPAM driver and allows third-party IPAM drivers to be plugged in dynamically. On network creation, the user can specify which IPAM driver libnetwork should use for the network’s IP address management. For the time being there is no IPAM driver that talks to an external DHCP server, so you need to rely on Docker’s default IPAM driver for container IP address and settings configuration.

Containers use host’s DNS settings by default, so there is no need to configure DNS servers.
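
Should a container need different resolvers than the host, docker run accepts a --dns flag per container. A minimal sketch (10.0.0.53 is a placeholder for your own resolver, container-dns a name of my choosing):

```shell
# Override the inherited DNS settings for a single container;
# 10.0.0.53 is a placeholder resolver address
docker run \
  --name='container-dns' \
  --net=macvlan0 \
  --dns=10.0.0.53 \
  --detach=true \
  phusion/baseimage:latest
```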

If you absolutely need your containers to acquire IP data from the DHCP server, macvlan driver is currently not the solution you are looking for. Use pipework.

Warning: You should not have an external DHCP server assigning IP addresses for the same subnet you have configured at the creation of the macvlan network. Docker’s IPAM driver is not aware of the IP addresses already in use by external DHCP clients, leading to possible IP address conflicts in the subnet.
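
One way to reduce that risk is to hand Docker’s IPAM only a slice of the subnet that your DHCP server is configured to skip, using --ip-range. The /28 below is an assumption; adjust it to whatever range is excluded from your DHCP pool. This variant replaces the earlier docker network create command:

```shell
# Let IPAM allocate container addresses only from 10.0.0.192/28,
# while the containers still live on the full 10.0.0.0/24 subnet
docker network create -d macvlan \
    --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
    --ip-range=10.0.0.192/28 \
    -o parent=eth0 \
    macvlan0
```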

Verify that the macvlan0 network was created:

# docker network ls
NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host
f08ca9e2eb1b        macvlan0            macvlan

Check the network details:

# docker network inspect macvlan0
[
    {
        "Name": "macvlan0",
        "Id": "f08ca9e2eb1b66fdbe0f231235d8879465804e7b702fe3702f2fd22a06f5fdcb",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                { "Subnet": "10.0.0.0/24", "Gateway": "10.0.0.1" },
                { "Subnet": "2001:db8:babe:cafe::/64", "Gateway": "2001:db8:babe:cafe::1" }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": { "parent": "eth0" },
        "Labels": {}
    }
]

You have just created a network that uses the macvlan driver on parent interface eth0. It uses the default IPAM driver with one IPv4 and (optionally) one IPv6 subnet. No containers are connected to the network yet.
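
Instead of reading the raw JSON, you can also pull out individual fields with a Go template; for example, just the configured subnets:

```shell
# Print only the subnets configured for macvlan0
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' macvlan0
```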

Time to spin up the first container. Select the image of your choice or just use phusion/baseimage for the purpose of this tutorial:

docker run \
  --name='container0' \
  --hostname='container0' \
  --net=macvlan0 \
  --detach=true \
  phusion/baseimage:latest

container0 has one interface, connected to the macvlan0 network. The --detach=true flag runs the container in the background.

Verify that the container is running:

# docker ps
CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS              NAMES
4eddd1fca8e5        phusion/baseimage:latest   "/sbin/my_init"     1 minute ago        Up 1 minute         container0

Check the network details again:

# docker network inspect macvlan0
[
    {
        "Name": "macvlan0",
        "Id": "f08ca9e2eb1b66fdbe0f231235d8879465804e7b702fe3702f2fd22a06f5fdcb",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                { "Subnet": "10.0.0.0/24", "Gateway": "10.0.0.1" },
                { "Subnet": "2001:db8:babe:cafe::/64", "Gateway": "2001:db8:babe:cafe::1" }
            ]
        },
        "Internal": false,
        "Containers": {
            "4eddd1fca8e53c016fd742bb67a721126b401906c45b4239c827901fd91ce108": {
                "Name": "container0",
                "EndpointID": "932d4d412bcd1d26926709d5932ab1994d09e9b684e07482bf30c0e791c9ec74",
                "MacAddress": "02:42:0a:0a:28:02",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": "2001:db8:babe:cafe::3/64"
            }
        },
        "Options": { "parent": "eth0" },
        "Labels": {}
    }
]

Note that the network now has a container attached. The IPAM driver ensured the container got an IPv4 and an IPv6 address from the subnets configured for the macvlan network.

Verify that the IP address is really configured in the container by issuing the ip a command:

# docker exec -ti container0 ip a | grep -E 'mtu|inet'
[...]
26: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    inet 10.0.0.3/24 scope global eth0
    inet6 2001:db8:babe:cafe::3/64 scope global nodad
    inet6 fe80::42:aff:fe0a:2802/64 scope link

Also verify the IP routes in the container; notice the default route pointing to the macvlan0 network’s default gateway and the route for the macvlan0 network subnet:

# docker exec -ti container0 ip route
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.3

Optionally, verify IPv6 route:

# docker exec -ti container0 ip -6 route
2001:db8:babe:cafe::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:db8:babe:cafe::1 dev eth0 metric 1024

Spin up the second container. This time configure the IP address manually:

docker run \
  --name='container1' \
  --hostname='container1' \
  --net=macvlan0 \
  --detach=true \
  --ip=10.0.0.4 \
  --ip6=2001:db8:babe:cafe::4 \
  phusion/baseimage:latest

Check the network details:

# docker network inspect macvlan0
[
    {
        "Name": "macvlan0",
        "Id": "f08ca9e2eb1b66fdbe0f231235d8879465804e7b702fe3702f2fd22a06f5fdcb",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                { "Subnet": "10.0.0.0/24", "Gateway": "10.0.0.1" },
                { "Subnet": "2001:db8:babe:cafe::/64", "Gateway": "2001:db8:babe:cafe::1" }
            ]
        },
        "Internal": false,
        "Containers": {
            "1feb1a57f1b8225ac0409fe4a10d7468d6097f5f739ccf4e42fd569ccf246837": {
                "Name": "container1",
                "EndpointID": "e05c02ce744ca66d45d60d732e3fc3609d5fe0d67f1bb55b15269de7378ebb48",
                "MacAddress": "02:42:0a:0a:28:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": "2001:db8:babe:cafe::4/64"
            },
            "4eddd1fca8e53c016fd742bb67a721126b401906c45b4239c827901fd91ce108": {
                "Name": "container0",
                "EndpointID": "932d4d412bcd1d26926709d5932ab1994d09e9b684e07482bf30c0e791c9ec74",
                "MacAddress": "02:42:0a:0a:28:02",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": "2001:db8:babe:cafe::3/64"
            }
        },
        "Options": { "parent": "eth0" },
        "Labels": {}
    }
]

Verify that container0 has connectivity with the default gateway:

# docker exec -ti container0 ping -c 4 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.502 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.214 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.268 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.199 ms

Try to ping the macvlan’s parent eth0 interface from within the container:

# docker exec -ti container0 ping -c 4 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.3 icmp_seq=1 Destination Host Unreachable
From 10.0.0.3 icmp_seq=2 Destination Host Unreachable
From 10.0.0.3 icmp_seq=3 Destination Host Unreachable
From 10.0.0.3 icmp_seq=4 Destination Host Unreachable

Ping will fail. While containers utilize the parent physical interface of the Docker host to reach the outside network, they have no direct connectivity with that interface; this is macvlan bridge mode working as designed. If you need direct connectivity between a container and the Docker host, configure a macvlan subinterface on the host or use a different Docker network type.
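
A sketch of the first workaround, assuming the addressing used in this tutorial (the interface name macvlan-host and the spare address 10.0.0.10 are my own choices):

```shell
# Give the host its own macvlan interface on the same parent so it can
# reach the containers; macvlan-to-macvlan traffic is bridged in kernel
ip link add macvlan-host link eth0 type macvlan mode bridge
ip addr add 10.0.0.10/32 dev macvlan-host
ip link set macvlan-host up
# Steer traffic for the container addresses through the new interface
# instead of eth0, where it would never reach them
ip route add 10.0.0.3/32 dev macvlan-host
ip route add 10.0.0.4/32 dev macvlan-host
```

After this, pinging 10.0.0.3 or 10.0.0.4 from the host should succeed via macvlan-host.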

Verify the connectivity between the containers by pinging container0 from container1:

# docker exec -ti container1 ping -c 4 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.066 ms
64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.048 ms

Docker macvlan bridge mode connectivity

Finally, check the ARP table on the router. After all the pings performed, it should contain entries for the Docker host’s IP address (mapped to the host’s physical NIC MAC address) and both container IP addresses (mapped to the containers’ virtual MAC addresses).

router# show ip arp 
Protocol  Address      Age (min)  Hardware Addr   Type  Interface
Internet  10.0.0.2     7          b8ae.dead.beef  ARPA  Gi0
Internet  10.0.0.3     3          0242.0a0a.2802  ARPA  Gi0
Internet  10.0.0.4     2          0242.0a0a.2804  ARPA  Gi0

Congratulations, you have just connected two Docker containers into the physical Layer 2 network using the macvlan network driver!

Next: Configure multiple macvlan networks on 802.1Q trunk VLAN subinterfaces

3 comments for “Docker Networking: macvlan bridge”

  1. yomgui
    September 23, 2016 at 14:59

    Hi !

    I’m trying to make some tests on docker/macvlan setup to be able to have in/out traffic between containers and the external network.

    I made exactly the same setup as you, macvlan mode bridge :

    # uname -a
    Linux gb-macvlan-01 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

    (Also tried on centos 7)

    # ip addr show ens3
    2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:24:46:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.171.40.42/24 brd 192.171.40.255 scope global ens3
    valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe24:46e7/64 scope link
    valid_lft forever preferred_lft forever

    # ip route
    default via 192.171.40.1 dev ens3
    169.254.169.254 via 192.171.40.1 dev ens3
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
    192.171.40.0/24 dev ens3 proto kernel scope link src 192.171.40.42

    # docker network create -d macvlan --ip-range=192.171.40.144/29 --subnet=192.171.40.0/24 --gateway=192.171.40.1 -o parent=ens3 macvlan0
    # docker run --privileged --network=macvlan0 -itd --name cont1 busybox /bin/ash
    # docker run --privileged --network=macvlan0 -itd --name cont2 busybox /bin/ash

    # docker exec -it cont1 ip route
    default via 192.171.40.1 dev eth0
    192.171.40.0/24 dev eth0 src 192.171.40.144

    Containers cont1 & cont2 connectivity works fine
    But no way to communicate with the external network ….

    from the host:
    # ping -c3 192.171.40.1
    PING 192.171.40.1 (192.171.40.1) 56(84) bytes of data.
    64 bytes from 192.171.40.1: icmp_seq=1 ttl=64 time=0.208 ms
    64 bytes from 192.171.40.1: icmp_seq=2 ttl=64 time=0.324 ms
    64 bytes from 192.171.40.1: icmp_seq=3 ttl=64 time=0.277 ms

    from a container:
    # docker exec -it cont1 ping -c3 192.171.40.1
    3 packets transmitted, 0 packets received, 100% packet loss

    # docker exec -it cont1 nc 192.171.40.1 80
    nc: can’t connect to remote host (192.171.40.1): No route to host

    I’m completely stuck here, it seem a simple case !
    I thought about some sysctl conf like ip_forward, rp_filter or iptables but i think it’s not useful on macvlan …

    • yomgui
      September 26, 2016 at 14:58

      I was testing on an OpenStack instance, which seems to have filtering rules dropping traffic from MAC addresses that don’t belong to the proper range …
      Works perfectly on bare metal node…
