Hi Cube

Docker Networking: macvlan bridge

Docker takes a slightly different approach with its network drivers, which confuses new users who are familiar with the general terms used by other virtualization products. If you are looking for a way to bridge a container into a physical network, you have come to the right place: the macvlan driver connects containers to a physical Layer 2 network. If you are looking for a different kind of network connection, refer to my docker network drivers post.

Before I begin, you should check some basics on what macvlan is, why it is a better alternative to a Linux bridge, and how it compares with ipvlan.

Important: As of Docker 1.11, the macvlan network driver is part of Docker’s experimental build and is not available in the production release. You can find more info on how to use the experimental build here. If you are looking for a production-ready solution to connect your containers into a physical Layer 2 network, you should stick to pipework for the time being.

Last but not least, the macvlan driver requires Linux kernel 3.9 or greater. You can check your kernel version with uname -r. If you’re running RHEL (CentOS, Scientific Linux, …) 6, you’re out of luck; upgrade to RHEL 7. You should be fine with an updated Ubuntu.
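If you want to script the version check, a minimal sketch (kernel_ok is a made-up helper name, not part of any tool):

```shell
# kernel_ok: check a kernel release string against the 3.9 minimum
# required by the macvlan driver (helper name is made up for this sketch)
kernel_ok() {
  major=${1%%.*}          # e.g. "3" from "3.10.0-327.el7.x86_64"
  rest=${1#*.}
  minor=${rest%%[.-]*}    # e.g. "10"
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 9 ]; }
}

kernel_ok "$(uname -r)" && echo "macvlan supported" || echo "upgrade your kernel"
```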

Macvlan Bridge Mode Configuration

While there are multiple macvlan modes, the Docker macvlan driver only supports macvlan bridge mode, which allows you to configure the following topology:

(Figure: Docker macvlan bridge mode)

All the configuration examples below are dual-stack, since this blog is IPv6 aware. If you have no desire or means to provide native IPv6 connectivity, simply omit the IPv6-specific parts of the configuration: the --ipv6 flag, the IPv6 --subnet/--gateway pair, and the --ip6 option.

First, make sure you’re root or have superuser permissions.

sudo su

By default, Docker comes with a few networks preconfigured. List them with:

# docker network ls
NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host

A Docker macvlan network connects container interfaces to a parent physical interface. Check that the physical interface is up and running:

# ip addr | grep -E 'mtu|inet'
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.10.40.10/24 brd 10.10.40.255 scope global eth0
    inet6 2001:db8:babe:cafe::2/64 scope global
    inet6 fe80::baae:deff:fead:beef/64 scope link
[...]

eth0 is up and running and has both an IPv4 and an IPv6 address, so you’re good to go.

Create a new macvlan network called macvlan0.

docker network create -d macvlan \
    --subnet=10.10.40.0/24 --gateway=10.10.40.1 \
    --subnet=2001:db8:babe:cafe::/64 --gateway=2001:db8:babe:cafe::1 \
    --ipv6 \
    -o parent=eth0 \
    macvlan0

Why do you have to configure both an L3 subnet and a default gateway if macvlan promises to deliver an L2 network? Surely the IP configuration of the containers in the macvlan network is dealt with separately, either with static configuration or by an external DHCP server?

Unfortunately, no. Docker controls the IP address assignment for network and endpoint interfaces via the IPAM driver(s). Libnetwork has a default, built-in IPAM driver and allows third-party IPAM drivers to be dynamically plugged in. On network creation, the user can specify which IPAM driver libnetwork should use for the network’s IP address management. For the time being, there is no IPAM driver that would communicate with an external DHCP server, so you need to rely on Docker’s default IPAM driver for container IP address and settings configuration.

Containers use the host’s DNS settings by default, so there is no need to configure DNS servers.
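Should you want a container to use different resolvers anyway, docker run accepts the --dns option. A sketch (192.0.2.53 is an assumed placeholder address from the RFC 5737 documentation range, not a real resolver):

```shell
# Sketch: override the DNS resolver for a single container
# (192.0.2.53 is an illustrative placeholder address)
docker run --net=macvlan0 --dns=192.0.2.53 --detach=true phusion/baseimage
```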

If you absolutely need your containers to acquire their IP configuration from a DHCP server, the macvlan driver is currently not the solution you are looking for. Use pipework.

Warning: You should not have an external DHCP server assigning IP addresses from the same subnet you configured at the creation of the macvlan network. Docker’s IPAM driver is not aware of the IP addresses already in use by external DHCP clients, which can lead to IP address conflicts in the subnet.
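If you must coexist with a DHCP server on the same segment, you can at least fence off the default IPAM driver: --ip-range restricts it to a sub-range of the subnet, and --aux-address excludes individual addresses from assignment. A sketch, with all addresses being assumptions for illustration:

```shell
# Sketch: let Docker's IPAM hand out addresses only from 10.10.40.128/25,
# leaving the lower half of the subnet to an external DHCP server
# (all addresses here are illustrative assumptions)
docker network create -d macvlan \
    --subnet=10.10.40.0/24 --gateway=10.10.40.1 \
    --ip-range=10.10.40.128/25 \
    --aux-address="dhcp-server=10.10.40.5" \
    -o parent=eth0 \
    macvlan1
```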

Verify that the macvlan0 network was created:

# docker network ls
NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host
f08ca9e2eb1b        macvlan0            macvlan

Check the network details:

# docker network inspect macvlan0
[
    {
        "Name": "macvlan0",
        "Id": "f08ca9e2eb1b66fdbe0f231235d8879465804e7b702fe3702f2fd22a06f5fdcb",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.10.40.0/24",
                    "Gateway": "10.10.40.1"
                },
                {
                    "Subnet": "2001:db8:babe:cafe::/64",
                    "Gateway": "2001:db8:babe:cafe::1"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {
            "parent": "eth0"
        },
        "Labels": {}
    }
]

You have just created a network that uses the macvlan driver on parent interface eth0. It uses the default IPAM driver with one IPv4 and (optionally) one IPv6 subnet. No containers are connected to the network yet.

Time to spin up the first container. Select an image of your choice, or just use phusion/baseimage for the purpose of this tutorial:

docker run \
  --name='container0' \
  --hostname='container0' \
  --net=macvlan0 \
  --detach=true \
  phusion/baseimage

container0 has one interface, connected to the macvlan0 network. Use --detach=true to run the container in the background.

Verify that the container is running:

# docker ps
CONTAINER ID        IMAGE                      COMMAND             CREATED              STATUS              NAMES
4eddd1fca8e5        phusion/baseimage:latest   "/sbin/my_init"     About a minute ago   Up About a minute   container0

Check the network details again:

# docker network inspect macvlan0
[
    {
        "Name": "macvlan0",
        "Id": "f08ca9e2eb1b66fdbe0f231235d8879465804e7b702fe3702f2fd22a06f5fdcb",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.10.40.0/24",
                    "Gateway": "10.10.40.1"
                },
                {
                    "Subnet": "2001:db8:babe:cafe::/64",
                    "Gateway": "2001:db8:babe:cafe::1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "4eddd1fca8e53c016fd742bb67a721126b401906c45b4239c827901fd91ce108": {
                "Name": "container0",
                "EndpointID": "932d4d412bcd1d26926709d5932ab1994d09e9b684e07482bf30c0e791c9ec74",
                "MacAddress": "02:42:0a:0a:28:02",
                "IPv4Address": "10.10.40.2/24",
                "IPv6Address": "2001:db8:babe:cafe::3/64"
            }
        },
        "Options": {
            "parent": "eth0"
        },
        "Labels": {}
    }
]

Note that the network now has a container attached. The IPAM driver ensured the container got both an IPv4 and an IPv6 address from the subnets configured for the macvlan network.
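As a side note, the MacAddress in the output is not random: with this Docker release, the container MAC is derived deterministically from the container's IPv4 address, prefixing 02:42 to the hex-encoded octets. A quick sketch of the mapping (ip_to_mac is a made-up helper name):

```shell
# ip_to_mac: reproduce Docker's deterministic container MAC,
# 02:42 followed by the four IPv4 octets in hex
ip_to_mac() {
  # splitting the octets into separate words is intentional here
  printf '02:42:%02x:%02x:%02x:%02x\n' $(echo "$1" | tr '.' ' ')
}

ip_to_mac 10.10.40.2   # prints 02:42:0a:0a:28:02, the MAC seen above
```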

Verify that the IP address is really configured in the container by issuing the ip a command:

# docker exec -ti container0 ip a | grep -E 'mtu|inet'
[...]
26: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    inet 10.10.40.2/24 scope global eth0
    inet6 2001:db8:babe:cafe::3/64 scope global nodad
    inet6 fe80::42:aff:fe0a:2802/64 scope link

Also verify the IP routes in the container; notice the default route pointing to the macvlan0 network’s default gateway and the connected route for the macvlan0 subnet:

# docker exec -ti container0 ip route
default via 10.10.40.1 dev eth0
10.10.40.0/24 dev eth0 proto kernel scope link src 10.10.40.2

Optionally, verify IPv6 route:

# docker exec -ti container0 ip -6 route
2001:db8:babe:cafe::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:db8:babe:cafe::1 dev eth0 metric 1024
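You can also verify IPv6 connectivity towards the gateway; a sketch (output omitted):

```shell
# Ping the macvlan0 network's IPv6 default gateway from container0
docker exec -ti container0 ping6 -c 4 2001:db8:babe:cafe::1
```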

Spin up the second container. This time, configure the IP addresses manually:

docker run \
  --name='container1' \
  --hostname='container1' \
  --net=macvlan0 \
  --detach=true \
  --ip=10.10.40.4 \
  --ip6=2001:db8:babe:cafe::4 \
  phusion/baseimage

Check the network details:

# docker network inspect macvlan0
[
    {
        "Name": "macvlan0",
        "Id": "f08ca9e2eb1b66fdbe0f231235d8879465804e7b702fe3702f2fd22a06f5fdcb",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.10.40.0/24",
                    "Gateway": "10.10.40.1"
                },
                {
                    "Subnet": "2001:db8:babe:cafe::/64",
                    "Gateway": "2001:db8:babe:cafe::1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "1feb1a57f1b8225ac0409fe4a10d7468d6097f5f739ccf4e42fd569ccf246837": {
                "Name": "container1",
                "EndpointID": "e05c02ce744ca66d45d60d732e3fc3609d5fe0d67f1bb55b15269de7378ebb48",
                "MacAddress": "02:42:0a:0a:28:04",
                "IPv4Address": "10.10.40.4/24",
                "IPv6Address": "2001:db8:babe:cafe::4/64"
            },
            "4eddd1fca8e53c016fd742bb67a721126b401906c45b4239c827901fd91ce108": {
                "Name": "container0",
                "EndpointID": "932d4d412bcd1d26926709d5932ab1994d09e9b684e07482bf30c0e791c9ec74",
                "MacAddress": "02:42:0a:0a:28:02",
                "IPv4Address": "10.10.40.2/24",
                "IPv6Address": "2001:db8:babe:cafe::3/64"
            }
        },
        "Options": {
            "parent": "eth0"
        },
        "Labels": {}
    }
]

Verify that container0 has connectivity with the default gateway:

# docker exec -ti container0 ping -c 4 10.10.40.1
PING 10.10.40.1 (10.10.40.1) 56(84) bytes of data.
64 bytes from 10.10.40.1: icmp_seq=1 ttl=64 time=0.502 ms
64 bytes from 10.10.40.1: icmp_seq=2 ttl=64 time=0.214 ms
64 bytes from 10.10.40.1: icmp_seq=3 ttl=64 time=0.268 ms
64 bytes from 10.10.40.1: icmp_seq=4 ttl=64 time=0.199 ms

Try to ping the macvlan parent interface, eth0, from within the container:

# docker exec -ti container0 ping -c 4 10.10.40.10
PING 10.10.40.10 (10.10.40.10) 56(84) bytes of data.
From 10.10.40.2 icmp_seq=1 Destination Host Unreachable
From 10.10.40.2 icmp_seq=2 Destination Host Unreachable
From 10.10.40.2 icmp_seq=3 Destination Host Unreachable
From 10.10.40.2 icmp_seq=4 Destination Host Unreachable

The ping will fail. While containers utilize the parent physical interface of the Docker host to reach the outside network, they have no direct connectivity with that interface: macvlan in bridge mode deliberately isolates child interfaces from their parent. If you need direct connectivity between a container and the Docker host, configure a macvlan subinterface on the host, or use a different Docker network type.
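The host-side workaround looks roughly like this: create a macvlan subinterface on the Docker host, attached to the same parent, and route container traffic through it instead of eth0. This is only a sketch; the interface name macvlan0-host and the addresses are assumptions:

```shell
# Create a macvlan subinterface on the host, in bridge mode,
# on the same parent interface the Docker network uses
ip link add macvlan0-host link eth0 type macvlan mode bridge

# Give it an otherwise unused address from the macvlan subnet and bring it up
ip addr add 10.10.40.250/32 dev macvlan0-host
ip link set macvlan0-host up

# Reach container0 via the subinterface; the /32 wins over the connected route
ip route add 10.10.40.2/32 dev macvlan0-host
```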

Verify the connectivity between the containers by pinging container0 from container1:

# docker exec -ti container1 ping -c 4 10.10.40.2
PING 10.10.40.2 (10.10.40.2) 56(84) bytes of data.
64 bytes from 10.10.40.2: icmp_seq=1 ttl=64 time=0.098 ms
64 bytes from 10.10.40.2: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.10.40.2: icmp_seq=3 ttl=64 time=0.066 ms
64 bytes from 10.10.40.2: icmp_seq=4 ttl=64 time=0.048 ms

(Figure: Docker macvlan bridge mode connectivity)

Finally, check the ARP table on the router. After all the pings performed, it should contain entries for the Docker host’s IP address (mapped to the host’s physical NIC MAC address) and both container IP addresses (mapped to the containers’ virtual MAC addresses).

router# show ip arp 
Protocol  Address          Age (min)  Hardware Addr   Type  Interface
Internet  10.10.40.10             7   b8ae.dead.beef  ARPA  Gi0
Internet  10.10.40.2              3   0242.0a0a.2802  ARPA  Gi0
Internet  10.10.40.4              2   0242.0a0a.2804  ARPA  Gi0

Congratulations, you have just connected two Docker containers into the physical Layer 2 network using the macvlan network driver!

Next: Configure multiple macvlan networks on 802.1Q trunk VLAN subinterfaces
