Docker Networking
1. Docker native networks
• Docker's images are its most celebrated feature, but networking remains a comparatively weak part.
• After installation, Docker automatically creates three networks: bridge, host, and none.
[root@vm2 harbor]# docker network ls ##list Docker networks
NETWORK ID NAME DRIVER SCOPE
49954159e96f bridge bridge local
1cc2593aac80 host host local
36e3b54b4a68 none null local
[root@vm2 harbor]#
• At install time Docker creates a Linux bridge named docker0, and newly created containers are automatically attached to it.
[root@vm2 harbor]# ip addr show docker0 ##inspect the docker0 bridge
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:bf:cd:16:d9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
[root@vm2 harbor]# docker run -d nginx ##start a container
8463be0e2560f97831a3e6a02260d4cf6d2cb4deb53aba931c343a11878d3c88
[root@vm2 yum.repos.d]# brctl show ##show bridge membership for running containers
bridge name bridge id STP enabled interfaces
br-62d2884ef1ae 8000.02422f2ac665 no veth3b73c8e
1.1. In bridge mode a container has no public IP; only the Docker host can reach it directly, and external hosts cannot see it.
1.1.1 Containers reach external networks through the host's NAT rules.
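The NAT path above depends on two host-side settings: kernel IP forwarding, and a MASQUERADE (SNAT) rule that Docker installs for the bridge subnet. A minimal check script, as a sketch; it assumes a standard Linux host and prints a fallback message when iptables is not available:

```shell
#!/bin/sh
# Check the two prerequisites for outbound access from bridge-mode containers.
# 1) Kernel IP forwarding must be on (value 1):
echo "ip_forward: $(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo '?')"
# 2) Docker normally installs a rule like:
#    -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
iptables -t nat -S POSTROUTING 2>/dev/null | grep MASQUERADE \
  || echo "no MASQUERADE rule visible (iptables missing or rule absent)"
```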
1.2. Host mode is selected at container creation with --network=host.
1.2.1 Running a container with the network mode set to host:
[root@lnmp0 ~]# docker run -it --name demo --network host busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:d0:1b:0a brd ff:ff:ff:ff:ff:ff
inet 192.168.155.100/24 brd 192.168.155.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::c590:6719:6caa:bde5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
link/ether 02:42:81:2c:aa:39 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:81ff:fe2c:aa39/64 scope link
valid_lft forever preferred_lft forever
1.2.2 Running a container without specifying a network mode:
[root@lnmp0 ~]# docker run -it --name demo busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
241: eth0@if242: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ #
1.2.3 Host mode shares the host's network stack with the container. The benefit is that external hosts can talk to the container directly; the drawback is that the container's network has no isolation.
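Since host mode removes the separate network namespace, port-publishing options such as -p have no effect there. A dry-run sketch (the echo just prints the command; drop it to actually run it; nginx is an example image):

```shell
# In host mode the service binds host ports directly, so -p is not needed
# (and is ignored). Printed rather than executed:
echo docker run -d --network host nginx
```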
1.3. None mode disables networking entirely; the container has only the lo interface. Select it at creation with --network=none.
[root@lnmp0 ~]# docker run -it --name demo --network none busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
/ #
2. Docker custom networks
• For custom networks, Docker provides three network drivers:
• bridge
• overlay
• macvlan
The bridge driver behaves like the default bridge network mode but adds new capabilities; overlay and macvlan are for building cross-host networks.
Custom networks are recommended: they let you control which containers can communicate, and they add automatic DNS resolution of container names to IP addresses.
2.1 Creating a custom bridge
[root@lnmp0 ~]# docker network create -d bridge my_net1 ##create a custom bridge network
06b4afcaa6a377b052798d2dcdaf273bc330932e00b1cd579908633ba38da859
[root@lnmp0 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
49954159e96f bridge bridge local
480936d6e80c host host local
06b4afcaa6a3 my_net1 bridge local ##created successfully
2516d6848724 none null local
[root@lnmp0 ~]# docker network inspect my_net1 ##show details of the new my_net1 network
[
{
"Name": "my_net1",
"Id": "06b4afcaa6a377b052798d2dcdaf273bc330932e00b1cd579908633ba38da859",
"Created": "2022-07-02T18:53:20.527571903-07:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
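Instead of reading the JSON by eye, the Subnet field can be extracted mechanically. A sketch, run here against a saved fragment of the output above; the --format template in the trailing comment is the assumed live equivalent:

```shell
# Extract the Subnet field from a saved fragment of `docker network inspect`.
cat > /tmp/my_net1.json <<'EOF'
{"IPAM": {"Config": [{"Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1"}]}}
EOF
grep -o '"Subnet": "[^"]*"' /tmp/my_net1.json | cut -d'"' -f4
# Live equivalent (assumed):
#   docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' my_net1
```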
2.2 You can also customize the subnet.
2.2.1 Pass the --subnet and --gateway options at creation time:
[root@lnmp0 ~]# docker network create --subnet 192.168.0.0/24 --gateway 192.168.0.1 my_net2 ##custom subnet 192.168.0.0/24, custom gateway 192.168.0.1
4f23b927e8a30f7a74cb73cf0832b1a4b7f67a85a309949ece20969f167e1cbd
[root@lnmp0 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
49954159e96f bridge bridge local
480936d6e80c host host local
06b4afcaa6a3 my_net1 bridge local
4f23b927e8a3 my_net2 bridge local
2516d6848724 none null local
[root@lnmp0 ~]# docker inspect my_net2 ##inspect the network after setting the custom subnet
[
{
"Name": "my_net2",
"Id": "4f23b927e8a30f7a74cb73cf0832b1a4b7f67a85a309949ece20969f167e1cbd",
"Created": "2022-07-02T19:01:05.885535718-07:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/24", ##created successfully
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
2.2.2 The --ip option assigns a fixed container IP address, but only on a user-defined bridge; the default bridge mode does not support it. Containers on the same bridge can reach one another.
[root@lnmp0 ~]# docker run -it --name demo --network my_net2 --ip 192.168.0.10 busybox ##run a container on the newly created network
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
245: eth0@if246: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:00:0a brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
[root@lnmp0 ~]# docker run -it --name demo1 --network my_net2 --ip 192.168.0.20 centos ##run a second container on the same new network so it shares a subnet with the first
[root@0b7d22894c74 /]# ping 192.168.0.10 ##ping the first container's IP; it answers, so containers on the same custom bridge can communicate
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
64 bytes from 192.168.0.10: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.0.10: icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from 192.168.0.10: icmp_seq=3 ttl=64 time=0.063 ms
^C
--- 192.168.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.063/0.085/0.120/0.025 ms
[root@0b7d22894c74 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
249: eth0@if250: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:00:14 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.20/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
[root@0b7d22894c74 /]#
2.2.3 Containers attached to different bridges cannot communicate with each other.
Docker is designed to isolate different networks.
[root@lnmp0 ~]# brctl show
bridge name bridge id STP enabled interfaces
br-06b4afcaa6a3 8000.0242143f97e6 no
br-1c3655a3b16a 8000.02420cfdce1a no veth1251fe6
veth23b8e6a
veth2c3d400
veth3904022
vethd5b63e1
br-2dec7494c057 8000.024254d44e9e no veth9366118
vethca5a7a7
vethed36249
br-4f23b927e8a3 8000.02423da41108 no veth6caae35
br-78df5b219464 8000.0242bc8855a0 no veth27a1c38
veth650beca
br-7d7730fa21f1 8000.024268d16acd no veth00fc99a
veth053b2a2
veth2b8cd6f
veth42f4169
veth4a337e1
vetha2b966a
vethb16b3f2
vethbf39ea6
vethd4be276
vethdc51cc2
docker0 8000.0242812caa39 no
2.2.4 So how can containers on two different bridges communicate?
• Use docker network connect to give vm1 an extra interface on my_net2, the network vm2 uses. vm1 then shares a subnet with vm2, which lets containers on different networks communicate.
[root@lnmp0 ~]# docker run -it --name vm1 --network my_net1 busybox ##run a container named vm1 on my_net1
/ # ip addr ##interfaces before the extra NIC is added
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
255: eth0@if256: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@lnmp0 ~]# docker network connect my_net2 vm1 ##add a my_net2 interface to the container named vm1
[root@lnmp0 ~]# docker attach vm1 ##re-attach to the vm1 container
/ # ip addr ##check the interfaces
257: eth1@if258: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue ##a new interface on the 192.168.0.0/24 subnet was added
link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.2/24 brd 192.168.0.255 scope global eth1
valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
255: eth0@if256: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # read escape sequence
[root@lnmp0 ~]# docker run -it --name vm2 --network my_net2 --ip 192.168.0.10 centos ##run a container named vm2
[root@29ef41a678ae /]# ping 192.168.0.2 ##ping the new interface on vm1; it answers, so containers on different networks can now communicate
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.062 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.071 ms
^C
--- 192.168.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.062/0.066/0.071/0.009 ms
3. Docker container communication
3.1 Besides IP addresses, containers can also communicate by container name.
• Since Docker 1.10, an embedded DNS server is built in.
• DNS resolution only works on user-defined networks.
• Use the --name option when starting a container to set its name.
[root@lnmp0 ~]# docker run -d --name vm1 --network my_net1 nginx
954770d4f89753e4b778615dd7150898ff891259c627ff9829907dd1cf8610bb
[root@lnmp0 ~]# docker run -it --name vm2 --network my_net1 centos
[root@15fab7adfa9b /]# ping vm1
PING vm1 (172.18.0.2) 56(84) bytes of data.
64 bytes from vm1.my_net1 (172.18.0.2): icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from vm1.my_net1 (172.18.0.2): icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from vm1.my_net1 (172.18.0.2): icmp_seq=3 ttl=64 time=0.062 ms
^C
--- vm1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.054/0.060/0.066/0.010 ms
[root@15fab7adfa9b /]#
3.2 Joined containers are a somewhat special network mode.
• Select it at container creation with --network=container:vm1 (vm1 is the name of a running container).
[root@lnmp0 ~]# docker run -it --name vm2 --network my_net1 centos
[root@e802afb24c28 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
273: eth0@if274: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@e802afb24c28 /]# [root@lnmp0 ~]#
[root@lnmp0 ~]# docker run -it --name vm1 --network container:vm2 centos
[root@e802afb24c28 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
273: eth0@if274: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@e802afb24c28 /]#
• --link can be used to link two containers (it is a legacy feature; user-defined networks are the preferred alternative).
• --link takes the form:
• --link <name or id>:alias
• name and id are the source container's name and ID, and alias is the name the source container goes by under the link.
[root@lnmp0 ~]# docker run -d nginx
748bdfe2f209ed0c8e65c6a740e046e994acf13b7a7b1b2a8e94276787277fbf
[root@lnmp0 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
748bdfe2f209 nginx "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp affectionate_albattani
[root@lnmp0 ~]# docker run -it --name vm1 --link affectionate_albattani:web centos
[root@0b8aedee7f98 /]# ping web
PING web (172.17.0.2) 56(84) bytes of data.
64 bytes from web (172.17.0.2): icmp_seq=1 ttl=64 time=0.085 ms
64 bytes from web (172.17.0.2): icmp_seq=2 ttl=64 time=0.034 ms
^C
--- web ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.034/0.059/0.085/0.026 ms
[root@0b8aedee7f98 /]#
3.3 How containers access external networks: via iptables SNAT (see diagram).
3.4 How external networks access a container:
• port mapping
• the -p option specifies the mapped ports
[root@lnmp0 ~]# docker run -d --name web -p 80:80 nginx
14a5c3a973310298bee6cf4c54a7097fb77ee00a322c17e18dfb3bd668cee564
[root@lnmp0 ~]# docker port web
80/tcp -> 0.0.0.0:80
80/tcp -> :::80
[root@lnmp0 ~]# netstat -antlupe | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 1825175 7850/docker-proxy
tcp6 0 0 :::80 :::* LISTEN 0 1825180 7854/docker-proxy
[root@lnmp0 ~]# ps -ax | grep docker-proxy
7850 ? Sl 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.3 -container-port 80
7854 ? Sl 0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 80 -container-ip 172.17.0.3 -container-port 80
[root@lnmp0 ~]# iptables -t nat -nL
.......
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 127.0.0.1 tcp dpt:1514 to:172.19.0.2:10514
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.3:80
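The last DNAT rule above is the one created by -p 80:80: it rewrites the destination of inbound port-80 traffic to the container. The rewrite target can be pulled out of the rule text mechanically (the rule string below is copied from the listing above):

```shell
# Extract the destination-rewrite target from the DNAT rule line.
rule='DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.3:80'
echo "$rule" | grep -o 'to:[0-9.]*:[0-9]*'
```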
[root@lnmp0 ~]# curl 172.17.0.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
• External access to containers uses docker-proxy plus iptables DNAT.
• When the Docker host itself accesses a local container, iptables DNAT is used.
• Access from external hosts, and container-to-container access, is handled by docker-proxy.
4. Cross-host container networking
4.1 Cross-host networking solutions
• Docker-native: overlay and macvlan
• Third-party: flannel, weave, calico
• How are all these networking solutions integrated with Docker?
• libnetwork, Docker's container networking library
• CNM (Container Network Model), the abstraction it applies to container networks
4.2 CNM has three kinds of components
• Sandbox: the container's network stack, including interfaces, DNS, and the routing table (a network namespace).
• Endpoint: attaches a sandbox to a network (a veth pair).
• Network: a group of endpoints; endpoints on the same network can communicate.
4.3 Implementing the macvlan networking solution
• A NIC virtualization technique provided by the Linux kernel.
• Needs no Linux bridge; it uses the physical interface directly, so performance is excellent.
4.4 On each of the two Docker hosts, add a NIC and enable promiscuous mode on it: ip link set eth1 promisc on
When checking with ip addr, the NIC's flags must show PROMISC: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
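Whether promiscuous mode took effect can be verified by checking for the PROMISC flag in the ip link output; a sketch tested here against a saved sample line (on a real host, feed it the output of `ip -o link show ens33` instead):

```shell
# Look for the PROMISC flag in an `ip link` line (sample copied from below).
sample='2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500'
case "$sample" in
  *PROMISC*) echo "promisc on"  ;;
  *)         echo "promisc off" ;;
esac
```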
Docker host 1
[root@lnmp0 ~]# ip link set ens33 promisc on
[root@lnmp0 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:d0:1b:0a brd ff:ff:ff:ff:ff:ff
inet 192.168.155.100/24 brd 192.168.155.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::c590:6719:6caa:bde5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Docker host 2
[root@lnmp1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e4:11:4d brd ff:ff:ff:ff:ff:ff
inet 192.168.155.150/24 brd 192.168.155.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::c590:6719:6caa:bde5/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::fa36:fd7b:e476:d9c1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4.5 Create a macvlan network on each of the two Docker hosts
On Docker host 1, run a container on the macvlan network and check its IP with ip addr inside the container. Do the same on Docker host 2, then ping the IP of host 1's container from inside host 2's container to verify that macvlan allows cross-host communication.
Docker host 1
[root@lnmp0 ~]# docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=ens33 macvlan1
0e681645e75ad1bb7a322748daec4cf709f406029b728092b16099683c787fe4
[root@lnmp0 ~]# docker run -it --name vm1 --network macvlan1 --ip 192.168.0.10 centos
[root@71fe2f808673 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
281: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 02:42:c0:a8:00:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
[root@71fe2f808673 /]# [root@lnmp0 ~]#
Docker host 2
[root@lnmp1 ~]# docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=ens33 macvlan1
50f8bf3010dbe51761777721a728c9bc75dae0453029d1e69036be793726a9ca
[root@lnmp1 ~]# docker run -it --name vm2 --network macvlan1 --ip 192.168.0.20 centos
[root@142347d727c7 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 02:42:c0:a8:00:14 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.20/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
[root@142347d727c7 /]# ping 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
64 bytes from 192.168.0.10: icmp_seq=1 ttl=64 time=0.265 ms
64 bytes from 192.168.0.10: icmp_seq=2 ttl=64 time=0.214 ms
^C
--- 192.168.0.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.214/0.239/0.265/0.029 ms
4.6 Analysis of the macvlan network structure
• No new Linux bridge is created.
Docker host 1
[root@lnmp0 ~]# brctl show
bridge name bridge id STP enabled interfaces
br-1c3655a3b16a 8000.02420cfdce1a no veth23b8e6a
veth2c3d400
veth3904022
vethd5b63e1
br-2dec7494c057 8000.024254d44e9e no veth9366118
vethca5a7a7
vethed36249
br-78df5b219464 8000.0242bc8855a0 no veth27a1c38
veth650beca
br-7d7730fa21f1 8000.024268d16acd no veth00fc99a
veth053b2a2
veth2b8cd6f
veth42f4169
veth4a337e1
vetha2b966a
vethb16b3f2
vethbf39ea6
vethdc51cc2
docker0 8000.0242812caa39 no veth1551903
vethee54d19
Docker host 2
[root@lnmp1 ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.024259a48a87 no
4.7 The container's interface connects directly to the host NIC, with no NAT or port mapping needed.
• macvlan claims the host NIC exclusively, but VLAN sub-interfaces make multiple macvlan networks possible on one NIC.
• VLANs partition a physical layer-2 network into up to 4094 mutually isolated logical networks; VLAN IDs range from 1 to 4094.
[root@lnmp0 ~]# docker network create -d macvlan --subnet 192.168.10.0/24 --gateway 192.168.10.1 -o parent=ens33.1 macvlan2
5c27207a0fb1956eff5024ac0a0bbe9695efdb8fc5486194ccf00f080c3f4cf0
[root@lnmp0 ~]# docker network create -d macvlan --subnet 192.168.20.0/24 --gateway 192.168.20.1 -o parent=ens33.2 macvlan3
978be07adaa5b333aedc6d0194c1dfd2686839a99921ba6fb497ddce0597bd3d
[root@lnmp0 ~]# ip addr
........
282: ens33.1@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 00:0c:29:d0:1b:0a brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:fed0:1b0a/64 scope link
valid_lft forever preferred_lft forever
283: ens33.2@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 00:0c:29:d0:1b:0a brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:fed0:1b0a/64 scope link
valid_lft forever preferred_lft forever
4.8 macvlan: isolation and interconnection between networks
• macvlan networks are isolated at layer 2, so containers on different macvlan networks cannot communicate.
• They can be interconnected at layer 3 through a gateway.
• Docker itself imposes no restrictions here; manage them as you would a traditional VLAN network.
• docker network subcommands:
• connect ##connect a container to a specified network
• create ##create a network
• disconnect ##disconnect a container from a specified network
• inspect ##show detailed information about a specified network
• ls ##list all networks
• rm ##remove a network
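As a wrap-up, a dry-run teardown of the custom networks created in this document (each command is printed rather than executed; drop the echo to run it for real, and note that a network can only be removed after all attached containers are disconnected or removed):

```shell
# Print the cleanup commands for the custom networks used above.
for net in my_net1 my_net2 macvlan1 macvlan2 macvlan3; do
  echo docker network rm "$net"
done
```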