Docker container connected by OVS+DPDK, "ping" works but "iperf" does NOT

Problem Description

I am trying to build a platform using Docker and OVS+DPDK.

1.设置DPDK + OVS

1. Set up DPDK + OVS

I set up DPDK+OVS using dpdk-2.2.0 with openvswitch-2.5.1. First, I compile DPDK and set up hugepages. I do NOT bind a NIC, because I do not receive any traffic from outside.
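
For reference, here is a minimal sketch of the hugepage setup (my addition, not from the original post; the page count and mount point are illustrative):

# Reserve 1024 x 2 MB hugepages (illustrative count) and mount hugetlbfs.
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
grep Huge /proc/meminfo   # verify the pages were actually reserved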

Then, I compile openvswitch with DPDK support (configured --with-dpdk) and start OVS with the following script:

#!/bin/sh
# Remove stale vswitchd logs from earlier runs.
sudo rm /var/log/openvswitch/my-ovs-vswitchd.log*

export PATH=$PATH:/usr/local/share/openvswitch/scripts

export DB_SOCK=/usr/local/var/run/openvswitch/db.sock

# Start the OVSDB server that vswitchd will connect to.
sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
                     --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
                     --private-key=db:Open_vSwitch,SSL,private_key \
                     --certificate=db:Open_vSwitch,SSL,certificate \
                     --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
                     --pidfile --detach

# Request DPDK initialization and pin the PMD threads to cores 1-2 (mask 0x6).
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Start vswitchd, passing DPDK EAL arguments (core mask 0x1, 4 memory channels).
sudo ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach \
                        --log-file=/var/log/openvswitch/my-ovs-vswitchd.log

Everything works fine; my OVS is now running with DPDK support.
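
As a quick sanity check (my addition, not part of the original post), DPDK support can be confirmed with standard OVS commands:

sudo ovs-vsctl get Open_vSwitch . other_config    # should list dpdk-init=true and pmd-cpu-mask
sudo ovs-vsctl show                               # confirms vswitchd is up and reachable
sudo ovs-appctl dpif-netdev/pmd-stats-show        # PMD threads exist only when DPDK is active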

2. Create Docker containers and set up the bridge and ports.

I use a Docker image built from ubuntu:14.04 as follows:

#
# Ubuntu Dockerfile
#
# https://github.com/dockerfile/ubuntu
#

# Pull base image.
FROM ubuntu:14.04

# Install.
RUN \
  sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
  apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y build-essential && \
  apt-get install -y software-properties-common && \
  apt-get install -y byobu curl git htop man unzip vim wget && \
  apt-get install -y iperf net-tools && \
  rm -rf /var/lib/apt/lists/*

# Add files.
ADD root/.bashrc /root/.bashrc
ADD root/.gitconfig /root/.gitconfig
ADD root/.scripts /root/.scripts

# Set environment variables.
ENV HOME /root

# Define working directory.
WORKDIR /root

# Install tcpreplay
RUN apt-get update
RUN apt-get install -y libpcap-dev
ADD tcpreplay-4.3.2 /root/tcpreplay-4.3.2
WORKDIR /root/tcpreplay-4.3.2   
RUN ./configure
RUN make
RUN make install

# Copy pcap file
ADD test_15M /root/test_15M

# Define default command.
CMD ["bash"]

Then, I create one OVS bridge, ovs-br1, and two ports attached with ovs-docker, using this script:

#!/bin/sh

# Create a userspace (netdev) bridge so it runs on the DPDK datapath.
sudo ovs-vsctl add-br ovs-br1 -- set bridge ovs-br1 datapath_type=netdev

sudo ifconfig ovs-br1 173.16.1.1 netmask 255.255.255.0 up

# Start the two containers from the image built above.
sudo docker run -itd --name="box1" "ubuntu14-tcpreplay:v1"

sudo docker run -itd --name="box2" "ubuntu14-tcpreplay:v1"

# Attach each container to the bridge through an eth1 interface.
sudo ovs-docker add-port ovs-br1 eth1 box1 --ipaddress=173.16.1.2/24

sudo ovs-docker add-port ovs-br1 eth1 box2 --ipaddress=173.16.1.3/24

Now, I have one bridge, ovs-br1, with two ports (no names). One is connected to box1 (container 1) and the other to box2 (container 2).
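
As an aside (my addition), it is worth recording the OpenFlow port numbers that ovs-docker assigned, since flow rules matching on in_port (see the answer below) need them:

sudo ovs-vsctl list-ports ovs-br1   # the generated port names
sudo ovs-ofctl show ovs-br1         # each port with its OpenFlow port number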

3. Check the connection between box1 and box2

First, I dump the flows on ovs-br1:

wcf@wcf-OptiPlex-7060:~/ovs$ sudo ovs-ofctl dump-flows ovs-br1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=130.711s, table=0, n_packets=10, n_bytes=768, idle_age=121, priority=0 actions=NORMAL

Then, I go to box1 and ping box2:

wcf@wcf-OptiPlex-7060:~/ovs$ sudo docker exec -it box1 "/bin/bash"
[ root@45514f0108a9:~/tcpreplay-4.3.2 ]$ ping 173.16.1.3     
PING 173.16.1.3 (173.16.1.3) 56(84) bytes of data.
64 bytes from 173.16.1.3: icmp_seq=1 ttl=64 time=0.269 ms
64 bytes from 173.16.1.3: icmp_seq=2 ttl=64 time=0.149 ms
64 bytes from 173.16.1.3: icmp_seq=3 ttl=64 time=0.153 ms
64 bytes from 173.16.1.3: icmp_seq=4 ttl=64 time=0.155 ms
64 bytes from 173.16.1.3: icmp_seq=5 ttl=64 time=0.167 ms
64 bytes from 173.16.1.3: icmp_seq=6 ttl=64 time=0.155 ms
^C
--- 173.16.1.3 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 4997ms
rtt min/avg/max/mdev = 0.149/0.174/0.269/0.045 ms

Things work fine; box1 can ping box2.

Finally, I test iperf between box1 and box2. I installed iperf2 in both containers.

box1:

[ root@45514f0108a9:~/tcpreplay-4.3.2 ]$ iperf -c 173.16.1.3 -u -t 5
------------------------------------------------------------
Client connecting to 173.16.1.3, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 173.16.1.2 port 49558 connected with 173.16.1.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec   642 KBytes  1.05 Mbits/sec
[  3] Sent 447 datagrams
[  3] WARNING: did not receive ack of last datagram after 10 tries.

box2:

[ root@2e19a616d2af:~/tcpreplay-4.3.2 ]$ iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------

The iperf packets from box1 get no response from box2.

I use Wireshark to monitor ovs-br1 and the two OVS ports of box1 and box2.

ovs-br1 does not see any traffic; however, both OVS ports do. (Wireshark screenshot omitted.)
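
A tcpdump equivalent of this capture (my addition; the second interface name is a placeholder, substitute a name printed by ovs-vsctl list-ports ovs-br1):

# Watch the iperf UDP stream on the bridge and on one container port.
sudo tcpdump -ni ovs-br1 udp port 5001
sudo tcpdump -ni <ovs-docker-port> udp port 5001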

Thank you for sharing your ideas.

Best wishes.

Recommended Answer

If the intention is to direct packets from container-1 to container-2, then there should be flow rules stating the same, such as ./ovs-ofctl add-flow br0 in_port=1,action=output:2 and ./ovs-ofctl add-flow br0 in_port=2,action=output:1.
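
Applied to the bridge from this question, a hedged sketch (it assumes the two ovs-docker ports received OpenFlow port numbers 1 and 2; confirm them first with ovs-ofctl show ovs-br1):

# Forward traffic explicitly between the two container ports.
sudo ovs-ofctl add-flow ovs-br1 "in_port=1,actions=output:2"
sudo ovs-ofctl add-flow ovs-br1 "in_port=2,actions=output:1"
sudo ovs-ofctl dump-flows ovs-br1   # packet counters should rise while iperf runs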

Once the flow rules are applied, ensure that the Linux kernel stack has at least a default route configured to send packets to the desired interface, such as a route entry for 173.16.1.0/24.
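
A hedged way to check this from inside the containers (route is available because the Dockerfile above installs net-tools; the add command is only needed if the subnet route is somehow missing):

sudo docker exec -it box1 route -n
# Expect a line like: 173.16.1.0  0.0.0.0  255.255.255.0  U ... eth1
sudo docker exec -it box1 route add -net 173.16.1.0 netmask 255.255.255.0 dev eth1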
