Problem with testpmd on DPDK and OVS on Ubuntu 18.04


Problem Description

I have an X520-SR2 10G network card. I am going to use it to create two virtual interfaces with OpenvSwitch compiled with DPDK (installed from the Ubuntu 18.04 repository) and test those virtual interfaces with testpmd. I do the following:

$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
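
These commands assume OVS is already running with DPDK support enabled. None of that setup appears in the question, but a minimal sketch of it (hugepage count, driver choice, and OVS socket memory are all assumed values, not taken from the question) would look roughly like this:

# reserve 2 MB hugepages (amount is an assumption; /dev/hugepages must be mounted)
$ echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# bind both ports of the X520 to a DPDK-capable driver (vfio-pci assumed here;
# dpdk-devbind ships with Ubuntu's dpdk package, the tool name/path can vary)
$ sudo modprobe vfio-pci
$ sudo dpdk-devbind --bind=vfio-pci 0000:01:00.0 0000:01:00.1
# enable DPDK in OVS and give vswitchd socket memory on both NUMA nodes
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
$ sudo systemctl restart openvswitch-switch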

Bind the DPDK ports:

$ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0 ofport_request=1
$ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:01:00.1 ofport_request=2
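
If either PCI device fails to attach (wrong driver bound, no free hugepages), OVS records the reason in the interface's error column; these are standard ovs-vsctl queries to confirm both ports came up cleanly:

$ ovs-vsctl show
$ ovs-vsctl get Interface dpdk0 error
$ ovs-vsctl get Interface dpdk1 error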

Create the dpdkvhostuser ports:

$ ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser ofport_request=3
$ ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser ofport_request=4
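
OVS creates the vhost-user server sockets in its run directory; these are the same paths passed to testpmd later, so listing them is a quick sanity check:

$ ls -l /var/run/openvswitch/dpdkvhostuser0 /var/run/openvswitch/dpdkvhostuser1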

Define the flows:

# clear all existing flows
$ ovs-ofctl del-flows br0

Add the new flows:

$ ovs-ofctl add-flow br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:4
$ ovs-ofctl add-flow br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:3

Dump the flows:

$ ovs-ofctl dump-flows br0
 cookie=0x0, duration=851.504s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser0 actions=output:dpdkvhostuser1
 cookie=0x0, duration=851.500s, table=0, n_packets=0, n_bytes=0, ip,in_port=dpdkvhostuser1 actions=output:dpdkvhostuser0
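
The dump already resolves in_port and output to the vhost-user port names; to double-check that the ofport_request values (3 and 4) actually took effect, the standard OpenFlow port listing can also be used:

$ ovs-ofctl show br0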

Now I run testpmd:

$ testpmd -c 0x3 -n 4 --socket-mem 512,512 --proc-type auto --file-prefix testpmd --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan
EAL: Detected 32 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=155456, size=2176, socket=1
Configuring Port 0 (socket 0)
Port 0: DA:17:DC:5E:B0:6F
Configuring Port 1 (socket 0)
Port 1: 3A:74:CF:43:1C:85
Checking link statuses...
Done
testpmd> start tx_first 
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=1 - nb forwarding ports=2
  port 0:
  CRC stripping enabled
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX RS bit threshold=0 - TXQ flags=0xf00
  port 1:
  CRC stripping enabled
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX RS bit threshold=0 - TXQ flags=0xf00
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 128            TX-dropped: 0             TX-total: 128
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd>
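
The statistics show that each virtio_user port transmitted its initial 64-packet burst but received nothing back. To see whether those packets ever reached the bridge, the OVS-side counters can be inspected while testpmd is forwarding; n_packets on the flows and the per-port rx/tx counters are the values of interest:

$ ovs-ofctl dump-ports br0
$ ovs-ofctl dump-flows br0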

Software versions:
OS: Ubuntu 18.04
Linux kernel: 4.15
OVS: 2.9
DPDK: 17.11.3

What should I do now? Where is the problem?

Recommended Answer

I finally caught the problem. It was the size of the socket memory allocation: I changed the --socket-mem value to 1024,1024 (1024 MB for each NUMA node) and generated packets with pktgen (likewise run with --socket-mem 1024,1024).
Everything works fine.
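
For reference, the only change to the testpmd invocation from the question is the larger --socket-mem value; everything else on the command line stays the same:

$ testpmd -c 0x3 -n 4 --socket-mem 1024,1024 --proc-type auto --file-prefix testpmd --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/dpdkvhostuser0 --vdev=virtio_user1,path=/var/run/openvswitch/dpdkvhostuser1 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan

If OVS itself was started with less socket memory, raising other_config:dpdk-socket-mem to a matching 1024,1024 on both NUMA nodes may be worth checking as well (an assumption, not something stated in the answer).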

