
Installing DPDK on Ubuntu 16.04

DPDK installation

DPDK (Data Plane Development Kit) is a set of software libraries for accelerating packet processing.

Get the source from git

$ git clone git://dpdk.org/dpdk

Set up environment variables

$ cd ~/dpdk
$ export RTE_SDK=`pwd`
$ export DESTDIR=`pwd`
$ export RTE_TARGET=x86_64-default-linuxapp-gcc

Since these environment variables are needed every time, you can put them into a file such as env.source and load them with source env.source:

$ cd ~/dpdk
$ more env.source
export RTE_SDK=`pwd`
export DESTDIR=`pwd`
export RTE_TARGET=x86_64-default-linuxapp-gcc
$ source env.source

Start the installation

Install using the dpdk-setup.sh script:

./usertools/dpdk-setup.sh
------------------------------------------------------------------------------
RTE_SDK exported as /root/dpdk
------------------------------------------------------------------------------
----------------------------------------------------------
Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] arm64-armv8a-linuxapp-clang
[2] arm64-armv8a-linuxapp-gcc
[3] arm64-dpaa2-linuxapp-gcc
[4] arm64-dpaa-linuxapp-gcc
[5] arm64-stingray-linuxapp-gcc
[6] arm64-thunderx-linuxapp-gcc
[7] arm64-xgene1-linuxapp-gcc
[8] arm-armv7a-linuxapp-gcc
[9] i686-native-linuxapp-gcc
[10] i686-native-linuxapp-icc
[11] ppc_64-power8-linuxapp-gcc
[12] x86_64-native-bsdapp-clang
[13] x86_64-native-bsdapp-gcc
[14] x86_64-native-linuxapp-clang
[15] x86_64-native-linuxapp-gcc
[16] x86_64-native-linuxapp-icc
[17] x86_x32-native-linuxapp-gcc

----------------------------------------------------------
Step 2: Setup linuxapp environment
----------------------------------------------------------
[18] Insert IGB UIO module
[19] Insert VFIO module
[20] Insert KNI module
[21] Setup hugepage mappings for non-NUMA systems
[22] Setup hugepage mappings for NUMA systems
[23] Display current Ethernet/Crypto device settings
[24] Bind Ethernet/Crypto device to IGB UIO module
[25] Bind Ethernet/Crypto device to VFIO module
[26] Setup VFIO permissions

----------------------------------------------------------
Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[27] Run test application ($RTE_TARGET/app/test)
[28] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)

----------------------------------------------------------
Step 4: Other tools
----------------------------------------------------------
[29] List hugepage info from /proc/meminfo

----------------------------------------------------------
Step 5: Uninstall and system cleanup
----------------------------------------------------------
[30] Unbind devices from IGB UIO or VFIO driver
[31] Remove IGB UIO module
[32] Remove VFIO module
[33] Remove KNI module
[34] Remove hugepage mappings

[35] Exit Script

Option:
  • Step1

Choose the build that matches your environment; since mine is a 64-bit Intel machine, I select **[15]**.

...

Installation in /root/dpdk/ complete
------------------------------------------------------------------------------
RTE_TARGET exported as x86_64-native-linuxapp-gcc
------------------------------------------------------------------------------

Press enter to continue ...
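The same build can also be done without the interactive menu; a minimal sketch using DPDK's legacy make system, with the target name taken from option [15] above:

```shell
# Non-interactive equivalent of Step 1:
# build and install the selected target under $RTE_SDK
cd $RTE_SDK
make install T=x86_64-native-linuxapp-gcc
```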
  • Step2
  1. Select **[18]** to load the igb_uio module
  2. Select **[21]** to set up hugepages; I entered 128 here
  3. Select **[24]** to bind a PCI NIC
Option: 21

Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory

Input the number of 2048kB hugepages
Example: to have 128MB of hugepages available in a 2MB huge page system,
enter '64' to reserve 64 * 2MB pages
Number of pages: 128
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs

Press enter to continue ...

......

Option: 24

......

Other Compress devices
======================
<none>

Enter PCI address of device to bind to IGB UIO driver: 0b:00.0

Network devices using DPDK-compatible driver
============================================
0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3,uio_pci_generic
0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3,uio_pci_generic

Confirm that the PCI NICs you want to use show drv=igb_uio; that indicates the binding succeeded.
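The binding can also be inspected and performed directly with the dpdk-devbind.py tool in usertools, outside the setup menu; the PCI address below is the one from this example:

```shell
# Show the current driver bindings for all network devices
./usertools/dpdk-devbind.py --status

# Bind the NIC to igb_uio (address from the example above;
# adjust to match your own system)
sudo ./usertools/dpdk-devbind.py --bind=igb_uio 0000:0b:00.0
```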

  • Step3 - Test
Option: 27


Enter hex bitmask of cores to execute test app on
Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Launching app
sudo: x86_64-default-linuxapp-gcc/app/test: command not found

Press enter to continue ...

...

Option: 28


Enter hex bitmask of cores to execute testpmd app on
Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Launching app
sudo: x86_64-default-linuxapp-gcc/app/testpmd: command not found

Both options 27 and 28 may fail with a "command not found" error. As the message shows, the script looks under the RTE_TARGET exported earlier (x86_64-default-linuxapp-gcc), while the Step 1 build was installed under x86_64-native-linuxapp-gcc. In that case, exit the setup script first.

Change into the dpdk/x86_64-native-linuxapp-gcc/app directory, where testpmd actually exists. Run the test; when everything is working, the output looks like this:
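The manual workaround sketched as commands (assuming the DPDK tree sits in the home directory, as in this walkthrough):

```shell
# Run testpmd from the directory where Step 1 actually installed it;
# sudo is needed for device and hugepage access
cd ~/dpdk/x86_64-native-linuxapp-gcc/app
sudo ./testpmd
```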

$ ./testpmd
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)

......

TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 57 RX-dropped: 0 RX-total: 57
TX-packets: 57 TX-dropped: 0 TX-total: 57
----------------------------------------------------------------------------

---------------------- Forward statistics for port 1 ----------------------
RX-packets: 57 RX-dropped: 0 RX-total: 57
TX-packets: 57 TX-dropped: 0 TX-total: 57
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 114 RX-dropped: 0 RX-total: 114
TX-packets: 114 TX-dropped: 0 TX-total: 114
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done

Shutting down port 1...
Stopping ports...
Done
Closing ports...
Done

Bye...

HugePage troubleshooting

  • Problem

When running the test app or testpmd, you may hit the following error:

$ ./testpmd
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
PANIC in main():
Cannot init EAL
5: [./testpmd(_start+0x29) [0x498829]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7f8a0fee0830]]
3: [./testpmd(main+0xc48) [0x48f528]]
2: [./testpmd(__rte_panic+0xbb) [0x47eb09]]
1: [./testpmd(rte_dump_stack+0x2b) [0x5c8a1b]]
Aborted (core dumped)
  • Cause and fix

This means there are not enough free hugepages. First check the system's memory status:

$ cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 665
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 537
Hugepagesize: 2048 kB

Clearly only 665 pages are reserved in total, with none free, which is not enough; the system setting needs to be changed:

$ echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
$ cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 2048
HugePages_Free: 1080
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
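As a sanity check, the total reserved hugepage memory is HugePages_Total multiplied by Hugepagesize; for the numbers above:

```shell
# 2048 pages of 2048 kB each, expressed in MB
echo $((2048 * 2048 / 1024))   # prints 4096, i.e. 4 GB reserved
```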

If the system is rebooted or DPDK is recompiled, this value is reset to the default and has to be set again.
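To make the reservation survive a reboot, it can be persisted via sysctl and, if desired, a permanent hugetlbfs mount; these are standard Linux mechanisms rather than anything DPDK-specific:

```shell
# Persist the hugepage count across reboots
echo 'vm.nr_hugepages=2048' | sudo tee -a /etc/sysctl.conf

# Optionally mount hugetlbfs automatically at boot
echo 'nodev /mnt/huge hugetlbfs defaults 0 0' | sudo tee -a /etc/fstab
```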