Configure the SmartNIC and run dpdk-testpmd to test the performance and features of SmartNICs with the Napatech DPDK PMD.
About this task
Procedure
-
Edit the /opt/napatech3/config/ntservice.ini file. The Napatech
driver uses host buffers to receive and transmit data. Host buffers are configured in the
ntservice.ini file. The following shows the default host buffer
configuration.
NumaNode = -1
HostBuffersRx = [4,16,-1]
HostBuffersTx = [4,16,-1]
Four 16-MB host buffers for both RX and TX are configured on the NUMA node given by the NumaNode value. NumaNode = -1 means that the driver attempts to determine the NUMA node location of the SmartNIC. The number of host buffers must be equal to or larger than the number of queues used by dpdk-testpmd. The following example sets the number of host buffers for both RX and TX to 32.
HostBuffersRx = [32,16,-1]
HostBuffersTx = [32,16,-1]
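The edit can also be scripted. The following is a minimal sketch that demonstrates the sed substitution on a local copy of the default settings; the sample file name is illustrative, and on a real system the same expressions would be applied (as root) to /opt/napatech3/config/ntservice.ini before restarting ntservice.

```shell
# Demonstrate the edit on a local sample file; on a real system, apply the
# same sed expressions to /opt/napatech3/config/ntservice.ini as root.
ini=./ntservice.sample.ini   # illustrative file name, not the real config
printf 'NumaNode = -1\nHostBuffersRx = [4,16,-1]\nHostBuffersTx = [4,16,-1]\n' > "$ini"

# Raise the number of 16-MB host buffers from 4 to 32 for both RX and TX.
sed -i -e 's/HostBuffersRx = \[4,/HostBuffersRx = [32,/' \
       -e 's/HostBuffersTx = \[4,/HostBuffersTx = [32,/' "$ini"
cat "$ini"
```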
See DN-0449 for more information about the configuration parameters in the /opt/napatech3/config/ntservice.ini file.
-
Stop and restart ntservice after making changes to the ntservice.ini file.
/opt/napatech3/bin/ntstop.sh
/opt/napatech3/bin/ntstart.sh
-
Run dpdk-testpmd in interactive mode as shown in the following
command example.
cd <package_root_directory>/build/app
./dpdk-testpmd -l 0-8 --log-level=ntacc,8 -- -i --nb-cores=8 \
  --total-num-mbufs=2048 --rxq=8 --txq=8
where <package_root_directory> is the directory of the unpacked Napatech DPDK package.
Note: The command line options in this example are:
- -l: List of cores to run on. Core 0 is used to manage the command line, and the remaining cores are used to forward packets.
- --log-level: Set the log level to the Napatech debugging mode.
- --: Separates the DPDK environment abstraction layer (EAL) options from the application options.
- --nb-cores: Number of cores for packet forwarding processes.
- --rxq: Number of RX queues per port.
- --txq: Number of TX queues per port.
- --total-num-mbufs: Number of mbufs to be allocated in the mbuf pools.
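The host buffer requirement from step 1 can be sanity-checked against these options with simple shell arithmetic. This is a sketch under the assumption that each port consumes one host buffer per queue, so the pools must cover ports × queues; the command above uses 2 ports with 8 RX and 8 TX queues each.

```shell
# Sketch: estimate the minimum host buffer counts for the testpmd run above.
# Assumes one host buffer per queue per port (2 ports, --rxq=8, --txq=8).
PORTS=2; RXQ=8; TXQ=8
echo "minimum HostBuffersRx count: $((PORTS * RXQ))"
echo "minimum HostBuffersTx count: $((PORTS * TXQ))"
```

With these numbers, the configured count of 32 from step 1 leaves headroom above the 16 queues per direction.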
An output example:
EAL: Detected 20 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probe PCI driver: net_ntacc (18f4:1c5) device: 0000:03:00.0 (socket 0)
rte_pmd_ntacc_dev_probe: Initializing net_ntacc_2.10 DPDK 21.05.0 for 0000:03:00.0 on numa 0
rte_pmd_ntacc_dev_probe: PCI ID : 0x18F4:0x01C5
rte_pmd_ntacc_dev_probe: PCI device: 0000:03:00.0
rte_pmd_init_internals: Checking: 0000:03:00.0
rte_pmd_init_internals: Found: 0000:03:00.0: Ports 2, Offset 0, Adapter 0
rte_pmd_init_internals: Port: 0 - 0000:03:00.0
Port 0 ...
DoNtpl: NTPL : assign[priority=62;Descriptor=DYN3,length=26,
  colorbits=14;Slice=EndOfFrame[-4];streamid=(0..7);
  Hash=HashWord0_3=Layer3Header[12]/32,HashWord4_7=Layer3Header[16]/32,
  HashWordP=IpProtocol,XOR=true;tag=port0]=port==0
DoNtpl: NTPL : 3
DoNtpl: NTPL : assign[priority=62;Descriptor=DYN3,length=26,
  colorbits=14;Slice=EndOfFrame[-4];streamid=(128..135);
  Hash=HashWord0_3=Layer3Header[12]/32,HashWord4_7=Layer3Header[16]/32,
  HashWordP=IpProtocol,XOR=true;tag=port1]=port==1
DoNtpl: NTPL : 4
testpmd>
Note: The applied NTPL commands and log messages are displayed in the terminal output. If any error messages are generated in the log, they must be resolved before proceeding to the next step. For example, the following error messages mean that creating filters failed because there are not enough host buffers available. You must increase the number of host buffers in the /opt/napatech3/config/ntservice.ini file. See step 1.
...
DoNtpl: NT_NTPL() failed: No available host buffer found matching the command
DoNtpl: >>> NTPL errorcode: 20002061
DoNtpl: >>> No available hostbuffers
DoNtpl: >>>
DoNtpl: >>>
_dev_flow_isolate: Failed to create default filter in flow_isolate
testpmd>
-
Start forwarding.
testpmd> start
An output example:
io packet forwarding - ports=2 - cores=8 - streams=16 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
...
-
Display statistics.
testpmd> show port stats all
An output example:
######################## NIC statistics for port 0 ########################
RX-packets: 480838351  RX-missed: 0  RX-bytes: 70676246661
RX-errors: 0
RX-nombuf: 0
TX-packets: 480827182  TX-errors: 0  TX-bytes: 70676065234

Throughput (since last show)
Rx-pps: 30084664  Rx-bps: 35377227336
Tx-pps: 30084228  Tx-bps: 35376020184
############################################################################

######################## NIC statistics for port 1 ########################
RX-packets: 465786270  RX-missed: 0  RX-bytes: 68465602800
RX-errors: 0
RX-nombuf: 0
TX-packets: 465794263  TX-errors: 0  TX-bytes: 68465205069

Throughput (since last show)
Rx-pps: 30085545  Rx-bps: 35377910112
Tx-pps: 30083861  Tx-bps: 35376366704
############################################################################
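The bps counters are easier to read as Gbit/s. A small awk one-liner does the conversion; this is a sketch, with the rx_bps value copied from the port 0 example above.

```shell
# Convert a bps counter from "show port stats" to Gbit/s.
rx_bps=35377227336   # Rx-bps for port 0 in the example output above
awk -v b="$rx_bps" 'BEGIN { printf "Rx rate: %.2f Gbit/s\n", b / 1e9 }'
```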
- Start the monitoring and profiling tools for performance monitoring. See SmartNIC Performance Monitoring.