Use this information to configure the Napatech SPDK on the IPU for attaching NVMe
devices of the target server.
Before you begin
Make sure that you have:
About this task
Note: The following prompt indicates the part of the system on which to run the provided commands.
- soc#: The SoC on the IPU.
Procedure
1. On the IPU, load the drivers.
soc# modprobe virtio_net
soc# modprobe uio
soc# rmmod ifc_uio.ko 2>/dev/null
soc# cd /opt/ipu_workload_nvme/src/
soc# insmod ipu_mngmnt_tools/software/csc_lek-0.0.0.5/driver/kmod/ifc_uio.ko
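To confirm that the drivers are loaded, you can list the kernel modules. This optional check is not part of the original procedure:
soc# lsmod | grep -E 'virtio_net|uio|ifc_uio'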
2. Initialize the packet processor (PP) switch.
soc# export CSC_MNGMT_PATH=/opt/ipu_workload_nvme/src/ipu_mngmnt_tools/software/management
soc# /opt/ipu_workload_nvme/src/ipu_mngmnt_tools/ppfastpath.sh
The ppfastpath.sh script contains the following commands.
cd $CSC_MNGMT_PATH
./csc_mngmt --dev 15:00.0 --flush --all
./csc_mngmt --dev 15:00.0 --sync
./csc_mngmt --dev 15:00.0 --ppcfg fastpath
An output example:
PP Fastpath Configuration was SUCCESSFUL!
Note: If the following error messages are generated, reboot the IPU.
** Error: Unable to parse file '/var/csc-config.json'. Error code: 1003. Regenerating...
*** Error: Invalid Action Group Entry for Sys IF 0 Broadcast whilst trying to configure PP fastpath default config
*** Error: Invalid Action Group Entry for Sys IF 1 Broadcast whilst trying to configure PP fastpath default config
*** Error: Invalid Action Group Entry for Sys IF 0 Broadcast whilst trying to configure PP fastpath default config
*** Error: Invalid Action Group Entry for Sys IF 1 Broadcast whilst trying to configure PP fastpath default config
PP Fastpath Configuration contained at least one ERROR
After the reboot, load the drivers again as described in Step 1, and then run the ppfastpath.sh script again.
soc# reboot
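For convenience, the complete recovery sequence after the reboot, repeating the commands from Step 1 followed by the script from this step, looks like this:
soc# modprobe virtio_net
soc# modprobe uio
soc# rmmod ifc_uio.ko 2>/dev/null
soc# cd /opt/ipu_workload_nvme/src/
soc# insmod ipu_mngmnt_tools/software/csc_lek-0.0.0.5/driver/kmod/ifc_uio.ko
soc# /opt/ipu_workload_nvme/src/ipu_mngmnt_tools/ppfastpath.sh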
3. On the IPU, add destination network interfaces to the PP lookup table.
soc# cd $CSC_MNGMT_PATH
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
--dmac <target_port0_MAC> --route-dest LINE:0:0
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
--dmac <target_port1_MAC> --route-dest LINE:1:0
where:
- target_port0_MAC is the MAC address of the NIC on the target server that is connected to port 0 of the IPU.
- target_port1_MAC is the MAC address of the NIC on the target server that is connected to port 1 of the IPU.
For example:
soc# cd $CSC_MNGMT_PATH
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
--dmac 64:9d:99:ff:ed:dc --route-dest LINE:0:0
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
--dmac 64:9d:99:ff:ed:dd --route-dest LINE:1:0
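If you need to determine these MAC addresses, one way is to query the connected NIC on the target server. The target# prompt and the interface name enp1s0f0 below are placeholders for this sketch, not names defined by this procedure; the link/ether field in the output is the MAC address:
target# ip link show enp1s0f0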
4. Set the following network parameter on the IPU to avoid the ARP flux issue.
soc# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
This configuration makes the IPU respond to an ARP request only if the target IP address is a local address that is configured on the incoming interface.
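The echo command takes effect immediately but does not persist across reboots. A sketch of making the setting persistent, assuming the SoC Linux image uses the standard sysctl mechanism:
soc# sysctl -w net.ipv4.conf.all.arp_ignore=1
soc# echo 'net.ipv4.conf.all.arp_ignore = 1' >> /etc/sysctl.conf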
5. On the IPU, configure the IP addresses of the two interfaces.
soc# ip link set dev ens6f1 down
soc# ip addr add 172.168.1.1/24 dev ens6f1
soc# ip link set dev ens6f1 up
soc# ip link set dev ens6f2 down
soc# ip addr add 172.168.2.1/24 dev ens6f2
soc# ip link set dev ens6f2 up
Note: ens6f4 must not be used.
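To verify that the addresses are applied and the interfaces are up, an optional check:
soc# ip addr show dev ens6f1
soc# ip addr show dev ens6f2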
6. On the IPU, verify the network connections between the IPU and the target server.
For example:
soc# ping 172.168.1.2
soc# ping 172.168.2.2
If the ping to the target server fails (see also the diagnostic sketch after the following list), make sure that:
- All network connections are established.
- Pluggable modules are securely inserted.
- Cables are properly connected.
- Related devices are functioning correctly without defects.
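When isolating which link is at fault, it can help to bind each ping to a specific interface. This is a diagnostic sketch, not part of the original procedure; -c limits the packet count and -I selects the source interface:
soc# ping -c 3 -I ens6f1 172.168.1.2
soc# ping -c 3 -I ens6f2 172.168.2.2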
7. On the IPU, enable the drivers.
soc# /sbin/rmmod vfio-pci
soc# /sbin/rmmod vfio_iommu_type1
soc# /sbin/rmmod vfio
soc# /sbin/modprobe vfio-pci
soc# chmod a+x /dev/vfio
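To confirm that vfio-pci is loaded and the /dev/vfio directory is accessible, an optional check:
soc# lsmod | grep vfio
soc# ls -l /dev/vfio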
8. Bind the management PCI function of the IPU to the vfio-pci driver.
The PCIe bus ID of the management PCI function on the IPU is
0000:15:00.4.
soc# cd /opt/ipu_workload_nvme/src/
soc# ipu_spdk/dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:15:00.4
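You can confirm the binding with the status listing of the same tool; the grep filter is only for brevity. Run this from /opt/ipu_workload_nvme/src/:
soc# ipu_spdk/dpdk/usertools/dpdk-devbind.py -s | grep 15:00.4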
9. On the IPU, start the bsc_tgt application. Keep the terminal open.
soc# cd /opt/ipu_workload_nvme/src/
soc# ipu_spdk/test/external_code/bsc_tgt/bsc_tgt -s 5G \
--base-virtaddr 0x2000000000 -m 0x0f
where:
- -s 5G: Sets the size of the shared memory that the application uses to 5 GB.
- --base-virtaddr 0x2000000000: Sets the base virtual address at which the application allocates its memory.
- -m 0x0f: Sets the CPU affinity mask for the application. The mask is a hexadecimal bitmap in which bit n selects CPU core n, so the value 0x0f (binary 1111) runs the application on cores 0, 1, 2, and 3.
An output example:
[2025-01-28 09:53:38.643685] Starting SPDK v22.01.3-pre / DPDK 21.11.2 initialization...
[2025-01-28 09:53:38.643756] [ DPDK EAL parameters: [2025-01-28 09:53:38.643764] virtio_blk_net [2025-01-28 09:53:38.643769] --no-shconf [2025-01-28 09:53:38.643774] -c 0x0f [2025-01-28 09:53:38.643779] -m 5120 [2025-01-28 09:53:38.643784] --log-level=lib.eal:6 [2025-01-28 09:53:38.643791] --log-level=lib.cryptodev:5 [2025-01-28 09:53:38.643796] --log-level=user1:6 [2025-01-28 09:53:38.643808] --base-virtaddr=0x2000000000 [2025-01-28 09:53:38.643814] --match-allocations [2025-01-28 09:53:38.643820] --file-prefix=spdk_pid1095 [2025-01-28 09:53:38.643826] ]
EAL: No available 2048 kB hugepages reported
TELEMETRY: No legacy callbacks, legacy socket not created
[2025-01-28 09:53:39.359891] app.c: 601:spdk_app_start: *NOTICE*: Total cores available: 4
[2025-01-28 09:53:39.484647] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
[2025-01-28 09:53:39.484833] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
[2025-01-28 09:53:39.484987] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
[2025-01-28 09:53:39.485048] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
[2025-01-28 09:53:39.485245] accel_engine.c: 510:spdk_accel_engine_initialize: *NOTICE*: Accel engine initialized to use software engine.