Use this information to configure the Napatech SPDK on the IPU for attaching NVMe devices of the target server.
Before you begin
- Installed a Napatech IPU in a server. See Installing a Napatech IPU.
- Programmed the IPU with an appropriate FPGA image. See Napatech Link-Storage™ Software FPGA images.
- Remote SSH access to the SoC on the IPU via the management port or the USB port. See DN-1385.
- Installed the SPDK on the target server. See Installing the SPDK on the Target Server.
- Installed Link-Storage™ Software on the IPU. See Installing the Napatech SPDK on the IPU.
- Configured the target server. See Configuring the Target Server with NVMe™ SSD Disks or Configuring the Target Server with RAM disks.
About this task
Note: The following prompts indicate which part of the system the provided commands must be run on.
- soc#: The SoC on the IPU.
- host#: The server where the IPU is installed.
- target#: The remote server with storage disks.
Procedure
-
On the IPU, load the drivers.
soc# modprobe virtio_net
soc# modprobe uio
soc# rmmod ifc_uio.ko 2>/dev/null
soc# cd /opt/ipu_workload_nvme/src/
soc# insmod ipu_mngmnt_tools/software/csc_lek-0.0.0.5/driver/kmod/ifc_uio.ko
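As a quick sanity check (not part of the original command sequence), you can confirm that the modules named above are loaded, for example:
soc# lsmod | grep -E 'virtio_net|uio|ifc_uio'
Each module listed in the output corresponds to one of the modprobe/insmod commands above.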
-
Initialize the packet processor (PP) switch.
soc# export CSC_MNGMT_PATH=/opt/ipu_workload_nvme/src/ipu_mngmnt_tools/software/management
soc# /opt/ipu_workload_nvme/src/ipu_mngmnt_tools/ppfastpath.sh
The ppfastpath.sh script contains the following commands:
cd $CSC_MNGMT_PATH
./csc_mngmt --dev 15:00.0 --flush --all
./csc_mngmt --dev 15:00.0 --sync
./csc_mngmt --dev 15:00.0 --ppcfg fastpath
An output example:
PP Fastpath Configuration was SUCCESSFUL!
Note: If the following error messages are generated, reboot the IPU.
** Error: Unable to parse file '/var/csc-config.json'. Error code: 1003. Regenerating...
*** Error: Invalid Action Group Entry for Sys IF 0 Broadcast whilst trying to configure PP fastpath default config
*** Error: Invalid Action Group Entry for Sys IF 1 Broadcast whilst trying to configure PP fastpath default config
*** Error: Invalid Action Group Entry for Sys IF 0 Broadcast whilst trying to configure PP fastpath default config
*** Error: Invalid Action Group Entry for Sys IF 1 Broadcast whilst trying to configure PP fastpath default config
PP Fastpath Configuration contained at least one ERROR
After the reboot, load the drivers again as described in Step 1 and rerun the ppfastpath.sh script.
soc# reboot
-
On the IPU, add destination network interfaces to the PP lookup table.
soc# cd $CSC_MNGMT_PATH
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
  --dmac <target_port0_MAC> --route-dest LINE:0:0
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
  --dmac <target_port1_MAC> --route-dest LINE:1:0
where:
- target_port0_MAC is the MAC address of the NIC on the target server that is connected to port 0 of the IPU.
- target_port1_MAC is the MAC address of the NIC on the target server that is connected to port 1 of the IPU.
For example:
soc# cd $CSC_MNGMT_PATH
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
  --dmac 64:9d:99:ff:ed:dc --route-dest LINE:0:0
soc# ./csc_mngmt --dev 15:00.0 --lkup --add --lkup-tag 4 \
  --dmac 64:9d:99:ff:ed:dd --route-dest LINE:1:0
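If the MAC addresses of the target NICs are not known beforehand, they can usually be read on the target server with standard Linux tooling (a generic check, not specific to this procedure), for example:
target# ip -br link show
Each interface is listed together with its link state and MAC address; use the addresses of the NICs cabled to port 0 and port 1 of the IPU.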
-
On the IPU, configure IP addresses of two interfaces.
soc# ip link set dev ens6f1 down
soc# ip a a 172.168.1.1/24 dev ens6f1
soc# ip link set dev ens6f1 up
soc# ip link set dev ens6f2 down
soc# ip a a 172.168.2.1/24 dev ens6f2
soc# ip link set dev ens6f2 up
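To confirm that the addresses have been applied (a generic check, not part of the original sequence), the interfaces can be listed in brief form:
soc# ip -br addr show
ens6f1 and ens6f2 should appear with the addresses assigned above.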
Note: ens6f4 must not be used.
-
On the IPU, verify the network connections between the IPU and the target server.
soc# ping 172.168.1.2
If the ping to the target server fails, make sure that:
- All network connections are established.
- Pluggable modules are securely inserted.
- Cables are properly connected.
- Related devices are functioning correctly without defects.
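One generic way to narrow down a failing ping is to check whether the target's MAC address was resolved at all:
soc# ip neigh show | grep 172.168.1.2
An entry in FAILED or INCOMPLETE state (or no entry at all) points to a layer-2 problem such as a wrong MAC address in the PP lookup table, a cabling issue, or the target interface being down.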
-
On the IPU, reload the vfio drivers.
soc# /sbin/rmmod vfio-pci
soc# /sbin/rmmod vfio_iommu_type1
soc# /sbin/rmmod vfio
soc# /sbin/modprobe vfio-pci
soc# chmod a+x /dev/vfio
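To confirm that the permission change on /dev/vfio took effect (a generic check), list the directory itself:
soc# ls -ld /dev/vfio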
-
Bind the management PCI function of the IPU to the vfio-pci driver.
The PCIe bus ID of the management PCI function on the IPU is 0000:15:00.4.
soc# cd /opt/ipu_workload_nvme/src/
soc# ipu_spdk/dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:15:00.4
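The binding can be verified with the status option of the same DPDK tool; device 0000:15:00.4 should be reported as using the vfio-pci driver:
soc# ipu_spdk/dpdk/usertools/dpdk-devbind.py -s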
-
On the IPU, start the bsc_tgt application. Keep the terminal open.
soc# ipu_spdk/test/external_code/bsc_tgt/bsc_tgt -s 5G \
  --base-virtaddr 0x2000000000 -m 0x0f
where:
- -s 5G: Sets the size of the shared memory used by the application to 5 Gbytes.
- --base-virtaddr 0x2000000000: Sets the base virtual address at which the application allocates memory.
- -m 0x0f: Configures the CPU affinity mask for the application. The value 0x0f (binary 1111) directs the application to run on cores 0, 1, 2, and 3.
An output example:
[2024-02-29 05:28:46.370746] Starting SPDK v22.01.3-pre git sha1 071e9fa55 / DPDK 21.11.2 initialization...
[2024-02-29 05:28:46.370982] [ DPDK EAL parameters:
[2024-02-29 05:28:46.371010] virtio_blk_net
[2024-02-29 05:28:46.371026] --no-shconf
[2024-02-29 05:28:46.371041] -c 0x0f
[2024-02-29 05:28:46.371056] -m 5120
[2024-02-29 05:28:46.371070] --log-level=lib.eal:6
[2024-02-29 05:28:46.371092] --log-level=lib.cryptodev:5
[2024-02-29 05:28:46.371109] --log-level=user1:6
[2024-02-29 05:28:46.371126] --base-virtaddr=0x2000000000
[2024-02-29 05:28:46.371143] --match-allocations
[2024-02-29 05:28:46.371159] --file-prefix=spdk_pid3801
[2024-02-29 05:28:46.371173] ]
EAL: No available 2048 kB hugepages reported
TELEMETRY: No legacy callbacks, legacy socket not created
[2024-02-29 05:28:47.433020] app.c: 601:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-02-29 05:28:47.573122] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
[2024-02-29 05:28:47.573346] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
[2024-02-29 05:28:47.573347] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
[2024-02-29 05:28:47.573237] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
[2024-02-29 05:28:47.573931] accel_engine.c: 510:spdk_accel_engine_initialize: *NOTICE*: Accel engine initialized to
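Once the application is running, it creates the SPDK RPC socket used by the following steps (the same /var/tmp/spdk.sock path referenced in the next step). From a second terminal on the SoC, you can check that the socket exists:
soc# ls -l /var/tmp/spdk.sock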
-
Open the management interface.
soc# chmod 777 /var/tmp/spdk.sock
soc# ./ipu_spdk/scripts/rpc.py bsc_open_mgm_interface "0000:15:00.0"
The physical PCIe bus ID of the IPU must be specified as shown in this command.
-
Attach NVMe controllers to the NVMe devices of the target server.
For example:
soc# for (( i=0; i<16; i++ ))
do
  n="nqn.2016-06.io.spdk:cnode"$((i))
  port=$((4420 + i))
  echo "attach controller NVMe"$i" to TCP node "$n
  ./ipu_spdk/scripts/rpc.py bdev_nvme_attach_controller -b NVMe$i -t tcp \
    -a 172.168.1.2 -s $port -f ipv4 -n $n
done
where:
- -b NVMe$i: Specifies the name of the NVMe block device to which the controller will be attached. Names NVMe0 to NVMe15 are used.
- -t tcp: Sets the transport protocol to TCP.
- -a 172.168.1.2: Specifies the IP address of the target server.
- -s $port: Specifies the port number. Port numbers 4420 to 4435 are configured on the target server.
- -f ipv4: Specifies the IP protocol version as IPv4.
- -n $n: Specifies the NQN (NVMe qualified name) of the controller. -n nqn.2016-06.io.spdk:cnode0 to -n nqn.2016-06.io.spdk:cnode15 are configured on the target server.
An output example:
attach controller NVMe0 to TCP node nqn.2016-06.io.spdk:cnode0
NVMe0n1
attach controller NVMe1 to TCP node nqn.2016-06.io.spdk:cnode1
NVMe1n1
attach controller NVMe2 to TCP node nqn.2016-06.io.spdk:cnode2
NVMe2n1
attach controller NVMe3 to TCP node nqn.2016-06.io.spdk:cnode3
NVMe3n1
attach controller NVMe4 to TCP node nqn.2016-06.io.spdk:cnode4
NVMe4n1
attach controller NVMe5 to TCP node nqn.2016-06.io.spdk:cnode5
NVMe5n1
attach controller NVMe6 to TCP node nqn.2016-06.io.spdk:cnode6
NVMe6n1
attach controller NVMe7 to TCP node nqn.2016-06.io.spdk:cnode7
NVMe7n1
attach controller NVMe8 to TCP node nqn.2016-06.io.spdk:cnode8
NVMe8n1
attach controller NVMe9 to TCP node nqn.2016-06.io.spdk:cnode9
NVMe9n1
attach controller NVMe10 to TCP node nqn.2016-06.io.spdk:cnode10
NVMe10n1
attach controller NVMe11 to TCP node nqn.2016-06.io.spdk:cnode11
NVMe11n1
attach controller NVMe12 to TCP node nqn.2016-06.io.spdk:cnode12
NVMe12n1
attach controller NVMe13 to TCP node nqn.2016-06.io.spdk:cnode13
NVMe13n1
attach controller NVMe14 to TCP node nqn.2016-06.io.spdk:cnode14
NVMe14n1
attach controller NVMe15 to TCP node nqn.2016-06.io.spdk:cnode15
NVMe15n1
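To list the block devices now known to the application, the standard SPDK RPC bdev_get_bdevs can be used; NVMe0n1 through NVMe15n1 should appear in the output:
soc# ./ipu_spdk/scripts/rpc.py bdev_get_bdevs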
-
On the IPU, verify established connections.
soc# netstat | grep ESTABLISHED
An output example:
tcp 0 0 target:4433 172.168.1.1:53920 ESTABLISHED
tcp 0 0 target:4422 172.168.1.1:59064 ESTABLISHED
tcp 0 0 target:4430 172.168.1.1:34978 ESTABLISHED
tcp 0 0 target:4434 172.168.1.1:53888 ESTABLISHED
tcp 0 0 target:4435 172.168.1.1:39038 ESTABLISHED
tcp 0 0 target:4426 172.168.1.1:48662 ESTABLISHED
tcp 0 0 target:4427 172.168.1.1:41692 ESTABLISHED
tcp 0 0 target:4420 172.168.1.1:54728 ESTABLISHED
tcp 0 0 target:4428 172.168.1.1:36676 ESTABLISHED
tcp 0 0 target:4432 172.168.1.1:51988 ESTABLISHED
tcp 0 0 target:4421 172.168.1.1:46412 ESTABLISHED
tcp 0 0 target:4431 172.168.1.1:47712 ESTABLISHED
tcp 0 0 target:4429 172.168.1.1:48362 ESTABLISHED
tcp 0 0 target:4424 172.168.1.1:46612 ESTABLISHED
tcp 0 0 target:4423 172.168.1.1:59912 ESTABLISHED
tcp 0 0 target:4425 172.168.1.1:47218 ESTABLISHED
…
16 connections are established in this example.
-
Construct a virtio block device.
soc# ./ipu_spdk/scripts/rpc.py virtio_blk_net_construct bndev0 0000:15:00.4
This constructs a virtio block device with the name bndev0 using PCI bus ID 0000:15:00.4.
-
On the IPU, map NVMe devices to a virtio block device.
soc# for (( i=0; i<16; i++ ))
do
  d="NVMe${i}n1"
  echo "map "$d" to bndev0"
  ./ipu_spdk/scripts/rpc.py virtio_blk_net_map_bdev bndev0 $((15-i)) $d
done
where:
- bndev0: Specifies the virtio block device to which the NVMe block devices are mapped.
- $((15-i)): Specifies the queue index.
- $d: Specifies the name of the block device to be mapped. In this case, the names NVMe0n1 to NVMe15n1 are used.
An output example:
map NVMe0n1 to bndev0
map NVMe1n1 to bndev0
map NVMe2n1 to bndev0
map NVMe3n1 to bndev0
map NVMe4n1 to bndev0
map NVMe5n1 to bndev0
map NVMe6n1 to bndev0
map NVMe7n1 to bndev0
map NVMe8n1 to bndev0
map NVMe9n1 to bndev0
map NVMe10n1 to bndev0
map NVMe11n1 to bndev0
map NVMe12n1 to bndev0
map NVMe13n1 to bndev0
map NVMe14n1 to bndev0
map NVMe15n1 to bndev0
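Depending on how the rest of the setup is completed on the host side, the mapped storage is expected to surface as virtio block devices on the server where the IPU is installed (exact device names depend on the host configuration). A generic way to list the block devices seen by the host is:
host# lsblk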