Use this information to configure the SPDK on the target server for sharing NVMe™ devices using NVMe™/TCP.
Before you begin
- Programmed the IPU with an appropriate FPGA image, see Napatech Link-Storage™ Software FPGA images.
- Installed the SPDK on the target server, see Installing the SPDK on the Target Server.
- Installed Link-Storage™ Software on the IPU, see Installing the Napatech SPDK on the IPU.
About this task
Note: The following prompts indicate which part of the system the commands are run on.
- soc#: The SoC on the IPU.
- host#: The server where the IPU is installed.
- target#: The remote server with storage disks.
Procedure
-
On the target server, assign IP addresses.
For example:
target# ip link set down dev enp13s0f0
target# ip addr add 172.168.1.2/24 dev enp13s0f0
target# ip link set up dev enp13s0f0
target# ip link set down dev enp13s0f1
target# ip addr add 172.168.2.2/24 dev enp13s0f1
target# ip link set up dev enp13s0f1
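Optionally, verify that the addresses are assigned and the links are up before continuing. This is a quick sanity check; the interface names follow the example above and may differ on your system.
target# ip -4 addr show dev enp13s0f0
target# ip -4 addr show dev enp13s0f1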
-
Detect NVMe™ SSD disks.
For example:
target# lspci | grep "Non-Volatile memory controller"
An output example:
01:00.0 Non-Volatile memory controller: Marvell Technology Group Ltd. 88NR2241 Non-Volatile memory controller (rev 20)
4a:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
4b:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
4c:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
4d:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
61:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
62:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
63:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
64:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
ca:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
cb:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
cc:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
cd:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
e1:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
e2:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
e3:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
e4:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0011 (rev 01)
In this example, 17 NVMe™ SSD disks are detected. The disks from PCI bus ID 4a:00.0 onward are NVMe™ SSD disks that can be shared across a network. The NVMe™ disk at 01:00.0 is part of the system and must not be shared.
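If you want to map a PCI bus ID to its kernel NVMe™ controller while the disks are still attached to the kernel driver, the sysfs address attribute can be used. This is an optional check and assumes the standard Linux NVMe™ sysfs layout:
target# for c in /sys/class/nvme/nvme*; do echo "$(basename $c): $(cat $c/address)"; done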
-
Check the disk status.
target# cd spdk
target# scripts/setup.sh status
An output example:
Hugepages
node     hugesize     free /  total
node0   1048576kB        0 /      0
node0      2048kB        0 /      0
node1   1048576kB        0 /      0
node1      2048kB        0 /      0

Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
NVMe     0000:01:00.0    1b4b   2241   0       nvme             nvme0      nvme0c0n1
NVMe     0000:4a:00.0    1e0f   0011   0       nvme             nvme1      nvme1n1
NVMe     0000:4b:00.0    1e0f   0011   0       nvme             nvme2      nvme2n1
NVMe     0000:4c:00.0    1e0f   0011   0       nvme             nvme3      nvme3n1
NVMe     0000:4d:00.0    1e0f   0011   0       nvme             nvme4      nvme4n1
NVMe     0000:61:00.0    1e0f   0011   0       nvme             nvme5      nvme5n1
NVMe     0000:62:00.0    1e0f   0011   0       nvme             nvme6      nvme6n1
NVMe     0000:63:00.0    1e0f   0011   0       nvme             nvme7      nvme7n1
NVMe     0000:64:00.0    1e0f   0011   0       nvme             nvme8      nvme8n1
DSA      0000:75:01.0    8086   0b25   0       idxd             -          -
NVMe     0000:ca:00.0    1e0f   0011   1       nvme             nvme9      nvme9n1
NVMe     0000:cb:00.0    1e0f   0011   1       nvme             nvme10     nvme10n1
NVMe     0000:cc:00.0    1e0f   0011   1       nvme             nvme11     nvme11n1
NVMe     0000:cd:00.0    1e0f   0011   1       nvme             nvme12     nvme12n1
NVMe     0000:e1:00.0    1e0f   0011   1       nvme             nvme13     nvme13n1
NVMe     0000:e2:00.0    1e0f   0011   1       nvme             nvme14     nvme14n1
NVMe     0000:e3:00.0    1e0f   0011   1       nvme             nvme15     nvme15n1
NVMe     0000:e4:00.0    1e0f   0011   1       nvme             nvme16     nvme16n1
DSA      0000:f2:01.0    8086   0b25   1       idxd             -          -
In this example, 0000:4a:00.0 is the PCI bus ID of an NVMe™ SSD disk, and nvme is the driver it is currently bound to.
-
On the target server, configure the SPDK.
Specify the number of huge pages for the setup script to allocate; in this example, 10240 pages of the default 2048 kB size, about 20 GB in total.
target# export NRHUGE=10240
Specify the PCIe bus IDs of the NVMe™ SSD disks to be shared via NVMe™/TCP. For example:
target# export PCI_ALLOWED='0000:4a:00.0 0000:4b:00.0 \
0000:4c:00.0 0000:4d:00.0 0000:61:00.0 0000:62:00.0 0000:63:00.0 \
0000:64:00.0 0000:ca:00.0 0000:cb:00.0 0000:cc:00.0 0000:cd:00.0 \
0000:e1:00.0 0000:e2:00.0 0000:e3:00.0 0000:e4:00.0'
Run the setup script to allocate huge page memory and bind the storage devices to a driver.
target# scripts/setup.sh
An output example:
0000:01:00.0 (1b4b 2241): Skipping denied controller at 0000:01:00.0
lsblk: /dev/nvme0c0n1: not a block device
0000:75:01.0 (8086 0b25): Skipping denied controller at 0000:75:01.0
0000:f2:01.0 (8086 0b25): Skipping denied controller at 0000:f2:01.0
0000:e1:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:cc:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:e3:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:e4:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:4b:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:4a:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:4d:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:cd:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:ca:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:63:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:cb:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:64:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:61:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:4c:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:62:00.0 (1e0f 0011): nvme -> uio_pci_generic
0000:e2:00.0 (1e0f 0011): nvme -> uio_pci_generic
The output shows that the allowed NVMe™ devices are now bound to the uio_pci_generic (UIO) driver.
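If you need to undo this step, for example to return the disks to the kernel NVMe™ driver and start over, the setup script provides a reset option. This is optional:
target# scripts/setup.sh reset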
-
Check the status after changes are made.
target# scripts/setup.sh status
An output example:
Hugepages
node     hugesize     free /  total
node0   1048576kB        0 /      0
node0      2048kB    10028 /  10240
node1   1048576kB        0 /      0
node1      2048kB        0 /      0

Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
NVMe     0000:01:00.0    1b4b   2241   0       nvme             nvme0      nvme0c0n1
NVMe     0000:4a:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:4b:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:4c:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:4d:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:61:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:62:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:63:00.0    1e0f   0011   0       uio_pci_generic  -          -
NVMe     0000:64:00.0    1e0f   0011   0       uio_pci_generic  -          -
DSA      0000:75:01.0    8086   0b25   0       idxd             -          -
NVMe     0000:ca:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:cb:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:cc:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:cd:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:e1:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:e2:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:e3:00.0    1e0f   0011   1       uio_pci_generic  -          -
NVMe     0000:e4:00.0    1e0f   0011   1       uio_pci_generic  -          -
DSA      0000:f2:01.0    8086   0b25   1       idxd             -          -
Devices with the uio_pci_generic driver can be shared via NVMe™/TCP.
-
Start the nvmf_tgt application and keep this terminal open.
target# build/bin/nvmf_tgt -e all -m 0x1 -r /var/tmp/spdk.sock
where:
- -e all: Enables all tracepoint groups for tracing. This is reflected by the "Tracepoint Group Mask all specified" notice in the output below.
- -m 0x1: Specifies the core mask. In this case, it is set to 0x1, so the application runs on core 0 only.
- -r /var/tmp/spdk.sock: Specifies the location of the SPDK RPC socket. This socket is used for communicating with the running SPDK application to issue commands and retrieve information.
An output example:
[2024-02-23 12:51:20.331771] Starting SPDK v23.01.1 git sha1 186986cf1 / DPDK 22.11.1 initialization...
[2024-02-23 12:51:20.331879] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122 ]
EAL: No free 2048 kB hugepages reported on node 1
TELEMETRY: No legacy callbacks, legacy socket not created
[2024-02-23 12:51:20.386558] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-02-23 12:51:20.471888] app.c: 446:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified.
[2024-02-23 12:51:20.471926] app.c: 447:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -p 3122' to capture a snapshot of events at runtime.
[2024-02-23 12:51:20.471935] app.c: 452:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.pid3122 for offline analysis/debug.
[2024-02-23 12:51:20.471948] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
[2024-02-23 12:51:20.507267] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module initialized.
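In a second terminal on the target server, you can optionally confirm that the application responds on the RPC socket before continuing. One possible check is to query the SPDK version over RPC:
target# scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version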
-
Initialize a new NVMe™ transport using the TCP/IP protocol.
target# scripts/rpc.py nvmf_create_transport -t TCP
An output example of the nvmf_tgt application:
…
…
[2024-02-23 12:52:20.460873] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
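Optionally, list the transports known to the running target to confirm that the TCP transport has been created. For example:
target# scripts/rpc.py nvmf_get_transports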
-
Attach 16 NVMe™ SSD disks.
For example:
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t PCIe -a 0000:4a:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a 0000:4b:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe2 -t PCIe -a 0000:4c:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe3 -t PCIe -a 0000:4d:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe4 -t PCIe -a 0000:61:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe5 -t PCIe -a 0000:62:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe6 -t PCIe -a 0000:63:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe7 -t PCIe -a 0000:64:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe8 -t PCIe -a 0000:ca:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe9 -t PCIe -a 0000:cb:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe10 -t PCIe -a 0000:cc:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe11 -t PCIe -a 0000:cd:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe12 -t PCIe -a 0000:e1:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe13 -t PCIe -a 0000:e2:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe14 -t PCIe -a 0000:e3:00.0
target# scripts/rpc.py bdev_nvme_attach_controller -b NVMe15 -t PCIe -a 0000:e4:00.0
An output example:
NVMe0n1
NVMe1n1
NVMe2n1
NVMe3n1
NVMe4n1
NVMe5n1
NVMe6n1
NVMe7n1
NVMe8n1
NVMe9n1
NVMe10n1
NVMe11n1
NVMe12n1
NVMe13n1
NVMe14n1
NVMe15n1
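Optionally, confirm that all 16 NVMe™ block devices have been registered by listing the bdevs over RPC. The grep filter below only extracts the device names from the JSON output:
target# scripts/rpc.py bdev_get_bdevs | grep '"name"'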
-
Set up subsystems with NVMe™ devices.
For example:
target# for (( i=0; i<16; i++ ))
do
    n="nqn.2016-06.io.spdk:cnode"$((i))
    nvme="NVMe$((i))n1"
    port=$((4420 + i))
    ./scripts/rpc.py nvmf_create_subsystem -s SPDK00000000000000 -a -m 32 $n
    ./scripts/rpc.py nvmf_subsystem_add_ns $n $nvme
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -f Ipv4 -a 172.168.1.2 -s $port $n
done
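Optionally, verify that the subsystems, namespaces, and listeners have been created as expected. For example:
target# scripts/rpc.py nvmf_get_subsystems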
-
Check the output of the nvmf_tgt application.
An output example:
…
…
[2024-02-23 12:56:14.370576] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4420 ***
[2024-02-23 12:56:14.870553] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4421 ***
[2024-02-23 12:56:15.374547] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4422 ***
[2024-02-23 12:56:15.878562] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4423 ***
[2024-02-23 12:56:16.338563] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4424 ***
[2024-02-23 12:56:16.734568] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4425 ***
[2024-02-23 12:56:17.230553] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4426 ***
[2024-02-23 12:56:17.734565] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4427 ***
[2024-02-23 12:56:18.238546] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4428 ***
[2024-02-23 12:56:18.742568] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4429 ***
[2024-02-23 12:56:19.246556] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4430 ***
[2024-02-23 12:56:19.750581] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4431 ***
[2024-02-23 12:56:20.250569] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4432 ***
[2024-02-23 12:56:20.754593] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4433 ***
[2024-02-23 12:56:21.258589] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4434 ***
[2024-02-23 12:56:21.706625] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4435 ***
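As an optional final check on the target server, you can confirm that the TCP listeners are active, for example with the ss utility from iproute2 (the port filter below is only illustrative):
target# ss -ltn | grep -E ':44(2|3)[0-9]'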