Use this information to configure the SPDK on the target server for sharing RAM devices using NVMe™/TCP.
Before you begin
- Programmed the IPU with an appropriate FPGA image, see Napatech Link-Storage™ Software FPGA images.
- Installed the SPDK on the target server, see Installing the SPDK on the Target Server.
- Installed Link-Storage™ Software on the IPU, see Installing the Napatech SPDK on the IPU.
About this task
Note: The following prompts indicate which part of the system the provided commands must be run on.
- soc#: The SoC on the IPU.
- host#: The server where the IPU is installed.
- target#: The remote server with storage disks.
Procedure
-
On the target server, assign IP addresses to the network interfaces.
For example:
target# ip link set down dev enp13s0f0
target# ip addr add 172.168.1.2/24 dev enp13s0f0
target# ip link set up dev enp13s0f0
target# ip link set down dev enp13s0f1
target# ip addr add 172.168.2.2/24 dev enp13s0f1
target# ip link set up dev enp13s0f1
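Optionally, verify the addresses before continuing. This check is not part of the configuration itself; it uses the interface names from the example above, and the output is abbreviated:
target# ip -br addr show dev enp13s0f0
enp13s0f0        UP             172.168.1.2/24
target# ip -br addr show dev enp13s0f1
enp13s0f1        UP             172.168.2.2/24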
-
Check the disk status.
target# cd spdk
target# scripts/setup.sh status
An output example:
Hugepages
node     hugesize     free /  total
node0   1048576kB        0 /      0
node0      2048kB    10240 /  10240

Type     BDF             Vendor Device NUMA    Driver    Device  Block devices
I/OAT    0000:00:04.0    8086   6f20   unknown ioatdma   -       -
I/OAT    0000:00:04.1    8086   6f21   unknown ioatdma   -       -
I/OAT    0000:00:04.2    8086   6f22   unknown ioatdma   -       -
I/OAT    0000:00:04.3    8086   6f23   unknown ioatdma   -       -
I/OAT    0000:00:04.4    8086   6f24   unknown ioatdma   -       -
I/OAT    0000:00:04.5    8086   6f25   unknown ioatdma   -       -
I/OAT    0000:00:04.6    8086   6f26   unknown ioatdma   -       -
I/OAT    0000:00:04.7    8086   6f27   unknown ioatdma   -       -
In this example, the I/OAT DMA devices at 0000:00:04.0 through 0000:00:04.7 are bound to the ioatdma kernel driver.
-
Configure the SPDK.
Specify the number of huge pages. With the 2048 kB huge page size shown in the status output, 10240 pages correspond to 20 GB of memory.
target# export NRHUGE=10240
Run the setup script to allocate huge page memory and bind storage devices to a driver.
target# scripts/setup.sh
An output example:
0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
INFO: Requested 10240 hugepages but 10240 already allocated on node0
The output example shows that the storage devices are now bound to the vfio-pci driver.
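Optionally, you can cross-check the huge page allocation directly in /proc/meminfo. This check is optional, and the output is abbreviated:
target# grep -i hugepages /proc/meminfo
HugePages_Total:   10240
HugePages_Free:    10240
Hugepagesize:       2048 kB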
-
Check the status after changes are made.
target# scripts/setup.sh status
An output example:
Hugepages
node     hugesize     free /  total
node0   1048576kB        0 /      0
node0      2048kB    10240 /  10240

Type     BDF             Vendor Device NUMA    Driver    Device  Block devices
I/OAT    0000:00:04.0    8086   6f20   unknown vfio-pci  -       -
I/OAT    0000:00:04.1    8086   6f21   unknown vfio-pci  -       -
I/OAT    0000:00:04.2    8086   6f22   unknown vfio-pci  -       -
I/OAT    0000:00:04.3    8086   6f23   unknown vfio-pci  -       -
I/OAT    0000:00:04.4    8086   6f24   unknown vfio-pci  -       -
I/OAT    0000:00:04.5    8086   6f25   unknown vfio-pci  -       -
I/OAT    0000:00:04.6    8086   6f26   unknown vfio-pci  -       -
I/OAT    0000:00:04.7    8086   6f27   unknown vfio-pci  -       -
Devices bound to the vfio-pci driver can be shared via NVMe™/TCP.
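To double-check an individual device, lspci reports the driver currently in use. This check is optional and uses one of the PCI addresses from the example output; the output is abbreviated:
target# lspci -nnk -s 0000:00:04.0 | grep -i driver
	Kernel driver in use: vfio-pci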
-
Start the nvmf_tgt application and keep this terminal open.
target# build/bin/nvmf_tgt -e all -m 0x1 -r /var/tmp/spdk.sock
where:
- -e all: Enables all tracepoint groups (the Tracepoint Group Mask in the output below).
- -m 0x1: Specifies the core mask. In this case, it is set to 0x1, indicating that the application runs on core 0.
- -r /var/tmp/spdk.sock: Specifies the location of the SPDK RPC socket. This socket is used for communicating with the SPDK application to issue commands and retrieve information.
An output example:
[2024-02-27 12:44:01.668115] Starting SPDK v24.05-pre git sha1 9c174d820 / DPDK 23.11.0 initialization...
[2024-02-27 12:44:01.668205] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5346 ]
[2024-02-27 12:44:01.773636] app.c: 815:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-02-27 12:44:01.806856] app.c: 519:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified.
[2024-02-27 12:44:01.806907] app.c: 523:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -p 5346' to capture a snapshot of events at runtime.
[2024-02-27 12:44:01.806920] app.c: 525:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.pid5346 for offline analysis/debug.
[2024-02-27 12:44:01.806936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
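Because nvmf_tgt keeps running in the foreground, run the remaining commands from a second terminal on the target server. Optionally, confirm that the RPC socket responds by listing the available RPC methods; rpc.py uses /var/tmp/spdk.sock by default:
target# scripts/rpc.py rpc_get_methods | head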
-
Initialize a new NVMe™ transport using the TCP/IP protocol.
target# scripts/rpc.py nvmf_create_transport -t TCP
An output example of the nvmf_tgt application:
…
[2024-02-27 12:44:37.078619] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
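Optionally, confirm the transport through the RPC interface; nvmf_get_transports prints the configured transports as JSON (output abbreviated):
target# scripts/rpc.py nvmf_get_transports
[
  {
    "trtype": "TCP",
    …
  }
]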
-
Set up subsystems with block devices.
The following commands set up 16 subsystems, each of which is created with a block device and is configured to listen for connections on a specified IP address and port using the TCP protocol. Adjust the number of subsystems as needed.
target# for ((i = 0; i < 16; i++)); do
    ./scripts/rpc.py nvmf_create_subsystem -s SPDK00000000000001 -a -m 32 \
        "nqn.2016-06.io.spdk:cnode$((i))"
    ./scripts/rpc.py bdev_malloc_create -b "Malloc$((i+1))" 128 512
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$((i))" \
        "Malloc$((i+1))"
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -f Ipv4 \
        -a 172.168.1.2 -s $((4420+i)) "nqn.2016-06.io.spdk:cnode$((i))"
done
The first iteration creates subsystem nqn.2016-06.io.spdk:cnode0 with block device Malloc1 and configures it to listen for connections on IP address 172.168.1.2 and port 4420 using the TCP protocol.
where:
./scripts/rpc.py nvmf_create_subsystem -s SPDK00000000000001 -a \
    -m 32 nqn.2016-06.io.spdk:cnode0
Creates a subsystem with the subsystem NQN nqn.2016-06.io.spdk:cnode0 and serial number SPDK00000000000001, allowing any host to connect (-a) with a maximum of 32 namespaces (-m 32).
./scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
Creates a block device named Malloc1 with a size of 128 MB and a block size of 512 bytes.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc1
Adds the created block device Malloc1 to the subsystem nqn.2016-06.io.spdk:cnode0 as a namespace.
./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -f Ipv4 -a 172.168.1.2 \
    -s 4420 nqn.2016-06.io.spdk:cnode0
Adds a listener to the subsystem nqn.2016-06.io.spdk:cnode0, listening on IP address 172.168.1.2 and port 4420.
An output example:
Malloc1
Malloc2
Malloc3
Malloc4
Malloc5
Malloc6
Malloc7
Malloc8
Malloc9
Malloc10
Malloc11
Malloc12
Malloc13
Malloc14
Malloc15
Malloc16
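Optionally, confirm the result through the RPC interface. bdev_get_bdevs describes a single block device, and nvmf_get_subsystems lists all subsystems with their namespaces and listeners; both print JSON:
target# scripts/rpc.py bdev_get_bdevs -b Malloc1
target# scripts/rpc.py nvmf_get_subsystems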
-
Verify the output of the nvmf_tgt application.
An output example:
…
[2024-02-29 10:33:30.189099] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4420 ***
[2024-02-29 10:33:30.732905] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4421 ***
[2024-02-29 10:33:31.277057] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4422 ***
[2024-02-29 10:33:31.825205] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4423 ***
[2024-02-29 10:33:32.366576] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4424 ***
[2024-02-29 10:33:32.906849] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4425 ***
[2024-02-29 10:33:33.446812] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4426 ***
[2024-02-29 10:33:33.987068] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4427 ***
[2024-02-29 10:33:34.522842] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4428 ***
[2024-02-29 10:33:35.058931] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4429 ***
[2024-02-29 10:33:35.599104] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4430 ***
[2024-02-29 10:33:36.134963] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4431 ***
[2024-02-29 10:33:36.680974] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4432 ***
[2024-02-29 10:33:37.217274] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4433 ***
[2024-02-29 10:33:37.757430] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4434 ***
[2024-02-29 10:33:38.297513] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 172.168.1.2 port 4435 ***
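Optionally, a machine with nvme-cli installed and IP connectivity to 172.168.1.2 can query the discovery service to confirm that the subsystems are reachable. The host# prompt is shown for illustration; any initiator with connectivity to the target works:
host# nvme discover -t tcp -a 172.168.1.2 -s 4420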