Deploying the Napatech Device-Plugin Pod

Link-Inline™ Software User Guide

Platform: Napatech SmartNIC
Content Type: User Guide
Capture Software Version: Link-Inline™ Software 3.2

Use this procedure to build and deploy container images for the Napatech device-plugin pod.

Before you begin

Make sure that you have:

About this task

Build and deploy one image for the Napatech device-plugin container and another image for the Napatech main-app container. The Napatech init container uses the same image as the Napatech main-app container. This procedure provides examples of docker commands for building container images. Alternatively, you can use your preferred tool, such as podman or nerdctl.
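For example, the docker build commands in this procedure can typically be replaced one-to-one with podman. The following sketch assumes the same package layout and image tag as used in step 4:

podman build -t k8s.io/ntdevplugin:<version> \
-f kubernetes_v<version>/ntdevplugin/Dockerfile.ntdevplugin \
kubernetes_v<version>/ntdevplugin/.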

Procedure

  1. Extract the Napatech Link-Inline™ Software package.
    cd /opt/
    tar zxf ntinl_package-<version>-linux.tar.gz
    where:
    • version is the Link-Inline™ Software version identifier.

    This is an example command:

    tar zxf ntinl_package-3.0.0-linux.tar.gz
  2. Create a symlink to the unpacked package.
    ln -s ntinl_package-<version>-linux ntinl
    where:
    • version is the Link-Inline™ Software version identifier.
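    This is an example command, assuming the same version as in the previous step:

    ln -s ntinl_package-3.0.0-linux ntinl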
  3. Change to the directory of the unpacked Napatech package.
    cd ntinl
  4. Build the docker container image for the Napatech device-plugin container.
    The following docker command example builds a container image based on the Dockerfile located at kubernetes_v<version>/ntdevplugin/Dockerfile.ntdevplugin.
    docker build -t k8s.io/ntdevplugin:<version> \
    -f kubernetes_v<version>/ntdevplugin/Dockerfile.ntdevplugin \
    kubernetes_v<version>/ntdevplugin/.
    where:
    • version is the version identifier of the corresponding package.
    • build: builds a container image from a Dockerfile.
    • -t k8s.io/ntdevplugin:<version>: the image is tagged k8s.io/ntdevplugin:<version> in the k8s.io namespace.
      Note: See the README.md file in the following directory of the package to find the correct version number of the container image.
      /opt/ntinl/kubernetes_v<version>/ntdevplugin/README.md
    • The build context for the image is set to the kubernetes_v<version>/ntdevplugin/ directory.
    For example:
    docker build -t k8s.io/ntdevplugin:1.5 \
    -f kubernetes_v22.11.1/ntdevplugin/Dockerfile.ntdevplugin \
    kubernetes_v22.11.1/ntdevplugin/.
    An output example:
    [+] Building 237.4s (17/18)
    [+] Building 237.5s (18/18) FINISHED
     => [internal] load build definition from Dockerfile.ntdevplugin                                   0.0s
     => => transferring dockerfile: 755B                                                               0.0s
     => [internal] load metadata for docker.io/library/fedora:37                                       1.6s
     => [internal] load .dockerignore                                                                  0.0s
     => => transferring context: 2B                                                                    0.0s
     …
     …
     => exporting to docker image format                                                              33.9s
     => => exporting layers                                                                           19.5s
     => => exporting manifest sha256:6d04eadf1e207bd3e4737d3776dff84da4fa4e404c62a4e962e203bb0dc71d3f  0.0s
     => => exporting config sha256:2a3998bec168d333108e2d1a05272ee98f732b0e4f85458d67ae6b9011842342    0.0s
     => => sending tarball                                                                            14.4s
    Loaded image: k8s.io/ntdevplugin:1.5
  5. Build the docker container image for the Napatech main-app container.
    The following docker command example builds a container image based on the Dockerfile located at kubernetes_v<version>/ntdevplugin/Dockerfile.ntmain.
    docker build -t k8s.io/ntmain:<version> \
    -f kubernetes_v<version>/ntdevplugin/Dockerfile.ntmain .
    where:
    • version is the version identifier of the corresponding package.
    • build: builds a container image from a Dockerfile.
    • -t k8s.io/ntmain:<version>: the image is tagged k8s.io/ntmain:<version> in the k8s.io namespace.
      Note: See the README.md file in the following directory of the package to find the correct version number of the container image.
      /opt/ntinl/kubernetes_v<version>/ntdevplugin/README.md
    • The build context for the image is set to the current directory.
    For example:
    docker build -t k8s.io/ntmain:1.1 \
    -f kubernetes_v22.11.1/ntdevplugin/Dockerfile.ntmain .
    An output example:
    [+] Building 380.3s (31/31) FINISHED
     => [internal] load build definition from Dockerfile.ntmain                                               0.0s
     => => transferring dockerfile: 1.72kB                                                                    0.0s
     => [internal] load metadata for docker.io/library/fedora:37                                              0.9s
     => [internal] load .dockerignore                                                                         0.0s
     => => transferring context: 2B                                                                           0.0s
    …
    …
     => exporting to docker image format                                                                     33.4s
     => => exporting layers                                                                                  19.4s
     => => exporting manifest sha256:bd16c0c255a4a7d6b316e51ef181794f4f71dc1ef0294dc4d50a785ef64464d2         0.0s
     => => exporting config sha256:80700f0e66091a8b53de06631cbad28a5271e7d450e14e1d5b61501e29aa7c3c           0.0s
     => => sending tarball                                                                                   14.1s
    unpacking k8s.io/ntmain:1.1 (sha256:bd16c0c255a4a7d6b316e51ef181794f4f71dc1ef0294dc4d50a785ef64464d2)...
    Loaded image: k8s.io/ntmain:1.1
  6. Run the following command to list available images.
    docker images
    An output example:
     REPOSITORY           TAG   IMAGE ID       CREATED          PLATFORM      SIZE        BLOB SIZE
     …
     k8s.io/ntdevplugin   1.5   6d04eadf1e20   20 minutes ago   linux/amd64   588.2 MiB   322.1 MiB
     k8s.io/ntmain        1.1   bd16c0c255a4   5 minutes ago    linux/amd64   593.2 MiB   320.8 MiB
    …
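    If many images are present on the node, you can optionally narrow the listing to the Napatech images, for example:
    docker images | grep k8s.io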
  7. Configure parameters in the ntdevplugin.yaml file if necessary.
    vim /opt/ntinl/kubernetes_v<version>/ntdevplugin/ntdevplugin.yaml

    For information on the ConfigMap parameters of the Napatech device-plugin pod, see ConfigMap.

    By default, automatic detection of the SmartNIC is enabled in the file.

    A nodeSelector can be added so that the pod is deployed only on Kubernetes nodes that carry a specific label, for example nodes with a Napatech SmartNIC running Link-Inline™ Software, as shown in the sketch below. See Deploying the Napatech device-plugin pod at selected Kubernetes nodes.
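    The following fragment shows how such a nodeSelector entry could look in the pod template of the DaemonSet in ntdevplugin.yaml. The label name smartnic: napatech is only an illustration; use the label that you have assigned to your nodes.

    spec:
      template:
        spec:
          nodeSelector:
            smartnic: napatech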

  8. Apply the Kubernetes manifest file ntdevplugin.yaml.
    For example:
    kubectl apply -f kubernetes_v22.11.1/ntdevplugin/ntdevplugin.yaml
    This command creates the resources and deploys the images defined in the ntdevplugin.yaml file. Before applying the ntdevplugin.yaml file, review its content to confirm that it aligns with your intended configuration.
    An output example:
    configmap/napatech-device-plugin-config created
    daemonset.apps/napatech-device-plugin created
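    You can optionally verify that the DaemonSet was created. The namespace must match the one defined in ntdevplugin.yaml; in the examples in this procedure the resources run in the kube-system namespace.
    kubectl get daemonset -n kube-system napatech-device-plugin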
  9. Check the log messages of the Napatech device-plugin pod to verify that the SmartNIC was properly detected and that both the Napatech device-plugin and main-app containers have started.
    The following command retrieves a list of pods from all namespaces in the Kubernetes cluster:
    kubectl get pods -A
    An output example:
    NAMESPACE     NAME                                     READY   STATUS      RESTARTS       AGE
    kube-system   helm-install-traefik-79rrr               0/1     Completed   1              55d
    kube-system   helm-install-traefik-crd-xtdj9           0/1     Completed   0              55d
    kube-system   svclb-traefik-5fecb96c-whwnn             2/2     Running     10 (19d ago)   55d
    kube-system   local-path-provisioner-957fdf8bc-xgrkk   1/1     Running     5 (19d ago)    55d
    kube-system   metrics-server-648b5df564-2l7lp          0/1     Running     5 (19d ago)    55d
    kube-system   coredns-77ccd57875-4kxvk                 1/1     Running     5 (19d ago)    55d
    kube-system   traefik-64f55bb67d-wlq42                 1/1     Running     5 (19d ago)    55d
    kube-system   napatech-device-plugin-pts2d             2/2     Running     0              32s
    The output shows information about the pods in the cluster, including names, statuses, restarts, and ages. This overview of pods across all namespaces is useful for troubleshooting and monitoring.

    The output shows that the Napatech device-plugin pod, napatech-device-plugin-pts2d, is running.
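    To list only the Napatech device-plugin pods, you can optionally filter the output, for example:
    kubectl get pods -A | grep napatech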

  10. Retrieve the logs from the Napatech device-plugin pod using the namespace and the name of the pod.
    For example:
    kubectl logs -n kube-system napatech-device-plugin-pts2d
    An output example:
    Defaulted container "napatech-device-plugin" out of: napatech-device-plugin, napatech-main-app, napatech-init (init)
    I0925 15:01:32.307864       1 ntdevplugin.go:667] Napatech Device Plugin version 1.5
    I0925 15:01:32.308101       1 ntdevplugin.go:669] Configuration:
    I0925 15:01:32.308121       1 ntdevplugin.go:671]     NicPci           : autodetect
    I0925 15:01:32.308169       1 ntdevplugin.go:671]     vfMacStart       : 02:11:22:33:44:00
    I0925 15:01:32.308195       1 ntdevplugin.go:671]     vfVlanStart      : 44
    I0925 15:01:32.308215       1 ntdevplugin.go:671]     vfNumberOfQueues : 1
    I0925 15:01:32.308235       1 ntdevplugin.go:671]     MainApp          : enabled
    I0925 15:01:32.522022       1 ntdevplugin.go:687] NT SmartNIC detected at PCI address: 0000:42:00.0
    I0925 15:01:32.716362       1 ntdevplugin.go:620] Found: PF:  [0000:42:00.0]
    I0925 15:01:32.716451       1 ntdevplugin.go:621]        VFs: [0000:42:00.4 0000:42:00.5 0000:42:00.6 0000:42:00.7]
    I0925 15:01:32.716477       1 ntdevplugin.go:733] ntPhysFuncManager is disabled by configuration
    I0925 15:01:32.716507       1 ntdevplugin.go:302] Creating VF device: 0000:42:00.7
    I0925 15:01:32.716520       1 ntdevplugin.go:302] Creating VF device: 0000:42:00.4
    I0925 15:01:32.716531       1 ntdevplugin.go:302] Creating VF device: 0000:42:00.5
    I0925 15:01:32.716556       1 ntdevplugin.go:302] Creating VF device: 0000:42:00.6
    I0925 15:01:32.911930       1 ntdevplugin.go:635] ntVirtFunc: DevicePluginPath /var/lib/kubelet/device-plugins/, pluginEndpoint ntVirtFunc-1695654092.sock
    I0925 15:01:32.912029       1 ntdevplugin.go:636]             NTDevPlugin start server at: /var/lib/kubelet/device-plugins/ntVirtFunc-1695654092.sock
    I0925 15:01:37.920429       1 ntdevplugin.go:341] ntVirtFuncManager GetOptions: &DevicePluginOptions{PreStartRequired:false,GetPreferredAllocationAvailable:true,}
    I0925 15:01:37.921450       1 ntdevplugin.go:761] Napatech Device Plugin registered
    I0925 15:01:37.921511       1 ntdevplugin.go:749] Socket health check started
    The output displays the logs generated by the napatech-device-plugin container, which is the default container of the pod. The logs provide insight into the behavior and activities of the containers and are useful for troubleshooting issues and monitoring applications.
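    As the first output line indicates, kubectl defaults to the napatech-device-plugin container when no container is specified. To retrieve the logs from all containers of the pod in one command, you can optionally add the --all-containers flag:
    kubectl logs -n kube-system napatech-device-plugin-pts2d --all-containers=true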
  11. Retrieve the logs from a specific container.
    kubectl logs -n kube-system napatech-device-plugin-pts2d -c napatech-init
    In the command example, the -c flag specifies the name of the container from which to retrieve the logs, in this case napatech-init. An output example:
    NT SmartNIC info:
      NIC PCI address: 0000:42:00.0
      NIC type:
      NIC driver:      vfio-pci
      VFs requested:   4
      VFs configured:  4
    NT SmartNIC 0000:42:00.0 is configured.
    The following command displays the logs generated by the napatech-main-app container.
    kubectl logs -n kube-system napatech-device-plugin-pts2d -c napatech-main-app
    An output example:
    Execution of NT main APP:
      NT_NIC =          0000:42:00.0
      NT_VF_DEV =       0000:42:00.4 0000:42:00.5 0000:42:00.6 0000:42:00.7
      NT_DPDK_PREFIX =  nt-main-0000:42:00.0
      NT_VF_NUM =       4
      NT_VF_QUEUES =    1
      VF_VLAN_START =   44
      VF_TOKEN =        711081f2-1054-46b2-94b9-b53a10c979e1
    Executing: ntmain -c 0xfffe -n 4 --file-prefix=nt-main-0000:42:00.0 --vfio-vf-token=711081f2-1054-46b2-94b9-b53a10c979e1 -a 0000:42:00.0,exception_path=1,portqueues=[4:1,5:1,6:1,7:1]  -a 0000:42:00.4,vlan=44,sep=1 -a 0000:42:00.5,vlan=45,sep=1 -a 0000:42:00.6,vlan=46,sep=1 -a 0000:42:00.7,vlan=47,sep=1 -- --queues 1
    Wait for ntmain with PID 20
    EAL: Detected CPU lcores: 32
    EAL: Detected NUMA nodes: 2
    EAL: Detected static linkage of DPDK
    EAL: Multi-process socket /var/run/dpdk/nt-main-0000:42:00.0/mp_socket
    EAL: Selected IOVA mode 'VA'
    EAL: VFIO support initialized
    EAL: Using IOMMU type 1 (Type 1)
    EAL: Probe PCI driver: net_ntnic (18f4:1c5) device: 0000:42:00.0 (socket 1)
    ETHDEV: WRN: iova mode (2) should be PA for performance reasons
    ETHDEV: INF: NT VFIO device setup 0000:42:00.0
    NTHW: INF: PCI:0000:42:00.0: FPGA 0200-9563-55-16 (C8255B3710) [644B901E]
    …
    …
    EAL: Probe PCI driver: net_nt_vf (18f4:51a) device: 0000:42:00.7 (socket 1)
    VDPA: INF: Probe NT200A02 VF : 42:00:7
    VDPA: INF: ntvf_vdpa_pci_probe: [../drivers/net/ntnic/ntnic_vf_vdpa.c:1268] 0000:42:00.7
    ETHDEV: INF: NT VFIO device setup 0000:42:00.7
    EAL: Using IOMMU type 1 (Type 1)
    VDPA: INF: ntvf_vdpa_update_datapath: unhandled state [../drivers/net/ntnic/ntnic_vf_vdpa.c:810]
    VDPA: INF: vDPA3: device 0000:42:00.7 (host_id 7), backing device 0000:42:00.7, index 5, queues 1, rep port 7, ifname /usr/local/var/run/stdvio7/stdvio7
    VHOST_CONFIG: (/usr/local/var/run/stdvio7/stdvio7) vhost-user client: socket created, fd: 103
    VHOST_CONFIG: (/usr/local/var/run/stdvio7/stdvio7) failed to connect: No such file or directory
    VHOST_CONFIG: (/usr/local/var/run/stdvio7/stdvio7) reconnecting...
    ETHDEV: INF: PCI:0000:42:00.0:intf_0: link is up
    Note: The following log messages in the output can be disregarded because the vHost-user socket files are only created when user applications run.
    VHOST_CONFIG: (/usr/local/var/run/stdvio4/stdvio4) failed to connect: Connection refused
    VHOST_CONFIG: (/usr/local/var/run/stdvio7/stdvio7) failed to connect: No such file or directory
    The ntmain application acts as a client for vHost-user sockets. If the vHost-user socket files already exist, they are deleted before new files are created.
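    If one of the containers does not start as expected, you can additionally inspect the pod events, for example:
    kubectl describe pod -n kube-system napatech-device-plugin-pts2d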