OS Preparation

Link-Inline™ Software User Guide

Platform: Napatech SmartNIC
Content Type: User Guide
Capture Software Version: Link-Inline™ Software 3.2

Enable IOMMU and huge-page memory before installing Napatech Link-Inline™ Software.

About this task

Begin the installation by enabling IOMMU and huge-page memory. Both features are enabled by modifying the GRUB configuration. They require a system restart to take effect and persist across subsequent restarts.

Procedure

  1. Enter the following command to confirm that your system provides the necessary hardware virtualization support.
    egrep '(vmx|svm)' --color=always /proc/cpuinfo | sort -u

    If this command returns nothing, the processor does not support virtualization.

    For more information on Intel processors, refer to: Does My Processor Support Intel® Virtualization Technology?
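    As a quick cross-check (a convenience suggested here, not part of the official procedure), lscpu summarizes the virtualization extension in its Virtualization field:
    lscpu | grep -i virtualization
    The output shows VT-x on Intel processors or AMD-V on AMD processors.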

  2. Confirm that VT-d is enabled in the BIOS.

    Some system manufacturers disable these extensions by default. The extensions appear in the BIOS under various names that differ between manufacturers. Consult your system manufacturer’s documentation for information on BIOS settings.

  3. Run the following commands to activate IOMMU and enable huge pages in the kernel.
    grubby --update-kernel=ALL --args=intel_iommu=on
    grubby --update-kernel=ALL --args=default_hugepagesz=1G
    grubby --update-kernel=ALL --args=hugepagesz=1G
    grubby --update-kernel=ALL --args=hugepages=32
    Change the hugepages parameter to allocate a different number of huge pages at boot. We recommend setting hugepages to 32 or more, consistent with your needs and the constraints of your system.
    Note: 16 huge pages per NUMA node are required. In a dual-socket system, make sure the required number of huge pages is assigned to the NUMA node associated with the PCIe slot where the SmartNIC is installed; a per-node allocation sketch is shown at the end of this step.
    If the vfio-pci driver is built into the kernel, enable SR-IOV in the kernel by running the following command.
    grubby --update-kernel=ALL --args=vfio_pci.enable_sriov=1
    Some older servers, or servers with old BIOS versions, may fail to activate virtual functions because the BIOS does not provide enough MMIO space for them. This issue can be resolved by adding the kernel parameter pci=realloc.
    grubby --update-kernel=ALL --args=pci=realloc
    This parameter allows the kernel to reallocate PCI bridge resources to accommodate the required resources.
    Run the following command to verify the updated kernel arguments.
    grubby --info=ALL
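    The boot-time hugepages parameter does not control which NUMA node the pages are taken from. As a minimal sketch, assuming 1 GiB huge pages and a SmartNIC attached to NUMA node 0, pages can also be assigned per node at runtime through sysfs:
    echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    Runtime allocation can fail on a busy system where memory is already fragmented, which is why boot-time allocation via the kernel command line is preferred.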
  4. Create a mount point and mount hugetlbfs.
    mkdir /mnt/huge
    mount -t hugetlbfs none /mnt/huge
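    To confirm that the mount succeeded (a quick check, not part of the official procedure), list the active hugetlbfs mounts:
    mount -t hugetlbfs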
  5. Mount huge pages automatically at boot time to avoid the need to remount after a system restart.
    Edit the file /etc/fstab and add the following entry:
    nodev /mnt/huge hugetlbfs defaults 0 0

    For further information about configuring huge pages on Linux hosts, refer to the Linux hugetlbfs guide.

    For more information on DPDK huge pages, refer to the documentation at: https://dpdk-guide.gitlab.io/dpdk-guide/setup/hugepages.html
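    If more than one huge page size is enabled on the system, the page size served by a mount point can be selected explicitly with the pagesize mount option. A sketch of the corresponding /etc/fstab entry, assuming 1 GiB pages:
    nodev /mnt/huge hugetlbfs pagesize=1G 0 0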

  6. Reboot the server.
    reboot
  7. Enter the following command to verify that the configuration has been successfully updated.
    cat /proc/cmdline

    The output of this command should include the parameters for IOMMU and huge pages.

    An output example:
    BOOT_IMAGE=…intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=32
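    As a one-line check (a convenience suggested here, not part of the official procedure), grep can confirm each expected parameter individually:
    grep -oE 'intel_iommu=on|default_hugepagesz=1G|hugepagesz=1G|hugepages=32' /proc/cmdline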
  8. Confirm that IOMMU is enabled.
    • Enter the following command for AMD-based machines:
      dmesg | grep AMD-Vi
    • Enter the following command for Intel-based machines:
      dmesg | grep -i IOMMU
    If the command for your platform produces no output, remedy the issue before moving on:
    • Verify that your hardware supports VT-d and that it has been enabled in the BIOS
    • Verify that the motherboard chipset supports IOMMU
    • Check dmesg for errors suggesting that the BIOS is broken
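    As an additional platform-neutral check (a convenience suggested here, not part of the official procedure), a populated /sys/kernel/iommu_groups directory indicates that the IOMMU is active:
    ls /sys/kernel/iommu_groups/ | wc -l
    A count greater than zero means the kernel has created IOMMU groups.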
  9. Verify the number of huge pages actually allocated by entering the following command.
    cat /proc/meminfo | grep Huge
    An output example:
    AnonHugePages:   2701312 kB
    ShmemHugePages:        0 kB
    FileHugePages:      2048 kB
    HugePages_Total:      32
    HugePages_Free:       32
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:    1048576 kB
    Hugetlb:        33554432 kB
    Note: By default, boot-time huge pages in a dual-socket system are distributed across both NUMA nodes.
    Enter the following command to display the per-node distribution of huge pages.
    cat /sys/devices/system/node/node*/meminfo | grep Huge
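    To relate the per-node counts to the SmartNIC, read the NUMA node of the PCIe device from sysfs. A minimal sketch, where 0000:65:00.0 is a hypothetical PCI address; substitute the address of your SmartNIC as reported by lspci:
    cat /sys/bus/pci/devices/0000:65:00.0/numa_node
    The command prints the NUMA node number, or -1 if the platform does not report device locality.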