Large Host Buffer Memory Configurations

Reference Documentation

Product line: Napatech SmartNIC
Category: Reference Information
The Napatech driver supports a total of 1024 GB (1 TB) of host buffer memory across all configured adapters. Linux considerations:
  • Linux kernel sysctl "vm.max_map_count": For large host buffer configurations it is necessary to adjust the kernel sysctl "vm.max_map_count" (/proc/sys/vm/max_map_count). It should be set to at least the total configured host buffer memory in MB multiplied by four. Example for a total host buffer size of 128 GB (131072 MB): 131072 * 4 = 524288, so the minimum value for "vm.max_map_count" is 524288. If your kernel is already configured with a higher value, leave it as is. If your application has special memory mapping needs of its own, add them on top of this setting.

  • Linux kernel sysctl "kernel.numa_balancing": For large host buffer configurations it may be necessary to disable the kernel sysctl "kernel.numa_balancing" (/proc/sys/kernel/numa_balancing). In some large host buffer scenarios, automatic NUMA page balancing and page migration do not work properly and impose a heavy load on the system. Symptoms seen: idle user-space threads consume close to 100% system time, and calls to sleep(), usleep() and nanosleep() become inaccurate even though their return value is zero (uninterrupted). Disabling NUMA balancing fixes this (kernel sysctl: kernel.numa_balancing=0).
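The two adjustments above can be sketched as a small shell snippet. This is a minimal sketch, not part of the Napatech tooling: the 128 GB total host buffer size is an assumed example value, and the privileged sysctl writes are shown commented out since they require root and should be persisted via /etc/sysctl.d/ on a real system.

```shell
#!/bin/sh
# Derive the minimum vm.max_map_count from the total host buffer size.
# 128 GB total host buffers is an assumed example value; adjust for your setup.
TOTAL_HB_GB=128
TOTAL_HB_MB=$((TOTAL_HB_GB * 1024))   # 131072 MB
MIN_MAP_COUNT=$((TOTAL_HB_MB * 4))    # 131072 * 4 = 524288
echo "minimum vm.max_map_count: $MIN_MAP_COUNT"

# Apply only if the current value is lower (requires root):
# current=$(sysctl -n vm.max_map_count)
# [ "$current" -lt "$MIN_MAP_COUNT" ] && sysctl -w vm.max_map_count="$MIN_MAP_COUNT"

# Show the current NUMA balancing state (reading needs no privileges):
if [ -f /proc/sys/kernel/numa_balancing ]; then
    echo "kernel.numa_balancing is currently: $(cat /proc/sys/kernel/numa_balancing)"
fi
# Disable automatic NUMA balancing (requires root):
# sysctl -w kernel.numa_balancing=0
```

Note that the script only lowers nothing and raises "vm.max_map_count" when needed, so an already-higher kernel value is left untouched, matching the guidance above.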