A pair of NT40A01-4×10/1-SLB accelerators can be bonded in a master-slave configuration so that all traffic received on the master accelerator is replicated to the slave accelerator, giving applications NUMA-local access to the traffic on both NUMA nodes of a dual CPU socket server.
The CPU Socket Load Balancing feature can be used to optimize performance on a dual CPU socket server for demanding data processing at 4×10 Gbit/s.
4×10 Gbit/s traffic received on a master card is replicated to a slave card. This allows load distribution of traffic in up to 128 streams per CPU socket (NUMA node), for a total of 256 streams in a dual CPU socket server.
Time-stamping and port statistics are handled by the master accelerator, while frame processing, filtering and stream distribution are done by each accelerator. In effect, this works as if an optical splitter had been inserted before each pair of ports, with the advantage that time-stamping of the received frames is guaranteed to be identical.
The ports on the slave accelerator are disabled and cannot be used.
The following must be configured for each accelerator in a bonded pair:

- Bonding type: master or slave.
- NUMA node locality.
- Accelerators in a pair must be consecutively numbered, with the slave number = master number + 1.
Filtering, distribution of traffic to streams, and setting the affinity of streams to NUMA nodes are done as usual using NTPL.
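As a sketch of such a setup, the NTPL fragment below distributes the master accelerator's traffic to streams served from NUMA node 0 and the replicated traffic to streams served from NUMA node 1. The stream ranges and the `Hash5TupleSorted` hash mode are illustrative choices, not requirements; the NUMA locality of each stream's host buffers follows from the per-adapter NumaNode setting in the ntservice.ini file.

```
# Hash both directions of a flow to the same stream
HashMode = Hash5TupleSorted

# Master accelerator ports 0-3: up to 128 streams, host buffers on NUMA node 0
Assign[StreamId=(0..127)] = Port == (0..3)

# Slave accelerator ports 4-7 (the replicated traffic): host buffers on NUMA node 1
Assign[StreamId=(128..255)] = Port == (4..7)
```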
Traffic received on port 0 on the master accelerator is replicated and made available as if it had been received on port 0 on the slave accelerator. Since the host enumerates ports consecutively in accelerator order, port n on the master and port n+4 on the slave carry the same traffic.
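For example, the same traffic can be tapped once on each NUMA node with a minimal NTPL fragment (the stream IDs here are arbitrary examples):

```
Assign[StreamId=0] = Port == 0   # original copy, master accelerator (NUMA node 0)
Assign[StreamId=1] = Port == 4   # replicated copy, slave accelerator (NUMA node 1)
```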
```
[Adapter0]
BondingType = Master
NumaNode = 0      # Local NUMA node for this PCIe slot

[Adapter1]
BondingType = Slave
NumaNode = 1      # Local NUMA node for this PCIe slot
```