Suricata

Installation and Use of Napatech Link™ Capture Software for Intel® PAC with Intel® Arria® 10 GX FPGA

Platform: Intel® PAC
Content Type: Quick Guide
Capture Software Version: Link™ Capture Software 12.7

Installing and configuring Suricata for use with Napatech Link™ Capture Software.

Introduction

Suricata is an open source intrusion detection and prevention engine.

Suricata can be installed for use with Napatech Link™ Capture Software in two ways:
  • For optimal performance, build Suricata from source with native Napatech support enabled.
  • Alternatively, install Suricata from binary packages and use the libpcap interface to Napatech Link™ Capture Software.
    Note: With this option, there is no native support for Napatech hardware features.

Package-based installation

Identify a suitable precompiled Suricata package for your OS distribution. See Installing and Running libpcap Applications for information about how to install and use package-based applications with Napatech libpcap.

See DN-0428 for more information about configuration of Napatech libpcap.

For Ubuntu, the OISF maintains a PPA, suricata-stable, that always contains the latest stable release:
sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata
For Red Hat Enterprise Linux 7 and CentOS 7, the EPEL repository can be used:
sudo yum install epel-release
sudo yum install suricata

See also http://suricata.readthedocs.io/en/latest/install.html#install-binary-packages.
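With a package-based installation, Suricata reads from Napatech streams through the libpcap interface using its pcap run mode. A minimal invocation sketch; the device name nt0 and the configuration path are assumptions, since the actual pcap device name depends on how Napatech libpcap is set up (see DN-0428):

```shell
# Run package-installed Suricata in pcap mode over Napatech libpcap.
# "nt0" is a hypothetical device name; the real name depends on how
# Napatech libpcap is configured (see DN-0428).
sudo suricata -c /etc/suricata/suricata.yaml --pcap=nt0 --runmode workers
```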

Dependencies

You need to install the following libraries and their development headers:
  • libpcap
  • libpcre
  • libmagic
  • zlib
  • libyaml
You need the following tools:
  • make
  • gcc
  • pkg-config
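On Debian/Ubuntu systems, the libraries and tools above map to distribution packages roughly as follows. The package names are typical for recent releases and are an assumption; names differ on RPM-based systems:

```shell
# Typical Debian/Ubuntu package names for the build dependencies listed
# above (verify against your distribution's package index).
sudo apt-get install -y make gcc pkg-config \
    libpcap-dev libpcre3-dev libmagic-dev zlib1g-dev libyaml-dev
```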

Building Suricata from source

To build and run Suricata with Napatech support:
  1. Download and extract the Suricata source: https://github.com/OISF/suricata.
  2. Run configure to enable Napatech support and prepare for compilation:
    ./configure --enable-napatech \
        --with-napatech-includes=/opt/napatech3/include \
        --with-napatech-libraries=/opt/napatech3/lib
    make
    sudo make install-full
    
  3. Edit the suricata.yaml file to configure the maximum number of streams to use:

    If you plan to use the load distribution (RSS-like) feature in the SmartNIC, the list should contain the same number of streams as host buffers defined in ntservice.ini:

    napatech:
        # The Host Buffer Allowance for all streams
        # (-1 = OFF, 1 - 100 = percentage of the host buffer that can be held back)
        hba: -1

        # When use-all-streams is set to "yes", Suricata queries the Napatech
        # service for all configured streams and listens on all of them. When
        # set to "no", the streams configuration array is used.
        use-all-streams: yes

        # The streams to listen on
        streams: [0, 1, 2, 3, 4, 5, 6, 7]
    Note: hba is useful only when a stream is shared with another application. When hba is enabled, packets are dropped (i.e. not delivered to Suricata) when the host-buffer utilization reaches the high-water mark indicated by the hba value. This ensures that, even if Suricata is slow with packet processing, the other application still receives all of the packets. If hba is enabled without another application sharing the stream, it results in sub-optimal packet buffering.
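As an illustration, if a stream is shared with another application, a hypothetical setting of hba: 50 would stop delivering packets to Suricata whenever the shared host buffer is 50 percent full, so that the other application keeps receiving all packets:

```yaml
napatech:
    # Hypothetical example: with hba set to 50, packets are no longer
    # delivered to Suricata once the shared host buffer is 50% full.
    hba: 50
    use-all-streams: yes
```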

    When use-all-streams is set to yes, Suricata uses all available streams created by the driver (see Advanced multi-threaded configuration). When it is set to no, Suricata uses only the streams listed in streams: [0, 1, 2, 3, 4, 5, 6, 7], in this case streams 0 to 7. These numbers are the stream IDs used in the NTPL commands.

  4. Configure Suricata:

    For the basic installation, set up the SmartNIC to merge all physical ports into a single stream that Suricata can read from. For this configuration, Suricata handles the packet distribution to multiple threads.

    Change these lines in /opt/napatech3/bin/ntservice.ini for best single buffer performance:

    TimeSyncReferencePriority = OSTime     # Timestamp clock synchronized to the OS
    HostBuffersRx = [1,16,-1]              # [number of host buffers, Size (MB), NUMA node]
  5. Stop and restart ntservice after making changes to ntservice.ini:
    sudo /opt/napatech3/bin/ntstop.sh
    sudo /opt/napatech3/bin/ntstart.sh
    
  6. Test your setup:

    Create a file with the following commands:

    Delete=all                              # Delete any existing filters
    Assign[priority=0; streamid=0]= all     # Assign all physical ports to stream ID 0
    

    Run the commands using the ntpl tool:

    sudo /opt/napatech3/bin/ntpl -f <my_ntpl_file>
    
  7. Start Suricata:
    sudo suricata -c /usr/local/etc/suricata/suricata.yaml --napatech --runmode workers
    

Advanced multi-threaded configuration

A more advanced configuration uses the load distribution (RSS-like) capability in the SmartNIC. Create 8 streams and set up the SmartNIC to distribute the load based on a 5-tuple hash. If your CPU cores are fully saturated, increasing the host buffer size helps minimize packet loss; if your CPU cores have spare capacity, the minimum buffer size (16 MB) gives the best performance because the buffers fit in the L3 cache.

  1. Modify the ntservice.ini file to increase the number and size of the host buffers:
    TimeSyncReferencePriority = OSTime    # Timestamp clock synchronized to the OS
    HostBuffersRx = [8,16,-1]             # [number of host buffers, Size (MB), NUMA node]
    Note: Setting the NUMA node parameter to -1 causes the driver to allocate from the same NUMA node where the SmartNIC is located.
  2. Stop and restart ntservice after making changes to ntservice.ini:
    sudo /opt/napatech3/bin/ntstop.sh
    sudo /opt/napatech3/bin/ntstart.sh
  3. Assign the streams to host buffers and configure the load distribution.

    The load distribution is set up to support both tunneled and non-tunneled traffic. Create a file that contains these NTPL commands:

    Delete=All                              # Delete any existing filters
    HashMode[priority=4]=Hash5TupleSorted
    Assign[priority=0; streamid=(0..7)]= all
    
  4. Run the NTPL commands using the ntpl tool:
    sudo /opt/napatech3/bin/ntpl -f <my_ntpl_file>
    
  5. Run Suricata:
    sudo suricata -c /usr/local/etc/suricata/suricata.yaml --napatech --runmode workers
    

Manually configuring NUMA nodes

Important: The host buffers must be defined on the NUMA node of the physical CPU socket that the SmartNIC is plugged into.
The NUMA node for the SmartNIC is entered in ntservice.ini:
TimeSyncReferencePriority = OSTime     # Timestamp clock synchronized to the OS
HostBuffersRx = [8,16,0]               # [number of host buffers, Size (MB), NUMA node]
Each stream ID must be tied to a NUMA node using the NTPL Setup command:
Delete=All                       # Delete any existing filters
Setup[numaNode=0] = streamid==0
Setup[numaNode=0] = streamid==1
Setup[numaNode=0] = streamid==2
Setup[numaNode=0] = streamid==3
Setup[numaNode=0] = streamid==4
Setup[numaNode=0] = streamid==5
Setup[numaNode=0] = streamid==6
Setup[numaNode=0] = streamid==7
HashMode[priority=4]=Hash5TupleSorted
Assign[priority=0; streamid=(0..7)]= all
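To determine which NUMA node the SmartNIC sits on (and hence the value to use for numaNode and the host buffer NUMA parameter), the PCI sysfs entry can be queried. A sketch, where the grep pattern and the PCI address 0000:af:00.0 are placeholders to be replaced with your own:

```shell
# Locate the SmartNIC's PCI address (adjust the match for your card).
lspci | grep -i napatech
# Report the NUMA node for that device; 0000:af:00.0 is a placeholder.
cat /sys/bus/pci/devices/0000:af:00.0/numa_node
```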

Host buffers versus stream IDs

The number of host buffers must match the number of stream IDs used. In this case, stream IDs 0 to 7 are used:
Assign[priority=0; streamid=(0..7)]= all
At least 8 host buffers must therefore be allocated in ntservice.ini:
HostBuffersRx = [8,16,-1]

Counters

For each stream that is processed, the following counters are output in stats.log:

  • nt<streamid>.pkts - The number of packets received by the stream.
  • nt<streamid>.bytes - The total number of bytes received by the stream.
  • nt<streamid>.drop - The number of packets that were dropped from this stream due to buffer overflow conditions.

If hba is enabled, the following counter is also provided:

  • nt<streamid>.hba_drop - The number of packets dropped because the host buffer allowance high-water mark was reached.

In addition to these counters, host buffer utilization is tracked and logged, which is also useful for debugging. Log messages are output for both host buffers and on-board buffers when utilization reaches 25, 50, and 75 percent, and corresponding messages are output when utilization decreases.
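For orientation, a stats.log excerpt for two streams might look roughly like the following. The values are purely illustrative, and the exact column layout depends on the Suricata version:

```
nt0.pkts   | Total | 12345678
nt0.bytes  | Total | 9876543210
nt0.drop   | Total | 0
nt1.pkts   | Total | 12398765
nt1.bytes  | Total | 9912345678
nt1.drop   | Total | 0
```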