Preparing Templates

Getting Started with Napatech Link-Virtualization™ Software

Platform: Napatech SmartNIC
Content Type: Getting Started Guide
Capture Software Version: Link-Virtualization™ Software 4.5

Templates are used to create and configure resources in OpenStack. Some templates are modified for a VXLAN or VLAN network as well as for optimal performance of Link-Virtualization™ Software.

About this task

The templates include configurations for overcloud deployment with a VXLAN or VLAN network as shown in the Network topology.

Two types of templates are provided.
  1. Napatech custom templates: The destination directory is /home/stack/templates/ by default. If another directory is used for the Napatech custom templates, the following templates must be updated (see the sketch after this list).
    • custom-network-configuration.yaml: Correct the directory for the Controller.yaml and ComputeOvsDpdk.yaml files.
    • firstboot.yaml: Correct the directory for the root_and_net_mappings.yaml file.
  2. Default templates from the official release which require changes:
    • Changes are made to perform overcloud deployment for a VXLAN or VLAN network with a Napatech SmartNIC.
    • The diff output files are included to identify changes compared to the default template files.
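A minimal sketch of such an update, assuming the default path appears literally in both files and that /home/stack/my-templates/ is the alternative directory chosen for the Napatech custom templates:

  sed -i 's|/home/stack/templates/|/home/stack/my-templates/|g' \
      custom-network-configuration.yaml firstboot.yaml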
Prepare the templates as described in this procedure and place the updated templates in the directories listed below. See step 6.
  1. config.yml: /usr/share/ansible/roles/tripleo_ovs_dpdk/tasks/
  2. openvswitch-dpdk-baremetal-ansible.yaml: /usr/share/openstack-tripleo-heat-templates/deployment/openvswitch/
  3. container_puppet_config.py: /usr/share/ansible/plugins/modules/
  4. glance-api-container-puppet.yaml: /usr/share/openstack-tripleo-heat-templates/deployment/glance/
  5. network-environment.yaml: /usr/share/openstack-tripleo-heat-templates/ci/environments/network/multiple-nics/
  6. network-isolation.yaml: /usr/share/openstack-tripleo-heat-templates/ci/environments/network/multiple-nics/
  7. neutron-ovs-dpdk.yaml: /usr/share/openstack-tripleo-heat-templates/environments/services/
  8. nova-libvirt-container-puppet.yaml: /usr/share/openstack-tripleo-heat-templates/deployment/nova/
  9. root_and_net_mappings.yaml: /home/stack/templates/
  10. scheduler_hints_env.yaml: /home/stack/templates/
  11. neutron-ovs.yaml: /usr/share/openstack-tripleo-heat-templates/environments/services/
  12. podman-baremetal-ansible.yaml: /usr/share/openstack-tripleo-heat-templates/deployment/podman/
  13. ComputeOvsDpdk.yaml: /home/stack/templates/
  14. custom-network-configuration.yaml: /home/stack/templates/
  15. firstboot.yaml: /home/stack/templates/
  16. nodes.json: /home/stack/templates/
  17. Controller.yaml: /home/stack/templates/
  18. containers-prepare-parameters.yaml: /home/stack/templates/
  19. network_data.yaml: /home/stack/templates/
  20. roles_data.yaml: /home/stack/templates/
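As an optional sanity check after step 6, a short shell loop like the one below can confirm that the templates were placed as listed. Only a few representative paths are shown, and they assume the default locations above.

  # Report any template that is missing from its target location.
  for f in \
      /usr/share/ansible/roles/tripleo_ovs_dpdk/tasks/config.yml \
      /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml \
      /home/stack/templates/ComputeOvsDpdk.yaml \
      /home/stack/templates/roles_data.yaml; do
      [ -f "$f" ] || echo "Missing: $f"
  done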

Procedure

  1. Update the environment related templates.
    The files mentioned in this step are placed in the following directories. Separate files are provided for VXLAN and VLAN; use the files that suit the target network.
    tripleo/templates/custom_templates/vxlan/
    tripleo/templates/custom_templates/vlan/
    1. nodes.json: Defines the controller and compute nodes.
      Update the following parameters.
      • "address": The MAC address of the network interface which is reserved for the provisioning network on each node.
      • "pm_type": The IPMI driver. idrac is used by default.
      • "pm_user" and "pm_password": The user name and password of the IPMI.
      The other parameters do not have to be updated; cpu, memory, disk and arch are already set according to the minimum system requirements.
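      A minimal sketch of a single node entry is shown below; every value is a placeholder, and the exact structure and any additional fields (for example the IPMI address) come from the provided nodes.json.
      {
        "nodes": [
          {
            "name": "control",
            "ports": [{ "address": "aa:bb:cc:dd:ee:ff" }],
            "pm_type": "idrac",
            "pm_user": "root",
            "pm_password": "password",
            "cpu": "4",
            "memory": "16384",
            "disk": "100",
            "arch": "x86_64"
          }
        ]
      }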
    2. roles_data.yaml: Defines each role.
      Set CountDefault to the number of compute nodes.
      ########################################################
      # Role: ComputeOvsDpdk                                 #
      ########################################################
      ...
        CountDefault: 2 # Update number of Compute nodes
      ...
      See the OpenStack documentation at https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/custom_roles.html for more information about roles_data.yaml.
  2. Update templates with the network configuration.
    The files mentioned in substeps 2.a, 2.b, 2.c, 2.d and 2.e are placed in the following directories. Separate files are provided for a VXLAN and a VLAN network; use the files that suit the target network.
    tripleo/templates/custom_templates/vxlan/
    tripleo/templates/custom_templates/vlan/
    1. firstboot.yaml: Configures the network interface mapping and the password for the compute and controller nodes.
      See an example for the VLAN setup.
      parameter_defaults:
        NetConfigDataLookup:
          control:
            dmiString: "system-product-uuid"
            id: "9789534d-dac4-4b63-987f-9457449e9258" # Update id ( openstack baremetal node list )
            nic1: eno1 # Control Plane network 
            nic2: eno2 # Internal API network
            nic3: ens6 # Tenant network
      ...
        NodeRootPassword: "1234"
      Update the network interface names of each node and set NodeRootPassword.
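      The id value is taken from the node registered in the undercloud, as indicated in the comment above. Assuming the undercloud credentials have been sourced and the nodes have already been registered, the UUIDs can be listed as follows.
      source ~/stackrc
      openstack baremetal node list -c UUID -c Name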
    2. Controller.yaml and ComputeOvsDpdk.yaml: Contain the network configuration for the controller node and the compute nodes.
      These files have two sections:
      • parameters: Contains the network configuration parameters.
      • resources: Contains the configuration for os-net-config, which uses the values from the parameters section, such as MTU, IP addresses and interface names.
      The ovs_user_bridge setting of os-net-config in the ComputeOvsDpdk.yaml file is different for VXLAN and VLAN. The bridge br-phy is configured for the VXLAN setup.
              - type: ovs_user_bridge
                name: br-phy
      ...
      The bridge br-ex is configured for the VLAN setup.
              - type: ovs_user_bridge
                name: br-ex
      ...
      The DPDK port dpdk0 is configured in ComputeOvsDpdk.yaml only. A configuration example for VXLAN:
                - type: ovs_dpdk_port
                  name: dpdk0
                  mtu: 9000
                  driver: vfio-pci
                  members:
                  - type: device
                    name: '0x18f4'
                    network: vxlan
      A configuration example for VLAN:
                - type: ovs_dpdk_port
                  name: dpdk0
                  mtu: 9000
                  driver: vfio-pci
                  members:
                  - type: device
                    name: '0x18f4'
                    network: vlan
      Configure the MTU size to suit your target network. The default value is 9,000 bytes. This value must be the switch MTU size minus 50 bytes of overhead. For example, if the switch MTU size is 9,216 bytes, set mtu to 9166. It is possible to change to a smaller MTU size than the initial value after overcloud deployment.
      Note: For existing deployments, it is not possible to set the MTU size to a larger value than the initial value. Overcloud must be redeployed after making changes in the configuration file if a larger value is desired.
      See MTU Configuration for more information about how to check and configure the MTU size.
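      For instance, with a 9,216-byte switch MTU, the dpdk0 entry shown above would be adjusted as follows (illustrative only).
                - type: ovs_dpdk_port
                  name: dpdk0
                  mtu: 9166 # switch MTU of 9,216 bytes minus 50 bytes of overhead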
      Use the files which are suitable for the target setup. See the Red Hat documentation at Custom network interface templates for more information about the Controller.yaml and ComputeOvsDpdk.yaml templates.
    3. custom-network-configuration.yaml: Describes the overcloud network environment such as DNS servers, default route and kernel parameters for DPDK.
      The files are prepared for the VXLAN and VLAN setup as shown in the Network topology.
      Note: The settings for VXLAN and VLAN are different. The bridge mapping is done on the port for the VXLAN setup.
      NeutronBridgeMappings: 'dpdk0:br-phy'
      NeutronNetworkVLANRanges: 'dpdk0:9:99'
      The bridge mapping is done on the network for the VLAN setup.
      NeutronBridgeMappings: 'public:br-ex'
      NeutronNetworkVLANRanges: 'public:912:915'
      Correct the VLAN range for the target network. For example:
      NeutronNetworkVLANRanges: 'public:226:230'
      In addition, the following parameters are configured in custom-network-configuration.yaml.
          IsolCpusList: "2-19,22-39"
          NovaVcpuPinSet: ['4-19,24-39']
          NovaReservedHostMemory: 4096
      where:
      • IsolCpusList: CPU cores to be isolated from other host processes. It is mandatory to provide isolated CPU cores to achieve optimal performance. Include the cores selected for OvsPmdCoreList in the neutron-ovs-dpdk.yaml file (see step 3.b) and for NovaVcpuPinSet.
      • NovaVcpuPinSet: Sets cores for CPU pinning. These physical CPU cores are used as virtual CPUs by guest instances.
        Note: Select a CPU core on the same NUMA node as the SmartNIC NUMA node location. Use the lscpu command to determine the mapping of CPU cores to NUMA nodes. How to determine the SmartNIC NUMA node location is described in step 3 of section OVS-DPDK Initialization.
      • NovaReservedHostMemory: This parameter reserves memory for third party processes which are running for the OpenStack infrastructure. The amount of memory depends on the number of guest instances. For more information about how to calculate the amount of memory, see the Red Hat documentation at NovaReservedHostMemory.
      A configuration example:
      IsolCpusList: "2,4,6,8,10,12,14,16,18,20,22,24,26,28,30"
      NovaVcpuPinSet: ['4,6,8,10,12,14,16,18,20,22,24,26,28,30']
      NovaReservedHostMemory: 4096
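      To check which CPU cores belong to which NUMA node before choosing these values, lscpu can be filtered for the NUMA lines. The output below only illustrates the format; the actual core ranges depend on the server.
      lscpu | grep -i numa
      # NUMA node(s):        2
      # NUMA node0 CPU(s):   0-19,40-59
      # NUMA node1 CPU(s):   20-39,60-79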
    4. network_data.yaml: Contains a list of networks and the configurations.
    5. root_and_net_mappings.yaml: Maps parameters to the nodes.
    6. network-environment.yaml and network-isolation.yaml: Collect templates for OpenStack networking entities. The unused entities are commented out. These templates are updated according to the roles for the setup. See the corresponding diff files.
      The files are placed in the directory as follows.
      tripleo/templates/updated_defaults_templates/
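      To review what was changed relative to the default templates before copying them, the updated and default files can also be compared directly; this is optional, and the provided diff files contain the same information. An example for network-environment.yaml:
      diff -u /usr/share/openstack-tripleo-heat-templates/ci/environments/network/multiple-nics/network-environment.yaml \
          tripleo/templates/updated_defaults_templates/network-environment.yaml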
  3. Apply the other_config settings for Napatech OVS.
    The files mentioned in this step are placed in the directory as follows.
    tripleo/templates/updated_defaults_templates/
    1. config.yml and openvswitch-dpdk-baremetal-ansible.yaml: The changes are made to enable the OVS other_config parameters for hardware offloading features of the SmartNIC. See the corresponding diff file.
    2. neutron-ovs-dpdk.yaml: Changes are made to add the OVS other_config parameters. See the corresponding diff file.
      The file contains the following Napatech OVSDB other_config settings.
      OvsDpdkSocketMemory: "4096,0"
      OvsDpdkDpdkExtra: "--iova-mode=pa --vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d"
      OvsDpdkMemoryChannels: "4"
      OvsPmdCoreList: "2"
      where:
      • OvsDpdkSocketMemory: Corresponds to dpdk-socket-mem of other_config. An allocation of 4 GB on NUMA node 0, and 0 GB on NUMA node 1 is specified.
        Note: Allocate memory on the same NUMA node as the SmartNIC NUMA node location to achieve optimal performance. For more information about how to determine the SmartNIC NUMA node location, see step 3 in the OVS-DPDK Initialization section.
      • OvsDpdkDpdkExtra: Set IOVA to the physical address mode and --vfio-vf-token to the uuid which is generated using the uuidgen command. For example:
        uuidgen
        14d63f20-8445-11ea-8900-1f9ce7d5650d
      • OvsDpdkMemoryChannels: Corresponds to dpdk-extra of other_config. 4 memory channels are specified.
      • OvsPmdCoreList: Corresponds to pmd-cpu-mask of other_config. CPU core 2 for the OVS control plane is specified.
        Note: Select a CPU core on the same NUMA node as the SmartNIC NUMA node location. Use the lscpu command to determine the mapping of CPU cores to NUMA nodes.
      Adjust these values to suit the target compute node servers.
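      After overcloud deployment, one way to confirm that these settings reached the OVS database on a compute node is to query other_config directly; dpdk-socket-mem, dpdk-extra and pmd-cpu-mask are standard OVS keys.
      # Run on a compute node after deployment.
      sudo ovs-vsctl get Open_vSwitch . other_config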
  4. Update templates for the custom containers.
    The files mentioned in the substeps 4.a, 4.b and 4.c are placed in the following directory.
    tripleo/templates/updated_defaults_templates/
    1. nova-libvirt-container-puppet.yaml: Changes are made to add/mount a directory for creating the Napatech OVS sockets in the Nova libvirt container. See the corresponding diff file.
    2. container_puppet_config.py: Contains common TripleO code. Changes are made to make it work with duplications in container lists (fixing a bug for the puppet volume list). See the corresponding diff file.
    3. podman-baremetal-ansible.yaml: Changes are made to add the local container registry hostname which will run the custom containers. See the corresponding diff file.
      Update the following line with the hostname of the director.
      default: ['myhost.ctlplane.localdomain:8787']
      For example:
      default: ['director.ctlplane.localdomain:8787']
    4. containers-prepare-parameters.yaml: Defines a list of containers and their locations inside the registry. The list includes the custom containers with the Napatech OpenStack patches. See the corresponding diff file.
      The file mentioned in this substep is placed in the following directories. Use the file that suits the target network.
      tripleo/templates/custom_templates/vxlan/
      tripleo/templates/custom_templates/vlan/
      Make changes to match the host name of the director node. You can use the following command.
      sed -i 's/myhost/<host_name>/g' containers-prepare-parameters.yaml
      For example:
      sed -i 's/myhost/director/g' containers-prepare-parameters.yaml
  5. Make OpenStack specific updates.
    The files mentioned in this step are placed in the directory as follows.
    tripleo/templates/updated_defaults_templates/
    1. glance-api-container-puppet.yaml: GlanceBackend is set to file. See the corresponding diff file.
    2. neutron-ovs.yaml: vswitch_ovs is added to the NeutronMechanismDrivers parameter.
      The NeutronTypeDrivers and NeutronNetworkType parameter settings are different for VXLAN and VLAN. The settings for VXLAN:
      NeutronTypeDrivers: 'vxlan,vlan,flat' # use this for vxlan deploy
      #NeutronTypeDrivers: 'vlan,flat' # use this for vlan deploy
      NeutronNetworkType: 'vxlan' # set network type (vxlan or vlan)
      The settings for VLAN:
      #NeutronTypeDrivers: 'vxlan,vlan,flat' # use this for vxlan deploy
      NeutronTypeDrivers: 'vlan,flat' # use this for vlan deploy
      NeutronNetworkType: 'vlan' # set network type (vxlan or vlan)
      See the corresponding diff file.
    3. scheduler_hints_env.yaml: The capabilities tag for each role is updated.
      The file mentioned in this substep is placed in one of the following directories. Use the file that suits the target network.
      tripleo/templates/custom_templates/vxlan/
      tripleo/templates/custom_templates/vlan/
      The capabilities tag was used previously in the following command for undercloud deployment.
      openstack baremetal node set control --property capabilities='node:controller'
      This is a configuration example in the scheduler_hints_env.yaml file.
      parameter_defaults:
        ControllerSchedulerHints:
          'capabilities:node': 'controller'
      
      This allows the Nova scheduler to place each role directly on the nodes tagged with the matching capability. See the OpenStack documentation for more information about the capabilities tag.
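      The provided file defines a hint for each role. Purely as an illustration of the pattern, a hypothetical entry for the ComputeOvsDpdk role could look as follows; the exact role name and tag must be taken from the supplied scheduler_hints_env.yaml and must match the capabilities property set on the compute nodes.
      parameter_defaults:
        ComputeOvsDpdkSchedulerHints:
          'capabilities:node': 'compute' # hypothetical tag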
  6. Place templates in the specified locations of the director node.
    Run the script which copies the templates in the updated_defaults_templates directory to the specified locations.
    su - stack
    cd tripleo/templates
    sudo ./copy_updated_default_templates.sh
    Create the /home/stack/templates directory.
    mkdir /home/stack/templates
    Copy the files from custom_templates/ to /home/stack/templates. For the VXLAN setup:
    cp custom_templates/vxlan/* /home/stack/templates
    For the VLAN setup:
    cp custom_templates/vlan/* /home/stack/templates
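    As an optional check, list the destination directory and compare the contents against the Napatech custom templates named at the beginning of this section.
    ls /home/stack/templates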