The Packstack scripts provide an easy way to deploy an OpenStack environment on multiple nodes.
About this task
This configuration example demonstrates VXLAN or VLAN networking in a multinode OpenStack environment with Napatech Link-Virtualization™ Software.
Before you begin
- The SmartNIC is updated with an appropriate FPGA image. See Update the FPGA image.
- Start with a clean installation of a Linux system on all nodes. Note: Packstack will download and install all the additional dependencies.
- The provided Packstack package must be placed on the director node.
- A VM base image must be prepared. After OpenStack deployment is done, place the image on the controller node.
Procedure
-
Unpack the packstack-napatech package.
cd openstack/
tar zxvf packstack-napatech-<version>.main.tar.gz
where:
- version is the Napatech Packstack package version identifier.
-
Copy the RPM packages for the Napatech tools and Napatech OVS/DPDK to the correct directory.
cd packstack/multinode/roles/prepare_all/files/rpms/Napatech
cp <package_root_directory>/software/nt-driver-vswitch-<version>.x86_64.rpm .
cp <package_root_directory>/software/nt-driver-vswitch-devel-<version>.noarch.rpm .
cp <package_root_directory>/software/nt-driver-vswitch-modules-<version>.x86_64.rpm .
cp <package_root_directory>/tools/nt-tools-vswitch-<version>.x86_64.rpm .
cp <package_root_directory>/ovs-hwaccel/ovs-hwaccel-<version>.x86_64.rpm .
cp <package_root_directory>/dpdk-hwaccel/dpdk-hwaccel-<version>.src.rpm .
where:
- package_root_directory is the path to the unpacked Napatech package.
- version is the version identifier of the corresponding package.
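The individual copy commands above can also be expressed as a loop. This is a sketch under assumptions: PKG_ROOT is a hypothetical stand-in for your real <package_root_directory>, the glob patterns assume the package layout shown above, and the loop only echoes the commands so it can be previewed safely before running.

```shell
# Hypothetical package root; replace with your real <package_root_directory>.
PKG_ROOT=/root/napatech-package
DEST=packstack/multinode/roles/prepare_all/files/rpms/Napatech
# Echo first to preview the copies; remove "echo" to perform them.
for rpm in "$PKG_ROOT"/software/*.rpm \
           "$PKG_ROOT"/tools/*.rpm \
           "$PKG_ROOT"/ovs-hwaccel/*.rpm \
           "$PKG_ROOT"/dpdk-hwaccel/*.rpm; do
  echo cp "$rpm" "$DEST/"
done
```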
-
Install required packages on the director node.
dnf install ansible-core.x86_64
ansible-galaxy collection install ansible.posix
-
Generate an SSH key pair with no passphrase on the director node.
ssh-keygen -f /root/.ssh/id_rsa
Create the key with no passphrase by pressing Enter at both prompts. An output example:
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
...
The key's randomart image is:
...
Copy the generated keys to the controller and compute nodes.
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<controller_IP>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<compute0_IP>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<compute1_IP>
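The repeated ssh-copy-id invocations can be scripted. A minimal sketch, assuming hypothetical node IPs (replace them with your controller and compute node addresses); the loop only echoes each command so it can be previewed before running.

```shell
# Hypothetical node IPs; replace with your controller and compute node addresses.
NODES="10.20.22.23 10.20.22.92 10.20.22.93"
# Echo first to preview; remove "echo" to actually distribute the key.
for node in $NODES; do
  echo ssh-copy-id -i /root/.ssh/id_rsa.pub "root@$node"
done
```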
-
Generate an SSH key pair with no passphrase on the controller node.
ssh-keygen -f /root/.ssh/id_rsa
Copy the generated key to the controller and compute nodes.
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<controller_IP>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<compute0_IP>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<compute1_IP>
-
Update the hosts file with the IP addresses of the compute nodes and the controller node.
cd <package_root_directory>/openstack/packstack/multinode
vim hosts
For example:
[computes]
10.20.22.93
10.20.22.92

[control]
10.20.22.23
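The inventory layout above can also be generated non-interactively. A sketch using the example addresses from above (not real hosts), written to a scratch file in /tmp so it is safe to run anywhere:

```shell
# Write an example inventory to a scratch file; the IPs are the example
# addresses from the text, not real hosts.
cat > /tmp/hosts.example <<'EOF'
[computes]
10.20.22.93
10.20.22.92

[control]
10.20.22.23
EOF
# Quick sanity check: count the node entries.
grep -c '^10\.20\.22\.' /tmp/hosts.example
```

Once the real hosts file is in place, running ansible -i hosts all -m ping from the director node is a quick way to confirm that every entry is reachable over SSH.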
-
Edit the variables.yaml file.
vim variables.yaml
-
Update parameters in the variables.yaml file as follows.
- The directories to be created on the controller and compute nodes for the provided
RPM packages: Change the directories if necessary. They are defined as follows by
default.
# Paths to the packstack dir on the controller and compute nodes
PS_DIR: /root/packstack_deploy/
# Path to the rpms on the controller and compute nodes
RPMS_PATH: "{{ PS_DIR }}rpms/Napatech"
# Path to the NT tools
NT_PATH: /opt/napatech3/bin
- The second network interface names of the controller and compute nodes: Change these to match the interface names on your nodes. They are defined as follows by default.
SECOND_IFACE:            # Dictionary variable name
  10.20.22.79: eno8403   # Second interface for the first compute node
  10.20.22.72: eno8403   # Second interface for the controller node
  10.20.22.81: eno2      # Second interface for the second compute node
- IP addresses of the nodes. For example:
OS_CONTROLLER_HOST: 10.20.22.65              # Controller node's IP
OS_COMPUTE_HOSTS: "10.20.22.64,10.20.22.24"  # Compute nodes' IPs
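Filling in the SECOND_IFACE dictionary requires the interface names of each node. A minimal way to list them, assuming a standard Linux sysfs layout; run this on each controller and compute node:

```shell
# List the network interface names known to the kernel on this node.
ls /sys/class/net
```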
-
Choose the network type in the variables.yaml file.
- Use the following configuration for the VLAN setup:
OS_NEUTRON_ML2_TENANT_NETWORK_TYPES: vlan
OS_NEUTRON_ML2_TYPE_DRIVERS: vlan,flat
- Use the following configuration for the VXLAN setup:
OS_NEUTRON_ML2_TENANT_NETWORK_TYPES: vxlan
OS_NEUTRON_ML2_TYPE_DRIVERS: vxlan,flat
- For the VLAN network setup, the VLAN range (tagged) is configured by default as follows:
OS_NEUTRON_ML2_VLAN_RANGES: public:1000:2000
Correct the range to match your target network configuration.
Note: If the VLAN range needs to be changed after deployment, the configuration can be changed on the controller node. Open the ml2_conf.ini file as follows on the controller node.
vim /etc/neutron/plugins/ml2/ml2_conf.ini
The ml2_conf.ini file contains the following lines:
[ml2_type_vlan]
network_vlan_ranges=public:1000:2000
After changes are made in the ml2_conf.ini file, run the following commands to restart the services on the controller node.
systemctl restart neutron-server.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart neutron-l3-agent.service
systemctl restart neutron-dhcp-agent.service
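The post-deployment VLAN range change can be scripted. A sketch under assumptions: it operates on a copy in /tmp so it is safe to run anywhere, and the 1500:1600 range is an arbitrary illustration. Point FILE at /etc/neutron/plugins/ml2/ml2_conf.ini on the controller node to make the change for real, and remember to restart the Neutron services listed above afterward.

```shell
# Work on an example copy; set FILE=/etc/neutron/plugins/ml2/ml2_conf.ini
# on the controller node to edit the live configuration.
FILE=/tmp/ml2_conf.ini.example
printf '[ml2_type_vlan]\nnetwork_vlan_ranges=public:1000:2000\n' > "$FILE"
# Change the tagged VLAN range (here to the illustrative range 1500:1600).
sed -i 's/^network_vlan_ranges=public:.*/network_vlan_ranges=public:1500:1600/' "$FILE"
grep network_vlan_ranges "$FILE"
```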
-
Configure the MTU size in the variables.yaml file.
The default value is set as follows.
MTU: 9000
This value must be the switch MTU size minus 50 bytes of overhead. For example, if the switch MTU size is 9,216 bytes, set MTU to 9166. Change this value to suit your target network. It is possible to change to a smaller MTU size than the initial value after overcloud deployment.
Note: For existing deployments, it is not possible to set the MTU size to a larger value than the initial value. If a larger value is desired, the overcloud must be redeployed after making changes in the configuration file.
See MTU Configuration for more information about how to check and configure the MTU size.
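The 50-byte overhead rule can be checked with simple shell arithmetic, using the 9,216-byte switch MTU from the example above:

```shell
# Derive the OpenStack MTU from the switch MTU (50 bytes of overhead).
SWITCH_MTU=9216
MTU=$((SWITCH_MTU - 50))
echo "$MTU"   # prints 9166
```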
-
Start deployment.
cd <package_root_directory>/openstack/packstack/multinode
ansible-playbook --verbose -i hosts main.yaml
An output example:
...
PLAY RECAP *********************************************************************
10.20.22.72 : ok=52 changed=44 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.20.22.79 : ok=57 changed=53 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
10.20.22.81 : ok=57 changed=53 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
Note: If deployment fails, it is possible to continue deployment from the failed task. This is an example output from a failed deployment.
...
TASK [prepare_all : Copy rpms folder to hosts] *********************************
An exception occurred during task execution. To see the full traceback, use -vvv.
...
After troubleshooting, run the following command to start deployment from the Copy rpms folder to hosts task. For example:
ansible-playbook --verbose -i hosts main.yaml \
  --start-at-task="Copy rpms folder to hosts"