The Packstack scripts provide an easy way to deploy an OpenStack environment on multiple nodes.
Before you begin
- Update the SmartNICs with an appropriate FPGA image. See Updating the FPGA image.
- Start with a clean installation of a Linux system on all nodes.
Note: Choose Server from the Base Environment list under Software Selection on the Installation Summary screen.
Note: Packstack downloads and installs all additional dependencies.
- Place the provided Napatech package on the director node.
About this task
This configuration example demonstrates VXLAN or VLAN tunneling in a multinode OpenStack environment with Napatech Link-Virtualization™ Software.
The installation process includes the following steps:
- Install required packages on the director node.
- Generate an SSH key pair on the director node and copy the generated key to the controller and compute nodes.
- Edit configuration files.
- Run the deployment tool.
Procedure
-
Install required packages on the director node.
dnf install ansible-core.x86_64
ansible-galaxy collection install ansible.posix
-
Generate an SSH key pair with no passphrase on the director node.
ssh-keygen -f /root/.ssh/id_rsa
Press Enter at both passphrase prompts to create a key with no passphrase. An output example:
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
...
The key's randomart image is:
...
Copy the generated key to the controller and compute nodes.
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<controller_IP>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<compute0_IP>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<compute1_IP>
-
Unpack the Packstack package on the director node.
cd <package_root_directory>/openstack/
tar zxvf packstack-<version>.tar.gz
where:
- package_root_directory is the path to the unpacked Napatech package.
- version is the version identifier of the corresponding package.
-
Copy the RPM packages for Napatech OVS/DPDK to the correct directory.
cd <package_root_directory>/openstack/packstack-release-4.5/release-train/redhat8.5
cd multinode/roles/prepare_all/files/ntoss/rpms/
cp <package_root_directory>/ovs-hwaccel/ovs-hwaccel-<version>.x86_64.rpm .
cp <package_root_directory>/dpdk-driver/dpdk-nt-<version>.src.rpm .
where:
- package_root_directory is the path to the unpacked Napatech package.
- version is the version identifier of the corresponding package.
-
Update the hosts file with the IP addresses of the controller and compute nodes.
cd <package_root_directory>/openstack/packstack-release-4.5/release-train/redhat8.5/
cd multinode
vim hosts
For example:
[computes]
10.20.22.23
10.20.22.72
[control]
10.20.22.95
-
Edit the variables.yaml file.
vim variables.yaml
-
Update parameters in the variables.yaml file as follows.
- Define the directories that will be created on the controller and compute nodes for the provided RPM packages. They are defined as follows.
#Paths to the packstack dir on the controller and compute nodes
PS_DIR: /root/packstack_deploy/
#Path to the rpms on the controller and compute nodes
RPMS_PATH: "{{ PS_DIR }}/ntoss/rpms/"
Change to the desired directories if necessary.
- Set TPASS to the password of all nodes. The password is expected to be the same for all nodes. For example:
TPASS: "1234"
- Configure the number of virtual functions and the number of queues for each virtual function. For example:
VF: 3
#3b:00.4 - 1 queue, 3b:00.5 - 2 queues, 3b:00.6 - 1 queue
PQ: "4:1,5:2,6:1"
In this example, queues are configured for three virtual functions: 1 queue for the first virtual function (4:1), 2 queues for the second virtual function (5:2), and 1 queue for the third virtual function (6:1). This is reconfigurable after deployment. For more information, see Configuring Receive-Side Scaling (RSS) in OpenStack.
- Set the interface names of the second network on the controller and compute nodes. They are defined as follows.
SECOND_IFACE:
  10.20.22.95: enp1s0f1
  10.20.22.23: ens10f1
  10.20.22.72: ens6
Correct the network interface names to match your nodes.
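As an illustration of the PQ string format described above, the following shell sketch expands a PQ value into one line per virtual function. This is not a Napatech tool, just a reading aid; the 3b:00.x PCI addresses mirror the example comment in variables.yaml.

```shell
# Expand a PQ string such as "4:1,5:2,6:1" into a readable mapping of
# PCI function suffix -> queue count (illustration only).
PQ="4:1,5:2,6:1"
for pair in $(echo "$PQ" | tr ',' ' '); do
  fn=${pair%%:*}      # PCI function number, e.g. 4 -> 3b:00.4
  queues=${pair##*:}  # number of queues for that virtual function
  echo "3b:00.$fn -> $queues queue(s)"
done
```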
- Set the IP addresses of the first network on the controller and compute nodes. For example:
OS_CONTROLLER_HOST: 10.20.22.95 # Controller node's IP
OS_COMPUTE_HOSTS: "10.20.22.23,10.20.22.72" # Compute nodes' IPs
-
Choose the network type in the variables.yaml file.
- Use the following configuration for the VLAN setup:
OS_NEUTRON_ML2_TENANT_NETWORK_TYPES: vlan
OS_NEUTRON_ML2_TYPE_DRIVERS: vlan,flat
- Use the following configuration for the VXLAN setup:
OS_NEUTRON_ML2_TENANT_NETWORK_TYPES: vxlan
OS_NEUTRON_ML2_TYPE_DRIVERS: vxlan,flat
- For the VLAN network setup, the VLAN range (tagged) is configured by default as follows:
OS_NEUTRON_ML2_VLAN_RANGES: public:1300:1307
Correct the range to match your target network configuration.
Note: After deployment, the VLAN range is reconfigurable on the controller node. Open the ml2_conf.ini file on the controller node as follows.
vim /etc/neutron/plugins/ml2/ml2_conf.ini
The ml2_conf.ini file contains the following lines:
[ml2_type_vlan]
network_vlan_ranges=public:1300:1307
After making changes in the ml2_conf.ini file, run the following commands to restart the services on the controller node.
systemctl restart neutron-server.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart neutron-l3-agent.service
systemctl restart neutron-dhcp-agent.service
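The four restarts above can also be issued from a single loop. This is only a convenience sketch of the same commands; it echoes each command so it can be dry-run anywhere, and the echo must be dropped to execute it for real on the controller node with root privileges.

```shell
# Restart the Neutron services affected by ml2_conf.ini changes.
# Echo first for a dry run; remove the echo to actually restart.
for svc in neutron-server neutron-openvswitch-agent \
           neutron-l3-agent neutron-dhcp-agent; do
  echo systemctl restart "$svc.service"
done
```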
-
Configure the MTU size in the variables.yaml file.
The default value is set as follows.
MTU: 1500
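The value follows the rule described below (switch MTU minus 50 bytes of overhead). As a quick arithmetic check, assuming a hypothetical switch MTU of 9216 bytes:

```shell
# Derive the OpenStack MTU from the switch MTU: reserve 50 bytes for
# tunneling overhead (rule from this guide; 9216 is an example value).
SWITCH_MTU=9216
MTU=$((SWITCH_MTU - 50))
echo "MTU: $MTU"   # MTU: 9166
```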
This value must be set to the switch MTU size minus 50 bytes (for overhead). For example, if the switch MTU size is 9,216 bytes, set MTU to 9166. Change this value to suit your target network. It is possible to change to a smaller MTU size than the initial value after OpenStack is deployed.
Note: In the deployed OpenStack environment, it is not possible to set the MTU size to a value larger than the initial value. If a larger value is desired, OpenStack must be redeployed after making changes in the variables.yaml file.
See MTU Configuration for more information about how to check and configure the MTU size in OpenStack.
-
Configure from where OpenStack components are downloaded.
Source files of OpenStack components can either be downloaded from the OpenStack GitHub repositories or taken from archived files on the director node. By default, the latest source files are downloaded from GitHub.
CLONE_COMPONENTS: True
If archived OpenStack components on the director node must be used, set the CLONE_COMPONENTS parameter to False as follows.
CLONE_COMPONENTS: False
OpenStack components must be placed in the correct directory. For example:
<package_root_directory>/release-train/redhat8.5/multinode/roles/prepare_all/files/OpenStack_components/
The OpenStack components must be arranged in the following directory tree.
openstack_components/
├── neutron
├── neutron-lib
├── nova
└── os-vif
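One way to stage that layout is sketched below. The mkdir structure matches the listing above, while the idea of populating each directory from the corresponding OpenStack sources (for example, from an archive or a git clone of the matching release branch) is an assumption, not a step from this guide; the sketch uses a scratch directory rather than the real target path.

```shell
# Create the expected openstack_components layout under a scratch
# directory (the real target path is shown in the guide above).
BASE=$(mktemp -d)/openstack_components
for comp in neutron neutron-lib nova os-vif; do
  mkdir -p "$BASE/$comp"
  # Populate "$BASE/$comp" with that component's sources here,
  # e.g. from an archive or a git clone (assumption, not a guide step).
done
ls "$BASE"
```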
-
Start deployment.
cd <package_root_directory>/openstack/packstack-release-4.5/release-train/redhat8.5/
cd multinode/
ansible-playbook --verbose -i ./hosts main.yaml \
--extra-vars '{"lic_username":"<username>","lic_pass":"<password>"}'
where:
- package_root_directory is the path to the unpacked Napatech package.
- username and password are the credentials of your Red Hat subscription. For more information on Red Hat subscriptions, see https://access.redhat.com/products/red-hat-subscription-management.
For example:
ansible-playbook --verbose -i ./hosts main.yaml \
--extra-vars '{"lic_username":"napatech","lic_pass":"test12345"}'
It takes about an hour to complete the OpenStack deployment. An output example after the installation is complete:
...
PLAY RECAP ********************************************************************************************************
10.20.22.95 : ok=80 changed=65 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
10.20.22.23 : ok=91 changed=78 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
10.20.22.72 : ok=92 changed=78 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
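The PLAY RECAP lines can be checked mechanically for failures. This is a convenience sketch, not part of the Packstack tooling; the sample recap text is hypothetical, with one failure injected to show the check firing.

```shell
# Count PLAY RECAP lines whose failed counter is not zero.
# Sample data (hypothetical, one injected failure for illustration):
RECAP='10.20.22.95 : ok=80 changed=65 unreachable=0 failed=0 skipped=2
10.20.22.23 : ok=91 changed=78 unreachable=0 failed=1 skipped=6'
FAILED=$(echo "$RECAP" | grep -cv 'failed=0')
echo "hosts with failures: $FAILED"
```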
Note: If deployment fails, it is possible to continue deployment from the failed task. This is an example output when deployment fails:
...
TASK [prepare_all : Copy rpms folder to hosts] *********************************
An exception occurred during task execution. To see the full traceback, use -vvv.
...
After troubleshooting, run the following command to restart deployment from the Copy rpms folder to hosts task. For example:
ansible-playbook --verbose -i ./hosts main.yaml \
--extra-vars '{"lic_username":"napatech","lic_pass":"test12345"}' \
--start-at-task="Copy rpms folder to hosts"