VIO 7.x
SSH to the primary node
Edit the virtual nova configuration file:
# viocli update nova
Add a new section header called DEFAULT and add the desired allocation ratios under it:
conf:
  nova:
    DEFAULT:
      cpu_allocation_ratio: 10
      ram_allocation_ratio: 1.5
      disk_allocation_ratio: 1.0
    neutron:
      metadata_proxy_shared_secret: .Secret:managedencryptedpasswords:data.metadata_proxy_shared_secret
    vmware:
      passthrough: true
      tenant_vdc: true
Note: This is in YAML and formatting/indentations are extremely important.
Note: Change any of these allocation ratio values to 1.0 to disallow resource overcommitment.
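For example, with cpu_allocation_ratio: 10, placement treats each physical CPU as 10 schedulable vCPUs, so a host with 16 physical CPUs can accept instances totaling up to 160 vCPUs; a ratio of 1.0 maps physical to virtual resources one-to-one, which is why 1.0 disables overcommitment.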
Save the file to trigger LCM to update the nova services.
Observe the re-creation of the nova-api-* pods using the following methods:
Review logs:
# kubectl -n kube-system logs -f -lapp=helm
Observe the nova-api* pods get deleted/created:
# pods | grep -i nova-api
openstack nova-api-metadata-###### 1/1 Running 7 95d
openstack nova-api-osapi-##### 2/2 Running 6 95d
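Note: pods is assumed here to be a shell alias on the VIO manager. If it is unavailable, an equivalent kubectl command (using the openstack namespace shown in the output above, with -w to watch the pods get deleted and re-created live) is:
# kubectl get pods -n openstack -w | grep -i nova-api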
Once both pods are Running, confirm the OpenStack deployment is in a RUNNING status:
# viocli get deployment
PUBLIC VIP PRIVATE VIP HIGH AVAILABILITY
...
OpenStack Deployment State: RUNNING
Confirm the changes to the allocation ratios:
Validate via the nova-api-osapi* pod:
# osctl exec -it nova-api-osapi-##### /bin/bash
# kubectl describe pod/nova-api-osapi-##### -n openstack
Validate the CPU, RAM, Disk allocation ratios in the nova.conf file:
# grep -i allocation_ratio /etc/nova/nova.conf
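With the values configured earlier, the output should resemble the following (exact formatting may vary):
cpu_allocation_ratio = 10
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0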
Note: If the changes aren't applied, check the status of the osdeployment pod. If the pod stays in an Init state for more than 15 seconds, it may need to be deleted manually:
# osdel po osdeployment-osdeployment1
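To check the pod's state before deleting it, a standard kubectl query across all namespaces can be used:
# kubectl get pods -A | grep -i osdeployment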
After the changes have been applied in the nova CR, repeat the same steps in the nova-compute CR:
# viocli update nova-compute
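Assuming the nova-compute CR uses the same conf layout as the nova CR, only the allocation ratio keys from the earlier example need to be added:
conf:
  nova:
    DEFAULT:
      cpu_allocation_ratio: 10
      ram_allocation_ratio: 1.5
      disk_allocation_ratio: 1.0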
Note: Changing the allocation ratios does not affect existing instances; they continue to run as-is.
The default allocation ratios are as follows.
cpu_allocation_ratio = 10
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
To disallow any resource overcommitment:
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0
disk_allocation_ratio = 1.0
Test VIO workload placement:
# openstack allocation candidate list --os-placement-api-version 1.10 --resource VCPU=# --resource MEMORY_MB=#### --resource DISK_GB=##
Example:
# openstack allocation candidate list --os-placement-api-version 1.10 --resource VCPU=1 --resource MEMORY_MB=2048 --resource DISK_GB=20
+---+----------------------------------+--------------------------------------+----------------------------------------------+
| # | allocation | resource provider | inventory used/capacity |
+---+----------------------------------+--------------------------------------+----------------------------------------------+
| 1 | VCPU=1,MEMORY_MB=2048,DISK_GB=20 | ########-####-####-####-############ | VCPU=3/40,MEMORY_MB=1536/36862,DISK_GB=3/699 |
+---+----------------------------------+--------------------------------------+----------------------------------------------+
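Reading the row above: the inventory used/capacity column reports current consumption against the effective (post-allocation-ratio) capacity for each resource class, so VCPU=3/40 means 3 vCPUs are already allocated out of 40 schedulable.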
Obtain resource providers:
# openstack resource provider list
+--------------------------------------+-------------------------------------------------+------------+
| uuid | name | generation |
+--------------------------------------+-------------------------------------------------+------------+
| ########-####-####-####-############ | domain-c20.########-####-####-####-############ | 6          |
+--------------------------------------+-------------------------------------------------+------------+
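To see what each resource class is currently consuming on this provider, the osc-placement plugin also offers a usage view (same UUID as above):
# openstack resource provider usage show ########-####-####-####-############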
View resource provider inventory:
# openstack resource provider inventory list ########-####-####-####-############
+----------------+------------------+----------+----------+-----------+----------+-------+
| resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+----------+----------+-----------+----------+-------+
| VCPU | 10.0 | 8 | 0 | 1 | 1 | 16 |
| MEMORY_MB | 1.5 | 20479 | 0 | 1 | 1 | 40958 |
| DISK_GB | 1.0 | 670 | 0 | 1 | 1 | 1199 |
+----------------+------------------+----------+----------+-----------+----------+-------+
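Note: Placement derives the schedulable capacity of each resource class as (total - reserved) * allocation_ratio, capped per request by max_unit. For the VCPU row above, that is (16 - 0) * 10.0 = 160 schedulable vCPUs, with no single instance able to request more than 8.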