CPU Pinning and Hard Partitioning

Peter Goldthorp, Dito. May 2022

Oracle defines two types of virtualized environment for licensing purposes. These are described in the Oracle Partitioning Policy document. The main distinction between the two types is whether the environment supports CPU overcommit. CPU overcommit is a mechanism that many hypervisors provide; it allows a VM to exceed its vCPU allocation for short periods by “borrowing” CPU cycles from underutilized VMs.
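
For example, an overcommitted host can have more vCPUs allocated across its running VMs than it has logical CPUs. The following is a minimal sketch (assuming shell access to the KVM host and the virsh command from libvirt) of how that can be checked:

     # Sketch: sum the vCPUs allocated to all running VMs and compare the
     # total with the host's logical CPU count. A total above the host count
     # means the hypervisor is overcommitting CPU.
     total=0
     for vm in $(virsh --readonly list --name); do
       vcpus=$(virsh --readonly dominfo "$vm" | awk '/^CPU\(s\)/ {print $2}')
       total=$((total + vcpus))
     done
     echo "Allocated vCPUs: $total  Host logical CPUs: $(nproc)"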

Oracle’s partitioning policy refers to hypervisors that support overcommit as soft partitioned and to those that do not as hard partitioned. Oracle ignores soft partitions for licensing purposes: the customer must purchase licenses for the full core count of the hypervisor host. Hard partitioned environments can instead be licensed based on the cores allocated to each VM.

OLVM can be used to configure hard partitioned VMs but does not do this by default. The procedure is described in Hard Partitioning with Oracle Linux KVM and is summarized as a worked example below.

Setup Instructions

  1. kvm-host: SSH into the KVM host.
  2. kvm-host: Run lscpu and review the Thread(s) per core, Core(s) per socket, and Socket(s) values.

     lscpu
    
     Architecture:          x86_64
     CPU op-mode(s):        32-bit, 64-bit
     Byte Order:            Little Endian
     CPU(s):                48
     On-line CPU(s) list:   0-47
     Thread(s) per core:    2
     Core(s) per socket:    12
     Socket(s):             2
     NUMA node(s):          2
     Vendor ID:             GenuineIntel
     CPU family:            6
     Model:                 79
     Model name:            Intel(R) Xeon(R) Gold 6246 CPU @3.30GHz
     ...
    

     This example shows a 24-core server (2 sockets x 12 cores per socket) with hyperthreading enabled (2 Thread(s) per core), giving a total of 48 logical CPU(s). Each core supports 2 vCPUs.
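
     If the pinned CPU ranges are meant to cover whole physical cores (both hyperthread siblings of a core kept inside the same VM), the CPU-to-core mapping can be inspected with lscpu as well. This is an optional check, not part of the Oracle procedure:

      # Optional check: print which physical core and socket each logical CPU
      # belongs to (columns: CPU,CORE,SOCKET), so a cpuset such as 0-3 can be
      # chosen to cover whole cores rather than split hyperthread siblings
      # across VMs.
      lscpu -p=CPU,CORE,SOCKET | grep -v '^#'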

  3. kvm-host: Run virsh --readonly list to list the VMs running on the KVM host.

     virsh --readonly list
     Id   Name     State
     ------------------------
     1    dns      running
     2    kube01   running
     3    kube02   running
     4    kube03   running
     5    kube04   running
     6    vis22    running
    
  4. kvm-host: For each VM, run virsh --readonly vcpuinfo {vm-name} --pretty to examine its CPU Affinity values. Example:

     virsh --readonly vcpuinfo vis22 --pretty
     VCPU:           0
     CPU:            1
     State:          running
     CPU time:       27.1s
     CPU Affinity:   0-47 (out of 48)
    
     VCPU:           1
     CPU:            2
     State:          running
     CPU time:       13.0s
     CPU Affinity:   0-47 (out of 48)
    
     VCPU:           2
     CPU:            0
     State:          running
     CPU time:       13.8s
     CPU Affinity:   0-47 (out of 48)
    
     VCPU:           3
     CPU:            1
     State:          running
     CPU time:       12.7s
     CPU Affinity:   0-47 (out of 48)
    

     This example shows a 2-core (4-vCPU) VM. The CPU Affinity value of 0-47 (out of 48) indicates that the vCPUs have not been pinned; the hypervisor is free to schedule them on any available host CPU.
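
     To check every running VM in one pass, the same command can be wrapped in a short loop (a sketch using only the commands shown above):

      # Sketch: print only the CPU Affinity lines for each running VM; a VM
      # whose affinity spans the whole host (e.g. 0-47) is not yet pinned.
      for vm in $(virsh --readonly list --name); do
        echo "== $vm =="
        virsh --readonly vcpuinfo "$vm" --pretty | grep 'CPU Affinity'
      done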

  5. OLVM Admin UI: Use the OLVM Manager UI to provision VMs.

  6. OLVM Manager VM (GCP): SSH into the OLVM Manager VM from the GCP console.
  7. OLVM Manager VM (GCP): Run yum install olvm-vmcontrol to install the olvm-vmcontrol utility.
  8. OLVM Manager VM (GCP): Check the CPU pinning status (e.g. for a VM called kube01):

     olvm-vmcontrol -m olvm-manager.gcp-project.internal \
     -u admin@internal -v kube01 -c getvcpu
    
     Oracle Linux Virtualization Manager VM Control Utility 4.3.1-1
     Password:
     Connected to Oracle Linux Virtualization Manager 4.3.10.4-1.0.22.el7
     Getting vcpu pinning ...
     No CPU pinning is configured
    
  9. OLVM Manager VM (GCP): Pin the CPUs for kube01 using olvm-vmcontrol (an optional core-coverage check on the KVM host is sketched after the output below). The general syntax is:

      olvm-vmcontrol -m <olvm_fqdn> -u <user>@internal -v <guest_vm_name> \
       -c setvcpu -s <host_cpu_set> [-e]

     For example, to pin kube01 to host CPUs 0-3:

      olvm-vmcontrol -m olvm-manager.gcp-project.internal \
       -u admin@internal -v kube01 -c setvcpu -s 0-3
    
     Oracle Linux Virtualization Manager VM Control Utility 4.3.1-1
     Password:
     Connected to Oracle Linux Virtualization Manager 4.3.10.4-1.0.22.el7
     Setting vcpu pinning ...
     Trying to pin virtual cpu # 0
     Trying to pin virtual cpu # 1
     Trying to pin virtual cpu # 2
     Trying to pin virtual cpu # 3
     Retrieving vcpu pinning to confirm it has been set...
     vcpu 0 pinned to cpuSet[0-3]
     vcpu 1 pinned to cpuSet[0-3]
     vcpu 2 pinned to cpuSet[0-3]
     vcpu 3 pinned to cpuSet[0-3]
    
     NOTE: if the VM is running you must now stop and then start the VM from
           the Oracle Linux Virtualization Manager in order for CPU pinning changes to take effect.
     NOTE: a restart or a reboot of the VM is not sufficient to put CPU pinning changes into effect.
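
     Optional check (run on the KVM host, not on the OLVM Manager): whether a range such as 0-3 covers two or four physical cores depends on how the host numbers its hyperthreads. A minimal sketch using sysfs:

      # For each host CPU in the chosen cpuset, print its hyperthread sibling
      # list; siblings that fall outside the cpuset mean the range splits
      # physical cores between VMs.
      for c in 0 1 2 3; do
        echo -n "cpu$c siblings: "
        cat /sys/devices/system/cpu/cpu$c/topology/thread_siblings_list
      done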
    
  10. OLVM Admin UI: Use the OLVM Manager UI to shut down and then start the VM. As noted above, a restart or reboot of the VM is not sufficient for the pinning to take effect.

  11. OLVM Manager VM (GCP): Verify the result from the OLVM Manager SSH session.

    olvm-vmcontrol -m olvm-manager.gcp-project.internal \
    -u admin@internal -v kube01 -c getvcpu
    Oracle Linux Virtualization Manager VM Control Utility 4.3.1-1
    Connected to Oracle Linux Virtualization Manager 4.3.10.4-1.0.22.el7
    Getting vcpu pinning ...
    vcpu 0 pinned to cpuSet[0-3]
    vcpu 1 pinned to cpuSet[0-3]
    vcpu 2 pinned to cpuSet[0-3]
    vcpu 3 pinned to cpuSet[0-3]
    
  12. kvm-host: Verify the result using virsh on the KVM host.

    virsh --readonly vcpuinfo kube01 --pretty
    VCPU:           0
    CPU:            1
    State:          running
    CPU time:       27.1s
    CPU Affinity:   0-3 (out of 40)
    
    VCPU:           1
    CPU:            2
    State:          running
    CPU time:       13.0s
    CPU Affinity:   0-3 (out of 40)
    
    VCPU:           2
    CPU:            0
    State:          running
    CPU time:       13.8s
    CPU Affinity:   0-3 (out of 40)
    
    VCPU:           3
    CPU:            1
    State:          running
    CPU time:       12.7s
    CPU Affinity:   0-3 (out of 40)
    

     Note: the CPU Affinity value is now 0-3 (out of 40), confirming that kube01 is pinned to host CPUs 0-3.
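
     The pinning is also visible in the domain XML on the KVM host, where it appears as <vcpupin> entries inside a <cputune> element (a quick sketch):

      # Sketch: show the <cputune> section of kube01's libvirt domain XML,
      # which lists one <vcpupin> entry per pinned vCPU.
      virsh --readonly dumpxml kube01 | grep -A 6 '<cputune>'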

  13. OLVM Manager VM (GCP): Return to the OLVM Manager SSH session and pin the CPUs for the remaining VMs. Tip: the -e flag makes olvm-vmcontrol read the password from the OLVMUTIL_PASS environment variable instead of prompting for it. A loop version of these commands is sketched after them below.

    export OLVMUTIL_PASS={olvm manager password}
    
     olvm-vmcontrol -m olvm-manager.gcp-project.internal \
     -u admin@internal -e -v kube02 -c setvcpu -s 4-11
    
     olvm-vmcontrol -m olvm-manager.gcp-project.internal \
     -u admin@internal -e -v kube03 -c setvcpu -s 12-15
    
     olvm-vmcontrol -m olvm-manager.gcp-project.internal \
     -u admin@internal -e -v kube04 -c setvcpu -s 16-23
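
     When many VMs need pinning, the same commands can be wrapped in a loop, as mentioned above (a sketch using the flags already shown; the VM names and CPU ranges are those of this example and should be adjusted to the environment):

      # Sketch: pin several VMs in one pass from "vm-name:cpuset" pairs.
      # Assumes OLVMUTIL_PASS is exported so -e can supply the password.
      OLVM_FQDN=olvm-manager.gcp-project.internal
      for entry in kube02:4-11 kube03:12-15 kube04:16-23; do
        vm=${entry%%:*}
        cpus=${entry##*:}
        olvm-vmcontrol -m "$OLVM_FQDN" -u admin@internal -e \
          -v "$vm" -c setvcpu -s "$cpus"
      done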
    
  14. OLVM Admin UI: Shut down and then start each VM.
  15. OLVM Admin UI: Verify the result.

Copyright © Dito LLC, 2023