Optimising Power with Affinity Groups
Here is an extract from the "Optimizing Power with Affinity Groups" poster presentation I gave at the IBM Technical Symposium in Melbourne in 2017.
What are Power Affinity Groups?
- LPAR Affinity Groups give the Hypervisor a hint that this group of LPARs should be placed on processor Chips and/or Nodes that are close to each other.
- Without these hints, the Hypervisor may not place your LPARs near the VIO Servers that provide their virtual resources.
- Affinity Groups can keep competing workloads apart from each other.
Why are Power Affinity Groups Important?
- LPARs that share common resources, such as the Fibre Channel and Ethernet adapters within a VIO Server, obtain better performance and adapter throughput the closer they are physically.
- Network communications between LPARs that are closer to each other will respond faster and have lower latency times.
- Up to 50% increase in Network Bandwidth.
- Reduced Network Round Trip Times.
- Higher Transactions per Second.
- LPARs with differing Time of Day workloads can be placed on the same Chips and/or Nodes.
- OLTP Day Time workload runs when overnight batch LPARs are idle.
- Batch LPARs run when the OLTP day time workloads are idle.
- Separation of Production and Non-Production workloads within the same Power System.
Power Affinity Group Requirements
- Prior to HMC Version 8, Release 8.6.0, Service Pack 1, the LPM command (migrlpar) does not honour Affinity Group IDs, so they are lost after an LPM.
- Prior to Firmware FW860.10, LPAR Affinity Groups can override the POWER8 VIOS placement priority for adapter locality if the VIO Servers are in Affinity Groups with other LPARs.
- LPAR Affinity Groups are not configurable from within the GUI; the only way to configure them is from the command line on the HMC.
View the Affinity Group in current profile
- lssyscfg -m e81 -r prof -F lpar_name,lpar_id,name,affinity_group_id
NPOC01,81,normal,30
vio82,2,normal,20
vio83,3,normal,30
vio84,4,normal,40
vio85p,5,normal,50
View the Affinity Group in running profile
- lssyscfg -m e81 -r lpar -F name,lpar_id,state,affinity_group_id
NPOC01,81,running,30
vio82,2,running,20
vio83,3,running,30
vio84,4,running,40
vio85p,5,running,50
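On a frame with many LPARs the flat CSV output above gets hard to scan. As a rough sketch (assuming the same name,lpar_id,state,affinity_group_id field order shown above), the output can be piped through awk to group LPARs by Affinity Group id:

```shell
# Group LPARs by Affinity Group id. On a real HMC you would replace the
# sample data below with:
#   lssyscfg -m e81 -r lpar -F name,lpar_id,state,affinity_group_id
lpars='NPOC01,81,running,30
vio82,2,running,20
vio83,3,running,30
vio84,4,running,40
vio85p,5,running,50'

echo "$lpars" | awk -F, '
  { members[$4] = members[$4] " " $1 }   # collect LPAR names per group id
  END { for (g in members) print "group " g ":" members[g] }' | sort
# → group 20: vio82
#   group 30: NPOC01 vio83
#   group 40: vio84
#   group 50: vio85p
```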
Setting Power Affinity Groups
- Set the Affinity Group in current profile:
chsyscfg -m e81 -r prof -i "lpar_name=NPOC01,name=normal,affinity_group_id=30"
- Affinity Groups can be changed and set from the HMC CLI.
- The LPAR needs to be restarted from its profile for the Affinity Group id to take effect.
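The full sequence on the HMC would look something like the sketch below (the system name e81, LPAR NPOC01 and profile normal are taken from the examples above; the shutdown/activate steps use chsysstate, and a graceful OS shutdown may be preferable to --immed):

```shell
# Set the Affinity Group id in the profile
chsyscfg -m e81 -r prof -i "lpar_name=NPOC01,name=normal,affinity_group_id=30"

# Restart the LPAR from the profile so the new Affinity Group id takes effect
chsysstate -m e81 -r lpar -o shutdown -n NPOC01 --immed
chsysstate -m e81 -r lpar -o on -n NPOC01 -f normal
```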
- Affinity Groups can be set or changed with the LPM command.
- If the Affinity Group is missing on the target frame during an LPM, the target frame creates the LPAR as per the standard placement rules, then sets the Affinity Group id.
- Future LPMs with the same Affinity Group will place the LPAR as near as possible to the existing LPARs with the same Affinity Group id.
LPM with Power Affinity Groups
Affinity Groups can be set or changed with the LPM command:
# migrlpar -o m -m e82 -t e81 -p NPOC01 -i 'dest_lpar_id=81,"virtual_scsi_mappings=20/vio82//840,21/vio83//841",
"virtual_fc_mappings=30/vio82//842,31/vio82//843,32/vio83//844,33/vio83//845",
"vswitch_mappings=41/ETHERNET0/NPB,42/ETHERNET0/NPB,43/ETHERNET0/NP,43/ETHERNET0/NPC",
shared_proc_pool_name=DefaultPool,source_msp_name=vio92,source_msp_ipaddr=192.168.40.102,
dest_msp_name=vio82,dest_msp_ipaddr=192.168.40.100,affinity_group_id=30'
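Before running a migration like the one above, the same parameters can be checked with a validation-only run. This is a sketch reusing the systems and LPAR from the example (-o v validates without migrating):

```shell
# Validate the migration first; no partition is moved
migrlpar -o v -m e82 -t e81 -p NPOC01 -i 'dest_lpar_id=81,affinity_group_id=30'

# If validation passes, run the full migration with -o m as shown above
```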
Performance Graphs
See my blog post for details regarding performance increases for LPAR communications with core to core, chip to chip and node to node configurations.
http://www.capacityreports.net/AIX_Blog/index.php/power8-e880-internal-vswitch-tests
LPAR Layout using Power Affinity Groups
See my blog post for details regarding the following table.
http://www.capacityreports.net/AIX_Blog/index.php/power-system-hypervisor-resource-dumps
|-----------|-----------------------|---------------|------|---------------|---------------|-------|
|  Domain   |      Procs Units      |    Memory     |      |  Proc Units   |    Memory     | Ratio |
| SEC | PRI | Total | Free  | Free  | Total | Free  |  LP  |  Tgt  | Aloc  |  Tgt  | Aloc  |       |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|
|   2 |     |  4800 |  2800 |    60 |  6144 |  2019 |      |       |       |       |       |   563 |
|     |   8 |  1200 |   100 |    60 |  1536 |   113 |      |       |       |       |       |   882 |
|     |     |       |       |       |       |       |   50 |   100 |   100 |   192 |   192 |       |
|     |     |       |       |       |       |       |   76 |   100 |   100 |       |       |       |
|     |     |       |       |       |       |       |  110 |   200 |   200 |   880 |   880 |       |
|     |     |       |       |       |       |       |  143 |   600 |   600 |   256 |   256 |       |
|     |     |       |       |       |       |       |  147 |    40 |    40 |    32 |    32 |       |
|     |   9 |  1200 |   900 |     0 |  1536 |   260 |      |       |       |       |       |   225 |
|     |     |       |       |       |       |       |  111 |   300 |   300 |  1200 |  1200 |       |
|     |  10 |  1200 |   600 |     0 |  1536 |   120 |      |       |       |       |       |   156 |
|     |     |       |       |       |       |       |    2 |   200 |   200 |    48 |    48 |       |
|     |     |       |       |       |       |       |   76 |   100 |   100 |   800 |   800 |       |
|     |     |       |       |       |       |       |   83 |   200 |   200 |     8 |     8 |       |
|     |     |       |       |       |       |       |  112 |   100 |   100 |   480 |   480 |       |
|     |  11 |  1200 |  1200 |     0 |  1536 |  1526 |      |       |       |       |       |   992 |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|