Monitor and Tune AIX IO Buffers
AIX has a number of I/O buffers that it uses for different types of I/O: there are buffers for paging devices, JFS and JFS2 filesystems, disk I/O, and client I/O. Client I/O is generally NFS and/or VxFS I/O. Shortages in these buffers are reported in the last five lines of vmstat -v.
vmstat -v | tail -5
164918 pending disk I/Os blocked with no pbuf <== Disk I/O Blocked
0 paging space I/Os blocked with no psbuf <== Paging Space I/O Blocked
2288 filesystem I/Os blocked with no fsbuf <== JFS I/O Blocked
0 client filesystem I/Os blocked with no fsbuf <== NFS or VxFS I/O Blocked
5329075 external pager filesystem I/Os blocked with no fsbuf <== JFS2 I/O Blocked
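Because these counters accumulate from boot, what matters most is whether they are still growing. A minimal sketch (the 60-second interval is an arbitrary choice) that samples the five counters twice and reports whether anything moved:
vmstat -v | tail -5 > /tmp/iobuf.before
sleep 60                                   # arbitrary sampling interval
vmstat -v | tail -5 > /tmp/iobuf.after
# diff prints any counters that moved; the message appears only if nothing changed
diff /tmp/iobuf.before /tmp/iobuf.after && echo "no buffer shortages in the last 60 seconds"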
Pending disk I/Os blocked with no pbuf
These disk I/Os are blocked when an AIX Volume Group has run out of pbufs for the I/O directed to it. You can check each Volume Group's pbuf configuration with lvmo.
# lvmo -v nimvg -a
vgname = nimvg
pv_pbuf_count = 512 <=== Number of buffers allocated for each Physical Volume.
total_vg_pbufs = 512 <=== Total number of buffers for this Volume Group.
max_vg_pbufs = 16384
pervg_blocked_io_count = 824 <=== Number of blocked IOs for this Volume Group.
pv_min_pbuf = 512
max_vg_pbuf_count = 0
global_blocked_io_count = 847 <=== Global count of blocked IOs for ALL Volume Groups.
aio_cache_pbuf_count = 0
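To survey every varied-on Volume Group at once, a short loop over lsvg -o works (a minimal sketch using standard AIX commands):
# Report pbuf configuration and blocked-I/O counts for each varied-on Volume Group
for vg in $(lsvg -o); do
  echo "== $vg =="
  lvmo -v $vg -a | egrep "pbuf|blocked"
done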
Tuning is done on a per-Volume-Group basis. Do not add too many pbufs to a Volume Group, as they consume pinned memory pages. The best approach is to add small increments of pbufs until the pervg_blocked_io_count stops rising. So in this example, for nimvg, I am adding an extra 512 pbufs per Physical Volume in the Volume Group.
lvmo -v nimvg -o pv_pbuf_count=1024
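Afterwards, confirm the new totals and keep watching the blocked counter; if pervg_blocked_io_count continues to climb, add another small increment:
lvmo -v nimvg -a | egrep "pbuf|blocked"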
External pager filesystem I/Os blocked with no fsbuf
These JFS2 disk I/Os are blocked when AIX does not have sufficient fsbufs in pinned memory to hold the I/O requests. As a rough tolerance, a count of up to five digits per 90 days of uptime is acceptable.
Your first tuning option should be j2_dynamicBufferPreallocation, as this is a dynamic change and takes immediate effect.
If the blocked count has 6 digits:
ioo -o j2_dynamicBufferPreallocation=128
If 7+ digits:
ioo -o j2_dynamicBufferPreallocation=256
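To check the current value first, and to make the change survive a reboot, ioo's -p flag applies it to both the running system and the reboot values (a sketch; adjust the value to your situation):
ioo -a | grep j2_dynamicBufferPreallocation   # show the current setting
ioo -p -o j2_dynamicBufferPreallocation=128   # apply now and persist across reboots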
Your second tuning option should be j2_nBufferPerPagerDevice.
Note that j2_nBufferPerPagerDevice is now a restricted tunable, and IBM recommends opening a Sev3 PMR first to confirm your actions before proceeding. This option requires that the filesystems be remounted to take effect.
If the first option wasn't enough and the blocked count has 6 digits:
ioo -o j2_nBufferPerPagerDevice=5120
If 7+ digits:
ioo -o j2_nBufferPerPagerDevice=10240
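Since the setting only applies at mount time, unmount and remount each JFS2 filesystem afterwards (the /data mount point here is just an example):
umount /data && mount /data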
Filesystem I/Os blocked with no fsbuf
This counter is only relevant for JFS filesystems and has nothing to do with JFS2. If you still have JFS filesystems, you tune this with numfsbufs.
If the VMM must wait for a free bufstruct, it puts the process on the VMM wait list before the start I/O is issued and wakes it up once a bufstruct becomes available. For heavy I/O workloads, a suggested starting value is 512; filesystems must be remounted for the change to take effect.
ioo -o numfsbufs=512
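As with JFS2, verify the value and remount the JFS filesystems for the change to take effect (/data again is only an example mount point):
ioo -a | grep numfsbufs   # confirm the new value
umount /data && mount /data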