Comment thread started by Steve Fernandes on HCIBench

Full comments
Jan 04, 2021

Hi, I have a question regarding the working set % and also the HCIBench best-practice formula for creating a working set.
The VMware site says:
# of VMs x # of Data Disks x Size of Data Disk x # of Threads should be LESS THAN Size of Cache Drive x Diskgroups per Host x # of Hosts
My scenario translates to 144 x 8 x 14 x 4 = 64,512 versus 800 x 4 x 9 = 28,800.
In this scenario I'm WELL OVER, not LESS THAN.
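For reference, here is that arithmetic as a quick Python script (just a sanity check of the rule of thumb above; sizes are in GB and the variable names are my own):

# Rule of thumb from the VMware site:
# VMs * data disks * disk size * threads  <  cache size * disk groups per host * hosts
workload = 144 * 8 * 14 * 4  # left side:  64,512
cache = 800 * 4 * 9          # right side: 28,800 GB of raw cache
print(workload, cache, workload < cache)  # 64512 28800 False -> well over
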
1. Is this a valid way to create a working set? (As I'm now over instead of under.)
2. Should I change the working set % to make sure I'm under 28,800? If I do, do I still include threads in that total, or just Size of Data Disk x # of VMs x # of Data Disks?
Many Thanks

Jan 05, 2021

Where did you see that?
Usually, for the working set, we talk about the total size, so it should be:
num_of_vms * num_of_disk_per_vm * disk_size * working_set_percentage

The total thread count, however, should be calculated as:
num_of_vms * num_of_disk_per_vm * thread_per_disk

So in your case, the total thread count as defined is:
144 * 8 * 4 = 4,608
You have 4 * 9 = 36 disk groups in total. Our recommendation is no more than 64 threads per disk group, so the total thread count you use for testing should be at most 36 * 64 = 2,304 to get the best IOPS/throughput with reasonable latency. In your case, you can halve one of the following variables to reach the recommended config (see the sketch after this list):
1. num_of_vm
2. disk_per_vm
3. thread_per_disk
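
Here is that thread-count math as a small Python sketch (it just restates the numbers above; nothing here is HCIBench-specific and the names are only illustrative):

num_of_vms = 144
num_of_disk_per_vm = 8
thread_per_disk = 4
total_threads = num_of_vms * num_of_disk_per_vm * thread_per_disk  # 4,608
disk_groups = 4 * 9             # 4 disk groups per host * 9 hosts = 36
max_threads = disk_groups * 64  # recommended cap: 64 threads per disk group = 2,304
print(total_threads, max_threads)  # 4608 2304 -> halving any one variable lands on the cap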

As for the working-set (size) calculation, we recommend using the total cache disk size. For example, if your cache disk size is 800GB (600GB usable if it's all-flash), your total cache size is 36 * 600GB = 21,600GB. Divide that by the total number of VMDKs you have, in your case 144 * 8 = 1,152, which gives about 18.75GB, so your VMDK size can be set to around 20GB, with the working-set percentage set to 100 when configuring the workload parameter.
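
The sizing math as the same kind of Python sketch (the 600GB usable figure per 800GB cache device is for all-flash, as noted above):

disk_groups = 36
usable_cache_gb = 600  # per 800GB cache device, all-flash
total_cache_gb = disk_groups * usable_cache_gb  # 21,600GB
total_vmdks = 144 * 8  # 1,152 data disks in total
vmdk_size_gb = total_cache_gb / total_vmdks  # 18.75 -> round to ~20GB per VMDK
print(vmdk_size_gb)  # pair with working-set percentage = 100 in the workload profile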

Hope it helps.

Jan 06, 2021

Thank you for that clarification!!! Much appreciated.