Jan 28, 2021

Running FIO testing. On the graphs I can see information on IOPS/throughput etc. in the FIO single stats, but I'm getting the error "Data outside time range" on all charts on the FIO stats.

Jan 28, 2021

Also seeing the following issue in validation:

1 VMs will be deployed onto host podt-intel02.podt.spoc, each VM has 2 vCPU configured, in this case, the CPU resource of podt-intel02.podt.spoc would be oversubscribed.
2021-01-28 16:19:52 +0000: You can reduce number of vCPU per guest VM or reduce number of VMs
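For what it's worth, this warning boils down to comparing the requested vCPUs against what the host has available. A minimal, purely illustrative sketch of that kind of check (the host core count below is a made-up example, not something HCIBench reports here):

# Purely illustrative sketch of the kind of check behind the warning above.
# The host core count is an assumed example value, not taken from the validator.
num_vms = 1
vcpus_per_vm = 2
host_cores_available = 1  # hypothetical; oversubscription triggers when demand exceeds this

requested_vcpus = num_vms * vcpus_per_vm
if requested_vcpus > host_cores_available:
    print(f"CPU oversubscribed: {requested_vcpus} vCPUs requested vs {host_cores_available} cores available")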

Jan 28, 2021

This is just an FYI, so you can ignore it.
For the Grafana issue, you may want to adjust the time on the browser.

Jan 29, 2021

Thanks Chen Wei, I will try your suggestion.

Jan 13, 2021

Resource Usage:
RAM USAGE = -4.23%

Why is the RAM usage showing -4.23%?

Jan 25, 2021

It might be due to some miscalculation from vSAN Observer. Did you run the test against vSAN storage?

Jan 06, 2021

Hello,

After I upgraded HCIBench from version 2.5.0 to 2.5.2 using the command cd /root/ && wget https://codeload.github.com/cwei44/HCIBench/zip/master && unzip master && sh /root/HCIBench-master/upgrade.sh, I started to lose ping to the VM and I'm getting the following error during "Validate Config":

/opt/automation/lib/pre-validation.rb:350:in `validate_misc_info': undefined method `uniq' for nil:NilClass (NoMethodError)
from /opt/automation/lib/pre-validation.rb:637:in

I didn't change anything other than upgrading. The only way I can get it to work is by reverting the VM back from snapshot to version 2.5.0.

Jan 06, 2021

I just tried in my env and after upgrading it still works.
Could you try rebooting the HCIBench VM after upgrading? Also, could you send /opt/automation/conf/perf-conf.yaml to vsanperformance@vmware.com?

Jan 06, 2021

I rebooted the VM and am still getting the same error. The perf-conf.yaml file has been sent via email.

Jan 04, 2021

Hi, I have a question regarding the working set % and also the HCIBench best practice formula for creating a working set.
The VMware site says:
# of VMs x # of Data Disks x Size of Data Disk x # of Threads should be LESS THAN Size of Cache Drive x Diskgroups per Host x # of hosts
My scenario translates to 144 x 8 x 14 x 4 = 64,512 versus 800 x 4 x 9 = 28,800.
In this scenario I'm WELL OVER, not LESS THAN.
1. Is this a valid way to create a working set? (As I'm now over instead of under.)
2. Should I change the working set % to make sure I'm under 28,800? If I do so, do I still include the thread count in that total, or just the disk size x number of VMs x number of disks?
Many Thanks
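
For reference, a quick sketch of the rule-of-thumb comparison quoted above, purely illustrative and using only the numbers given in this question:

# Sketch of the rule-of-thumb comparison quoted above; numbers are from the question.
num_vms, data_disks_per_vm, data_disk_size_gb, threads_per_disk = 144, 8, 14, 4
cache_drive_size_gb, diskgroups_per_host, num_hosts = 800, 4, 9

workload_side = num_vms * data_disks_per_vm * data_disk_size_gb * threads_per_disk  # 64512
cache_side = cache_drive_size_gb * diskgroups_per_host * num_hosts                  # 28800

print(workload_side, "should be LESS THAN", cache_side, "->", workload_side < cache_side)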

Jan 05, 2021

Where did you see that?
Usually for the working set we talk about the total size, so it should be:
num_of_vms * num_of_disk_per_vm * disk_size * working-set_percentage

But the total thread count should be calculated in the following way:
num_of_vms * num_of_disk_per_vm * thread_per_disk

So in your case your total thread count would be
144 * 8 * 4 = 4608,
and you have 4 * 9 = 36 disk groups in total. Our recommendation is no more than 64 threads per disk group, so the total thread count you use for testing should be less than 36 * 64 = 2304 to get the best IOPS/throughput with reasonable latency. In your case, you can cut one of the following variables in half to land on the recommended config (see the sketch after this list):
1. num_of_vm
2. disk_per_vm
3. thread_per_disk
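
A quick sketch of the thread-count math above, using only the numbers from this thread:

# Sketch of the thread-count sizing above; values are from this thread.
num_vms, disks_per_vm, threads_per_disk = 144, 8, 4
diskgroups_per_host, num_hosts = 4, 9

total_threads = num_vms * disks_per_vm * threads_per_disk  # 4608
total_diskgroups = diskgroups_per_host * num_hosts         # 36
recommended_max = total_diskgroups * 64                    # 2304 (no more than 64 threads per disk group)

print(total_threads, "vs recommended max", recommended_max)
# Halving any one of num_vms, disks_per_vm or threads_per_disk brings 4608 down to 2304.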

As far as the working-set (size) calculation, we recommend using the total cache disk size. For example, if your cache disk size is 800GB (600GB usable if it's all-flash), then your total cache size is 36 * 600GB = 21600GB. Divide that number by the total number of vmdks you have, in your case 144 * 8, so your vmdk size can be set to around 20GB, with the working-set percentage set to 100 when configuring the workload parameters.
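
And a sketch of the working-set sizing just described, again with the numbers from this thread (600GB usable per 800GB cache device on all-flash, as stated above):

# Sketch of the working-set (vmdk) sizing above; values are from this thread.
usable_cache_per_device_gb = 600  # 800GB device, ~600GB usable on all-flash, as stated above
total_diskgroups = 4 * 9          # 36
num_vms, disks_per_vm = 144, 8

total_cache_gb = total_diskgroups * usable_cache_per_device_gb  # 21600
total_vmdks = num_vms * disks_per_vm                            # 1152
vmdk_size_gb = total_cache_gb / total_vmdks                     # ~18.75, i.e. roughly the 20GB mentioned above

print(f"vmdk size ~= {vmdk_size_gb:.1f} GB with working-set percentage set to 100")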

Hope it helps.

Jan 06, 2021

Thank you for that clarification! Much appreciated.