HCIBench stands for "Hyper-converged Infrastructure Benchmark". It's essentially an automation wrapper around the popular and proven VDbench open source benchmark tool that makes it easier to automate testing across an HCI cluster.
RAM USAGE = -4.23%
Why is the RAM usage showing -4.23%?
It might be due to a miscalculation in vSAN Observer. Did you run the test against vSAN storage?
After I upgraded HCIBench from version 2.5.0 to 2.5.2 using the command cd /root/ && wget https://codeload.github.com/cwei44/HCIBench/zip/master && unzip master && sh /root/HCIBench-master/upgrade.sh
I started to lose ping to the VM, and I get the following error during "Validate Config": "/opt/automation/lib/pre-validation.rb:350:in `validate_misc_info': undefined method `uniq' for nil:NilClass (NoMethodError)
I didn't change anything other than upgrading. The only way I can get it to work is by reverting the VM back to the version 2.5.0 snapshot.
I just tried in my environment, and it still works after upgrading.
Could you try rebooting the HCIBench VM after upgrading? Also, could you send /opt/automation/conf/perf-conf.yaml to firstname.lastname@example.org?
I rebooted the VM and am still getting the same error. The perf-conf.yaml file has been sent via email.
Hi, I have a question regarding the working set % and also the HCIBench best practice formula for sizing the working set.
So the VMware site says
# of VMs x # of Data Disks x Size of Data Disk x # of Threads should be LESS THAN Size of Cache Drive x Diskgroups per Host x # of hosts
My scenario translates to 144 x 8 x 14 x 4 = 64,512 and 800 x 4 x 9 = 28,800.
In this scenario I'm WELL OVER, not LESS THAN.
1. Is this a valid way to size a working set? (As I'm now over instead of under.)
2. Should I change the working set % to make sure I'm under 28,800? If I do so, do I still include threads in that total, or just the disk size x number of VMs x number of disks?
Where did you see that?
Usually for working set we talk about the total size, so it should be:
num_of_vms * num_of_disk_per_vm * disk_size * working-set_percentage
But the total thread count should be calculated in the following way:
num_of_vms * num_of_disk_per_vm * thread_per_disk
So in your case, the total thread count you defined would be:
144 * 8 * 4 = 4608
And you have 4 * 9 = 36 disk groups in total. Our recommendation on thread count is no more than 64 threads per disk group, so the total thread count you want to use for testing should be less than 36 * 64 = 2304 to get the best IOPS/throughput with reasonable latency. So in your case, you can cut one of the following variables in half to get to the recommended config: the number of VMs, the number of data disks per VM, or the threads per disk.
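The thread-count math above can be sketched as a quick sanity check (a minimal illustration; the variable names are mine, and the 64-threads-per-disk-group ceiling is the recommendation stated above):

```python
# Workload variables from the scenario above
num_vms = 144
disks_per_vm = 8
threads_per_disk = 4

# Cluster layout from the scenario above
hosts = 9
disk_groups_per_host = 4
MAX_THREADS_PER_DISK_GROUP = 64  # recommended ceiling per disk group

total_threads = num_vms * disks_per_vm * threads_per_disk         # 4608
total_disk_groups = hosts * disk_groups_per_host                  # 36
recommended_max = total_disk_groups * MAX_THREADS_PER_DISK_GROUP  # 2304

print(total_threads, recommended_max, total_threads <= recommended_max)
# Halving any one workload variable (e.g. threads_per_disk = 2)
# brings the total to exactly 2304, the recommended limit.
```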
As far as the working-set (size) calculation goes, we recommend using the total cache disk size. For example, if your cache disk size is 800GB (600GB usable if it's all-flash), then your total cache size is 36 * 600GB = 21600GB. Divide this number by the total number of VMDKs you have (144 * 8 in your case), so your VMDK size can be set to around 20GB, with the working-set percentage set to 100 when configuring the workload parameter.
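The cache-based sizing above works out as follows (a minimal sketch with my own variable names, using the numbers from this thread):

```python
# Total usable cache across the cluster
hosts = 9
disk_groups_per_host = 4
usable_cache_gb = 600  # 800GB cache device, ~600GB usable on all-flash

total_cache_gb = hosts * disk_groups_per_host * usable_cache_gb  # 21600

# Spread that cache across all data VMDKs
num_vms = 144
disks_per_vm = 8
total_vmdks = num_vms * disks_per_vm  # 1152

vmdk_size_gb = total_cache_gb / total_vmdks
print(vmdk_size_gb)  # 18.75 -> set VMDKs around 20GB, working set 100%
```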
hope it helps.
Thank you for that clarification! Much appreciated.
Hi, I found that the throughput displayed by the HCIBench monitor under vSphere 7.0.1 is lower than under vSphere 6.7.
Have you made any attempts to do so?
I'm not sure I'm following you on this. Do you mean the throughput shown in the Grafana dashboard is lower?
I mean that with vSphere 7.0.1, the throughput I see in Grafana is lower than with vSphere 6.7.
So running the same workload, you got lower throughput on 7.0.1 than on 6.7? Is everything else the same except the vSphere version?
Yes, and these are just our observations.
How much of a difference did you observe?
Also, nothing changed in HCIBench itself, so the workload pushed down should be identical.
You can send both tests' PDF reports to email@example.com so we can investigate further.