Apr 29, 2020

I downloaded the latest version and was able to import it successfully. On my 3-host cluster, the validation was successful, but the test never finishes. I have included the log that I can see below; if there is a better log for this stage, I'm happy to grab that as well.

I can see that there are 6 VMs spread across my 3 hosts.

2020-04-29 22:32:52 -0700: Checking Existing VMs...
2020-04-29 22:35:11 -0700: Existing VMs are Successfully Verified.
2020-04-29 22:35:12 -0700: Virtual Disk Preparation ZERO Started.(May take half to couple of hours depending on the size of VMs deployed)
2020-04-29 23:47:14 -0700: Disk Preparation Finished: 4/6 VMs
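
For reference, I grabbed the above from the appliance itself; a quick sketch of what I ran, assuming the default /opt/automation/logs location (file names may vary by HCIBench version):

# On the HCIBench appliance over SSH
ls -lt /opt/automation/logs/            # newest per-stage log files first
tail -n 50 /opt/automation/logs/*.log   # last lines of each stage's log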

Apr 30, 2020

Four of the VMs are around 230 GB each of the allocated 128 GB; the other two are only around 50 and 60 GB. The two smaller VMs are on the same ESXi host.

All hosts are identical: Supermicro, Xeon 8c/16t, 64 GB RAM, 1 TB Samsung 970 Pro NVMe (vSAN cache), 6.4 TB Intel DC P4600 (vSAN capacity).

Apr 30, 2020

OK, so I restarted one of the hci-fio VMs and the HCIBench tool continued.

The sizes are all still 4x 230 GB, 1x 64 GB, 1x 49 GB.

Apr 30, 2020

I was able to finish the tests after 8 hours. I wanted to see if somebody can help me interpret the results and maybe provide some thoughts on why it's performing so poorly.

Thanks

Apr 30, 2020

If you can upload your results (including the vSAN observer data) to a shared drive and send us an email at vsanperformance@vmware.com, I'll take a quick look.

Apr 29, 2020

I am having the same issue as Robert below: HCIBench on segment 1, test VMs on segment 2. DHCP is enabled on both segments, and the HCIBench appliance is receiving a DHCP address on both. Test VMs deployed on either segment aren't receiving any IP addresses, regardless of whether I use the segment's DHCP or the HCIBench appliance as the DHCP server.

Apr 29, 2020

Furthermore, if both NICs on the HCIBench appliance are connected, I can't browse to the configuration page. Once I disconnect vmnic2 (the VM network NIC), I can browse to it just fine.

If both NICs are connected, I can't reliably ping either one, but if either is disconnected, ping requests are returned reliably. I think there is a problem with the internal gateway settings.
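
In case it helps with debugging, here is a quick way to inspect the addressing and routing on the appliance; just a sketch, and the 10.0.0.10 target is a placeholder for an address in your own environment:

# On the HCIBench appliance console or over SSH
ip addr show            # confirm each NIC received the DHCP address you expect
ip route show           # look for conflicting default routes on the two NICs
ip route get 10.0.0.10  # replace with a vCenter/ESXi address to see which interface/gateway is used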

Apr 29, 2020

Hi Tom, this is definitely a configuration that we use in our lab so this behavior is unexpected. Can you reach out to me by email at vsanperformance@vmware.com?

Apr 29, 2020

The issue was a blocked port between HCIBench and the ESXi hosts preventing the deployment of the worker VMDK.

Adding a firewall rule to permit ICMP and HTTPS (443) from HCIBench to the ESXi hosts solved the issue.
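
For anyone hitting something similar, a quick way to confirm the path from the appliance to each host before re-running validation; a sketch only, with esxi01/esxi02/esxi03 as placeholder host names:

# Run from the HCIBench appliance
for host in esxi01 esxi02 esxi03; do
    ping -c 2 "$host"                                             # ICMP reachability
    curl -k -s -o /dev/null -w "%{http_code}\n" "https://$host/"  # anything other than 000 means port 443 is reachable
done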

Apr 24, 2020

I'm trying to use HCIBench in a 6.7 environment with vSAN, and every time I run pre-validation I get failed IP tests of one kind or another. It looks like there's always one of the test VMs that fails its ping tests.

2020-04-24 14:43:28 -0700: IP Assignment failed or IP not reachable

From one of the tvm logs:
networks: Pub-Net-1284 = Bench-VLAN
Deploying VM on 10.252.3.64...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 14.4M 100 14.4M 0 0 36.8M 0 --:--:-- --:--:-- --:--:-- 36.8M
Powering on VMs ...
PowerOnVM hci-tvm-vsanDatastore-6-1: success
Waiting for VMs to boot ...
green
hci-tvm-vsanDatastore-6-1: 192.168.3.6
PING 192.168.3.6 (192.168.3.6) 56(84) bytes of data.
From 192.168.2.1 icmp_seq=1 Destination Host Unreachable
From 192.168.2.1 icmp_seq=2 Destination Host Unreachable
... several failed ping attempts...
PowerOffVM hci-tvm-vsanDatastore-6-1: success
Destroy hci-tvm-vsanDatastore-6-1: success
Can't Ping VM VirtualMachine("vm-858") by IP 192.168.3.6
254
2020-04-24 14:43:28 -0700: IP Assignment failed or IP not reachable

Apr 27, 2020

Usually this type of problem is a configuration difference between hosts when using standard virtual switches, and/or a misconfigured trunk port on the physical switch (e.g. the VLAN not being allowed).
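
One quick way to spot a host-level difference is to compare the standard vSwitch port group settings on every host; a sketch, assuming the port group in your log is the one used for the test VMs:

# Run on each ESXi host over SSH and compare the VLAN ID column across hosts
esxcli network vswitch standard portgroup list | grep -i bench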

Create two VMs: one on host 10.252.3.64 and one on another host. Configure an IP on each VM (e.g. 192.168.3.1/24 and 192.168.3.2/24). If the VMs cannot ping each other, but the pings succeed after you vMotion the VM off host 10.252.3.64 to another host, that is a strong confirmation of a network configuration problem.
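
As a concrete sketch of that test (assuming the test NIC inside each VM is eth1 and the 192.168.3.0/24 range is otherwise unused):

# VM A, placed on host 10.252.3.64
ip addr add 192.168.3.1/24 dev eth1
ip link set eth1 up

# VM B, placed on a different host
ip addr add 192.168.3.2/24 dev eth1
ip link set eth1 up
ping -c 5 192.168.3.1   # if this fails here but works after vMotioning VM A off 10.252.3.64, suspect that host's networking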

Apr 24, 2020

The test VMs on the other hosts have ping results like this:

hci-tvm-vsanDatastore-7-1: 192.168.3.7
deploy-hcitvm-7.log-PING 192.168.3.7 (192.168.3.7) 56(84) bytes of data.
deploy-hcitvm-7.log-64 bytes from 192.168.3.7: icmp_seq=1 ttl=64 time=0.291 ms
deploy-hcitvm-7.log-64 bytes from 192.168.3.7: icmp_seq=2 ttl=64 time=0.161 ms
deploy-hcitvm-7.log-64 bytes from 192.168.3.7: icmp_seq=3 ttl=64 time=0.161 ms
deploy-hcitvm-7.log-64 bytes from 192.168.3.7: icmp_seq=4 ttl=64 time=0.196 ms
deploy-hcitvm-7.log-64 bytes from 192.168.3.7: icmp_seq=5 ttl=64 time=0.163 ms
deploy-hcitvm-7.log-
deploy-hcitvm-7.log:--- 192.168.3.7 ping statistics ---
deploy-hcitvm-7.log-5 packets transmitted, 5 received, 0% packet loss, time 22ms

Apr 15, 2020

How does HCIBench determine whether the vSAN cluster is all-flash or hybrid?
Is it possible that the information reported is wrong?
For example, a vSAN all-flash stretched cluster reported as hybrid.
Thanks

Apr 15, 2020
Apr 16, 2020

Thanks for your reply.
Hmm, maybe it's because one disk in the witness appliance is marked as HDD:

vSAN Disks:
HDD: Local VMware Disk (mpx.vmhba0:C0:T1:L0)
VMware Virtual disk 375 GB
SSD: Local VMware Disk (mpx.vmhba0:C0:T2:L0)
VMware Virtual disk 10 GB
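
For reference, one way to confirm the flag from the witness appliance's ESXi shell, plus what appears to be the documented claim-rule approach for tagging the device as flash; a sketch only, so please verify the exact procedure against the VMware KB for your version:

# Check the "Is SSD" flag for the device
esxcli storage core device list -d mpx.vmhba0:C0:T1:L0 | grep -i ssd

# If it reports false, add a SATP claim rule that tags it as SSD and reclaim the device
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba0:C0:T1:L0 --option="enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T1:L0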

Thanks again

Apr 16, 2020

Here is a quick workaround. Run these commands on HCIBench to use modified scripts that force the type to All-Flash.

cp /opt/automation/lib/easy-run.rb /opt/automation/lib/easy-run.rb.bak

cp /opt/automation/lib/get-vsan-info.rb /opt/automation/lib/get-vsan-info.rb.bak

curl https://raw.githubusercontent.com/cleeistaken/hcibench_miscellaneous/master/force_af/easy-run.rb --output /opt/automation/lib/easy-run.rb

curl https://raw.githubusercontent.com/cleeistaken/hcibench_miscellaneous/master/force_af/get-vsan-info.rb --output /opt/automation/lib/get-vsan-info.rb
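
If you need to revert to the stock behavior later, the backups created by the first two commands can simply be copied back:

cp /opt/automation/lib/easy-run.rb.bak /opt/automation/lib/easy-run.rb

cp /opt/automation/lib/get-vsan-info.rb.bak /opt/automation/lib/get-vsan-info.rb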