Feb 10, 2020

Whenever I validate my configuration, I get the error below:
"The name 'hci-tvm-vsan-demo-vms' already exists"
There is no VM with this name on localdatastore or vsandatastore.
I re-deployed HCIBench and still get the same error.

Feb 10, 2020

Did you create a folder with that name? If so, please delete any folder whose name starts with "hci-tvm" and try again.
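If it helps, one way to check for leftover folders outside the vSphere Client is with govc (just a sketch; govc is not part of HCIBench, so this assumes you have it installed somewhere and pointed at your vCenter):

govc find / -type f -name 'hci-tvm*'     # list any folders whose names start with hci-tvm
govc object.destroy '<folder path from the output above>'     # then remove a leftover folder

Double-check the inventory path before destroying anything.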

Feb 10, 2020

No, there is no folder with an hci-tvm name.

Feb 06, 2020

When I run Validate Config, it creates multiple hci-tvm-vsan VMs and puts them on a different network from the other vdbench machines, or in some cases assigns an IP address that is already in use by our other vdbench machines. We haven't changed any configuration, and everything was working the last time we ran a test. The validation then fails, saying it is unable to SSH into an hci-tvm machine.

Feb 06, 2020

It says IP assignment failed or the IP is not reachable. The HCIBench appliance seems to have given itself 172.16.0.1, and it creates the hci-tvm VMs as 172.16.3.x.

Feb 06, 2020

The static IP ranges use a netmask of /18, so the subnet IP range is 172.16.0.1 - 172.16.63.254.
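As a quick sanity check on that math, the standard Python ipaddress module (nothing HCIBench-specific, just an illustration) prints the same netmask and first/last usable addresses:

python3 -c "import ipaddress; n = ipaddress.ip_network('172.16.0.0/18'); print(n.netmask, n[1], n[-2])"
# prints: 255.255.192.0 172.16.0.1 172.16.63.254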

Feb 06, 2020

I seem to have resolved this by running "ifconfig eth1 up; service dhcp start" on the HCIBench appliance.

Feb 06, 2020

After running the validation again, it failed again.

Feb 06, 2020

Once the validation fails, can you confirm that all the VMs have an IP on the correct subnet, and whether you can ping all, some, or none of the test VMs from HCIBench? Does the network selected for eth1 on HCIBench match the network in the VM configuration, and is the VLAN spanned across all your hosts?
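For example, from the HCIBench appliance something like the following would cover the first two checks (the 172.16.3.x range is only an illustration based on the addresses you mentioned; adjust it to whatever your test VMs actually received):

ip addr show eth1        # confirm eth1 is up and on the expected subnet
for i in $(seq 1 20); do ping -c 1 -W 1 172.16.3.$i >/dev/null && echo "172.16.3.$i reachable" || echo "172.16.3.$i not reachable"; done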

Feb 06, 2020

docker0 has an IP of 172.17.0.1
eth0 has an IP of 10.253.11.50

When running validation, the HCIBench appliance assigns itself an IP of 172.16.2.1

The hci-tvm machines assign themselves 172.16.3.x addresses

All hosts are connected to the same port group

The eth1 network on hcibench is connected to the same port group as the vms it creates

No NSX or VXLAN involved - just a lab environment

Feb 06, 2020

At first glance that all looks right. Email me when you can and let's jump on a call.

Jan 31, 2020

Hi Team,

We are doing performance analysis using HCIBench and would like to understand the internal behavior of the tool when we start seeing low throughput. At some point we hit a maximum with HCIBench, and we would like to know whether HCIBench lowers the workload it pushes when the throughput of the array starts dropping.

Feb 03, 2020

The load generators (vdbench or fio) will execute whatever test parameters you've set.

If you ask for a certain block size and queue depth, they will try to fulfill the request regardless of the performance or the impact on the storage system. For instance, if the storage system has a small amount of cache and few capacity disks, you can end up filling the cache, after which throughput/IOPS will drop as the system throttles incoming writes to prevent the cache from being completely exhausted.

If you create a custom test profile with a target (e.g. fio's latency_target) then the IOPS (and throughput) will fluctuate to meet the target.
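For illustration only, a standalone fio run with a latency target might look like this (paths and numbers are placeholders, and this is a plain fio invocation rather than an HCIBench-generated profile):

fio --name=lat_probe --filename=/path/to/testfile --size=1g --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based \
    --latency_target=2ms --latency_window=10s --latency_percentile=99

With latency_target set, fio varies the effective queue depth to find the highest load that still keeps 99% of I/Os under the 2 ms target.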

Feb 03, 2020

Thank you, Charles. Just to confirm: regardless of the state of the array, the load generator will keep pushing the same amount of workload defined in the profile, is this right?

Feb 06, 2020

Yes, fio will generate as much of the workload specified in the configuration file as it can, unless a limit is specified (e.g. latency_target or rate_iops).
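As an example of such a limit, a plain fio run capped at a fixed IOPS rate could look like this (values are placeholders):

fio --name=capped --filename=/path/to/testfile --size=1g --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=16 --runtime=60 --time_based --rate_iops=5000

Without a cap like rate_iops (or a latency target as discussed above), fio just drives the configured block size and queue depth as hard as the storage allows.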

Dec 29, 2019

Hi Team, I am not able to see the final test report in .pdf format.
I ran several tests with short durations (20-40 minutes) and was able to get my PDF report.
Now, without changing anything except the test duration to 28800 seconds (8 hours), I can't see the report in the results. I only have the .txt summary.
Is there anything I am missing?

Dec 29, 2019

I forgot to mention: version 2.3.1.

Dec 29, 2019

Adding that I discovered that Grafana is no longer responding to requests...

Dec 30, 2019

I suspect Grafana not responding is the root cause. Could you check whether you still have space on /dev/sdb by running "df /dev/sdb -h"?

Also, try rebooting your HCIBench VM, check whether you can access http://HCIBENCH_IP:3000, and then run
"ruby /opt/automation/lib/generate_report.rb /opt/output/results/TEST_NAME/TEST_CASE_NAME" to see what you get.