Oct 04, 2020

Anyone seen this error on OVF deployment failure (vSphere 7)?

Provider method implementation threw unexpected exception: com.vmware.vapi.std.errors.OperationNotFound: OperationNotFound (com.vmware.vapi.std.errors.operation_not_found) => {
messages = [LocalizableMessage (com.vmware.vapi.std.localizable_message) => {
id = vapi.method.input.invalid.interface,
defaultMessage = Cannot find service 'com.vmware.appliance.networking.no_proxy'.,
args = [com.vmware.appliance.networking.no_proxy],
params = <null>,
localized = <null>
}],
data = <null>,
errorType = OPERATION_NOT_FOUND
}

Oct 06, 2020

It turns out this was caused by an issue in vCenter (services not running).

Sep 22, 2020

Hi Team

I am getting the following error:
checking existing VMs
Deployement Started
[ERROR] IP Assignment or Accessible Failed
Testing failed, For details plase Find Logs in /opt/autamation/logs

PING 172.18.3.26 (172.18.3.26) 56(84) bytes of data.
From 172.18.2.1 icmp_seq=1 Destination Host Unreachable

Can you please help?

ESXi-MGMT and vSAN are configured as LAG interfaces; do I need to make any changes for that?

Sep 30, 2020

Did you specify the same portgroup for the guest VMs as the one assigned to HCIBench's "VM Network"? If you did, did you pass pre-validation? I suspect your underlying VLAN has a connectivity issue.

Sep 23, 2020

This error is usually a good indicator of an underlying network problem. HCIBench does not require anything beyond reliable networks with unrestricted communication between the hosts. Using a LAG shouldn't be a problem, as it is completely transparent to HCIBench. However, LAGs introduce additional complexity, and a misconfiguration can result in intermittent problems.

- Are all the test VMs unreachable, or only a subset? If it's a subset, is there a pattern (host, rack, ...)?
- Create a pair of VMs to test connectivity between every pair of hosts on the portgroups used by HCIBench. It may be necessary to use different IP and MAC addresses depending on the LAG hash algorithm.
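The pairwise check above can be scripted rather than done by hand. A minimal Ruby sketch (the host list and the `ssh`/`ping` command shape are illustrative assumptions, not anything HCIBench ships):

```ruby
# Build every ordered source/destination host pair. Both directions matter,
# because a LAG hash can place each direction of a flow on a different uplink.
def host_pairs(hosts)
  hosts.product(hosts).reject { |src, dst| src == dst }
end

# Turn the pairs into the ping commands to run from each source host.
# The command shape is an assumption; adapt user, ping count, and interface
# to your environment.
def ping_plan(hosts)
  host_pairs(hosts).map { |src, dst| "ssh root@#{src} ping -c 3 #{dst}" }
end

puts ping_plan(["172.18.3.25", "172.18.3.26"])
```

Any pair that fails points at the specific host/uplink combination to inspect.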

Sep 09, 2020

2020-09-09 22:55:18 -0700: Validating Fio binary and the workload profiles...
2020-09-09 22:55:19 -0700: Validating VC IP and Credential...
2020-09-09 22:55:20 -0700: VC IP and Credential Validated
2020-09-09 22:55:20 -0700: Validating Datacenter Data-center...
2020-09-09 22:55:20 -0700: Datacenter Data-center Validated
2020-09-09 22:55:20 -0700: Validating Cluster Cluster...
2020-09-09 22:55:21 -0700: Cluster Cluster Validated
2020-09-09 22:55:22 -0700: Cluster Cluster has DRS mode: disabled
2020-09-09 22:55:23 -0700: Validating If Any Hosts in Cluster Cluster is in Maintainance Mode...
2020-09-09 22:55:25 -0700: All the Hosts in Cluster Cluster are not in Maitainance Mode
2020-09-09 22:55:25 -0700: Validating Network VM Network...
2020-09-09 22:55:27 -0700: Network VM Network Validated
2020-09-09 22:55:27 -0700: Checking If Network VM Network is accessible from all the hosts of Cluster...
2020-09-09 22:55:28 -0700: Network VM Network is accessible from host 10.200.0.47
2020-09-09 22:55:28 -0700: Network VM Network is accessible from all the hosts of Cluster
2020-09-09 22:55:28 -0700: Validating Type of Network VM Network...
2020-09-09 22:55:29 -0700: Network VM Network Type is Network
2020-09-09 22:55:30 -0700: Datastore Datastore-ILDC Validated
2020-09-09 22:55:30 -0700: Checking Datastore Datastore-ILDC type...
2020-09-09 22:55:31 -0700: Getting Datastore Datastore-ILDC id
2020-09-09 22:55:32 -0700: Datastore Datastore-ILDC type is VMFS
2020-09-09 22:55:32 -0700: Checking If Datastore Datastore-ILDC is accessible from all the hosts of Cluster...
2020-09-09 22:55:33 -0700: Datastore Datastore-ILDC is accessible from host 10.200.0.47
2020-09-09 22:55:33 -0700: Datastore Datastore-ILDC is accessible from all the hosts of Cluster
2020-09-09 22:55:34 -0700: Validating cluster inter-connectivity...
/opt/automation/lib/pre-validation.rb:440:in `validate_vc_info': undefined method `[]' for false:FalseClass (NoMethodError)
from /opt/automation/lib/pre-validation.rb:677:in `<main>'

Am I missing something? This is the first run on a newly set up ESXi, vCenter, and HCIBench controller.
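For what it's worth, `undefined method '[]' for false:FalseClass` is the classic Ruby symptom of a helper returning `false` on failure while the caller indexes the result as if it were a Hash. A hypothetical illustration of that pattern (not HCIBench's actual code; the method name and return shape are made up):

```ruby
# Hypothetical failure path: the helper returns false instead of a Hash
# when it cannot gather cluster info (e.g. a connectivity check fails).
def cluster_info(reachable)
  return false unless reachable
  { "hosts" => ["10.200.0.47"] }
end

info = cluster_info(false)
begin
  info["hosts"]               # indexing false raises NoMethodError
rescue NoMethodError => e
  puts e.message
end
```

Since the log shows the crash right after "Validating cluster inter-connectivity...", a failed check feeding `false` into later code is a plausible reading; the fix would be on the environment/connectivity side rather than in the script.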

Sep 17, 2020

Sorry, we are busy with VMworld and did not get a chance to reply sooner.

You have a single ESXi cluster?

Could you ssh into HCIBench and run

rvc "USERNAME:PASSWORD"@VCIP

to enter RVC, then

cd /VC/DATACENTER/computers/CLUSTER

and run

what 'hosts/*/vms/hci-tvm-*'

and paste me the result?

Aug 24, 2020

How do I force the warm-up on every run of HCIBench?
It seems random whether it warms up all VMs or just some.

I chose random warm-up on the last run (deployed 100 VMs):

2020-08-24 09:15:07 -0700: Deployment Started.
2020-08-24 09:21:55 -0700: Verifying If VMs are Accessible
2020-08-24 09:21:56 -0700: Deployment Successfully Finished.
2020-08-24 09:21:57 -0700: Virtual Disk Preparation RANDOM Started.(May take half to couple of hours depending on the size of VMs deployed)
2020-08-24 09:21:57 -0700: Disk Preparation Finished: 1/1 VMs
2020-08-24 09:21:57 -0700: Virtual Disk Preparation Finished, Sleeping for 120 Seconds...
2020-08-24 09:23:57 -0700: I/O Test Started.
2020-08-24 09:24:01 -0700: Started Testing vdb-10vmdk-25ws-4k-0rdpct-100randompct-2threads

We perform a trim of all disks and rebuild the DG before every HCIBench test, so we need the warm-up to trigger every time.

After 2-3 runs it typically does no warm-up unless we change the test types or switch between vdbench and fio.

We've tried setting warm-up in the test profiles but it doesn't help.
Is there something we are missing?

Aug 24, 2020

If you reuse deployed VMs and those VMs have been warmed up before, HCIBench skips the warm-up automatically to save time. So if you don't reuse VMs, it will go ahead and warm them up when deployment finishes.
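The reuse behaviour described above amounts to a simple decision. Sketched here as an assumption about the logic, not HCIBench's actual code:

```ruby
# Hypothetical sketch: warm-up is skipped only when VMs are reused AND
# those VMs were already warmed up in a previous run.
def needs_warmup?(reuse_vms:, previously_warmed:)
  !(reuse_vms && previously_warmed)
end

needs_warmup?(reuse_vms: true,  previously_warmed: true)   # => false (skipped)
needs_warmup?(reuse_vms: false, previously_warmed: true)   # => true
```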

Aug 24, 2020

Hi Chen,

That doesn't appear to be the case; it also occurs on new clusters with no VMs. The last run had no VMs in the cluster prior to testing and still only warmed up 1 of the 100 VMs deployed.

Prior to testing, I cleaned up all the VMs with the HCIBench script, unmounted and deleted the vSAN DG, did a trim on all disks, rebuilt the DG, and started the test. Not only were there no previous VMs, but the vSAN filesystem was new and all disks had been blanked.

Is there a cache (or cache file) somewhere where it remembers previously deployed VMs? If so, how can I clear it?

I could deploy HCIBench, configure it, clone it, and spin up a new instance each time, but it would be far easier to reuse one instance.

Thanks, Mick.

Aug 26, 2020

Do you have another email address? Outlook says it's invalid:

vsanperformance@vmware.com?
Remote Server returned '550 5.1.3 STOREDRV.Submit; invalid recipient address'

Aug 26, 2020