Apr 23, 2021

I am confused. I downloaded the zip file; now what? Do I copy it to the ESXi host, or do I need to deploy a VM?

Apr 24, 2021

Please unzip it on a Linux OS and read the README file; it contains guidance.

Apr 21, 2021

Trying to setup on a different node and hitting this issue:
2021-04-21 14:54:27,401 - sperf.py:50 - INFO - Copy id_rsa.pub to the host.
/bin/sh: 1: sshpass: not found
2021-04-21 14:54:27,403 - sperf.py:53 - ERROR - Setup automation failed on the host.

Apr 21, 2021

sshpass is required by this tool, so you may need to install it on your Linux machine.
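As a quick preflight, you can verify the binary is on PATH before the automation tries to use it. This helper is hypothetical, not part of sperf.py:

```python
import shutil

def require_tool(name):
    """Return the absolute path of *name*, or raise with install guidance."""
    path = shutil.which(name)
    if path is None:
        raise RuntimeError(
            f"{name} not found on PATH; install it first "
            "(e.g. 'apt-get install sshpass' on Debian/Ubuntu)"
        )
    return path
```

Calling `require_tool("sshpass")` up front turns the cryptic `/bin/sh: 1: sshpass: not found` into an actionable error.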

Apr 22, 2021

Thanks. I did that, but now when running I'm hitting this known issue:
Opening OVF source: ./ovftemplates/sperfVMv2.ovf
Error: Cannot parse locator: vi://root:LcYaykL?UKjuhxKUX7(G@10.169.112.43
Warning:
- No manifest file found.
Completed with errors
Traceback (most recent call last):
File "./sperf.py", line 484, in <module>
sys.exit(main())
File "./sperf.py", line 468, in main
status = sperf.DeployVM(args.vmname, args.ovfurl, args.nosslverify)
File "./sperf.py", line 97, in DeployVM
logging.info("Deploy VM on %s Failed rc=%d." % (ds, rc))
UnboundLocalError: local variable 'rc' referenced before assignment

I did apply the change from the bug section.
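For reference, the UnboundLocalError in the traceback above is the classic pattern where a variable is only assigned on the success path, so a failure before that point makes the error-logging line itself crash. A minimal reduction of the failure mode, with a hedged fix (the function and message are mine, not sperf's actual code):

```python
import subprocess

def deploy_vm_sketch(cmd):
    # Assign a sentinel up front so later logging can't hit
    # "local variable 'rc' referenced before assignment".
    rc = None
    try:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        proc.communicate()
        rc = proc.returncode
    except OSError as exc:
        # Popen raised before rc was ever set; rc is still the sentinel.
        return f"deploy failed before completion: {exc} (rc={rc})"
    if rc != 0:
        return f"deploy failed rc={rc}"
    return "ok"
```

In the original code, initializing `rc` before the `try` (or logging the underlying exception instead of `rc`) would surface the real ovftool error rather than masking it.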

Apr 01, 2021

Hey guys,
Getting this error when trying to run on a standalone ESXi 6.7
Traceback (most recent call last):
File "./sperf.py", line 484, in <module>
sys.exit(main())
File "./sperf.py", line 468, in main
status = sperf.DeployVM(args.vmname, args.ovfurl, args.nosslverify)
File "./sperf.py", line 95, in DeployVM
proc.communicate(input=self._passwd)
File "/usr/lib/python3.6/subprocess.py", line 848, in communicate
self._stdin_write(input)
File "/usr/lib/python3.6/subprocess.py", line 801, in _stdin_write
self.stdin.write(input)
TypeError: a bytes-like object is required, not 'str'
test@testsrv:~/StoragePerformanceTester/sperf$ Opening OVF source: ./ovftemplates/sperfVMv2.ovf
The manifest validates
Opening VI target: vi://root@172.16.53.168:443/
Error: OVF Package is not supported by target:
- Line 26: Unsupported hardware family 'vmx-17'.
Completed with errors

Any ideas?
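For reference, the TypeError in the traceback above is a binary-pipe mismatch: subprocess pipes carry bytes unless text mode is requested, so `communicate(input=<str>)` raises. A minimal sketch of the two usual fixes, assuming sperf passes the password string into a binary-mode pipe (the example uses `cat` as a stand-in):

```python
import subprocess

passwd = "secret"  # stand-in for the password string sperf pipes in

# Fix 1: encode the string before handing it to a binary-mode pipe.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(input=passwd.encode())

# Fix 2: request text-mode pipes so str is accepted directly.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)  # or text=True on Python 3.7+
out2, _ = proc.communicate(input=passwd)
```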

Apr 01, 2021

How can I get it to run on ESXi 6.7U3?

Apr 01, 2021

Hi Michael,
Would you please share your build number, so I can look into your problem?
#vmware -v

Thanks

Apr 01, 2021

Hi,

[root@localhost:~] vmware -v
VMware ESXi 6.7.0 build-15160138

Apr 01, 2021

Here is a solution for the compatibility issue on ESXi 6.7.

1. Modify '<vssd:VirtualSystemType>vmx-17</vssd:VirtualSystemType>' to '<vssd:VirtualSystemType>vmx-14</vssd:VirtualSystemType>' in the file ovftemplates/sperfVMv2.ovf.

2. Then delete ovftemplates/sperfVMv2.mf.

It works on my local server.

For other ESXi versions, please check the correct virtual hardware version via the link below:
https://kb.vmware.com/s/article/1003746

Thanks
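The two manual steps above can be scripted; a minimal sketch (the helper name is mine; the file paths and `vmx-17`/`vmx-14` values are the ones from this thread):

```python
from pathlib import Path

def downgrade_ovf_hw_version(ovf_path, old="vmx-17", new="vmx-14"):
    """Rewrite the VirtualSystemType in the OVF, then drop the stale
    manifest: the .mf checksums no longer match once the OVF is edited."""
    ovf = Path(ovf_path)
    ovf.write_text(ovf.read_text().replace(old, new))
    mf = ovf.with_suffix(".mf")
    if mf.exists():
        mf.unlink()

# e.g. downgrade_ovf_hw_version("ovftemplates/sperfVMv2.ovf")
```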

Apr 02, 2021

Thanks, I'm past that, but now I'm getting this for each workload:

2021-04-02 09:11:33,008 - sperf.py:245 - INFO - Run workload on 172.16.4.172
2021-04-02 09:11:33,819 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-02 09:11:34,324 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_4krandread
fio: connect: Connection refused
fio: failed to connect to 172.16.4.172:8765
2021-04-02 09:11:34,330 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-02 09:11:34,833 - sperf.py:211 - ERROR - run ./fio/fio --output=/home/test/StoragePerformanceTester/sperf/results/5/rawdata/fio_8thread_4dev_4krandread_210402_091133_.txt --client=172.16.4.172 ./fio/fio_8thread_4dev_4krandread.fio meet error 1
2021-04-02 09:11:34,834 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_4krandread
2021-04-02 09:11:34,834 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-02 09:11:35,334 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_4kread
fio: connect: Connection refused
fio: failed to connect to 172.16.4.172:8765

Apr 02, 2021

This is caused by the 'fio server' having stopped in your VM. You can try restarting the VM named datastore_sperfVMv1 and running the test again.

If a restart does not solve it, please try to start the 'fio server' manually:

1. Log in to the VM with the password (ca$hc0w):
# ssh vmware@172.16.4.172
2. #fio --server

Then run the sperf.py test again.
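"Connection refused" simply means nothing is listening on fio's default server port (8765, as seen in the log above). A quick reachability check before rerunning the test (helper name is mine):

```python
import socket

def fio_server_reachable(host, port=8765, timeout=3):
    """Return True if something accepts TCP connections on fio's default
    server port; False means 'fio --server' is not running in the VM."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. fio_server_reachable("172.16.4.172")
```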

Jun 02, 2021

Hi,

I have the "fio: connect: Connection refused" problem too,
but I can't log in to the VM.
Is the username root?
I've tried the password with and without the parentheses.

Thank you.
Gero

Jun 02, 2021

User/passwd : sperf/ca$hc0w

Yes, you may need to reinstall fio.

Thanks,
Haitao

Jun 02, 2021

Unfortunately it does not work :/
No idea why.
Tried it via VMRC and PuTTY.
Always access denied...

Jun 02, 2021

Sorry, please try vmware/ca$hc0w

Jun 02, 2021

Yea. This worked! :)

Thank you!

Apr 02, 2021

Tried the restart and got the same error.
I SSH'd into the VM, and when running fio --server I get:
Illegal instruction

Apr 02, 2021

Hi Michael,
I guess the error is caused by the OVF export/import mismatch issue.

You can log in to the VM again and install fio-3.20 manually.
Here is the source link: https://github.com/axboe/fio/releases/tag/fio-3.20

My server has been shut down, so I can't share the detailed commands. It is an Ubuntu VM. I remember you may need to install make, gcc, etc. before you run ./configure && make && make install.

Thanks

Apr 02, 2021

I'm having some errors:
vmware@ubuntuMiniv1:~/fio-fio-2.20$ ./configure
compile test failed
Configure failed, check config.log and/or the above output

vmware@ubuntuMiniv1:~/fio-fio-2.20$ make
Makefile:20: config-host.mak: No such file or directory
FIO_VERSION = fio-2.20
Running configure for you...
compile test failed
Configure failed, check config.log and/or the above output
Makefile:16: recipe for target 'config-host.mak' failed
make: *** [config-host.mak] Error 1

Apr 05, 2021

Any other ideas/suggestions?

Apr 07, 2021

Thanks, it worked, but for each workload I'm getting:
- ERROR - run ./fio/fio --output=/home/test/StoragePerformanceTester/sperf/results/10/rawdata/fio_8thread_4dev_8krandwrite_210407_083948_.txt --client=172.16.53.110 ./fio/fio_8thread_4dev_8krandwrite.fio meet error 1
At the end it does tell me to check the results.

Apr 07, 2021

Please share the output of the command below:
./fio/fio --client=172.16.53.110 ./fio/fio_8thread_4dev_8krandwrite.fio

Apr 07, 2021

hostname=ubuntuMiniv1, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.20, flags=0
<ubuntuMiniv1> job5: (g=0): rw=randwrite, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=128
<ubuntuMiniv1> ...
<ubuntuMiniv1> job5: (g=0): rw=randwrite, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=128
<ubuntuMiniv1> ...
<ubuntuMiniv1> job5: (g=0): rw=randwrite, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=128
<ubuntuMiniv1> ...
<ubuntuMiniv1> job5: (g=0): rw=randwrite, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=128
<ubuntuMiniv1> ...
<ubuntuMiniv1> Starting 8 processes
<ubuntuMiniv1> fio: failed opening blockdev /dev/sdb for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sdb), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sdb), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sdb for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sdb), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sdb), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sde for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sde), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sde), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sde for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sde), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sde), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sdc for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sdc), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sdc), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sdc for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sdc), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sdc), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sdd for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sdd), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sdd), error=Permission denied
<ubuntuMiniv1> fio: failed opening blockdev /dev/sdd for size check
<ubuntuMiniv1> file:filesetup.c:713, func=open(/dev/sdd), error=Permission denied
<ubuntuMiniv1> fio: pid=0, err=13/file:filesetup.c:713, func=open(/dev/sdd), error=Permission denied
<ubuntuMiniv1>
client <172.16.53.110>: exited with error 8

Getting this for all workloads

Apr 07, 2021

You may need to check if the instance of "fio --server" has permission to open '/dev/sdd'.

Try to start 'fio --server' as root, or reboot the VM.
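The err=13 in the log above is EACCES from open(). A quick preflight inside the VM shows which devices the fio server's user cannot open; this helper is hypothetical, and the /dev/sdb through /dev/sde names are the ones from the log:

```python
import os

def check_device_access(devices):
    """Return the subset of *devices* the current user cannot open
    read/write -- the same EACCES (err=13) fio reports above."""
    return [d for d in devices if not os.access(d, os.R_OK | os.W_OK)]

# e.g. before starting 'fio --server' in the VM:
# blocked = check_device_access(["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])
# if blocked: print("run fio --server as root, or fix perms on:", blocked)
```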

Apr 07, 2021

2021-04-07 09:09:43,020 - sperf.py:147 - INFO - Got ip(172.16.53.110) from ds1_sperfVMv1.
2021-04-07 09:09:43,022 - sperf.py:245 - INFO - Run workload on 172.16.53.110
2021-04-07 09:09:43,837 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:09:44,347 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_4krandread
2021-04-07 09:10:45,533 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:10:46,038 - cpio.py:81 - INFO - Tracked 4122612 commands in this case.
2021-04-07 09:10:46,038 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_4krandread
2021-04-07 09:10:46,038 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:10:46,538 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_4kread
2021-04-07 09:11:47,416 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:11:47,930 - cpio.py:81 - INFO - Tracked 4140033 commands in this case.
2021-04-07 09:11:47,930 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_4kread
2021-04-07 09:11:47,930 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:11:48,437 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_4kwrite
2021-04-07 09:12:49,324 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:12:49,838 - cpio.py:81 - INFO - Tracked 4763107 commands in this case.
2021-04-07 09:12:49,838 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_4kwrite
2021-04-07 09:12:49,838 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:12:50,350 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_4krandwrite
2021-04-07 09:13:51,237 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:13:51,750 - cpio.py:81 - INFO - Tracked 1932365 commands in this case.
2021-04-07 09:13:51,750 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_4krandwrite
2021-04-07 09:13:52,568 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:13:53,076 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_8krandread
2021-04-07 09:14:54,459 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:14:54,969 - cpio.py:81 - INFO - Tracked 3464117 commands in this case.
2021-04-07 09:14:54,969 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_8krandread
2021-04-07 09:14:54,969 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:14:55,469 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_8kread
2021-04-07 09:15:56,460 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:15:56,968 - cpio.py:81 - INFO - Tracked 3377808 commands in this case.
2021-04-07 09:15:56,968 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_8kread
2021-04-07 09:15:56,968 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:15:57,481 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_8kwrite
2021-04-07 09:16:58,368 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:16:58,882 - cpio.py:81 - INFO - Tracked 3854567 commands in this case.
2021-04-07 09:16:58,882 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_8kwrite
2021-04-07 09:16:58,882 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:16:59,383 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_8krandwrite
2021-04-07 09:18:00,283 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:18:00,791 - cpio.py:81 - INFO - Tracked 1957670 commands in this case.
2021-04-07 09:18:00,791 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_8krandwrite
2021-04-07 09:18:01,592 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:18:02,102 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_64krandread
2021-04-07 09:19:03,252 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:19:03,762 - cpio.py:81 - INFO - Tracked 1231790 commands in this case.
2021-04-07 09:19:03,762 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_64krandread
2021-04-07 09:19:03,762 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:19:04,260 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_64kread
2021-04-07 09:20:05,132 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:20:05,642 - cpio.py:81 - INFO - Tracked 1251653 commands in this case.
2021-04-07 09:20:05,642 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_64kread
2021-04-07 09:20:05,642 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:20:06,141 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_64kwrite
2021-04-07 09:21:07,066 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:21:07,582 - cpio.py:81 - INFO - Tracked 931322 commands in this case.
2021-04-07 09:21:07,582 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_64kwrite
2021-04-07 09:21:07,582 - cpio.py:65 - INFO - Start to collect cpu cycles on 172.16.53.168
2021-04-07 09:21:08,093 - sperf.py:194 - INFO - Running workload - fio_8thread_4dev_64krandwrite
2021-04-07 09:22:09,118 - cpio.py:71 - INFO - End to collect cpu cycles on 172.16.53.168
2021-04-07 09:22:09,643 - cpio.py:81 - INFO - Tracked 783238 commands in this case.
2021-04-07 09:22:09,643 - sperf.py:212 - INFO - End workload - fio_8thread_4dev_64krandwrite
2021-04-07 09:22:09,644 - sperf.py:223 - INFO - Running workload - fio_latency_512read
2021-04-07 09:23:10,615 - sperf.py:234 - INFO - End workload - fio_latency_512read
2021-04-07 09:23:10,615 - sperf.py:223 - INFO - Running workload - fio_latency_4kread
2021-04-07 09:24:11,618 - sperf.py:234 - INFO - End workload - fio_latency_4kread
2021-04-07 09:24:11,618 - sperf.py:223 - INFO - Running workload - fio_latency_8kread
2021-04-07 09:25:12,475 - sperf.py:234 - INFO - End workload - fio_latency_8kread
2021-04-07 09:25:12,476 - sperf.py:223 - INFO - Running workload - fio_latency_512write
2021-04-07 09:26:13,346 - sperf.py:234 - INFO - End workload - fio_latency_512write
2021-04-07 09:26:13,346 - sperf.py:223 - INFO - Running workload - fio_latency_4kwrite
2021-04-07 09:27:14,214 - sperf.py:234 - INFO - End workload - fio_latency_4kwrite
2021-04-07 09:27:14,214 - sperf.py:223 - INFO - Running workload - fio_latency_8kwrite
2021-04-07 09:28:15,053 - sperf.py:234 - INFO - End workload - fio_latency_8kwrite
2021-04-07 09:28:15,055 - sperf.py:327 - INFO - [('4kread', 68.0), ('4krandread', 68.5), ('4kwrite', 79.4), ('4krandwrite', 32.2)]
2021-04-07 09:28:15,056 - sperf.py:337 - INFO - Create iopstasks1.svg.
2021-04-07 09:28:15,057 - sperf.py:327 - INFO - [('8kread', 56.3), ('8krandread', 57.3), ('8kwrite', 64.2), ('8krandwrite', 32.6)]
2021-04-07 09:28:15,057 - sperf.py:337 - INFO - Create iopstasks2.svg.
2021-04-07 09:28:15,058 - sperf.py:327 - INFO - [('64kread', 20.8), ('64krandread', 20.5), ('64kwrite', 15.5), ('64krandwrite', 13.0)]
2021-04-07 09:28:15,058 - sperf.py:337 - INFO - Create iopstasks3.svg.
2021-04-07 09:28:15,059 - sperf.py:402 - INFO - Create latency.svg.
Traceback (most recent call last):
File "./sperf.py", line 484, in <module>
sys.exit(main())
File "./sperf.py", line 480, in main
sperf.CreateSVGChart()
File "./sperf.py", line 411, in CreateSVGChart
svgcharts.extend(sorted(self.CreateCPIOCharts(outfile)))
File "./sperf.py", line 375, in CreateCPIOCharts
format_bar_value=lambda x: "{:.0f}cycles".format(x)
File "/home/test/StoragePerformanceTester/sperf/createBarCharts.py", line 45, in CreateBarChart
tick_size, min_value, max_value = get_tick_size(min_value, max_value, 8)
File "/home/test/StoragePerformanceTester/sperf/utils.py", line 69, in get_tick_size
unit_value = max(min_unit, pow(10, floor(log10(end_value - start_value) - 1)))
ValueError: math domain error

Apr 07, 2021

I'll work on this issue.
In the meantime, you can add '-c 0' to your sperf.py command to get past the test.
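For reference, the ValueError in the traceback comes from taking log10 of a non-positive span in utils.get_tick_size: when a chart's max value equals its min (flat or empty data), `log10(end_value - start_value)` hits the math domain error. A hedged sketch of the guard (the function name and fallback are mine, not sperf's actual fix):

```python
from math import floor, log10

def safe_log_unit(start_value, end_value, min_unit=1):
    """Guarded version of the failing expression in utils.get_tick_size:
    log10(end_value - start_value) raises ValueError whenever the span is
    zero or negative, e.g. when every data point in a chart is identical."""
    span = end_value - start_value
    if span <= 0:
        return min_unit  # degenerate range: fall back to the smallest unit
    return max(min_unit, pow(10, floor(log10(span) - 1)))
```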

Feb 01, 2021

Am I hitting the bug?

./sperf.py vmsrv01 -d cl01-sas
2021-02-01 15:27:39,411 - sperf.py:450 - INFO - Using primary log sperf.log.
2021-02-01 15:27:39,411 - sperf.py:460 - INFO - The test results will be put into /root/StoragePerformanceTester/sperf/results/4
Please enter the root password for vmsrv01:
2021-02-01 15:27:44,031 - sperf.py:50 - INFO - Copy id_rsa.pub to the host.
2021-02-01 15:27:44,302 - sperf.py:59 - INFO - Geting basic info on vmsrv01
2021-02-01 15:27:46,880 - sperf.py:72 - ERROR - Didn't find the right hardware info based on cl01-sas
2021-02-01 15:27:46,881 - sperf.py:110 - INFO - Check if the VM (cl01-sas_sperfVMv1) has been deployed.
2021-02-01 15:27:47,760 - sperf.py:119 - INFO - Host vmsrv01 doesn't have a VM named cl01-sas_sperfVMv1.
2021-02-01 15:27:47,761 - sperf.py:88 - WARNING - This connect is insecure. please try to add --nosslverify 0 in your tests.
Traceback (most recent call last):
File "./sperf.py", line 484, in <module>
sys.exit(main())
File "./sperf.py", line 468, in main
status = sperf.DeployVM(args.vmname, args.ovfurl, args.nosslverify)
File "./sperf.py", line 95, in DeployVM
proc.communicate(input=self._passwd)
File "/usr/lib/python3.8/subprocess.py", line 1009, in communicate
self._stdin_write(input)
File "/usr/lib/python3.8/subprocess.py", line 958, in _stdin_write
self.stdin.write(input)
TypeError: a bytes-like object is required, not 'str'
root@ubuntu-server:~/StoragePerformanceTester/sperf# Opening OVF source: ./ovftemplates/sperfVMv2.ovf
The manifest validates
Opening VI target: vi://root@192.168.88.41:443/
Deploying to VI: vi://root@192.168.88.41:443/
Transfer Completed
Powering on VM: cl01-sas_sperfVMv1
Task Completed
Received IP address: 192.168.99.173
Completed successfully

And here I'm dropped back to the prompt... What can I do to get this running?

Feb 02, 2021

Ah, I had to click on the bug to find the comment with the workaround. Thanks!

I'm past that error now, but I'm getting another error:

utils.py:18 - ERROR - Run 'vsish -pe get /storage/scsifw/devices//stats |grep commands' on vmsrv01 failed, error=1

Is this command executed on the ESXi host? On the host I get:

[root@vmsrv01:~] vsish -pe get /storage/scsifw/devices//stats
main():Python mode is deprecated and will be removed in future releases. You can use pyvsilib instead.
VSISHCmdGetInt():Get failed: Not found
[root@vmsrv01:~]