
Storage Performance Tester

version 1.1 — August 06, 2021


Summary

Storage Performance Tester is a one-click storage performance test tool that collects IOPS, latency, and CPU cycles per I/O for the ESXi storage stack. The tool automates all testing steps, including customized VM deployment, I/O workload runs, and storage performance analysis, and displays the performance metrics in multiple visualized graphical charts. The only thing users need to do is enter one command and wait for the performance report of their server.

This tool is designed to be developer-friendly, helping troubleshoot and identify storage performance issues. It can be used to validate the maximum performance of new storage hardware/drivers and of vSphere/vSAN setups. For more details, please check the guide located in the Instructions section.

Requirements

  • Python 3
  • sshpass
  • 2 GB of storage space
  • Linux environment (kernel version newer than 2.6.31)

Instructions

Using Storage Performance Tester

2.1. Obtaining and Preparing the Tool

The Storage Performance Tester can be obtained as a .zip file from VMware. Uncompress the zip file and follow the instructions below to check whether it can run in your environment.

  1. Check that the third-party tool, fio, works:
    #./fio --version
  2. Check that ovftool works:
    #./ovftool/ovftool --version
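The two preflight checks above can also be scripted. The following is a minimal Python sketch (check_tool is a hypothetical helper, not part of sperf; the relative paths assume you run it from the unpacked zip directory):

```python
# Hedged sketch of the preflight checks: run "<tool> --version" and
# report whether each binary works. check_tool is a hypothetical
# helper, not part of sperf.
import subprocess

def check_tool(path, flag="--version"):
    """Return the tool's version output, or None if it cannot be run."""
    try:
        result = subprocess.run([path, flag], capture_output=True,
                                text=True, timeout=10)
    except (OSError, subprocess.TimeoutExpired):
        return None
    return result.stdout.strip() if result.returncode == 0 else None

if __name__ == "__main__":
    # Paths assume the unpacked zip layout from the instructions.
    for tool in ("./fio", "./ovftool/ovftool"):
        print(tool, "->", check_tool(tool) or "NOT WORKING")
```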

2.2. Using Storage Performance Tester

A basic command looks like this:

#./sperf.py HOSTNAME -d DatastoreName

sperf.py asks for the root password of the HOST; after that, it runs the test automatically and returns the result in about 20 minutes (with the default configuration).

The command used to test multiple datastores on the HOST:

#./sperf.py HOST -d datastore1 -d datastore2 -d datastore3 -d datastore4
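For scripted runs, the argument list for such an invocation can be assembled programmatically. This is a hypothetical helper for illustration, not part of sperf itself:

```python
# Hypothetical helper, not part of sperf: build the sperf.py argument
# list for a host and any number of datastores.
def sperf_command(host, datastores, configfile=None):
    cmd = ["./sperf.py", host]
    for ds in datastores:
        cmd += ["-d", ds]          # one -d flag per datastore
    if configfile:
        cmd += ["-i", configfile]  # optional workload config file
    return cmd

print(" ".join(sperf_command("HOST", ["datastore1", "datastore2"])))
# -> ./sperf.py HOST -d datastore1 -d datastore2
```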

2.3. I/O Workloads Setup

Sperf controls the I/O workloads through a config file (--configfile/-i).

#config.ini is the default workload config file.
[iopstasks1]
fio_8thread_4dev_4krandread
fio_8thread_4dev_4kread
fio_8thread_4dev_4kwrite
fio_8thread_4dev_4krandwrite
[iopstasks2]
fio_8thread_4dev_8krandread
fio_8thread_4dev_8kread
fio_8thread_4dev_8kwrite
fio_8thread_4dev_8krandwrite
[iopstasks3]
fio_8thread_4dev_64krandread
fio_8thread_4dev_64kread
fio_8thread_4dev_64kwrite
fio_8thread_4dev_64krandwrite
[latencytasks]
fio_latency_512read
fio_latency_4kread
fio_latency_8kread
fio_latency_512write
fio_latency_4kwrite
fio_latency_8kwrite

The config file supports three section-name keywords: [iops*], [latency*], and [delay*].

[iops*] defines a group of IOPS fio scripts.

[latency*] defines a group of latency fio workloads.

[delay*] adds a time delay between sections.

You can write your own config files to define which kind of workloads to run in the test.

Example:

#cat config2.ini
[iopstask1]
#your fio script names from fioscripts.
fio_8thread_4dev_4krandread
[delaytime1]
#you can also add a delay between two workload sections.
1800
[iopstask2]
fio_8thread_4dev_8kread
[latencytasks]
fio_latency_512read
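The format above is plain INI with value-less keys, so a workload plan like this can be parsed with Python's standard configparser. A sketch (load_workloads is a hypothetical helper, not part of sperf):

```python
# Sketch: parse a sperf-style workload config into an ordered plan.
# [iops*]/[latency*] sections list fio script names; [delay*] sections
# hold a delay value. load_workloads is a hypothetical helper.
from configparser import ConfigParser

def load_workloads(text):
    cp = ConfigParser(allow_no_value=True)  # entries have no "=value"
    cp.read_string(text)
    plan = []
    for section in cp.sections():
        entries = list(cp[section])  # value-less keys, in file order
        if section.startswith("delay"):
            plan.append((section, [int(e) for e in entries]))
        elif section.startswith(("iops", "latency")):
            plan.append((section, entries))
    return plan

example = """\
[iopstask1]
fio_8thread_4dev_4krandread
[delaytime1]
1800
[latencytasks]
fio_latency_512read
"""
print(load_workloads(example))
```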

Storage Performance Tester Reporting

SPerf provides an intuitive way for customers to check performance metrics.

It is best to have an HTTP service in your environment to view the output.html files; if you don't have one, the steps below will help.

#cd results && nohup ./sperfhttp.sh &
The script above starts a simple HTTP service on your Linux machine on port 8000, so you can check all your test results at http://YOURLINUXIP:8000/
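If sperfhttp.sh is not handy, any static file server will do. The following Python stand-in (an assumption for illustration, not the Fling's own script) serves the results directory the same way:

```python
# Stand-in for sperfhttp.sh (an assumption, not the Fling's script):
# serve the results directory over HTTP on port 8000.
import http.server
import socketserver
from functools import partial

def make_results_server(directory="results", port=8000):
    """Build (but do not start) a static file server rooted at directory."""
    handler = partial(http.server.SimpleHTTPRequestHandler,
                      directory=directory)
    return socketserver.TCPServer(("", port), handler)

if __name__ == "__main__":
    with make_results_server() as httpd:
        print("Serving results on http://0.0.0.0:8000/")
        httpd.serve_forever()
```

Equivalently, `cd results && python3 -m http.server 8000` does the same thing using the standard library's CLI.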

In the results, you can also check the basic information of your target HOST.



Changelog

Added support for the vNVMe virtual adapter to sperf; added ESXi 6.7 support; bug fixes.

Similar Flings

No similar flings found. Check these out instead...
Feb 14, 2014

Migrate to View (MTV)

version 1.0

Migrate to View (MTV) enables seamless migration of vSphere virtual machines (non-VDI) into View Connection Broker, maintaining the user persistence onto the virtual machines. By moving to View, more features can be integrated and leveraged. Additionally, administrative tasks can better maintain virtual machines and control user policy.

Jun 26, 2019

IOBlazer

version 1.01

IOBlazer is a multi-platform storage stack micro-benchmark. IOBlazer runs on Linux, Windows and OSX and it is capable of generating a highly customizable workload.

Oct 07, 2021

Configurator Toolkit for Kubernetes

version 1.22.0.1

Configurator Toolkit for Kubernetes Fling is a command-line-based tool for authoring Kubernetes YAML files and performing basic Kubernetes administration tasks.

May 04, 2021

NSX Mobile

version 1.0.1

NSX Mobile brings the ease of monitoring the networking and security right from your phone.

Jun 30, 2022

Image-Quality

version 1.0

This Fling analyses a sequence of screenshots collected by a user to generate three metrics: frame-count, smoothness, and image quality for VDI or any video streaming application. See readmeFirst.PDF for instructions, and notes on how to interpret results.

Jun 02, 2021

Resource-Efficient Supervised Anomaly Detection Classifier

version 1.0.0

Resource-Efficient Supervised Anomaly Detection Classifier is a scikit-learn classifier for resource-efficient anomaly detection that augments either Random-Forest or XGBoost classifiers to perform well in a resource-constrained setting.
