Summary
Hello ESXi-Arm Fling participants!
Over the past several years, you've seen us demonstrate our virtualization technology on the Arm platform across several use cases: everything from running mission-critical workloads on a windmill, to running on SmartNICs, to running on AWS Graviton in the cloud. We realized that the resilient platform created for the datacenter can be equally valuable in non-traditional environments. We've learned a lot from exploratory discussions with customers and Arm silicon partners.
Now we'd like to give our customers a chance to evaluate this technology in their own environments. This evaluation program is for the enterprise architects who are considering the viability of virtualizing Arm workloads, for the test/dev team looking for a way to spin up Arm environments, and for the tinkerers who simply want to explore running ESXi-Arm in a small form factor Edge environment. We're interested in understanding which features you find most valuable and how you want to deploy this technology.
To get started, head over to the Requirements tab for the list of recommended hardware and the installation instructions.
Contact Us:
We’d love to hear more about your plans for ESXi on Arm. Please reach out to us at esxionarm@vmware.com with the following details to help us shape future releases:
- Company name
- Arm hardware platform of choice
- Scale of deployment (number of servers/VMs)
- Types of workloads and use cases
- vSphere features of interest (HA, DRS, NSX, vSAN, Tanzu, vCD, etc.)
You can also engage with us and the rest of the ESXi-Arm community through these additional resources:
- For all questions related to ESXi-Arm Fling, please use the Comments and/or Bug section of the Fling website.
- VMTN ESXi-Arm Community: https://communities.vmware.com/community/vmtn/vsphere/esxi/esxi-arm-fling
- Follow us on Twitter: @esxi_arm
- Chat with us on Slack: #esxi-arm-fling on VMware {code}

Requirements
- Datacenter:
- Ampere Computing eMAG-based systems from Avantek and Lenovo (HR330A, HR350A)
- Ampere Computing Altra-based systems from Avantek and other distributors (experimental, single socket only)
- Ampere Computing Altra-based shapes from Oracle Cloud Infrastructure (experimental)
- Arm Neoverse N1 System Development Platform
- HPE ProLiant RL300 Gen11
- Marvell OCTEON 10
- Near Edge:
- SolidRun Honeycomb LX2
- SolidRun MacchiatoBin or CN9132 EVB
- NVIDIA Jetson AGX Xavier Developer Kit (experimental)
- Far Edge:
- Raspberry Pi 4B - 4GB or 8GB model (8GB is HIGHLY recommended; a USB 3.0 device for ESXi/VMFS is also recommended)
- Raspberry Pi 400
- NVIDIA Jetson Xavier NX Developer Kit (experimental)
- LS1046A-based NXP Freeway
- LS1046A-based NXP RDB
- Socionext SynQuacer Developerbox
- PINE64 Quartz64 Model A
- Firefly Station M2 (4GB and 8GB models)
Supported vCenter Server: vCenter Server is not required, but for customers who wish to use vCenter Server (x86) to manage an ESXi-Arm host, this functionality is supported (see the scripting sketch after this list).
- vCenter Server Appliance (VCSA) 7.0 or newer is needed to manage an ESXi-Arm host
- vCenter Server Appliance (VCSA) 7.0c or 7.0d is required to enable vSphere HA and vSphere FT on an ESXi-Arm host. Please see the ESXi-Arm documentation for more detailed instructions
- vCenter Server Appliance (VCSA) 7.0 Update 1 or newer can be used, BUT vSphere DRS will not function (the following workaround should be applied). If you need vSphere DRS, please use VCSA 7.0c or 7.0d
- To access ESXi-Arm, you'll need to register for a MyVMware account. Registration is free - https://my.vmware.com/web/vmware/registration
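For those who want to manage a host programmatically, here is a minimal pyVmomi sketch that connects to a standalone ESXi-Arm host and lists its VMs. This is an illustration rather than an official workflow; the hostname and credentials are placeholders, not values from the Fling documentation.

```python
# Minimal sketch: connect to a standalone ESXi-Arm host and list its VMs.
# The host name and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # Fling hosts typically use self-signed certs

si = SmartConnect(host="esxi-arm-01.example.com",  # placeholder hostname
                  user="root",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Build a container view over all VMs and print name + power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```

The same calls work against a VCSA address instead of the host itself, since pyVmomi talks to both ESXi and vCenter endpoints.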
Documentation:
- Overall ESXi-Arm documentation and hardware-specific installation PDF guides can be downloaded from the ESXi-Arm Fling download page (select the drop-down)
- A reminder that the ESXi-Arm bits distributed under the VMware Fling program are not officially supported. Do not deploy this technical preview in any production capacity.
- The distributed ESXi-Arm bits match the vSphere 7.0 release. Everything not explicitly documented as working should be assumed not to work. Please check the documentation here and the official VMware ARM blog.
- Periodically we will publish updated bits to enable new features or fix bugs. While we hope to provide these in a timely manner, please note that our existing product responsibilities to customers take precedence over the Fling. Our responses to bug fixes or forum comments may be delayed.
- After installation, ESXi-Arm can be managed as a standalone host through the ESXi Host client or it can be managed by vCenter Server (running on x86). Please follow the ESXi on Arm Fling with vCenter tutorial and note the various restrictions and limitations. Do not use a production vCenter instance to manage your ESXi-Arm instance.
- The ESXi-Arm bits will expire 180 days after installation. You will need to reinstall the bits to reset the clock (see the quick example after this list).
- While ESXi-Arm supports many Arm ServerReady-like systems, ESXi-Arm Fling "officially" supports only a few chosen platforms. Please check the documentation here. Best-effort tutorials for some other systems may be published over at the official VMware ARM blog.
- If you will be installing ESXi-Arm on the Raspberry Pi 4B, we HIGHLY recommend the 8GB version. While 4GB is sufficient to boot, it leaves little room to run VMs. Spring for the 8GB if you can.
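As a quick illustration of the 180-day expiry noted above, here is a trivial Python calculation (the install date is a placeholder, not a release date):

```python
# Hypothetical example: when does a given install's 180-day window end?
from datetime import date, timedelta

install_date = date(2023, 6, 14)             # placeholder install date
expiry = install_date + timedelta(days=180)  # the Fling expires 180 days later
print(expiry)                                # -> 2023-12-11
```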
Changelog
- Fix SMP boot issues on some micro-architectures.
- Improve support for memory nodes at high physical addresses.
- Add support for three kinds of USB NIC adaptors:
- Realtek RTL8153 serial 1G USB Network Adaptor
- Realtek RTL8152 serial 100M USB Network Adaptor
- ASIX AX88179/AX88178A serial 1G USB Network Adaptor
- See this blog post for the complete list of USB Network Adaptors
Build 22346715
VMware-VMvisor-Installer-7.0.0-22346715.aarch64.iso
VMware-ESXi-7.0.0-22346715-depot.zip
Jun 14, 2023 - v1.13 Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases using either ESXi-Arm ISO or Offline Bundle
- Adds support for AHCI SATA controllers that do not support 64-bit addressing on systems with memory located above 4GB
- Fixes a PSOD on GIGABYTE’s Ampere Altra and Altra Max systems with ASM1164 based SATA HBAs when one or more SATA disks are present
- Virtual NVMe device data corruption fix
- Virtual UEFI ACPI tables now show only configured serial ports. An ACPI SPCR table is created for the first one found
- UEFI real-time clock (RTC) support is now enabled on Rockchip based systems
- Fixes a possible hang at shutdown on Rockchip based systems when using the onboard network interface
- Upgrades using image profiles with the Offline Bundle (zip) are now possible on all systems
- Fixes vVols connection failures
- High Availability for vCenter Server 8.0+ (See blog post for more details)
Build 21921575
VMware-VMvisor-Installer-7.0.0-21921575.aarch64.iso
VMware-ESXi-7.0.0-21921575-depot.zip
vmware-fdm-8.0.0-20519528.arm64.vib (VC 8.0 Build 20519528)
vmware-fdm-8.0.0-20920323.arm64.vib (VC 8.0a Build 20920323)
vmware-fdm-8.0.0-21216066.arm64.vib (VC 8.0b Build 21216066)
vmware-fdm-8.0.0-21457384.arm64.vib (VC 8.0c Build 21457384)
vmware-fdm-8.0.1-21560480.arm64.vib (VC 8.0u1 Build 21560480)
vmware-fdm-8.0.1-21815093.arm64.vib (VC 8.0u1a Build 21815093)
Mar 17, 2023 - v1.12 Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases using either ESXi-Arm ISO or Offline Bundle
- Virtualization Improvements
- Various fixes related to Arm SystemReady compliance for virtual hardware exposed to guests
- Compatibility fixes related to secure boot
- Host Support Improvements
- New platforms
- EXPERIMENTAL support for HPE ProLiant RL300 Gen11 servers
- EXPERIMENTAL support for Marvell OCTEON 10 based platforms
- NVME
- Support for NVMe on non-cache coherent PCIe root complexes (e.g. Rockchip RK3566 systems like Pine64 Quartz64 and Firefly Station M2)
- Add a workaround for devices with PCI vendor/device ID 126f:2263 (e.g. Patriot M.2 P300) that report non-unique EUI64/NGUID identifiers which prevented more than one disk from being detected on systems with multiple devices present
- When upgrading a system to 1.12 from a prior Fling release with one of these devices, datastores from the device will not be mounted by default. Please refer to this blog post on how to mount the volumes after the upgrade is complete
- Miscellaneous
- ESXi-Arm Offline Bundle (zip) now available
- Fixed cache size detection for Armv8.3+ based systems
- Relax processor speed uniformity checks for DVFS enabled systems
- Support additional PHY modes in the mvpp2 driver
- Fixed IPv6 LRO handling in the eqos driver
- Identify some new CPU models
- Ampere Altra-based systems may PSOD when AHCI disks are used
- In 1.11 we mentioned that the kernel included with the Ubuntu for Arm 22.04.1 LTS installer had an issue that prevented graphics from initializing properly. Ubuntu for Arm 22.04.2 LTS has since been released and includes a fix for this issue.
- FreeBSD 13.1-RELEASE has a known bug with PVSCSI support and large I/O requests. There are a few ways to work around this issue (see the example after this list):
- Upgrade to FreeBSD 13.2-RC1 or later, which includes a fix
- Set the tunable kern.maxphys="131072" to limit the maximum I/O request size
- Use AHCI instead of PVSCSI
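For instance, the kern.maxphys tunable from the second workaround can be set persistently in the guest's /boot/loader.conf; that file is the standard FreeBSD location for loader tunables, not a step quoted from the Fling docs:

```
# /boot/loader.conf inside the FreeBSD guest
kern.maxphys="131072"   # cap I/O request size to work around the PVSCSI bug
```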
- Support CPU accelerated crypto (e.g. NEON, Armv8 Cryptographic Extensions) for built-in ESX services
- Fixed ESXi xterm-256color terminfo. Terminal.app in macOS (or any modern terminal, on any OS) now properly renders esxtop
- Updated the virtual UEFI ROM to match the version used by VMware Fusion for Apple silicon
- Support for virtual HTTP boot
- Support for virtual TPM, virtual Secure Boot, and encrypted VMs
- Support for physical GICv4 systems
- Added VMware Tools for Windows
- Fixed issue with ixgben driver
- Ampere Altra-based systems may PSOD when AHCI disks are used
- Ubuntu 22.04 LTS installer graphics do not work. Please use Ubuntu 22.10
- Windows SVGA driver does not work and must not be installed (or use safe mode to uninstall the svga device)
- Upgrade from earlier ESXi-Arm 1.x Fling is now supported
- Support for Arm DEN0115 (PCIe config space access via firmware interface, tested with Raspberry Pi)
- Report L3 cache info for Ampere eMAG
- Minor improvements to non-cache coherent DMA support
- Raspberry Pi NIC (genet) statistics
- GOS: use VMXNET3 and PVSCSI as default for freebsd12
- Support for RK3566 SBCs (e.g. Quartz64)
- PCIe support (NVMe not supported at this time)
- EQOS (onboard) NIC support
- Fix missing barriers for Intel igbn NIC driver, improving stability
- Return zero for unknown sys_reg(3, 0, 0, x, y) accesses from VMs
- Telemetry reporting - Collect statistics on what kind of systems the Fling is being run on, to best gauge interest
- No PII is collected. The following items are collected:
- CPU info: core count, NUMA, manufacturer, etc.
- Firmware info: vendor, version
- Platform info: vendor, product, UUID, PCI device list
- ESXi-Arm info: version, patch level, product build
- The /bin/telemetry script runs on every boot and at 00:00 every Saturday (an illustrative sketch follows this list)
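The actual /bin/telemetry script ships with ESXi-Arm and its implementation is not published. Purely as an illustration of the data categories listed above, a rough Python equivalent on a generic Linux host might look like the following; the sysfs paths are Linux conventions and are assumptions, not ESXi internals:

```python
# Illustrative sketch only: approximate the documented telemetry categories
# (CPU, firmware, platform) with standard Python on a generic Linux host.
import json
import os
import platform

def read_sysfs(path):
    """Return the stripped contents of a sysfs node, or None if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

report = {
    "cpu": {
        "arch": platform.machine(),      # e.g. "aarch64"
        "core_count": os.cpu_count(),
    },
    "firmware": {
        "vendor": read_sysfs("/sys/class/dmi/id/bios_vendor"),
        "version": read_sysfs("/sys/class/dmi/id/bios_version"),
    },
    "platform": {
        "vendor": read_sysfs("/sys/class/dmi/id/sys_vendor"),
        "product": read_sysfs("/sys/class/dmi/id/product_name"),
    },
}

print(json.dumps(report, indent=2))
```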
- Experimental support for Marvell Octeon TX2 CN92xx/CN93xx/CN95xx/CN96xx/CN98xx platforms
- Improved support for PL011 UARTs
- VMM support for ID_AA64ISAR2_EL1, fixing VM crashes with newer Linux kernels (>= 5.17-rc2)
- PCIe Enhanced Allocation support
- Improvements to logging for PCIe
- Improvements to MSI virtualization
- ACPI fix to support OpenBSD guests
- Improved handling of ITS device ID width in implementations without indirect table support
- Improvements to VMkernel TLB handling
- Improvements to NUMA handling (Especially around error reporting)
- Experimental support for Pine64 Quartz64 board
- Support for VMware SVGA driver (compatible with Fusion on Apple silicon; e.g., fixes the Fedora F35 black screen issue)
- NUMA-aware VMM, improving performance for dual-socket Ampere Altra machines
- Improved compatibility for systems without an IORT
- Fix performance issues in newer Linux kernel guest OSes like Debian 10 and Photon 4
- Recognise CA55
- Improve TLBI handling in VMM/VMK
- Improve contention for atomic ops
- Experimental Support for Ampere Altra-based BM.Standard.A1.160 shapes from Oracle Cloud Infrastructure
- Experimental Support for Marvell Armada A8040 / Octeon TX2 CN9132 chipsets
- Experimental Support for Socionext SynQuacer Developerbox
- Minor VM performance improvement
- Support BCM2848 ACPI ID for the USB OTG port (affects newer UEFI firmware versions)
- Improved PMU virtualization
- Fix virtual AHCI support for some ACPI OSes
- Improve time virtualization
- Experimental support for NVIDIA Tegra Xavier AGX and NVIDIA Tegra Xavier NX (PCIe, USB, NVMe, SATA)
- Experimental support for 2P Ampere Altra-based servers (Mt. Jade)
- Improved VM performance for multi-socket Arm servers
- Fix virtual NVMe support in UEFI and some OSes
- Improve interrupt controller virtualization
- Improve virtualization performance
- Improve compatibility with newer guest OS linux kernels
- Address USB stability issues, especially with RTL8153-based USB NICs (a common chipset) and especially on Raspberry Pi and Tegra Xavier
- Updated documentation for ESXi-Arm Fling, Raspberry Pi, Ampere Altra, NVIDIA Xavier AGX & NVIDIA Xavier NX (See download for details)
- Improved hardware compatibility (various bug fixes/enhancements)
- Add experimental support for Ampere Altra (single-socket systems only) (please see Requirements for more details)
- ACPI support for virtual machines
- NVMe and PVSCSI boot support in vEFI
- Workaround for ISO boot on some Arm servers
- Address VMM crash with newer guest OSes and Neoverse N1-based systems
- Improved guest interrupt controller virtualization
- Improved (skeletal) PMU virtualization
- Improved big endian VM support
- UI: Disable datastore browsing when no datastores are present
- PSCI: Fix missing context_id argument for CPU_ON calls
- GICv2: Always enable SGIs, as GIC-500
- arm64: Support for big-endian guests
- Remove requirements/restrictions on initrd for UEFI-less VMs
- Fix for https://flings.vmware.com/esxi-arm-edition/bugs/1098 (PSOD adding to VDS)
- Support for Arm N1 SDP platform
- Support for VMs on Neoverse N1 CPU
- Pass-thru stability improvements to LS1046A and LX2160A platforms
- Fix for vCenter/DRS incorrect CPU usage
- Fix for VM crash when VM storage fills up
- Stability fix for non-coherent DMA device support
- Installer: tolerate RAM size within 4% of 4GB instead of 3.125% (for the otherwise unsupported RK3399 boards)
- Serial port handling improvements (for unsupported/unknown boards, to be a bit more resilient of firmware configuration errors)
- Documentation Updates:
- Moved and expanded the iSCSI doc for the Pi into the main ESXi-Arm Fling doc
- Added LS1046ARDB docs (including ref to it from main ESXi-Arm doc and Fling website)
- Fixed Ampere server name and links (it's HR330A/HR350A, not SR-something)
- Added Arm N1SDP document (including ref to it from main ESXi-Arm doc)
- Updated the list of guest OSes known to work with ESXi-Arm, including a new "Verified" section
- Updated the EEPROM update instructions in the Pi doc
Known Issues:
Build 21447677
VMware-VMvisor-Installer-7.0.0-21447677.aarch64.iso
VMware-ESXi-7.0.0-21447677-depot.zip
Oct 26, 2022 - v1.11 Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases
Known Issues:
Build 20693597
VMware-VMvisor-Installer-7.0.0-20693597.aarch64.iso
July 20, 2022 - v1.10 Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases
Build 20133114
VMware-VMvisor-Installer-7.0.0-20133114.aarch64.iso
March 31, 2022 - v1.9 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
Build 19546333
VMware-VMvisor-Installer-7.0.0-19546333.aarch64.iso
December 17, 2021 - v1.8 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
Build 19076756
VMware-VMvisor-Installer-7.0.0-19076756.aarch64.iso
December 7, 2021 - v1.7 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
Build 19025766
VMware-VMvisor-Installer-7.0.0-19025766.aarch64.iso
October 6, 2021 - v1.6 Note: This release does not contain a new ESXi-Arm build; it announces new hardware enablement. The previous ESXi-Arm build can be used with the hardware platforms mentioned below. For more information, please download the hardware-specific PDF guides.
August 6, 2021 - v1.5 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
June 15, 2021 - v1.4 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
April 02, 2021 - v1.3 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
November 30, 2020 - v1.2 Note: Upgrade is NOT possible; only fresh installation is supported. If you select the "Preserve VMFS" option, you can re-register your existing Virtual Machines.
October 22, 2020 - v1.1
October 06, 2020 - v1.0 (Initial Release)
Build 16966451
VMware-VMvisor-Installer-7.0.0-16966451.aarch64.iso