
ESXi Arm Edition

version 1.12 — March 17, 2023



Hello ESXi-Arm Fling participants!

Over the past several years, you've seen us demonstrate our virtualization technology on the Arm platform across several use cases: everything from running mission-critical workloads on a windmill, to running on the SmartNIC, to running on AWS Graviton in the cloud. We realized that the resilient platform created for the datacenter can be equally valuable in non-traditional environments. We've learned a lot from exploratory discussions with customers and Arm Silicon Partners.

Now we'd like to give our customers a chance to evaluate this technology in their own environments. This evaluation program is for the enterprise architects who are considering the viability of virtualizing Arm workloads, for the test/dev team looking for a way to spin up Arm environments, and for the tinkerers who simply want to explore running ESXi-Arm in a small form factor Edge environment. We're interested to understand what features you will find most valuable, and how you will want to deploy this technology.

To get started, head over to the Requirements tab for the list of recommended hardware to try out the bits and the installation instructions.

Contact Us:

We’d love to hear more about your plans for ESXi on Arm. Please reach out to us at esxionarm@vmware.com with the following details to help us shape future releases:

  • Company name
  • Arm hardware platform of choice
  • Scale of deployment (number of servers/VMs)
  • Types of workloads and use cases
  • vSphere features of interest (HA, DRS, NSX, vSAN, Tanzu, vCD, etc.)

You can also engage with us and the rest of the ESXi-Arm community with these additional resources:


Supported ESXi-Arm Hardware: If you are running the Fling on a system not on this list, please let us know!

Supported vCenter Server:
vCenter Server is not required, but for customers who wish to use vCenter Server (x86) to manage ESXi-Arm hosts, this functionality is supported.
  • vCenter Server Appliance (VCSA) 7.0 or newer is needed to manage an ESXi-Arm host
  • vCenter Server Appliance (VCSA) 7.0c or 7.0d is required to enable vSphere HA and vSphere FT on an ESXi-Arm host. Please see ESXi-Arm documentation for more detailed instructions
  • vCenter Server Appliance (VCSA) 7.0 Update 1 or newer can be used, BUT vSphere DRS will not function (the following workaround should be applied). If you need vSphere DRS, please use VCSA 7.0c or 7.0d
Download access:
Note: During registration, it's important that all requested information is filled in, including a valid first and last name and address information including the country. This information is required as part of our export compliance procedures. Incomplete information will result in access delays and an error message asking you to provide valid information such as your first and last name, address, and country.

  • Overall ESXi-Arm documentation and hardware-specific installation PDF guides can be downloaded from the ESXi-Arm Fling download page (select the drop-down)
Additional Information:
  • A reminder that the ESXi-Arm bits distributed under the VMware Fling program are not officially supported. Do not deploy this technical preview in any production capacity.
  • The distributed ESXi-Arm bits match the vSphere 7.0 release. Everything not explicitly documented as working should be assumed otherwise. Please check the documentation here and the official VMware ARM blog.
  • Periodically we will publish updated bits to enable new features or fix bugs. While we hope to provide these in a timely manner, please note that our existing product responsibilities to customers take precedence over the Fling. Our responses to bug fixes or forum comments may be delayed.
  • After installation, ESXi-Arm can be managed as a standalone host through the ESXi Host client or it can be managed by vCenter Server (running on x86). Please follow the ESXi on Arm Fling with vCenter tutorial and note the various restrictions and limitations. Do not use a production vCenter instance to manage your ESXi-Arm instance.
  • The ESXi-Arm bits will expire 180 days after installation. You will need to reinstall the bits to reset the clock.
  • While ESXi-Arm supports many Arm ServerReady-like systems, ESXi-Arm Fling "officially" supports only a few chosen platforms. Please check the documentation here. Best-effort tutorials for some other systems may be published over at the official VMware ARM blog.
  • If you will be installing ESXi-Arm on the Raspberry Pi 4B, we HIGHLY recommend the 8GB version. While 4GB is sufficient to boot, there's not much room left to run a VM. Spring for the 8GB if you can.
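Since the bits expire 180 days after installation, it helps to note the reinstall-by date up front. A minimal sketch (the install date below is a placeholder; substitute your own):

```python
from datetime import date, timedelta

# Hypothetical install date; substitute the date you installed the Fling.
install_date = date(2023, 3, 17)

# ESXi-Arm Fling bits expire 180 days after installation.
reinstall_by = install_date + timedelta(days=180)
print(reinstall_by)  # 2023-09-13
```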



Mar 17, 2023 - v1.12

Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases using either ESXi-Arm ISO or Offline Bundle
  • Virtualization Improvements
    • Various fixes related to Arm SystemReady compliance for virtual hardware exposed to guests
    • Compatibility fixes related to secure boot
  • Host Support Improvements
    • New platforms
      • EXPERIMENTAL support for HPE ProLiant RL300 Gen11 servers
      • EXPERIMENTAL support for Marvell OCTEON 10 based platforms
    • NVMe
      • Support for NVMe on non-cache coherent PCIe root complexes (e.g. Rockchip RK3566 systems like Pine64 Quartz64 and Firefly Station M2)
      • Add a workaround for devices with PCI vendor/device ID 126f:2263 (e.g. Patriot M.2 P300) that report non-unique EUI64/NGUID identifiers which prevented more than one disk from being detected on systems with multiple devices present
        • When upgrading a system to 1.12 from a prior Fling release with one of these devices, datastores from the device will not be mounted by default. Please refer to this blog post on how to mount the volumes after the upgrade is complete
    • Miscellaneous
      • ESXi-Arm Offline Bundle (zip) now available
      • Fixed cache size detection for Armv8.3+ based systems
      • Relax processor speed uniformity checks for DVFS enabled systems
      • Support additional PHY modes in the mvpp2 driver
      • Fixed IPv6 LRO handling in the eqos driver
      • Identify some new CPU models
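For the 126f:2263 NVMe workaround above, a hedged sketch of the kind of ESXi shell commands involved in confirming an affected device and remounting its volumes after upgrade (the referenced blog post is the authoritative procedure; the datastore label here is a placeholder):

```shell
# List PCI devices and look for the affected vendor/device ID (126f:2263).
esxcli hardware pci list | grep -i 2263

# List filesystems; volumes from the affected device may show as unmounted.
esxcli storage filesystem list

# Mount an unmounted VMFS volume by label ("datastore1" is an example).
esxcli storage filesystem mount -l datastore1
```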

    Known Issues:
    • Ampere Altra-based systems may PSOD when AHCI disks are used
    • In 1.11 we mentioned that the kernel included with the Ubuntu for Arm 22.04.1 LTS installer had an issue that prevented graphics from initializing properly. Ubuntu for Arm 22.04.2 LTS has since been released and includes a fix for this issue.
    • FreeBSD 13.1-RELEASE has a known bug with PVSCSI support and large I/O requests. There are a few ways to work around this issue:
      • Upgrade to FreeBSD 13.2-RC1 or later, which includes a fix
      • Set the tunable kern.maxphys="131072" to limit the maximum I/O request size
      • Use AHCI instead of PVSCSI
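The kern.maxphys workaround above is a loader tunable, so it goes in the guest's /boot/loader.conf and takes effect on the next boot (a sketch; 131072 bytes = 128 KiB):

```
# /boot/loader.conf inside the FreeBSD 13.1 guest
kern.maxphys="131072"
```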

          Build 21447677

    Oct 26, 2022 - v1.11

    Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases
    • Support CPU accelerated crypto (e.g. NEON, Armv8 Cryptographic Extensions) for built-in ESX services
    • Fixed ESXi xterm-256color terminfo. Terminal.app in macOS (or any modern terminal, on any OS) now properly renders esxtop
    • Updated the virtual UEFI ROM to match the version used by VMware Fusion for Apple silicon
    • Support for virtual HTTP boot
    • Support for virtual TPM, virtual Secure Boot, and encrypted VMs
    • Support for physical GICv4 systems
    • Added VMware Tools for Windows
    • Fixed issue with ixgben driver

    Known Issues:
    • Ampere Altra-based systems may PSOD when AHCI disks are used
    • Ubuntu 22.04 LTS installer graphics do not work. Please use Ubuntu 22.10
    • Windows SVGA driver does not work and must not be installed (or use safe mode to uninstall the svga device)

          Build 20693597

    July 20, 2022 - v1.10

    Note: Upgrade is NOW supported from earlier ESXi-Arm 1.x Fling releases
    • Upgrade from earlier ESXi-Arm 1.x Fling is now supported
    • Support for Arm DEN0115 (PCIe config space access via firmware interface, tested with Raspberry Pi)
    • Report L3 cache info for Ampere eMAG
    • Minor improvements to non-cache coherent DMA support
    • Raspberry Pi NIC (genet) statistics
    • GOS: use VMXNET3 and PVSCSI as defaults for freebsd12
    • Support for RK3566 SBCs (e.g. Quartz64)
      • PCIe support (NVMe not supported at this time)
      • EQOS (onboard) NIC support
    • Fix missing barriers for Intel igbn NIC driver, improving stability
    • Return zero for unknown sys_reg(3, 0, 0, x, y) accesses from VMs
    • Telemetry reporting - Collect statistics on what kind of systems the Fling is being run on, to best gauge interest
      • No PII is collected. Here are items collected:
        • CPU info: core count, NUMA, manufacturer, etc.
        • Firmware info: vendor, version
        • Platform info: vendor, product, UUID, PCI device list
        • ESXi-Arm info: version, patch level, product build
        • The /bin/telemetry script runs on every boot and at 00:00 every Saturday
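For reference, the Saturday schedule above corresponds to a cron entry like the following (illustrative only — the Fling sets up its own scheduling, and the on-boot run is separate):

```
# 00:00 every Saturday (day-of-week 6)
0 0 * * 6 /bin/telemetry
```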

          Build 20133114

    March 31, 2022 - v1.9

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • Experimental support for Marvell Octeon TX2 CN92xx/CN93xx/CN95xx/CN96xx/CN98xx platforms
    • Improved support for PL011 UARTs
    • VMM support for ID_AA64ISAR2_EL2, fixing VM crashes with newer Linux kernels (>= 5.17-rc2)
    • PCIe Enhanced Allocation support
    • Improvements to logging for PCIe
    • Improvements to MSI virtualization

          Build 19546333

    December 17, 2021 - v1.8

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • ACPI fix to support OpenBSD guests
    • Improved handling of ITS device ID width in implementations without indirect table support
    • Improvements to VMkernel TLB handling
    • Improvements to NUMA handling (Especially around error reporting)

          Build 19076756

    December 7, 2021 - v1.7

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • Experimental support for Pine64 Quartz64 board
    • Support for VMware SVGA driver (compatible with Fusion on Apple silicon; e.g., fixes the Fedora F35 black screen issue)
    • NUMA-aware VMM, improving performance for dual-socket Ampere Altra machines
    • Improved compatibility for systems without an IORT
    • Fix performance issues in newer Linux kernel guest OSes like Debian 10 and Photon 4
    • Recognise Cortex-A55
    • Improve TLBI handling in VMM/VMK
    • Improve contention for atomic ops

          Build 19025766

    October 6, 2021 - v1.6

    Note: This release does not contain a new ESXi-Arm build, it is to announce new hardware enablement. The previous ESXi-Arm build can be used with the mentioned hardware platforms below. For more information, please download the hardware specific PDF guides.
    • Experimental Support for Ampere Altra-based BM.Standard.A1.160 shapes from Oracle Cloud Infrastructure
    • Experimental Support for Marvell Armada A8040 / Octeon TX2 CN9132 chipsets
    • Experimental Support for Socionext SynQuacer Developerbox

    August 6, 2021 - v1.5

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • Minor VM performance improvement
    • Support BCM2848 ACPI ID for the USB OTG port (affects newer UEFI firmware versions)

          Build 18427252

    June 15, 2021 - v1.4

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • Improved PMU virtualization
    • Fix virtual AHCI support for some ACPI OSes
    • Improve time virtualization
    • Experimental support for NVIDIA Tegra Xavier AGX and NVIDIA Tegra Xavier NX (PCIe, USB, NVMe, SATA)
    • Experimental support for 2P Ampere Altra-based servers (Mt. Jade)
    • Improved VM performance for multi-socket Arm servers
    • Fix virtual NVMe support in UEFI and some OSes
    • Improve interrupt controller virtualization
    • Improve virtualization performance
    • Improve compatibility with newer guest OS linux kernels
    • Improve USB stability issues, especially with RTL8153-based USB NICs (a common chipset) and especially on Raspberry Pi and Tegra Xavier
    • Updated documentation for ESXi-Arm Fling, Raspberry Pi, Ampere Altra, NVIDIA Xavier AGX & NVIDIA Xavier NX (See download for details)

          Build 18175197

    April 02, 2021 - v1.3

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • Improved hardware compatibility (various bug fixes/enhancements)
    • Add support for Experimental Ampere Altra (single socket systems only) (please see Requirements for more details)
    • ACPI support for virtual machines
    • NVMe and PVSCSI boot support in vEFI
    • Workaround for ISO boot on some Arm servers
    • Address VMM crash with newer guest OSes and Neoverse N1-based systems
    • Improved guest interrupt controller virtualization
    • Improved (skeletal) PMU virtualization
    • Improved big endian VM support

          Build 17839012

    November 30, 2020 - v1.2

    Note: Upgrade is NOT possible, only fresh installation is supported. If you select "Preserve VMFS" option, you can re-register your existing Virtual Machines.
    • UI: Disable datastore browsing when no datastores are present
    • PSCI: Fix missing context_id argument for CPU_ON calls
    • GICv2: Always enable SGIs, as GIC-500
    • arm64: Support for big-endian guests
    • Remove requirements/restrictions on initrd for UEFI-less VMs

          Build 17230755

    October 22, 2020 - v1.1
    • Fix for https://flings.vmware.com/esxi-arm-edition/bugs/1098 (PSOD adding to VDS)
    • Support for Arm N1 SDP platform
    • Support for VMs on Neoverse N1 CPU
    • Pass-thru stability improvements to LS1046A and LX2160A platforms
    • Fix for vCenter/DRS incorrect CPU usage
    • Fix for VM crash when VM storage fills up
    • Stability fix for non-coherent DMA device support
    • Installer: tolerate RAM size within 4% of 4GB instead of 3.125% (for the otherwise unsupported RK3399 boards)
    • Serial port handling improvements (for unsupported/unknown boards, to be a bit more resilient of firmware configuration errors)
    • Documentation Updates:
      • Moved and expanded iSCSI doc for Pi doc to main ESXi-Arm Fling doc
      • Added LS1046ARDB docs (including ref to it from main ESXi-Arm doc and Fling website)
      • Fixed Ampere server name and links (it's HR330A/HR350A, not SR-something)
      • Added Arm N1SDP document (including ref to it from main ESXi-Arm doc)
      • Updated GuestOSes known to work with ESXi-Arm including new "Verified" section
      • Updated instruction to update EEPROM for Pi doc

          Build 17068872

    October 06, 2020 - v1.0 (Initial Release)

          Build 16966451
