Summary
As machine learning becomes widely used in enterprises, models are trained on big data and inference services go into production either in the cloud or on the edge.
On the edge
- Edge devices have limited resources, space and power supply
- Edge servers cost much more than edge devices
- Hardware accelerators on the edge are heterogeneous in architecture and vary in interfaces and performance
In the cloud
- The accelerator market is dominated by Nvidia GPUs
- Other options include AMD GPUs, Intel Habana Goya, Intel/Altera FPGAs, AWS Inferentia, Xilinx FPGAs, etc.
- No common inference interface spans from cloud to edge
- Dependence on specific hardware accelerators or clouds leads to new vendor lock-in
Project Supernova aims to build a common machine learning inference service framework by enabling machine learning inference accelerators across edge endpoint devices, edge systems and the cloud, with or without hardware accelerators.
- Microservice-based architecture with RESTful APIs (a request sketch follows this list)
- Supports heterogeneous system architectures from leading vendors
- Supports accelerator compilers that compile models to native code
- Neutral to ML training framework file formats
- Works on both edge devices and in the cloud
- Supports Xilinx Cloud FPGA
- Hardware CPU support: x86-64, ARM64
- Hardware accelerator support: Intel VPU, Google Edge TPU, Nvidia GPU, AMD GPU
- Software
  - Inference toolkit support: OpenVINO, TensorRT & TensorFlow Lite
  - Training framework data formats: TensorFlow, Caffe, ONNX, MXNet
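As an illustration of the RESTful interface, here is a minimal client sketch. The host, port, endpoint path and JSON fields are assumptions for illustration only, not the actual Supernova API.

```python
# Minimal sketch of calling a RESTful inference endpoint from Python.
# The URL, endpoint path, and payload fields below are hypothetical.
import base64
import requests

with open("sample.jpg", "rb") as f:
    payload = {
        "model": "mobilenet_v2",                      # hypothetical model name
        "data": base64.b64encode(f.read()).decode(),  # image bytes encoded for JSON transport
    }

# Hypothetical endpoint exposed by the inference microservice
resp = requests.post("http://edge-host:8080/api/v1/inference", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. predicted labels and confidence scores
```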
Requirements
Any common computing platform that can run Linux/Docker, from resource-constrained edge systems to PCs and servers.
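For example, on any such Linux/Docker host the service could be started programmatically with the Docker SDK for Python. This is a minimal sketch; the image name and port mapping are placeholders, not the Fling's published artifacts.

```python
# Minimal sketch: start an inference service container on a Linux/Docker host.
# The image name and port mapping are hypothetical placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "supernova/inference-service:latest",  # hypothetical image name
    detach=True,
    ports={"8080/tcp": 8080},              # expose the REST API on the host
)
print(container.short_id)
```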
Instructions
Changelog
Version 1.2 Update
- Support Xilinx Cloud FPGA
Version 1.1 Update
- Support Bitfusion
- K8S and docker-compose deployment
Version 1.0 Update
Compared to 0.0.1, this release supports:
- New HW accelerators - AMD GPU + Xilinx FPGA
- CPU acceleration with OpenVINO based on AVX/SSE
- Basic K8S deployment
- Versatile APIs
- vSphere Bitfusion support
- More use cases, such as facial mask detection
Contributors