Summary
As machine learning becomes widely used in enterprises, models are trained on big data and inference services go to production either in the cloud or on the edge.
On the edge:
- Edge devices have limited resources, space, and power supply
- Edge servers cost much more than edge devices
- Hardware accelerators on the edge are heterogeneous in architecture and vary in interfaces and performance
In the cloud:
- The accelerator market is dominated by Nvidia GPUs
- Other options include AMD GPUs, Intel Habana Goya/Altera FPGAs, AWS Inferentia, Xilinx FPGAs, etc.
- There is no common inference interface that spans cloud and edge
- Tying services to specific hardware accelerators or clouds leads to new vendor lock-in
Project Supernova builds a common machine learning inference service framework by enabling machine learning inference accelerators across edge endpoint devices, edge systems, and the cloud, with or without hardware accelerators.
- Micro-service based architecture with a RESTful API (see the sketch after this list)
- Support heterogeneous system architectures from leading vendors
- Support accelerator compilers to native code
- Neutral to ML training framework file formats
- Works on both edge devices and in the cloud
- Support Xilinx Cloud FPGA
- Hardware CPU support:
- x86-64, ARM64
- Hardware accelerator support:
- Intel VPU, Google Edge TPU, Nvidia GPU, AMD GPU
- Software
- Inference toolkit support: OpenVINO, TensorRT & TensorFlow Lite
- Training framework data formats: TensorFlow, Caffe, ONNX, MXNet
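To make the RESTful micro-service interface concrete, here is a minimal Python sketch of submitting an image to a locally deployed inference service. The endpoint URL, port, multipart field name, and model selector are hypothetical assumptions for illustration, not the Fling's documented API; consult the bundled instructions for the actual contract.

```python
# Minimal sketch: call a REST-based inference micro-service with an image.
# NOTE: the endpoint path, port, field names, and model name below are
# assumptions for illustration only, not Supernova's documented API.
import requests

SERVICE_URL = "http://localhost:8080/v1/inference"  # hypothetical endpoint

def classify_image(image_path: str) -> dict:
    """POST an image to the inference service and return the parsed JSON result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            SERVICE_URL,
            files={"image": f},              # assumed multipart field name
            data={"model": "mobilenet_v2"},  # assumed model selector
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(classify_image("sample.jpg"))
```

Because the service fronts heterogeneous accelerators behind the same HTTP interface, a client like this stays unchanged whether the backend runs on a plain CPU, an Intel VPU, a Google Edge TPU, or a GPU.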
Requirements
Common computing platforms, including most resource-constrained edge systems, PCs, and servers, where Linux/Docker can be deployed.
Instructions
Changelog
Version 1.2 Update
- Support Xilinx Cloud FPGA
Version 1.1 Update
- Support Bitfusion
- K8S and docker-compose deployment
Version 1.0 Update
Compared to 0.0.1, this release supports:
- New HW accelerators - AMD GPU + Xilinx FPGA
- CPU acceleration with OpenVINO based on AVX/SSE
- Basic K8S deployment
- Versatile API set
- vSphere Bitfusion support
- More use cases, such as facial mask detection
Contributors
Similar Flings
No similar flings found. Check these out instead...

Horizon View Persona Management Share Validation Tool
The Horizon View Persona Management Share Validation Tool is a command-line utility that analyzes user profiles and CIFS shares used by Persona Management to ensure minimum security requirements are met. Persona depends on two CIFS shares to function: the central profile store and the redirected folder share.

vSAN Performance Monitor
The vSAN performance monitor is a monitoring and visualization tool based on vSAN Performance metrics.

Resource-Efficient Supervised Anomaly Detection Classifier
Resource-Efficient Supervised Anomaly Detection Classifier is a scikit-learn classifier for resource-efficient anomaly detection that augments either Random-Forest or XGBoost classifiers to perform well in a resource-constrained setting.

VMware Modified Enhanced SCAP Content Editor
VMware Modified Enhanced SCAP Content Editor is an updated version of the Enhanced SCAP Content Editor tool by G2, Inc., and is an open source project, vmware-scap-edit, on GitHub.

DRS Entitlement Viewer
DRS Entitlement Viewer is installed as a plugin to the vSphere client. It is currently only supported for the HTML5 based vSphere client.

IOBlazer
IOBlazer is a multi-platform storage stack micro-benchmark. IOBlazer runs on Linux, Windows and OSX and it is capable of generating a highly customizable workload.