
Resource-Efficient Supervised Anomaly Detection Classifier

version 1.0.0 — June 02, 2021


Summary

Resource-Efficient Supervised Anomaly Detection Classifier (RADE) is a scikit-learn classifier for anomaly detection in resource-constrained settings. It augments either a Random Forest or an XGBoost classifier, offering a reduced memory footprint and lower computation compared to standalone Random Forest and XGBoost.

The key idea behind Resource-Efficient Supervised Anomaly Detection Classifier is to first train a small model that is sufficient to correctly classify the majority of the queries. Then, using only subsets of the training data, it trains expert models for the fewer, harder cases where the small model is at high risk of making a classification mistake.
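The two-stage approach described above can be sketched with plain scikit-learn components. This is an illustrative approximation only, not RADE's actual algorithm or API: the confidence threshold of 0.9, the model sizes, and the routing rule below are assumptions made for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# An imbalanced toy dataset (about 5% "anomalies").
X, y = make_classification(n_samples=5000, n_features=4, n_informative=2,
                           n_redundant=0, weights=[0.95, 0.05], random_state=0)

# 1. Train a small "coarse" model on all the data.
coarse = RandomForestClassifier(n_estimators=5, max_depth=4, random_state=0)
coarse.fit(X, y)

# 2. Select the training samples the coarse model is unsure about
#    (0.9 is an illustrative threshold; RADE's actual criterion may differ).
hard = coarse.predict_proba(X).max(axis=1) < 0.9

# 3. Train a larger "expert" model only on the hard subset,
#    so the big model sees far less data than the full training set.
expert = RandomForestClassifier(n_estimators=50, random_state=0)
expert.fit(X[hard], y[hard])

def predict(X_new):
    """Route low-confidence queries from the coarse model to the expert."""
    p = coarse.predict_proba(X_new)
    out = p.argmax(axis=1)
    unsure = p.max(axis=1) < 0.9
    if unsure.any():
        out[unsure] = expert.predict(X_new[unsure])
    return out

print(predict(X[:10]))
```

Since most queries are answered by the small model alone, the average per-query cost stays close to that of the small model, while the expert recovers accuracy on the difficult minority.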

We are happy to help you integrate RADE into your use case.
If you would like our help, please contact us:
Yaniv Ben-Itzhak, ybenitzhak@vmware.com
Shay Vargaftik, shayv@vmware.com

Requirements

Prerequisites:

  • CMake 3.13 or higher
  • numpy
  • pandas
  • sklearn
  • xgboost

Instructions

Prerequisites installation:

- Install CMake 3.13 or higher.

- The python packages can be installed manually or by running:

   pip3 install -r requirements.txt


Usage:

- This classifier is implemented as a scikit-learn compatible classifier.

- example_program.py contains an example that compares RADE to Random Forest and XGBoost.

- Another example of using RADE:

    from rade_classifier import RadeClassifier
    from sklearn.datasets import make_classification
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000000, n_features=4,
                               n_informative=2, n_redundant=0,
                               random_state=0, shuffle=False, weights=[0.99, 0.01])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    clf = RadeClassifier()
    clf.fit(X_train, y_train)
    y_predicted = clf.predict(X_test)
    print(classification_report(y_test, y_predicted, digits=5))

Changelog

Version 1.0.0

  • Added OSL file

Similar Flings

No similar flings found. Check these out instead...
Feb 14, 2014

Migrate to View (MTV)

version 1.0

Migrate to View (MTV) enables seamless migration of vSphere virtual machines (non-VDI) into View Connection Broker, maintaining the user persistence onto the virtual machines. By moving to View, more features can be integrated and leveraged. Additionally, administrative tasks can better maintain virtual machines and control user policy.

Sep 16, 2020

vRealize Network Insight and HCX Integration

version 1.0

This integration script between vRealize Network Insight (vRNI) and VMware HCX, allows you to streamline the application migration process.

Aug 26, 2013

Proactive DRS

version 1.0

Proactive DRS was the winning entry in last year's 2012 Open Innovation Contest. We promised that we'd create a fling of the winning entry, and here it is!

Apr 19, 2021

Workspace ONE Access Migration Tool

version 1.0.0.24

Workspace ONE Access Migration Fling helps ease the migration of apps from one Access tenant to another (on-premise to SaaS or SaaS to SaaS).

Jun 07, 2022

Post Quantum Cryptography Test Client

version 1.0

This Fling tests Post Quantum Cryptographic ciphers.

Oct 26, 2020

Create your own VMware Fling

version 1.0

Learn how you can create your own Fling or update an existing Fling.
