
Deploy your AI/ML analytics pipeline in seconds, not months.

Swiss Cloud
Open Source
HOW IT WORKS
  • with omega|ml

     Straight from a Data Scientist's Jupyter Notebook (or batch job):

    1. om.models.put(model_or_pipeline, 'sales-prediction')
      => http PUT /api/model/sales-prediction/predict

    2. om.datasets.put(dataframe, 'sales-data')
      => http GET|PUT /api/dataset/sales-data/

    Any data scientist can do this in seconds. Imagine what your whole team can achieve in a month. And there is more!

  • without omega|ml

    1. Figure out how to deploy Python & dependencies in the cloud

    2. Figure out how to deploy data pipelines efficiently

    3. Figure out how to deploy models efficiently

    4. Serialize models to files, store them in the cloud (which one?)

    5. Develop a backend to serve data & models

    6. Add a REST API

    7. Add a security layer

    8. Scale up (figure out how to parallelize model training & prediction)

    9. Repackage models for scaled deployment

    10. Train new model versions & redeploy (manage downtime)

    11. Figure out how to run re-training on a schedule, in the cloud

    12. Write code to monitor & log model performance

    13. Write a visualization UI to track model performance

    14. The list goes on - and we have not even talked about technology choices yet

    In short, you need a whole team to do this, and it will take months.

WHAT IS INCLUDED
  • Fully installed, customizable Python data science distribution

  • Scalable compute cluster for model training & prediction

  • Scalable NoSQL database for any-size data sets

  • Instant REST API for data, models, reports

  • Secure Jupyter Notebooks ready for collaboration

  • Notebook scheduling for controlled batch execution

In addition you get:

  • Jupyter Notebook publishing as reports and presentations

  • Instant Plotly Dashboard deployment 

  • Mini batch framework for streaming and IoT data (*)

  • Run on Apache Spark or Anaconda Distributed cluster (*)

  • Deployment integration for any cloud backend (*)

  • Enterprise-grade security

(*) chat to learn more

Designed for Scalability & Productivity

  • Scalable, reliable Open Source components

  • Leveraging MongoDB and RabbitMQ

  • Extensible plug-in architecture

FROM YOUR R&D LAB TO PRODUCTION

omega|ml is your one-stop hub to build, productize and launch your AI/ML project

Innovate

More and much faster:

Data Scientists continue working with the Python tools they trust & love.

Working right out of Jupyter Notebook or any other IDE, omega|ml does not stand in your way. Yet it is always ready to deploy and collaborate on datasets and models.

All it takes is a single line of code.

Collaborate

Collaborate easily:

Ever wondered where to store all those .csv files? How to share your notebooks? How to persist and deploy your models?

Sure there are ways. But they are all complicated.

omega|ml provides collaboration out of the box, for datasets, models, pipelines and applications.

Productize

Launch your app today:

Want to integrate your datasets and models into an application? Don't waste weeks or months building your own. omega|ml is ready in minutes.

omega|ml publishes datasets, models and dash apps with a single line of code. Once published, you get nice, ready-to-use REST API and app URLs. Scheduled data pipelines included.

Scale

Leverage Swiss cloud power:

The built-in compute cluster provides instant, no-hassle, scalable model training and prediction. Support for Dask Distributed, Hadoop and Apache Spark is available free of charge.

As a private or public IaaS/PaaS provider, deploy omega|ml Enterprise Edition to offer your clients a scalable Data Science and ML Platform as a Service.

WHY OMEGA|ML  

  1. Flexible - Leverage Python machine learning models & pipelines in any application, straight from our easy-to-use REST API

  2. Fast - Collaborate instantly on any data science project, using the tools you love (e.g. Python, scikit-learn, Jupyter Notebook)

  3. Scalable - Scale model training and prediction from any client, applying the power of the built-in compute cluster

  4. Cost-effective - Process large-scale datasets at a fraction of the cost of other solutions (no need to run Apache Hadoop or Spark)

  5. No vendor lock-in - Our fully open source core and support for any Kubernetes cloud mean you can deploy anywhere.

  6. Secure and independent - Our compute center in Switzerland meets all your data privacy and security requirements.

BUILT-IN, NO-HASSLE DATA & MODEL PERSISTENCE

# transparently store Pandas Series and DataFrames or any Python object

om.datasets.put(df, 'stats')

om.datasets.get('stats', sales__gte=100)

# transparently store and get models 

clf = LogisticRegression()

om.models.put(clf, 'forecast')

clf = om.models.get('forecast')

# run and scale models directly on the integrated Python or Spark compute cluster

om.runtime.model('forecast').fit('stats[^sales]', 'stats[sales]')

om.runtime.model('forecast').predict('stats')

om.runtime.model('forecast').gridsearch(X, Y)

# use the REST API to store and retrieve data, run predictions

requests.put('/v1/dataset/stats', json={...})

requests.get('/v1/dataset/stats?sales__gte=100')

requests.put('/v1/model/forecast', json={...})
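The double-underscore filters used above (e.g. sales__gte=100) follow the familiar Django-style keyword convention. As an illustration only, here is a minimal sketch of how such keyword filters could map onto a MongoDB query document; the operator table and the function name kwargs_to_query are hypothetical, not part of omega|ml's actual internals:

```python
# Illustrative sketch: translate Django-style field__op keywords
# into a MongoDB-style query document. OPS and kwargs_to_query
# are hypothetical names, not the omega|ml API.
OPS = {'gte': '$gte', 'gt': '$gt', 'lte': '$lte', 'lt': '$lt',
       'ne': '$ne', 'in': '$in'}

def kwargs_to_query(**filters):
    """Translate field__op=value keywords into a MongoDB filter dict."""
    query = {}
    for key, value in filters.items():
        field, _, op = key.partition('__')
        # a bare field name means an equality match
        query[field] = {OPS[op]: value} if op else value
    return query

print(kwargs_to_query(sales__gte=100))  # {'sales': {'$gte': 100}}
```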

WORKS WITH ANY MACHINE LEARNING FRAMEWORK

# scikit-learn and SparkML supported out of the box, more to follow

# -- any scikit learn model or pipeline

pipl = Pipeline([...])

om.models.put(pipl, 'forecast')

om.runtime.model('forecast').predict('datax')

# -- create spark models from specification

om.models.put('pyspark.mllib.clustering.KMeans', 'kmeans', params=dict(k=8))

om.runtime.model('kmeans').fit('datax')

# -- store a spark model as instantiated in your spark context

kmSpark = KMeans(k=8)

om.models.put(kmSpark, 'kmeans')

om.runtime.model('kmeans').fit('datax')
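Creating a model from a specification string, as in the KMeans example above, boils down to dynamic class resolution. A minimal sketch of the idea, using a Python standard-library class as a stand-in for a model class; model_from_spec is a hypothetical helper, not omega|ml's API:

```python
import importlib

def model_from_spec(dotted_path, params=None):
    """Resolve a dotted class path and instantiate it with params."""
    module_path, _, class_name = dotted_path.rpartition('.')
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(**(params or {}))

# stand-in for e.g. 'pyspark.mllib.clustering.KMeans' with params=dict(k=8)
frac = model_from_spec('fractions.Fraction',
                       params=dict(numerator=1, denominator=2))
print(frac)  # 1/2
```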

SCALE TO ANY NUMBER OF MODELS & DATASETS LARGER THAN MEMORY

# Out-of-Core DataFrame is built in 

# -- get a lazy instance of the data to process

mdf = om.datasets.get('5yrsales', lazy=True)

# -- run aggregations

mdf.groupby(['year', 'month']).sales.sum()

# -- access rows by index

mdf.loc['2012'].groupby('month').sales.sum()

# -- apply standard data conversions

mdf['sales'].apply(lambda sales: sales / 1000)

mdf.apply(lambda v: v['date'].dt.week)

# -- calculate percentiles, covariance and correlation

mdf.quantile([.1, .9]).value

mdf.cov().value

mdf.corr().value
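Conceptually, out-of-core aggregation computes partial results per chunk and then combines them, so no more than one chunk needs to fit in memory at a time. A plain-pandas sketch of that idea (not omega|ml's actual implementation, which evaluates lazily against MongoDB):

```python
import pandas as pd
import numpy as np

# toy data: two years of monthly sales
df = pd.DataFrame({
    'year': np.repeat([2011, 2012], 6),
    'month': list(range(1, 7)) * 2,
    'sales': range(12),
})

# process in chunks of 4 rows, then combine the partial sums
chunks = (df.iloc[i:i + 4] for i in range(0, len(df), 4))
partials = [c.groupby(['year', 'month']).sales.sum() for c in chunks]
result = pd.concat(partials).groupby(level=['year', 'month']).sum()

# the chunked result matches the in-memory aggregation
assert result.equals(df.groupby(['year', 'month']).sales.sum())
```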

Extensible Architecture: Storage & Compute Backends, DataFrame Operations

omega|ml comes with batteries included. New requirements are not a problem, however: alternate data sources or sinks and data pipelines can easily be added.

The following extension points are available:

  • Storage backends - extend what objects can be stored and retrieved

  • Compute backends - run any omega|ml-stored model through the same API

  • Compute & DataFrame mixins - add processing options such as AutoML or model visualization

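To make the plug-in idea concrete, here is a hypothetical sketch of what a minimal storage-backend extension might look like; the class and method names are illustrative, not the actual omega|ml extension API:

```python
class DictBackend:
    """Hypothetical storage backend for plain Python dicts.

    A real backend would persist to MongoDB; this in-memory
    stand-in just illustrates the plug-in contract: declare a
    KIND, say which objects you support, and implement put/get.
    """
    KIND = 'python.dict'
    _store = {}

    @classmethod
    def supports(cls, obj, name=None):
        # the registry would call this to pick a backend for obj
        return isinstance(obj, dict)

    def put(self, obj, name):
        self._store[name] = dict(obj)
        return {'name': name, 'kind': self.KIND}

    def get(self, name):
        return self._store[name]

backend = DictBackend()
backend.put({'sales': 100}, 'kpis')
print(backend.get('kpis'))  # {'sales': 100}
```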
Any Cloud - Data Science Platform As A Service, On Premise or Hybrid Cloud 
Grow Your Business
Hello, Cloud Provider

There is untapped growth potential. All globally leading cloud providers offer their version of a ready-to-go Data Science and Machine Learning service - too complex for most of their local competitors.

Get competitive today. With omega|ml you can offer your clients even better convenience for processing their AI and machine learning workloads while keeping all data storage local. No lock-in. Your data center. Ready today.

Why act now? As customers of all sizes look to leverage AI and machine learning, IaaS and PaaS providers risk losing valuable growth potential for their compute and storage resources to the incumbent large-scale providers. Don't let this happen to you - act today.

Thanks to its unique design based on widely used, state-of-the-art components in the distributed computing stack, omega|ml is easily deployed to any cloud. It works best when your cloud supports Docker; however, bare metal or any other container technology that supports Python, RabbitMQ and MongoDB will work.

With its Open Source core, omega|ml is readily deployable by any DevOps team familiar with Docker.

For on-premise deployments or if you need enterprise-ready security and do not want to manage the complexity of a large-scale data & compute cluster, omega|ml Enterprise Edition is available as a service or on-premise. Bring Your Own Cloud or subscribe to our compute capacity.

Setup is a breeze: omega|ml EE is already configured to support any Kubernetes cloud - simply add an omega|ml cluster management node and we take care of the rest. Deployed within minutes, JupyterHub and a serverless script/lambda service are then ready to use, providing notebooks for straightforward team collaboration and interactive computing at scale. All while enabling data storage and any packaged data science algorithm to run from the same easy-to-use API as shown above.

Fun fact: unlike most vendors, we do not license by number or size of compute nodes - with unlimited compute nodes, only your imagination limits what you can achieve.