Deploy your AI/ML analytics pipeline in seconds, not months.
HOW IT WORKS
Straight from a Data Scientist's Jupyter Notebook (or batch job):
=> http PUT /api/model/sales-prediction/predict
=> http GET|PUT /api/dataset/sales-data/
Any data scientist can do this in seconds. Imagine what your whole team can achieve in a month. Now compare that to building it all yourself:
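As a sketch of what a client-side call might look like, the snippet below builds the request against the endpoints shown above. The host name and the JSON payload shape are illustrative assumptions, not omega|ml specifics; the live call is shown commented out since it needs a running instance:

```python
import json

# hypothetical host; the endpoint paths mirror the calls shown above
base_url = "https://omegaml.example.com"
predict_url = f"{base_url}/api/model/sales-prediction/predict"
dataset_url = f"{base_url}/api/dataset/sales-data/"

# an illustrative JSON payload with one row of feature data
payload = json.dumps({"columns": ["units", "price"], "data": [[12, 9.99]]})

# to send it against a live instance:
# import urllib.request
# req = urllib.request.Request(predict_url, data=payload.encode(),
#                              method="PUT",
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
print(predict_url)
```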
Figure out how to deploy Python & dependencies in the cloud
Figure out how to deploy data pipelines efficiently
Figure out how to deploy models efficiently
Serialize models to file, store in cloud (which one?)
Develop a backend to serve data & models
Add a REST API
Add a security layer
Scale up (figure out how to parallelize model training & prediction)
Repackage models for scaled deployment
Train new model versions & redeploy (manage downtime)
Figure out how to run re-training on a schedule, in the cloud
Write code to monitor & log model performance
Write a visualization UI to track model performance
The list goes on - and we haven't even talked technology choices yet.
In short, you need a whole team to do this & it will take months.
WHAT IS INCLUDED
Fully installed, customizable Python data science distribution
Scalable compute cluster for model training & prediction
Scalable NoSQL database for any-size datasets
Instant REST API for data, models, reports
Secure Jupyter Notebooks ready for collaboration
Notebook scheduling for controlled batch execution
In addition you get:
Jupyter Notebook publishing as reports and presentations
Instant Plotly Dashboard deployment
Mini-batch framework for streaming and IoT data (*)
Run on Apache Spark or Anaconda Distributed cluster (*)
Deployment integration for any cloud backend (*)
(*) chat to learn more
Designed for Scalability & Productivity
Scalable, reliable Open Source components
Leveraging MongoDB and RabbitMQ
Extensible plug-in architecture
FROM YOUR R&D LAB TO PRODUCTION
omega|ml is your one-stop hub to build, productize and launch your AI/ML project
Do more, much faster:
Data Scientists continue working with the Python tools they trust & love.
Working right out of Jupyter Notebook or any other IDE, omega|ml does not stand in your way. Yet it is always ready to deploy and collaborate on datasets and models.
All it takes is a single line of code.
Ever wondered where to store all those .csv files? How to share your notebooks? How to persist and deploy your models?
Sure, there are ways. But they are all complicated.
omega|ml provides collaboration out of the box, for datasets, models, pipelines and applications.
Launch your app today:
Want to integrate your datasets and models into an application? Don't waste weeks or months building your own backend. omega|ml is ready in minutes.
omega|ml publishes datasets, models and dash apps with a single line of code. Once published, you get a nice, ready-to-use REST API and app URLs. Scheduled data pipelines included.
Leverage Swiss cloud power:
The built-in compute cluster provides instant, no-hassle, scalable model training and prediction. Support for Dask Distributed, Hadoop and Apache Spark is available free of charge.
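The runtime pattern behind this is: submit training or prediction work to the cluster, receive an async handle, and fetch the result when you need it. This stdlib-only sketch illustrates the same pattern with Python's own futures; the `train` function and its return value are stand-ins, not omega|ml API:

```python
from concurrent.futures import ThreadPoolExecutor

def train(params):
    # stand-in for a model-training task the cluster would execute
    return {"params": params, "score": 0.9}

# submit work, receive an async handle, fetch the result when needed --
# with omega|ml the handle comes back from the compute cluster instead
with ThreadPoolExecutor() as pool:
    future = pool.submit(train, {"k": 8})
    result = future.result()
print(result["score"])
```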
As a private or public IaaS/PaaS provider, deploy omega|ml Enterprise Edition to offer your clients
a scalable Data Science and ML Platform As a Service
Flexible - Leverage Python machine learning models & pipelines in any application, straight from our easy-to-use REST API
Fast - Collaborate instantly on any data science project, using the tools you love (e.g. Python, scikit-learn, Jupyter Notebook)
Scalable - Scale model training and prediction from any client, applying the power of the built-in compute cluster
Save money - Work with large-scale datasets at a fraction of the cost of other solutions (no need to run Apache Hadoop or Spark)
No vendor lock-in - Our fully open source core and support for any Kubernetes cloud means you can deploy anywhere.
Secure and independent - Our compute center in Switzerland meets all your data privacy and security requirements.
BUILT-IN, NO-HASSLE DATA & MODEL PERSISTENCE
import omegaml as om

# transparently store Pandas Series and DataFrames or any Python object
om.datasets.put(df, 'sales')   # df is any pandas DataFrame
df = om.datasets.get('sales')

# transparently store and get models
clf = LogisticRegression()
om.models.put(clf, 'forecast')
clf = om.models.get('forecast')

# run and scale models directly on the integrated Python or Spark compute cluster
om.runtime.model('forecast').fit('sales[x]', 'sales[y]')

# use the REST API to store and retrieve data, run predictions
WORKS WITH ANY MACHINE LEARNING FRAMEWORK
# scikit-learn and SparkML supported out of the box, more to follow
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from pyspark.ml.clustering import KMeans

# -- any scikit-learn model or pipeline
pipl = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
om.models.put(pipl, 'sales-pipeline')
# -- create Spark models from specification
# -- store a Spark model as instantiated in your Spark context
kmSpark = KMeans(k=8)
om.models.put(kmSpark, 'kmeans')
SCALE TO ANY NUMBER OF MODELS & DATASETS LARGER THAN MEMORY
# Out-of-Core DataFrame is built in
# -- get a lazy instance of the data to process
mdf = om.datasets.get('5yrsales', lazy=True)
# -- run aggregations
mdf.groupby('region').sales.mean()
# -- access rows by index
mdf.loc[0:1000]
# -- apply standard data conversions
mdf['sales'].apply(lambda sales: sales / 1000)
mdf.apply(lambda v: v['date'].dt.week)
# -- calculate percentiles, covariance and correlation
Extensible Architecture: Storage & Compute Backends, DataFrame Operations
omega|ml comes with batteries included. New requirements, however, are not a problem: alternate data sources or sinks and data pipelines can easily be added.
The following extension points are available:
Extend what objects can be stored and retrieved
Add any compute backend to run any omega|ml-stored model through the same API
Add processing options such as AutoML or model visualization
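To sketch the plug-in idea, a custom storage backend would declare which objects it handles and how to store and load them. The class, attribute and method names below are illustrative assumptions, not omega|ml's actual extension API:

```python
class CsvFileBackend:
    """hypothetical storage backend for .csv-named objects"""
    KIND = "csv.file"   # illustrative kind identifier

    @classmethod
    def supports(cls, obj, name, **kwargs):
        # claim any object stored under a name ending in .csv
        return isinstance(name, str) and name.endswith(".csv")

    def put(self, obj, name, **kwargs):
        # serialize obj to the backing store (omitted in this sketch)
        raise NotImplementedError

    def get(self, name, **kwargs):
        # load and deserialize the stored object (omitted in this sketch)
        raise NotImplementedError

print(CsvFileBackend.supports(None, "sales.csv"))
```

The key design point is dispatch by capability: the framework asks each registered backend whether it supports a given object and name, and routes `put`/`get` calls to the first one that claims it.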
Any Cloud - Data Science Platform As A Service, On Premise or Hybrid Cloud
Grow Your Business
Hello, Cloud Provider
There is untapped growth potential. All globally leading cloud providers offer their own ready-to-go Data Science and Machine Learning service - offerings too complex for most of their local competitors to replicate.
Get competitive today. With omega|ml you can offer your clients an even more convenient way to process their AI and machine learning workloads while keeping all data storage local. No Lock-In. Your Data Center. Ready today.
Why Act Now? As customers of all sizes look to leverage AI and machine learning, IaaS and PaaS providers risk losing valuable compute and storage business to the incumbent large-scale providers. Don't let this happen to you - act today.
Thanks to its design based on widely used, state-of-the-art components in the distributed computing stack, omega|ml is easily deployed to any cloud. It works best when your cloud supports Docker, but any bare-metal or container environment that supports Python, RabbitMQ and MongoDB will work.
With its Open Source core, omega|ml is readily deployable by any DevOps team familiar with Docker.
For on-premise deployments or if you need enterprise-ready security and do not want to manage the complexity of a large-scale data & compute cluster, omega|ml Enterprise Edition is available as a service or on-premise. Bring Your Own Cloud or subscribe to our compute capacity.
Setup is a breeze: omega|ml EE is already configured to support any Kubernetes cloud by simply adding an omega|ml cluster management node. We take care of the rest. Deployed within minutes, JupyterHub and a serverless script/lambda service are then ready to use, providing notebooks for straightforward team collaboration and interactive computing at scale - all while enabling data storage and any packaged data science algorithm to run from the same easy-to-use API shown above.
Fun Fact: Unlike most vendors, we do not license by the number or size of compute nodes - with unlimited compute nodes, only your imagination limits what you can achieve.