Today at the Spark + AI Summit, we are excited to announce .NET for Apache Spark. Spark is a popular open-source distributed processing engine for analytics over large data sets. Spark can be used for processing batches of data, real-time streams, machine learning, and ad-hoc queries.
.NET for Apache Spark is aimed at making Apache® Spark™ accessible to .NET developers across all Spark APIs. So far, Spark has been accessible through Scala, Java, Python, and R, but not .NET.
We plan to develop .NET for Apache Spark in the open along with the Spark and .NET community to ensure that developers get the best of both worlds.
.NET for Apache Spark provides high-performance APIs for using Spark from C# and F#. With these .NET APIs, you can access all aspects of Apache Spark, including Spark SQL, DataFrames, Streaming, MLlib, and more. .NET for Apache Spark lets you reuse all the knowledge, skills, code, and libraries you already have as a .NET developer.
The C# and F# language bindings to Spark are written on a new Spark interop layer that offers easier extensibility. This new interop layer was designed with language-extension best practices in mind and is optimized for interop and performance. Long term, this extensibility can be used to add support for other languages in Spark.
You can learn more details about this work through this proposal.
.NET for Apache Spark is compliant with .NET Standard 2.0 and can be used on Linux, macOS, and Windows, just like the rest of .NET. .NET for Apache Spark is available by default in Azure HDInsight, and can be installed in Azure Databricks and more.
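As a sketch of what getting started looks like, the commands below assume you already have the .NET Core SDK and a local Apache Spark installation; the app name is illustrative:

```shell
# Create a new .NET console app and add the Microsoft.Spark NuGet package
dotnet new console -o MySparkApp
cd MySparkApp
dotnet add package Microsoft.Spark
```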
Once set up, you can start programming Spark applications in .NET in three easy steps.
In our first .NET for Apache Spark application, we will write a basic Spark pipeline that counts the occurrence of each word in a text segment.
// 1. Create a Spark session
SparkSession spark = SparkSession.Builder().AppName("word_count_sample").GetOrCreate();

// 2. Create a DataFrame
DataFrame dataFrame = spark.Read().Text("input.txt");

// 3. Manipulate and view data
var words = dataFrame.Select(Split(dataFrame["value"], " ").Alias("words"));
words.Select(Explode(words["words"]).Alias("word")).GroupBy("word").Count().Show();
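Once built, a .NET for Apache Spark app is launched through spark-submit using the DotnetRunner. A minimal sketch, assuming a local Spark installation; the jar version placeholders and assembly name are illustrative, and the microsoft-spark jar must match your Spark version:

```shell
spark-submit \
  --class org.apache.spark.deploy.dotnet.DotnetRunner \
  --master local \
  microsoft-spark-<spark-version>-<package-version>.jar \
  dotnet WordCount.dll
```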
We are pleased to say that the first preview version of .NET for Apache Spark performs well on the popular TPC-H benchmark. The TPC-H benchmark consists of a suite of business-oriented queries. The chart below illustrates the performance of .NET Core versus Python and Scala on the TPC-H query set.
The chart above shows the per-query performance of .NET for Apache Spark versus Python and Scala. .NET for Apache Spark performs well against both. Furthermore, in cases where UDF performance is critical, such as query 1, where 3B rows of non-string data are passed between the JVM and the CLR, .NET for Apache Spark is 2x faster than Python.
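To illustrate the kind of JVM-to-CLR round trip that dominates UDF-heavy queries, here is a minimal C# sketch of a .NET UDF; the column alias and the reuse of `dataFrame` from the earlier example are illustrative:

```csharp
using System;
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

// Define a .NET lambda as a Spark UDF. For each row, Spark ships the
// value from the JVM to the CLR, applies the lambda, and ships the
// result back -- this is the interop path the benchmark stresses.
Func<Column, Column> toUpper = Udf<string, string>(s => s.ToUpper());

// Apply the UDF to a DataFrame column and display the result
DataFrame upper = dataFrame.Select(toUpper(dataFrame["value"]).Alias("upper_value"));
upper.Show();
```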
It’s also important to call out that this is our first preview of .NET for Apache Spark, and we aim to invest further in improving and benchmarking performance (e.g., Arrow optimizations). You can follow the instructions on our GitHub repo to run this benchmark yourself.
- Simplified getting started experience, documentation and samples
- Native integration with developer tools such as Visual Studio, Visual Studio Code, Jupyter notebooks
- .NET support for user-defined aggregate functions
- .NET idiomatic APIs for C# and F# (e.g., using LINQ for writing queries)
- Out of the box support with Azure Databricks, Kubernetes etc.
- Make .NET for Apache Spark part of Spark Core
See something missing on this list? Please drop us a comment below.
.NET for Apache Spark is our first step in making .NET a great tech stack for building Big Data applications.
We need your help to shape the future of .NET for Apache Spark, and we look forward to seeing what you build with it. You can reach out to us through our GitHub repo.
This blog is authored by Rahul Potharaju, Ankit Asthana, Tyson Condie, Terry Kim, Dan Moseley, Michael Rys and the rest of the .NET for Apache Spark team.