We're announcing the start of the Portable SIMD Project Group within the Libs team. This group is dedicated to making a portable SIMD API available to stable Rust users.
Rust uses project groups to help coordinate work. They're a place for people to get involved in helping shape the parts of Rust that matter to them.
SIMD stands for Single Instruction, Multiple Data. It lets the CPU apply a single instruction to a "vector" of data. The vector is a single extra-wide CPU register made of multiple "lanes" of the same data type. You can think of it as being similar to an array. Instead of processing each lane individually, all lanes have the same operation applied simultaneously. This lets you transform data much faster than with standard code. Not every problem can be accelerated with "vectorized" code, but for multimedia and list-processing applications there can be significant gains.
Different chip vendors offer different SIMD instructions.
Some of these are available in Rust's `std::arch` module. You can build vectorized functions using that, but at the cost of maintaining a different version for each CPU you want to support.
Alternatively, you can write ordinary scalar code and hope that LLVM's optimizer will "auto-vectorize" it.
However, the auto-vectorizer is easily confused and can fail to optimize "obvious" vector tasks.
The portable SIMD API will enable writing SIMD code just once using a high-level API. By explicitly communicating your intent to the compiler, it's better able to generate the best possible final code. This is still only a best-effort process. If your target doesn't support a desired operation in SIMD, the compiler will fall back to using scalar code, processing one lane at a time. The details of what's available depend on the build target.
We intend to release the Portable SIMD API as `std::simd`. We will cover as many use cases as we can, but it might still be appropriate for you to use `std::arch` directly. For that reason the `std::simd` types will also be easily convertible to `std::arch` types where needed.