sort-research-rs/text at main · Voultapher/sort-research-rs · GitHub
Author: Lukas Bergdoll @Voultapher
Date: 10-06-2023 (DD-MM-YYYY)

This is a performance analysis of the recently popularized [1] Intel AVX-512 sort implementation.

Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts

This analysis will look at the performance of Intel's x86-simd-sort and how it compares to various other generic sort implementations, such as the C++ standard library std::sort and vqsort, another high-performance manually vectorized sort implementation. Breaking down complex performance characteristics into a single number is difficult and may have little predictive power. This analysis aims to put that '10~17x' number into perspective and show how it relates to other high-performance implementations.

TL;DR: Benchmarking is hard. If you are using x86-simd-sort, you can get better overall performance and avoid catastrophic scaling for certain input patterns by using vqsort + Clang. In addition it is shown that hardware specific manual vectorization with wide AVX-512 SIMD is not the only way to write efficient software. ipnsort demonstrates comparable performance to x86-simd-sort while being generic, optimized for more than only peak performance and only using up to SSE2 instructions.

Bias disclaimer. The author of this analysis is the author of ipnsort.

The terms sort implementation and sort algorithm are expressly not used interchangeably. Practically all high-performance implementations are hybrids, using multiple sort algorithms. As such, the term 'sort algorithm' will only be used to talk about the algorithmic nature of specific building blocks.

Graphs with a logarithmic axis are marked as such; these are mainly useful to examine the change of a property, not its absolute values.


Benchmark setup

Benchmarking is notoriously difficult, and especially synthetic benchmarks may not be representative. An incomplete list of relevant factors:

  • Input size
  • Input type (value to move and value to compare)
  • Input pattern (already sorted, random, cardinality, streaks, mixed etc.)
  • Hardware prediction and cache effects

Hardware prediction and cache effects in hot benchmarks, which do nothing but run a fixed input size, type and pattern combination in a loop, may be misleading in their predictive power for real world performance. Especially for small input sizes. The later section "Hot benchmarks" looks at hot results and explains in more detail why they may not be as useful in this context.

Windows test machine

Home windows 10.0.19044
rustc 1.69.0-nightly (0416b1a6f 2023-02-14)
clang version 15.0.1
Microsoft (R) C/C++ Optimizing Compiler Version 19.31.31104 for x86
Intel Core i9-10980XE 18-Core Processor (Skylake micro-architecture)
CPU boost enabled.

Linux test machine

Linux 5.19
rustc 1.69.0-nightly (7aa413d59 2023-02-19)
clang version 14.0.6
Intel Xeon Gold 6154 18-Core Processor (Skylake micro-architecture)
CPU boost disabled.

Some sort implementations are adaptive; they will try to exploit existing patterns in the data to do less work. A breakdown of the benchmark patterns:

  • ascending, numbers 0..size
  • random, random numbers generated by rand StdRng::gen [2]
  • random_d20, uniform random numbers in the range 0..=20
  • random_p5, 95% 0 and 5% random data, not uniform
  • random_z1, Zipfian distribution with characterizing exponent s == 1.0 [3]
  • saws_long, (size as f64).log2().round() number of randomly picked ascending and descending streaks
  • saws_short, randomly picked ascending and descending streaks in the range of 20..70

Whether these patterns are representative will depend on your workload. These are fundamentally synthetic benchmarks exploring sort performance in isolation.
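To make the pattern descriptions above concrete, here is a minimal re-creation of three of them. This is a sketch, not the repository's actual generator code; a tiny LCG stands in for rand's StdRng so the snippet is self-contained:

```rust
// Hypothetical re-creations of three benchmark patterns.
// A simple LCG replaces rand's StdRng to avoid external dependencies.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state
}

// ascending: numbers 0..size
fn ascending(size: usize) -> Vec<u64> {
    (0..size as u64).collect()
}

// random_d20: uniform random numbers in the range 0..=20
fn random_d20(size: usize, seed: u64) -> Vec<u64> {
    let mut s = seed;
    (0..size).map(|_| lcg(&mut s) % 21).collect()
}

// saws_short: randomly picked ascending and descending streaks,
// each with a length in the range 20..70
fn saws_short(size: usize, seed: u64) -> Vec<u64> {
    let mut s = seed;
    let mut v = Vec::with_capacity(size);
    while v.len() < size {
        let len = ((20 + lcg(&mut s) % 50) as usize).min(size - v.len());
        let start = lcg(&mut s);
        let streak: Vec<u64> = (0..len as u64).map(|i| start.wrapping_add(i)).collect();
        if lcg(&mut s) % 2 == 0 {
            v.extend(streak); // ascending streak
        } else {
            v.extend(streak.into_iter().rev()); // descending streak
        }
    }
    v
}
```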


A selection of high-performance in-place sort implementations.

Generic comparison based

- rust_std_unstable          | `slice::sort_unstable` (1)
- rust_ipnsort_unstable      | (2) (3)
- cpp_std_msvc_unstable      | MSVC `std::sort` (4)
- cpp_std_gnu_unstable       | libstdc++ `std::sort` (8)
- cpp_std_libcxx_unstable    | libc++ `std::sort` (9)
- cpp_pdqsort_unstable       | (4)
- c_crumsort_unstable        | (5) (6)

Manually vectorized

- cpp_vqsort                 | (7) (10)
- cpp_intel_avx512           | (7)


  1. Vendored ca. mid 2022.
  2. Still WIP and these are only preliminary results.
  3. This sort was previously known as ipn_unstable, the previously announced ipn_stable is defunct now, and work on a stable sort implementation continues under another name.
  4. Built with MSVC.
  5. Compiled with #define cmp(a, b) (*(a) > *(b)). This is required to be competitive; the usual way of providing a comparison function is problematic because of C language limitations. In effect this means that the results are only applicable to primitive types, and anything using a custom comparison function for user defined types will have a steep perf penalty.
  6. crumsort does an initial analysis and switches to quadsort, a merge sort that uses up to N memory, making it not in-place ⚠️; it can fall back to in-place merging if the allocation fails. These tests were performed with allocations possible.
  7. Built with clang and -march=native. Compiled with static dispatch, this may not be portable. Any CPU without AVX-512 support would fail to run the binary. It is unknown what the overhead of dynamic dispatch would be.
  8. Built with GCC.
  9. Built with clang.
  10. Allocates a fixed size memory buffer, not dependent on input size. This blurs the line of in-place.

Results u64

A good benchmark to shine light on the ability of the sort to exploit instruction-level parallelism (ILP) is hot-u64-10000. The input is 10k u64 values, which fits into the core private L2 data cache of the used test machines. The upper limit should be in the order of 4-5 instructions per cycle for such a dataset. 10k elements is enough to reliably exploit existing patterns in the input data. This can be reproduced by running cargo bench hot-u64-<pattern>-10000

Using this data to write a headline like:

Rust std sort is 15x faster than C++ std sort

is not an honest representation. And it seems questionable to compress so much information into a single number while still remaining representative. Looking at this one input size, on one specific micro-architecture, using this specific set of compilers, and testing these synthetic patterns, yields:

  • intel_avx512 is generally faster than vqsort at this input size.
  • The two manually vectorized sort implementations intel_avx512 and vqsort are not good at exploiting existing patterns in the input.
  • intel_avx512 struggles if there is one very common value in the input (random_p5).
  • cpp_std_msvc is generally the slowest of the comparison based sort implementations.
  • rust_std, which is based on pdqsort, performs similar to pdqsort, except for fully ascending inputs.
  • ascending shows the largest variance, with the fastest sort being ~31x faster than the slowest.
  • More natural random distributions like random_z1 close the gap between manually vectorized code and the pdqsort derived designs, rust_std, ipnsort and crumsort.

Measuring random pattern performance across different sizes:


  • Classically bound by the Big-O complexity of O(N * log(N)) for random inputs, one might expect peak throughput to occur for the smallest inputs. However in reality, for tiny input sizes the function call overhead, cold code, cache- and branch-prediction misses will limit throughput. And at very large sizes the main limit tends to be main-memory bandwidth, paired with the increased amount of work required per element. The peak throughput across most high-performance implementations, balancing these aforementioned factors, sits at input size ~1k.
  • vqsort as tested is exceedingly slow for small inputs. More on that below.
  • Starting at input size ~50k vqsort is the fastest.
  • ipnsort catches up to intel_avx512 at ~1m, despite only using SSE2 instructions and no hardware specific code, and while caring about binary-size and compile-times, sometimes at the cost of performance.

Linux vs Windows

Generally I would not expect any meaningful difference between Linux and Windows, for code that apart from allocations and stack layout doesn't have any OS specific logic in it. Further, the two test machines use the same Skylake micro-architecture. Looking at the same cold-u64-random benchmark yields:

Windows:



  • intel_avx512, pdqsort, rust_std and ipnsort all lose performance
  • libstdc++ std::sort seems to perform better than the MSVC version in this test. Showing significantly less regression than would be expected due to the CPU frequency difference.
  • The relative performance loss of ipnsort compared to the Windows machine is larger than the one of intel_avx512. This is likely caused by the disabled CPU boost and overall conservative frequency cap of 3 GHz. The Linux machine is more indicative of a server environment. Where in contrast on the Windows machine ipnsort can boost higher than intel_avx512.
  • For larger inputs intel_avx512 shows better throughput on Windows with a measured sustained frequency of ~3.8GHz. E.g. at input size 1m on Windows ~67m elem/s vs ~54m elem/s on Linux. Which neatly matches the frequency difference.
  • vqsort however is a lot faster on Linux. Both in terms of peak performance and min input size to outperform the rest. This is an unexpected result, and considerable effort was spent to analyze the root cause of this. In the test vqsort used the same compiler and flags as intel_avx512. A different benchmark suite yields the same results, a different pre-built Clang version yields the same results. On a dual-booted Zen3 machine vqsort in AVX2 mode shows the same ~2x regression when going from Linux to Windows at input size 10k. In addition the higher frequency, as shown by intel_avx512 on Windows, should give it yet another boost above Linux, which doesn't manifest. All this while the other sort implementations remain largely the same. The root cause for this was identified in collaboration with the vqsort authors [4]. The tested vqsort version acquired randomness from the OS for each call to vqsort, in an attempt to mitigate a possible performance denial-of-service (DOS). This idea is similar to the use of SipHash as the default hasher in the Rust standard library. This explains why we see significantly different results on Windows and Linux. See the linked explanation for more details.
  • Another unexpected result is that the peak throughput for intel_avx512 is lower than on Linux, despite higher frequencies. And the peak is reached earlier on Linux.

Measuring a more natural zipfian distribution random_z1 across different sizes:


  • Generally similar to random pattern performance scaling.
  • intel_avx512 shows less uniform scaling, with overall worse performance compared to random.
  • The pdqsort derived implementations and vqsort see a performance uplift compared to random, by being able to efficiently filter out common values.

Measuring a random distribution where most (95%) values are the same value:


  • The pdqsort derived implementations, as well as rust_std_msvc on Windows and vqsort on Linux, show excellent performance, several times faster than random pattern performance.
  • intel_avx512 shows worse than random pattern performance, yielding the overall worst performance in this scenario.

Results i32


The main difference between i32 and u64 is that i32 is only 4 bytes compared to the 8 bytes of u64. Practically all high performance sort implementations are sensitive to the bandwidth of the CPU's memory subsystems. In theory this should allow higher cache throughput, better cache utilization and higher vector ALU throughput.

Windows:



  • Both manually vectorized implementations can nearly double their throughput. vqsort only demonstrates this on Linux for this size, showing a smaller relative gain on Windows.
  • All comparison based sort implementations only see a small change compared to u64.


Windows:



  • The manually vectorized implementations are a lot better at leveraging the increased potential throughput than the generic comparison based implementations.
  • Very similar scaling to u64 for the non manually vectorized implementations.

Direct comparison

A holistic look at performance for a single type can be achieved by comparing two implementations and plotting their symmetric relative speedup and slowdown on the Y-axis and the test size on the X-axis. Each line represents a pattern. E.g. a-vs-b, 1.7x means a is 1.7x faster than b, and -1.7x means b is 1.7x faster than a. The first name in the title is sort a, and the second name is b. The graphs are fixed to the Y-range -3x to 3x to allow comparison between graphs.
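The symmetric metric described above can be sketched as a small helper. This is a hypothetical formalization of the plotting convention, not code from the repository; inputs are assumed to be throughputs (elements per second), so higher means faster:

```rust
// Symmetric relative speedup: positive means `a` is faster than `b`,
// negative means `b` is faster than `a`. By construction the value
// never lies in the interval (-1.0, 1.0), so 1.7 and -1.7 are
// symmetric distances from parity.
fn symmetric_speedup(throughput_a: f64, throughput_b: f64) -> f64 {
    if throughput_a >= throughput_b {
        throughput_a / throughput_b // a is X times faster
    } else {
        -(throughput_b / throughput_a) // b is X times faster
    }
}
```

This avoids the asymmetry of plain ratios, where "b twice as fast" would compress into 0.5 while "a twice as fast" stretches to 2.0.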

Comparing the two manually vectorized implementations, u64:


  • There is a stark difference between Linux and Windows. All lines move to the left, meaning vqsort overtakes intel_avx512 at smaller input sizes. And the graph moves down; in the size range of 100k to 10m vqsort is ~2x faster on Linux while only ~1.4x faster on Windows for random inputs.
  • Below 10k on Windows and 200 on Linux, intel_avx512 is faster in every pattern than vqsort.
  • The random_5p pattern triggers bad partitions in intel_avx512, which doesn't safe-guard against it, making vqsort ~14x faster than intel_avx512 for len == 10m on Windows, and ~26x on Linux. This seems to indicate that intel_avx512 has worse than O(N * log(N)) worst-case scaling, probably caused by poor pivot selection and no mitigation strategies.

Comparing the two manually vectorized implementations, i32:


  • i32 performs very similar to u64, except for random_z1 on Linux showing a larger advantage for vqsort.

Compared to the fastest generic comparison based implementation, ipnsort:


  • Between input size 36 and 1k, intel_avx512 is faster in nearly every pattern on Linux. Between 1k and 100k intel_avx512 is faster for fully random and saw like patterns.
  • ipnsort is faster for low-cardinality and zipfian patterns, where it can leverage the pdqsort derived ability to filter out common values.
  • On Linux and with a max frequency of 3 GHz, intel_avx512 is faster for fully random and saw like patterns.
  • As noted earlier, intel_avx512 shows worse than O(N * log(N)) scaling for random_p5, making ipnsort ~15x faster on Windows and ~19x on Linux for input size 10m.
  • Both intel_avx512 and vqsort are unable to leverage already sorted inputs, while the pdqsort derived implementations can handle this case in O(N). At input size 100k ipnsort is ~20x faster on Windows and ~15x on Linux. And at input size 10m ~10x faster on Windows and ~16x on Linux. Where the 10m case will mostly be limited by main memory bandwidth, of which the Server Skylake implementation has more.

Non AVX-512 results

The vast majority of consumer devices does not support AVX-512. intel_avx512 only supports AVX-512 devices, however vqsort is written on top of highway [5], a SIMD abstraction that supports various platforms and hardware capabilities. For example it can be used on machines that only support AVX2, as well as Arm chips with Neon support.

AVX2 test machine

Linux 6.2
rustc 1.70.0-nightly (17c116721 2023-03-29)
clang version 15.0.7
AMD Ryzen 9 5900X 12-Core Processor (Zen3 micro-architecture)
CPU boost enabled.


  • vqsort shows the same exceptionally poor small size performance.
  • All generic comparison based implementations see a significant performance uplift compared to Skylake.
  • ipnsort is much closer to vqsort for large inputs compared to the Skylake Linux machine.
  • crumsort shows smaller gains compared to pdqsort and rust_std than on the Skylake machines.
  • vqsort hits its peak throughput sooner than on the Skylake Linux machine.

Neon test machine

Darwin Kernel Version 22.1.0
rustc 1.70.0-nightly (f15f0ea73 2023-03-04)
Apple clang version 14.0.0
M1 Pro 8-Core Processor (Firestorm P-core micro-architecture)
CPU boost enabled.



  • vqsort shows the same exceptionally poor small size performance.
  • For most sort implementations, throughput is nearly doubled compared to the Skylake chip at similar clock speeds. Showcasing Firestorm's micro-architecture capabilities, leveraging a wide out-of-order (OoO) design.
  • Compared to the Skylake and Zen3 results, Firestorm can hit peak throughput earlier. Indicative of a good branch prediction implementation with smaller miss penalties, fast learning and overall a good cache setup.
  • cpp_std_libcxx performs the worst in general.
  • vqsort shows performance similar to pdqsort, limited by the Neon instruction set and 128-bit vector width.
  • The two implementations heavily exploiting ILP, crumsort and ipnsort, see larger uplifts compared to pdqsort as their baseline, with ipnsort showing ~1.9x the performance compared to ~1.6x on Skylake.
  • At input size 50k there is a performance valley that is reproducible and seems to affect both crumsort and ipnsort.
  • Power measurements indicate that ipnsort uses ~1.4x less power than vqsort while being nearly twice as fast. Which would imply ~2.7x better power efficiency. This suggests that in some scenarios code that focuses on exploiting ILP can be more power efficient than vectorized code. With the added benefit of not being hardware specific.
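The ~2.7x efficiency figure follows from multiplying the speed and power ratios, since energy per sort is power times time. A quick sanity check, using the ~1.9x speed ratio from the ILP bullet above:

```rust
// Energy per sort = power * time. If ipnsort takes 1/speed_ratio the
// time and 1/power_ratio the power of vqsort, the energy advantage is
// the product of both ratios.
fn energy_efficiency_ratio(speed_ratio: f64, power_ratio: f64) -> f64 {
    speed_ratio * power_ratio // vqsort energy / ipnsort energy
}
```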

C sort interface

crumsort mirrors the C standard library qsort interface, which asks for a comparison function int (*comp)(const void *, const void *). And while a lot of code implements this with return a - b;, such code leads to UB for some integers and is not generic. A simple implementation as presented on the cppreference website is this:

int compare_ints(const void* a, const void* b)
{
    int arg1 = *(const int*)a;
    int arg2 = *(const int*)b;

    if (arg1 < arg2) return -1;
    if (arg1 > arg2) return 1;
    return 0;

    // return (arg1 > arg2) - (arg1 < arg2); // possible shortcut
    // return arg1 - arg2; // erroneous shortcut (fails if INT_MIN is present)
}

However depending on your compiler, this can generate branches, which in turn cause branch miss-prediction. That is undesirable, especially for code that wants to exploit ILP. An alternative way to write this that generates good branchless code-gen, especially with Clang, is this:

int compare_ints(const void* a_ptr, const void* b_ptr)
{
    const int a = *(const int*)a_ptr;
    const int b = *(const int*)b_ptr;

    const bool is_less = a < b;
    const bool is_more = a > b;
    return (is_less * -1) + (is_more * 1);
}

This should avoid the issue with branch miss-prediction, however another issue remains. C abstracts over user-defined logic with function pointers. In contrast to C++ and Rust closures, these can't be inlined without profile-guided optimizations (PGO). crumsort until recently required a custom define which disables a custom comparison globally, via #define cmp(a, b) (*(a) > *(b)) before the header only implementation is parsed. This is what's required for best performance, and what all benchmarks here used. However this makes crumsort more akin to the manually vectorized implementations in terms of supported types. User defined types that don't fit in a long long would require source level modifications. And for best performance with such an interface, a unique version of crumsort has to be compiled for every type, via the build system. Fundamentally this is a C issue and only a crumsort issue because it is implemented in C and mirrors the qsort interface.
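For contrast, a minimal sketch of the Rust side of this: the closure passed to sort_unstable_by is a distinct type known at compile time, so the comparison can be inlined and monomorphized into the sort without PGO or per-type manual builds:

```rust
fn main() {
    let mut v = vec![5_i32, 1, 4, 2, 3];
    // The closure is a unique compile-time type, so the sort is
    // monomorphized around it and the comparison can be inlined,
    // unlike a C function pointer passed to qsort.
    v.sort_unstable_by(|a, b| a.cmp(b));
    assert_eq!(v, [1, 2, 3, 4, 5]);
}
```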

Comparing the results of crumsort to crumsort using the branchless comparison function (c_crumsort_generic) on the Skylake Linux machine gives:


  • When used in a generic fashion crumsort shows significantly worse performance, ~2.4x slower for larger sizes and most random patterns.
  • c_crumsort_generic shows performance close to cpp_std_gnu.
  • Patterns such as ascending and random_d20, which perform fewer total comparisons, are affected more.

Hot benchmarks

Running a sort implementation several million times in a loop, as is the standard for tools like google benchmark and criterion, with new unique inputs of the same size and pattern, is not representative of real world application performance. To simulate a program that calls sort occasionally with varying sizes, and a cold CPU prediction state, the benchmarks are also run in a mode where between each sort call the CPU prediction state is trashed [6]. These benchmarks are marked cold-*, and they form the basis for the above scaling graphs. Benchmarks using the default hot loop logic are marked hot-*. Hot benchmark numbers should be interpreted as best case performance under laboratory conditions.


Measuring random pattern performance across different sizes:


  • Even in a hot loop, vqsort as tested is exceedingly slow for small inputs.
  • intel_avx512 is fast across all sizes, when benchmarked in a hot loop.
  • intel_avx512 and crumsort differentiate themselves from the other implementations by not using insertion sort for such inputs.
  • Starting at ~50k vqsort is the fastest.
  • ipnsort catches up to intel_avx512 at ~1m, despite only using SSE2 instructions and no hardware specific code.
  • Starting at input size ~10k, hot and cold results are largely the same.

Instrumenting the Rust standard library and building a custom version of rustc that logs the input size and time spent in slice::sort [7], which is the stable sort of the Rust standard library, yielded these insights when clean-compiling 60 crates:

Out of the ~500k calls to slice::sort, 71.6% were len == 0, 22.8% were len == 1 and in total 99+% were len <= 20, and together they account for ~50% of the time spent in slice::sort.

Looking at that important len <= 20 range:


  • ipnsort leads the chart, in contrast to the hot results.
  • Peak throughput for len <= 20 drops by 5x compared to hot results.

Another aspect that should be mentioned is frequency scaling. Apart from the latest generations of Intel and AMD micro-architectures, Golden Cove and Zen4, earlier Intel AVX-512 implementations scaled down the CPU boost frequency when certain AVX-512 instructions were executed [8]. This may affect real world usage and performance. Another aspect potentially affected by cold code is the AVX-512 startup, a phenomenon limited to the arguably poor Skylake AVX-512 implementation.

The shown hot and cold benchmarks model the two extremes of the likely range of possible results.

Further hot results

Author's conclusion and opinion

intel_avx512 is a very fast sort implementation, especially for random inputs, and it shows good scaling across different input sizes. However it shows worse than O(N * log(N)) scaling for simple patterns, and suffers from poor pivot selection without a fallback. For types that are smaller than 8 bytes, such as u8 or i32, both manually vectorized implementations showcase far higher performance than generic comparison based implementations. This analysis did not look into the overhead of runtime dispatch, which would be required to use it in code that isn't uniquely compiled for a specific platform. Looking at some scenarios:

Specialized sort in a library

intel_avx512 can be a great choice if the input consists of plain integers or floats, of various sizes, assumed mostly random, not very small, not of low-cardinality, and the appropriate hardware supporting AVX-512 is available. At the same time it can be problematic to use it as the sole replacement for calls to sort, because of cold performance, worst case performance and possible frequency scaling issues.

Specialized HPC code

If you need the best possible throughput, and know you only have large inputs of supported types, vqsort is the faster option. Best vqsort performance requires Linux and clang. Other compilers should not be used, as they yield code several times slower.

Language standard library

A standard library implementation should be good in most scenarios, with bad performance in as few scenarios as possible. There are many glaring issues with intel_avx512 for such a use case. It would only support a small fraction of devices, and would increase binary size for all x86 targets. It is not good at exploiting existing patterns in the input. It can't support user-defined comparisons, which would probably mean that v.sort_unstable() and v.sort_unstable_by(|a, b| a.cmp(b)) would have a surprising difference in performance. It can't support user-defined types without additional code by the user, e.g. #[derive(Copy, Clone)] struct X(i32) would have a surprising difference in performance to just i32. On top of that, there are issues with cold performance and frequency scaling.

All the results are a snapshot of the respective implementations. After performing these measurements, there have been changes to crumsort and ipnsort, and the people behind vqsort are looking into small input performance. I specifically chose not to re-run the benchmarks with newer versions, to avoid a bias of fixing certain issues, such as poor saws_long performance in ipnsort, without giving all the implementations the chance to make such improvements.


Thanks Jan Wassenberg for your detailed feedback, extensive help in running benchmarks and helping me understand some of the observations. Thanks Roland Bock for your detailed feedback and ideas to make this writeup more readable.


Updated vqsort results

New vqsort version fe85fdf. Same graphs as above, with cpp_vqsort_new added. Only vqsort was re-tested here.

Skylake (AVX-512):

Zen3 (AVX2):


  • The new version of vqsort addresses the main issue of very poor performance for smaller inputs, and even manages to improve overall performance for larger inputs. With this it is now definitely the fastest sort implementation in my testing, when AVX2 or AVX-512 are available, for random-like patterns.
