LinqAF: Benchmarks
Posted: 2018/01/25
This is part of a series on LinqAF, you should start with the first post.
How is LinqAF benchmarked?
I make use of the excellent BenchmarkDotNet library, which makes it easy to profile different configurations and calculate meaningful statistics. Concretely, I profile both the legacy x86 JIT and the x64 RyuJIT configurations with the concurrent server garbage collector.
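To make the setup concrete, here's a minimal sketch of what a BenchmarkDotNet benchmark looks like. The class and method names are illustrative, not LinqAF's actual benchmark code; only the `[Benchmark]` and `[MemoryDiagnoser]` attributes and `BenchmarkRunner` are real BenchmarkDotNet API.

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Illustrative only: a minimal BenchmarkDotNet benchmark class.
// [MemoryDiagnoser] adds per-operation allocation columns to the results,
// which is what matters for an allocation-focused comparison.
[MemoryDiagnoser]
public class WhereSelectBenchmark
{
    private readonly int[] _data = Enumerable.Range(0, 1000).ToArray();

    [Benchmark(Baseline = true)]
    public int LinqToObjects() =>
        _data.Where(x => x % 2 == 0).Select(x => x * 2).Sum();

    // A second [Benchmark] method would hold the LinqAF equivalent,
    // and BenchmarkDotNet reports its time and allocations relative
    // to the baseline above.
}

public static class Program
{
    // Must be run in Release mode for meaningful numbers.
    public static void Main() => BenchmarkRunner.Run<WhereSelectBenchmark>();
}
```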
I created separate benchmarks for:
- Each LINQ operator, including its different overloads
- Nullable and non-nullable versions for Average, Max, Min, and Sum
- Both typed and untyped underlying enumerables for Cast
- Sized and unsized underlying enumerables for Reverse, ToArray, ToDictionary, and ToList
- Three actual LINQ uses from the Stack Overflow, Jil, and Roslyn codebases
- One giant synthetic case that uses very nearly every operator at once
I ran all benchmarks on a machine with the following specs (as output by BenchmarkDotNet):
BenchmarkDotNet=v0.10.9, OS=Windows 10 Redstone 1 (10.0.14393)
Processor=Intel Core i7-6900K CPU 3.20GHz (Skylake), ProcessorCount=16
My goal with LinqAF was to produce a (very nearly) allocation free implementation of LINQ-to-Objects, and so that is what I focused on when benchmarking – either zero allocations when possible, or fewer allocations than the equivalent LINQ-to-Objects operator. Ultimately, I did achieve my goal with LinqAF – however, since I didn’t focus on performance, there are plenty of cases where LINQ-to-Objects is faster than LinqAF.
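The "allocation free" claim rests on every enumerable and enumerator being a struct rather than a class. The sketch below shows the core idea under my own simplified types (these are not LinqAF's actual type names): the whole query object lives on the stack, so enumerating it allocates nothing on the heap, whereas LINQ-to-Objects allocates an iterator class per operator.

```csharp
using System;

// Illustrative sketch (not LinqAF's real types): a struct-based Where.
// Because both the enumerable and its enumerator are structs, building
// and enumerating the query performs no heap allocations. (The delegate
// itself may still allocate if the lambda captures locals.)
public struct WhereEnumerable<T>
{
    private readonly T[] _source;
    private readonly Func<T, bool> _predicate;

    public WhereEnumerable(T[] source, Func<T, bool> predicate)
    {
        _source = source;
        _predicate = predicate;
    }

    // C#'s foreach only needs a GetEnumerator/MoveNext/Current pattern,
    // so no interface (and therefore no boxing) is required.
    public WhereEnumerator<T> GetEnumerator() =>
        new WhereEnumerator<T>(_source, _predicate);
}

public struct WhereEnumerator<T>
{
    private readonly T[] _source;
    private readonly Func<T, bool> _predicate;
    private int _index;

    public T Current { get; private set; }

    public WhereEnumerator(T[] source, Func<T, bool> predicate)
    {
        _source = source;
        _predicate = predicate;
        _index = -1;
        Current = default(T);
    }

    public bool MoveNext()
    {
        while (++_index < _source.Length)
        {
            if (_predicate(_source[_index]))
            {
                Current = _source[_index];
                return true;
            }
        }
        return false;
    }
}
```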
In particular, the following operators are slower in LinqAF than in LINQ-to-Objects:
- Any (without predicate)
- Cast (if the underlying type is actually an IEnumerable<T>)
- DefaultIfEmpty (with a parameter)
- Max (for nullable types)
- Min (for nullable types)
- OrderBy & OrderByDescending
- Skip & SkipWhile
- Sum (for nullable types)
- Take & TakeWhile
- ThenBy & ThenByDescending
- ToArray (if the underlying type isn’t an IList<T>)
Most of these operations are only a little slower, but Cast is actually quite a bit slower (taking more than twice as much time). This is because an optimization that is available to LINQ-to-Objects is made impossible by LinqAF's "everything is a struct" design.
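The optimization in question is, roughly, the fast path below (simplified from the LINQ-to-Objects reference source): when the sequence already implements `IEnumerable<T>`, Cast just returns it unchanged, skipping all per-element work. LinqAF's Cast must return a struct of a fixed type, so it can never hand back the original sequence and always pays the per-element cost.

```csharp
using System.Collections;
using System.Collections.Generic;

public static class CastSketch
{
    // Simplified sketch of LINQ-to-Objects' Cast<TResult>.
    public static IEnumerable<TResult> Cast<TResult>(IEnumerable source)
    {
        // The fast path: if the source is already typed correctly,
        // return it as-is — no wrapper, no per-element casting.
        if (source is IEnumerable<TResult>) return (IEnumerable<TResult>)source;

        return CastIterator<TResult>(source);
    }

    // The slow path: cast each element as it is enumerated.
    private static IEnumerable<TResult> CastIterator<TResult>(IEnumerable source)
    {
        foreach (object item in source)
        {
            yield return (TResult)item;
        }
    }
}
```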
You can run all the benchmarks on your own by executing the LinqAF.Benchmark project – be sure to build in Release mode, or you’ll get inaccurate results. A full log of the run I did for this post can be found here – at the end of the file you’ll find all the slower benchmarks, and how much slower they were exactly.
Benchmarking is always fraught with issues – if you have any interesting cases where LinqAF allocates more than LINQ-to-Objects, or is dramatically slower, I'd be interested in hearing about it.
In the last post in this series, I’ll go over what’s missing for LinqAF as well as what direction future work could take.