Jil: Serializing JSON Really, Really Quickly

Posted: 2013/11/13
Several months ago I more or less finished up my “just for fun” coding side project Sigil (which I’ve written about a few times before), and started looking for something new to hack on. I settled on a JSON serializer for .NET that pulled out all the stops to be absolutely as fast as I could make it.
About three months later, I’m ready to start showing off…
Jil, a fast JSON serializer built on Sigil.
This is still a very early release (arbitrarily numbered 0.5.5, available on NuGet [but not in search results]), so there are a lot of caveats:
- Jil has very few configuration options
- Jil has very limited support for dynamic serialization
- Jil only serializes, no deserialization
- Jil only supports a subset of types
- Jil has seen very little production use
- At time of writing, Jil has been pulled into the Stack Exchange API for about a week without issue
In tabular form, the same data:
The takeaway from these benchmarks is that Jil is about twice as fast as the next fastest .NET JSON serializer. protobuf-net is still faster (as you’d expect from an efficient binary protocol designed by Google and a library written by Marc Gravell), but Jil’s closer to it than to the next JSON serializer.
I could write a whole series of how Jil shaves microseconds, and may yet do so. I’ll briefly go over some of the highlights right now, though.
The first one’s right there in the name, Jil is built on Sigil. That means a type’s serializer gets created in nice tight CIL, which becomes nice tight machine code, and no reflection at all occurs after the first call.
Second, Jil has a number of very specialized methods for converting data types into strings. Rather than relying on Object.ToString() or similar, Jil has separate, dedicated methods for shoving Int32s, UInt64s, Guids, and so on into character arrays. These specialized methods avoid extra allocations, sidestep all the culture-specific infrastructure .NET makes available, and let me do crazy things like divide exactly 14 times to print a DateTime.
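To make the “fixed number of divides” idea concrete, here’s a minimal sketch (in Java for illustration, not Jil’s actual C# code): because each field of an ISO-8601 date has a known width, it can be printed with an exact, unrolled sequence of divisions instead of a general-purpose formatting loop.

```java
// Sketch of fixed-width field printing (illustrative, not Jil's source):
// every division count is known at compile time, so there is no loop,
// no allocation, and no culture-aware formatting machinery involved.
public final class FixedDateWriter {
    // Writes "yyyy-MM-dd" into buf starting at index 0.
    public static void writeDate(char[] buf, int year, int month, int day) {
        buf[0] = (char) ('0' + year / 1000);
        buf[1] = (char) ('0' + year / 100 % 10);
        buf[2] = (char) ('0' + year / 10 % 10);
        buf[3] = (char) ('0' + year % 10);
        buf[4] = '-';
        buf[5] = (char) ('0' + month / 10);
        buf[6] = (char) ('0' + month % 10);
        buf[7] = '-';
        buf[8] = (char) ('0' + day / 10);
        buf[9] = (char) ('0' + day % 10);
    }
}
```

A full timestamp just extends the same pattern through hours, minutes, seconds, and fractional seconds, which is where a count like “exactly 14 divides” comes from.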
As you’d expect of performance-focused code in a garbage collected environment, the third thing Jil focuses on is trying not to allocate anything, ever. In practice, Jil can keep allocations to a single small character array, except when dealing with Guids and IDictionary&lt;TKey, TValue&gt;s. For Guids Jil must allocate an array for each, since Guid.ToByteArray() doesn’t take a buffer, while serializing Dictionaries still allocates an enumerator.
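The “single small character array” approach can be sketched like this (again in Java, as an assumed pattern rather than Jil’s code): one per-thread scratch buffer is reused for every number formatted, so steady-state serialization allocates nothing beyond what the destination writer itself does.

```java
import java.io.IOException;
import java.io.Writer;

// Sketch of allocation-free number formatting via a reused scratch buffer
// (illustrative pattern, not Jil's source).
public final class NumberSink {
    // 19 digits for Long.MIN_VALUE's magnitude, plus one for the sign.
    private static final ThreadLocal<char[]> SCRATCH =
        ThreadLocal.withInitial(() -> new char[20]);

    public static void writeLong(Writer out, long value) throws IOException {
        char[] buf = SCRATCH.get();
        int pos = buf.length;
        boolean negative = value < 0;
        // Work with negative magnitudes so Long.MIN_VALUE stays representable.
        if (!negative) value = -value;
        do {
            buf[--pos] = (char) ('0' - value % 10); // remainder is <= 0 here
            value /= 10;
        } while (value != 0);
        if (negative) buf[--pos] = '-';
        out.write(buf, pos, buf.length - pos);
    }
}
```

Writing digits back-to-front into the tail of the buffer avoids a separate digit-counting pass, and the buffer never escapes, so nothing new is allocated per call.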
If you’ve clicked through to Jil’s source by now, you might have noticed some MethodImpl attributes. That’s part of Jil’s fourth big trick: trading a fair amount of memory for more speed. Aggressively inlining code saves a few instructions spent branching, and even more time if instruction prefetching isn’t perfect in the face of method calls.
Last but not least, Jil avoids branches whenever possible; your CPU’s branch predictor can ruin your day. This means everything from using jump tables, to skipping negative checks on unsigned types, to only doing JSONP specific escaping when requested, and even baking configuration options into serializers to avoid the runtime checks. This does mean that Jil can create up to 64 different serializers for the same type, though in practice only a few different configurations are used within a single program.
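One branch-avoidance trick worth spelling out is the jump (lookup) table. Here’s a sketch of the general technique (my illustration in Java, not Jil’s source): instead of a chain of if/else tests per character, string escaping indexes a precomputed table, so the common “no escaping needed” path is a single load and a null check.

```java
// Sketch of table-driven JSON string escaping (illustrative, not Jil's code):
// a 128-entry table replaces a per-character branch chain.
public final class JsonEscapeTable {
    // ESCAPES[c] is null when the character passes through unescaped.
    private static final String[] ESCAPES = new String[128];
    static {
        for (char c = 0; c < 0x20; c++) {
            ESCAPES[c] = String.format("\\u%04x", (int) c);
        }
        ESCAPES['"']  = "\\\"";
        ESCAPES['\\'] = "\\\\";
        ESCAPES['\b'] = "\\b";
        ESCAPES['\f'] = "\\f";
        ESCAPES['\n'] = "\\n";
        ESCAPES['\r'] = "\\r";
        ESCAPES['\t'] = "\\t";
    }

    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            String esc = c < 128 ? ESCAPES[c] : null; // one lookup, no branch chain
            if (esc == null) sb.append(c);
            else sb.append(esc);
        }
        return sb.toString();
    }
}
```

Baking options into serializer variants follows the same logic at a larger scale: a check like “is JSONP escaping on?” is answered once, at serializer-construction time, instead of once per character at runtime.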
I’m definitely interested in any crazy code that shaves more time off. I’m also looking for faster ways to create a string (rather than writing to a TextWriter); my experiment with capacity estimation works… but the speedups aren’t reliable enough to turn it on by default.