Jil: Doing JSON Really, Really Quickly
Posted: 2014/02/03
After about three months of work, and some time in a production environment, the second half of Jil (a fast JSON library built on Sigil) is ready for release.
Jil now supports both serializing and deserializing JSON.
As with serialization, Jil deserialization supports very few configuration options and requires static typing. The interface is, accordingly, just a simple “JSON.Deserialize<T>(…)” which takes either a string or a TextReader.
Jil’s entire raison d’être is ridiculous optimizations in the name of speed, so let’s get right to the benchmarks.
Let’s start with the (recently updated) SimpleSpeedTester:
The numbers show Jil’s deserializers are about 50% faster than the next fastest JSON library. Protobuf-net is included as a baseline; it isn’t a JSON serializer (it does Protocol Buffers), but it’s the fastest .NET serializer (of anything) that I’m aware of.
Just like with serializing, Jil does runtime code generation using Sigil to produce tight, monolithic deserialization methods. That right there accounts for quite a lot of performance.
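The core idea, generating one specialized function per type at runtime instead of interpreting type metadata on every call, can be sketched in Python. This is only an illustration of the technique (Jil actually emits CIL via Sigil, and a real deserializer tokenizes rather than using regexes); the flat-object format and helper names here are hypothetical.

```python
import re

def make_deserializer(field_names):
    # Build the source of a single, monolithic function specialized to
    # this exact set of fields (hypothetical format: flat objects with
    # integer values only, e.g. {"a": 1, "b": -2}).
    src = ["def deserialize(text):", "    out = {}"]
    for name in field_names:
        pattern = f'"{name}"\\s*:\\s*(-?\\d+)'
        src.append(f"    out[{name!r}] = int(re.search({pattern!r}, text).group(1))")
    src.append("    return out")
    # Compile once; every later call runs straight-line specialized code
    # with no per-call reflection over the type's members.
    namespace = {"re": re}
    exec("\n".join(src), namespace)
    return namespace["deserialize"]
```

The payoff is the same as with any runtime specialization: the cost of generating the function is paid once per type, and every subsequent deserialization runs code with the type's structure baked in.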
To speed up deserializing numbers, Jil has a large collection of methods tailored to each built-in number type. Deserializing bytes, for example, can omit sign checks and all but one overflow check. This specialization avoids unnecessary allocations, and sidesteps all the overhead in .NET imposed by support for culture-specific number formatting.
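To make the byte case concrete, here is a sketch in Python of what a parser specialized to an unsigned 8-bit integer can get away with (this is illustrative, not Jil's code). Since the target type is known statically, there is no sign handling, no culture-specific formatting, and only one overflow check:

```python
def parse_byte(text, pos=0):
    # Specialized for byte (0..255): no '-' or '+' handling needed,
    # and no culture-aware group/decimal separator logic.
    value = 0
    start = pos
    while pos < len(text) and '0' <= text[pos] <= '9':
        value = value * 10 + (ord(text[pos]) - ord('0'))
        pos += 1
    if pos == start:
        raise ValueError("expected a digit")
    if value > 255:  # the single overflow check a byte needs
        raise OverflowError("value does not fit in a byte")
    return value, pos  # caller resumes parsing at pos
```

A general-purpose routine like `int.Parse` has to handle signs, whitespace, and `NumberStyles`/culture options on every call; a per-type specialization pays for none of that.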
When deserializing JSON objects, Jil tries to avoid allocations when reading member names. It does this by checking to see if a hash function (Jil uses variants of the Fowler–Noll–Vo hash function) can be truncated into a jump table, i.e. Jil calculates hash(memberName) % X for many Xs to see if a collision-free option exists. If there’s an option that works, Jil uses it instead of actually storing member names, which saves an allocation and the cost of a lookup in a Dictionary.
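The search for a workable truncation can be sketched like so (a Python illustration of the idea, not Jil's generated code; the `max_size` bound and 32-bit FNV-1a variant are my choices for the example):

```python
def fnv1a(s):
    # 32-bit FNV-1a over the UTF-8 bytes of the member name
    h = 0x811C9DC5
    for b in s.encode('utf-8'):
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h

def find_jump_table(member_names, max_size=64):
    # Try successively larger moduli X until hash % X is collision-free
    # for every member name of the type.
    hashes = [fnv1a(n) for n in member_names]
    for x in range(len(member_names), max_size + 1):
        slots = {h % x for h in hashes}
        if len(slots) == len(hashes):  # no two names share a slot
            return x, {h % x: n for h, n in zip(hashes, member_names)}
    return None  # fall back to storing names and a Dictionary lookup
```

Because the member names are fixed once the type is known, this search happens a single time during code generation; at deserialization time, a name is hashed as it streams past and the slot number alone identifies the member.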
A fair amount of performance is gained by being very parsimonious with memory on the heap. Jil typically allocates a single character buffer, and may allocate a StringBuilder if certain conditions are met by the deserialized data. Additional string allocations will be made if Jil cannot use hash functions for member name lookups.
Interested in Jil?
Next up is probably dynamic deserialization, though naturally I’d love to make the existing functionality even faster. Don’t hesitate to ping me with more performance tricks.