Jil: Version 2.x, Dynamic Serialization, And More

Jil, a fast JSON serializer/deserializer for .NET, has hit 2.x!  This release includes more performance work, improved functionality, and a new feature: dynamic serialization.

What’s Dynamic Serialization?

Jil has always supported static serialization via a generic type parameter.  This would serialize exactly the provided type; things like inheritance, System.Object members, and the DLR were unsupported.

In .NET there are, broadly speaking, three “kinds” of dynamic behavior: polymorphism, reflection, and the dynamic language runtime.  Jil now supports all three.

Polymorphism

Polymorphism comes up when you use subclasses.  Consider the following.

class Foo
{
  public string Fizz;
}
class Bar : Foo
{
  public string Buzz;
}

Since Jil can’t know the Bar class exists when Serialize&lt;Foo&gt;(…) is first called, the Buzz member would never be serialized.  SerializeDynamic(…), however, knows that the Foo class isn’t sealed and accounts for the possibility of new members on every invocation.  A similar situation exists with virtual members, which SerializeDynamic also handles.
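To make this concrete, here’s a sketch using Jil’s JSON static class (the comments describe the expected behavior, not exact output):

```csharp
using Jil;

class Foo { public string Fizz; }
class Bar : Foo { public string Buzz; }

class Program
{
    static void Main()
    {
        Foo obj = new Bar { Fizz = "fizz", Buzz = "buzz" };

        // Static serialization is compiled against Foo, so Buzz is never emitted
        var staticJson = JSON.Serialize(obj);         // only Fizz appears

        // Dynamic serialization checks the runtime type (Bar) on each call
        var dynamicJson = JSON.SerializeDynamic(obj); // Buzz appears as well
    }
}
```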

Reflection

Reflection matters when the objects being serialized have normal .NET types (ints, strings, lists, user-defined classes, and so on) at runtime, but those types aren’t known at compile time.  Consider the following.

object a = 123;
object b = "hello world";

Calling Serialize(a) or Serialize(b) would infer a type of System.Object, since at compile time that’s the only information available.  Using SerializeDynamic, Jil knows to do a runtime lookup on the actual type of the passed-in object.
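Continuing the snippet above, a sketch of the difference (comments are illustrative):

```csharp
using Jil;

class Program
{
    static void Main()
    {
        object a = 123;
        object b = "hello world";

        // JSON.Serialize(a) would infer T = System.Object and learn
        // nothing useful; SerializeDynamic inspects the runtime type:
        var jsonA = JSON.SerializeDynamic(a); // serialized as a JSON number
        var jsonB = JSON.SerializeDynamic(b); // serialized as a JSON string
    }
}
```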

Dynamic Runtime

The dynamic runtime backs the dynamic keyword in C#, but for Jil’s purposes it can be thought of as special casing types that implement IDynamicMetaObjectProvider.

While it’s rare to directly implement IDynamicMetaObjectProvider, code using ExpandoObject or DynamicObject isn’t unheard of.  For example, Dapper uses ExpandoObject to implement its dynamic returns.
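For example, a sketch of serializing an ExpandoObject (the member values are arbitrary):

```csharp
using System.Dynamic;
using Jil;

class Program
{
    static void Main()
    {
        dynamic row = new ExpandoObject();
        row.Id = 42;
        row.Name = "example";

        // ExpandoObject implements IDynamicMetaObjectProvider,
        // so SerializeDynamic is the right entry point:
        var json = JSON.SerializeDynamic(row); // contains the Id and Name members
    }
}
```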

Speed Tricks

As usual, Jil focuses on speed, although dynamic serialization is necessarily slower than static serialization.

Jil builds a custom serializer at each point in the type graph where the type could vary on subsequent calls.  For example, if Jil sees a “MyAbstractClassBase” abstract class as a member it will do an extra lookup on each call to SerializeDynamic(…) to find out what the type really is for that invocation.  If instead Jil sees a “string” or a “MyValueType” struct as a member, it knows that they cannot vary on subsequent calls and so will not do the extra lookup.  This makes the first serialization involving a new type slower, but subsequent serializations are much faster.

The most common implementations of IDynamicMetaObjectProvider are special cased: ExpandoObject is treated as an IDictionary&lt;string, object&gt;, and DynamicObject’s TryConvert(…) method is called directly.  This avoids some very expensive trial casts that are sometimes necessary when serializing an implementer of IDynamicMetaObjectProvider.
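As an illustration of the DynamicObject path (StringBacked is a hypothetical type invented for this sketch):

```csharp
using System.Dynamic;
using Jil;

// A hypothetical DynamicObject that knows how to convert itself to a string
class StringBacked : DynamicObject
{
    readonly string value;
    public StringBacked(string value) { this.value = value; }

    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        if (binder.Type == typeof(string))
        {
            result = value;
            return true;
        }
        result = null;
        return false;
    }
}

class Program
{
    static void Main()
    {
        // Rather than probing with trial casts, Jil can call TryConvert
        // directly and serialize the converted value:
        var json = JSON.SerializeDynamic(new StringBacked("hello"));
    }
}
```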

Further Improvements

While dynamic serialization is the main new feature in 2.x, a number of smaller improvements have been made as well.

Relative Speed Improvements

It’s been a while since I last posted Jil benchmarks, as most of the recent work has been on dynamic features that aren’t directly comparable.  However, lots of little improvements have added up to some non-trivial performance gains on the static end.

Overall serialization gains fall around 1% for the Stack Exchange API types, with the larger models and collections gaining a bit more.

In the Stack Exchange API, deserialization of most models has seen speed increases north of 5%, with the largest models and collections seeing double-digit gains.

These numbers, available in a Google Doc, were derived from Jil’s Benchmark project running on a machine with the following specs:

  • Operating System: Windows 8 Enterprise 64-bit
  • Processor: Intel Core i7-3960X 3.30 GHz
  • RAM: 64 GB
    • DDR
    • Quad Channel
    • 800 MHz

The types used were taken from the Stack Exchange API to reflect a realistic workload, but as with all benchmarks take these numbers with a grain of salt.

I would also like to call out Paul Westcott’s (manofstick on GitHub) contributions to the Jil project, which have made some of these recent performance gains possible.

As always, you can browse the code on GitHub or get Jil on NuGet.