Jil: Doing JSON Really, Really Quickly

After about three months of work, and some time in a production environment, the second half of Jil (a fast JSON library built on Sigil) is ready for release.

Jil now supports both serializing and deserializing JSON.

As with serialization, Jil deserialization supports very few configuration options and requires static typing.  The interface is, accordingly, just a simple “JSON.Deserialize<T>(…)” which takes either a string or a TextReader.
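For illustration, usage looks like the following (the Point type is my own example here, not something shipped with Jil):

```csharp
using System;
using System.IO;
using Jil;

public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Deserialize from a string...
        var pt = JSON.Deserialize<Point>("{\"X\":1,\"Y\":2}");
        Console.WriteLine(pt.X + "," + pt.Y);

        // ...or from a TextReader
        using (var reader = new StringReader("{\"X\":3,\"Y\":4}"))
        {
            var pt2 = JSON.Deserialize<Point>(reader);
            Console.WriteLine(pt2.X + "," + pt2.Y);
        }
    }
}
```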

The Numbers

Jil’s entire raison d’être is ridiculous optimizations in the name of speed, so let’s get right to the benchmarks.

Let’s start with the (recently updated) SimpleSpeedTester:

Performance Graph

The numbers show Jil’s deserializers are about 50% faster than the next fastest JSON library.  Protobuf-net is included as a baseline; it isn’t a JSON serializer (it does Protocol Buffers), but it’s the fastest .NET serializer (of anything) that I’m aware of.

The Tricks

Just like with serializing, Jil does runtime code generation using Sigil to produce tight, monolithic deserialization methods.  That right there accounts for quite a lot of performance.

To speed up deserializing numbers, Jil has a large collection of methods tailored to each built-in number type.  Deserializing bytes, for example, can omit sign checks and all but one overflow check.  This specialization avoids unnecessary allocations, and sidesteps all the overhead .NET imposes in support of culture-specific number formatting.
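To make the byte case concrete, here’s a sketch of what such a specialized parser can look like; this is my own illustration of the idea, not Jil’s actual code:

```csharp
using System;

static class ByteParser
{
    // Sketch of a byte-specialized parser: no sign handling, no culture
    // machinery, and a single simple overflow condition to check.
    public static byte ParseByte(string text, ref int index)
    {
        var value = 0;
        var digits = 0;

        while (index < text.Length && text[index] >= '0' && text[index] <= '9')
        {
            value = value * 10 + (text[index] - '0');
            index++;
            digits++;

            // A byte has at most 3 digits and tops out at 255
            if (digits > 3 || value > byte.MaxValue)
                throw new FormatException("Number does not fit in a byte");
        }

        if (digits == 0)
            throw new FormatException("Expected a digit");

        return (byte)value;
    }
}
```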

When deserializing JSON objects, Jil tries to avoid allocations when reading member names.  It does this by checking to see if a hash function (Jil uses variants of the Fowler–Noll–Vo hash function) can be truncated into a jump table; i.e., Jil calculates hash(memberName) % X for many values of X to see if a collision-free option exists.  If one exists, Jil uses it instead of actually storing member names, which saves an allocation and the cost of a lookup in a Dictionary.
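A sketch of that search might look like the following; this is my own illustration of the idea (Fnv1a and FindJumpTableSize are made-up names, and Jil’s actual hash variants and search bounds differ):

```csharp
using System;
using System.Linq;

static class JumpTableSketch
{
    // 32-bit FNV-1a
    public static uint Fnv1a(string s)
    {
        var hash = 2166136261u;
        foreach (var c in s)
        {
            hash ^= c;
            hash *= 16777619u;
        }
        return hash;
    }

    // Search for a modulus X such that hash(name) % X is unique per member
    public static int? FindJumpTableSize(string[] members, int maxSize = 64)
    {
        for (var x = members.Length; x <= maxSize; x++)
        {
            var distinct = members.Select(m => Fnv1a(m) % (uint)x).Distinct().Count();
            if (distinct == members.Length) return x; // collision-free, use a jump table
        }
        return null; // no luck; fall back to storing names and a Dictionary lookup
    }
}
```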

A fair amount of performance is gained by being very parsimonious with memory on the heap.  Jil typically allocates a single character buffer, and may allocate a StringBuilder if certain conditions are met by the deserialized data.  Additional string allocations will be made if Jil cannot use hash functions for member name lookups.

Interested in Jil?

Grab It On Nuget or Check Out The Code

Next up is probably dynamic deserialization, though naturally I’d love to make the existing functionality even faster.  Don’t hesitate to ping me with more performance tricks.

Jil: Serializing JSON Really, Really Quickly

Several months ago I more or less finished up my “just for fun” coding side project Sigil (which I’ve written about a few times before), and started looking for something new to hack on.  I settled on a JSON serializer for .NET that pulled out all the stops to be absolutely as fast as I could make it.

About three months later, I’m ready to start showing off Jil, a fast JSON serializer built on Sigil.

This is still a very early release (arbitrarily numbered 0.5.5, available on Nuget [but not in search results]), so there are a lot of caveats:

  1. Jil has very few configuration options
  2. Jil has very limited support for dynamic serialization
  3. Jil only serializes, no deserialization
  4. Jil only supports a subset of types
  5. Jil has seen very little production use
    • At time of writing, Jil has been pulled into the Stack Exchange API for about a week without issue

Some Numbers

Benchmarks are the most exciting thing about Jil, so let’s get right to them.  The following are from a fork of theburningmonk’s SimpleSpeedTester project.

In tabular form, the same data:

Raw numbers are available in a Google document.  There are also further benchmarks I created while making Jil on Github.

The takeaway from these benchmarks is that Jil is about twice as fast as the next fastest .NET JSON serializer.  Protobuf-net is still faster (as you’d expect from an efficient binary protocol from Google and a library written by Marc Gravell), but Jil’s closer to it than to the next JSON serializer.

Some Tricks

I could write a whole series of how Jil shaves microseconds, and may yet do so.  I’ll briefly go over some of the highlights right now, though.

The first one’s right there in the name, Jil is built on Sigil.  That means a type’s serializer gets created in nice tight CIL, which becomes nice tight machine code, and no reflection at all occurs after the first call.

Second, Jil has a number of very specialized methods for converting data types into strings.  Rather than relying on Object.ToString() or similar, Jil has separate dedicated methods for shoving Int32s, UInt64s, Guids, and so on into character arrays.  These specialized methods avoid extra allocations, sidestep all the culture-specific infrastructure .NET makes available, and let me do crazy things like divide exactly 14 times to print a DateTime.
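As a sketch of the idea (my own illustration; Jil’s real methods are considerably more involved), a dedicated Int32 writer can put digits directly into a char buffer:

```csharp
using System;

static class Int32Writer
{
    // Write an Int32's digits directly into a char buffer: no culture lookup,
    // no intermediate string allocation. Returns the number of chars written.
    public static int WriteInt32(int value, char[] buffer, int offset)
    {
        if (value == 0)
        {
            buffer[offset] = '0';
            return 1;
        }

        if (value < 0)
        {
            buffer[offset] = '-';
            // Negate via long so int.MinValue is handled safely
            return 1 + WriteDigits(-(long)value, buffer, offset + 1);
        }

        return WriteDigits(value, buffer, offset);
    }

    static int WriteDigits(long value, char[] buffer, int offset)
    {
        // Emit digits least-significant first, then reverse in place
        var start = offset;
        while (value > 0)
        {
            buffer[offset++] = (char)('0' + (value % 10));
            value /= 10;
        }
        Array.Reverse(buffer, start, offset - start);
        return offset - start;
    }
}
```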

As you’d expect of performance-focused code in a garbage collected environment, the third thing Jil focuses on is trying not to allocate anything, ever.  In practice, Jil can keep allocations to a single small character array, except when dealing with Guids and IDictionary<TKey, TValue>s.  For Guids Jil must allocate an array each time since Guid.ToByteArray() doesn’t take a buffer, while serializing Dictionaries still allocates an enumerator.

If you’ve clicked through to Jil’s source by now, you might have noticed some MethodImpl attributes.  That’s part of Jil’s fourth big trick: trading a fair amount of memory for more speed.  Aggressively inlining code saves a few instructions spent branching, and even more time if instruction prefetching isn’t perfect in the face of method calls.

Last but not least, Jil avoids branches whenever possible; your CPU’s branch predictor can ruin your day.  This means everything from using jump tables, to skipping negative checks on unsigned types, to only doing JSONP specific escaping when requested, and even baking configuration options into serializers to avoid the runtime checks.  This does mean that Jil can create up to 64 different serializers for the same type, though in practice only a few different configurations are used within a single program.

Check Out The Code or Grab Jil From Nuget

I’m definitely interested in any crazy code that shaves more time off.  I’m also interested in faster ways to create a string (rather than writing to a TextWriter); my experiment with capacity estimation works… but not reliably enough to turn it on by default.

Making Magic: How Sigil Works

Version 3.0.0 of Sigil was just released (grab it on Nuget and check out the source on Github).  The big new feature this release is a disassembler, one which allows for some inspection of the flow of values through a .NET delegate.

But that’s not what I’m writing about.  I figure now’s as good a time as any to write up the “how” of Sigil, given that I covered the “what” and “why” in an earlier post and that recent releases have refactored Sigil’s internals into a state I’m happier with.

Bytecode Verifiers And You

In essence Sigil is a “bytecode verifier”.  If you’ve done JVM or .NET development you should be familiar with the concept; the bytecode verifiers on those platforms make sure that the class files or assemblies you load contain bytecode that can safely be executed.

The definition of “safe” is very particular: a bytecode verifier doesn’t prevent errors from occurring at execution time, but rather prevents invariants of the runtime from being violated.  For example, a bytecode verifier would guarantee that an instance method invocation passes the proper number and types of parameters, and that it is invoked against an instance of the appropriate type.

One way to think about bytecode verifiers is that they guarantee that every operation receives the correct types as inputs and every operation leaves the runtime in a well formed state.

Sigil’s Special Features

Where Sigil differs from other bytecode verifiers is that it doesn’t operate on “finished” instruction sequences.  It verifies as you build a sequence, failing as soon as it can be sure the sequence is invalid.

Because Sigil deals with incomplete instruction sequences it also has to deal with a lot of unknowns, especially around branches.  It’s quite common to branch to an instruction you haven’t actually emitted yet, or to emit instructions that aren’t yet provably reachable; a traditional verifier never encounters either case.

Sigil also has to explain itself when it fails, so it has to be able to report where and why a given sequence became invalid (which can be far removed from the last emitted instruction because of branches).  Similar complications exist when verification is successful, as things like eliding trivial casts and replacing branches with their short forms (which are deferred until an instruction stream is finalized) require that a lot of information about the instruction stream be retained.

Simple Verifying

If you ignore branches, verifying a bytecode sequence is pretty simple.  You can think of it as executing instructions as if they consumed and produced types instead of values.  Since Sigil is a .NET library I’ll be using .NET examples, though the basic idea applies to all similar verifiers.

For example, assume the following is an implementation of a Func<int>:

ldc.i4 1
ldc.i4 2
add
ret

We know “ldc.i4” consumes nothing and produces an int.  “add” actually consumes and produces a wide range of types, but one of them is “int int -> int”.  The “ret” instruction consumes either nothing or a single type, depending on the signature of the method it is used in; in this case it consumes an “int”, which we know because the method is a “Func<int>”.

I’ve written out the state of the stack (the .NET runtime is specified as a stack machine) after each instruction executes:

         // --empty--
ldc.i4 1 // int
ldc.i4 2 // int int
add      // int
ret      // --empty--

We need to add one more rule, which is that control flow ends with a “ret” which leaves the stack empty, and with that it’s clear that this sequence would verify.

How each instruction affects the stack, in terms of types.
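The walk-through above can be sketched directly in code, treating each instruction as a function over a stack of types rather than values (a toy illustration of the concept, not Sigil’s implementation):

```csharp
using System;
using System.Collections.Generic;

static class TypeStackDemo
{
    static readonly Stack<Type> Stack = new Stack<Type>();

    static void LdcI4() => Stack.Push(typeof(int));   // consumes nothing, produces an int

    static void Add()                                 // here: int int -> int
    {
        Expect(typeof(int));
        Expect(typeof(int));
        Stack.Push(typeof(int));
    }

    static void Ret(Type returnType)                  // consumes the method's return type
    {
        Expect(returnType);
        if (Stack.Count != 0)
            throw new Exception("ret must leave the stack empty");
    }

    static void Expect(Type t)
    {
        if (Stack.Count == 0 || Stack.Pop() != t)
            throw new Exception("Expected " + t.Name + " on the stack");
    }

    public static void Verify()
    {
        LdcI4();           // int
        LdcI4();           // int int
        Add();             // int
        Ret(typeof(int));  // --empty--, so this verifies as a Func<int>
    }
}
```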

The following sequence wouldn’t verify:

ldc.i4 1            // int
ldstr "hello world" // string int
mul                 // !ERROR!

Since “mul” doesn’t have a transition from “string int”, verification can be shown to fail as soon as “mul” is emitted.

The earliest releases of Sigil were simple verifiers, as they didn’t try to trace through branches.  Through version 1.2.9 you instead had to assert types when marking labels under certain circumstances.

Complex Verifying

As soon as you add branches life gets considerably more complicated.  Now you have to deal with cases where you can’t infer what types are on the stack immediately.

Consider the following:

ldc.i4 1
br end_label

middle_label:
add
ret

end_label:
ldc.i4 2
br middle_label
In this example the types passed to “add” are unclear when it is emitted; it’s only when “br middle_label” is encountered that it becomes clear that “add” will verify.

Furthermore, whether or not “ret” verifies depends on the method being built.  If we’re building a Func<int>, it verifies.  If we’re building a Func<string>, it should fail when emitted, since there’s no way for “add” to pass a string to “ret”.  If we’re building a Func<double>, it should fail when “br middle_label” is emitted, since at that point “add” is known to produce and pass an Int32 to “ret”.

How Sigil Copes

Sigil deals with the complexities of verifying partial sequences in two ways: tracking the possible types on the stack, and creating a verifier for each possible code path.

Reconsidering the above example:

                // ** Initial Verifier: V1, with empty stack **
ldc.i4 1        // int
br end_label    // int
                // ** New Verifier: V2 with unknown stack, V1 stashed **
add             // int|long|float|double|etc.
ret             // --empty--
                // ** New Verifier: V3 with unknown stack, V2 stashed **
end_label:      // ** V1 restored (stack is: int) **
ldc.i4 2        // int int
br middle_label // int int (V2 now verified with int int as initial stack)

There’s rather a lot going on now, what with verifiers being stashed and restored.  Also note that when “add” is first encountered it places an “or” clause onto the stack, which allows “ret” to verify if it expects any type in that “or” clause.

How different verifiers see the instruction stream

The logic around creating and restoring verifiers is tricky, but boiled down:

  • At an unconditional branch, store the current stack state and remove the current verifiers
    • If the branched to label has already been marked, take the current stack and check it against the expected initial stack at that label
  • At a conditional branch do the same checks and stores as an unconditional one, but don’t remove the current verifiers
  • When marking a label, if there is no verifier create a new one with an unknown initial stack
    • If there are stacks stored against that label from previous branches, check that those stacks are consistent and make them the new verifier’s initial stack
  • Treat “ret” as an unconditional branch which doesn’t store the stack state
  • If at any point an instruction is emitted and there is no current verifier, the newly emitted code is unreachable

This logic is captured in the UnconditionalBranch, ConditionalBranch, Return, and Mark methods of Sigil’s RollingVerifier.

As for tracking all possible values on the stack (as in the “or” with “add” in the previous example), Sigil considers transitions rather than types on the stack.  So long as there’s at least one transition that could be taken for every instruction a verifier has seen, the instruction stream is considered verifiable.

Take for example the, rather complex, transitions for “add”:

    new StackTransition(new [] { typeof(int), typeof(int) }, new [] { typeof(int) }),
    new StackTransition(new [] { typeof(int), typeof(NativeIntType) }, new [] { typeof(NativeIntType) }),
    new StackTransition(new [] { typeof(long), typeof(long) }, new [] { typeof(long) }),
    new StackTransition(new [] { typeof(NativeIntType), typeof(int) }, new [] { typeof(NativeIntType) }),
    new StackTransition(new [] { typeof(NativeIntType), typeof(NativeIntType) }, new [] { typeof(NativeIntType) }),
    new StackTransition(new [] { typeof(float), typeof(float) }, new [] { typeof(float) }),
    new StackTransition(new [] { typeof(double), typeof(double) }, new [] { typeof(double) }),
    new StackTransition(new [] { typeof(AnyPointerType), typeof(int) }, new [] { typeof(SamePointerType) }),
    new StackTransition(new [] { typeof(AnyPointerType), typeof(NativeIntType) }, new [] { typeof(SamePointerType) }),
    new StackTransition(new [] { typeof(AnyByRefType), typeof(int) }, new [] { typeof(SameByRefType) }),
    new StackTransition(new [] { typeof(AnyByRefType), typeof(NativeIntType) }, new [] { typeof(SameByRefType) })

One more thing to track when dealing with verifiers is whether or not we know their initial stack, what I call “baseless” in the code.  It is not an error for an instruction stream to underflow a baseless verifier, since its stack could be anything.  Instead of failing verification, Sigil considers the result of underflowing a baseless stack to be a “wildcard” type which satisfies any transition; this is how “add” can pop a value to continue verification after “middle_label”.

Trickier Things

.NET has a couple hairy CIL opcodes that require special handling: dup, localloc, and leave.

“dup” duplicates the current value on the stack; the difficulty is that we only know the type on the stack if we can verify the preceding instructions, which isn’t always possible.  Sigil handles this by making “dup” place a special type on the stack which, when encountered by the verifier, pushes a copy of the preceding transition’s result, or a wildcard if underflowing a baseless verifier.

“localloc” is analogous to alloca(), pushing a pointer to memory on the stack, which requires that only a single value be on the stack when executed.  This means the current verifier cannot be baseless to verify.  In this case Sigil uses a special transition which asserts that the size of the stack is one if the verifier is based, and is ignored if it is not.

“leave” is an unconditional branch out of a try or catch block which empties the stack entirely.  Sigil considers this equivalent to “pop”-ing exactly the number of items currently on the stack, which like “localloc” means the current verifier cannot be baseless.  Like “dup”, Sigil uses a special type to indicate to the verifier that the stack needs to be emptied.

Optimizing Emitted Code

There are two kinds of optimizations Sigil can do to emitted CIL: eliding trivial casts, and replacing instructions with their short forms.

Conceptually, eliding is straightforward: keep track of what types are guaranteed to go into a castclass or isinst operation, and if those types are assignable to the type encoded in the instruction, elide it.  Sigil attaches callbacks, like this one, to “castclass” and “isinst” transitions; these are called whenever a verifier processes those operations, and are passed enough information to decide whether to elide themselves or not.

Some short forms are easy, any of the LoadConstant methods with short forms can be changed at call time.  The trickier ones are branches, as we need to wait until we’re done emitting code and can calculate offsets.  Tracking offsets is handled by BufferedILGenerator (which maintains a cache of byte offsets to each instruction) and a last minute call to Emit.PatchBranches patches all the branch instructions that can fit their offsets into a signed byte.

Optimizing Sigil

Sigil employs some tricks to keep it relatively snappy.

Perhaps most importantly, it doesn’t re-verify operations unless it absolutely has to.  Every currently in scope verifier maintains a cache of its state when it last verified the instruction stream and re-uses that cache when new instructions are added.  The cache does have to be invalidated when the initial stack state changes, which only happens when branching to or marking labels.

Sigil also tries to keep the number of verifiers in scope limited by discarding any non-baseless verifiers past the first one.  Since any verifier that isn’t baseless can be traced back to the start of the method, we know that there are no “or” type clauses on the stack so the verifiers are equivalent going forward even if they took different paths through the code.

Other Tidbits

To wrap this up I’m just going to list some minor things of note found in Sigil.

  • Sigil has most of an implementation of Linq-to-Objects to run on pre-.NET 3.5 runtimes, heavily influenced by Jon Skeet’s Edulinq series
  • Sigil has its own Tuple implementation for similar reasons
  • Sigil’s disassembler, while doing a great deal more, started as a replacement for the Mono.Reflection disassembler in our code base
  • Sigil’s exception blocks are slightly different from ILGenerator’s in that you explicitly attach catches and finallys to them; this makes nested exception handling easier to debug

And again, Sigil can be installed from Nuget and the source is available on Github.

Improving Sigil

A few weeks ago I announced Sigil, a fail-fast validating wrapper around .NET’s ILGenerator.  Since then I’ve pushed a number of updates and bug fixes, culminating in the fairly significant 2.0.0 release now on Nuget.

I figured now was a good time to write down what’s changed, and where I see this code going.

A Smarter Validator

The biggest change between 1.x and 2.x is that Sigil no longer requires stack type assertions after unconditional branches.

In version 1.x you had to do something like the following:

var emit = Emit<Func<int>>.NewDynamicMethod();

var label1 = emit.DefineLabel();
var label2 = emit.DefineLabel();

emit.LoadConstant(1);
emit.Branch(label1);

emit.MarkLabel(label2, new [] { typeof(int) }); // Sigil wasn't smart enough to know what the stack was here
emit.Return();

emit.MarkLabel(label1);
emit.Branch(label2);

var del = emit.CreateDelegate();
Version 2.x removes the need to pass types to certain MarkLabel() calls entirely.  This removes the most painful bit of busywork entailed in using the initial release of Sigil.

As a consequence of this new validator, a few breaking changes were made to Sigil’s interface: the obvious removal of stack assertions, plus the removal of GetStack (as the stack state is not always known) and the addition of some required parameters to array manipulation methods (as the array involved cannot always be inferred).

Better Debugging Information

Sigil is all about failing fast and helpfully, and several changes have been made to the string returned by SigilVerificationException.GetDebugInfo() since Sigil’s initial release.

Examples include:

  • Non-returning paths are indicated by the labels the path flows through
  • Instructions that take locals are annotated with the names and types of the locals involved
  • The instruction that causes a verification error is indicated
  • As in version 1.x, the stack state at any error is given, now including all possible types


Since its initial release, Sigil has gained the ability to elide trivial casts (thanks to Marc Gravell for the first version of this feature).

Modest performance improvements have been made to rewriting branches into their short forms, and the ability to disable such rewriting has been added (credit to Jason Punyon for some of this).


Sigil has been tuned somewhat since its initial release, but the introduction of a smarter validator has come at a performance price under certain circumstances.  As a rough guideline, if you’re generating methods with hundreds of labels and branches or tens of thousands of instructions, Sigil may be too slow for you.

In addition to the aforementioned optimization options (passed as OptimizationOptions), the option to defer some validation has also been added (via the ValidationOptions enumeration).  Between the two a great deal of Sigil’s utility can still be had even for very large or very branch intensive generated code.

Minor Improvements

  • While debugging, all declared labels and in-scope locals are exposed as the Labels and Locals properties
  • The WriteLine(string, params Local[]) method
  • Automatic insertion of the volatile prefix when able
  • The ability to refer to locals and labels by name

The Future

I’m not really satisfied with Sigil’s performance just yet.  Profiling suggests that the slowest part of Sigil is gluing together instruction sequences for post-branch validation; I’m sure this can be done more quickly, and probably skipped altogether in a lot of cases.

I’d also like to support the last four CIL instructions: Arglist, Mkrefany, Refanytype, and Refanyval.  Sigil has focused mostly on the DynamicMethod case where these instructions are rarely useful, but for completeness they should be available.

Sigil currently has .NET 4.5 and 3.5 builds; it’d be nice to get a .NET 2.0 build as well.  This is non-trivial as Sigil makes heavy use of LINQ and extension methods.

A more long term goal is to abstract most of Sigil’s dependency on System.Reflection and System.Reflection.Emit so that its validation and optimization features can be used for a wider array of meta-programming tasks.  Freely switching between standard .NET and IKVM.Reflection is probably the first place to start, as there’s already a similar feature in protobuf-net’s precompiler.

As before, Sigil’s source is on github and the latest stable releases are on Nuget.

Sigil: Adding Some (More) Magic To IL

A nifty thing you can do in .NET is generate bytecode (properly Common Intermediate Language [CIL], formerly Microsoft Intermediate Language [MSIL], commonly called just IL) on the fly.  Previously I’ve used it to do dumb things with strings, and build a serializer on top of protobuf-net.  Over at Stack Exchange it’s used in small but critical parts of our API, in our caching layer via protobuf-net, and in our micro-ORM Dapper.

The heart of IL generation is the ILGenerator class which lets you emit individual opcodes and keeps track of labels, locals, try/catch/finally blocks, and stack depth.

To illustrate .NET’s built-in IL generation, here’s how you’d add 1 and 2:

var method = new DynamicMethod("AddOneAndTwo", typeof(int), Type.EmptyTypes);
var il = method.GetILGenerator();
il.Emit(OpCodes.Ldc_I4, 1);
il.Emit(OpCodes.Ldc_I4, 2);
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);
var del = (Func<int>)method.CreateDelegate(typeof(Func<int>));

del(); // returns 3


ILGenerator is quite powerful, but it leaves a lot to be desired in terms of ease of use.  For example, leave one of the Ldc_I4s out of the above…

var method = new DynamicMethod("AddOneAndTwo", typeof(int), Type.EmptyTypes);
var il = method.GetILGenerator();
il.Emit(OpCodes.Ldc_I4, 1);
// Whoops, we left out the 2!
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);
var del = (Func<int>)method.CreateDelegate(typeof(Func<int>));


And what happens?  You’d expect an error to be raised when we emit the Add opcode, but I’d understand deferring verification until the delegate was actually created.

Of course nothing’s ever easy, and what actually happens is an InvalidProgramException is thrown when the delegate is first used with a phenomenally unhelpful “Common Language Runtime detected an invalid program.” message.  Most of the time, ILGenerator gives you no indicator as to where or why you went wrong.

Frustrations I’ve had with ILGenerator, in descending severity:

  • Fails very late during code generation, and doesn’t indicate what went wrong
  • Allows obviously malformed instructions, like Emit(OpCodes.Ldc_I4, "hello")
  • Lack of validation around “native int” allows for code that only works on specific architectures

Enter Sigil

Naturally I have a solution, and that solution’s name is Sigil (defined as “an inscribed or painted symbol considered to have magical power”, pronounced “Si-jil”).  Sigil wraps ILGenerator, exposes much less error prone alternatives to Emit, and does immediate verification of the instruction stream.

The erroneous code above becomes:

var il = Emit<Func<int>>.NewDynamicMethod("AddOneAndTwo");
il.LoadConstant(1);
// Still missing that 2!
il.Add();
il.Return();
var del = il.CreateDelegate();

And Sigil throws an exception at il.Add() with the much more helpful message “Add expects 2 values on the stack”.  Also notice how Sigil does away with all that nasty casting.

Sigil does much more than just checking that enough values are on the stack.  It’ll catch type mismatches (including esoteric ones, like trying to add a float and a double), illegal control transfers (like branching out of catch blocks), and bad method calls.

Data For Debugging

In addition to not failing helpfully, ILGenerator doesn’t give you much to go on when it does fail.  You don’t get an instruction listing or stack states, and your locals and labels are nothing but indexes and offsets.

When verification fails using Sigil, the full instruction stream to date and the current state of the stack (possibly two stacks, if a branch is involved) are captured by the thrown SigilVerificationException.  Every local and label gets a name in the instruction listing (which you can override), and the values on the stack that caused the failure are indicated.

For example…

var il = Emit<Func<string, Func<string, int>, string>>.NewDynamicMethod("E1");
var invoke = typeof(Func<string, int>).GetMethod("Invoke");
var notNull = il.DefineLabel("not_null");

il.LoadArgument(0);
il.LoadNull();
il.UnsafeBranchIfNotEqual(notNull);
il.LoadNull();
il.Return();

il.MarkLabel(notNull);
il.LoadArgument(1);
il.LoadArgument(0);
il.CallVirtual(invoke);
il.Return();

var d1 = il.CreateDelegate();

… throws a SigilVerificationException on Return(), and calling GetDebugInfo() on it gives you the following:

Top of stack
System.Int32 // Bad value

Instruction stream
bne.un not_null
callvirt Int32 Invoke(System.String)

You still have to puzzle through it, but it’s a lot easier to see what went wrong (the return from the passed delegate needs to be converted to a string before calling Return()).

But Wait, There’s More

Since Sigil is already doing some correctness validation that requires waiting until a method is “finished” (like making sure branches end up with their “expected” stacks), it has all it needs to automate a lot of the tedious optimizations you typically do by hand when using ILGenerator.

For example, “Emit(OpCodes.Ldc_I4, {count})” shouldn’t be used if {count} is between -1 and 8; but who wants to remember that, especially if you’re rapidly iterating?  Similarly, almost every branching instruction has a short form you should use when the offset (in bytes, not instructions) fits into a single byte.  Sigil automates all of that; you just call “LoadConstant” or “Branch” and move on.
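The constant-loading decision looks roughly like this (my own sketch of the selection logic, not Sigil’s code):

```csharp
using System;
using System.Reflection.Emit;

static class ShortForms
{
    // Pick the smallest encoding for loading an int constant
    public static void EmitLoadConstant(ILGenerator il, int value)
    {
        switch (value)
        {
            case -1: il.Emit(OpCodes.Ldc_I4_M1); return;
            case 0: il.Emit(OpCodes.Ldc_I4_0); return;
            case 1: il.Emit(OpCodes.Ldc_I4_1); return;
            case 2: il.Emit(OpCodes.Ldc_I4_2); return;
            case 3: il.Emit(OpCodes.Ldc_I4_3); return;
            case 4: il.Emit(OpCodes.Ldc_I4_4); return;
            case 5: il.Emit(OpCodes.Ldc_I4_5); return;
            case 6: il.Emit(OpCodes.Ldc_I4_6); return;
            case 7: il.Emit(OpCodes.Ldc_I4_7); return;
            case 8: il.Emit(OpCodes.Ldc_I4_8); return;
        }

        if (value >= sbyte.MinValue && value <= sbyte.MaxValue)
            il.Emit(OpCodes.Ldc_I4_S, (sbyte)value); // 2-byte encoding
        else
            il.Emit(OpCodes.Ldc_I4, value);          // 5-byte encoding
    }
}
```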

Sigil also automates picking the appropriate version of some opcodes based on type.  In raw IL, there are separate instructions for loading bytes, ints, arbitrary ValueTypes, and reference types from an array.  Using ILGenerator you’d have to pick the appropriate opcode, but with Sigil you just call “LoadElement()” and the preceding instructions are used to figure it out.
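That selection might be sketched like so (my own illustration; Sigil handles more cases, like shorts and native ints):

```csharp
using System;
using System.Reflection.Emit;

static class ElementLoads
{
    // One LoadElement() call in Sigil; many opcodes underneath
    public static OpCode ElementLoadOpCode(Type elementType)
    {
        if (elementType == typeof(sbyte)) return OpCodes.Ldelem_I1;
        if (elementType == typeof(byte)) return OpCodes.Ldelem_U1;
        if (elementType == typeof(int)) return OpCodes.Ldelem_I4;
        if (elementType == typeof(long)) return OpCodes.Ldelem_I8;
        if (elementType == typeof(float)) return OpCodes.Ldelem_R4;
        if (elementType == typeof(double)) return OpCodes.Ldelem_R8;
        if (elementType.IsValueType) return OpCodes.Ldelem; // arbitrary value types
        return OpCodes.Ldelem_Ref;                          // reference types
    }
}
```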

Finally, Sigil detects when the Tailcall and Readonly prefixes can be used and inserts them into the command stream.  It’s not possible to detect when the Volatile and Unaligned prefixes should be inserted (at least so far as I know), but Sigil does only allow them to be added in conjunction with opcodes they’re legal on, which is still better than ILGenerator.

Unconditional Branch Caveat

There is one pain point Sigil does not yet address, though I have plans.  Right now, Sigil requires type assertions immediately after unconditional branches (Br and Leave, to be precise) as it’s incapable of inferring the state of the stack in this case.  This doesn’t come up quite as much as you’d expect, since truly unconditional branches are rare; especially when creating DynamicMethods.

Asserting types is attached to marking labels, and looks like the following:

var il = Emit<Func<int>>.NewDynamicMethod();

var b0 = il.DefineLabel("b0"), b1 = il.DefineLabel("b1"), b2 = il.DefineLabel("b2");
il.LoadConstant("abc");
il.Branch(b0); // jump to b0 with "abc"

il.MarkLabel(b1, new [] { typeof(int) }); // incoming: 3
il.LoadConstant(4);
il.Call(typeof(Math).GetMethod("Max", new[] { typeof(int), typeof(int) }));
il.Branch(b2); // jump to b2 with 4

il.MarkLabel(b0, new[] { typeof(string) }); // incoming: "abc"
il.Pop();
il.LoadConstant(3);
il.Branch(b1); // jump to b1 with 3

il.MarkLabel(b2, new[] { typeof(int) }); // incoming: 4
il.Return();

You can assert types along with any MarkLabel call; in cases where Sigil can infer the stack state, a SigilVerificationException will be thrown when there’s a mismatch.

Check It Out, Try It Out, Break It

Sigil’s source is on github, and it’s available on Nuget.

While I’ve done a fair amount of testing and converted some projects from ILGenerator to Sigil to flush out bugs, I wouldn’t at all be surprised if there are more.  Likewise, I wouldn’t be shocked if Sigil’s validation has some holes or if it’s too strict in some cases.

So grab Sigil and try it out, I love working on this sort of stuff so don’t be shy about opening issues.

Public Broadcasting: A Self-Describing Wrapper Around protobuf-net

Familiar with Protocol Buffers?  It’s a neat binary serialization format out of Google which aims to be efficient and extensible.  Java, C++, and Python have “official” libraries, and there are a plethora of libraries for other platforms.

In fact, we’ve been using protobuf-net over at Stack Exchange for a good long time, since August 18th, 2010 if our commit logs are to be believed.  It’s famously fast and simple to use, which has let it worm its way into basically all of our projects.  We even got the mind behind it to come and slum it with chumps like me.


There is one pain point to using Protocol Buffers and that’s, well, defining the protocol bits.  You can either define your messages in .proto files and compile them, or if you’re using protobuf-net (or similar) annotate your existing types.

With protobuf-net a typical class ends up looking like so:

[ProtoContract]
public class RedisInboxItem : IComparable<RedisInboxItem>
{
  [ProtoMember(1)]
  public DateTime CreationDate { get; set; }
  [ProtoMember(2, IsRequired = true)]
  public InboxItemType Type { get; set; }
  [ProtoMember(3)]
  public int Id { get; set; }
  [ProtoMember(4)]
  public string Title { get; set; }
  [ProtoMember(5)]
  public bool IsPersisted { get; set; }

  // ...
}

This isn’t a problem if you’re marshaling between different code bases or acting as a service – you need to document the types involved after all; might as well do it in annotations or .proto files.  But if you’re communicating within the same code base, or with internal services, this manual protocol declaration can be onerous and a tad error prone.

Easily beat Brock, or eventually have a giant fire-breathing dragon?

Trade-Offs For Convenience

What would be really keen is a wrapper around Protocol Buffers that carries its own description, so that mapping tag numbers to fields doesn’t need any foreknowledge of the serialized types.  It’s also been a while since I committed any really terrible ILGenerator code.

So I wrote one.  It trades some of protobuf-net’s speed and some of Protocol Buffers’ compactness for the convenience of not having to use .proto files or annotations.  I call it Public Broadcasting, because that’s the first phrase that popped into my head with a P followed by a B.  Naming is hard.

In short, what Public Broadcasting does is provide a structurally typed wrapper around protobuf-net.  Any member names that match when deserializing are mapped correctly, any missing members are ignored, and any safe conversions (like byte -> int or Enum <-> String) happen automatically.  In addition, Nullable<struct>’s will be converted to default(struct) if necessary when deserializing.  Inheritance, type names, interfaces, and so on are ignored; we care about how the data “looks” not how it’s “named”.
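To make those mapping rules concrete, here is a toy sketch in Python (illustrative only — Public Broadcasting actually does this in generated code against real .NET types, and the schema-as-dict shape here is my own stand-in): members are matched by name, extras and absences are ignored, safe conversions are applied, and nulls collapse to defaults.

```python
def structurally_map(source, target_schema):
    """Illustrative structural mapping, not Public Broadcasting's real code.

    target_schema maps member name -> Python type standing in for the
    .NET member type.  Members missing from source are ignored; extra
    source members are dropped; None (standing in for Nullable<T>)
    becomes the target type's default when the target isn't nullable.
    """
    result = {}
    for name, target_type in target_schema.items():
        if name not in source:
            continue               # missing members are simply ignored
        value = source[name]
        if value is None:
            value = target_type()  # Nullable<struct> -> default(struct)
        elif not isinstance(value, target_type):
            value = target_type(value)  # safe conversion, e.g. byte -> int
        result[name] = value
    return result

serialized = {"Id": 7, "Title": "hello", "Extra": True, "Score": None}
schema = {"Id": int, "Title": str, "Score": int}
print(structurally_map(serialized, schema))  # {'Id': 7, 'Title': 'hello', 'Score': 0}
```

Note that "Extra" is silently dropped and the null "Score" becomes 0, mirroring the missing-member and Nullable rules described above.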

Public Broadcasting works by describing a type using Protocol Buffers, then including that description in an “envelope” along with the actual data.  When deserializing, Public Broadcasting constructs a new type with all the appropriate annotations and then lets protobuf-net do the heavy lifting of deserializing the data.  Since we only care about the “structure” of the data a good deal of .NET’s type system is discarded, namely only classes (no distinction between ReferenceType and ValueType), Lists, Dictionaries, Enumerations, Nullables, and “simple” types (int, byte, string, GUID, etc.) are used.

Since the original type being deserialized doesn’t actually need to be known with Public Broadcasting, there is a Deserialize method which returns dynamic.  Although dynamic is something I normally avoid, in the “grab a data object, then discard” style in which I typically use protobuf-net, I think it’s a good fit.

In my (admittedly limited) testing, Public Broadcasting is typically within an order of magnitude of raw protobuf-net usage.  And protobuf-net is hella-fast, so even 10x slower is going to be plenty fast most of the time.

In terms of compactness, Public Broadcasting is going to be approximately the “length of all involved strings” larger than raw protobuf-net.  As soon as you start having many instances or recursive types this overhead shrinks relative to the size of the message, as Public Broadcasting doesn’t repeat member names like JSON.

However, if you absolutely must have the smallest messages and the fastest (de)serializations then you’ll want to use protobuf-net directly; overhead imposed by Public Broadcasting is noticeable.

If You Like What You’ve Read

Grab the source, or pull the dll off of Nuget, and let me know what you think.

Extending Type Inferencing in C#

A while ago I used the first Roslyn CTP to hack truthiness into C#.  With the second Roslyn CTP dropping on June 5th, now pretty close to feature complete, it’s time to think up some fresh language hacks.

Roslyn What Now?

For those unfamiliar, Roslyn is Microsoft’s “Compiler as a Service” for the VB.NET and C# languages.  What makes Roslyn so interesting is that it exposes considerably more than “Parse” and “Compile” methods; you also have full access to powerful code transformations and extensive type information.

Roslyn is also very robust in the face of errors, making the most sense it can of malformed code and providing as complete a model as possible.  This is very handy for my evil purposes.

Picking A Defect

Now I happen to really like C#, it’s a nice, fast, fairly rapidly evolving, statically typed language.  You get first class functions, a garbage collector, a good type system, and all that jazz.  Of course, no language is perfect so I’m going to pick one of my nits with C# and hack up a dialect that addresses it using Roslyn.

The particular defect is that you must specify return types in a method declaration; this is often pointless repetition, in my opinion.

Consider this method:

static MvcHtmlString ToDateOnlySpanPretty(DateTime dt, string cssClass)
{
  return MvcHtmlString.Create(String.Format(@"<span title=""{0:u}"" class=""{1}"">{2}</span>", dt, cssClass, ToDateOnlyStringPretty(dt, DateTime.UtcNow)));
}

Is that first MvcHtmlString really necessary?  Things get worse when you start returning generic types; oftentimes I find myself writing code like:

Dictionary<string, int> CalcStatistics()
{
  var result = new Dictionary<string, int>();
  // ...
  return result;
}

Again, the leading Dictionary<string, int> is really not needed; the type returned is quite apparent, so we’re really just wasting key strokes there.

This pointless repetition was addressed for local variables in C# 3.0 with the var keyword.  With Roslyn, we can hack up a pre-processor that allows var as a method’s return type.  The above would become the following:

var CalcStatistics()
{
  var results = new Dictionary<string, int>();
  // ...
  return results;
}

The Rules

Wanton type inferencing can be a bit dangerous; it makes breaking contracts pretty easy to do, for one.  So I’m imposing a few constraints on where “var as a return” can be used; for the most part these are arbitrary and easily changed.

  • var can only be used on non-public and non-protected methods
  • if a method returns multiple types, a type will be chosen for which all returned types have an implicit conversion with the following preferences
    • classes before interfaces
    • derived types before base types
    • generic types before non-generic types
    • in alphabetical order
    • with the exceptions that Object is always considered last and IEnumerable (and IEnumerable<T>) are considered before other interfaces
  • a method with empty returns will become void

The last rule may be a little odd, as C# doesn’t really have a notion of a “void type” (which is something of a defect itself, in my opinion).  Which type is chosen for a return is well-defined, so it’s predictable (always a must in a language feature) and attempts to match “what you meant”.
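Those preference rules amount to a sort over the candidate types.  Here’s a language-agnostic sketch in Python — the real implementation works over Roslyn’s TypeSymbols; the dict-shaped stand-ins and the exact tie-break sequencing are my own simplification:

```python
# Illustrative sketch of the candidate-ordering rules, not the
# Roslyn-based implementation.  Each candidate is a dict describing
# one type that every return statement can implicitly convert to.
def preference_key(t):
    name = t["name"]
    return (
        name == "Object",                    # Object is always considered last
        t["is_interface"],                   # classes before interfaces
        not name.startswith("IEnumerable"),  # IEnumerable(<T>) before other interfaces
        -t["inheritance_depth"],             # derived types before base types
        not t["is_generic"],                 # generic before non-generic
        name,                                # finally, alphabetical order
    )

candidates = [
    {"name": "Object",         "is_interface": False, "inheritance_depth": 0, "is_generic": False},
    {"name": "IList<T>",       "is_interface": True,  "inheritance_depth": 0, "is_generic": True},
    {"name": "IEnumerable<T>", "is_interface": True,  "inheritance_depth": 0, "is_generic": True},
    {"name": "List<T>",        "is_interface": False, "inheritance_depth": 2, "is_generic": True},
]
print(min(candidates, key=preference_key)["name"])  # List<T>
```

A concrete class beats every interface, IEnumerable&lt;T&gt; beats the other interfaces, and Object always loses.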

I made one more handy extension, which is allowing var returning methods to return anonymous types.  You can sort of do this now, either passing around Object or using horrible grotty hacks (note, don’t actually do that); but since the type has no name you can’t do this cleanly.  Since var returning methods don’t need to name their return types, I figured I might as well address that too.

Let’s See Some Code

Actually loading a solution and parsing/compiling is simple (and boring), check out the source for how to do it.

The first interesting bit is finding all methods that are using “var” as a return type.

// eligibleMethods is a List<MethodDeclarationSyntax>
var needsInferencing =
  eligibleMethods.Where(
    w => (w.ReturnType is IdentifierNameSyntax) &&
         ((IdentifierNameSyntax)w.ReturnType).Identifier.ValueText == "var"
  ).ToList();

This literally says “find the methods that have return type tokens which are ‘var’”.  Needless to say, this would be pretty miserable to do from scratch.

The next interesting bit: we grab all the return statements in a method and get the types they return.

var types = new List<TypeInfo>();
// returns is a List<ReturnStatementSyntax>
foreach (var ret in returns)
{
  var exp = ret.Expression;

  // a bare "return;" has no expression to inspect
  if (exp == null) continue;

  var type = model.GetTypeInfo(exp);
  types.Add(type);
}

Note the “model.GetTypeInfo()” call there, that’s Roslyn providing us with detailed type information (which we’ll be consuming in a second) despite the fact that the code doesn’t actually compile successfully at the moment.

We move on to the magic of actually choosing a named return type.  Roslyn continues to give us a wealth of type information, so all possible types are pretty easy to get (and ordering them is likewise simple).

// info is an IEnumerable<TypeInfo>
var allPossibilities = info.SelectMany(i => i.Type.AllInterfaces).OfType<NamedTypeSymbol>().ToList();
allPossibilities.AddRange(info.Select(s => s.Type).OfType<NamedTypeSymbol>());
foreach (var i in info)
{
  var @base = i.Type.BaseType;
  while (@base != null)
  {
    allPossibilities.Add(@base);
    @base = @base.BaseType;
  }
}

Rewriting the method with a new return type is a single method call, and replacing the “from source” method with our modified one is just as simple.

And that’s basically it for the non-anonymous type case.

Anonymous Types

For returning anonymous types, everything up to the “rewrite the method” step is basically the same.  The trouble is, even though we know the “name” of the anonymous type we can’t use it.  What we need to do instead is “hoist” the anonymous type into an actual named type and return that instead.

This is kind of complicated, but you can decompose it into the following steps:

  1. Make a note of returned anonymous types
  2. Rewrite methods to return a new type (that doesn’t exist yet)
  3. Rewrite anonymous type initializers to use the new type
  4. Create the new type declaration

Steps #1 and #2 are easy, Roslyn doesn’t care that your transformation doesn’t make sense yet.

Step #3 requires a SyntaxRewriter and a way to compare anonymous types for equality.  The rewriter is fairly simple, the syntax for an anonymous type initializer and a named one vary by very little.

Comparing anonymous types is a little more complicated.  By the C# spec, anonymous types are considered equivalent if they have the same properties (by name and type) declared in the same order.

Roslyn gives us the information we need just fine, so I threw a helper together for it:

internal static bool AreEquivalent(TypeSymbol a, TypeSymbol b, Compilation comp)
{
  var aMembers = a.GetMembers().OfType<PropertySymbol>().ToList();
  var bMembers = b.GetMembers().OfType<PropertySymbol>().ToList();
  if (aMembers.Count != bMembers.Count) return false;

  for (var i = 0; i < aMembers.Count; i++)
  {
    var aMember = aMembers[i];
    var bMember = bMembers[i];
    if (aMember.Name != bMember.Name) return false;
    if (aMember.DeclaredAccessibility != bMember.DeclaredAccessibility) return false;

    var aType = aMember.Type;
    var bType = bMember.Type;
    var aName = aType.ToDisplayString(SymbolDisplayFormat.FullyQualifiedFormat);
    var bName = bType.ToDisplayString(SymbolDisplayFormat.FullyQualifiedFormat);
    if (aName == bName) continue;

    var conv = comp.ClassifyConversion(aType, bType);
    if (!conv.IsIdentity) return false;
  }

  return true;
}

Notice that Roslyn also gives us details about what conversions exist between two types, another thing that would be absolutely hellish to implement yourself.

The final step, adding the new type, is the biggest in terms of code, although it’s not really hard to understand (I also cheat a little).  Our hoisted type needs to quack enough like an anonymous type to not break any code, which means it needs all the expected properties and to override Equals, GetHashCode, and ToString.

This is all contained in one large method.  Rather than reproduce it here, I’ll show what it does.

Take the anonymous type:

new
{
  A = "Hello",
  B = 123,
  C = new Dictionary<string, string>()
}

This will get a class declaration similar to

internal class __FTI88a733fde28546b8ae4f36786d8446ec {
  public string A { get; set; }
  public int B { get; set; }
  public global::System.Collections.Generic.Dictionary<string, string> C { get; set; }

  public override string ToString() {
    return new { A, B, C }.ToString();
  }

  public override int GetHashCode() {
    return new { A, B, C }.GetHashCode();
  }

  public override bool Equals(object o) {
    __FTI88a733fde28546b8ae4f36786d8446ec other = o as __FTI88a733fde28546b8ae4f36786d8446ec;
    if (other == null) return new { A, B, C }.Equals(o);

    return
      (A != null ? A.Equals(other.A) :
      (other.A != null ? other.A.Equals(A) : true)) &&
      B == other.B &&
      (C != null ? C.Equals(other.C) : (other.C != null ? other.C.Equals(C) : true));
  }
}

The two big cheats here are the lack of a constructor (so it’s technically possible to modify the anonymous type, this is fixable but I don’t think it’s needed for a proof-of-concept) and using the equivalent anonymous type itself to implement the required methods.


I’m pretty pleased with Roslyn thus far, it’s plenty powerful.  There are still a bunch of limitations and unimplemented features in the current CTP though, so be aware of them before you embark on some recreational hacking.

In terms of complaints, some bits are a little verbose (though that’s gotten better since the first CTP), documentation is still lacking, and there are some fun threading assumptions that make debugging a bit painful.  I’m sure I’m Doing It Wrong in some places, so some of my complaints may be hogwash; I expect improvements in each subsequent release as well.

If it wasn’t apparent, the code here is all proof-of-concept stuff.  Don’t use in production, don’t expect it to be bug free, etc. etc.

You can check out the whole project on Github.

More, a CSS compiler

It’s hard to represent CSS in pictures, so I’m breaking this post up with puppies.

CSS is an… interesting technology.  As Wil Shipley put it, “CSS: If a horse were designed by a committee of camels.”  There’s just enough rough edges, weird decisions, and has-not-had-to-use-it-everyday blindness to make you want something better to work in.  While it’s still a draft (and I hope has no chance of becoming part of a standard), take a look at CSS Variables and despair that that was proposed.

I’m hardly the first (second, or probably even millionth) person to think this, so there are some alternatives to writing CSS out there.  At Stack Exchange we use LESS (specifically the dotLESS variant), Sass is also pretty popular just from straw polling developers though I’m not really familiar with it.

For kicks I started throwing my own CSS compiler together a few months ago that’s a little more radical than LESS (from which I took a great deal of inspiration).  I’m calling it More for now because: naming is hard, and I like the homage to LESS.  Although I am drawing explicit inspiration from LESS, More is not a LESS super-set.

The radical changes are

  • Order doesn’t matter
  • “Resets” are first class
  • Sprites are first class
  • Explicit copy selector syntax, in addition to mixins
More details below, as you might guess familiarity with LESS will make lots of this sound really familiar.  If you’d rather not read a 3,000+ word post, hop over to Github where the README takes a whack at summarizing the whole language.

Declaration Order Doesn’t Matter

In CSS, subsequent declarations override preceding ones; if you declare .my-class twice, the latter one’s rules win, and likewise for rules within the same block.  This is pretty wrong-headed in my opinion; you should be stating your intentions explicitly, not just tacking on new rules.  Combine this with how negative an impact bloated CSS can have on page load speeds (CSS and HTML are the bare minimum necessary for display, everything else can be deferred), and I can’t help but classify this as a misfeature.

In More, the order of selector blocks in the output is explicitly not guaranteed to match that of the input.  Resets, detailed below, allow you to specify that certain blocks precede all others, and @media blocks necessarily follow all others, but any other reliance on ordering is considered a bad practice.

Likewise, overriding rules in blocks is an explicit operation rather than a function of declaration order, the specifics are detailed below.

He wants your CSS structure to be explicit.


Resets

More natively supports the concept of a “reset”.  Blocks contained between @reset{…} will always be placed at the top of the generated CSS file, and become available for inclusion via reset includes.

@reset {
  h1 { margin: 0; }
}
.class h1 { color: blue; }

Compiles to

h1 { margin: 0; }
.class h1 { color: blue; }

Blocks within a @reset block cannot use @reset includes or selector includes, but can use mixins.  Reset blocks are also not referenced by selector includes.

@reset includes let you say “reset this block to the default stylings”, practically this means including any properties defined within a @reset{…} in the block with the @reset include.

@reset {
  h1 { margin: 0; }
}
.class h1 { color: blue; @reset(h1); }

Compiles to

h1 { margin: 0; }

.class h1 { color: blue; margin: 0; }

It is not necessary to specify the selector to reset to; an empty @reset() will reset to the selector of the block that contains it.  Note that for nested blocks this will be the innermost block’s selector.

@reset {
  :hover { color: red; }
  h1 { margin: 0; }
}
h1 {
  line-height: 15px;
  @reset();
}
a {
  color: blue;
  &:hover {
    font-weight: bold;
    @reset();
  }
}

Compiles to

:hover { color: red; }
h1 { margin: 0; }
h1 { margin: 0; line-height: 15px; }
a { color: blue; }
a:hover { font-weight: bold; color: red; }

Properties included by a @reset() (with or without optional selector) will never override those defined otherwise in a block.

@reset { a { color: red; height: 10px; } }
a { @reset(); color: blue; }

Compiles to

a { color: red; height: 10px; }
a { color: blue; height: 10px; }

The intent behind resets is to make it easier to define a style’s “default element stylings” and subsequently reset to them easily.

You know what’s cool? Not generating your sprite files independent of the CSS that uses them.


Sprites

If you include a non-trivial number of images on a site, you really ought to be collapsing them into sprites.  There are plenty of tools for doing this out there, but tight integration with CSS generation (if you’ve already decided to compile your CSS) is an obvious win.

To that end, More lets you generate sprites with the following syntax:

@sprite('/img/sprite.png') {
  @up-mod = '/img/up.png';
  @down-mod = '/img/down.png';
}

This declaration creates the sprite.png from up.png and down.png, and adds the @up-mod and @down-mod mixins to the file.  Mixins created from sprites are no different than any other mixins.

For example:

.up-vote {
  @up-mod();
}

could compile to

.up-vote {
  background-image: url(img/sprite.png);
  background-position: 0px 0px;
  background-repeat: no-repeat;
  width: 20px;
  height: 20px;

For cases where there are other properties that you want to include semantically “within” a sprite, these generated mixins take one mixin as an optional parameter.  The following would be valid, for example:

.down-vote {
  @down-mod(@some-mixin); /* @some-mixin being any previously defined mixin */
}

Finally, there are command line options for running executable on generated sprite files.  It’s absolutely worthwhile to squeeze the last few bytes out of a PNG file you’ll serve a million times, but actually implementing such a compression algorithm is outside the scope of More.  While this can be fairly easily hacked together, as a built-in I hope it serves as a “you should be doing this” signal to developers.  I personally recommend PNGOUT for this purpose.
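The offset bookkeeping behind sprite generation is simple enough to sketch.  This Python toy (not More’s actual generator, which reads and writes real PNG files) packs images into a horizontal strip and computes the properties each generated mixin would carry:

```python
def pack_sprites(images):
    """Pack (name, width, height) images left-to-right into one strip.

    Returns a dict of mixin name -> the CSS properties that mixin would
    emit.  Illustrative only: a real generator also composites pixels.
    """
    mixins = {}
    x = 0
    for name, w, h in images:
        mixins[name] = {
            "background-image": "url(img/sprite.png)",
            # each image's x-offset is the sum of the widths before it
            "background-position": f"-{x}px 0px" if x else "0px 0px",
            "background-repeat": "no-repeat",
            "width": f"{w}px",
            "height": f"{h}px",
        }
        x += w
    return mixins

mixins = pack_sprites([("up-mod", 20, 20), ("down-mod", 20, 20)])
print(mixins["down-mod"]["background-position"])  # -20px 0px
```

The second image’s negative background-position is exactly what shifts the strip so only that image shows through the element’s box.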

Include Selectors

Sometimes, you just want to copy CSS around.  This can be for brevity’s sake, or perhaps because you want to define a set of selectors in terms of some other selector.   To do that with More, just include the selector in @() like so:

div.foo { color: #abc; }
div.bar {
  background-color: #def;
  @(div.foo);
}
This syntax works with more complicated selectors as well, including comma delimited ones, making it possible to include many distinct blocks in a single statement.  It is not an error if a selector does not match any block in the document.

Selector includes are one of the last things to resolve in a More document, so you are copying the final CSS around.  @(.my-class) will copy the final CSS in the .my-class block, but not any of the mixin invocations, nested blocks, or variable declarations that went into generating it.

For example:

.bar {
  foo: @c ?? #eee;
  &:hover {
    buzz: 123;
  }
}
.fizz {
  @c = #aaa;
  @(.bar);
}

Will generate:

.bar { foo: #eee; }
.bar:hover { buzz: 123; }
.fizz { foo: #eee; }

As @(.bar) copies only the properties of .bar in the final document.

Not stolen, inspired. *cough*

LESS Inspired Bits

That’s it for the really radical new stuff, most of the rest of More is heavily inspired by (though rarely syntactically identical to) LESS.  Where the syntax has been changed, it has been changed for reasons of clarity (at least to me, clarity is inherently subjective).

More takes pains to “stand out” from normal CSS, as it is far more common to be reading CSS than writing it (as with all code).  As such, I believe it is of paramount importance that as little of a file need be read to understand what is happening in a single line of More.

This is the reason for heavy use of the @ character, as it distinguishes between “standard CSS” with its very rare at-rules and More extensions when scanning a document.  Likewise, the = character is used for assignment instead of :.


Variables

More allows you to define constants that can then be reused by name throughout a document.  They can be declared globally, as well as in blocks like mixins (which are detailed below).

@a = 1;
@b = #fff;

These are handy for defining or overriding common values.

Another example:

@x = 10px;
img {
  @y = @x*2;
  width: @y;
  height: @y;
}

It is an error for one variable to refer to another in the same scope before it is declared, but variables in inner scopes can refer to those in containing scopes irrespective of declaration order.  Variables cannot be modified once declared, but inner variables can shadow outer ones.
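These are the classic lexical-scoping rules, and they fit in a small sketch.  The Python below illustrates the behavior described above (More’s actual implementation details may differ):

```python
class Scope:
    """A lexically scoped, write-once symbol table."""

    def __init__(self, parent=None):
        self.parent = parent
        self.vars = {}

    def declare(self, name, value):
        # variables cannot be modified (or redeclared) once declared
        if name in self.vars:
            raise ValueError(f"@{name} already declared in this scope")
        self.vars[name] = value

    def resolve(self, name):
        # walk outward through containing scopes until found
        scope = self
        while scope is not None:
            if name in scope.vars:
                return scope.vars[name]
            scope = scope.parent
        raise ValueError(f"@{name} is not declared")

globals_ = Scope()
globals_.declare("x", "10px")
img = Scope(parent=globals_)
img.declare("x", "20px")      # inner scopes can shadow outer ones
print(img.resolve("x"))       # 20px
print(globals_.resolve("x"))  # 10px, the outer value is untouched
```

Because declarations happen in document order, a same-scope forward reference naturally fails at resolve time, matching the rule above.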

Puppies, (debatably) not annoying.

Nested Blocks

One of the annoying parts of CSS is the repetition when declaring a hierarchy of selectors.  Selectors for #foo, #foo.bar, #foo.bar:hover, and so on must appear in full over and over again.

More lets you nest CSS blocks, again much like LESS.

#id {
  color: green;
  .class {
    background-color: red;
  }
  &:hover {
    border: 5px solid red;
  }
}

would compile to

#id {
  color: green;
}
#id .class {
  background-color: red;
}
#id:hover {
  border: 5px solid red;
}

Note the presence of LESS’s & operator, which means “concat with parent”.  Use it when you don’t want a space (a descendant selector) between nested selectors; this is often the case with pseudo-class selectors.
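The flattening rule fits in a couple of lines.  A Python sketch of it (illustrative, not More’s parser):

```python
def flatten(parent, child):
    """Join a nested selector to its parent, per the & rule above."""
    if child.startswith("&"):
        return parent + child[1:]  # "&" concatenates directly: #id:hover
    return parent + " " + child    # otherwise, descendant combinator: #id .class

print(flatten("#id", ".class"))   # "#id .class"
print(flatten("#id", "&:hover"))  # "#id:hover"
```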


Mixins

Mixins effectively let you define functions which you can then include in CSS blocks.

@alert() {
  background-color: red;
  font-weight: bold;
  font-size: 14px;
}

Mixins can also take parameters.

@alert(@c) { color: @c; }

Parameters can have default values, or be optional altogether.   Default values are specified with an =<value>, optional parameters have a trailing ?.

@alert(@c=red, @size?) { color: @c; font-size: @size; }

Mixins can be passed as parameters to mixins, as can selector inclusions.

@outer(@a, @b) { @a(); @b(green); }
@inner(@c) { color: @c; }
h1 { font-size: 12px; }
img { @outer(@(h1), @inner); }

Will compile to

h1  { font-size: 12px; }
img { font-size: 12px; color: green; }

As in LESS, the special variable @arguments is bound to all the parameters passed into a mixin.

@mx(@a, @b, @c) { font-family: @arguments; }
p { @mx("Times New Roman", Times, serif); }

Compiles to

p { font-family: "Times New Roman", Times, serif; }

Parameter-less mixins do not define @arguments, and it is an error to name any parameter or variable @arguments.

Puppy function: distract reader from how much text is in this post.


Functions

More implements the following functions:

  • blue(color)
  • darken(color, percentage)
  • desaturate(color, percentage)
  • fade(color, percentage)
  • fadein(color, percentage)
  • fadeout(color, percentage)
  • gray(color)
  • green(color)
  • hue(color)
  • lighten(color, percentage)
  • lightness(color)
  • mix(color, color, percentage/decimal)
  • nounit(any)
  • red(color)
  • round(number, digits?)
  • saturate(color, percentage)
  • saturation(color)
  • spin(color, number)

Most of these functions are held in common with LESS, to ease transition.  All functions must be preceded by @, for example:

@x = #123456;
p {
   color: rgb(@red(@x), 0, 0);

Note rgb(…), hsl(…), and other CSS constructs are not considered functions, but they will convert to typed values if used in variable declarations, mixin calls, or similar constructs.
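Most of these functions are thin wrappers over channel extraction and HSL math.  As a rough illustration of what @red and @lighten conceptually do — sketched in Python with the standard library’s colorsys, and with rounding/clamping choices that are my own, not necessarily More’s:

```python
import colorsys

def red(hex_color):
    """red(#123456) -> the red channel as a number (0x12 == 18)."""
    return int(hex_color.lstrip("#")[0:2], 16)

def lighten(hex_color, percentage):
    """Raise the HSL lightness of a color by the given percentage."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = min(1.0, l + percentage / 100)  # clamp at full lightness
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return "#%02x%02x%02x" % (round(r * 255), round(g * 255), round(b * 255))

print(red("#123456"))          # 18
print(lighten("#000000", 50))  # #808080, black raised to mid-grey
```

darken, saturate, desaturate, and spin are the same idea applied to the other HSL components.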

In String Replacements

More strives to accept all existing valid CSS, but also attempts to parse the right hand side of properties for things like math and type information.  These two goals compete with each other, as there’s a lot of rather odd CSS out there.

img {
  filter: progid:DXImageTransform.Microsoft.MotionBlur(strength=9, direction=90);
  font-size: 10px !ie7;
}

It’s not reasonable, in my opinion, to expect style sheets with these hacks to be re-written just to take advantage of More, but it can’t parse them either.

As a compromise, More will accept any of these rules but warn against their use.  Since a fancier parsing scheme isn’t possible, a simple string replacement is done instead.

img {
  filter: progid:DXImageTransform.Microsoft.MotionBlur(strength=@strength, direction=90);
  font-size: @(@size * 2px) !ie7;
}

The above demonstrates the syntax.  Any variables placed in the value will be simply replaced, more complicated expressions need to be wrapped in @().

Although this feature is intended mostly as a compatibility hack, it is also available in quoted strings.
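The replacement pass itself is plain string substitution.  A Python sketch of the two cases described above, with a deliberately narrow @() evaluator (handling only “variable * N&lt;unit&gt;”) standing in for More’s real expression engine:

```python
import re

def substitute(value, variables):
    """Apply in-string replacements to an otherwise-unparseable value."""
    # @( expr ): evaluate the inner expression first
    def eval_expr(match):
        var, factor, unit = match.group(1), int(match.group(2)), match.group(3)
        return f"{variables[var] * factor}{unit}"
    value = re.sub(r"@\(@(\w+)\s*\*\s*(\d+)(\w+)\)", eval_expr, value)
    # bare @name: straight textual replacement
    value = re.sub(r"@(\w+)", lambda m: str(variables[m.group(1)]), value)
    return value

vars_ = {"strength": 9, "size": 5}
print(substitute("strength=@strength, direction=90", vars_))  # strength=9, direction=90
print(substitute("@(@size * 2px) !ie7", vars_))               # 10px !ie7
```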

Other, Less Radical, Things

There are a couple of nice More features that aren’t really LESS inspired, but would probably fit right in a future LESS version.  They’re not nearly as radical additions to the idea of a CSS compiler.

Dressing up your puppies, optional.

Optional Values

Mixin invocations, parameters, variables, and effectively whole properties can be marked “optional”.

.example {
  color: @c ?? #ccc;
  @foo()?;
}

This block would have a color property of #ccc if @c were not defined, making overrides quite simple (you either define @c or you don’t).  Likewise, the foo mixin would only be included if it were defined (without the trailing ? it would be an error).

A trailing ? on a selector include will cause a warning, but not an error.  It doesn’t make sense, as selector includes are always optional.

Entire properties can be optional as follows:

.another-example {
  height: (@a + @b * @c)?;
}

If any of a, b, or c are not defined then the entire height property would not be evaluated.  Without the grouping parenthesis and trailing ? this would result in an error.
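Conceptually, an optional expression evaluates to “no property” rather than an error when a variable is missing.  A Python sketch of that rule (More’s real evaluator handles full arithmetic; sum() is just a stand-in):

```python
def eval_optional(expr_vars, scope, optional):
    """Resolve every variable an expression needs, or drop the property.

    Returns the evaluated value, or None (meaning "emit no property")
    when a variable is missing and the expression was marked optional.
    """
    try:
        values = [scope[v] for v in expr_vars]  # KeyError if undefined
    except KeyError:
        if optional:
            return None  # the whole property is silently dropped
        raise            # without a trailing ?, this is an error
    return sum(values)   # stand-in for the real arithmetic

scope = {"a": 1, "b": 2}
print(eval_optional(["a", "b"], scope, optional=True))       # 3
print(eval_optional(["a", "b", "c"], scope, optional=True))  # None: @c is missing
```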


Overrides

When using mixins, it is not uncommon for multiple copies of the same property to be defined.  This is especially likely if you’re using mixins to override default values.  To alleviate this ambiguity, it is possible to specify that a mixin inclusion overrides rules in the containing block.

@foo(@c) { color: @c; }
.bar {
  color: blue;
  @foo(red)!;
}

This would create a .bar block with a single color property with the value of “red”.  More will warn, but not error, when the same property is defined more than once in a single block.

You can also force a property to always override (regardless of trailing ! when included) by using !important.  More will not warn in these cases, as it’s expressed intent.

You can also use a trailing ! on selector includes, to the same effect.

Puppies With Hats

Values With Units

Values can be typed, and will be correctly coerced when possible, resulting in an error otherwise.  This makes it easier to catch nonsensical statements.

@mx(@a, @b) { height: @a + @b; }
  @mx(12px, 14); // Will evaluate to height: 26px;
  @mx(1in, 4cm); // Will evaluate to height: 6.54cm; (or equivalent)
  @mx(15, rgb(50,20,10)); // Will error

Similarly, any function which expects multiple colors can take colors of any form (hsl, rgb, rgba, hex triples, or hex sextuples).

When needed, units can be explicitly stripped from a value using the built in @nounit function.
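The unit arithmetic boils down to converting operands to a common measure.  A Python sketch of the three cases in the example above — the conversion table and the “normalize to cm” choice are my simplification; More may normalize differently:

```python
CM_PER = {"cm": 1.0, "mm": 0.1, "in": 2.54}  # absolute lengths, in cm

def add(a, a_unit, b, b_unit):
    """Add two typed values, coercing units where that is safe."""
    if a_unit is None:
        a_unit = b_unit      # a unitless operand adopts the other's unit
    if b_unit is None:
        b_unit = a_unit      # e.g. 12px + 14 -> 26px
    if a_unit == b_unit:
        return a + b, a_unit
    if a_unit in CM_PER and b_unit in CM_PER:
        total = a * CM_PER[a_unit] + b * CM_PER[b_unit]
        return total, "cm"   # e.g. 1in + 4cm -> 6.54cm
    raise TypeError(f"cannot add {a_unit} and {b_unit}")

print(add(12, "px", 14, None))  # (26, 'px')
print(add(1, "in", 4, "cm"))    # (6.54, 'cm')
```

Adding a number to a color, as in the erroring example above, has no sensible coercion and so raises.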


Includes

You can include other More or CSS files using the @using directive.  Any included CSS files must be parse-able as More (which is a CSS super-set, although a strict one).  Any variables, mixins, or blocks defined in any includes can be referred to at any point in a More file; include order is unimportant.

@using can have a media query, this results in the copied blocks being placed inside of a @media block with the appropriate query specified.  More will error if the media query is unparseable.

At-rules, which are usually illegal within @media blocks, will be copied into the global context; this applies to mixins as well as @font-face, @keyframes, and so on.  This makes such at-rules available in the including file, allowing for flexibility in project structure.

More does allow normal CSS @imports.  They aren’t really advisable for speed reasons.  @imports must appear at the start of a file, and More will warn about @import directives that had to be re-ordered as that is not strictly compliant CSS in the first place.

CSS3 Animations

More is aware of the CSS3 @keyframes directive, and gives you all the same mixin and variable tools to build them.

@mx(@offset) { top: @offset * 2px; left: @offset + 4px; }
@keyframes my-anim {
  from { @mx(5); }
  to { @mx(15); }
}

Compiles to

@keyframes my-anim {
  from { top: 10px; left: 9px; }
  to { top: 30px; left: 19px; }
}

It is an error to include, either via a mixin or directly, a nested block in an animation block.  More will also accept (and emit unchanged) the Mozilla and Webkit prefixed versions of @keyframes.

Variables can be declared within both @keyframes declarations and the inner animation blocks, the following would be legal More for example.

@keyframes with-vars {
  @a = 10;
  from { 
    @b = @a * 2;
    top: @b + 5px;
    left: @b + 3px;
  }
  to { top: @a; left: @a; }
}

It is an error to place @keyframes within @media blocks or mixins.  As with @media, @keyframes included via a @using directive will be pulled into the global scope.

More Is A CSS Superset

More strives to accept all existing CSS as valid More.  In-string replacements are a concession to this, as is @import support; both of which were discussed above.  @charset, @media, and @font-face are also supported in pursuit of this goal.

@charset may be declared in any file, even those referenced via @using.  It is an error for more than one character set to be referred to in a final file.  More will warn if an unrecognized character set is used in any @charset declaration.

@media declarations are validated, with More warning on unrecognized media types and erroring if no recognized media types are found in a declaration.  Blocks in @media statements can use mixins, but cannot declare them.  Selector includes first search within the @media block, and then search in the outer scope.  It is not possible for a selector include outside of a @media block to refer to a CSS block inside of one.

By way of example:

img { width: 10px; @(.avatar); }
p { font-color: grey; }
@media tv {
  p { font-color: black; }
  .avatar { border-width: 1px; @(p); }
  .wrapper { height:10px; @(img); }
}

Will compile to:

img { width: 10px; }
p { font-color: grey; }
@media tv {
  p { font-color: black; }
  .avatar { border-width: 1px; font-color: black; }
  .wrapper { height:10px; width: 10px; }
}

@font-face can be used as expected, however it is an error to include any nested blocks in a @font-face declaration.  More will warn if a @font-face is declared but not referred to elsewhere in the final CSS output; this is generally a sign of an error, but the possibility of another CSS file (or inline style) referring to the declared font remains.  More will error if no font-family or src rule is found in a @font-face block.

The pun is the highest form of humor.


More minifies its output (based on a command line switch), so you don’t need a separate CSS minifier.  It’s not the best minifier out there, but it does an OK job.

In particular it removes unnecessary white space and quotation marks, and reduces color and size declarations; “#008000” becomes “green”, “10mm” becomes “1cm”, and so on.

A contrived example:

  a: #aabbcc;
  b: rgb(100, 50, 30);
  c: #008000;
  d: 10mm;
  e: 1.00;
  f: 2.54cm;

Would become (leaving in white space for reading clarity):

  a: #abc;
  b: #64321e;
  c: green;
  d: 1cm;
  e: 1;
  f: 1in;

This minification is controlled by a command line switch.
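The reductions in the example above can be sketched roughly like this.  This is a hypothetical Python illustration of the idea, not More's actual implementation: shorten a six-digit hex color when its digit pairs repeat, and pick whichever equivalent length unit prints shortest.

```python
def shorten_hex(color):
    # "#aabbcc" -> "#abc" when each channel's two hex digits repeat
    if len(color) == 7 and all(color[i] == color[i + 1] for i in (1, 3, 5)):
        return "#" + color[1] + color[3] + color[5]
    return color

def reduce_length(value, unit):
    # pick whichever equivalent measurement prints shortest:
    # 10mm -> 1cm, 2.54cm -> 1in
    mm = value * {"mm": 1.0, "cm": 10.0, "in": 25.4}[unit]
    candidates = [(mm, "mm"), (mm / 10.0, "cm"), (mm / 25.4, "in")]
    # "%g" drops trailing zeros, so 1.0 prints as "1"
    return min(("%g%s" % c for c in candidates), key=len)
```

So shorten_hex("#aabbcc") yields "#abc" while "#64321e" passes through untouched, and reduce_length(10, "mm") yields "1cm", matching the contrived example.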

Cross Platform

For web development, platform choice is really irrelevant, and in acknowledgment of that More runs just fine under Mono; as such it works from the Linux and Mac OS X command lines.

More Isn’t As Polished As LESS

I always feel a need to call this out when I push a code artifact, but this isn’t a Stack Exchange thing.  This is a “Kevin Montrose screwing around with code to relax” thing.  I’ve made a pretty solid effort to make it “production grade”, lots of tests, converted some real CSS (some of which is public, some of which is unfortunately not available), but this is still a hobby project.  Nothing is going to make it as solid as years of real world usage, which LESS and Sass have and More doesn’t.

Basically, if you’re looking for something to simplify CSS right now and it cannot fail, go with one of them.  But, if you like some of what you read and are looking to play around, well…

Give it a Spin

You can also check out the code on Github.

Committing A Horrible Atrocity Against C#

Title from the other side of the tracks: Bringing Sanity To C# Conditions

Have you played with Roslyn?  It’s Microsoft’s “compiler as a service” framework coming to a future .NET release, with a CTP having been available for a while now.

The focus seems to be on making it really easy to do the sorts of source transformations Visual Studio (and heavy hitting plugins like ReSharper) specialize in.  Better code generation, static analysis, and what-have-you tooling will probably be a nice result as well.

But you can do some really evil/awesome things as well thanks to Roslyn failing very gracefully in the presence of malformed code, by design.

For example, have you ever wanted C# to be truthy?

I’m told my sense of fun is a little warped.

Truthiness is coalescing certain values to true/false that are not strictly boolean; making null a stand in for false, for example.  This is a common feature in dynamic languages, Python, Ruby, and of course Javascript all have some notion of truthiness.

I personally kind of hate truthiness (I’d appreciate it as an explicit unary operator though), but I realize this is a religious position.  I also hate mayonnaise, but don’t begrudge its existence.

But if you do like truthiness, or like writing Fun code like I do, Roslyn makes it possible to cram it into C#.

I’m choosing to define Truthiness In C# as:

  • null, the empty string, and the defaults of ValueTypes are false
  • everything else is true
  • only available in control statements, not as a type coercion

This definition is really easy to change, for reasons that will become apparent later, and is basically Python’s take (minus collections and dictionaries) so it’s not too crazy.
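Those three rules are easy to sketch outside of C#.  Here's an equivalent in Python, purely for illustration (note that Python's own built-in truthiness differs, since it also special-cases empty collections):

```python
def truthy(o):
    # null and the empty string are falsy
    if o is None or o == "":
        return False
    # defaults of value types (False, 0, 0.0, ...) are falsy
    if isinstance(o, (bool, int, float)) and o == type(o)():
        return False
    # everything else is truthy
    return True
```

Under these rules truthy("hello world") and truthy(1) hold, while None, "", and 0 are all falsy; an empty list stays truthy, matching the "minus collections and dictionaries" caveat.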

I want this to be a build step, so the flow for compiling with truthiness is:

  1. Take a (possibly invalid due to truthy statements) C# program
  2. Find all the bits that assume truthiness
  3. Replace those bits with equivalent and valid vanilla C#
  4. Hand the results off to the actual C# compiler

Roslyn actually can emit assemblies (in theory, I haven’t tried in the CTP), but for the sake of brevity I’m choosing to stop the process early and write new .cs files to disk.

Finding bits that assume truthiness is quite simple, because Roslyn doesn’t just blow up on an invalid program; it does its best to give you everything that can be gleaned from malformed code, just like Visual Studio.

This lets us use a SyntaxWalker to visit each node of the AST for our dodgy program and still mostly make sense of it.  If we encounter anything where we expect a conditional (inside an if statement, for example) that isn’t provably a boolean, then we’ve probably found a truthy usage.  We don’t do anything with that knowledge yet, we just stash it away for later.

The SyntaxWalker that does this is surprisingly simple.  Once you’ve got a SemanticModel, figuring out the type of an expression is trivial.

private bool IsBool(ExpressionSyntax exp)
{
  var info = Model.GetSemanticInfo(exp);
  var knownType = info.Type;

  return
    knownType != null &&
    knownType.SpecialType == Roslyn.Compilers.SpecialType.System_Boolean;
}

Once we’ve walked all source, finding a list of all truthy things, we can do the actual truthy implementation.  The easiest way to do this is to wrap all truthy values in a call to a method that implements the above rules.  A more “correct” way would be to transform the truthy expressions into proper boolean ones, saving a method call and giving the compiler more information to work with.

I naturally went with the easy way, adding this method to every class that uses truthy expressions.  This also makes it very easy to change the truthiness rules, as I alluded to earlier.

private static bool __Truthy(object o)
{
  if (o == null || (o is string && (string)o == string.Empty)) return false;

  var type = o.GetType();

  if (type.IsValueType)
    return !o.Equals(Activator.CreateInstance(type));

  return true;
}

There’s a bit of finesse in the actual wrapping of expressions in calls to __Truthy.  If an expression contains a sub-expression that is itself truthy, we want to replace the sub-expression first and re-walk the SyntaxTree.  This is because whether or not an expression is truthy depends on its sub-expressions: (true || “”) is truthy, but (true || __Truthy(“”)) is not.  There’s also a little bit of work spent detecting whether something is already being passed to __Truthy, so we don’t end up with __Truthy(__Truthy(“”)) or similar; this is mostly caused by ternary conditionals, which are just weird, relatively speaking.
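That innermost-first replacement, plus the double-wrap check, can be modeled on a toy expression tree.  This is a hypothetical Python sketch of the idea only — the real code operates on Roslyn SyntaxTrees, not tuples:

```python
# Toy model: expressions are nested tuples like ("or", left, right);
# leaves are raw values.  Recursing before wrapping mimics replacing
# sub-expressions first; the startswith check mimics detecting values
# already passed to __Truthy so we never double-wrap.
def rewrite(expr):
    if isinstance(expr, tuple):
        op, left, right = expr
        return (op, rewrite(left), rewrite(right))
    if isinstance(expr, bool):
        return expr  # already a proper boolean, leave it alone
    if isinstance(expr, str) and expr.startswith("__Truthy("):
        return expr  # avoid __Truthy(__Truthy(...))
    return "__Truthy(%r)" % (expr,)
```

Running rewrite over ("or", True, "") wraps only the empty string, and running it a second time changes nothing.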

The full project on Github is an executable that transforms a whole Visual Studio .csproj, in a rather haphazard fashion.  You need the referenced assemblies to get a SemanticModel, which I’m extracting from .csprojs for convenience.

To illustrate the transformation, here’s some truthy C#.

static void Main(string[] args)
{
  UserDefined ud = new UserDefined { ABoolProp = false, AStringProp = "" };
  bool a = true, b = false;

  if ("hello world") Console.WriteLine("String literals are truthy");
  if (1) Console.WriteLine("int literals are truthy");
  if (!null) Console.WriteLine("null is falsy");
  if (!ud.AStringProp) Console.WriteLine("Member access works");
  if (ud.AStringProp.Length || !ud.ABoolProp) Console.WriteLine("Chained member access works");

  if (a || b || 0) Console.WriteLine("Normal bools aren't mangled");
  if (true) Console.WriteLine("Normal literals aren't mangled");

  string str = a ? "hello" : "world";

  if (str == "hello") Console.WriteLine("Normal ternary conditionals aren't mangled");

  for (int i = 0; i < 3 && (ud.AnIntMethod() ? "hello" : "world"); i++)
    Console.WriteLine(i + " complicated condition");
}

This gets transformed into

static void Main(string[] args)
{
  UserDefined ud = new UserDefined { ABoolProp = false, AStringProp = "" };
  bool a = true, b = false;

  if (__Truthy("hello world")) Console.WriteLine("String literals are truthy");
  if (__Truthy(1)) Console.WriteLine("int literals are truthy");
  if (!__Truthy(null)) Console.WriteLine("null is falsy");
  if (!__Truthy(ud.AStringProp)) Console.WriteLine("Member access works");
  if (__Truthy(ud.AStringProp.Length) || !ud.ABoolProp) Console.WriteLine("Chained member access works");

  if (a || b || __Truthy(0)) Console.WriteLine("Normal bools aren't mangled");
  if (true) Console.WriteLine("Normal literals aren't mangled");

  string str = a ? "hello" : "world";

  if (str == "hello") Console.WriteLine("Normal ternary conditionals aren't mangled");

  for (int i = 0; i < 3 && (__Truthy(__Truthy(ud.AnIntMethod()) ? "hello" : "world")); i++)
    Console.WriteLine(i + " complicated condition");
}

Notice that spacing is maintained; Roslyn round trips that sort of thing, which is very nice.

And of course, the code runs and outputs the expected text:

String literals are truthy
int literals are truthy
null is falsy
Member access works
Chained member access works
Normal bools aren't mangled
Normal literals aren't mangled
Normal ternary conditionals aren't mangled
0 complicated condition
1 complicated condition
2 complicated condition

There are, as always, caveats.  Lots in this case, as Roslyn is still a CTP and there are a number of C# features it doesn’t handle yet.  Using var or lambdas will mess things right up, and I’m sure there are some outright bugs in both Roslyn and my dinky code.

But isn’t it cool that it’s possible to abuse C# even this much already?

An Absurd Experiment In String Parsing

I’m going to start this off by saying that this post deals with some code that is a bit… silly.  Definitely veering into “why would you do this?” territory for most, but it’s still an interesting endeavor.

The question is, how quickly can you parse a string?

To give some context, at Stack Exchange we’ve got a process that’s slurping logs from our load balancer into SQL Server for us to query; it’s useful for analytics, debugging, and performance monitoring.  The trick is that at our scale this is quite a lot of data to be dealing with, and traffic spikes at peak time have a habit of knocking the service into a death spiral.

Investigations into one of the latest incidents of service suicide indicated, for a short time, that actually parsing the logs was a bottleneck.  Ultimately this turned out to be a red herring (the actual killer was, rather predictably, disk IO), but the seed of “how would we do this faster?” was planted.

To be clear, what follows isn’t solving a real problem; this is just me having some fun with code.  Everyone should do it now and again; it builds character.

There are, broadly, two approaches to string parsing

You can either go with regular expressions, or you can roll it yourself with IndexOf (or your framework of choice’s equivalent).  All things being equal, I’m inclined to go with regular expressions for their terseness and comparable ease of maintainability.

If you’re going for maximum speed (as I am here), you really do want to do all the string manipulation yourself and maintainability be damned.  However, it’d be nice if we could get the performance of IndexOf’ing and Substring’ing everywhere with a cleaner interface.
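To make the trade-off concrete, here are the two styles side by side on a contrived "a:=fx(a,b,c)" input.  This is sketched in Python for brevity; the mechanics translate directly to Regex and IndexOf/Substring in .NET:

```python
import re

line = "a:=fx(a,b,c);"

# Regular expression approach: terse and easy to maintain
m = re.match(r"(\w+):=(\w+)\((.*?)\)", line)
var_name, method, params = m.groups()

# Hand-rolled IndexOf/Substring equivalent: more code, fewer layers
i = line.index(":=")          # end of the variable name
j = line.index("(", i)        # end of the method name
k = line.index(")", j)        # end of the parameter list
assert (line[:i], line[i + 2:j], line[j + 1:k]) == (var_name, method, params)
```

Both pull out ("a", "fx", "a,b,c"); the regex says what you mean, the hand-rolled version spells out every scan.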


Enter IL

ILGenerator, my favorite class to go out of my way to avoid, lets you generate new methods at the bytecode (Common Intermediate Language, in the .NET world) level at runtime.  If you want speed, you’ll find it here.

The approach I went with was to create a nice interface (call chaining, opaque state objects, and all that jazz) for describing a parser, and at the last minute dropping into some hideous IL generation to create an actual delegate to do the deed.  This produces very, very fast string parsing code without being unbearable to use; it also enables some dirty tricks we wouldn’t want to use even in hand rolled parsing code, which I’ll get to later.

In terms of capabilities, I decided that a reasonable string parser would be composed of the following components

  • Moving forward or backwards in the string a fixed number of characters
  • Moving forward or backwards in the string until a certain string is encountered
  • Taking a given number of characters from the string as a value
  • Taking characters from the string until a certain string is encountered
  • Taking the remainder of the string as a value
  • Perform an arbitrary action when the input string does not conform to the parser’s expectations

I decomposed these six components further into eighteen different method calls, though there are many convenience overloads (referring to a member via either a string or a MemberInfo being the most common).

It ends up looking like the following (contrived example) in practice:

var parser =
  .Take(":=", "VarName")
  .Take("(", "MethodName")
  .Take(")", "Parameters")
  .Else((str, obj) => { throw new ArgumentException(); });

var myObj = new MyObject();
parser("a:=fx(a,b,c);   ... and then some more ...", myObj);

Do note that this isn’t meant to be a replacement for regular expressions, it’s meant to replace a class of string parsers for which I’d expect most developers to use regular expressions.  I do suspect these operations are enough to build DFAs (though I’m not going to spend time trying to prove it, I don’t really care), but not very cleanly and most regular expression engines aren’t strictly regular anyway.

This code runs like greased lightning

On my machine, parsing “(\d+),(\d+),(\d+),(\d+)” and its equivalent hand written and composed versions across one million randomly generated (but compliant) strings yields the following.

  • Regex: 3540ms
  • Hand Written: 1140ms
  • Composed: 690ms

I’m dropping the last digit, as it’s highly variable, and taking the median of five runs, forcing a GC (and pausing for it to complete) between each run to try and minimize GC jitter.
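That measurement discipline (median of several runs, with a forced and completed collection between each) looks roughly like this.  A Python sketch of the harness shape only, not the actual benchmark code:

```python
import gc
import statistics
import time

def bench(fn, runs=5):
    # Median of several runs; force a collection (gc.collect blocks
    # until it finishes) between runs to minimize GC jitter.
    times = []
    for _ in range(runs):
        gc.collect()
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)
```

The median is less sensitive than the mean to a single run that happens to eat a collection anyway.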

Wait, aren’t the hand written and composed versions equivalent?

They’re definitely similar, but there are a lot of dirty tricks you can pull off when you’re emitting IL that you wouldn’t really want to tolerate in hand written code.

A big one: if you’re using IndexOf and Substring, you’re still creating lots of strings (creating GC pressure) and implicitly paying for method calls (which are fast but not instant, so they add up over many iterations) to get at actual character data.  The internals of a regular expression are even worse in terms of overhead and object churn.  The IL I’m generating does everything in terms of character arrays (in fact, I can guarantee I create only one to parse a string of integers) and avoids method calls like the plague, effectively inlining everything.

In a related vein, both regular expression and IndexOf/Substring approaches end up giving you strings you need to parse into the appropriate data types.  Naively, you’ll typically use things like Int32.Parse which have the same “strings + method call” overhead as above.  Generating IL lets me inline my own parsing code, which naturally deals in character arrays, avoiding both costs again.
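The kind of inlined, character-array number parsing being described can be sketched like so.  This is a hypothetical Python rendering; the generated IL does the same digit arithmetic without any intermediate strings or Parse calls:

```python
def parse_uint(buf, start):
    # Accumulate digits directly from the character buffer -- no
    # Substring, no Int32.Parse, no culture-aware formatting overhead.
    value = 0
    i = start
    while i < len(buf) and "0" <= buf[i] <= "9":
        value = value * 10 + (ord(buf[i]) - ord("0"))
        i += 1
    return value, i  # the parsed value, plus where parsing stopped
```

Returning the stop position is what lets a composed parser chain straight into the next Take or Skip without rescanning.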

The IndexOf/Substring approach does still eke out better performance when you’re just dealing with string members.  Delving into the .NET classes with Reflector shows that Microsoft has (wisely) invested quite a lot of time in optimizing string manipulation; to match them I’d probably have to start linking in C, which is a bigger time investment than I’m willing to make.


Of course, it could be dirtier

I have left some areas unexplored that would probably squeeze a few more milliseconds out.

Non-integer types still fall back to Parse calls, as do DateTime and TimeSpan and some cases of enumerations.  Alternative implementations are time consuming to produce, but would pay dividends especially in the DateTime and TimeSpan cases.

I suspect allocations can be reduced even further through some dubious stackalloc or ThreadLocal usage, possibly combined with some heap juggling.  I’m not nearly as certain these would work out for the best, and a lack of expertise and time has kept me from digging too deeply.

My string scanning algorithm is basically a for loop, breaking out something like Boyer-Moore would pay noticeable dividends for larger “needles”.  This is a lot of complexity for exactly zero gain in the rather common “single character needle” case, so I haven’t sunk the necessary time in yet and may never.
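For reference, the kind of skipping Boyer-Moore buys (shown here as the simpler Boyer-Moore-Horspool variant, sketched in Python rather than IL) only pays off when the needle is longer than one character, which is exactly the complexity-versus-gain trade-off above:

```python
def find(haystack, needle):
    # Boyer-Moore-Horspool: on a mismatch, skip ahead based on the
    # last character examined instead of advancing one at a time.
    if not needle:
        return 0
    # How far we may safely jump when each character ends a window.
    skip = {c: len(needle) - i - 1 for i, c in enumerate(needle[:-1])}
    i = len(needle) - 1
    while i < len(haystack):
        start = i - len(needle) + 1
        if haystack[start:i + 1] == needle:
            return start
        i += skip.get(haystack[i], len(needle))
    return -1
```

With a single-character needle every skip is 1, so this degenerates to the plain for loop; with longer needles it leaps over most of the haystack.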

Not much time has been spent on generating ideal IL either.  There’s certainly a lot of stack shuffling that could be eliminated for truly minuscule, if non-zero, savings.

If you’re interested, play around with the code

I’ve thrown it up on Github, and uploaded a package to NuGet as the unexcitingly named FluentStringParser (because naming is hard).  The code’s moderately well documented, but it’s still IL so the learning curve is more of a brick wall, really.  Still, learn by doing and all that.

I do stress that this is a personal project, not a Stack Exchange endorsed chunk of code (those are currently documented in this blog post).  Caveat emptor if you pull this code in as it does dirty, dirty things for silly, silly reasons.

