Sigil: Adding Some (More) Magic To IL

A nifty thing you can do in .NET is generate bytecode (properly Common Intermediate Language [CIL], formerly Microsoft Intermediate Language [MSIL], commonly called just IL) on the fly.  Previously I’ve used it to do dumb things with strings, and build a serializer on top of protobuf-net.  Over at Stack Exchange it’s used in small but critical parts of our API, in our caching layer via protobuf-net, and in our micro-ORM Dapper.

The heart of IL generation is the ILGenerator class which lets you emit individual opcodes and keeps track of labels, locals, try/catch/finally blocks, and stack depth.

To illustrate .NET’s built-in IL generation, here’s how you’d add 1 and 2:

var method = new DynamicMethod("AddOneAndTwo", typeof(int), Type.EmptyTypes);
var il = method.GetILGenerator();
il.Emit(OpCodes.Ldc_I4, 1);
il.Emit(OpCodes.Ldc_I4, 2);
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);
var del = (Func<int>)method.CreateDelegate(typeof(Func<int>));

del(); // returns 3

But…

ILGenerator is quite powerful, but it leaves a lot to be desired in terms of ease of use.  For example, leave one of the Ldc_I4’s out of the above…

var method = new DynamicMethod("AddOneAndTwo", typeof(int), Type.EmptyTypes);
var il = method.GetILGenerator();
il.Emit(OpCodes.Ldc_I4, 1);
// Whoops, we left out the 2!
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);
var del = (Func<int>)method.CreateDelegate(typeof(Func<int>));

del();

And what happens?  You’d expect an error to be raised when we emit the Add opcode, but I’d understand deferring verification until the delegate was actually created.

Of course, nothing’s ever easy, and what actually happens is an InvalidProgramException is thrown when the delegate is first used, with a phenomenally unhelpful “Common Language Runtime detected an invalid program.” message.  Most of the time, ILGenerator gives you no indicator as to where or why you went wrong.

Frustrations I’ve had with ILGenerator, in descending severity:

  • Fails very late during code generation, and doesn’t indicate what went wrong
  • Allows obviously malformed instructions, like Emit(OpCodes.Ldc_I4, “hello”)
  • Lack of validation around “native int” allows for code that only works on specific architectures

Enter Sigil

Naturally I have a solution, and that solution’s name is Sigil (defined as “an inscribed or painted symbol considered to have magical power”, pronounced “Si-jil”).  Sigil wraps ILGenerator, exposes much less error-prone alternatives to Emit, and does immediate verification of the instruction stream.

The erroneous code above becomes:

var il = Emit<Func<int>>.NewDynamicMethod("AddOneAndTwo");
il.LoadConstant(1);
// Still missing that 2!
il.Add();
il.Return();
var del = il.CreateDelegate();
del();

And Sigil throws an exception at il.Add() with the much more helpful message “Add expects 2 values on the stack”.  Also notice how Sigil does away with all that nasty casting.

Sigil does much more than just checking that enough values are on the stack.  It’ll catch type mismatches (including esoteric ones, like trying to add a float and a double), illegal control transfers (like branching out of catch blocks), and bad method calls.
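
For instance, here’s a minimal sketch of that float/double mismatch being caught (the exact exception message may vary):

using System;
using Sigil;

// A sketch of Sigil catching a type mismatch: adding a float to a double
// isn't verifiable IL, so the Add() call itself throws.
var il = Emit<Func<float, double, double>>.NewDynamicMethod("BadAdd");
il.LoadArgument(0); // float on the stack
il.LoadArgument(1); // double on the stack
il.Add();           // throws SigilVerificationException here, not at JIT time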

Data For Debugging

In addition to not failing helpfully, ILGenerator doesn’t give you much to go on when it does fail.  You don’t get an instruction listing or stack states, and your locals and labels are nothing but indexes and offsets.

When verification fails using Sigil, the full instruction stream to date and the current state of the stack (possibly two stacks, if a branch is involved) are captured by the thrown SigilVerificationException.  Every local and label gets a name in the instruction listing (which you can override), and the values on the stack that caused the failure are indicated.

For example…


var il = Emit<Func<string, Func<string, int>, string>>.NewDynamicMethod("E1");
var invoke = typeof(Func<string, int>).GetMethod("Invoke");
var notNull = il.DefineLabel("not_null");

il.LoadArgument(0);
il.LoadNull();
il.UnsignedBranchIfNotEqual(notNull);
il.LoadNull();
il.Return();
il.MarkLabel(notNull);
il.LoadArgument(1);
il.LoadArgument(0);
il.CallVirtual(invoke);
il.Return();

var d1 = il.CreateDelegate();

… throws a SigilVerificationException on Return(), and calling GetDebugInfo() on it gives you the following:

Top of stack
------------
System.Int32 // Bad value

Instruction stream
------------------
ldarg.0
ldnull
bne.un not_null
ldnull
ret
not_null:
ldarg.1
ldarg.0
callvirt Int32 Invoke(System.String)

You still have to puzzle through it, but it’s a lot easier to see what went wrong (the return from the passed delegate needs to be converted to a string before calling Return()).

But Wait, There’s More

Since Sigil is already doing some correctness validation that requires waiting until a method is “finished” (like making sure branches end up with their “expected” stacks), it has all it needs to automate a lot of the tedious optimizations you typically do by hand when using ILGenerator.

For example, “Emit(OpCodes.Ldc_I4, {count})” shouldn’t be used if {count} is between -1 and 8; but who wants to remember that, especially if you’re rapidly iterating?  Similarly, almost every branching instruction has a short form you should use when the offset (in bytes, not instructions) fits into a single byte.  Sigil automates all of that; you just call “LoadConstant” or “Branch” and move on.
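
To make that concrete, here’s a little sketch; the comments note the standard short-form encodings Sigil should pick on your behalf:

using System;
using Sigil;

var il = Emit<Func<int>>.NewDynamicMethod("Constants");
il.LoadConstant(3);    // ldc.i4.3 (dedicated opcodes exist for -1 through 8)
il.LoadConstant(100);  // ldc.i4.s (the operand fits in a single byte)
il.LoadConstant(1000); // ldc.i4   (full 4-byte operand)
il.Add();
il.Add();
il.Return();
var del = il.CreateDelegate();

del(); // returns 1103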

Sigil also automates picking the appropriate version of some opcodes based on type.  In raw IL, there are separate instructions for loading bytes, ints, arbitrary ValueTypes, and reference types from an array.  Using ILGenerator you’d have to pick the appropriate opcode, but with Sigil you just call “LoadElement()” and the preceding instructions are used to figure it out.
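
For example, a sketch of loading from an int[]:

using System;
using Sigil;

var il = Emit<Func<int[], int, int>>.NewDynamicMethod("GetItem");
il.LoadArgument(0); // the array
il.LoadArgument(1); // the index
il.LoadElement();   // Sigil infers ldelem.i4 from the preceding loads
il.Return();
var getItem = il.CreateDelegate();

getItem(new[] { 10, 20, 30 }, 1); // returns 20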

Finally, Sigil detects when the Tailcall and Readonly prefixes can be used and inserts them into the instruction stream.  It’s not possible to detect when the Volatile and Unaligned prefixes should be inserted (at least so far as I know), but Sigil only allows them to be added in conjunction with opcodes they’re legal on, which is still better than ILGenerator.

Unconditional Branch Caveat

There is one pain point Sigil does not yet address, though I have plans.  Right now, Sigil requires type assertions immediately after unconditional branches (Br, and Leave to be precise) as it’s incapable of inferring the state of the stack in this case.  This doesn’t come up quite as much as you’d expect, since truly unconditional branches are rare; especially when creating DynamicMethods.

Asserting types is attached to marking labels, and looks like the following:

var il = Emit<Func<int>>.NewDynamicMethod();

var b0 = il.DefineLabel("b0"), b1 = il.DefineLabel("b1"), b2 = il.DefineLabel("b2");
il.LoadConstant("abc");
il.Branch(b0); // jump to b0 with "abc"

il.MarkLabel(b1, new [] { typeof(int) }); // incoming: 3
il.LoadConstant(4);
il.Call(typeof(Math).GetMethod("Max", new[] { typeof(int), typeof(int) }));
il.Branch(b2); // jump to b2 with 4

il.MarkLabel(b0, new[] { typeof(string) }); // incoming: "abc"
il.CallVirtual(typeof(string).GetProperty("Length").GetGetMethod());
il.Branch(b1); // jump to b1 with 3
il.MarkLabel(b2, new[] { typeof(int) }); // incoming: 4
il.Return();

You can assert types along with any MarkLabel call; in cases where Sigil can infer the stack state, a SigilVerificationException will be thrown when there’s a mismatch.

Check It Out, Try It Out, Break It

Sigil’s source is on GitHub, and it’s available on NuGet.

While I’ve done a fair amount of testing and converted some projects from ILGenerator to Sigil to flush out bugs, I wouldn’t at all be surprised if there are more.  Likewise, I wouldn’t be shocked if Sigil’s validation has some holes or if it’s too strict in some cases.

So grab Sigil and try it out; I love working on this sort of stuff, so don’t be shy about opening issues.


Public Broadcasting: A Self-Describing Wrapper Around protobuf-net

Familiar with Protocol Buffers?  It’s a neat binary serialization format out of Google which aims to be efficient and extensible.  Java, C++, and Python have “official” libraries, and there are a plethora of libraries for other platforms.

In fact, we’ve been using protobuf-net over at Stack Exchange for a good long time, since August 18th, 2010 if our commit logs are to be believed.  It’s famously fast and simple to use, which has let it worm its way into basically all of our projects.  We even got the mind behind it to come and slum it with chumps like me.

But…

There is one pain point to using Protocol Buffers and that’s, well, defining the protocol bits.  You can either define your messages in .proto files and compile them, or if you’re using protobuf-net (or similar) annotate your existing types.

With protobuf-net a typical class ends up looking like so:

[ProtoContract]
public class RedisInboxItem : IComparable<RedisInboxItem>
{
  [ProtoMember(1)]
  public DateTime CreationDate { get; set; }
  [ProtoMember(2, IsRequired = true)]
  public InboxItemType Type { get; set; }
  [ProtoMember(3)]
  public int Id { get; set; }
  [ProtoMember(4)]
  public string Title { get; set; }
  [ProtoMember(5)]
  public bool IsPersisted { get; set; }

  // ...
}

This isn’t a problem if you’re marshaling between different code bases or acting as a service – you need to document the types involved after all; might as well do it in annotations or .proto files.  But if you’re communicating within the same code base, or with internal services, this manual protocol declaration can be onerous and a tad error-prone.

Easily beat Brock, or eventually have a giant fire-breathing dragon?

Trade-Offs For Convenience

What would be really keen is a wrapper around Protocol Buffers that carries its own description, so that mapping tag numbers to fields doesn’t need any foreknowledge of the serialized types.  It’s also been a while since I committed any really terrible ILGenerator code.

So I wrote one; it trades some of protobuf-net’s speed and some of Protocol Buffers’ compactness for the convenience of not having to use .proto files or annotations.  I call it Public Broadcasting, because that’s the first phrase that popped into my head with a P followed by a B.  Naming is hard.

In short, what Public Broadcasting does is provide a structurally typed wrapper around protobuf-net.  Any member names that match when deserializing are mapped correctly, any missing members are ignored, and any safe conversions (like byte -> int or Enum <-> String) happen automatically.  In addition, Nullable<struct>’s will be converted to default(struct) if necessary when deserializing.  Inheritance, type names, interfaces, and so on are ignored; we care about how the data “looks” not how it’s “named”.

Public Broadcasting works by describing a type using Protocol Buffers, then including that description in an “envelope” along with the actual data.  When deserializing, Public Broadcasting constructs a new type with all the appropriate annotations and then lets protobuf-net do the heavy lifting of deserializing the data.  Since we only care about the “structure” of the data, a good deal of .NET’s type system is discarded: only classes (with no distinction between reference and value types), Lists, Dictionaries, Enumerations, Nullables, and “simple” types (int, byte, string, GUID, etc.) are used.

Since the original type being deserialized doesn’t actually need to be known with Public Broadcasting, there is a Deserialize method which returns dynamic.  Although dynamic is something I normally avoid, in the “grab a data object, then discard” style I typically use protobuf-net in, I think it’s a good fit.
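
Usage ends up looking something like this sketch (Serializer and its method names here are illustrative placeholders, not necessarily the actual API):

// Hypothetical round-trip; class and method names are illustrative.
var bytes = Serializer.Serialize(new RedisInboxItem { Id = 42, Title = "hello" });

// No type parameter needed; members are resolved at runtime by name.
dynamic item = Serializer.Deserialize(bytes);
int id = item.Id;          // 42
string title = item.Title; // "hello"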

In my (admittedly limited) testing, Public Broadcasting is typically within an order of magnitude of raw protobuf-net usage.  And protobuf-net is hella-fast, so even 10x slower is going to be plenty fast most of the time.

In terms of compactness, Public Broadcasting is going to be approximately the “length of all involved strings” larger than raw protobuf-net.  As soon as you start having many instances or recursive types this overhead shrinks relative to the size of the message, as Public Broadcasting doesn’t repeat member names like JSON.

However, if you absolutely must have the smallest messages and the fastest (de)serializations then you’ll want to use protobuf-net directly; overhead imposed by Public Broadcasting is noticeable.

If You Like What You’ve Read

Grab the source, or pull the DLL off of NuGet, and let me know what you think.


Kids These Days

I freely admit about 1/2 the following content is just an excuse to use this title.

Lately I’ve found myself wondering, how are kids getting into programming these days?  I blame still attending the Stack Exchange “beer bashes” (though I think GitHub’s “drinkups” have won the naming battle there), while no longer being able to drink, for these thoughts.  Programmers like to talk about programming, and drunks like to talk about the past, so drunk programmers… well, you see where I’m going with this.  Anyway, it seems like the high-level stuff has become more accessible, while the “what is actually happening” bits are increasingly hidden.

Let Me Explain That Last Bit

Every kid who has touched a computer since 2001 has had access to decent JavaScript, and the mountains of examples view-source entails.  Today Java, C#, Python, Ruby, etc. are all freely available along with plenty of tutorials and example code.  That’s a hell of a lot more than what was out there when I started, but it’s all a few dozen levels of abstraction up.  And that’s my concern: the bare metal is really deep down, and stumbling upon it is quite hard.

When I Was Your Age…

I first started coding on a TI-99/4A, in TI-BASIC, at the age of 5.  That’s a machine you can fit in your head.  The processor spec is ~40 pages long (the errata for the Core 2 Duo is almost 100 pages), and there’s none of the modern magic, like pipelining, branch prediction, or pre-fetching (there’s not even an on-chip cache!).

Now of course, I didn’t actually understand the whole machine at 5, but the people teaching me did.  Heck, the only reason a machine discontinued in ’84 was available to me in ’92 was that they’d worked on the thing and saved a couple.

A grounding in such a simple machine led me to question a lot about the higher-level languages I eventually learned.  For example, when I first started learning Java the idea that two functions with the same name could exist just messed with my head, because it didn’t gel with my previous experience, driving me to read up on namespacing and vtables.

Never did figure out how to catch Mew though.

Looking back, another avenue for programming knowledge was how horribly glitchy video games were.  Actually, video games are still horribly glitchy, but the consoles are a lot more sophisticated now.  It used to be that when a game went sideways the consoles just did not care, and you could wreak all sorts of havoc exploiting glitches.  I can honestly say I “got” pointers/indirection (though I don’t remember if I knew the words or not) when I wrapped my head around the infamous Cinnabar Island glitch.  Some glitches of this style can still happen on modern consoles, but there’s proper memory protection and some spare cycles to spend on validation now, so they’re a lot rarer.

If you shift my story forward to the present day, a kid learning on 8 year old hardware would be using a PC running Windows XP (probably with a Pentium 4 in it); a simple machine that is not.  Their consoles have hypervisors, proper operating systems, and patches!  If you play Pokémon today you actually only have one masterball.  The most archaic kit they’re likely to encounter is a TI-83 calculator (which still has a Z80 in it), and even then not till high school.

Lies To Children

Which has some serious issues on the Surface RT; fix is pending approval in the app store.

Thinking about this led me to write Code Warriors, a dinky little coding puzzle app, as my “see if XAML sucks any less than it used to” side project (the answer is yes, but the REPL is still bad).  Even it’s a lies-to-children version of assembly, making significant compromises in the name of “fun”.  And while I was thinking about kids learning, I certainly didn’t play test it with any children so… yeah, probably not great for kids.

I have no doubts we’re going to keep producing great programmers.  Colleges, the demands of the job market, and random people on the internet will produce employable ones at least.  But I find myself wondering if we aren’t growing past the point where you can slip into programming as if by accident.  It is after all quite a young field, kids in college when ENIAC was completed are still alive, so perhaps it’s just a natural progression.


Trustable Online Community Elections

Keeping votes secure for a couple thousand years, and counting.

One of the hallmarks of successful online communities is self-governance, typically by having some or all admins/moderators/what-have-yous come from “within” the community.  Many communities select such users through a formal election process.  Wikipedia and Stack Exchange (naturally) are two large examples, you’ve probably stumbled across a number of forums with similar conventions.

But this is the internet; anonymity, sock-puppetry, and paranoia run wild.

How can we prove online elections are on the up-and-up?

First, a disclaimer.  Today the solution is always some variation of reputation; sock puppetry is mitigated by the difficulty inherent in creating many “trusted” users, and the talliers are assumed to be acting in good faith (they, after all, are rarely anonymous and stand to lose a lot if caught).  This works pretty well, and the status quo is good; what I’m going to discuss is interesting (I hope), but hardly a pressingly needed change.

Also, this is just me thinking aloud; this doesn’t reflect any planned changes over at Stack Exchange.

Modeling the threat

We need to define what we’re actually worried about (so we can proof our solution against it), and what we explicitly don’t care about.

Legitimate concerns:

  • Ballot box stuffing
  • Vote tampering
  • Vote miscounting

Explicitly ignoring:

  • Voter impersonation
  • Voter intimidation

The concerns are self-explanatory: we just want to make sure that only people who should vote do vote, that their votes are recorded correctly, and that they are ultimately tallied correctly.

Attacks which we’re explicitly ignoring require a bit more explanation.  They all ultimately descend from the fact that these elections are occurring online, where identity is weak and the stakes are relatively low.

It is hard to prove who’s on the other end of the internet.

Consider how difficult it is to prove Jimmy Wales is actually tweeting from that account, or the one actually editing a Wikipedia article.  Since identity is so much weaker, we’re assuming whatever authentication a community is using for non-election tasks is adequate for elections.

We discount voter intimidation because, for practically all online communities, it’s impractical and unprofitable.  If you were to choose two Stack Overflow users at random, it’s 50/50 whether they’re even on the same side of the Atlantic much less within “driving to their home with a lead pipe”-distance.  Furthermore, the positions being run for aren’t of the high-paying variety.  Essentially, the risk and difficulty grossly outweigh the rewards so I feel confident in discounting the threat.

The Scheme

Before we get too deep into the technical details, I need to credit VoteBox (Sandler, Wallach) for quite a lot of these ideas.

Voter Rolls

To protect against ballot stuffing, it’s important to have a codified list of who can participate in a given election.  This is a really basic idea: you have to register to vote (directly or otherwise) in practically all jurisdictions.  But in the world of software it’s awfully tempting to do a quick “User.CanVote()” check at the time of ballot casting, which makes it impossible to detect ballot box stuffing without consulting a historical record of everything that affects eligibility (which itself is almost impossible to guarantee hasn’t been tampered with).

Therefore, prior to an election all eligible voters should be published (for verification).  Once the election starts, this list must remain unchanged.

It’s A Challenge

Vote tampering (typically by the software) is difficult to proof against, but there are some good techniques.  First, all ballots must actually be stored as ballots; you can’t just “Candidate++” in code somewhere, and normalized values in a database are no good to anyone; you need the full history for audits.  Again, this is common sense in real-world elections but needs to be explicit for online ones.

This does make voting a little more confusing.

A more sophisticated approach is to make individual ballots “challenge-able”.  The idea is simple: break up “casting a ballot” into two parts, “stage” and “commit/challenge”.  When a ballot is staged, you publish it somewhere public (this implies some cryptography magic, which we’ll get back to later).  If the voter then “commits” their vote you make a note of that (again publicly), but if they “challenge” it you reveal the contents of the ballot (this implicitly spoils the ballot, and involves some more crypto-magic) for verification.  Obviously, if what the voter entered doesn’t match what the ballot contained, something is amiss.

Challenge-able ballots make it possible for a group of voters to keep the system honest: since there’s no way for the system to know who will challenge a ballot, any vote tampering carries a risk of being discovered.  And with the internet being the internet, news will get around quickly that something is up once tampered-with ballots start being found, making the odds of a challenge even greater.

Freedom From Hash Chains

For challenges to work there needs to be a method for publishing “committed” ballots.  We could use something low-tech like an RSS feed, mailing list, or similar but ideally we’d have some guarantee that the feed itself isn’t being tampered with.  Consider the attack where some unchallenged ballots are removed from the feed long after they were published, or new ones are added as if they were cast in the past.

Our go-to solution for that is pushing each ballot (as well as some “keep alive” messages, for security reasons) into a modified hash chain.  The idea is simple: every new message contains a hash of the message before it.  Verifying no illicit insertions or deletions is just a matter of rehashing the chain one piece at a time and making sure everything matches.  You can also take a single message and verify that it still belongs in the chain at a later date, meaning that people who want to verify the chain don’t have to capture every message the instant it’s published.
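
A minimal sketch of the idea (SHA-256 purely for illustration; as discussed below, you’d really want a memory- and time-hard function):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class HashChain
{
    // Each entry's hash covers the previous entry's hash plus the new message.
    public static byte[] Link(byte[] previousHash, string message)
    {
        using (var sha = SHA256.Create())
            return sha.ComputeHash(previousHash.Concat(Encoding.UTF8.GetBytes(message)).ToArray());
    }

    // Replay the chain from the root; a single altered, inserted, or removed
    // message changes every hash after it.
    public static bool Verify(byte[] rootHash, IEnumerable<(string Message, byte[] Hash)> entries)
    {
        var current = rootHash;
        foreach (var (message, hash) in entries)
        {
            current = Link(current, message);
            if (!current.SequenceEqual(hash)) return false;
        }
        return true;
    }
}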

Note that the VoteBox system goes a step further and creates a “hash mesh”, wherein multiple voting machines hash each other’s messages, to detect tampering done from a single machine.  In our model, all the machines are under the direct control of a single entity and voters lack physical access to the hardware, so the additional security of a “hash mesh” isn’t really available to us.

Time Is Not On Your Side

One subtle attack on this hash-chaining scheme: if you’re sufficiently motivated, you can pre-compute chains.  While far from a sure thing, with sufficient analytic data to predict when users will vote, how likely they are to vote, challenge, and so on, you could run a large number of fake elections “offline” and then try to get away with publishing the one that deviates the least (in terms of time and challenged ballots) from reality.  With the same information and considerably more resources, you could calculate fake elections on the fly, starting from the currently accepted reality, and try to steer things that way.  Since the entities involved typically have lots of computing resources, and elastic computing is widely available, we have to mitigate this admittedly unlikely, but not impossible, attack.

The problem in both cases is time.  An attacker has a lot of it before an election (when the voter roll may not be known, but can probably be guessed) and a fair amount of it during one.

Any independent and verifiable random source will do.

We can completely eliminate the time before an election by using a random root message in the hash chain.  Naturally we can’t trust the system to give us a randomly generated number, but if those running it commit to using something public and uncontrollable as the root message then all’s well.  An example would be to use the front page headlines of some national newspapers (say the New York Times, USA Today, and the Wall Street Journal, in that order) from the day the election starts as the root message.  Since the root block is essentially impossible to know in advance, the aforementioned analytic data no longer makes faking a hash chain any easier.

To make attacking the hash chain during the election difficult, we could introduce a random source for keep-alive payloads, like Twitter messages from a Bieber search.  More important would be selecting a hash function that is both memory- and time-hard, such as scrypt.  Either is probably sufficient, but scrypt (or similar) would be preferable to Bieber if you had to choose.

Tallying Bananas

Thus far we’ve mostly concerned ourselves with individual ballots being tampered with, but what about the final results?  With everything discussed so far, it’s still entirely possible that everyone votes, challenges, and audits to their hearts’ content, and at the end of the election totally bogus results are announced.

The problem is that the ballots that count are necessarily opaque (clearly they have to be encrypted, even if we are publishing them); only the challenged ones have been revealed.

There do exist encryption schemes that can work around this: namely, any homomorphically additive encryption function; in other words, any encryption function E(x) for which there exists a function F(x1, x2) such that F(E(a), E(b)) = E(a+b).  Two such schemes are (a modified) ElGamal and Paillier (others exist); the ephemeral keys present in ElGamal are useful for our ballot challenge scheme.

Encrypting ballots with ElGamal and publishing them allows anyone to tally the results of the election.  The only constraint imposed by this system is that the voting system being used must be amenable to its ballots being represented by tuples of 1s and 0s; not a big hurdle, but something to be aware of.
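
To illustrate the property being relied on, here’s a toy example (Paillier rather than ElGamal purely because it’s fewer lines; tiny primes and fixed “random” values, so this demonstrates the arithmetic, not the security):

using System;
using System.Numerics;

class PaillierToy
{
    // Toy parameters: n = p * q is the public key, lambda the private key.
    static readonly BigInteger p = 11, q = 13;
    static readonly BigInteger n = p * q, nSq = n * n;
    static readonly BigInteger g = n + 1;
    static readonly BigInteger lambda = 60; // lcm(p - 1, q - 1)

    static BigInteger L(BigInteger x) => (x - 1) / n;

    // E(m) = g^m * r^n mod n^2, with r random and coprime to n
    static BigInteger Encrypt(BigInteger m, BigInteger r) =>
        BigInteger.ModPow(g, m, nSq) * BigInteger.ModPow(r, n, nSq) % nSq;

    static BigInteger Decrypt(BigInteger c) =>
        L(BigInteger.ModPow(c, lambda, nSq)) * ModInverse(L(BigInteger.ModPow(g, lambda, nSq)), n) % n;

    static BigInteger ModInverse(BigInteger a, BigInteger m)
    {
        // plain extended Euclid
        BigInteger m0 = m, x0 = 0, x1 = 1;
        a %= m0;
        while (a > 1)
        {
            var quotient = a / m;
            (a, m) = (m, a % m);
            (x0, x1) = (x1 - quotient * x0, x0);
        }
        return ((x1 % m0) + m0) % m0;
    }

    static void Main()
    {
        // Three ballots: 1 = "for", 0 = "against"
        var ballots = new[] { Encrypt(1, 7), Encrypt(0, 8), Encrypt(1, 9) };

        // F(E(a), E(b)) = E(a + b) is just multiplication mod n^2, so anyone
        // can compute the encrypted tally without the private key.
        BigInteger tally = 1;
        foreach (var ballot in ballots) tally = tally * ballot % nSq;

        Console.WriteLine(Decrypt(tally)); // prints 2
    }
}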

This attack is like ballot stuffing with a single very large ballot.

Another subtle attack: since ballots are opaque, nothing prevents a ballot from containing something other than 0 or 1 for a slot in a “race” (or all 1s, or any other invalid combination).  Some more crypto-magic in the form of zero-knowledge proofs (specifically non-interactive ones, NIZKs) attached to a ballot for every “race” it contains gives us confidence (technically not 100%, but we can get arbitrarily close to it) that every ballot is well-formed.  The exact construction is complicated (and a bit beyond me, honestly), but the Adder system [Kiayias, Korman, Walluck] has provided a reference implementation for ElGamal (adapted into VoteBox; the Adder source seems to have disappeared).

There’s one last elephant in the room with this system.  Although everyone can tally the votes, and everyone can verify each individual vote is well-formed, those running the election are still the only ones who can decrypt the final results (they’re the only ones who have the ElGamal private key).  There honestly isn’t a fool-proof solution to this that I’m aware of, as revealing the private key allows individual ballots to be decrypted in addition to the tally.  Depending on the community this may be acceptable (i.e. ballot privacy may only matter during the election), but that is more than likely not the case.  If there is a trusted third party, they can hold the key in escrow for the duration of the election and then verify and decrypt the tally.  In the absence of a single third party, a threshold scheme for revealing the private key may be acceptable.

Altogether Now

In short, the proposed system works like so:

  • Prior to the election starting, a full roll of eligible voters is published
  • When the election starts, a hash chain is started with an agreed upon (but random) first message
    • The hash chain uses a memory and time hard hash function
  • A public/private key pair for a homomorphically additive encryption system is stored in an agreed upon manner
  • Optionally, throughout the election agreed upon (but random) messages are added to the hash chain as keep-alives

When a user votes:

  • Their selections are encrypted using the previously generated public key
  • A NIZK is attached to the ballot to prove its well-formedness
  • An identifying token is also attached to the ballot to check against the voter roll
  • All this data is pushed to the hash chain as a single message
  • The voter then chooses to either commit or challenge their ballot
    • If they commit, a message is added to the hash chain to this effect
    • If they challenge, the ephemeral key for the ballot is published; allowing any third-party to decrypt the ballot, letting the voter prove the election is being run in good faith (they may then cast another ballot)

When the election ends:

  • Some pre-agreed upon message is placed in the hash chain
    • It’s not important that this message be random; it’s just a signal to those watching that the chain is finished
  • Anyone who wishes to can verify the integrity of the published hash chain, along with the integrity of individual ballots
  • Anyone who wishes to may tally the votes by exploiting the homomorphic properties of the chosen encryption scheme
  • Whichever approach to decrypting the tally has been chosen is undertaken, concluding the election

Improvements

The obvious place to focus on improving is the final decryption.  My gut (with all the weight of a Bachelor’s in Computer Science, so not a lot of weight) tells me there should be a way to construct a proof that the announced tally actually comes from decrypting the summed ciphertexts.  Or perhaps there’s some way to exploit the one-to-many nature of ElGamal encryption to provide (in advance of the election naturally) a function that will transform the tally ciphertext into one a different key can decrypt, but have that transformation not function on the individual ballots.

Of course, as I stated before, the status quo is acceptable.  It’s just fun to think of what we could build to deal with the theoretical problems.


Are you measuring your social media buttons?

I’d like you to go to a Stack Overflow (or any Stack Exchange) question, and see if you spot a nominally major change.  Such as this one, with a nice answer from Raymond Chen (I think it’s been more than a month since I recommended his blog, seriously go read it).

Spot it?

Of course not, what we did is remove our social media sharing buttons.

Left is old, right is new.  A number of other instances were also removed.

The Process

I’m a big advocate of arguing from data, and while I freely admit there are many things that are hard to quantify, your gains from social media buttons aren’t one of them.

When discussing sharing buttons, be careful about establishing scope.  Keeping or removing social media buttons isn’t about privacy, personal feelings on Twitter/Facebook/G+, or “what every other site does” (an argument I am particularly annoyed by); it’s about referred traffic and people.  There’s a strong temptation to wander into debate, feeling, and anecdotal territory; stick to the data.

You also need to gather lots of data; at Stack Exchange we prefer to slap a pointless query parameter onto URLs we’re tracking.  It’s low tech, transparent, and doesn’t have any of the user-facing penalties redirects have.  In particular we slapped ?sfb=1, ?stw=1, and ?sgp=1 onto links shared via the appropriate sharing buttons.  Determining click-throughs is just a matter of querying traffic logs with this approach.  We’ve been gathering data on social media buttons in particular for about four months.
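
The tagging itself can be as dumb as this sketch (the parameter names are the ones above; everything else is illustrative):

using System;

static class ShareTracking
{
    // Append the pointless-but-trackable parameter for a given network.
    public static string Tag(string url, string network)
    {
        string slug;
        switch (network)
        {
            case "facebook": slug = "sfb=1"; break;
            case "twitter": slug = "stw=1"; break;
            case "googleplus": slug = "sgp=1"; break;
            default: throw new ArgumentOutOfRangeException("network");
        }
        return url + (url.Contains("?") ? "&" : "?") + slug;
    }
}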

Note that we’re implicitly saying that shared links nobody clicks don’t count.  I don’t think this is contentious; spamming friends and followers (and circlers(?), whatever the G+ equivalent is) is a bit on the evil side.  Somebody has to vindicate a share by clicking on it; otherwise we call it a waste.

This sort of nonsense isn’t worth it.

With all this preparation we can actually ask some interesting questions; for Stack Exchange we’re interested in how much traffic is coming from social media (this is an easy one), and how many sharers do so through a button.

Traffic gave us no surprises; we get almost nothing from social media.  Part of this is probably domain-specific: my friends really don’t care about my knowledge of the Windows Registry or the Diablo III Auction House.  The sheer quantity of traffic coming from Google also drowns out any other source; it’s hard to get excited about tweaking social media buttons to bring in a few thousand extra views when tiny SEO changes can sling around hundreds of thousands.  To put the difference in scale in perspective, Google Search sends 3 orders of magnitude more traffic our way than the next highest referrer, which itself sends more than twice what Twitter does (and Twitter is the highest “social” referrer).

Now that I’ve established a (rather underwhelming) upper bound for how much our share buttons were getting us, I need to look at what portion can be ascribed to the share buttons versus people just copy/pasting.  This is where the slugs come in; presumably no one is going to add a random query parameter to URLs they’re copy/pasting, after all.

You basically want to run a query like this:

SELECT *
FROM TrafficLogs
WHERE
-- everybody tries to set Referer so you know you got some juice from them
(RefererHost = 't.co' OR RefererHost = 'plus.url.google.com' OR RefererHost = 'www.facebook.com')
AND
-- Everybody scrapes links that are shared; don't count those, they aren't people
UserAgent NOT LIKE '%bot%'
AND
-- Just the share buttons, ma'am
(Query LIKE '%stw=1%' OR Query LIKE '%sfb=1%' OR Query LIKE '%sgp=1%')

Adapt to your own data store, constrain by appropriate dates, etc. etc.  But you get the idea.  Exclude the final OR clause and the same query gives you all social media referred traffic to compare against.

What we found is that about 90% of everyone who does share, does so by copy/pasting.  Less than 10% of users make use of the share buttons even if they’ve already set out to share.  Such a low percentage of an already pretty inconsequential number doesn’t bode well for these buttons.

One final bit of investigation that’s a bit particular to Stack Exchange is figuring out the odds of a user sharing their newly created post.  We did this because while we always show the question sharing links to everyone, the answer sharing links are only shown to their owner and only for a short time after creation.  Same data, a little more complicated queries, and the answer comes out to ~0.4%.  Four out of every thousand new posts* on Stack Overflow get shared via a social media button (the ratio seems to hold constant for the rest of Stack Exchange, but there’s a lot less data to work with so my confidence is lower on them).

Data Shows They’re No Good, Now What?

Change for change’s sake is bad, right?  Benign things shouldn’t be moved around just because, but for social media buttons…

We know our users don’t like them, they’re sleazy (though we were very careful to avoid any of the tracking gotchas of +1s or Likes), and they’re cluttering some of our most important controls (six controls in a column on every question is a bit much, but to be effective [in theory] these buttons have to be prominent).

These buttons start with a lot of downsides, and the data shows that for us they don’t have much in the way of upsides.  So we could either trudge along telling ourselves “viral marketing” and “new media” (plus we’ve already got them, right, why throw them away?), or admit that these buttons aren’t cutting it and remove them.

So, are you measuring the impact of your site’s social media buttons?

You’re probably not in the same space as Stack Exchange, your users may behave differently, but you might be surprised at what you find out when you crunch the numbers.

*By coincidence this number seems to be about the same for questions and answers, it looks like more people seeing question share buttons balances out people being less enthusiastic about questions.


Extending Type Inferencing in C#

A while ago I used the first Roslyn CTP to hack truthiness into C#.  With the second Roslyn CTP dropping on June 5th, now pretty close to feature complete, it’s time to think up some fresh language hacks.

Roslyn What Now?

For those unfamiliar, Roslyn is Microsoft’s “Compiler as a Service” for the VB.NET and C# languages.  What makes Roslyn so interesting is that it exposes considerably more than “Parse” and “Compile” methods; you also have full access to powerful code transformations and extensive type information.

Roslyn is also very robust in the face of errors, making the most sense it can of malformed code and providing as complete a model as possible.  This is very handy for my evil purposes.

Picking A Defect

Now I happen to really like C#; it’s a nice, fast, fairly rapidly evolving, statically typed language.  You get first-class functions, a garbage collector, a good type system, and all that jazz.  Of course, no language is perfect, so I’m going to pick one of my nits with C# and hack up a dialect that addresses it using Roslyn.

The particular defect is that you must specify return types in a method declaration; this is often pointless repetition, in my opinion.

Consider this method:

static MvcHtmlString ToDateOnlySpanPretty(DateTime dt, string cssClass)
{
  return MvcHtmlString.Create(String.Format(@"<span title=""{0:u}"" class=""{1}"">{2}</span>", dt, cssClass, ToDateOnlyStringPretty(dt, DateTime.UtcNow)));
}

Is that first MvcHtmlString really necessary?  Things get worse when you start returning generic types; oftentimes I find myself writing code like:

Dictionary<string, int> CalcStatistics()
{
  var result = new Dictionary<string, int>();
  // ...
  return result;
}

Again, the leading Dictionary<string, int> is really not needed; the type returned is quite apparent, so we’re really just wasting keystrokes there.

This pointless repetition was addressed for local variables in C# 3.0 with the var keyword.  With Roslyn, we can hack up a pre-processor that allows var as a method’s return type.  The above would become the following:

var CalcStatistics()
{
  var results = new Dictionary<string, int>();
  // ...
  return results;
}

The Rules

Wanton type inferencing can be a bit dangerous; it makes breaking contracts pretty easy to do, for one.  So I’m imposing a few constraints on where “var as a return” can be used; for the most part these are arbitrary and easily changed.

  • var can only be used on non-public and non-protected methods
  • if a method returns multiple types, a type will be chosen for which all returned types have an implicit conversion with the following preferences
    • classes before interfaces
    • derived types before base types
    • generic types before non-generic types
    • in alphabetical order
    • with the exceptions that Object is always considered last and IEnumerable (and IEnumerable<T>) are considered before other interfaces
  • a method with empty returns will become void

The last rule may be a little odd, as C# doesn’t really have a notion of a “void type”, which is something of a defect itself in my opinion.  Which type is chosen for a return is well-defined, so it’s predictable (always a must in a language feature) and attempts to match “what you meant”.

I made one more handy extension, which is allowing var-returning methods to return anonymous types.  You can sort of do this now, either passing around Object or using horrible grotty hacks (note: don’t actually do that); but since there’s no name, you can’t do this cleanly.  Since var-returning methods don’t need names, I figured I might as well address that too.
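
In the dialect, that ends up looking like the following (not legal C# proper, of course):

// Only legal in this post's dialect: a non-public method returning an
// anonymous type; the rewriter hoists it into a generated named type.
var NameAndCount()
{
  return new { Name = "example", Count = 42 };
}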

Let’s See Some Code

Actually loading a solution and parsing/compiling is simple (and boring); check out the source for how to do it.

The first interesting bit is finding all methods that are using “var” as a return type.

// eligibleMethods is a List<MethodDeclarationSyntax>
var needsInferencing =
 eligibleMethods
 .Where(
  w => (w.ReturnType is IdentifierNameSyntax) &&
  ((IdentifierNameSyntax)w.ReturnType).IsVar
 ).ToList();

This literally says “find the methods that have return type tokens which are ‘var'”.  Needless to say, this would be pretty miserable to do from scratch.

Next interesting bit, we grab all the return statements in a method and get the types they return.

var types = new List<TypeInfo>();
// returns is a List<ReturnStatementSyntax>
foreach (var ret in returns)
{
  var exp = ret.Expression;

  if (exp == null)
  {
    types.Add(TypeInfo.None);
    continue;
  }

  var type = model.GetTypeInfo(exp);
  types.Add(type);
}

Note the “model.GetTypeInfo()” call there; that’s Roslyn providing us with detailed type information (which we’ll be consuming in a second) despite the fact that the code doesn’t actually compile successfully at the moment.

We move on to the magic of actually choosing a named return type.  Roslyn continues to give us a wealth of type information, so all possible types are pretty easy to get (and ordering them is likewise simple).

// info is an IEnumerable<TypeInfo>
var allPossibilities = info.SelectMany(i => i.Type.AllInterfaces).OfType<NamedTypeSymbol>().ToList();
allPossibilities.AddRange(info.Select(s => s.Type).OfType<NamedTypeSymbol>());
foreach (var i in info)
{
  var @base = i.Type.BaseType;
  while (@base != null)
  {
    allPossibilities.Add(@base);
    @base = @base.BaseType;
  }
}

Rewriting the method with a new return type is a single method call, and replacing the “from source” method with our modified one is just as simple.
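
For the curious, the shape is roughly this sketch (using today’s Roslyn API names, which differ a little from the CTP’s; method and root stand in for the MethodDeclarationSyntax and tree root from earlier):

// Swap 'var' for the inferred type, then swap the rewritten method into the tree.
var inferred = SyntaxFactory.ParseTypeName("System.Collections.Generic.Dictionary<string, int>");
var rewritten = method.WithReturnType(inferred.WithTrailingTrivia(SyntaxFactory.Space));
var newRoot = root.ReplaceNode(method, rewritten);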

And that’s basically it for the non-anonymous type case.

Anonymous Types

For returning anonymous types, everything up to the “rewrite the method” step is basically the same.  The trouble is, even though we know the “name” of the anonymous type, we can’t use it.  What we need to do instead is “hoist” the anonymous type into an actual named type and return that instead.

This is kind of complicated, but you can decompose it into the following steps:

  1. Make a note of returned anonymous types
  2. Rewrite methods to return a new type (that doesn’t exist yet)
  3. Rewrite anonymous type initializers to use the new type
  4. Create the new type declaration

Steps #1 and #2 are easy; Roslyn doesn’t care that your transformation doesn’t make sense yet.

Step #3 requires a SyntaxRewriter and a way to compare anonymous types for equality.  The rewriter is fairly simple, as the syntax for an anonymous type initializer and a named one vary by very little.
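
A sketch of what such a rewriter can look like (again with today’s API names rather than the CTP’s, and assuming every member uses an explicit Name = value initializer):

using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Rewrites 'new { A = x, ... }' into 'new SomeHoistedType { A = x, ... }'.
class AnonymousTypeHoister : CSharpSyntaxRewriter
{
  private readonly string hoistedTypeName;
  public AnonymousTypeHoister(string hoistedTypeName) { this.hoistedTypeName = hoistedTypeName; }

  public override SyntaxNode VisitAnonymousObjectCreationExpression(AnonymousObjectCreationExpressionSyntax node)
  {
    // Each 'A = value' member becomes a normal object-initializer assignment
    var assignments = node.Initializers.Select(i =>
      (ExpressionSyntax)SyntaxFactory.AssignmentExpression(
        SyntaxKind.SimpleAssignmentExpression,
        SyntaxFactory.IdentifierName(i.NameEquals.Name.Identifier),
        i.Expression));

    return SyntaxFactory.ObjectCreationExpression(SyntaxFactory.IdentifierName(hoistedTypeName))
      .WithInitializer(SyntaxFactory.InitializerExpression(
        SyntaxKind.ObjectInitializerExpression,
        SyntaxFactory.SeparatedList(assignments)));
  }
}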

Comparing anonymous types is a little more complicated.  By the C# spec, anonymous types are considered equivalent if they have the same properties (by name and type) declared in the same order.

Roslyn gives us the information we need just fine, so I threw a helper together for it:

internal static bool AreEquivalent(TypeSymbol a, TypeSymbol b, Compilation comp)
{
  var aMembers = a.GetMembers().OfType<PropertySymbol>().ToList();
  var bMembers = b.GetMembers().OfType<PropertySymbol>().ToList();
  if (aMembers.Count != bMembers.Count) return false;
  for (var i = 0; i < aMembers.Count; i++)
  {
    var aMember = aMembers[i];
    var bMember = bMembers[i];
    if (aMember.Name != bMember.Name) return false;
    if (aMember.DeclaredAccessibility != bMember.DeclaredAccessibility) return false;
    var aType = aMember.Type;
    var bType = bMember.Type;
    var aName = aType.ToDisplayString(SymbolDisplayFormat.FullyQualifiedFormat);
    var bName = bType.ToDisplayString(SymbolDisplayFormat.FullyQualifiedFormat);
    if (aName == bName) continue;
    var conv = comp.ClassifyConversion(aType, bType);
    if (!conv.IsIdentity) return false;
  }
  return true;
}

Notice that Roslyn also gives us details about what conversions exist between two types, another thing that would be absolutely hellish to implement yourself.

The final step, adding the new type, is the biggest in terms of code, although it’s not really hard to understand (I also cheat a little).  Our hoisted type needs to quack enough like an anonymous type to not break any code, which means it needs all the expected properties and to override Equals, GetHashCode, and ToString.

This is all contained in one large method.  Rather than reproduce it here, I’ll show what it does.

Take the anonymous type:

new
{
  A = "Hello",
  B = 123,
  C = new Dictionary<string, string>()
}

This will get a class declaration similar to

internal class __FTI88a733fde28546b8ae4f36786d8446ec {
  public string A { get; set; }
  public int B { get; set; }
  public global::System.Collections.Generic.Dictionary<string, string> C { get; set; }
  public override string ToString()
  {
    return new{A,B,C}.ToString();
  }
  public override int GetHashCode()
  {
    return new{A,B,C}.GetHashCode();
  }
  public override bool Equals(object o)
  {
    __FTI88a733fde28546b8ae4f36786d8446ec other = o as __FTI88a733fde28546b8ae4f36786d8446ec;
    if(other == null) return new{A,B,C}.Equals(o);
    return
      (A != null ? A.Equals(other.A) :
      (other.A != null ? other.A.Equals(A) : true )) &&
      B == other.B &&
      (C != null ? C.Equals(other.C) : (other.C != null ? other.C.Equals(C) : true ));
  }
}

The two big cheats here are the lack of a constructor (so it’s technically possible to modify the anonymous type; this is fixable, but I don’t think it’s needed for a proof-of-concept) and using the equivalent anonymous type itself to implement the required methods.

Conclusion

I’m pretty pleased with Roslyn thus far; it’s plenty powerful.  There are still a bunch of limitations and unimplemented features in the current CTP though, so be aware of them before you embark on some recreational hacking.

In terms of complaints, some bits are a little verbose (though that’s gotten better since the first CTP), documentation is still lacking, and there are some fun threading assumptions that make debugging a bit painful.  I’m sure I’m Doing It Wrong in some places, so some of my complaints may be hogwash; I expect improvements in each subsequent release as well.

If it wasn’t apparent, the code here is all proof-of-concept stuff.  Don’t use in production, don’t expect it to be bug free, etc. etc.

You can check out the whole project on GitHub.


More, a CSS compiler

It’s hard to represent CSS in pictures, so I’m breaking this post up with puppies.

CSS is an… interesting technology.  As Wil Shipley put it, “CSS: If a horse were designed by a committee of camels.”  There are just enough rough edges, weird decisions, and has-not-had-to-use-it-everyday blindness to make you want something better to work in.  While it’s still a draft (and, I hope, has no chance of becoming part of a standard), take a look at CSS Variables and despair that it was even proposed.

I’m hardly the first (second, or probably even millionth) person to think this, so there are some alternatives to writing CSS out there.  At Stack Exchange we use LESS (specifically the dotLESS variant); Sass is also pretty popular just from straw-polling developers, though I’m not really familiar with it.

For kicks I started throwing my own CSS compiler together a few months ago, one that’s a little more radical than LESS (from which I took a great deal of inspiration).  I’m calling it More for now, because naming is hard and I like the homage to LESS.  Although I am drawing explicit inspiration from LESS, More is not a LESS superset.

The radical changes are

  • Order doesn’t matter
  • “Resets” are first class
  • Sprites are first class
  • Explicit copy selector syntax, in addition to mixins

More details below; as you might guess, familiarity with LESS will make lots of this sound really familiar.  If you’d rather not read a 3,000+ word post, hop over to GitHub where the README takes a whack at summarizing the whole language.

Declaration Order Doesn’t Matter

In CSS, subsequent declarations override preceding ones: if you declare .my-class twice, the latter one’s rules win; likewise for rules within the same block.  This is pretty wrong-headed in my opinion; you should be stating your intentions explicitly, not just tacking on new rules.  Combine this with how negative an impact bloated CSS can have on page load speeds (CSS and HTML are the bare minimum necessary for display, everything else can be deferred), and I can’t help but classify this as a misfeature.

In More, the order of selector blocks in the output is explicitly not guaranteed to match that of the input.  Resets, detailed below, allow you to specify that certain blocks precede all others, and @media blocks necessarily follow all others, but any other reliance on ordering is considered a bad practice.

Likewise, overriding rules in blocks is an explicit operation rather than a function of declaration order; the specifics are detailed below.

He wants your CSS structure to be explicit.

Resets

More natively supports the concept of a “reset”.  Blocks contained within @reset{…} will always be placed at the top of the generated CSS file, and become available for inclusion via reset includes.

@reset{
  h1 { margin: 0; }
}
.class h1 { color: blue; }

becomes

h1 { margin: 0; }
.class h1 { color: blue; }

Blocks within a @reset block cannot use @reset includes or selector includes, but can use mixins.  Reset blocks are also not referenced by selector includes.

@reset includes let you say “reset this block to the default stylings”; practically, this means including any properties defined within a @reset{…} in the block with the @reset include.

@reset {
  h1 { margin: 0; }
}
.class h1 { color: blue; @reset(h1); }

becomes

h1 { margin: 0; }

.class h1 { color:blue; margin: 0; }

It is not necessary to specify the selector to reset to; an empty @reset() will reset to the selector of the block that contains it.  Note that for nested blocks this will be the innermost block’s selector.

@reset {
  :hover { color: red; }
  h1 { margin: 0; }
}
h1 {
  @reset(); 
  line-height: 15px; 
}
a {
  color: blue;
  &:hover {
    font-weight: bold;
    @reset();
  }
}

becomes

:hover { color: red; }
h1 { margin: 0; }
h1 { margin: 0; line-height: 15px; }
a { color: blue; }
a:hover { font-weight: bold; color: red; }

Properties included by a @reset() (with or without the optional selector) will never override those defined otherwise in a block.

@reset { a { color: red; height: 10px; } }
a { @reset(); color: blue; } }

becomes

a { color: red; height: 10px; }
a { color: blue; height: 10px; }

The intent behind resets is to make it easier to define a style’s “default element stylings” and to subsequently reset to them.

You know what’s cool? Not generating your sprite files independent of the CSS that uses them.

Sprites

If you include a non-trivial number of images on  a site, you really ought to be collapsing them into sprites.  There are plenty of tools for doing this out there, but tight integration with CSS generation (if you’ve already decided to compile your CSS) is an obvious win.

To that end, More lets you generate sprites with the following syntax:

@sprite('/img/sprite.png'){
  @up-mod = '/img/up.png';
  @down-mod = '/img/down.png';
}

This declaration creates sprite.png from up.png and down.png, and adds the @up-mod and @down-mod mixins to the file.  Mixins created from sprites are no different from any other mixins.

For example:

.up-vote {
  @up-mod();
}

could compile to

.up-vote {
  background-image: url(img/sprite.png);
  background-position: 0px 0px;
  background-repeat: no-repeat;
  width: 20px;
  height: 20px;
}

For cases where there are other properties you want to include semantically “within” a sprite, these generated mixins take one mixin as an optional parameter.  The following would be valid, for example:

.down-vote {
  @down-mod(@myMixin);
}

Finally, there are command line options for running an executable on generated sprite files.  It’s absolutely worthwhile to squeeze the last few bytes out of a PNG file you’ll serve a million times, but actually implementing such a compression algorithm is outside the scope of More.  While this could fairly easily be hacked together separately, as a built-in I hope it serves as a “you should be doing this” signal to developers.  I personally recommend PNGOUT for this purpose.

Include Selectors

Sometimes, you just want to copy CSS around.  This can be for brevity’s sake, or perhaps because you want to define a set of selectors in terms of some other selector.  To do that with More, just include the selector in @() like so:

div.foo { color: #abc; }
div.bar {
  background-color: #def;
  @(div.foo);
}

This syntax works with more complicated selectors as well, including comma delimited ones, making it possible to include many distinct blocks in a single statement.  It is not an error if a selector does not match any block in the document.

Selector includes are one of the last things resolved in a More document, so you are copying the final CSS around.  @(.my-class) will copy the final CSS in the .my-class block, but not any of the mixin invocations, nested blocks, or variable declarations that went into generating it.

For example:

.bar {
  foo: @c ?? #eee;
  &:hover {
    buzz: 123;
  }
}
.fizz {
  @c = #aaa;
  @(.bar);
}

Will generate:

.bar { foo: #eee; }
.bar:hover { buzz: 123; }
.fizz { foo: #eee; }

As @(.bar) copies only the properties of .bar in the final document.

Not stolen, inspired. *cough*

LESS Inspired Bits

That’s it for the really radical new stuff; most of the rest of More is heavily inspired by (though rarely syntactically identical to) LESS.  Where the syntax has been changed, it has been changed for reasons of clarity (at least to me; clarity is inherently subjective).

More takes pains to “stand out” from normal CSS, as it is far more common to be reading CSS than writing it (as with all code).  As such, I believe it is of paramount importance that as little of a file as possible need be read to understand what is happening in a single line of More.

This is the reason for heavy use of the @ character, as it distinguishes between “standard CSS” with its very rare at-rules and More extensions when scanning a document.  Likewise, the = character is used for assignment instead of :.

Variables

More allows you to define constants that can then be reused by name throughout a document; they can be declared globally, as well as in blocks like mixins (which are detailed below).

@a = 1;
@b = #fff;

These are handy for defining or overriding common values.

Another example:

@x = 10px;
img {
  @y = @x*2;
  width: @y;
  height: @y;
}

It is an error for one variable to refer to another in the same scope before it is declared, but variables in inner scopes can refer to those in containing scopes irrespective of declaration order.  Variables cannot be modified once declared, but inner variables can shadow outer ones.
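
For example (illustrative More; the comments are just annotations):

@x = 10px;
img {
  @x = 20px;  /* shadows the outer @x within this block */
  width: @x;  /* 20px */
}
p {
  height: @x; /* 10px, the outer declaration */
}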

Puppies, (debatably) not annoying.

Nested Blocks

One of the annoying parts of CSS is the repetition when declaring a hierarchy of selectors.  Selectors for #foo, #foo.bar, #foo.bar:hover, and so on must appear in full over and over again.

More lets you nest CSS blocks, again much like LESS.

#id {
  color: green;
  .class {
    background-color: red;
  }
  &:hover {
    border: 5px solid red;
  }
}

would compile to

#id {
  color: green;
}
#id .class {
  background-color: red;
}
#id:hover {
  border: 5px solid red;
}

Note the presence of LESS’s & operator, which means “concat with parent”.  Use it when you don’t want a space (a descendant selector) between nested selectors; this is often the case with pseudo-class selectors.

Mixins

Mixins effectively let you define functions which you can then include in CSS blocks.

@alert(){
  background-color: red;
  font-weight: bold;
}
.logout-alert{
  font-size: 14px;
  @alert();
}

Mixins can also take parameters.

@alert(@c) { color: @c; }

Parameters can have default values, or be optional altogether.  Default values are specified with =<value>; optional parameters have a trailing ?.

@alert(@c=red, @size?) { color: @c; font-size: @size; }
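For instance, supplying both arguments (a sketch; .note is a hypothetical block):

.note {
  @alert(blue, 12px); // color: blue; font-size: 12px;
}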

Mixins can be passed as parameters to other mixins, as can selector inclusions.

@outer(@a, @b) { @a(); @b(green); }
@inner(@c) { color: @c; }
h1 { font-size: 12px; }
img { @outer(@(h1), @inner); }

Will compile to

h1  { font-size: 12px; }
img { font-size: 12px; color: green; }

As in LESS, the special variable @arguments is bound to all the parameters passed into a mixin.

@mx(@a, @b, @c) { font-family: @arguments; }
p { @mx("Times New Roman", Times, serif); }

Compiles to

p { font-family: "Times New Roman", Times, serif; }

Parameter-less mixins do not define @arguments, and it is an error to name any parameter or variable @arguments.

Puppy function: distract reader from how much text is in this post.

Functions

More implements the following functions:

  • blue(color)
  • darken(color, percentage)
  • desaturate(color, percentage)
  • fade(color, percentage)
  • fadein(color, percentage)
  • fadeout(color, percentage)
  • gray(color)
  • green(color)
  • hue(color)
  • lighten(color, percentage)
  • lightness(color)
  • mix(color, color, percentage/decimal)
  • nounit(any)
  • red(color)
  • round(number, digits?)
  • saturate(color, percentage)
  • saturation(color)
  • spin(color, number)

Most of these functions are held in common with LESS, to ease transition.  All functions must be preceded by @, for example:

@x = #123456;
p {
   color: rgb(@red(@x), 0, 0);
}

Note that rgb(…), hsl(…), and other CSS constructs are not considered functions, but they will convert to typed values when used in variable declarations, mixin calls, or similar constructs.
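For example, a sketch combining the two (@base and .link are illustrative names, and the percentage syntax is assumed to match the function signatures above):

@base = hsl(210, 50%, 40%);
.link {
  color: @lighten(@base, 10%);
}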

In-String Replacements

More strives to accept all existing valid CSS, but also attempts to parse the right-hand side of properties for things like math and type information.  These two goals compete with each other, as there’s a lot of rather odd CSS out there.

img {
  filter: progid:DXImageTransform.Microsoft.MotionBlur(strength=9, direction=90);
  font-size: 10px !ie7;
}

It’s not reasonable, in my opinion, to expect style sheets with these hacks to be re-written just to take advantage of More, but More can’t really parse them either.

As a compromise, More will accept any of these rules but warn against their use.  Since a fancier parsing scheme isn’t possible, a simple string replacement is done instead.

img {
  filter: progid:DXImageTransform.Microsoft.MotionBlur(strength=@strength, direction=90);
  font-size: @(@size * 2px) !ie7;
}

The above demonstrates the syntax.  Any variables placed in the value will simply be replaced; more complicated expressions need to be wrapped in @().

Although this feature is intended mostly as a compatibility hack, it is also available in quoted strings.
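A sketch of the quoted-string case (assuming plain variable substitution inside the quotes, just as in the property values above; @theme is an illustrative variable):

@theme = dark;
.banner {
  background-image: url("/img/@theme/banner.png");
}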

Other, Less Radical, Things

There are a couple of nice More features that aren’t really LESS-inspired, but would probably fit right into a future LESS version.  They’re much less radical additions to the idea of a CSS compiler.


Optional Values

Mixin invocations, parameters, variables, and effectively whole properties can be marked “optional”.

.example {
  color: @c ?? #ccc;
  @foo()?;
}

This block would have a color property of #ccc if @c were not defined, making overrides quite simple (you either define @c or you don’t).  Likewise, the foo mixin would only be included if it were defined (without the trailing ? it would be an error).

A trailing ? on a selector include will cause a warning, but not an error.  It’s redundant, as selector includes are always optional.

Entire properties can be optional as follows:

.another-example {
  height: (@a + @b * @c)?;
}

If any of @a, @b, or @c is not defined then the entire height property is omitted.  Without the grouping parentheses and trailing ? this would instead be an error.

Overrides

When using mixins, it is not uncommon for multiple copies of the same property to be defined.  This is especially likely if you’re using mixins to override default values.  To alleviate this ambiguity, it is possible to specify that a mixin inclusion overrides rules in the containing block.

@foo(@c) { color: @c; }
.bar {
  color: blue;
  @foo(red)!;
}

This would create a .bar block with a single color property with the value of “red”.  More will warn, but not error, when the same property is defined more than once in a single block.

You can also force a property to always override (regardless of a trailing ! when included) by using !important.  More will not warn in these cases, as the intent is explicitly expressed.

You can also use a trailing ! on selector includes, to the same effect.
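By way of a sketch (reusing the .bar block from above, whose final color is red):

.baz {
  color: green;
  @(.bar)!; // .baz ends up with color: red;
}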


Values With Units

Values can be typed, and will be correctly coerced when possible, resulting in an error otherwise.  This makes it easier to catch nonsensical statements.

@mx(@a, @b) { height: @a + @b; }
.foo{
  @mx(12px, 14); // Will evaluate to height: 26px;
  @mx(1in, 4cm); // Will evaluate to height: 6.54cm; (or equivalent)
  @mx(15, rgb(50,20,10)); // Will error
}

Similarly, any function which expects multiple colors can take colors of any form (hsl, rgb, rgba, hex triples, or hex sextuples).

When needed, units can be explicitly stripped from a value using the built in @nounit function.
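A contrived sketch of @nounit (the names are purely illustrative):

@w = 140px;
.layer {
  z-index: @nounit(@w); // z-index: 140, with the px stripped
}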

Includes

You can include other More or CSS files using the @using directive.  Any included CSS files must be parse-able as More (which is a CSS superset, although a strict one).  Any variables, mixins, or blocks defined in any include can be referred to at any point in a More file; include order is unimportant.

@using can take a media query; this results in the copied blocks being placed inside a @media block with the appropriate query specified.  More will error if the media query is unparseable.
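As a sketch (the quoted-path form here is an assumption on my part; the trailing media query behaves as described above):

@using "reset.css";
@using "tv-overrides.more" tv;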

At-rules, which are usually illegal within @media blocks, will be copied into the global context; this applies to mixins as well as @font-face, @keyframes, and so on.  This makes such at-rules available in the including file, allowing for flexibility in project structure.

More does allow normal CSS @imports, though they aren’t really advisable for speed reasons.  @imports must appear at the start of a file, and More will warn about @import directives that had to be re-ordered, as such files were not strictly compliant CSS in the first place.

CSS3 Animations

More is aware of the CSS3 @keyframes directive, and gives you all the same mixin and variable tools to build them.

@mx(@offset) { top: @offset * 2px; left: @offset + 4px; }
@keyframes my-anim {
  from { @mx(5); }
  to { @mx(15); }
}

Compiles to

@keyframes my-anim {
  from { top: 10px; left: 9px;}
  to { top: 30px; left: 19px; }
}

It is an error to include, either via a mixin or directly, a nested block in an animation block.  More will also accept (and emit unchanged) the Mozilla and Webkit prefixed versions of @keyframes.

Variables can be declared within both @keyframes declarations and the inner animation blocks; the following, for example, would be legal More.

@keyframes with-vars {
  @a = 10;
  from { 
    @b = @a * 2;
    top: @b + 5px;
    left: @b + 3px;
  }
  to { top: @a; left: @a; }
}

It is an error to place @keyframes within @media blocks or mixins.  As with @media, @keyframes included via a @using directive will be pulled into the global scope.

More Is A CSS Superset

More strives to accept all existing CSS as valid More.  In-string replacements are a concession to this, as is @import support; both of which were discussed above.  @charset, @media, and @font-face are also supported in pursuit of this goal.

@charset may be declared in any file, even those referenced via @using.  It is an error for more than one character set to be referred to in a final file.  More will warn if an unrecognized character set is used in any @charset declaration.

@media declarations are validated, with More warning on unrecognized media types and erroring if no recognized media types are found in a declaration.  Blocks in @media statements can use mixins, but cannot declare them.  Selector includes first search within the @media block, and then search in the outer scope.  It is not possible for a selector include outside of a @media block to refer to a CSS block inside of one.

By way of example:

img { width: 10px; @(.avatar); }
p { font-color: grey; }
@media tv {
  p { font-color: black; }
  .avatar { border-width: 1px; @(p); }
  .wrapper { height:10px; @(img); }
}

Will compile to:

img { width: 10px; }
p { font-color: grey; }
@media tv {
  p { font-color: black; }
  .avatar { border-width: 1px; font-color: black; }
  .wrapper { height:10px; width: 10px; }
}

@font-face can be used as expected; however, it is an error to include any nested blocks in a @font-face declaration.  More will warn if a @font-face is declared but not referred to elsewhere in the final CSS output; this is generally a sign of an error, though another CSS file (or an inline style) may still refer to the declared font.  More will error if no font-family or src rule is found in a @font-face block.
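Since More is a CSS superset, a plain declaration works as-is.  This sketch satisfies the font-family and src requirements, and references the font so no unused warning is raised:

@font-face {
  font-family: "My Web Font";
  src: url("/fonts/my-web-font.woff");
}
h1 { font-family: "My Web Font"; }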


Minification

More minifies its output (based on a command line switch), so you don’t need a separate CSS minifier.  It’s not the best minifier out there, but it does an OK job.

In particular it removes unnecessary white space and quotation marks, and reduces color and size declarations; “#008000” becomes “green”, “140mm” becomes “14cm”, and so on.

A contrived example:

img
{
  a: #aabbcc;
  b: rgb(100, 50, 30);
  c: #008000;
  d: 10mm;
  e: 1.00;
  f: 2.54cm;
}

Would become (leaving in white space for reading clarity):

img 
{
  a: #abc;
  b: #64321e;
  c: green;
  d: 1cm;
  e: 1;
  f: 1in;
}


Cross Platform

For web development, platform choice really should be irrelevant.  Acknowledging that, More runs just fine under Mono, and as such works from the Linux and Mac OS X command lines.

More Isn’t As Polished As LESS

I always feel a need to call this out when I push a code artifact: this isn’t a Stack Exchange thing, it’s a “Kevin Montrose screwing around with code to relax” thing.  I’ve made a pretty solid effort to make it “production grade” (lots of tests, and I’ve converted some real CSS, some of which is public and some of which unfortunately is not), but this is still a hobby project.  Nothing is going to make it as solid as years of real world usage, which LESS and Sass have and More doesn’t.

Basically, if you’re looking for something to simplify CSS right now and it cannot fail, go with one of them.  But if you like some of what you read and are looking to play around, well…

Give it a Spin

You can also check out the code on GitHub.

