Overthinking CSV With Cesil: Writing Dynamic Types

I covered how to write known, static, types with Cesil in my previous post. As with reading, Cesil also supports dynamic types.

In my post on dynamic reading, I argued dynamic is still worth supporting due to how convenient it makes some common read operations. I feel the case for writing dynamic types is much weaker – it is rare to want to write heterogeneous types, and even rarer to not be able to easily map such a mixed collection to a single known type. All that said, for symmetry’s sake Cesil does have extensive support for writing dynamic types.

As with reading, writing static and dynamic types is essentially symmetric. All the same methods are provided, supporting all the same operations. The only difference is that rather than using Configuration.For<TRow>() you use Configuration.ForDynamic(), and rather than IBoundConfiguration<TRow> being parameterized by a type TRow it’s parameterized by dynamic.
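For instance, writing an ExpandoObject looks something like the following sketch – note that obtaining a writer via CreateWriter(TextWriter) is an assumption patterned on the static API, not something spelled out above:

```csharp
using System.Dynamic;
using System.IO;
using Cesil;

// a minimal sketch; CreateWriter(TextWriter) is assumed to be how a
// writer is obtained from the dynamic IBoundConfiguration
dynamic row = new ExpandoObject();
row.Name = "fidget";
row.Age = 4;

var config = Configuration.ForDynamic();

using (var text = new StringWriter())
{
    using (var writer = config.CreateWriter(text))
    {
        writer.Write(row);
    }

    // with default options this should yield a header row ("Name,Age")
    // followed by a single data row ("fidget,4")
    var csv = text.ToString();
}
```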

When using the DefaultTypeDescriber, performance varies considerably based on the “kind” of dynamic you are writing. Cesil special cases “well known” dynamic types for improved performance – namely the dynamic rows Cesil creates and ExpandoObject are treated specially. For other DLR aware types Cesil will use IDynamicMetaObjectProvider directly, which is considerably slower. Plain .NET types delegate to the usual EnumerateMembersToSerialize method, which implements “normal” .NET behavior.

Cesil allows customizing the members discovered, and the order they’ll be written in, by using a custom ITypeDescriber with your Options and implementing GetCellsForDynamicRow directly. Simple inclusion/exclusion can be controlled by subclassing the DefaultTypeDescriber and overriding the ShouldIncludeCell method. I’ll cover how this works in more detail in a later post that goes in depth into all of Cesil’s configuration options.
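As a sketch of the simple case – assuming (the exact signature may differ) that ShouldIncludeCell receives the column name, the write context, and the row:

```csharp
using Cesil;

// a sketch only; the precise ShouldIncludeCell signature is assumed
public sealed class SkipIdColumn : DefaultTypeDescriber
{
    protected override bool ShouldIncludeCell(string name, in WriteContext context, dynamic row)
    {
        // drop any column named "Id", defer to the default everywhere else
        if (name == "Id")
        {
            return false;
        }

        return base.ShouldIncludeCell(name, in context, row);
    }
}
```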

And that’s about it for dynamic serialization – there’s not a lot to cover since so much of it is “just like writing static types, but dynamic.”  This post’s Open Question is, accordingly, more “tactical” than previous ones:

The GetCellsForDynamicRow interface isn’t technically wrong, but it has the undesirable property that general implementations will allocate at least a little bit for each row written. An allocation-free alternative would be a marked improvement, provided it doesn’t come at the cost of flexibility or reasonable performance.

As before, I’ve opened an issue to gather long form responses.  Remember that, as part of the sustainable open source experiment I detailed in the first post of this series, any commentary from a Tier 2 GitHub Sponsor will be addressed in a future comment or post. Feedback from non-sponsors will receive equal consideration, but may not be directly addressed.

In my next post I’ll go into detail on all the configuration options Cesil supports. It’ll be a long post, as Cesil supports customizing the expected format, as well as almost every detail of describing and mapping types.


Overthinking CSV With Cesil: Writing Known Types

My last two posts have covered deserializing with Cesil, the subsequent two will cover serialization. This post will specifically dig into the case where you know the types involved at compile time, while the next one will cover the dynamic type case. If you’ve read the previous posts on read operations hopefully a lot of this will seem intuitive, just in reverse.

Again, CesilUtils exposes a bunch of utility methods – this time with names like WriteXXX. Variants exist for single row, multiple row, synchronous, asynchronous, and “straight to a file” operations. Just like with reading, CesilUtils doesn’t allow you to reuse an IBoundConfiguration<TRow> nor does it expose the underlying I(Async)Writer<TRow> but is convenient when performance and customization aren’t of paramount importance.

As with reading, maximum performance and flexibility are found in using either the IWriter<TRow> or IAsyncWriter<TRow> interfaces obtained from an IBoundConfiguration<TRow> created via Configuration.For<TRow>. Creating configurations is mildly expensive, so caching and reusing them can be beneficial.

The writer interfaces expose methods to do the following:

  • Write a collection of rows with WriteAll(Async)
    • The sync version accepts an IEnumerable<T>
    • The async version can take either an IEnumerable<T> or an IAsyncEnumerable<T>
  • Write a single row with Write(Async)
  • Write a comment with WriteComment(Async)
    • If a comment contains a row ending sequence of characters, it will be split into multiple comments automatically
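Putting those together, a minimal synchronous write looks something like this sketch (Pet is a hypothetical row type, and CreateWriter(TextWriter) is assumed to be how an IWriter<TRow> is obtained):

```csharp
using System.Collections.Generic;
using System.IO;
using Cesil;

var config = Configuration.For<Pet>();

var rows = new List<Pet>
{
    new Pet { Name = "fidget", Age = 4 },
    new Pet { Name = "marshmallow", Age = 2 },
};

using (var text = new StringWriter())
using (var writer = config.CreateWriter(text))   // assumed creation method
{
    // with default Options this should emit a header row, then both rows
    writer.WriteAll(rows);
}

// hypothetical row type
class Pet
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```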

Mapping a type to a set of columns, the order of those columns, and the conversion of the values of those columns to text is done with the ITypeDescriber registered on the Options provided to Configuration.For<TRow> or the method on CesilUtils (by default, this is an instance of DefaultTypeDescriber). When an IBoundConfiguration<TRow> is created, ITypeDescriber.EnumerateMembersToSerialize is invoked once, and the returned SerializableMembers detail how Cesil will map a TRow instance to a set of text columns.

Specifically, a SerializableMember details:

  • The name of the column, which may be written as part of a header row
  • The Getter to use to obtain a value from a TRow instance
  • An (optional) ShouldSerialize to control, per-row, whether a column should be included
  • The Formatter used to turn the column’s value into a sequence of characters
  • Whether or not to include a column if it has the default value for its type
    • Cesil uses Activator.CreateInstance to obtain a default instance of value types, and uses null as the default value for reference types

The order of columns is taken from the order they are yielded by the IEnumerable<SerializableMember> returned by ITypeDescriber.EnumerateMembersToSerialize.
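A custom order can therefore be had by reusing the default discovery and re-sorting it – a sketch, assuming EnumerateMembersToSerialize is virtual, takes a TypeInfo, and that SerializableMember exposes the column name as Name:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Cesil;

// a sketch; the overridability and exact shape of
// EnumerateMembersToSerialize are assumptions
public sealed class AlphabeticalTypeDescriber : DefaultTypeDescriber
{
    public override IEnumerable<SerializableMember> EnumerateMembersToSerialize(TypeInfo forType)
    {
        // keep the default member discovery, but emit columns in name order
        return base.EnumerateMembersToSerialize(forType).OrderBy(m => m.Name);
    }
}
```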

There is quite a lot of flexibility in how Getters, ShouldSerializes, and Formatters can be created. They will be covered in detail in a later post.

There’s less internal state being managed when Cesil is writing than when it is reading, so there are no fancy state machines or lookup tables. The most interesting part is NeedsEncodeHelper, which checks for characters that would require escaping using the X64 intrinsics supported in modern .NET (provided your processor supports them).
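To illustrate the idea – this is not Cesil’s actual code, and it uses the cross-platform System.Numerics.Vector<T> rather than the X64 intrinsics proper – a vectorized “needs encoding” check might look like:

```csharp
using System;
using System.Numerics;
using System.Runtime.InteropServices;

// a sketch of the idea behind NeedsEncodeHelper, not Cesil's actual code
static bool NeedsEncode(ReadOnlySpan<char> chars, char separator, char escapeStart)
{
    ReadOnlySpan<ushort> data = MemoryMarshal.Cast<char, ushort>(chars);
    var i = 0;

    if (Vector.IsHardwareAccelerated)
    {
        var sepV = new Vector<ushort>(separator);
        var escV = new Vector<ushort>(escapeStart);
        var crV = new Vector<ushort>('\r');
        var lfV = new Vector<ushort>('\n');

        // compare a whole vector's worth of characters at a time
        for (; i + Vector<ushort>.Count <= data.Length; i += Vector<ushort>.Count)
        {
            var chunk = new Vector<ushort>(data.Slice(i, Vector<ushort>.Count));
            if (Vector.EqualsAny(chunk, sepV) || Vector.EqualsAny(chunk, escV) ||
                Vector.EqualsAny(chunk, crV) || Vector.EqualsAny(chunk, lfV))
            {
                return true;
            }
        }
    }

    // scalar fallback, and the tail the vectorized loop didn't cover
    for (; i < data.Length; i++)
    {
        var c = data[i];
        if (c == separator || c == escapeStart || c == '\r' || c == '\n')
        {
            return true;
        }
    }

    return false;
}
```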

There are some minor additional details to keep in mind while writing with Cesil:

  • All XXXAsync() methods try to make as much progress as they can without blocking – they don’t yield just for the sake of yielding.
  • All XXXAsync() methods do take an optional CancellationToken, and pass it down to the underlying stream. CancellationTokens are checked at reasonable intervals, but no guarantees are made about how often.
  • If you try to write a comment without having configured your Options with a comment character, an exception will be raised.
  • If you try to write a value that would require escaping without having configured your Options with a way to start and end escaped values, an exception will be raised.
    • Options.Default has " as its escape start and stop character.
  • If you try to write a value that includes the escape start and stop character, but have not configured your Options with an escape character, an exception will be raised.
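For example, enabling comments and standard double-quote escaping might look like this sketch (the OptionsBuilder method names are assumptions based on the settings described above):

```csharp
using Cesil;

// a sketch; these OptionsBuilder method names are assumptions
var options =
    Options.CreateBuilder(Options.Default)
        .WithCommentCharacter('#')              // makes WriteComment(Async) legal
        .WithEscapedValueStartAndEnd('"')       // " starts and stops escaped values
        .WithEscapedValueEscapeCharacter('"')   // "" escapes a " inside an escaped value
        .ToOptions();
```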

And that about covers how to write static types with Cesil.

The Open Question for this post is a return to an earlier one, but with a particular focus on writing: Is there anything missing from IWriter(Async) that you’d expect to be supported in a modern .NET CSV library?

This question has already led to some changes, which will appear in the next release of Cesil – adding comment writing methods that take ReadOnlySpan<char> and ReadOnlyMemory<char> parameters, clarifying some parameter names, and returning counts of the number of rows written from the enumerable taking write methods.

Remember that, as part of the sustainable open source experiment I detailed in the first post of this series, any commentary from a Tier 2 GitHub Sponsor will be addressed in a future comment or post. Feedback from non-sponsors will receive equal consideration, but may not be directly addressed.

In my next post I’ll cover how Cesil supports writing dynamic types, those not known at compile time. As you might expect from reading static and dynamic types, it is very similar to how static types are written…


Overthinking CSV With Cesil: Reading Dynamic Types

In my last post I went over how to use Cesil to deserialize to known, static, types. Since version 4.0, C# has also had a notion of dynamic types – ones whose bindings, members, and conversions are all resolved at runtime – and Cesil also supports deserializing into these.

In 2020, supporting dynamic isn’t exactly a given – dynamic is relatively rare in the .NET ecosystem, the big “Iron” use cases in 2015 (dynamic languages running on .NET) are all dead as far as I can tell, and the static-vs-dynamic-typing pendulum has been swinging back towards static with the increasing popularity of languages like Go, Rust, and TypeScript (even Python supports type annotations these days). All that said, I still believe there are niches in C# well served by dynamic – “quick and dirty” data loading without declaring types, and loading heterogeneous data. These are both niches Cesil aims to support well, and therefore dynamic support is a first-class feature.

Part of being a first-class feature means that all the flexibility and ease of use from static types is also present when working with dynamic. There aren’t any new types or interfaces, just use Configuration.ForDynamic() instead of Configuration.For<TRow>(), Options.DynamicDefault (which assumes a header row is present) instead of Options.Default (which will detect if a header row is present or not, which isn’t possible with unknown types), and the EnumerateDynamicXXX() methods on CesilUtils. The same readers with the same methods are all available, only now instead of some concrete T you’ll get a dynamic back. And, while dynamic operation does impose additional overhead, Cesil still aims for dynamic operations to be reasonably performant – within a factor of 3 or so of their static equivalent.

Regardless of the Options used, the dynamic rows returned by Cesil always support:

  • Casting to IDisposable
  • Calling the Dispose() method
  • Get accessor with an int (ie. someRow[0]), which returns a dynamic cell
    • This will throw if the int is out of bounds
  • Get accessor with a column name (ie. someRow["someColumn"]), which returns a dynamic cell
    • If there was no header row present when reading (or if the column name is not found), this will throw
  • Get accessor with an Index (ie. someRow[^1]), which returns a dynamic cell
    • This will throw if the Index is out of bounds
  • Get accessor with a Range (ie. someRow[1..2]), which returns a dynamic row
    • This will throw if the Range is out of bounds
  • Get accessor with a ColumnIdentifier (ie. someRow[ColumnIdentifier.Create(3)]), which returns a dynamic cell

Likewise, regardless of the Options used, dynamic cells (obtained by indexing a dynamic row per above) always support casting to IConvertible. IConvertible is a temperamental interface, so Cesil’s implementation is limited – it doesn’t support non-null IFormatProviders, and makes a very coarse attempt at determining TypeCode. Basically, Cesil does just enough for the various methods on Convert to work “as you’d expect” for dynamic cells.
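Putting rows, cells, and IConvertible together – a sketch, where EnumerateDynamicFromString is an assumed name for one of the EnumerateDynamicXXX methods:

```csharp
using System;
using Cesil;

var csv = "Name,Age\r\nfidget,4\r\nmarshmallow,2";

// EnumerateDynamicFromString is an assumed CesilUtils method name
foreach (dynamic row in CesilUtils.EnumerateDynamicFromString(csv))
{
    using ((IDisposable)row)                 // rows support IDisposable
    {
        var byName = row["Name"];            // dynamic cell, by column name
        var byIndex = row[1];                // dynamic cell, by position
        var fromEnd = row[^1];               // dynamic cell, by Index
        int age = Convert.ToInt32(byIndex);  // works via the cell's IConvertible
    }
}
```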

Just like with static deserialization, the ITypeDescriber on the Options used to create the IBoundConfiguration<TRow> controls how values are mapped to types. The differences are that dynamic conversions are discovered each time they occur (versus once, for static types) and conversion decisions are deferred until a cast (versus happening during reading, for static types). Dynamic deserialization does not allow custom InstanceProviders (as the dynamic backing infrastructure is provided directly by Cesil) – however the XXXWithReuse() methods on I(Async)Reader<TRow> still allow for some control over allocations.

Customization of dynamic conversions can be done with the DynamicRowConverter type (for rows) and the ITypeDescriber.GetDynamicCellParserFor() method (for cells). I’ll dig further into these capabilities in a later post. Out of the box, the DefaultTypeDescriber (used by Options.DynamicDefault) implements the conversions you would expect.

Namely, for dynamic rows Cesil’s defaults allow conversion to:

  • Object
  • Tuples
    • Rows with more than 7 columns can be mapped to nested Tuples using the TRest generic parameter
  • ValueTuples, including those with a TRest parameter
    • Rows with more than 7 columns can be mapped to nested ValueTuples using the TRest generic parameter
  • IEnumerable<T>
    • Each cell is lazily converted to T
  • IEnumerable
    • Each cell becomes an object, with no conversion occurring
  • Any type with a constructor taking the same number of parameters as the row has columns
    • Each cell is converted to the expected parameter type
  • Any type with a constructor taking zero parameters, provided the row has column names
    • Any properties (public or private, static or instance) whose name matches a column name will be set to the column’s value
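A sketch of those row conversions in action (Pet is a hypothetical type with a zero-parameter constructor):

```csharp
using System.Collections.Generic;

// each assignment triggers one of the default conversions listed above
static void ConvertRow(dynamic row)
{
    // to a ValueTuple - each cell is converted to the matching element type
    (string Name, int Age) asTuple = row;

    // to IEnumerable<string> - each cell is lazily converted to string
    IEnumerable<string> asCells = row;

    // to a POCO - properties whose names match column names are set
    Pet asPet = row;
}

// hypothetical type with a zero-parameter constructor
class Pet
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```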

If no conversion is possible, Cesil will raise an exception. If a conversion is chosen that requires converting cells to static values, those conversions may also fail and raise exceptions.

For dynamic cells, Cesil’s defaults allow conversion to the types covered by the default Parsers – the framework’s primitive types, enums (and their nullable equivalents), and common BCL types like DateTime, DateTimeOffset, Guid, TimeSpan, Uri, and Version.

As with rows, finding no conversion or having a conversion fail will cause Cesil to raise an exception.

And that covers the why and what of dynamic deserialization in Cesil. This post leaves me with two Open Questions:

  1. Are there any useful dynamic operations around reading that are missing from Cesil?
  2. Do the conversions provided by the DefaultTypeDescriber for dynamic rows and cells cover all common use cases?

As before, I’ve opened two issues to gather long form responses.  Remember that, as part of the sustainable open source experiment I detailed in the first post of this series, any commentary from a Tier 2 GitHub Sponsor will be addressed in a future comment or post. Feedback from non-sponsors will receive equal consideration, but may not be directly addressed.

Next time I’ll dive into the write operations Cesil supports, starting with static types.


Overthinking CSV With Cesil: Reading Known Types

The most common operation for a C# serialization library is usually reading into a known, static, type. That is, you’re given a stream or a blob of bytes and need to turn it into an instance of some type T. Cesil aims to make this common operation simple, fast, and customizable.

For cases where performance and customization are less important, CesilUtils exposes a bunch of EnumerateXXX methods. Both synchronous and asynchronous versions are available, and all methods return results lazily.

Maximum performance and flexibility are found in using either the IReader<TRow> or IAsyncReader<TRow> interfaces, obtained from an IBoundConfiguration<TRow> created via Configuration.For<TRow>. Unlike CesilUtils, using these interfaces lets you cache and reuse an IBoundConfiguration<TRow>, and allows you to read comments and reuse rows.

Concretely, I(Async)Reader<TRow> methods let you:

  • Read a single row with TryRead(Async)
  • Lazily enumerate all rows with EnumerateAll(Async)
  • Greedily read all rows with ReadAll(Async), optionally into a provided collection
  • Reuse row instances with the XXXWithReuse variants
  • Read comments, when configured, with TryReadWithComment(WithReuse)(Async)
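In use, that looks something like this sketch (Pet is a hypothetical row type, and CreateReader(TextReader) is assumed to be how an IReader<TRow> is obtained):

```csharp
using System.IO;
using Cesil;

var config = Configuration.For<Pet>();

using (var text = new StringReader("Name,Age\r\nfidget,4\r\nmarshmallow,2"))
using (var reader = config.CreateReader(text))   // assumed creation method
{
    while (reader.TryRead(out Pet row))
    {
        // row is a fully populated Pet instance
    }
}

// hypothetical row type
class Pet
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```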

Determining which members on the given TRow type map to which columns, how those columns should be parsed, and how members should be set is done with the ITypeDescriber registered on the Options provided to Configuration.For<TRow> or the method on CesilUtils (by default, this is an instance of DefaultTypeDescriber). When an IBoundConfiguration<TRow> is created, ITypeDescriber.EnumerateMembersToDeserialize is invoked once, and the returned DeserializableMembers detail how Cesil will map rows of data to TRow instances.

Precisely, you can specify:

  • The name of the column a member maps to
    • If a CSV lacks a header row, the order of the DeserializableMembers will be used to match columns instead
  • The Parser to use to turn a ReadOnlySpan<char> into a specific type
  • An (optional) Reset to call before setting a member
  • The Setter to use to place the type created by the Parser on a member of TRow
  • Whether or not a member is required

A separate call to ITypeDescriber.GetInstanceProvider will be made to obtain an InstanceProvider which is used to get TRow instances needed when reading a row. While the call to get the InstanceProvider always happens, the InstanceProvider won’t be used if the XXXWithReuse methods are called with a non-null TRow reference. InstanceProviders allow you to implement sophisticated row re-use or initialization logic that a simple “ref TRow” isn’t adequate for.
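Continuing the sketch from above, row reuse looks roughly like this (assuming TryReadWithReuse takes the row by ref, per the “ref TRow” mention):

```csharp
// a sketch of row reuse; the TryReadWithReuse signature is assumed
Pet row = null;
while (reader.TryReadWithReuse(ref row))
{
    // the same Pet instance is repopulated on each iteration, so the
    // InstanceProvider is only consulted for the very first row
}
```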

There’s a great deal of flexibility in how InstanceProviders, Parsers, Resets, and Setters can be created which will be covered in a later post.

Internally, Cesil models reading a CSV as transitions through a state machine. Each character read is mapped to a CharacterType (one of EscapeStartAndEnd, Escape, ValueSeparator, CarriageReturn, LineFeed, CommentStart, Whitespace, Other, and DataEnd), which is then used in conjunction with the current State to look up a TransitionRule. TransitionRules specify the new State as well as an AdvanceResult, which instructs Cesil to take certain actions (like skipping the character, appending a character to the read buffer, finishing a column or row, etc.). Only the mapping from char to CharacterType depends on the configured Options; Cesil pre-allocates and reuses the TransitionRules that back the state machine.
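As an illustration of the technique – and explicitly not Cesil’s actual internals – a table-driven transition lookup might be shaped like:

```csharp
// an illustrative sketch of a table-driven state machine, not Cesil's code
static class StateMachineSketch
{
    // trimmed-down versions of the enumerations described above
    enum CharacterType : byte { EscapeStartAndEnd, Escape, ValueSeparator, CarriageReturn, LineFeed, CommentStart, Whitespace, Other, DataEnd }
    enum State : byte { RecordStart, InValue, InEscapedValue, RecordEnd }
    enum AdvanceResult : byte { Skip, Append, FinishColumn, FinishRow }

    readonly struct TransitionRule
    {
        public readonly State NextState;
        public readonly AdvanceResult Result;

        public TransitionRule(State next, AdvanceResult result)
        {
            NextState = next;
            Result = result;
        }
    }

    const int CharacterTypeCount = 9;
    const int StateCount = 4;

    // one flat array, pre-allocated once and shared by every reader
    static readonly TransitionRule[] Rules = BuildRules();

    static TransitionRule GetRule(State state, CharacterType c)
        => Rules[((int)state * CharacterTypeCount) + (int)c];

    static TransitionRule[] BuildRules()
    {
        var rules = new TransitionRule[StateCount * CharacterTypeCount];

        // a couple of representative entries; a real table covers every pair
        rules[((int)State.InValue * CharacterTypeCount) + (int)CharacterType.Other] =
            new TransitionRule(State.InValue, AdvanceResult.Append);
        rules[((int)State.InValue * CharacterTypeCount) + (int)CharacterType.ValueSeparator] =
            new TransitionRule(State.InValue, AdvanceResult.FinishColumn);

        return rules;
    }
}
```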

Although Cesil’s state machine progresses one character at a time, Cesil reads multiple characters at a time in order to maximize performance and better match modern C# interfaces like PipeReader. Control over the read buffer’s size is provided through ReadBufferSizeHint. Cesil also batches certain common AdvanceResults, like skipping or appending characters, so that the overhead of certain method calls is minimized in hot paths.

Taken altogether, and at a very high level, when Cesil reads a single row this is what happens:

  1. Characters are read into the read buffer, if it is empty
    1. If there are no more characters to read into the buffer, proceed as if we have read a single DataEnd CharacterType.
  2. If no instance of TRow has been provided, Cesil obtains one using the InstanceProvider
  3. For each character in the read buffer…
    1. The character is mapped to a CharacterType
    2. The current State and CharacterType are used to find the next State and an AdvanceResult
      1. If the AdvanceResult is batchable, note is made of it but no action is taken
      2. If the AdvanceResult is not batchable, any pending batched actions are taken and then the new action is taken
        1. If the AdvanceResult finishes a value, the current pending value is Parsed, the Reset for the current column is called (if it exists), and the Setter is called
        2. If the AdvanceResult finishes a record, we return the row and are finished
    3. Remove the read character from the buffer
  4. If we haven’t returned a row, go back to 1

There are a few consequences of this design:

  1. There can be pending data in the read buffer when a row is returned, which means that you cannot use Cesil to read “up to a particular row” in the underlying data stream. Once Cesil starts reading, no guarantees are made about the state of the underlying stream.
  2. For maximum performance it’s worth reusing IBoundConfigurations, as a decent amount of reflection and lookup creation happens when one is created. All I(Async)Readers that one creates will reuse that work, making a cache very efficient.
  3. In asynchronous cases, Cesil will await only when the read buffer is empty and cannot be filled without blocking. This means that Cesil can “go async” much less frequently than might naively be expected, were it to be reading characters one at a time.

Finally, Cesil does offer support for reading whole line CSV comments. Although non-standard and rather rare, they arise often enough to be worth supporting. The reader interfaces expose TryReadWithComment(WithReuse)(Async) methods that return a ReadWithCommentResult, a tagged union type that wraps the comment or row read. In order to read comments, Options.CommentCharacter must have been set when the IBoundConfiguration<TRow> was created – calling any of the XXXWithComment methods when it has not been set will raise an exception. If a comment is encountered when a non-XXXWithComment method is invoked, but Options was configured with comment support, the comment will be silently skipped.

That wraps up what static deserialization looks like in Cesil.

The Open Question for this post is the same as the previous post, but with a particular focus on reading: Is there anything missing from IReader(Async) that you’d expect to be supported in a modern .NET CSV library?

This question has already led to some planned changes, namely removing the class constraint on I(Async)Reader’s TCollection generic parameter, and adding comment writing methods that take ReadOnlySpan<char> and ReadOnlyMemory<char> parameters.

Remember that, as part of the sustainable open source experiment I detailed in the first post of this series, any commentary from a Tier 2 GitHub Sponsor will be addressed in a future comment or post. Feedback from non-sponsors will receive equal consideration, but may not be directly addressed.

Next time I’ll be discussing reading dynamic types, and why I think that’s still worth supporting in 2020…


Overthinking CSV With Cesil: A “Modern” Interface

Part of Cesil’s raison d’être is to be a “modern” library for CSV, one which takes advantage of all the fancy new additions in recent C# and .NET Core versions. What exactly is “modern” is debatable, so this post lays out my particular take.

To make things concrete, the “main” interfaces for Cesil are split into:

  • Configuration – with the Options (and its Builder) and Configuration classes
  • Reading – with the IReader<TRow> and IAsyncReader<TRow> interfaces
    • Each interface has a way to read single rows, lazily enumerate all rows, greedily read all rows, and read all rows into the provided collection
  • Writing – with the IWriter<TRow> and IAsyncWriter<TRow> interfaces
    • Each interface has a way to write a single row, write several rows lazily, and write several rows greedily.
  • Utilities – with numerous methods on the CesilUtils static class
    • These methods provide single call ways to read and write collections of rows at the expense of some efficiency
  • Type Describing – with many types describing things like “creating rows” and “getting members”
    • These will be covered in detail in a later post

The first thing you’ll notice when using Cesil is that it splits setup into two logical steps, building Options and binding Configurations. Options cover all the generally reusable parts of working with CSVs (things like separators and memory pools), while Configurations represent a binding of Options to a particular Type. Binding a type implies a fair amount of work, in particular a decent amount of reflection to determine columns. By separating Options and Configurations, Cesil allows easy and efficient reuse of the “cheap” parts of a setup while giving control over when the expensive parts happen.
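A sketch of that split in practice (Pet and Toy are hypothetical row types):

```csharp
using Cesil;

// Options are cheap and broadly reusable...
var options = Options.Default;

// ...while binding a Configuration does the per-type reflection work, once
var petConfig = Configuration.For<Pet>(options);
var toyConfig = Configuration.For<Toy>(options);

// cache and reuse the configurations; every reader and writer created
// from them shares the binding-time work

// hypothetical row types
class Pet { public string Name { get; set; } }
class Toy { public string Kind { get; set; } }
```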

You’ll also quickly notice that Cesil tends to hand you interfaces instead of base classes. This is a consequence of my belief that encouraging inheritance in end user code is generally a mistake, combined with a desire to keep implementation details hidden. Thus Cesil exposes IReader<TRow> rather than SyncReaderBase<TRow>, and nearly every exported class is sealed.

Cesil splits reading and writing into separate interfaces in a manner similar to the recent System.IO.Pipelines namespace. Coupling reading and writing would mean that certain operations would be allowed by the type system even if they couldn’t possibly work at runtime – say, writing to something that was backed by a ReadOnlySequence<T>. The BCL has some examples of this failure, like Stream, whose Remarks call out that “Depending on the underlying data source or repository, streams might support only some of these capabilities”. Effectively this means that there are methods on all Streams that cannot be safely called in all cases, and that is a poor design choice to make in 2020.

Asynchronous and synchronous operations also get separate interfaces rather than one shared one. While not as footgun-y as mixing reading and writing, mixing synchronous and asynchronous operation is fraught with potential for error – either in correctness (such as starting synchronous operations while asynchronous ones are pending completion) or performance (such as sync-over-async). That potential for error only increased with the introduction of IAsyncDisposable and await using, since the synchronous nature of IDisposable and using can now be hidden in otherwise asynchronous code. Accordingly, all methods on IAsyncReader<TRow> and IAsyncWriter<TRow> are asynchronous and all methods on IReader<TRow> and IWriter<TRow> are synchronous – the former two implement IAsyncDisposable and the latter implement IDisposable.

Other, less immediately obvious, choices made in Cesil:

  • Most types are immutable, and all immutable types implement IEquatable<T>
    • Mutability is a footgun in the highly concurrent code that is increasingly common, and so is avoided everywhere possible
  • Relatively few primitives are in the interface, enums (like EmitDefaultValue) and semantic wrappers (like ColumnIdentifier) are preferred
    • Primitive types are easy to accidentally misuse and harder to read (ie. what does “true” mean when passed to method “Foo”)
  • Comments in CSVs are read and written with specific methods (TryReadWithComment(Async) and WriteComment(Async)), by default they are ignored when read (even if supported by a set of Options)
    • Comments are relatively rare, so the basic operations shouldn’t be encumbered by having to deal with them
    • They must be different methods because the implicit type of all comments is `string` not TRow
  • Recently introduced types like ReadOnlySequence<T>, IBufferWriter<T>, PipeReader, and PipeWriter have first class support
    • Older types like TextReader and TextWriter are also supported, since these are still supported in the BCL and lots of code continues to use them

Having spelled out Cesil’s read and write interfaces leads to the second Open Question: Is there anything missing from IReader(Async) and IWriter(Async) that you’d expect to be supported in a modern .NET CSV library?

As before, I’ve opened an Issue to gather long form responses. Remember that, as part of the sustainable open source experiment I detailed in the first post of this series, any commentary from a Tier 2 GitHub Sponsor will be addressed in a future comment or post. Feedback from non-sponsors will receive equal consideration, but may not be directly addressed.

Now that we’ve covered, at a very high level, the overall interface for Cesil, the next post will dig into how reading static types works in detail.


Overthinking CSV With Cesil: CSV Isn’t A Thing

For those who read my previous post, when you read “CSV library” you likely had one of two thoughts depending on how much exposure you’ve had to CSV files – either:

  1. Dealing with CSVs is so simple, how much could there be to write about?
  2. Dealing with CSVs is insanely complicated, why would you ever do that?

My day job is running a data team, so I’m firmly in camp #2 – lots of things run on CSV, and it’s crazy complicated. Fundamentally this is because CSV isn’t a format, it’s a family of related formats. If you work with arbitrary CSV files long enough, you’ll eventually encounter one that doesn’t even use commas for separators.

Like most weird things, this is a consequence of history. The idea of CSV dates back at least 40 years, while the RFC “standardizing” it is from 2005. That’s a lot of time for different versions to flourish.

To get more concrete, CSV is a subset of the Delimiter Separated Values (DSV) family of tabular data formats – one which often (but not always!) uses commas to separate values. The most common variant is almost certainly that produced by Microsoft Excel (on Windows, in an English locale) – it uses commas to separate values, double quotes to start escaped values, double quotes to escape within escaped values, and the carriage-return line-feed character sequence to end a row.

Cesil aims to support all “reasonable” DSV formats, with defaults for the most common kind of CSV. A later post will go into exactly how flexible Cesil can be, but from a format perspective Cesil can handle:

  • Any single character value separator
  • Either no way to escape a value, or a single character starting and stopping escaped values
  • Either no way to escape a character within an escaped value, or a single character escape
  • Any of the \r, \n, or \r\n character sequences for ending a row
  • No comments, or “whole row” comments
  • Optional leading or trailing whitespace around values
  • Requiring a header row, forbidding a header row, or making a header row optional

This flexibility makes it possible to handle relatively standard things like Tab Separated Value (TSV) files, or CSV files which use an unusual character for escaping as well as kind of crazy things like CSVs using semicolons to separate values, or where values have been visually aligned with whitespace. All of this functionality, and much more, is configured with Cesil’s Options and OptionsBuilder classes.
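For example, a TSV-ish format might be configured like this sketch (the OptionsBuilder method and enum names are assumptions):

```csharp
using Cesil;

// a sketch; method and enum names are assumptions, not confirmed API
var tsv =
    Options.CreateBuilder(Options.Default)
        .WithValueSeparator('\t')             // tabs instead of commas
        .WithRowEnding(RowEnding.LineFeed)    // \n, rather than \r\n
        .WithReadHeader(ReadHeader.Never)     // forbid a header row
        .ToOptions();
```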

And now we encounter Cesil’s first Open Question: Do these options provide adequate flexibility?

I’ve opened an Issue to gather long form responses. Remember that, as part of the sustainable open source experiment I detailed in the first post of this series, any commentary from a Tier 2 GitHub Sponsor will be addressed in a future comment or post. Feedback from non-sponsors will receive equal consideration, but may not be directly addressed.

Now that I’ve covered the formats Cesil can handle, in the next post I will cover the whats and whys of the interface it exposes.


Overthinking CSV With Cesil: An Introduction

About a year ago (how time flies) I decided to spin up a new personal project to get familiar with all the new goodies in C# 8 and .NET Core 3. I happened to be dealing with some frustrating CSV issues at the time, so the project was a CSV library.

Once I got into the meat of the project, I started really overthinking things. The end result was Cesil – a pre-release package is available on Nuget, source is on GitHub, and it’s got reference documentation and a prose wiki. It’s released under the MIT license.

When I say I was overthinking things, I mean that rather than build a toy just for my own edification I ended up trying to do The Right Thing™ for a .NET library released in 2020. This, at least 14 part, blog series will cover exactly what that entailed, but in short I committed to:

  • Async as a first class citizen
  • Maximum consumer flexibility
  • Extensive documentation
  • Comprehensive test coverage
  • Adopting C# 8 features
  • Modern patterns and conventions
  • Efficiency, especially in terms of allocations

Interpretations of each of those points can be a matter of opinion, and I’m not going to claim to have 100% correct opinions. I’ve attempted to record both the things I consider opinions and the open questions I still have, and I’ll expound upon both as this series continues.

My hope is that Cesil is easy to use, hard to misuse, handles the common cases out of the box, and can be configured to handle almost anything you might want to do with CSV. I intend to respond to feedback and make changes as needed over the course of this series to make it more likely those hopes are realized.

A final bit of overthinking on the whole project has been around sustainable open source. There’s been a fair amount of discussion on the subject, the gist of which is that loads of people and companies benefit from volunteers doing skilled work without compensation – and that is an unsustainable practice. As a small experiment in line with these thoughts, I’ve set up GitHub Sponsors for Cesil with a few low commitment tiers. I’ll both be using the tiers to prioritize responding to some feedback, and reporting on the results of this experiment towards the end of the blog series.

Now with the introduction out of the way, I’m ready to dive into technical bits in the next post.


Adding Static Code Analysis to Stack Overflow

As of September 23rd 2019 we’re applying static analysis to some of the code behind public Stack Overflow, Stack Overflow for Teams, and Stack Overflow Enterprise in order to pre-emptively find and eliminate certain kinds of vulnerabilities. How we accomplished this is an interesting story, and also illustrative of advancements in .NET’s open source community.

But first…

What did we have before static analysis?

The Stack Overflow codebase has been under continuous development for around a decade, starting all the way back on ASP.NET MVC Preview 2. As .NET has advanced we’ve adopted tools that encourage safe practices like Razor (which defaults to encoding strings, helping protect against cross site scripting vulnerabilities). We’ve also created new tools that encourage doing things the Right Way™, like Dapper which handles parameterizing SQL automatically while still being an incredibly performant (lite-)ORM.

An incomplete, but illustrative, list of default-safe patterns in our codebase:

  • Automated SQL parameterization with Dapper
  • Default encoding strings in views with Razor
  • Requiring cross site request forgery (XSRF) tokens for non-idempotent (ie. POST, PUT, DELETE, etc.) routes by default
  • HMACs with default expirations and common validation code
  • Adopting TypeScript, an ongoing process, which increases our confidence around shipping correct JavaScript
  • Private data, for Teams and Enterprise, is on separate infrastructure with separate access controls

In essence we were safe, at least in theory, from most classes of injection and cross site scripting attacks.

So, …

What did static analysis give us?

In large part, confidence that we were consistently following our pre-established best practices. Even though our engineers are talented and our tooling is easy to use, we’ve had dozens of people working on Stack Overflow for 10+ years – inevitably some mistakes slipped into the codebase. Accordingly most fixes were just moving to doing something “the right way,” and pretty minor. Things like “use our route registration attribute, instead of [HttpPost]” or “remove old uses of SHA1, and switch to SHA256”.

The more “exciting” fixes required introducing new patterns, and updating old code to use them. While we had no evidence that any of these were exploited, or even exploitable in practice, we felt it was best to err on the side of caution and address them anyway. We added three new patterns as part of adopting static code analysis:

  1. We replaced uses of System.Random with an equivalent interface backed by System.Security.Cryptography.RandomNumberGenerator (a sketch follows this list).
    1. It is very hard to prove that a predictable random number is or isn’t safe, so we standardized on always being hard to predict.
  2. We now default to forbidding HTTP redirects to domains we do not control, requiring all exceptions be explicitly documented.
    1. The concern here is open redirects, which can be used for phishing or other malicious purposes.
    2. Most of our redirects were already appropriately validating this, but the checks were scattered across the code base. There were a few missing or buggy checks, but we found no evidence of them being exploited.
  3. We strengthened XSRF checks to account for cases where users move between unauthenticated and authenticated states.
    1. Our XSRF checks previously assumed there was a single token tied to a user’s identity. Since this changes during authentication, some of our code suppressed this check and relied on other validation (completing an OAuth flow, for example).
    2. Even though all cases did have some kind of XSRF prevention, having any opt-outs of our default XSRF checking code is risky – so we decided to improve our checks to handle this case. Our fix was to allow two tokens to be acceptable, briefly, on certain routes.
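For the first of those, the core idea – and this is a minimal sketch, not our actual code – looks like:

```csharp
using System;
using System.Security.Cryptography;

// a minimal sketch of wrapping RandomNumberGenerator behind a
// System.Random-like interface; not our actual implementation
public interface IRandom
{
    int Next(int minInclusive, int maxExclusive);
}

public sealed class CryptoRandom : IRandom
{
    public int Next(int minInclusive, int maxExclusive)
    {
        if (minInclusive >= maxExclusive) throw new ArgumentOutOfRangeException(nameof(maxExclusive));

        // draw 4 random bytes and reduce them into the requested range; fine
        // for illustration, though a rejection loop would avoid modulo bias
        Span<byte> buffer = stackalloc byte[4];
        RandomNumberGenerator.Fill(buffer);

        var value = BitConverter.ToUInt32(buffer);
        var range = (uint)(maxExclusive - minInclusive);

        return (int)(value % range) + minInclusive;
    }
}
```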

Our checks run on every pull request for Stack Overflow, and additionally (and explicitly) on every Enterprise build – meaning we aren’t just confident that we’re following our best practices today but we’re confident we will keep following them in the future.

In terms of Open Web Application Security Project (OWASP) lists, we gained automatic detection of:

That wraps up what we found and fixed, but…

How did we add static code analysis?

This is boring because all we did was write a config file and add a PackageReference to SecurityCodeScan.
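Concretely, the project file change amounts to something like this (the version number is illustrative):

```xml
<!-- in the .csproj; the version shown is illustrative -->
<ItemGroup>
  <PackageReference Include="SecurityCodeScan" Version="3.*" PrivateAssets="all" />
</ItemGroup>
```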

That’s it – Visual Studio will pick it up as an analyzer (so you get squigglies) and the C# compiler will do the same so you get warnings or errors (we treat all warnings as errors).

Not real code, ’cause by the time I thought to take a screenshot we’d already fixed everything.

Far more interesting is all the open source stuff that made this possible:

  • In 2014 Microsoft open sourced Roslyn, their C# and VB.NET compiler
  • Visual Studio 2015 ships with support for Roslyn analyzers
  • The authors of Security Code Scan start work in 2016
  • I contribute some minor changes to accommodate Stack Overflow peculiarities in 2019

If you’d told me 6 years ago that we’d be able to add any sort of code analysis to the Stack Overflow solution: trivially, for free, and in a way that contributes back to the greater developer community – I wouldn’t have believed you. It’s great to see “the new Microsoft’s” behavior benefit us so directly, but it’s even greater to see what the OSS community has built because of it.

We’ve only just shipped this, which begs the question…

What’s next with static code analysis?

Security is an ongoing process, not a bit you flip or a feature you add. Accordingly there will always be more to do and places we want to make improvements, and static code analysis is no different.

As I alluded to at the start, we’re only analyzing some of the code behind Stack Overflow. More precisely we’re not analyzing views, or tracing through inter-procedural calls – and analyzing both is an obvious next step.

We’ll be able to start analyzing views once our migration to ASP.NET Core is complete. Pre-Core Razor view compilation doesn’t give us an easy way to add any analyzers, but that should be trivial once we’re upgraded. Razor’s default behavior gives us some confidence around injection attacks, and views usually aren’t doing anything scary – but it will be nice to have stronger guarantees of correctness in the future.

Not tracing through inter-procedural calls is a bit more complicated. Technically this is a limitation of Security Code Scan; there’s an issue for it. That we can’t analyze views reduces the value of inter-procedural analysis today, since we almost always pass user-provided data into views. For now, we’re comfortable focusing on our controller action methods since basically all user-provided data passes through them before going onto views or other inter-procedural calls.

The beauty of open source is that when we do come back and do these next steps (and any other quality of life changes), we’ll be making them available to the community so everyone benefits. It’s a wonderful thing to be able to benefit ourselves, our customers, and .NET developers everywhere – all at the same time.


Sabbatical Log: Retrospective

I took a sabbatical in November and set out to learn some game development. It’s been a month-and-change since I wrapped up, and now I’m looking back to reflect on my endeavor.

Sabbatical?

My auto-responder for the month I was out.

Stack Overflow gives employees 4 weeks of paid vacation on their 5th anniversary, and an additional week each subsequent year — up to a maximum of 8 accumulated weeks. It’s the same as regular vacation in terms of scheduling, pay, and whatnot except that you must take a minimum of a month off at a time. It was a big year for sabbaticals at Stack Overflow; 13 people took them in 2018.

Before taking the time off, I knew I’d need a project to keep me entertained, and I decided I’d play with a kind of coding I’d never really done before — game development.

What did I do?

I built the very beginning of an A Link To The Past pseudo-clone, working from first principles. Besides actually blitting pixels onto the screen, I built everything: loading assets, collision detection, animations, room transitions, etc. In the end I had a game with one real room, a working enemy, a few interactable objects (doors, bushes, walls, and pits), support for room transitions, and a lot of half-way decent infrastructure.

What am I proud of?

I feel like I actually accomplished a lot, given how little I knew going in — I was just an enthusiastic reader on the subject, with a decent command of C#. More concretely, and with some distance from the original authorship, these are the parts I’m most proud of:

  • Hot reloading of assets
    • Sprites, animations, hitmaps, and room backgrounds can all be edited while the game is running and they’ll be automatically reloaded. This made me way more productive when it came to “creating” assets (i.e. chopping up LTTP screenshots and stitching them together).
    • For a real game, I would have built a bunch of tools for creating and previewing things, but hot reloads felt like a clever and cheap alternative given my limited time.
    • I did sprites on the 2nd day (which implicitly handled hit maps), rooms on the 7th, and animations on the 8th.
  • A focus on DEBUG performance
    • I spent most of my time in DEBUG builds since I was, well, building and debugging. Accordingly, I felt I had to keep DEBUG performance decent enough to actually run the full game.
    • Since wrapping up, I’ve come across discussions suggesting that some actual game devs share this sentiment, which is a nice bit of validation. Of course, they may not be representative of the game industry as a whole, but at least I’m not alone in my craziness.
    • I spent time explicitly working on DEBUG performance on the 14th and 15th.
  • A decent separation of concerns
    • I had never worked with the Entity-Component-System pattern, but found that it did a really good job of keeping code separated in a sensible way.
    • For example, both the knights and the player can cut bushes… but neither of those entities have any knowledge of the other. Generally, things “just worked” without nasty code.
    • I also kept rendering unaware of the rest of the game, which feels like an accomplishment.
  • Useful debug overlays
    • I implemented overlays for sprite bounds, collision bounds, collision polygons, sub-system timings, and render timings (including FPS). These were crazy useful for debugging, and frankly I should have worked on them even earlier than I did.
    • I implemented these overlays on the 12th, 13th, 14th, and 28th.

What didn’t I get to?

“The rest of the game” is the obvious answer, but I knew going in that there was no way I could build everything in a month. A few of the things that I had at least made notes to start on were:

  • A DeathSystem to handle death animations and dropping items when something dies
    • Right now the sword knight, as the only enemy, handles its own death animations, which is less than ideal.
  • A source of random numbers
    • Current code just uses a System.Random instance, but I’d like to implement a custom provider for a few reasons:
      • Guarantee of stability, so no matter where the code is running random numbers will be the same.
      • In the same vein, parallelism requires some changes so that random numbers are consistent in a frame even if the actual code runs at different times.
      • There’s a lot of fun to be had with manipulating RNGs in old games (check out the Luck Manipulation TAS videos), so a custom seeding paired with a simple algorithm can actually be a plus.
  • Recording gameplay videos
    • I kept the rendering code and logic code separate, in part to enable this, but never got around to it.
    • I captured all the GIFs in my posts with ScreenToGif.
  • Some system for positioning frames in an animation and/or multi-part entity
    • In a number of systems there’s a “glue” function that repositions various entities based on current animation frames, which is not great.
    • These relative positions shouldn’t be in code as constants, they should be loaded up as part of an asset.
    • I mulled over the problem for a while, but couldn’t come up with a nice generic solution… maybe there’s something in the literature I’m unaware of, or maybe I’ve structured my entities and animations poorly.
  • Automated rebuilding of asset/animation/etc. enums
    • I mentioned this on the 16th, and it only got worse as time went on.
    • Besides lack of time, my biggest worry was that I’d build something that wasn’t portable outside of my machine — always a risk when tweaking builds.

What do I regret?

  • Using a custom fixed point implementation for collision detection
    • It was an interesting exercise, and I’m not convinced that it couldn’t be done well, but I spent a lot of time on it for little gain.
    • I also had to spend a lot of time optimizing this code since, especially in DEBUG builds, it was thousands of times slower than just using floats.
  • I didn’t start on debug overlays soon enough
    • I assumed that testing would cover most of the same ground, which turned out to be false.
    • A few days were lost because I hadn’t anticipated the need for debug overlays, making debugging a lot harder than it should have been.

Overall, I have few regrets, which feels indicative of a month well spent. That said, if and when I continue working on this codebase, I’m sure I’ll find more things to regret.

Where did I spend my time?

  1. Collision detection and related code
    1. I spent time on this during the 3rd-7th, 9th-11th, 13th, & 16th.
    2. This totals about 33% of my time over the entire sabbatical.
  2. Asset creation and manipulation
    1. This isn’t well accounted for in the actual posts, but most days I spent an hour or so doing something with assets. Oftentimes this was just scaling and making minor edits, but sometimes it was painstaking screenshot comparisons to figure out relative positioning between two separate assets.
    2. As a rough estimate, I spent about 12% of my time on this task.
  3. Debugging tools
    1. Primarily overlays, as documented in the 11th-13th.
    2. A lot of ToString() implementations to make life easier in the debugger.
    3. This was probably also about 8% of my time.

Everything else took the remaining 40-50% of my time, which feels about right. Since I was mostly working on building up the infrastructure of a proper game, I don’t think spending half of my time on plumbing was a problem. Now, if I had spent a year or two on this project, that division of time would be problematic — I’d expect more time to be spent on the actual “game” part as development progressed.

Will I keep going?

I’d like to, which is a weaselly answer.

The value of uninterrupted time to devote to development is hard to overstate, and if I continue I won’t have that. So I couldn’t expect to be nearly as productive, which makes the whole thing less attractive — I’m one of those people who derives a lot of enjoyment from making tangible progress.

All that said, I’m going to try and spend a weekend every now and then plugging away at the LearnMeAThing codebase. I may even keep notes and blog about it occasionally.

Where’s the code?

I can’t exactly publish the whole project, since it’s full of Link To The Past derived assets. I’m also not all that interested in really open sourcing it – because it’s a learning project for me, I wouldn’t accept pull requests or handle issues. That said, I like the idea of folks learning from my examples-and-or-mistakes: so I put together a GPLv3 one-time dump of just the code.


Sabbatical Log: December 1st and 2nd

This blog series is running about a week behind my actual work, giving me time to clean things up. The date the actual work was done is in the title; horizontal lines indicate when I stopped writing and started coding.

Family was in town on the first, so I got nothing done. But I did get a bonus day on the 2nd!

---
Logically, the knight knows how to chase the player now… there are some issues, however.

Just have to spend some time debugging here, then I’ll get to the user actually taking damage and recoiling.

---
The knight can now stab players, and it knocks the player away. As with other damage-y things, the player system is not actually aware of the knight or its sword – it is just informed that it was struck by something with the DealsDamage component attached to it.

Last task for the sabbatical, make the knight die when it’s hit by the player’s sword.

---
The sword knight can die!

And with that, my sabbatical is ended. I intend to write up a retrospective at a later date, but all in all I’m pretty happy with my progress and I learned a lot. Definitely ready to get back to work and see all my Stack Overflow people again though, a month of not working is quite enough.

Continue onto the Retrospective