What are the deficiencies of the built-in BinaryFormatter-based .NET serialization?

If you mean BinaryFormatter, it:

  • is based on fields, so it is very version-intolerant; change private implementation details and it breaks (even just converting a field to an automatically implemented property)
  • isn't cross-compatible with other platforms
  • isn't very friendly towards new fields
  • is assembly-specific (metadata is burnt in)
  • is MS/.NET specific (and possibly .NET version specific)
  • isn't obfuscation-safe
  • isn't especially fast, and the output isn't small
  • doesn't work on light frameworks (Compact Framework? / Silverlight)
  • has a depressing habit of pulling in things you didn't expect (usually via events)
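A minimal sketch of the first and last points (the type here is hypothetical): BinaryFormatter records private field names in the payload, so refactoring a field into an auto-implemented property renames the backing field and breaks old payloads; and an event's backing delegate is a field too, so subscribers get dragged into the serialized graph unless you opt out.

```csharp
using System;

// Hypothetical v1 type: the field name "name" is burnt into the payload.
[Serializable]
class Person
{
    private string name;                  // this identifier is in the payload
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    // Refactoring to "public string Name { get; set; }" renames the backing
    // field to "<Name>k__BackingField", so v1 payloads no longer deserialize.

    // The delegate behind an event is a serializable field, pulling in every
    // subscriber -- unless the backing field is explicitly excluded:
    [field: NonSerialized]
    public event EventHandler Changed;
}
```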

I've spent a lot of time in this area, including writing a (free) implementation of Google's "protocol buffers" serialization API for .NET: protobuf-net.

This is:

  • faster, with smaller output
  • cross-compatible with other implementations
  • extensible
  • contract-based
  • obfuscation safe
  • assembly independent
  • based on an open, documented standard
  • works on all versions of .NET (caveat: not tested on Micro Framework)
  • has hooks to plug into ISerializable (for remoting etc) and WCF
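For example (a sketch assuming the protobuf-net package is referenced): members are identified by stable numeric tags rather than field names or assembly metadata, which is what makes the output contract-based and assembly-independent.

```csharp
using System.IO;
using ProtoBuf;

// Contract-based: only members with explicit tag numbers are serialized.
[ProtoContract]
class Person
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        var person = new Person { Id = 1, Name = "Fred" };
        using (var ms = new MemoryStream())
        {
            // Only the tag numbers and values go on the wire --
            // no type names, no assembly metadata.
            Serializer.Serialize(ms, person);
            ms.Position = 0;
            Person clone = Serializer.Deserialize<Person>(ms);
        }
    }
}
```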

Given any random object, it's very difficult to prove whether it really is serializable.


Versioning of data is handled through attributes (OptionalField and friends). If you aren't worried about versioning, this is no problem. If you are, it is a huge problem.

The trouble with the attribute scheme is that it works pretty slickly for many trivial cases (such as adding a new property), but it breaks down rapidly when you try to do something like replace two enum values with a single new one (or any number of other common scenarios that come with long-lived persistent data).
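To illustrate with a hypothetical contract: adding a member is just a fresh tag number, but collapsing two enum values into a new one cannot be expressed with attributes alone, because old payloads still carry the retired numbers and every reader must map them by hand.

```csharp
using ProtoBuf;

// Hypothetical v1 contract: the easy case -- adding Email with a fresh
// tag in v2 -- is trivially version-tolerant.
[ProtoContract]
class Account
{
    [ProtoMember(1)] public string Owner { get; set; }
    [ProtoMember(2)] public Status State { get; set; }
    // v2 addition, harmless to old readers and writers:
    // [ProtoMember(3)] public string Email { get; set; }
}

enum Status
{
    Active    = 1,
    Suspended = 2,   // v2 wants to merge Suspended and Closed...
    Closed    = 3,
    // Inactive = 4  // ...into this; but old payloads still carry 2 and 3,
                     // so attributes can't express "2 and 3 now mean 4" --
                     // that mapping lives in hand-written migration code.
}
```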

I could go into lots of details describing the troubles. In the end, writing your own serializer is pretty darn easy if you need to...