ORMBattle.NET: The ORM tool shootout


ORMBattle.NET Test Suite


Goals of this web site

This web site is devoted to direct ORM comparison. We compare the quality of essential features of well-known ORM products for the .NET Framework. This web site might help you to:

  • Compare the performance of your own solution (based on a particular ORM listed here) with the peak performance that can be reached on that ORM, and thus, likely, improve it.
  • Choose an ORM for your next project, taking its performance and LINQ implementation quality into account.

We do not provide something like an "overall test score", because the multiplier (importance factor) of each particular test result in such a score always depends on the particular use case. We provide all the results "as is", allowing you to decide what's important. But of course, with nice charts and colorful score tables.

If we compare our test suite to car tests, we measure:

  • Motor horsepower, torque, braking distance and maximum speed. This indirectly exposes the internal quality and complexity of the engine.
  • Conformance to the EURO-5 standard. Again, this indirectly exposes the complexity and newness of the underlying technologies.

[Image: engines. Which one is better?]

If you're interested in the details, let's go further.

Why are fully feature-based comparisons bad?

And why do we compare just the essential features? There are lots of ORM tools for .NET, and their features vary quite significantly. Moreover, even the same feature can be implemented in completely different ways in two different products. There are relatively old frameworks (e.g. NHibernate) integrating tons of features, as well as relatively new ones offering a subset of them, but exposed or implemented in a newer fashion. So how do we compare all this? Must we score each feature? If so, what multipliers must be assigned to each of them?

The necessity of a particular feature depends on a particular use case, so we think we simply can't rank them fairly. The more general conclusion is: it's impossible to fairly decide what should be called a feature, and thus included in the comparison, and what should be omitted. Let's look at a few examples of features:

  • Authorization support
  • Access control system
  • Dependency injection mechanism

It looks like they aren't related to ORM at all, although some products incorporate them. Can we consider them as features for our comparison? If so, are they more important than, e.g., "stored procedures support", or not? Obviously, there are no exact answers to these questions. It all depends on the particular use case.

Another issue with features is their actual implementation. As mentioned, it may vary dramatically: extremely cool here versus just a pale reflection of user expectations there. Must we test everything to score the implementation, or just believe the information provided by the vendor? Even worse, have you noticed we just mentioned "user expectations"? Must we analyze them too? Again, that's too complex and quite dependent on the particular case.

And finally, features are frequently just marketing. Feature matrices published on vendors' web sites tend to get fatter and fatter, and it's really difficult to find out what a particular feature actually means there, and which of them are really necessary in your particular case. So marketing makes this even more complex.

Try to extrapolate the same to automobiles again - you'll immediately understand what this means.

So what's the solution?

We think it's a good idea to compare the quality of essential features. Essential features are the features provided by any ORM. What does quality mean here? It is normally associated with:

  • Compatibility. Easy to measure, if there are commonly accepted standards a particular tool claims to support.
  • Performance. Easy to measure.
  • Usability. Difficult to measure: it is very subjective.
  • Implementation quality. Various code metrics, internal complexity, conformance with .NET coding style and so on. Can be measured, but actually subjective as well.

As you will find, we decided to take Compatibility and Performance from this list and measure them for a particular set of features:

  • Compatibility is measured for the LINQ implementation. Currently LINQ is the only widely adopted standard related to ORM on .NET, and it is supported by all leading ORM tools. Moreover, it was pretty easy to write a single test sequence for all ORM tools supporting it. Currently we compare only those ORM tools that support LINQ - we think LINQ will stay intact for a very long time, and thus its support is a kind of "must have" feature for any ORM tool now.
  • Performance is measured for basic CRUD and query operations: (C) create an entity, (R) read (fetch) an entity by its key, (U) update an entity, (D) delete an entity; in addition, we measure the performance of a LINQ query returning a single instance by its key - i.e. query performance. A minimal measurement sketch follows this list.
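
To make these numbers concrete, here is a minimal sketch of how such an operations-per-second figure can be obtained. It is not the actual ORMBattle.NET harness: the ISession interface and the Simplest entity below are illustrative stand-ins (each ORM in the suite is driven through its own native API), but the measurement idea is the same.

```csharp
using System;
using System.Diagnostics;

// Hypothetical stand-in for whatever session / unit-of-work API a given ORM exposes.
public interface ISession : IDisposable
{
    void Save(object entity);    // (C) create
    T Get<T>(long key);          // (R) read by key
    void Delete(object entity);  // (D) delete
    void Flush();                // push pending changes to the database
}

// The tiny instance shape used by the performance tests: an Int64 key and an Int64 value.
public sealed class Simplest
{
    public long Id { get; set; }
    public long Value { get; set; }
}

public static class CrudBenchmark
{
    // Returns the operations-per-second rate for the (C)reate step.
    public static double MeasureCreate(Func<ISession> openSession, int count)
    {
        using (var session = openSession())
        {
            var watch = Stopwatch.StartNew();
            for (long i = 0; i < count; i++)
                session.Save(new Simplest { Id = i, Value = i });
            session.Flush();   // make sure the inserts actually reach the database
            watch.Stop();
            return count / watch.Elapsed.TotalSeconds;
        }
    }
}
```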

As you can see, the set of functions we test is rather limited. The essential question is: can you judge the overall quality of an ORM tool by the results of our tests? We hope so:

  • Our performance tests show whether attention was really paid to the basic operations we test. A 20-30% difference from the leaders likely means nothing, but e.g. a 10x difference definitely means a lot. Note that color doesn't reflect the percentage of difference; it just helps to identify the leaders and losers on a particular test. There are some red cells showing a 50% difference from the leader, and in many cases that is absolutely okay (e.g. with non-compiled queries - in general, 99% of executed queries must be compiled in a real-life application; see the compiled query sketch after this list). But as mentioned, in some cases the difference is much more significant. So look at the numbers and charts to see what the red and green results really mean.
  • LINQ implementation tests show how much attention a particular vendor paid to likely the toughest problem here: LINQ to SQL translation. It's well known that LINQ is really simple from the outside, but quite complex inside. How many products on .NET incorporate a full-featured LINQ translator to some other query language (i.e. one that "understands" most of the IQueryable extension methods, not just Where plus a few others)? You can count them all on the fingers of one hand. Furthermore, there is no standard LINQ implementation path (and, likely, one won't appear in the nearest year or two) - each vendor solves this task relying mainly on its own approach. That's why a number showing how deep a particular implementation goes seems a very good scoremark.
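
To illustrate the compiled-query point above, here is a sketch using LINQ to SQL's CompiledQuery, chosen only because that API is public and well known; other ORMs expose their own compiled-query mechanisms. The Simplest entity repeats the illustrative shape from the earlier sketch, this time with LINQ to SQL mapping attributes, and the table and member names are assumptions rather than part of the ORMBattle.NET schema.

```csharp
using System;
using System.Data.Linq;           // LINQ to SQL, used here only as an example
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Simplest")]
public class Simplest
{
    [Column(IsPrimaryKey = true)] public long Id;
    [Column] public long Value;
}

public class SimplestContext : DataContext
{
    public SimplestContext(string connectionString) : base(connectionString) { }
    public Table<Simplest> Simplests { get { return GetTable<Simplest>(); } }
}

public static class Queries
{
    // The expression tree is translated to SQL once, when Compile runs;
    // every later call just binds the key parameter and reuses the cached translation.
    public static readonly Func<SimplestContext, long, Simplest> ById =
        CompiledQuery.Compile((SimplestContext db, long id) =>
            db.Simplests.Single(s => s.Id == id));
}

// Usage: var entity = Queries.ById(context, 42);
```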

Tests

Currently there are 2 test sequences:

  • LINQ implementation tests score the quality of the LINQ implementation in a particular ORM. There are 100 tests covering a wide range of LINQ queries, from very basic to very complex ones. These tests are performed on the Northwind database. A query of the kind they exercise is sketched after this list.
  • Performance tests score the performance of basic operations, such as instance creation or key resolution. The result of each test here is the operations-per-second rate for a particular basic operation. The instances we use are tiny: there is a sealed type having an Int64 key field and an Int64 value field. We use such tiny instances both to maximize the relative overhead of ORM usage and to show the maximum possible performance. Moreover, we intentionally use a clustered primary index plus sequential reads in this test to maximize throughput. The idea behind each of our performance tests is to show the peak performance that can be reached for a similar operation on a particular ORM.
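
To give a feel for what the LINQ implementation tests throw at a provider, here is the kind of query they exercise. The query below is not taken from the actual test suite, and the Customer and Order classes are deliberately minimal stand-ins for their Northwind counterparts; the point is that a translator which only understands Where has no chance against grouping, correlated subqueries, aggregates and projections combined in one statement.

```csharp
using System;
using System.Linq;

// Minimal stand-ins for two Northwind entities; real mappings vary per ORM.
public class Customer
{
    public string CustomerID;
    public string Country;
}

public class Order
{
    public string CustomerID;
    public decimal Freight;
}

public static class LinqSample
{
    public static void Run(IQueryable<Customer> customers, IQueryable<Order> orders)
    {
        // Grouping, a correlated subquery, an aggregate and an anonymous-type
        // projection in a single statement - well beyond "Where plus a few others".
        var freightByCountry =
            from c in customers
            group c by c.Country into g
            orderby g.Key
            select new
            {
                Country = g.Key,
                Customers = g.Count(),
                TotalFreight = orders
                    .Where(o => g.Select(x => x.CustomerID).Contains(o.CustomerID))
                    .Sum(o => o.Freight)
            };

        foreach (var row in freightByCountry)
            Console.WriteLine("{0}: {1} customers, {2} total freight",
                row.Country, row.Customers, row.TotalFreight);
    }
}
```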

See also

  • Precautions we take to ensure we measure actual execution times, rather than something else (e.g. JIT compilation); one such precaution is sketched below.
  • Equipment on which the current results were produced.
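
The full list of precautions lives on the page linked above. As a tiny illustration of just one of them, a warm-up pass like the following keeps one-time costs such as JIT compilation out of the reported figure; the helper below is illustrative, not the actual harness code.

```csharp
using System;
using System.Diagnostics;

public static class Measurement
{
    // Execute the action once before timing it, so that JIT compilation,
    // connection opening and other one-time costs are excluded from the result.
    public static double OpsPerSecond(Action action, int iterations)
    {
        action();                          // warm-up pass, not measured
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        watch.Stop();
        return iterations / watch.Elapsed.TotalSeconds;
    }
}
```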

Results

See the scorecard for details.

Last Updated on Tuesday, 19 January 2010 18:20  
