Archive for category design patterns

The dreaded “Save(Foo bar)”

For the last year or so my biggest antagonist has been “Save(Foo bar)”. There are usually a lot of things going wrong when that method turns up. I’m not saying that saving stuff to persistent storage is a bad thing, but I’m arguing that the only place it should exist is in the persistence/data layer, actually saving data. Don’t put it in your services, don’t put it in your business logic, and for crying out loud: do not base entire behavior around it.

The source of the ill behavior

When developing software, a user will quite often tell you that she wants a button that saves the current changes she has made to her work. As good developers we implement this button and make the user happy. After all, their happiness is our salary. In a pure data-entry application this is all good, but as soon as we add just a little bit of behavior we need to start pushing the actual saving far, far away. Consider this screen:

In this setting the Save button makes perfect sense. The user changes a couple of things and then commits those changes to the system. Now start to think about more than just storing the values in a database; think about adding behavior to the changes.

Do not throw away the user’s intent!

When a user makes a change to any of these fields, they do so with an intention. Changing the address might mean that the receiving end of an order has changed. Changing the logistics provider might mean that a new one needs to be notified. With a save method, the only way for our business logic to know what to do is to figure it out:

public void Save(Order changedOrder)
{
    if (changedOrder.Address != existingOrder.Address)
        OrderAddressChanged(existingOrder, changedOrder);

    if (changedOrder.Status != existingOrder.Status)
        OrderStatusChanged(existingOrder, changedOrder);

    if (changedOrder.LogisticServiceProvider != existingOrder.LogisticServiceProvider)
        LogisticServiceProviderChanged(existingOrder, changedOrder);
}

And even worse:

public void OrderStatusChanged(Order existingOrder, Order changedOrder)
{
    if (changedOrder.Status == Status.Cancelled)
        ......

Basically, what we’ve done is throw away the user’s intent and any capture of the actual work the user has done. I see this happening a lot in service interfaces (WCF service contracts with a single Save method), in “Managers” (let’s not even go there), and in local services.

The code this will generate a couple of releases down the road will smell in so many ways it’s not even funny to joke about. But to get you started, read up on Open/Closed (a SOLID principle).

 

Enough with the bashing, what’s the solution?

The idea that a user can commit their work with a single push of a button is a good one. But that doesn’t mean that the save needs to be anything more than a button. The actual work under the covers should reflect what the user really wants to happen. Changing the service provider on an order that’s already packed should probably issue a message to the old provider not to pick the package up, and a message to the new one to actually do so. This kind of behavior is best captured with commands.

Let’s revisit the order form:

In this scenario every red box is actually an intended command from the user. Identified as:

ChangeOrderAddress

ChangeOrderLogisticServiceProvider

ChangeOrderStatus (or actually, CancelOrder, ShipOrder, etc)

Implementing this could be done either using the command pattern or with methods on the order class. “It all depends” on your scenario and preferences.
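To make this concrete, here is a minimal command-style sketch (hypothetical types and members; the original post doesn’t show an implementation):

// Hypothetical command object capturing the user's intent explicitly,
// instead of having a Save method diff the fields after the fact.
public class ChangeOrderLogisticServiceProvider
{
    private readonly Order _order;
    private readonly LogisticServiceProvider _newProvider;

    public ChangeOrderLogisticServiceProvider(Order order, LogisticServiceProvider newProvider)
    {
        _order = order;
        _newProvider = newProvider;
    }

    public void Execute()
    {
        // The intent is known here, so the behavior can live here too:
        // notify the old provider, notify the new one, update the order.
        _order.ChangeLogisticServiceProvider(_newProvider);
    }
}

When the user hits save, the UI simply executes the commands matching the changes she actually made, in order.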

So what am I really suggesting?

Make sure that your code reflects the intention of the user, not the technical details of how the data is persisted. You will save a lot of pain, grief, and hard work by just identifying what the user intends, capturing that in commands or something similar, and executing them in order when the user hits save. And most importantly: SAVE IS A METHOD ON THE PERSISTENCE LAYER, NOT A BUSINESS FUNCTION.


The repository pattern explained and implemented

The pattern documented and named “Repository” is one of the most misunderstood and misused. In this post we’ll implement the pattern in C# to achieve this simple line of code:

var premiumCustomers = customers.Matching(new PremiumCustomersFilter());

as well as discuss the origins of the pattern and the original definitions to clear out some of the misrepresentations.

The formal description

My first contact with the repository pattern was through the study of Domain Driven Design. In his book [DDD] (p. 147), Eric Evans simply states that;

Associations allow us to find an object based on its relationships to another. But we must have a starting point for a traversal to an ENTITY or VALUE in the middle of its life cycle.

My interpretation of the section on repositories is simple: the domain does not care where objects are stored in the middle of their life cycle, but we still need a place in our code to get (and often store) them.

In Patterns of Enterprise Application Architecture [PoEAA] (p. 322), repositories are described as:

Mediates between the domain and the data mapping layers using a collection-like interface for accessing domain objects.

Examining both chapters, we quickly come to understand that the original idea of the repository is to honor and gain the benefits of Separation of Concerns and to hide infrastructure plumbing.

With these principles and descriptions this simple rule emerges:

Repositories are the single point where we hand off and fetch objects. It is also the boundary where communication with the storage starts and ends.

A rule that should guide any and all attempts at following the Repository pattern.

Implementations

There are several ways of implementing a repository. Some honor the original definitions more than others (and some are just plain confusing). A classic implementation looks a lot like a DAL class:

public class CustomerRepository
{
    public IEnumerable<Customer> FindCustomersByCountry(string country) {…}
}

Using this implementation strategy, the result is often several repository classes with a lot of methods and code duplicated across them. It misses the beauty and simplicity of the original definitions. Both [PoEAA] and [DDD] use a form of the Specification pattern (implemented as Query Object in PoEAA) and ask the repository for objects that match it, instead of using named methods.

In code this gives the effect of having several small classes instead of a couple of huge ones. Here is a typical example:

public IEnumerable<Customer> Matches(IQuery query) { …. }
var premiumCustomers = customers.Matches(new PremiumCustomersFilter());

The above code is a great improvement over the named methods strategy. But let’s take it a little further.

The Generic Repository

The key to a generic repository is to think about what can be shared across different entities and what needs to be separate. Usually the initialization of infrastructure and the commands to materialize are sharable, while the queries are specific. Let’s create a simple implementation for Entity Framework:

public class Repository<T> : IRepository<T> 
    where T : class
{
    protected ObjectContext Context;
    protected ObjectSet<T> QueryBase;

    public Repository(ObjectContext context)
    {
        Context = context;
        QueryBase = context.CreateObjectSet<T>();
    }

    public IEnumerable<T> Matching(ICriteria<T> criteria)
    {
        var query = criteria.BuildQueryFrom(QueryBase);
        return query.ToList();
    }

    public void Save(T entity)
    {
        QueryBase.AddObject(entity);
    }
}

Using generics in the definition of the repository allows us to reuse the basics while still being specific using the criteria. In this naïve implementation there is not much that would be shared, but add things like logging, exception handling, and validation and there are LoC to be saved here. Notice that the query is executed inside the repository and an IEnumerable<T> with the result is returned, as we expect all communication with the store to go through the repository.
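The criteria abstraction itself is never spelled out in this post; a minimal ICriteria<T>, consistent with how the repository above uses it, would presumably look like this:

public interface ICriteria<T> where T : class
{
    IQueryable<T> BuildQueryFrom(ObjectSet<T> queryBase);
}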

The query objects then implement the ICriteria interface and add any filtering needed. An example query can look like this:

public class WarehousesWithReservableQuantitiesFor : ICriteria<Warehouse>
{
    private readonly string _itemCode;
    private readonly int _minimumRemainingQuantity;

    public WarehousesWithReservableQuantitiesFor(string itemCode, 
                                                 int minimumRemainingQuantity)
    {
        _itemCode = itemCode;
        _minimumRemainingQuantity = minimumRemainingQuantity;
    }

    IQueryable<Warehouse> 
        ICriteria<Warehouse>.BuildQueryFrom(ObjectSet<Warehouse> queryBase)
    {
        return (from warehouse in queryBase
                from stock in warehouse.ItemsInStock
                where stock.Item.Code == _itemCode 
                        && (stock.Quantity - stock.ReservedQuantity) 
                                > _minimumRemainingQuantity
                select warehouse)
                .AsQueryable();
    }
}

There are a couple of things to notice here. First of all, the interface is implemented explicitly; this “hides” the method from any code that isn’t, and shouldn’t be, aware that there is the possibility to create a query here. Remember: …It is also the boundary where communication with the storage starts and ends….

Another thing to note is that it only handles the query creation, not the execution of it. That is still handled by the generic repository. For me, using the above type of repository/query separation achieves several goals.

There is high reuse of the plumbing. We write it once and use it everywhere:

var customers = new Repository<Customer>(context);
var warehouses = new Repository<Warehouse>(context);

This makes it fairly quick to start working with new entities or change plumbing strategies.

Usage creates clean code that clearly communicates intent:

 

var reservables = warehouses.Matching(
        new WarehousesWithReservableQuantitiesFor(code, 100));

 

Several small classes, each with one specific purpose, instead of a couple of huge classes with loads of methods.

It might seem like a small difference, but the ability to focus on just the code for a single query on one page, and the ease of navigating to queries (especially if you use R#’s Find Type), makes this an enormous boost in maintainability.

The above example is based on Entity Framework, but I’ve successfully used the same kind of implementation with NHibernate and Linq To SQL as well.

Composing Criteria

By utilizing Decorators or similar composition patterns to build the criteria, it’s possible to compose queries for each scenario. Something like:

var premiumCustomers = customers.Matching(
        new PremiumCustomersFilter( 
                new PagedResult(page, pageSize))
);

Or:

var premiumCustomers = customers.Matching(
        new PremiumCustomersFilter(),  
        new FromCountry("SE")
);

The implementation of the above examples is outside the scope of this post and is left as an exercise to the reader for now.
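As a rough starting point for that exercise, one possible shape of a composable filter could be the following (a sketch assuming the ICriteria<T> interface above and a hypothetical IsPremium flag on Customer):

public class PremiumCustomersFilter : ICriteria<Customer>
{
    private readonly ICriteria<Customer> _inner;

    public PremiumCustomersFilter() : this(null) { }

    // Decorator-style constructor: wraps another criteria.
    public PremiumCustomersFilter(ICriteria<Customer> inner)
    {
        _inner = inner;
    }

    IQueryable<Customer> ICriteria<Customer>.BuildQueryFrom(ObjectSet<Customer> queryBase)
    {
        // Build the inner query first (if any), then add this filter on top.
        var query = _inner == null
            ? queryBase.AsQueryable()
            : _inner.BuildQueryFrom(queryBase);

        return query.Where(customer => customer.IsPremium);
    }
}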

Repositories and testing

In my experience there is little point in unit testing a repository. It exists as a bridge to communicate with the store, and therein lies its value. Trying to unit test a repository and/or its queries often turns out to test how they use the infrastructure, which has little value.

That said, you might find it useful to ensure that logging and exception handling works properly. This turns out to be a limited set of tests, especially if you follow the implementation above.

Integration tests are another story. Validating that queries and communication with the database act as expected is extremely important. How to be effective in testing against a store is another subject, which we won’t be covering here.

Making repositories available for unit testing to other parts of the system is fairly simple. As long as you honor the boundary mentioned earlier and only return well-known interfaces or entities (like the IEnumerable<T>), mocking or faking repositories will be easy and technology agnostic (e.g. using Rhino Mocks):

ProductListRepository = 
        MockRepository.GenerateMock<IRepository<ProductListRule>>();

ProductListRepository.Expect(
        repository => repository.Matching(new RuleFilter("PR1")))
        .Return(productListRules);

 

In summary

The repository pattern, in conjunction with others, is a powerful tool that lowers friction in development. Used correctly, honoring the pattern definitions, you gain a lot of flexibility, even when you have testing in the mix.

To read more about repositories I suggest picking up a copy of [PoEAA] or [DDD].

Read more patterns explained and exemplified here

 


Unity LifeTimeManager for WCF

I really love DI frameworks. One reason is that they allow me to centralize life-time management of objects and services needed by others. Most frameworks have options to control the lifetime and allow objects to be created as Singletons, ThreadStatics, Transients (a new object every time), etc.

I’m currently doing some work on a project where Unity is the preferred DI framework, and it’s hooked up to resolve all service objects for us.

The challenge

WCF has three options for instancing: PerCall, PerSession, and Single (http://msdn.microsoft.com/en-us/library/system.servicemodel.instancecontextmode.aspx). In essence this has the following effect on injected dependencies:

PerCall – All dependencies will be resolved once per message sent to the service.
PerSession – All dependencies will be resolved once per WCF service session.
Single – All dependencies will be resolved exactly once (Singleton).

Now why is this a challenge? It’s expected that the constructor parameters are resolved once per instantiation, isn’t it?

The basis of the challenge is when you want to share instances of dependencies one or more levels down. Consider this code:


                      
public class CalculationService : Contract
{
    public CalculationService(IRepository<Rule> ruleRepository, 
                              IRepository<Product> productRepository) {..}
}

This would work perfectly fine with the InstanceContextModes explained above, but what if we want to share instances of NHibernate sessions or Entity Framework contexts between repositories?
The default setting for most DI frameworks is to always resolve objects as “transients”, which means once for each object that depends on them.
This is where lifetime management comes into play, by changing how the DI framework shares instances between dependent objects.
Unity has six “sharing options” (from http://msdn.microsoft.com/en-us/library/ff660872(PandP.20).aspx):
TransientLifetimeManager – A new object per dependency
ContainerControlledLifetimeManager – One object per Unity container instance (including children)
HierarchicalLifetimeManager – One object per Unity child container
PerResolveLifetimeManager – A new object per call to Resolve; recursive dependencies will reuse objects
PerThreadLifetimeManager – One new object per thread
ExternallyControlledLifetimeManager – Moves the control outside of Unity

As you can see, our scenario is missing. We’d like to share all dependencies of some objects across a single service instance, no matter which InstanceContextMode we choose.

The Solution

For Unity there is a good extension point that can help us. Combine that with WCF’s ability to add extensions to instances, and the problem is solved.

First we extend the WCF instance context so it can hold objects created by Unity:


                      
public class WcfServiceInstanceExtension : IExtension<InstanceContext>
{
    public static WcfServiceInstanceExtension Current
    {
        get
        {
            if (OperationContext.Current == null)
                return null;

            var instanceContext = OperationContext.Current.InstanceContext;
            return GetExtensionFrom(instanceContext);
        }
    }

    public static WcfServiceInstanceExtension GetExtensionFrom(
                                              InstanceContext instanceContext)
    {
        lock (instanceContext)
        {
            var extension = instanceContext.Extensions
                                           .Find<WcfServiceInstanceExtension>();

            if (extension == null)
            {
                extension = new WcfServiceInstanceExtension();
                extension.Items.Hook(instanceContext);

                instanceContext.Extensions.Add(extension);
            }

            return extension;
        }
    }

    public InstanceItems Items = new InstanceItems();

    public void Attach(InstanceContext owner)
    { }

    public void Detach(InstanceContext owner)
    { }
}

InstanceContextExtension, which gets applied to each WCF service instance

                      

                      
public class InstanceItems
{
    public object Find(object key)
    {
        if (Items.ContainsKey(key))
            return Items[key];

        return null;
    }

    public void Set(object key, object value)
    {
        Items[key] = value;
    }

    public void Remove(object key)
    {
        Items.Remove(key);
    }

    private Dictionary<object, object> Items 
                         = new Dictionary<object, object>();

    public void CleanUp(object sender, EventArgs e)
    {
        foreach (var item in Items.Select(item => item.Value))
        {
            if (item is IDisposable)
                ((IDisposable)item).Dispose();
        }
    }

    internal void Hook(InstanceContext instanceContext)
    {
        instanceContext.Closed += CleanUp;
        instanceContext.Faulted += CleanUp;
    }
}

InstanceItems, used by the extension to hold objects created by Unity.
This gives us a nice place to put created objects; it will also call Dispose on any disposable objects when the instance is closing down.
Now we need to tell Unity to use our shiny new class. This is done by first extending LifetimeManager:

                      

                      
public class WcfServiceInstanceLifeTimeManager : LifetimeManager
{
    private readonly Guid key;

    public WcfServiceInstanceLifeTimeManager()
    {
        key = Guid.NewGuid();
    }

    public override object GetValue()
    {
        return WcfServiceInstanceExtension.Current.Items.Find(key);
    }

    public override void SetValue(object newValue)
    {
        WcfServiceInstanceExtension.Current.Items.Set(key, newValue);
    }

    public override void RemoveValue()
    {
        WcfServiceInstanceExtension.Current.Items.Remove(key);
    }
}

The LifeTimeManager that uses our WCF Extension
All that’s left now is to tell Unity when to use this LifeTimeManager instead of the default. That is done when we register the type:
container.RegisterType<ISession>(new WcfServiceInstanceLifeTimeManager(), 
                                 new InjectionFactory(
                                     c => SessionFactory.CreateSession()));
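With this in place, every dependency registered with this manager is shared within a single service instance, no matter which InstanceContextMode is used. Roughly (hypothetical repository registrations, assuming the ISession registration above):

// Hypothetical: within one WCF service instance, both repositories
// receive the same ISession, since its lifetime is tied to the
// instance rather than to each resolve.
var ruleRepository = container.Resolve<IRepository<Rule>>();
var productRepository = container.Resolve<IRepository<Product>>();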

In conclusion


So, DI frameworks are powerful for handling dependencies, but sometimes they need a little nudge in the right direction. Custom lifetime management is one of those nudges, and both Unity and WCF help you do that.


Slice up your business logic using C# Extension methods to honor the context

One of my favorite features in C# 3.0 is extension methods. An excellent way to apply some cross-cutting concerns and a great tool for attaching utility functions. Heavily used by the LINQ frameworks and in most utility classes I’ve seen around .NET 3.5 projects. Some common example usages I’ve come across include:

var name = "OlympicGames2010".Wordify();
var attributeValue = typeof(Customer).Attribute(o => o.Status)
                                     .Value();

Lately I’ve started to expand the modeling ideas I tried to explain during my presentation at Öredev 2008. It became more of a discussion with James O. Coplien than a presentation, and I was far from done with my own understanding of the ideas and issues I’d identified (there are things to improve in this content). The core idea is pretty simple though:

Not all consumers of an object are interested in the same aspects of that object; it will take on different roles in different contexts.

Let me explain with the simplest example: when building an order system, the aspects of a product that the order system thinks are important are usually not exactly the same aspects that the inventory system values.

Contexts in object models

Eric Evans touches on this in his description of “bounded contexts” (Domain Driven Design, p. 335), where he stresses the importance of defining contexts where a model is valid and not mixing it into another context. In essence, the model of a product should be duplicated: once in the order context and once in the inventory context.

This is a great principle, but at times it can be too coarse-grained. James Coplien and Trygve Reenskaug have identified this in their work around what they call “DCI architecture”. Rickard Öberg et al have done some work in what they call qi4j, where they are composing objects from bits and pieces instead of creating full-blown models for each context.

Slicing logic and models using Extension Methods

Let’s get back to the extension methods and see how they can help us slice business logic into bits and pieces and “compose” what we need for different contexts.

In the code base I’m writing for my TechDays presentation, I have a warehouse class that holds stock of items. These items are common to different contexts; they will surface in orders and PLM. One of the features in this application is to find a stock to reserve, given an item. The following code is used to find that stock:

 

return Stock.First(stock => stock.Item == item);

 

Although trivial, this is a business rule for the warehouse. As the warehouse class evolved, this rule would be duplicated in methods like Reserve, Release, and Delete. A classic refactoring would be to use Extract Method to move it out and reuse the piece, something like:

private bool Match(Stock stock, ItemInfo item)
{
   return stock.Item == item;
}
...
return Stock.First(stock => Match(stock, item));

 

This is a completely valid refactoring, but honestly we lose some intent; the immediate connection between stock and item isn’t as explicit, and the lambda expression wasn’t simplified that much.

So let’s Refactor to Slice instead:

public static class ItemInAWarehouseSlices
{
    public static bool Match(this ItemInfo @this, Stock stock)
    {
        return stock.Item == @this;
    }
}

Adding this business rule as an extension method gives us a natural place for the code to live and a mechanism we can use to compose objects differently in different contexts. Importing this extension method class into the Warehouse C# file, ItemInfo will provide the logic needed in that context:

return Stock.First(item.Match);

Adding the rule this way also sprinkles a touch of DSL on it and gives it a semantic meaning, which makes the code make more sense.
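For completeness, this is roughly what the importing side could look like (a hypothetical Warehouse file; the namespace name is assumed):

using System.Collections.Generic;
using System.Linq;
using Warehousing.Slices;   // assumed namespace for ItemInAWarehouseSlices

public class Warehouse
{
    public IList<Stock> Stock { get; private set; }

    public Stock FindStockToReserve(ItemInfo item)
    {
        // item.Match resolves to the imported extension method.
        return Stock.First(item.Match);
    }
}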

Why don’t you just put that method on ItemInfo, you might ask. Well, the answer is simple: ItemInfo is a concept that might be shared across contexts. Contexts that have no grasp of what a Stock is, nor should they. If I added everything I needed to the ItemInfo class for all contexts that use Item, I would be in bad shape. Thus the ideas behind contextual domain models, Bounded Contexts, DCI, and composite objects.

Extend away …

So extension methods have other usages than just extending existing classes with some utility. They’re also a powerful tool to slice your logic into composable pieces, which helps you focus on the aspects you think are important in the context you are currently working in.

 

So what do you think? Will this help you create clear separations?


Tip: Mocking callbacks using RhinoMocks when lambdas are involved

There are several cool language features available in C# 3.0; one of them is lambdas. It’s a handy tool when you want to execute a few lines of code where a delegate is expected. Fredrik Normén and I chatted about one of those places today: callbacks in async scenarios.

Given this callback:

public class ContextThatDoesStuff
{
    public string ResultOfServiceCall { get; private set; }

    public ContextThatDoesStuff(IServiceWithCallback service)
    {
        service.Begin(
                () => ResultOfServiceCall = service.End()
                );
    }
}

How would you set up a mock to handle that? Somehow you need to intercept that lambda and execute it. One way (there might be others) to solve this with Rhino Mocks is to use the WhenCalled feature. A simple test to demonstrate:

 

[TestMethod]
public void TheTest()
{
    var mock = MockRepository.GenerateMock<IServiceWithCallback>();

    mock.Expect(service => service.Begin(null))
        .IgnoreArguments()
        .WhenCalled(invocation => ((Action)invocation.Arguments[0]).Invoke());

    mock.Expect(service => service.End())
        .Return("The String");

    var context = new ContextThatDoesStuff(mock);

    Assert.AreEqual(context.ResultOfServiceCall, "The String");
}

In the above example we capture the in-parameter (here the lambda), cast it to an Action delegate, and invoke it.

I just love the flexibility in RhinoMocks, don’t you?


Executing lambdas on transaction commit – or Transactional delegates.

One great thing about the TransactionScope class is that it makes it possible to include multiple levels of your object hierarchy. Everything that is done from the point the scope is created until it is disposed will be in the exact same transaction. This makes it easy to control business transactions from high up in the call stack and allow everything to participate.

Everything but calls to delegates, until now. I checked in code to our project today that allows for this:

using(var scope = new TransactionScope())
{
    repository.Save(order);
    bus.SendMessage(new OrderStatusChanged {OrderId = order.Id, Status = order.Status});
    scope.Complete();
}

// In some message listener somewhere that listens to the message sent,
// but is still in the transaction scope.

public void HandleMessage(OrderStatusChanged message)
{
    new OnTransactionComplete
    (
        () => publisher.NotifyClientsStatusChange(message.OrderId, message.NewStatus)
    );
}

 

Why is this interesting?

Why would anyone want a delegate to be called on commit, and only then? The scenarios where I find this useful (the code allows for OnTransactionRollback and AfterTransactionComplete as well) are where you have events or messages sent to other subsystems asking them to react. There can be more than one listener, and often you don’t want the action carried out if the transaction started high up in the business layer isn’t committed (like telling all clients that the status has changed, and then the transaction rolls back).

How can I implement this?

The code to achieve this was fairly simple. Transactions in System.Transactions have an EnlistVolatile method where you can pass in anything that implements IEnlistmentNotification. This handy little interface defines four methods:

public interface IEnlistmentNotification
{
    void Prepare(PreparingEnlistment preparingEnlistment);
    void Commit(Enlistment enlistment);
    void Rollback(Enlistment enlistment);
    void InDoubt(Enlistment enlistment);
}

Any object enlisting in a transaction with this interface will be told what’s happening and can participate in the voting. This particular implementation doesn’t vote; it just reacts to commits or rollbacks.

The implementation spins around the class TransactionalAction, which in its constructor accepts an Action delegate as a target and then enlists the object in the current transaction:

 

public TransactionalAction(Action action)
{
     Action = action;
     Enlist();
}

public void Enlist()
{
     var currentTransaction = Transaction.Current;
     currentTransaction.EnlistVolatile(this, EnlistmentOptions.None);
}

Enlisting the object in the transaction will effectively add it to the transaction’s graph. As long as the transaction scope has a root reference, the object will stay alive. That is why this works by just stating new OnTransactionComplete( () => ) without having to save the reference.
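The rest of the TransactionalAction base class isn’t shown in the post; a minimal sketch consistent with the snippets above (it votes yes in Prepare and completes each notification, as a well-behaved volatile participant should) might be:

// Sketch of the members not shown above; the constructor and Enlist
// are as listed earlier. Subclasses override Commit/Rollback/InDoubt.
public class TransactionalAction : IEnlistmentNotification
{
    protected Action Action { get; set; }

    public virtual void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();  // vote to commit
    }

    public virtual void Commit(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public virtual void Rollback(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public virtual void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}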

The specific OnTransactionComplete, OnTransactionRollback, and WhenTransactionInDoubt then inherit from TransactionalAction and override the appropriate method (the base Commit ensures that we behave well in a transactional scenario):

public class OnTransactionCommit : TransactionalAction
{
    public OnTransactionCommit(Action action)
        : base(action)
    { }

    public override void Commit(System.Transactions.Enlistment enlistment)
    {
        Action.Invoke();
        base.Commit(enlistment);
    }
}

Hey, what about AfterTransactionCommit?

IEnlistmentNotification isn’t an interceptor model; it just allows you to participate in the transactional work. To be able to make After calls, we need to use another mechanism: the TransactionCompleted event. Like the previous code, this is fairly simple;

public class AfterTransactionComplete
{
    private Action action;

    public AfterTransactionComplete(Action action)
    {
        this.action = action;
        Enlist();
    }

    private void Enlist()
    {
        var currentTransaction = Transaction.Current;

        if(NoTransactionInScope(currentTransaction))
            throw new InvalidOperationException("No active transaction in scope");

        currentTransaction.TransactionCompleted += TransactionCompleted;
    }

    private void TransactionCompleted(object sender, TransactionEventArgs e)
    {
        if(TransactionHasBeenCommited(e))
            action.Invoke();
    }
}

Summary

With some simple tricks using Transaction.Current and lambdas, our delegates can now participate in our transactions. Download the code to be able to do this:

new OnTransactionComplete
(
    () => DoStuff()
);

new OnTransactionRollback
(
    () => DoStuff()
);

new AfterTransactionCommit
(
    () => DoStuff()
);

// Also includes a more "standard" API:
Call.OnCommit( () => DoStuff() );

// Since Call has extension methods, this is also valid:
((Action)(() => firstActionWasCalled = true)).OnTransactionCommit();

I hope this helps you build more robust solutions.

Code download (project is VS2010 but it works on .NET 2.0) [306kb]

More information on IEnlistmentNotification

More information on EnlistVolatile


Pattern focus: Decorator pattern

The decorator pattern is a pretty straightforward pattern that utilizes wrapping to extend existing classes with new functionality. It’s commonly used to apply cross-cutting concerns on top of business classes, which avoids mixing those up with the business logic. Frameworks like Spring.Net use this pattern to apply their functionality to your classes.

Implementing decorator pattern

The pattern is pretty simple to implement: extract the public methods of the class you want to wrap into an interface, and implement that interface on the decorator. Let the decorator receive an object of the original type and delegate all calls to that object. Let’s look at an example that uses caching (inspired by a discussion with Anders Bratland today in the Swenug MSN group).

The UserService we’ll extend is pretty straightforward:

public interface IUserService
{
    User GetUserById(int id);
}

public class UserService : IUserService
{
    public User GetUserById(int id)
    {
        return new User { Id = id };
    }
}

The idea is that we want to cache the user and avoid mixing that logic with the code that retrieves the user. For this we create the decorator wrapper:

public class CachedUserService : IUserService
{
    private readonly IUserService _userService;

    public CachedUserService(IUserService service)
    {
        _userService = service;
    }
}

The wrapper implements the same interface as the wrapped class, takes an instance of the wrapped object (in this case via the interface, to ensure we can add more than one decorator), and stores it as an instance variable for later use.

To add our caching, we’ll implement the GetUserById method to first check our cache if the user is there and if not just delegate the call to the wrapped object;

private static readonly Dictionary<int, User> UserCache = new Dictionary<int, User>();

public User GetUserById(int id)
{
    if (!UserCache.ContainsKey(id))
        UserCache[id] = _userService.GetUserById(id);

    return UserCache[id];
}

Anyone consuming this service will never have to worry about the caching we just applied; additionally, our original code never has to know that it might be cached, and each concern can be maintained separately. A win-win for both parts: concerns successfully separated.

 

Using decorators

Since we need to compose the objects to get all the decorators we want wrapping our object, it’s usually advisable to hide the creation of the object inside a factory. A typical usage could be:

public static class ServiceFactory
{
    public static IUserService Create()
    {
        var baseObject = new UserService();

        return new AuditableUserService(new CachedUserService(baseObject));
    }
}
...
private void SetupContext()
{
    _service = ServiceFactory.Create();
}

Using this approach we are free to add and remove as many decorators around the original object as we wish, and the consumers never have to know.

Usage scenarios for the decorator pattern

There are several usages for this pattern in a lot of different scenarios, not only adding caching to a service. Let’s examine the categories and what they can be used for.

Applying cross cutting concerns

We’ve just seen how we can apply a cross-cutting concern, like caching, to an existing class. The factory hints about auditing, and there are several others you could imagine being interesting. One could be the notorious INotifyPropertyChanged interface that’s needed for data binding. Wrapping the bound object in a decorator that automatically raises the event will keep the domain objects free from infrastructure concerns. Something like:

public class NotifyingUser : INotifyPropertyChanged, IUser
{
    private IUser _user;

    public NotifyingUser(IUser user)
    {
        _user = user;
    }

    public int Id
    {
        get { return _user.Id; }
        set
        {
            _user.Id = value;
            RaisePropertyChanged("Id");
        }
    }

    // Standard INotifyPropertyChanged plumbing (not shown in the original post):
    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

This will keep the user entity clean from data binding concerns and we’ll be able to provide the functionality. Win-Win.

Creating proxies

A lot of frameworks use decorators to create runtime proxies that apply the framework’s functionality. Spring.Net is an example of that; it uses decorators to apply mixins and advices.

Strangler vine refactoring

Michael Feathers talks a lot about how to handle legacy code. One of the concepts I’ve heard him talk about is the “strangler vine”: a plant that starts out small, but if you let it grow it will soon take over and strangle everything else. Refactoring using strangler vine strategies is about replacing legacy code with new code. A variant of the decorator pattern can be very useful to accomplish that. In a recent project we refactored from Linq To Sql to NHibernate, not all at once but step by step. To do this successfully we used the ideas behind the decorator (with a touch of the Adapter pattern thrown in):

public class OrderRepository : IOrderRepository
{
    private readonly IOrderRepository _ltsRepository;
    private readonly IOrderRepository _nhRepository;

    public OrderRepository(IOrderRepository ltsRepository, IOrderRepository nhRepository)
    {
        _ltsRepository = ltsRepository;
        _nhRepository = nhRepository;
    }

    public Order GetOrderById(int id)
    {
        var ltsObject = _ltsRepository.GetOrderById(id);

        return Translator.ToEntityFromLts(ltsObject);
    }

    public IList<Order> ListOrders()
    {
        return _nhRepository.ListOrders();
    }
}

This allowed us to move over from Linq To Sql to NHibernate method by method, slowly strangling the old repository to death so it could be cut out.

Summary

The decorator pattern is a very versatile pattern with a lot of usage scenarios; a great tool to have in your toolbox, either as a stand-alone pattern or, as we’ve seen in some of the examples, in cooperation with others. It is simple but powerful, and I hope you’ll find usages for it in your applications.


Tracking a property value changes over time: Temporal property using NHibernate

A common problem that often needs to be solved is answering a question like “how did my inventory look 1 month ago?”. Keeping this kind of history, and querying over it, can be tedious and is something that needs to be thought about for a bit. In this post I will show you how you can track a single property over time, using the Temporal Property pattern written down by Martin Fowler and NHibernate to persist the changes over time.

Implementing the temporal property pattern in C#

This example is derived from a solution in my current project. The idea is that we have a bunch of products in a warehouse. The number of products in stock varies, of course, and we would like to ensure that the application can always answer the question “How many of product x did I have at date y?”.

 figure 1, the inventory class

Figure 1 shows the basics of the ProductInventory class. I’ll show you how to extend the Quantity property as a temporal property.

To ensure that we keep track of all the values Quantity ever had, we need to save those values in a table and attach some date of validity to them. Martin’s original pattern suggests that every value gets a date range with Valid From and Valid To attributes. After discussing this a bit with Philip Nelson, who works with temporal data daily, I’ve come to the conclusion that a single “EffectiveDate” column is usually good enough, and certainly good enough for this scenario.

First off, let’s add a mechanism that allows us to track these values. For this we will be using a dictionary:

protected IDictionary<DateTime, Quantity> quantityTrail = ... 

The dictionary will use DateTime as the key and a Quantity entity (with an implicit operator for int and all) to hold the quantity values. To work with this list we’ll add a couple of methods and make some changes to the Quantity property:

public virtual Quantity Quantity
{
    get { return QuantityAt(DateTime.Now); }
    set { AddTrail(value); }
}

private void AddTrail(Quantity value)
{
    quantityTrail.Add(DateTime.Now, value);
}

public virtual Quantity QuantityAt(DateTime date)
{
    return (from item in quantityTrail
            where item.Key <= date
            orderby item.Key descending
            select item.Value)
        .FirstOrDefault();
}

listing 1, code to add temporal functionality

As you can see in listing 1, we’ve changed the setter to save the value to the dictionary instead of a field. We’ve also changed the getter to ask for the quantity in the Now time frame.

In listing 1 you also see the QuantityAt method, which looks in the dictionary and picks out the entry that is older than or equal to the date provided. Congratulations, you have now implemented a temporal property in C#. Figure 2 shows the end result;

figure 2, end result 
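To illustrate with a hypothetical usage (not part of the original code):

var inventory = new ProductInventory();

// Each assignment adds a trail entry stamped with DateTime.Now.
inventory.Quantity = 10;
// ... a month passes, the stock changes again ...
inventory.Quantity = 7;

var current = inventory.Quantity;                                  // 7
var lastMonth = inventory.QuantityAt(DateTime.Now.AddMonths(-1));  // 10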

Storing temporal properties using NHibernate

NHibernate comes with built-in support for easily storing a dictionary in a table. This is done using the map type and the definition of an index. In our case, the index is our DateTime. Listing 2 shows the mapping for this scenario:

<map name="quantityTrail" access="field" inverse="false" cascade="all-delete-orphan">
  <key column="ProductInventoryId" />
  <index column="EffectiveDate" type="DateTime" />
  <one-to-many class="Quantity" />
map>   
...
<class name="Quantity" table="ProductInventoryQuantites">
  <id name="Id">
    <generator class="identity" />
  id>
  <property name="_quantity" column="Quantity" access="field" />    
class>

Listing 2, mapping to save a dictionary in the database

With these two pieces of the puzzle, we’ll now start to track changes to Quantity in our database. The entity will always deliver the latest value of the property, and you are free to load-optimize the history as you find suitable (lazy/eager).

Optimizing for simpler querying

For faster querying I’ve tweaked the basic solution a little. Instead of reading the current value from the trail, I’ve added it to the entity table and just pass new values into the dictionary. Listings 4 and 5 show the optimization changes:

private Quantity _quantity;
public virtual Quantity Quantity
{
    get { return _quantity; }
    set
    {
        _quantity = value;
        AddTrail();
    }
}

listing 4, changes to the quantity property for optimization

<property name="Quantity" 
          column="CurrentQuantity" 
          type="QuantityTypeMap, Core" />

Listing 5, added property mapping for optimization

This little “optimization” allows loading the entity from one table in the “normal” case, and then lazy loading the list of history values when calling QuantityAt() to ask for history.

Conclusions and downloadable code

So with a little bit of OO, and great help from NHibernate, it’s fairly simple to track historical data for a property. I am not sure how other ORMs would solve this, or if they can. Do you?

Code example: TemporalPattern.Zip [12k]
