Posts Tagged design patterns

The dreaded “Save(Foo bar)”

For the last year or so my biggest antagonist has been “Save(Foo bar)”. There are usually a lot of things going wrong when that method turns up. I’m not saying that saving things to persistent storage is a bad thing, but I’m arguing that the only place it should exist is in the persistence/data layer, where data is actually saved. Don’t put it in your services, don’t put it in your business logic, and for crying out loud, do not base entire behaviors around it.

The source of the ill behavior

When developing software, a user will quite often tell you that she wants a button that will save the changes she has made to her current work. As good developers we implement this button and make the user happy. After all, their happiness is our salary. In a pure data-entry application this is all good, but as soon as we add just a little bit of behavior we need to start pushing the actual saving far, far away. Consider this screen:

In this setting the Save button makes perfect sense. The user changes a couple of things and then commits those changes to the system. Now start to think about more than just storing the values in a database; think about adding behavior to the changes.

Do not throw away the user’s intent!

When a user makes a change to any of these fields, they do so with an intention. Changing the address might mean that the receiving end of an order has changed. Changing the logistics provider might mean that a new one needs to be notified. With a save method, the only way for our business logic to know what to do is to figure it out:


                        
public void Save(Order changedOrder)
{
    if (changedOrder.Address != existingOrder.Address)
        OrderAddressChanged(existingOrder, changedOrder);

    if (changedOrder.Status != existingOrder.Status)
        OrderStatusChanged(existingOrder, changedOrder);

    if (changedOrder.LogisticServiceProvider != existingOrder.LogisticServiceProvider)
        LogisticServiceProviderChanged(existingOrder, changedOrder);
}

                        

And even worse:


                        
public void OrderStatusChanged(Order existingOrder, Order changedOrder)
{
    if (changedOrder.Status == Status.Cancelled)
        ......

                        

Basically, what we’ve done is throw away the user’s intent and any capture of the actual work the user has done. I see this happening a lot in service interfaces (WCF service contracts with a single Save method), in “Managers” (let’s not even go there) and in local services.

A couple of releases down the road, the code this generates will smell in so many ways it’s not even funny to joke about. But to get you started, read up on the Open/Closed Principle (the “O” in SOLID).

 

Enough with the bashing, what’s the solution?

The idea that a user can commit their work with a single push of a button is a good one. But that doesn’t mean that the save needs to be anything more than a button. The actual work under the covers should reflect what the user really wants to happen. Changing the service provider on an order that’s already packed should probably issue a message to the old provider not to pick the package up, and a message to the new one to actually do so. This kind of behavior is best captured with commands.

Let’s revisit the order form:

In this scenario, every red box is actually an intended command from the user, identified as:

ChangeOrderAddress

ChangeOrderLogisticServiceProvider

ChangeOrderStatus (or actually, CancelOrder, ShipOrder, etc)

Implementing this could be done either with the command pattern or with methods on the order class. “It all depends” on your scenario and preferences. A rough sketch of the command-style alternative is shown below.
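As a sketch of what the command-style alternative could look like (the ICommand interface, the CancelOrder class and the IOrderNotifier service are hypothetical names for illustration, not taken from any specific framework):

public interface ICommand
{
    void Execute();
}

// Captures the user's intent explicitly instead of diffing two Order objects.
public class CancelOrder : ICommand
{
    private readonly Order _order;
    private readonly IOrderNotifier _notifier; // hypothetical notification service

    public CancelOrder(Order order, IOrderNotifier notifier)
    {
        _order = order;
        _notifier = notifier;
    }

    public void Execute()
    {
        _order.Status = Status.Cancelled;
        _notifier.NotifyPickupCancelled(_order);
    }
}

When the user hits Save, the UI would simply execute the commands it has collected, in order, and only the persistence layer performs the actual saving.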

So what am I really suggesting?

Make sure that your code reflects the intention of the user, not the technical details of how the data is persisted. You will save a lot of pain, grief and hard work by just identifying what the user intends, capturing that in commands or something similar, and executing them in order when the user hits save. And most importantly: SAVE IS A METHOD ON THE PERSISTENCE LAYER, NOT A BUSINESS FUNCTION.



NDC2010: Michael Feathers – The deep synergy between good design and testability

In this session Michael Feathers basically killed the argument “I don’t want tests to drive my design” with a simple example based on the code smell: the iceberg class.


Private methods are really hard to test, and here we have a class with a bunch of them. It will be hard to test, but in addition to that it’s also a bad design decision: it violates the Single Responsibility Principle.

A better design would be to break this out into smaller classes, something like this:

A design which honors SRP and will be much easier to test.
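A minimal sketch of the kind of breakup he was talking about (the class and method names here are my own illustration, not from the session):

using System.Collections.Generic;
using System.Linq;

// Iceberg class: a single public method with the real work hidden in private methods.
public class ReportGenerator
{
    public string Generate(IEnumerable<string> items)
    {
        return Format(Filter(items));
    }

    private IEnumerable<string> Filter(IEnumerable<string> items)
    {
        return items.Where(item => !string.IsNullOrEmpty(item));
    }

    private string Format(IEnumerable<string> items)
    {
        return string.Join(", ", items.ToArray());
    }
}

// Broken out: each responsibility becomes a small public class that can be tested on its own.
public class ItemFilter
{
    public IEnumerable<string> Filter(IEnumerable<string> items)
    {
        return items.Where(item => !string.IsNullOrEmpty(item));
    }
}

public class ItemFormatter
{
    public string Format(IEnumerable<string> items)
    {
        return string.Join(", ", items.ToArray());
    }
}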

So why is this true?
There are basically two reasons, related to each other, that drive this synergy. To begin with, you are consuming and using the API, which quickly lets you feel the pain. The second reason is based on the fact that tests help you understand code. Badly designed code will be hard to test, since it’s also very hard to understand what it does. In the light of this, the SOLID principles make a lot of sense. Not only do they allow you to test your code, they make it easier to understand.

With these facts, Michael stated a simple truth:

Solving Design Problems
Solves Test Problems

It’s not about letting a test tell you what to do; making code testable does not necessarily imply better design. But better design will make your code easier to test.

This is also why he has problems with “groping test tools”, tools that let you test private parts. He feels they are a “get out of jail free” card, which removes an important force for making the design better: pain. Pain is useful as a learning tool in that it tells you what to avoid. Sure, removing test pain will make the code easier to test, but will it make it easier to maintain or extend? Usually not.

He then concluded that testing isn’t hard, testing is easy in the presence of good design.

Reflections
This was a great talk! By examining the relationship between test pains, code smells and design principles, Michael made a clear point that there is a synergy. If you find it hard to test your code, there is usually some other problem with it.

He didn’t want to impose TDD on everyone, but he made his case that tests will help you understand your code better and act as a constraint that forces you into good design. Something you should be doing anyway.

 

Catalogue of testing pains and relation to code smells

State hidden within a method – you find yourself wishing you could access local variables in a test. Smell: methods are typically too long and violate the Single Responsibility Principle.
Difficult setup – instantiating a class involves instantiating 12 others. This pain says something about the design; the class isn’t factored into smaller pieces. Smell: too much coupling.
Incomplete shutdown – pieces of your code lose resources and you never realize it until you test. Smell: classes don’t take care of themselves, poor encapsulation.
State leaks across tests – the state of one test affects another. Smell: improper use of singletons or other forms of global mutable state.
Framework frustration – testing in the presence of a framework is hard. Smell: insufficient domain separation, lacking good separation of concerns.
Difficult mocking – you find yourself writing mocks for objects returned by other objects. Smell: Law of Demeter violations.
Difficult mocking 2 – hard to mock particular classes. Smell: insufficient abstraction and/or dependency inversion violation.
Hidden effects – you can’t test a class because you have no access to the ultimate effect of its execution. Smell: insufficient separation of concerns and encapsulation violation.
Hidden inputs – there is no way to instrument the setup conditions for a test through the API. Smell: over-encapsulation, insufficient separation of concerns.
Unwieldy parameter lists – it’s too much work to call a method or instantiate a class. Smell: too many responsibilities in a class or a method, SRP violation.
Insufficient access – you find yourself wishing you could test a private method. Smell: too many responsibilities in a class, SRP violation.
Test trash – many unit tests change whenever you change your code. Smell: Open/Closed violations.



NDC2010: Michael Feathers – Testable C#

This session basically explained a simple rule that should be followed at all times, regardless of whether you are using TDD:

Never hide a TUF in a TUC

What did Michael mean by that?

There are two things that can be “test unfriendly”: features and constructs. The common denominator of the two is that they are hard to replace or isolate, and thus impose a lot of constraints on what needs to be up and running to test.

He then went on to explain exactly which TUFs and TUCs to look out for.

Test Unfriendly Features (TUF)
A TUF is something that has a tight coupling to a resource that’s hard to mock, fake or replace for testability. He listed a few:

  • File access
  • Long computations
  • Network access
  • Database access
  • Global variable manipulation
  • Use of 3rd party libraries or frameworks

The list is in no way complete, but it clearly shows a pattern and really follows the idea that a unit test should be isolated from any external dependencies.

Test Unfriendly Constructs (TUC)
A TUC is a language feature that makes it hard to fake, mock or replace a piece of code. He was very clear that he wasn’t saying these features are bad in any way, just that by using them we should be aware that we might introduce testability issues, especially in combination with TUFs.

He listed a few C# constructs that aren’t replaceable (or at least are hard to replace without tools like TypeMock):

  • Private/static/sealed/non-virtual methods
  • Object constructors
  • Static constructors
  • Static initialization expressions
  • Sealed classes
  • Static classes
  • Const/readonly initialization expressions

When dealing with these TUCs, since they aren’t completely worthless, he gave a simple piece of advice: do not code with the idea “why would anyone ever want to replace this?”, but think instead in terms of “it is really important that nobody can change or easily access this”.
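To make the rule concrete, here is a small hypothetical example of my own (not from the session): file access (a TUF) hidden inside a private static method (a TUC), versus the same dependency pulled out behind a seam a test can replace.

using System.IO;

// TUF hidden in a TUC: the file access can't be faked without touching the file system.
public class PriceList
{
    public decimal PriceFor(string article)
    {
        return decimal.Parse(ReadRawPrice(article));
    }

    private static string ReadRawPrice(string article)
    {
        return File.ReadAllText(article + ".price");
    }
}

// The TUF now sits behind an interface, so a test can replace it with a fake.
public interface IPriceSource
{
    string ReadRawPrice(string article);
}

public class FilePriceSource : IPriceSource
{
    public string ReadRawPrice(string article)
    {
        return File.ReadAllText(article + ".price");
    }
}

public class TestablePriceList
{
    private readonly IPriceSource _source;

    public TestablePriceList(IPriceSource source)
    {
        _source = source;
    }

    public decimal PriceFor(string article)
    {
        return decimal.Parse(_source.ReadRawPrice(article));
    }
}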

Reflections
This was a good talk, not in terms of new things I didn’t know, but in introducing rules and arguments for building testable code. Not everyone will write code TDD-style, but by at least following the simple rule “don’t hide a TUF in a TUC” it will be easy to add tests later on if there is something you are curious about or want to validate.

This was a great takeaway that I can put into use right away.



Pattern focus: Decorator pattern

The decorator pattern is a pretty straightforward pattern that uses wrapping to extend existing classes with new functionality. It’s commonly used to apply cross-cutting concerns on top of business classes, which avoids mixing those up with the business logic. Frameworks like Spring.Net use this pattern to apply their functionality to your classes.

Implementing the decorator pattern

The pattern is pretty simple to implement: extract the public methods of the class you want to wrap into an interface, and implement that interface on the decorator. Let the decorator receive an object of the original type and delegate all calls to that object. Let’s look at an example that uses caching (inspired by a discussion with Anders Bratland today on the Swenug MSN group).

The UserService we’ll extend is pretty straightforward:

public interface IUserService
{
    User GetUserById(int id);
}

public class UserService : IUserService
{
    public User GetUserById(int id)
    {
        return new User { Id = id };
    }
}

The idea is that we want to cache the user and avoid mixing that logic with the code that retrieves the user. For this we create the decorator wrapper:

public class CachedUserService : IUserService
{
    private readonly IUserService _userService;

    public CachedUserService(IUserService service)
    {
        _userService = service;
    }
}

The wrapper implements the same interface as the wrapped class, takes an instance of the wrapped object (in this case via the interface, to ensure we can add more than one decorator) and stores it in an instance variable for later use.

To add our caching, we’ll implement the GetUserById method to first check whether the user is in our cache and, if not, delegate the call to the wrapped object:

private static readonly Dictionary<int, User> UserCache = new Dictionary<int, User>();

public User GetUserById(int id)
{
    if (!UserCache.ContainsKey(id))
        UserCache[id] = _userService.GetUserById(id);

    return UserCache[id];
}

Anyone consuming this service never has to worry about the caching we just applied, and our original code never has to know that it might be cached; each concern can be maintained separately. A win-win for both parts, concerns successfully separated.

 

Using decorators

Since we need to compose the objects to get all the decorators we want wrapping our object, it’s usually advisable to hide the creation of the object inside a factory. A typical usage could be:

public static class ServiceFactory
{
    public static IUserService Create()
    {
        var baseObject = new UserService();

        return new AuditableUserService(new CachedUserService(baseObject));
    }
}

...

private void SetupContext()
{
    _service = ServiceFactory.Create();
}

Using this approach we are free to add or remove as many decorators around the original object as we wish, and the consumers never have to know.
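The factory above references an AuditableUserService that isn’t shown in this post; a minimal sketch of what such a decorator could look like (the console logging is just a stand-in for a real audit log):

using System;

public class AuditableUserService : IUserService
{
    private readonly IUserService _userService;

    public AuditableUserService(IUserService service)
    {
        _userService = service;
    }

    public User GetUserById(int id)
    {
        // Record the call, then delegate to the wrapped service.
        Console.WriteLine("GetUserById called with id {0}", id);
        return _userService.GetUserById(id);
    }
}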

Usage scenarios for the decorator pattern

There are several usages for this pattern in a lot of different scenarios, not only adding caching to a service. Let’s examine the categories and what they can be used for.

Applying cross cutting concerns

We’ve just seen how we can apply a cross-cutting concern, like caching, to an existing class. The factory hints about auditing, and there are several others you could imagine being interesting. One could be the notorious INotifyPropertyChanged interface that’s needed for data binding. Wrapping the bound object in a decorator that automatically raises the event will keep the domain objects free from infrastructure concerns. Something like:

public class NotifyingUser : INotifyPropertyChanged, IUser
{
    private IUser _user;

    public NotifyingUser(IUser user)
    {
        _user = user;
    }

    public int Id
    {
        get { return _user.Id; }
        set
        {
            _user.Id = value;
            RaisePropertyChanged("Id");
        }
    }

    // Standard INotifyPropertyChanged plumbing.
    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

This will keep the user entity clean of data binding concerns while we still provide the functionality. Win-win.

Creating proxies

A lot of frameworks use decorators to create runtime proxies that apply the framework functionality. Spring.Net is an example of this; it uses decorators to apply mixins and advices.

Strangler vine refactoring

Michael Feathers talks a lot about how to handle legacy code. One of the concepts I’ve heard him talk about is the “strangler vine”: a plant that starts out small, but if you let it grow it will soon take over and strangle everything else. Refactoring with strangler strategies is about gradually replacing legacy code with new code, and a variant of the decorator pattern can be very useful to accomplish that. In a recent project we refactored from Linq to Sql to NHibernate, not all at once but step by step. To do this successfully we used the ideas behind the decorator (with a touch of the Adapter pattern thrown in):

public class OrderRepository : IOrderRepository
{
    private readonly IOrderRepository _ltsRepository;
    private readonly IOrderRepository _nhRepository;

    public OrderRepository(IOrderRepository ltsRepository, IOrderRepository nhRepository)
    {
        _ltsRepository = ltsRepository;
        _nhRepository = nhRepository;
    }

    public Order GetOrderById(int id)
    {
        var ltsObject = _ltsRepository.GetOrderById(id);

        return Translator.ToEntityFromLts(ltsObject);
    }

    public IList<Order> ListOrders()
    {
        return _nhRepository.ListOrders();
    }
}

This allowed us to move over from Linq to Sql to NHibernate method by method, slowly strangling the old repository to death so it could be cut out.

Summary

The decorator pattern is a very versatile pattern with a lot of usage scenarios and a great tool to have in your toolbox, either as a stand-alone pattern or, as we’ve seen in some of the examples, in cooperation with others. It is simple but powerful and I hope you’ll find usages for it in your applications.



Tracking a property’s value changes over time: Temporal property using NHibernate

A common problem that often needs to be solved is to answer a question like “how did my inventory look 1 month ago?”. Keeping this kind of history, and querying over it, can be tedious and needs some thought. In this post I will show you how you can track a single property over time, using the Temporal Property pattern written down by Martin Fowler and NHibernate to persist the changes over time.

Implementing the temporal property pattern in C#

This example is derived from a solution in the project I am currently in. The idea is that we have a bunch of products in a warehouse. The number of products in stock varies, of course, and we would like to ensure that the application can always answer the question “How many of product x did I have at date y?”.

 figure 1, the inventory class

Figure 1 shows the basics of the ProductInventory class. I’ll show you how to extend the Quantity property into a temporal property.

To ensure that we keep track of all the values that Quantity has ever had, we need to save those values in a table and attach some date of validity to them. Martin’s original pattern suggests that every value gets a date range with a Valid From and a Valid To attribute. After discussing this a bit with Philip Nelson, who works with temporal data daily, I’ve come to the conclusion that a single “EffectiveDate” column is usually good enough, and certainly good enough for this scenario.

First off, let’s add a mechanism that allows us to track these values. For this we will be using a dictionary:

protected IDictionary<DateTime, Quantity> quantityTrail = ... 

The dictionary will use DateTime as the key and a Quantity entity (with an implicit operator for int and all) to hold the quantity values. To work with this list we’ll add a couple of methods and make some changes to the Quantity property:

public virtual Quantity Quantity
{
    get { return QuantityAt(DateTime.Now); }
    set { AddTrail(value); }
}

private void AddTrail(Quantity value)
{
    quantityTrail.Add(DateTime.Now, value);
}

public virtual Quantity QuantityAt(DateTime date)
{
    return (from item in quantityTrail
            where item.Key <= date
            orderby item.Key descending
            select item.Value)
        .FirstOrDefault();
}

listing 1, code to add temporal functionality

As you can see in listing 1, we’ve changed the setter to save the value to the dictionary instead of a field. We’ve also changed the getter to ask for the quantity in the “now” time frame.

In listing 1 you also see the QuantityAt method, which looks in the dictionary and picks out the entry that is older than or equal to the date provided. Congratulations, you have now implemented a temporal property in C#. Figure 2 shows the end result:

figure 2, end result 

Storing temporal properties using NHibernate

NHibernate comes with built-in support for storing a dictionary in a table. This is done using the map type and a definition of an index; in our case, the index is our DateTime. Listing 2 shows the mapping for this scenario:

<map name="quantityTrail" access="field" inverse="false" cascade="all-delete-orphan">
  <key column="ProductInventoryId" />
  <index column="EffectiveDate" type="DateTime" />
  <one-to-many class="Quantity" />
</map>
...
<class name="Quantity" table="ProductInventoryQuantites">
  <id name="Id">
    <generator class="identity" />
  </id>
  <property name="_quantity" column="Quantity" access="field" />
</class>

Listing 2, mapping to save a dictionary in the database

With these two pieces of the puzzle we’ll now start tracking changes to the Quantity in our database. The entity will always deliver the latest value for the property, and you are free to load-optimize the history as you find suitable (lazy/eager).
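A rough usage sketch, assuming the ProductInventory class from figure 1 exposes the members above (the trail timestamps come from the system clock, so the “month apart” comments here are purely illustrative):

var inventory = new ProductInventory();

inventory.Quantity = 10;    // recorded in the trail with the current timestamp
// ... imagine a month passes ...
inventory.Quantity = 25;    // a second trail entry with a later timestamp

var current = inventory.Quantity;                                   // 25
var lastMonth = inventory.QuantityAt(DateTime.Now.AddMonths(-1));   // 10, given that the first entry predates that date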

Optimizing for simpler querying

For faster querying I’ve tweaked the basic solution a little bit. Instead of reading the current value from the trail, I’ve added the current value to the entity table and just pass new values into the dictionary. Listings 4 and 5 show the optimization changes:

private Quantity _quantity;
public virtual Quantity Quantity
{
    get { return _quantity; }
    set
    {
        _quantity = value;
        AddTrail(value);
    }
}

listing 4, changes to the quantity property for optimization

<property name="Quantity" 
          column="CurrentQuantity" 
          type="QuantityTypeMap, Core" />

Listing 5, added property mapping for optimization

This little “optimization” allows loading the entity from a single table in the “normal” case, and then lazy-loading the list of history values for QuantityAt() when asking for history.

Conclusions and downloadable code

So with a little bit of OO and great help from NHibernate, it’s fairly simple to track historical data for a property. I am not sure how other ORMs would solve this, or if they can. Do you?

Code example: TemporalPattern.Zip [12k]



Creating a dynamic state machine with C# and NHibernate

In my last post (An architecture dilemma: Workflow Foundation or a hand-rolled state machine?) I talked about the discussion around an architectural choice. The conclusion of that discussion was to hand-roll a dynamic state machine. This post is the first of three parts explaining the solution we used. In this part we’ll focus on the state machine; in the following two parts we’ll be adding business rules to each state and utilizing a couple of tricks from NHibernate to make those rules rich.

If you are uncertain what the state machine pattern looks like, there is some information here: http://msdn.microsoft.com/en-us/magazine/cc301852.aspx. For the rest of this post I will assume that you have a basic understanding of it.

The requirements for our state machine were that users should be able to add their own states and tie them into the state machine. They should also be able to define the flow, designing the structure of which states can transition into each other.

The basic state machine looks something like this:

Since we need to be more dynamic than this, our model turned out more like this instead:

In this model we are using composition instead of inheritance to build our state machine. The “AllowedTransitions” list contains the states that the current one is allowed to transition to.

The “CanBeChangedInTo” method takes a state object, compares it to the list of states and decides whether the transition is allowed. For our scenario this comparison is done by implementing IEquatable and overriding the appropriate operators (==, !=), as sketched below.
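A minimal sketch of what that equality implementation could look like, showing only the equality members and assuming two states are considered equal when they have the same name (the real project may well compare on something else, such as the Id):

using System;

public class State : IEquatable<State>
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    public virtual bool Equals(State other)
    {
        if (ReferenceEquals(other, null))
            return false;
        return Name == other.Name;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as State);
    }

    public override int GetHashCode()
    {
        return Name != null ? Name.GetHashCode() : 0;
    }

    public static bool operator ==(State left, State right)
    {
        if (ReferenceEquals(left, null))
            return ReferenceEquals(right, null);
        return left.Equals(right);
    }

    public static bool operator !=(State left, State right)
    {
        return !(left == right);
    }
}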

This is defined on a template kind of entity; there will be “instances” made out of this template where all attributes are copied onto the instance.

The implementation of the ChangeStateTo method on the template is fairly simple:

Template:
public void ChangeStateTo(State newState)
{
    if (State.CanBeChangedInTo(newState))
        State = newState;
    else
        throw new InvalidStateTransitionException();
}

State:
public bool CanBeChangedInTo(State state)
{
    return AllowedTransitions.Contains(state);
}

Simple, yet very powerful, and it allows for the kind of dynamic state transitions our scenario requires.

Adding state transitions is easy as well, since the AllowedTransitions list simply holds references to the states that may follow the current one; see the small sketch below.
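For illustration, a method for adding a transition could look as simple as this (the method name is my own, not from the project):

public virtual IList<State> AllowedTransitions { get; protected set; }

public virtual void AddAllowedTransition(State toState)
{
    // Only add the target state if it isn't already an allowed transition.
    if (!AllowedTransitions.Contains(toState))
        AllowedTransitions.Add(toState);
}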

Our project uses NHibernate as the persistence engine, and to persist our dynamic state we use a many-to-many bag mapped in XML:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="" namespace="">
  <class name="State" table="States">
    <id name="Id" type="int">
      <generator class="native" />
    </id>

    <property name="Name" length="255" not-null="true" />

    <bag name="AllowedTransitions" table="AllowedStateTransitions">
      <key column="FromState_Id" />
      <many-to-many column="ToState_Id" class="State" />
    </bag>
  </class>
</hibernate-mapping>

Between NHibernate’s many-to-many mapping and the design of the state transition list, it’s easy to start building an application on top of this. We are not done yet, though: every state needs more transition rules than just “AllowedTransitions”. The next post will tackle that requirement.

