Archive for category Architecture

Platform migrations – not really like a flock of sparrows.

Migrations. Moving from one place to another. For birds in the Nordics it is as simple as flying south for the winter. For the client I am currently flying to, the trip will be a little more complicated.

In about an hour I will be landing in Brussels to share experiences, thoughts and ideas on migrating away from a global and distributed IBM Lotus Notes/Domino solution to a new platform.

There are quite a few things to think about, whether the migration is from one platform to another or just to a newer version of the same one.

Here are three of the things I will be sharing today:

Business case

Building a business case on mere cost cuts for licenses and hardware will most probably not motivate a migration. The return on investment in pure financials after a migration project will take years to realize. The business case only starts making sense when you add qualitative values on top of the quantitative. Things like:

  • User experience: will it be faster to find documents? Less time spent performing tasks?
  • Platform alignment and integrations.
  • Ease of finding competence.

One-to-one migrations

There is no such thing as a one-to-one migration on a feature level. The new solution will be different; take advantage of that. Don’t bend it over backwards to match what is in the old one.

Use the strengths of the new platform to deliver more value than is currently there. Focus should be on delivering the same capability, adapted and improved.

The big bang theory

Don’t do big bangs. Do a phased migration. It will let you learn from experience and adjust as you go. Plan for, and expect, co-existence. Find the key usage scenarios and migrate one or two of them. Adapt, improve, move on.

Don’t be fooled by the straightforward advice; there is more than one devil in the details here.

My experience is that any migration will be a bumpy ride. However, following these three pieces of advice will mean fewer bumps for the business and more tools to parry them in the project.

Happy migrations!


Enterprise Social: Technology is not the answer. This is the answer.

- “We need an internal Facebook”

This is a very common statement in any organization today. The desire to replicate the success of social collaboration giants like Facebook, Twitter and YouTube is massive. Having people in an organization as engaged in communication as they are on those services is very compelling from a business perspective.

The next question after that is usually:

- “Can technology X give me Facebook?”

“Technology X” is commonly SharePoint 2013, Yammer, NewsGator or Lotus Connections. Several organizations are interested in investing in these applications to bring “Facebook”, “Twitter” or “YouTube” to their organizations.

But why? Are they asking the right questions?

Social collaboration is, after all, not a technology. It is soft values such as communication, engagement and people. To realize those values, technology is not the answer – it is just one of the tools. As such, there are others that are much more important to get right.

Communication Strategy
All features of a social collaboration platform or product are there to enable communication in some form. From micro-blogging to video archives, they are built to let people engage in communication with each other. To ensure that these communications are engaging and valuable for you as a business, the features you enable and create need to have a strategy behind them. A strategy to align them. A strategy to ensure you invest in the right communication tools for your organization.

Useful, Usable, Beautiful
The above title is the catch line for the Avanade design studios. It is the catch line because these three together make up the best user experience for any application. For social collaboration projects it is one of the key components. If you want to engage your people to communicate, the tool needs to bring them immediate value, be easy to use and be visually appealing. Anything less and the threshold for using the tool will be too high and you will fail to engage your people.

Changing behavior
It does not matter if you have the most useful, most usable, most beautiful tool to enable your communication strategy if nobody is using it. Change does not come easy. Change is not automatic. Change will be initiated from a “what’s in it for me” perspective. So you need to plan for change: What training will you have (videos)? What incentives will there be (gamification)? What activities will you drive to ensure adoption (collaboration champions)?

When Facebook and YouTube first started up, they did not have any of these in place. Yet they succeeded in building the largest social collaboration sites in the world. There are several reasons why. The “what’s in it for me” incentives were very compelling, the early adopters were already champions of the internet’s various services, and the value was apparent to them. To replicate the same success, you need to replicate the same setting.

Social collaboration is really valuable for any organization. But value is not objective, it is subjective. Any tool you want people to use needs to be valuable to them and should under no circumstances create pain or frustration. To successfully instigate change in your organization, you really need to plan for it. Ensure you get all three right in your social collaboration project, and it will be as successful as Facebook has been.


Sometimes it is business that needs to understand development

There is always that guy. The business-oriented guy, the guy who can’t understand why a few lines of code can take a whole day to produce. The guy who believes that pair-programming is the equivalent of “pay for two, get one”. This is a story about that guy and how I made him understand.

A few years back I was involved in a project that had the attention of a vice president in a huge enterprise. The project had halted and the VP’s response was to micro-manage developers’ tasks. One of the meetings I was asked to prepare was to explain why a switch in data access technology had to be done. A gruesome task: explaining technology limitations to someone with absolutely no technology background. In the end it succeeded, by turning technology limitations into pure numbers: bugs/LoC, cost of a bug, hours spent on performance tweaking, etc.

But that is not what this post is about. This post is about how I got him to understand that developers are not glorified copywriters with the task of writing as many letters per day as possible:

- “I don’t understand? How can you only produce 100 lines of code in a full day? And that’s with two developers at the same keyboard!”

- “You write business plans, right?”

- “Yes.”

- “And how long is that document, about 30 pages?”

- “Yes?!”

- “I can write 30 pages of text in Word in a day, maybe half a day. How come it takes you weeks to produce the business plan?”

- “Isn’t that obvious? We need to figure out what the business plan is about; the text is just documentation of our thinking.”

- “Exactly.”

From that point on there were no more discussions on lines of code, technical details or design/architectural decisions. From that point on it was only about features and velocity, process and predictability, and the most important feature of them all: delivery.


The benefits of minimizing the centralization of IT

One of the lesser-known ideas and practices behind what has come to be known as SOA was the physical organization of teams into service areas. The basic idea is that there shouldn’t be a centralized “IT department” that manages all the systems; instead, every department should have its own dedicated IT that helps it conduct its business as effectively as possible.

At first glimpse this seems like someone didn’t think it through. After all, why should the finance department handle its own IT? They should focus on finances! This is true, but digging past that first shallow glimpse there are actually really interesting benefits here, none of them technical in nature.

It’s all about understanding the core

Centralized IT departments are often very good at performing their core business: IT. But if IT isn’t the organization’s core business there is a lot to be learned, there is a gap to be bridged, and you really need dedicated IT staff who understand exactly what your department’s function is in the whole. This often leads to IT departments dividing into subgroups with staff specialized in servicing a separate department. The collective knowledge a group like this gathers over the years will often represent a full understanding of what the department actually does. This leads to shorter cycles, quicker feedback loops and more to-the-point implementations.

If you understand the finance department, it will be a lot quicker to understand new requirements. This is a first step toward making IT part of the business and not only a cost of performing it. It also goes a long way toward becoming more cost effective than general development departments ever can be.

It’s all about money streams

As business has evolved, a lot of organizations today can’t conduct it without IT support. Often you hear that the cost of IT is the cost of doing business. In a sense it is (though I’d say that the cost of IT is an investment that will pay its own way if done right, but that’s another discussion).

Add to this that all heads of department have their own budget that they need to maintain and fulfill. All IT projects with a centralized IT department will quickly come to the same discussion: who will pay for this? If there is an investment that needs to be made that the finance department will reap the benefits of, should the cost end up on the IT department’s bottom line?

A lot of organizations try to solve this by charging the departments for new projects and taking a fixed fee for maintenance. This solves part of the problem but creates new grounds for discussions.

When is the project done and when do we go into maintenance mode? Is maintenance just about bug fixes, and if that is the case, what is a bug? Endless discussions. All based on the fact that the managers of each department have to report their budgets.

Much like some discussions between clients and consultants.

If managing the budget was entirely up to the department itself, this would not be an issue. It would be up to the department to make the investments that bring its business forward. All costs associated with its IT support would be visible in its own budget. Manageable in its own budget. There will never be a discussion about investing X to get benefit Y; it will all boil down to the same bottom line, and ROI will be the responsibility of the same manager.

There are some IT functions that might not be feasible for this, like desktop maintenance. Maybe the BizTalk server should be maintained centrally for all departments, etc. But for the system support you need, for any customization or development you need, this is a smooth way to go.

 

Conclusion

Organizing your development into areas of service might not be for everyone. But there are great benefits for those who do: shorter feedback loops for requirements, shorter decision paths, and visualization of actual costs and benefits in the right place; the bottom line of the department actually using the service.


The dreaded “Save(Foo bar)”

For the last year or so my biggest antagonist has been “Save(Foo bar)”. There are usually a lot of things going wrong when that method turns up. I’m not saying that saving stuff to persistent storage is a bad thing, but I’m arguing that the only place it should exist is where data is actually saved: in the persistence/data layer. Don’t put it in your services, don’t put it in your business logic, and for crying out loud; do not base entire behaviors around it.

The source of the ill behavior

When developing software, a user will quite often tell you that she wants a button that will save the changes she has made to her current work. As good developers we implement this button and make the user happy. After all, their happiness is our salary. In a pure data-entry application this is all good, but as soon as we add just a little bit of behavior we need to start pushing the actual saving far, far away. Consider this screen:

In this setting the Save button makes perfect sense. The user changes a couple of things and then commits those changes to the system. Now start to think about more than just storing the values in a database; think about adding behavior to the changes.

Do not throw away the user’s intent!

When a user makes a change to any of these fields, they do so with an intention. Changing the address might mean that the receiving end of an order has changed. Changing the logistics provider might mean that a new one needs to be notified. With a save method, the only way for our business logic to know what to do is to figure it out:


                        
public void Save(Order changedOrder)
{
    if (changedOrder.Address != existingOrder.Address)
        OrderAddressChanged(existingOrder, changedOrder);

    if (changedOrder.Status != existingOrder.Status)
        OrderStatusChanged(existingOrder, changedOrder);

    if (changedOrder.LogisticServiceProvider != existingOrder.LogisticServiceProvider)
        LogisticServiceProviderChanged(existingOrder, changedOrder);
}

                        

And even worse;


                        
public void OrderStatusChanged(Order existingOrder, Order changedOrder)
{
    if (changedOrder.Status == Status.Cancelled)
        ...

                        

Basically, what we’ve done is throw away the user’s intent and the capture of the actual work the user has done. I see this happening a lot in service interfaces (WCF service contracts with a single Save method), in “Managers” (let’s not even go there) and in local services.

The code this will generate a couple of releases down the road will smell in so many scents it’s not even funny to joke about. But to get you started, read up on Open/Closed (a SOLID principle).

 

Enough with the bashing, what’s the solution?

The idea that a user can commit their work in a single push of a button is a good one. But that doesn’t mean that the save needs to be anything more than a button. The actual work under the covers should reflect what the user really wants to happen. Changing the service provider on an order that’s packed should probably issue a message to the old provider not to pick the package up and a message to the new one to actually do so. This kind of behavior is best captured with commands.

Let’s revisit the order form:

In this scenario every red box is actually an intended command from the user. Identified as:

ChangeOrderAddress

ChangeOrderLogisticServiceProvider

ChangeOrderStatus (or actually, CancelOrder, ShipOrder, etc)

Implementing this could either be done using the command pattern or with methods on the order class. “It all depends” on your scenario and preferences.
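As an illustration, a minimal sketch of the command variant, assuming hypothetical provider-notification methods (NotifyPickupCancelled, NotifyPickupRequested) that are not part of the post’s code base:

public class ChangeOrderLogisticServiceProvider
{
    private readonly LogisticServiceProvider newProvider;

    public ChangeOrderLogisticServiceProvider(LogisticServiceProvider newProvider)
    {
        this.newProvider = newProvider;
    }

    public void ExecuteOn(Order order)
    {
        var oldProvider = order.LogisticServiceProvider;
        order.LogisticServiceProvider = newProvider;

        // The user's intent is explicit: tell the old provider not to pick
        // the package up, and the new one to do so.
        oldProvider.NotifyPickupCancelled(order);
        newProvider.NotifyPickupRequested(order);
    }
}

When the user hits save, the UI simply executes the collected commands in order and then lets the persistence layer do the actual saving:

foreach (var command in pendingCommands)
    command.ExecuteOn(order);

orderRepository.Save(order); // Save lives in the persistence layer only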

So what am I really suggesting?

Make sure that your code reflects the intention of the user, not the technical details of how the data is persisted. You will save a lot of pain, grief and hard work by just identifying what the user intends, capturing that in commands or something similar, and executing them in order when the user hits save. And most importantly: SAVE IS A METHOD ON THE PERSISTENCE LAYER, NOT A BUSINESS FUNCTION.


Unity LifeTimeManager for WCF

I really love DI frameworks. One reason is that they allow me to centralize lifetime management of objects and services needed by others. Most frameworks have options to control the lifetime and allow objects to be created as Singletons, ThreadStatics, Transients (a new object every time), etc.

I’m currently doing some work on a project where Unity is the preferred DI framework, and it’s hooked up to resolve all service objects for us.

The challenge

WCF has three options for instances: PerCall, PerSession and Single (http://msdn.microsoft.com/en-us/library/system.servicemodel.instancecontextmode.aspx). In essence this will have the following effect on injected dependencies:

PerCall – All dependencies will be resolved once per message sent to the service.
PerSession – All dependencies will be resolved once per WCF service session.
Single – All dependencies will be resolved exactly once (Singleton).
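For reference, the mode is chosen per service implementation with the ServiceBehavior attribute; a minimal sketch (IContract stands in for your service contract):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class CalculationService : IContract
{
    // one service instance, and one set of resolved dependencies, per call
}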

Now why is this a challenge? It’s expected that the constructor parameters are resolved once per instantiation, isn’t it?

The basis of the challenge is when you want to share instances of dependencies one or more levels down. Consider this code:


                      
public class CalculationService : Contract {
  public CalculationService(IRepository<Rule> ruleRepository,
                            IRepository<Product> productRepository) {..}
}

This would work perfectly fine with the InstanceContextModes explained above, but what if we want to share instances of NHibernate sessions or Entity Framework contexts between the repositories?
The default setting for most DI frameworks is to resolve objects as “transients”, which means once for each object that depends on them.
This is where lifetime management comes into play, by changing how the DI framework shares instances between dependent objects.
Unity has six “sharing options” (from http://msdn.microsoft.com/en-us/library/ff660872(PandP.20).aspx):

TransientLifetimeManager – A new object per dependency.
ContainerControlledLifetimeManager – One object per Unity container instance (including children).
HierarchicalLifetimeManager – One object per Unity child container.
PerResolveLifetimeManager – A new object per call to Resolve; recursive dependencies will reuse objects.
PerThreadLifetimeManager – One object per thread.
ExternallyControlledLifetimeManager – Moves lifetime control outside of Unity.

As you can see, our scenario is missing. We’d like to share all dependencies of some objects across a single service instance, no matter which InstanceContextMode we choose.

The Solution

For Unity there is a good extension point that can help us. Combine that with WCF’s ability to add extensions to instances, and the problem is solved.

First we extend the WCF instance context so it can hold objects created by Unity:


                      
public class WcfServiceInstanceExtension : IExtension<InstanceContext>
{
    public static WcfServiceInstanceExtension Current
    {
        get
        {
            if (OperationContext.Current == null)
                return null;

            var instanceContext = OperationContext.Current.InstanceContext;
            return GetExtensionFrom(instanceContext);
        }
    }

    public static WcfServiceInstanceExtension GetExtensionFrom(
                                              InstanceContext instanceContext)
    {
        lock (instanceContext)
        {
            var extension = instanceContext.Extensions
                                           .Find<WcfServiceInstanceExtension>();

            if (extension == null)
            {
                extension = new WcfServiceInstanceExtension();
                extension.Items.Hook(instanceContext);

                instanceContext.Extensions.Add(extension);
            }

            return extension;
        }
    }

    public InstanceItems Items = new InstanceItems();

    public void Attach(InstanceContext owner)
    { }

    public void Detach(InstanceContext owner)
    { }
}

WcfServiceInstanceExtension, which gets applied to each WCF service instance.

                      

                      
public class InstanceItems
{
    public object Find(object key)
    {
        if (Items.ContainsKey(key))
            return Items[key];

        return null;
    }

    public void Set(object key, object value)
    {
        Items[key] = value;
    }

    public void Remove(object key)
    {
        Items.Remove(key);
    }

    private Dictionary<object, object> Items
                         = new Dictionary<object, object>();

    public void CleanUp(object sender, EventArgs e)
    {
        foreach (var item in Items.Select(item => item.Value))
        {
            if (item is IDisposable)
                ((IDisposable)item).Dispose();
        }
    }

    internal void Hook(InstanceContext instanceContext)
    {
        instanceContext.Closed += CleanUp;
        instanceContext.Faulted += CleanUp;
    }
}

InstanceItems, used by the extension to hold objects created by Unity.
This gives us a nice place to put created objects, and it will also call Dispose on any disposable objects when the instance is closing down.
Now we need to tell Unity to use our shiny new class. This is done by first extending LifetimeManager:

                      

                      
public class WcfServiceInstanceLifeTimeManager : LifetimeManager
{
    private readonly Guid key;

    public WcfServiceInstanceLifeTimeManager()
    {
        key = Guid.NewGuid();
    }

    public override object GetValue()
    {
        return WcfServiceInstanceExtension.Current.Items.Find(key);
    }

    public override void SetValue(object newValue)
    {
        WcfServiceInstanceExtension.Current.Items.Set(key, newValue);
    }

    public override void RemoveValue()
    {
        WcfServiceInstanceExtension.Current.Items.Remove(key);
    }
}

The LifeTimeManager that uses our WCF extension.
All that’s left now is to tell Unity when to use this LifeTimeManager instead of the default. That is done when we register the type:

container.RegisterType<ISession>(new WcfServiceInstanceLifeTimeManager(),
                                 new InjectionFactory(
                                     c => SessionFactory.CreateSession()));
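To illustrate the effect, a hedged usage sketch; RuleRepository and ProductRepository are hypothetical classes that both take an ISession in their constructors:

container.RegisterType<IRepository<Rule>, RuleRepository>();
container.RegisterType<IRepository<Product>, ProductRepository>();

// Within a single WCF service instance, both repositories now receive the
// same ISession; the next service instance gets a session of its own, and
// InstanceItems.CleanUp disposes it when the instance closes.
var service = container.Resolve<CalculationService>();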

In conclusion


So, DI frameworks are powerful for handling dependencies, but sometimes they need a little nudge in the right direction. Custom lifetime management is one of those nudges, and both Unity and WCF help you do that.


Slice up your business logic using C# Extension methods to honor the context

One of my favorite features of C# 3.0 is extension methods. An excellent way to apply some cross-cutting concerns and a great tool for attaching utility functions. Heavily used by the LINQ frameworks and in most utility classes I’ve seen around .NET 3.5 projects. Some common example usages I’ve come across include:

var name = "OlympicGames2010".Wordify();
var attributeValue = typeof(Customer).Attribute(o => o.Status)
                                     .Value();
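Neither of these extensions is part of the framework; as an illustration, a hypothetical Wordify could be sketched like this:

using System.Text.RegularExpressions;

public static class StringExtensions
{
    // Splits a PascalCase identifier into words:
    // "OlympicGames2010".Wordify() -> "Olympic Games2010"
    public static string Wordify(this string @this)
    {
        return Regex.Replace(@this, "(?<=[a-z0-9])([A-Z])", " $1");
    }
}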

Lately I’ve started to expand on the modeling ideas I tried to explain during my presentation at Öredev 2008. It became more of a discussion with James O. Coplien than a presentation, and I was far from done with my own understanding of the ideas and issues I’d identified (there are things to improve in this content). The core idea is pretty simple though:

Not all consumers of an object are interested in the same aspects of that object; it will take on different roles in different contexts.

Let me explain with the simplest example: when building an order system, the aspects of a product that the order system thinks are important are usually not exactly the same aspects that the inventory system values.

Contexts in object models

Eric Evans touches on this in his description of “bounded contexts” (Domain-Driven Design, p. 335), where he stresses the importance of defining contexts where a model is valid and not mixing it into another context. In essence, the model of a product should be duplicated: once in the order context and once in the inventory context.

This is a great principle, but at times it can be too coarse-grained. James Coplien and Trygve Reenskaug have identified this in their work around what they call “DCI architecture”. Rickard Öberg et al have done some work in what they call Qi4j, where they are composing objects from bits and pieces instead of creating full-blown models for each context.

Slicing logic and models using Extension Methods

Let’s get back to the extension methods and see how they can help us slice business logic up in bits and pieces and “compose” what we need for different contexts.

In the code base I’m writing for my TechDays presentation I have a warehouse class that holds stock of items. These items are common to different contexts; they will surface in orders and PLM. One of the features in this application is to find a stock to reserve, given an item. The following code is used to find that stock:

 

return Stock.First(stock => stock.Item == item);

 

Although trivial, this is a business rule for the warehouse. As the warehouse class evolved, this piece of rule would be duplicated in methods like Reserve, Release and Delete. A classic refactoring would be to use Extract Method to move it out and reuse that piece, something like:

private bool Match(Stock stock, ItemInfo item)
{
    return stock.Item == item;
}

...

return Stock.First(stock => Match(stock, item));

 

This is a completely valid refactoring, but honestly we lose some intent: the immediate connection between stock and item is not as explicit, and the lambda expression wasn’t simplified that much.

So let’s Refactor to Slice instead:

public static class ItemInAWarehouseSlices
{
    public static bool Match(this ItemInfo @this, Stock stock)
    {
        return stock.Item == @this;
    }
}

Adding this business rule as an extension method gives us a natural place for the code to live and a mechanism we can use to compose objects differently in different contexts. Importing this extension method class into the Warehouse C# file, ItemInfo will provide the logic needed in that context:

return Stock.First(item.Match);

Adding the rule this way also sprinkles a touch of DSL on it and gives it a semantic meaning, which makes the code make more sense.

Why don’t you just put that method on ItemInfo, you might ask. Well, the answer is simple: ItemInfo is a concept that might be shared across contexts. Contexts that have no grasp of what a Stock is, nor should they. If I added everything I needed to the ItemInfo class for all contexts that use Item, I would be in bad shape. Thus the ideas behind contextual domain models, bounded contexts, DCI and composite objects.
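As an illustration, a hypothetical slice for an order context could look like the following (assuming ItemInfo exposes a Price property, which is not part of the post’s code base):

public static class ItemInAnOrderSlices
{
    public static decimal LineTotal(this ItemInfo @this, int quantity)
    {
        return @this.Price * quantity;
    }
}

Only files that import ItemInAnOrderSlices see LineTotal; the warehouse context stays free of pricing concerns.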

Extend away …

So extension methods have other usages than just extending existing classes with some utility. They are also a powerful tool to slice your logic into composable pieces, which helps you focus on the aspects that are important in the context you are currently working in.

 

So what do you think? Will this help you create clear separations?


Creating a dynamic state machine with C# and NHibernate, Part 2: Adding business rules.

This is the second part of a series started in an earlier post: Creating a dynamic state machine with C# and NHibernate.

In my first post I showed you how to create a state machine, attach it to an entity and then save it using NHibernate. In this post we’ll extend the state machine with the capability of adding business rules that must be fulfilled to allow transitions. These rules will be dynamically added to each state and persisted to the database using NHibernate.

 

Extend the model with business rules using strategy pattern

The first step will be to use an implementation of the Strategy pattern to ensure that our business rules engine is open for extension (thus following the open-closed principle). First we’ll define an interface to use for our business rules; listing 1 shows the definition:

 

public interface IRule
{
    bool IsMetBy(Template entity, State state);
}

listing 1, the IRule interface

The IsMetBy method accepts the entity that the state is attached to, which we’ll be using as data for our rules later. Next we add a list of business rules to the state class:

 

public class State
{
    ...

    private IList<IRule> _transitionRules = new List<IRule>();
    public virtual IList<IRule> TransitionRules
    {
        get { return _transitionRules; }
        set { _transitionRules = value; }
    }

    public virtual bool HasAllTransitionRulesMetBy(Template entity)
    {
        var transitionIsAllowed = true;
        foreach (var rule in TransitionRules)
            transitionIsAllowed &= rule.IsMetBy(entity, this);

        return transitionIsAllowed;
    }
}

listing 2, list of rules on the state class

With this addition every state now holds a list of rules that need to be fulfilled before the state accepts being changed into. We’ve added a method that runs through all the business rules for the state and validates that all of them are met. Thus a call to HasAllTransitionRulesMetBy is added to the template’s ChangeStateTo method:

public void ChangeStateTo(State newState)
{
    if (State.CanBeChangedTo(newState) && newState.HasAllTransitionRulesMetBy(this))
        State = newState;
    else
        throw new InvalidStateTransitionException();
}

listing 3, changes to the ChangeStateTo method

At this point, changing state will run through the list of allowed transitions, functionality we added in the first part, and the list of rules, to make sure the transition is allowed. We honor the open-closed principle by allowing rules to be added in a simple fashion, thus not relying on a lot of refactoring when rules change or get added.

Implementing a rule

To make this state machine meaningful we need to start creating business rules. For convenience I’ve added a base class Rule that implements some properties needed later, but in essence it’s the same as our interface. The first rule will ensure that an entity created from the template has a ScheduledHours property with a minimum number of hours. Listing 4 shows our first rule:

 

public class IsScheduledForAtLeast : Rule
{
    public virtual int ScheduledHours { get; set; }

    protected IsScheduledForAtLeast() {}

    public IsScheduledForAtLeast(int scheduledHours)
    {
        this.ScheduledHours = scheduledHours;
    }

    public override bool IsMetBy(Template entity, State state)
    {
        if (entity.ScheduledHours >= ScheduledHours && state == "Closed")
            return true;

        return false;
    }
}

listing 4, an example rule

In a typical scenario these rules might be a bit more complex, and in part three of this series we’ll look into rules that need more than just the entity state to get their work done. Figure 1 displays the model we’ve built so far:

Figure 1, our model so far

A test to validate this looks something like listing 5:

public static class States
{
    public static State ClosedState = new State("Closed")
    { TransitionRules = new List<IRule>
    { new MinimumAttendanceRule(8), new IsScheduledForAtLeast(4) } };

    public static State OnGoingState = new State("OnGoing");

    public static State OpenState = new State("Open")
    { AllowedTransitions = new List<State> { "Paused", ClosedState } };
}

[Test]
public void It_will_not_allow_state_transition_from_closed_to_open()
{
    var entity = new Entity(States.ClosedState);

    Assert.Throws(typeof(InvalidStateTransitionException),
        () => entity.ChangeStateTo(States.OpenState));
}
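The MinimumAttendanceRule used above is not defined in the post; a minimal sketch, assuming the template exposes a hypothetical Attendees count, could look like this:

public class MinimumAttendanceRule : Rule
{
    public virtual int MinimumAttendance { get; set; }

    protected MinimumAttendanceRule() {}

    public MinimumAttendanceRule(int minimumAttendance)
    {
        this.MinimumAttendance = minimumAttendance;
    }

    public override bool IsMetBy(Template entity, State state)
    {
        // Attendees is an assumed property on the template entity
        return entity.Attendees >= MinimumAttendance;
    }
}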

 

Using NHibernate to persist a template with state and business rules.

So far we can build a state machine that is set up with transition rules and business rules for each state, but only in memory. For this to be meaningful we actually need to persist it as well. For our scenario we want to persist the template, its current state, all allowed state transitions and the rules added to each transition.

An example setup that we need to persist looks like listing 6:

[Test]
public void Save_a_state_with_transition_rules_added()
{
    var savedState = new State("Open")
                         {
                             TransitionRules =
                                 new List<IRule> {
                                     new MinimumAttendanceRule(8),
                                     new IsScheduledForAtLeast(5) }
                         };

    stateRepository.Save(savedState);
    var fetchedState = validationRepository.Get(savedState.Id);

    Assert.That(fetchedState.TransitionRules.Count, Is.EqualTo(2));

    Assert.That(fetchedState.TransitionRules.FirstOrDefault(
        rule => rule is MinimumAttendanceRule), Is.Not.Null);

    Assert.That(fetchedState.TransitionRules.FirstOrDefault(
        rule => rule is IsScheduledForAtLeast), Is.Not.Null);
}

So how do you save something this dynamic to the database? There is no way of telling what rules will be added, and certainly no table structure that will fit. Can we do it? Yes we can. Using inheritance mapping in NHibernate this is very possible. For our scenario we’re using the inheritance type “discriminator column” and a many-to-many relationship between state and rule. The database tables for this will look like figure 2:

figure 2, Table structure 

As figure 2 shows, it is now possible to store every state with its transitions, their rules and any configured value needed (we serialize all values into one column at the moment). We need to update our NHibernate mapping to include the list of rules and all implemented rule types. Listing 7 shows the mapping files for this scenario:

 

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="State" table="States">

    ...

    <bag name="TransitionRules" cascade="all" table="TransitionRules">
      <key column="StateId" />
      <many-to-many column="RuleId" class="Rule" />
    </bag>
  </class>
</hibernate-mapping>

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Rule" table="Rules" abstract="true">
    <id name="Id" type="int">
      <generator class="native" />
    </id>
    <discriminator column="Name" />

    <subclass discriminator-value="IsScheduledForAtLeast"
              name="IsScheduledForAtLeast">
      <property name="ScheduledHours" column="Value" />
    </subclass>
  </class>
</hibernate-mapping>

listing 7, NHibernate mapping

 

 

Summary

With the usage of interfaces and the open-closed principle, we get a flexible way to add rules to our state machine. These rules can easily be added in a user interface, and several templates can be created with different sets of states and business rules. Using some powerful mapping techniques in NHibernate, this is easily persisted as well.

This was part 2 of a three-part series. In the last part we’ll be using dependency injection in our rules to enable more advanced scenarios. I’ll also provide you with a complete end-to-end sample solution.


Architecture considerations: When do I benefit from services?

As a .NET developer it’s becoming increasingly tempting to create service layers for our applications and utilize some of the strengths of Windows Communication Foundation in our solutions. With the power WCF brings to the table and all the messages about SOA in general, it’s easy to get swept up in it all and default to distribution.

In this post I will try to outline some scenarios where a service layer might be beneficial, and alternatives that might sometimes suit those scenarios better.

Other applications need to integrate with you

If there are other applications that want to integrate with you, chances are that services might be your solution. But before you start writing your service layer, you owe it to yourself to think about why the other application is integrating with you.

Pushing data to or from you

If other applications are solely interested in pushing data to or from you, a service might not be the best answer. There are other tools, such as SQL Server Integration Services, that are better suited for data import/export of this kind.

If all you want to do is share some piece of information, like your product catalog, it might even be as simple as exposing an XML document. No need for a service layer. (Although WCF does a good job exposing XML documents with the REST starter kit.)

You want to expose a business process

This is the blockbuster feature of a service. Create a service and expose your business processes. Does this mean you will have to expose all of your business processes as services and have a service layer in your applications? Usually no. Usually it’s enough to create an integration point, an anti-corruption layer or a small service exposing exactly what you want integrated. There is no real need for a full-blown tier separation.

Wait a minute, are there no reasons for the service tier?

Of course there are. There are a couple of really good reasons, most of them tied to the word “tier”. When do I need a separate “tier”? A couple of reasons:

Lightweight UIs like Silverlight or Flash

Advanced business logic is usually heavyweight. It involves a lot of code and usually it doesn’t belong on the client side. For lightweight UIs, or Rich Internet Applications (RIAs), this is very true. You want the full power of the CLR and the .NET framework for your applications, and to get that you’ll need to separate the application into at least two tiers.

Wanting “middleware” in your application

Often there is a need for integration with other systems, orchestration of some sort, or interesting conversation patterns like publish/subscribe. This is not the job for a client but for middleware. Or a “server”. In this case, separating the back end into its own tier is a really good idea.

Summary

So use your service tier wisely. There isn’t one pattern and one usage of it, and often it’s not the only solution or even the best. The extra tier will bring extra complexity, so make sure it will carry its own weight.


Creating a dynamic state machine with C# and NHibernate

In my last post (An architecture dilemma: Workflow Foundation or a hand rolled state machine?) I talked about the discussion around an architectural choice. The conclusion of that discussion was to hand-roll a dynamic state machine. This post is the first of 3 parts explaining the solution we used. In this part we’ll focus on the state machine; in the following two parts we’ll be adding business rules to each state and utilizing a couple of tricks from NHibernate to make those rules rich.

If you are uncertain what the state machine pattern looks like, there is some information here: http://msdn.microsoft.com/en-us/magazine/cc301852.aspx. For the rest of this post I will assume that you have a basic understanding of it.

The requirements for our state machine were that users should be able to add their own states and tie them into the state machine. They should also be able to define the flow, designing the structure of which states can transition to which.

The basic state machine looks something like this:

Since we need to be more dynamic than this, our model turned out more like this instead:

In this model we are using composition instead of inheritance to build our state machine. The list “AllowedTransitions” contains the states that the current one is allowed to transition to.

The method “CanBeChangedInTo” takes a state object and compares it to the list of states, deciding if the transition is allowed. For our scenario this comparison is done by implementing IEquatable<State> and overriding the appropriate operators (==, !=).
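A minimal sketch of those equality members, assuming states compare by name (the post doesn’t show the exact implementation):

public class State : IEquatable<State>
{
    public virtual string Name { get; set; }

    public virtual bool Equals(State other)
    {
        // two states are considered equal when their names match
        return !ReferenceEquals(other, null) && Name == other.Name;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as State);
    }

    public override int GetHashCode()
    {
        return Name == null ? 0 : Name.GetHashCode();
    }

    public static bool operator ==(State left, State right)
    {
        return ReferenceEquals(left, right)
            || (!ReferenceEquals(left, null) && left.Equals(right));
    }

    public static bool operator !=(State left, State right)
    {
        return !(left == right);
    }
}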

This is defined on a template kind of entity; there will be “instances” made out of this template where all attributes are copied onto the instance.

The implementation of the ChangeStateTo method on the template is fairly simple:

Template:
public void ChangeStateTo(State newState)
{
    if (State.CanBeChangedInTo(newState))
        State = newState;
    else
        throw new InvalidStateTransitionException();
}

State:
public bool CanBeChangedInTo(State state)
{
    return AllowedTransitions.Contains(state);
}

Simple, but very powerful, and it allows for the kind of dynamic state transitions our scenario requires.

Adding state transitions is easy as well, since the AllowedTransitions list holds plain State objects: adding a transition is simply a matter of adding the target state to the list, as the sketch below shows.
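A minimal usage sketch, assuming a Template constructor that takes the initial state (the exact signatures are not shown in the post):

var open = new State("Open");
var closed = new State("Closed");
open.AllowedTransitions.Add(closed);    // Open -> Closed is allowed

var template = new Template(open);
template.ChangeStateTo(closed);         // succeeds
template.ChangeStateTo(open);           // throws InvalidStateTransitionException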

Our project uses NHibernate as the persistence engine, and to persist our dynamic state we use a many-to-many bag mapped in XML:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="" namespace="">
  <class name="State" table="States">
    <id name="Id" type="int">
      <generator class="native" />
    </id>

    <property name="Name" length="255" not-null="true" />

    <bag name="AllowedTransitions" table="AllowedStateTransitions">
      <key column="FromState_Id" />
      <many-to-many column="ToState_Id" class="State" />
    </bag>
  </class>
</hibernate-mapping>

Between NHibernate’s many-to-many mapping and the design of the state transition list, it’s easy to start building an application on this. We are not done yet, though. Every state needs more transition rules than just “AllowedTransitions”. The next post will tackle that requirement.
