Posts Tagged Architecture
In the face of frustration.
Posted by Patrik Löwendahl in Column on May 12, 2011
I love to write code. I’ve written code in one form or another for the past 25 years and I’ll probably still be writing code for the upcoming 25. I love the pure power of creation and the freedom to let creativity flow that code gives me. But as in any love story, there are times when it just frustrates the hell out of me.
I have a confession. I’m not proud of this, I know it’s morally wrong and I would never do this in any other setting; I’ve taken on a mistress.
My mistress is business value. Not that code can’t deliver business value, but sometimes other options can deliver it more quickly, and in the face of frustration I’ve started to indulge in those options.
In the past I’ve readily accepted the blame for throwing code at most problems, trying to deliver features at a rapid, agile pace. Often and early. Sometimes code is just not as rapid as I want.
Now if only the platforms I choose to install to meet business needs quickly weren’t so frustrating to code for…
Standard products – Can’t live without them, can’t live with them.
The dreaded “Save(Foo bar)”
Posted by Patrik Löwendahl in Architecture, design patterns on September 24, 2010
For the last year or so my biggest antagonist has been “Save(Foo bar)”. There are usually a lot of things going wrong when that method turns up. I’m not saying that saving stuff to persistent storage is a bad thing, but I am arguing that the only place it should exist is where data is actually saved: in the persistence/data layer. Don’t put it in your services, don’t put it in your business logic, and for crying out loud, do not base entire behaviors around it.
The source of the ill behavior
When developing software, a user will quite often tell you that she wants a button that saves the current changes she has made to her current work. As good developers we implement this button and make the user happy. After all, their happiness is our salary. In a pure data-entry application this is all good, but as soon as we add just a little bit of behavior we need to start pushing the actual saving far, far away. Consider this screen:
In this setting the Save button makes perfect sense. The user changes a couple of things and then commits those changes to the system. Now start to think about more than just storing the values in a database; think about adding behavior to the changes.
Do not throw away the user’s intent!
When a user makes a change to any of these fields, they do so with an intention. Changing the address might mean that the receiving end of an order has changed. Changing the logistics provider might mean that a new one needs to be notified. With a save method, the only way for our business logic to know what to do is to figure it out.
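The sort of thing I mean looks like this. A hypothetical sketch, assuming an Order with simple string properties and an illustrative repository; none of this is code from a real system:

public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void Save(Order changed)
    {
        // The user's intent is gone, so we reverse-engineer it
        // by diffing the stored state against the incoming state.
        Order original = repository.Get(changed.Id);

        if (original.Address != changed.Address)
            NotifyReceivingEndChanged(changed);

        if (original.LogisticsProvider != changed.LogisticsProvider)
            NotifyLogisticsProviders(original, changed);

        repository.Update(changed);
    }

    private void NotifyReceivingEndChanged(Order order) { /* ... */ }
    private void NotifyLogisticsProviders(Order before, Order after) { /* ... */ }
}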
And even worse:
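For instance, pushing the guessing out to every caller through flags. Again a purely illustrative sketch, continuing the hypothetical service above:

// Every new behavior grows the signature another flag,
// and every caller has to remember to set them correctly.
public void Save(Order changed, bool addressChanged, bool providerChanged, bool statusChanged)
{
    if (addressChanged) NotifyReceivingEndChanged(changed);
    if (providerChanged) NotifyLogisticsProviders(changed);
    if (statusChanged) HandleStatusTransition(changed);

    repository.Update(changed);
}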
Basically what we have done is throw away the user’s intent and any capture of the actual work the user has done. I see this happening a lot in service interfaces (WCF service contracts with a single Save method), in “Managers” (let’s not even go there) and in local services.
The code this will generate a couple of releases down the road will smell in so many scents that it’s not even funny to joke about. But to get you started, read up on Open/Closed (a SOLID principle).
Enough with the bashing, what’s the solution?
The idea that a user can commit their work with a single push of a button is a good one. But that doesn’t mean that save needs to be anything more than a button. The actual work under the covers should reflect what the user really wants to happen. Changing the logistics provider on an order that is already packed should probably issue a message to the old provider not to pick the package up, and a message to the new one to actually do so. This kind of behavior is best captured with commands.
Let’s revisit the order form:
In this scenario every red box is actually an intended command from the user, identified as:
ChangeOrderAddress
ChangeOrderLogisticServiceProvider
ChangeOrderStatus (or actually, CancelOrder, ShipOrder, etc)
Implementing this could mean using the command pattern or methods on the order class. “It all depends” on your scenario and preferences.
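A minimal sketch of the command route might look like this, assuming Order and Address types exist; the type names are mine, for illustration only:

using System.Collections.Generic;

public interface ICommand
{
    void Execute();
}

public class ChangeOrderAddress : ICommand
{
    private readonly Order order;
    private readonly Address newAddress;

    public ChangeOrderAddress(Order order, Address newAddress)
    {
        this.order = order;
        this.newAddress = newAddress;
    }

    public void Execute()
    {
        // The intent is explicit, so the domain behavior
        // (e.g. notifying the receiving end) lives where it belongs.
        order.ChangeAddress(newAddress);
    }
}

public class OrderForm
{
    private readonly Queue<ICommand> pending = new Queue<ICommand>();

    public void AddressEdited(Order order, Address newAddress)
    {
        pending.Enqueue(new ChangeOrderAddress(order, newAddress));
    }

    // Save is just a button: execute the captured intents, in order.
    public void Save()
    {
        while (pending.Count > 0)
            pending.Dequeue().Execute();
    }
}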
So what am I really suggesting?
Make sure that your code reflects the intention of the user, not the technical details of how the data is persisted. You will save yourself a lot of pain, grief and hard work by identifying what the user intends, capturing that in commands or something similar, and executing them in order when the user hits Save. And most importantly: SAVE IS A METHOD ON THE PERSISTENCE LAYER, NOT A BUSINESS FUNCTION.
NDC2010: Greg Young – 7 reasons why DDD projects #fail
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
#1 Lack of intent
You build a system but you do not find out what the intention of the user is. You can see it in domain models that are empty of business logic. There will be only four verbs, CRUD, which tell you nothing about what the user wanted to do.
#2 Anemic Domain Model
A model that looks and smells like a real domain model, but when you look for behavior you can’t find it. You have domain services that become transaction scripts on top of your domain models. It’s not an anti-pattern, but it’s not DDD either. Anemic domain models work well for teams with little OO skill on a simple model, but usually they are bad.
#3 DDD-Lite
Using DDD as a pattern language: you do not define contexts, you just implement patterns. If you are a .NET developer and you are running down this path, stop. You either need to do real DDD or something simpler; the middle road is problematic. DDD-Lite will give you all of the overhead and none of the benefits.
Naming things is really important; the value is not in the domain model itself. The domain model is only a representation of the ubiquitous language, so if the language is not there, the competitive advantage will not surface.
#4 Lack of isolation
Part of why building domain models is expensive is that we want them isolated from everything else. But without that isolation there will be abstraction leakage left and right.
For example: we might get back 300 fields in our XML but only use two of them. Our domain model should only have those two.
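As a sketch of what that isolation can look like (the names are invented for the example, not from the talk):

using System.Xml.Linq;

// Anti-corruption mapping: the integration XML carries 300 fields,
// but the domain model keeps only the two it actually uses.
public class Customer
{
    public string Name { get; private set; }
    public string Email { get; private set; }

    public static Customer FromIntegrationXml(XElement message)
    {
        return new Customer
        {
            Name = (string)message.Element("Name"),
            Email = (string)message.Element("Email")
        };
    }
}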
Another example: a unit of work (UoW) is really an infrastructure concern, so why would you bring it into the domain? If you have your aggregate boundaries right, you don’t need a unit of work.
#5 Ubiquitous what?
The ubiquitous language is only ubiquitous within a context. If you aren’t using bounded contexts, you will fail at defining the language. The way to find contexts is to find places where words have different meanings. We need to break things apart and focus on our context boundaries.
#6 Lack of refinement
Most projects never come back to the domain model to do any refinement. The team learns more about the domain but does not go back and update the model with the new knowledge. A year into the project you will be translating the model for the domain experts, and the ubiquitous language will be lost.
Since the domain model is a tool to analyze the domain and solve problems, it has to be clean and reflect the level of understanding we have of the domain.
#7 Proxy Domain Expert (Business analyst)
With each level of translation a little more gets lost. If we use a proxy domain expert, what’s really happening is that we have two ubiquitous languages: one between the BA and the real domain expert, and one between the BA and the developers. What are the chances they are the same?
A BA shouldn’t sit in the middle; they should act as a facilitator, helping developers and domain experts communicate and get closer to each other.
Reflections
I’ve been in projects where all seven of these failures have emerged in one way or another. It ties into what Eric Evans has been talking about a lot today, and into most of the SOLID principles. Make sure you have clear contexts where models make sense and a clear separation of concerns, and you’ll be in a good starting position.
I also reflect on the fact that most failures are rooted in communication and in how what gets communicated is expressed. So much in software development is really not about technology, and yet, engineers that we are, we try to make it about exactly that.
NDC 2010: Eric Evans – Folding together DDD into Agile
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
One of the most puzzling emails Eric has received was one claiming that his book really proved that up-front design was important. By and large this is a misconception about how modeling happens. A tremendous amount of knowledge comes from actually implementing the software: you have the most insight at the end of a project, and the most ignorance at the beginning.
On the other hand, Agilistas often declare that modeling isn’t necessary at all, which is a misconception about the importance of modeling. In short, how important it is depends on what your goal is and what you think matters.
Different goals, different requirements on modeling
- If your goal is to just complete the next story, modeling isn’t that important. The span of your focus is just that story, and a model doesn’t make sense. A lot of agile has this focus, in Eric’s opinion.
- If your goal is to complete a releasable set of stories with an acceptable level of bugs, some modeling will be required. Each story has its own impact on the code, and for the stories to behave well together there has to be some design and thought.
- If your goal is to deliver a release that the team can continue to extend in the next release, you will need a reasonable amount of modeling and design. The first release won’t see a huge difference, but the second release will come out faster.
- If your goal is to deliver a clear and cohesive user experience, you’ll need a clear underlying model, or it will be really hard to put a well-designed user interface on top of it.
Modeling and TDD
With a green bar, TDD will give you an answer. It’s a correct answer, but the question is: is it the best model that delivers that answer? It turns out that finding the right answer is not quite good enough; we need to find the right answer in a way that makes sense to a business person.
DDD and The Agile Process
Eric introduced a modeling sub-process he calls the “Whirlpool”, which helps define when and where modeling happens:
It’s currently a draft and you can read more about it at http://domainlanguage.com/processdraft/
Reflections
It’s really interesting to see how Eric tackles both the idea that modeling should be done up front and the agile notion of doing very little modeling, and does so with an explicit, explained process. I don’t fully grasp what all the parts of the process do or how they fit together, but I for one will follow its progress and try to apply it in the next scrum/kanban project I get into.
NDC 2010: Eric Evans – What I learned since the book
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
This was one of the most rewarding sessions for me. Eric Evans explained what he has picked up and learned since he wrote the book: which parts he realized were more important than he initially thought, and which parts had been missing.
A missing building block: Domain Events
Something happened that domain experts care about.
What would make this clearer, a property or an event? Which would be easier to hook into?
In his book, events weren’t covered at all. In the DDD movement today, though, events are carefully explored and popular for expressing things that happen that domain experts care about. They lead to clearer and more expressive models: instead of adding more state, you communicate the state shift to other objects. This in turn opens up architectural options that weren’t available before (a small sketch follows the list below):
- Events representing state changes of entities (making the model clearer).
- Decoupling subsystems with event streams. Instead of creating tight coupling between subsystems, you just send an event to tell anyone who might be interested what’s going on.
- Enabling high-performance systems. Evans stated that before he saw what Greg Young did with CQRS and event streams, he didn’t think DDD was suitable for these kinds of systems. But with this take, it works.
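To make the idea concrete, here is a minimal sketch of my own (not Evans’s code): instead of adding yet another status flag, the entity raises an event that interested parties can subscribe to.

using System;

// A domain event: something happened that domain experts care about.
public class OrderShipped
{
    public OrderShipped(Guid orderId, DateTime shippedAt)
    {
        OrderId = orderId;
        ShippedAt = shippedAt;
    }

    public Guid OrderId { get; private set; }
    public DateTime ShippedAt { get; private set; }
}

public class Order
{
    public Guid Id { get; private set; }

    // Subscribers (other aggregates, subsystems, an event stream)
    // hook in here instead of polling a status property.
    public event Action<OrderShipped> Shipped = delegate { };

    public void Ship()
    {
        // Communicate the state shift instead of only mutating state.
        Shipped(new OrderShipped(Id, DateTime.UtcNow));
    }
}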
What he learned about Aggregates
Aggregates are there to create a consistency boundary. In the book they are explained as a suitable boundary for transactions, distribution and concurrency. Over the years it has turned out that they are also suitable as boundaries for properties and invariants. This ties into the message he delivers over and over and over again: there is always more than one model.
Things learned about strategic design
There are a couple of concepts explained in strategic design, and some have turned out to be more important than others. Eric emphasized that Core Domain, Context Map and Bounded Context are really important, but Large-Scale Structure hasn’t come up too often.
Context mapping, on the other hand, turned out to be much more important than he thought in the beginning. There are always multiple models; we need to accept this and focus on the mapping between them instead of trying to make one model that fits all. The thing to remember is to be able to clearly communicate in which context a model is viable, and to have easy ways to communicate which context we are talking in.
This is where bounded context shines, they describe the conditions under which a particular model applies.
Missing Strategic Design concepts
Partners
Partners work on mutually dependent contexts in cooperation. This means that you share the responsibility for a context, and the teams will either succeed together or fail together.
Big Ball of Mud
Sometimes you just have to give up. The Big Ball of Mud (BBoM) is the notion of code in such bad shape that trying to apply DDD to it doesn’t make sense at all. Michael Feathers talks at length about how to handle these scenarios. The attitude within this boundary: I’m not going to do any fancy stuff, because it won’t work.
Object – relational
Storage should be handled as a separate context and thus a separate model. Use the strengths of relational modeling and don’t try to fit your objects directly into tables and relations. This makes sense when you think about it: if you represent a piece of data with different models in separate contexts, why would storage be any different?
Eric also mentioned that the work being put into NoSQL is very interesting for DDD practitioners, since storage is just another context; if that context can be expressed more easily and stay more out of the way of other contexts, that is a good thing.
Reflections
I really liked this talk, much because it validates thoughts I have had (and have imposed on my team) around “one model to rule them all”. There will always be different representations of a piece of data, and Evans confirms this with very convincing rhetoric, especially when it comes to putting the database in a separate context and thinking of it as a boundary.
It was also interesting to hear Eric’s thoughts on events and how they have become first-class citizens in Domain-Driven Design. The trend in both design and architecture today is to use anonymous push via events and queues, which is actually the driver behind a not-yet-published CodePlex project that helps in setting up domain events and/or pushing things out in integration.
Watch this space for updates on that.
NDC2010: Kevlin Henney – Strategies against Architecture
Posted by Patrik Löwendahl in NDC2010 on June 16, 2010
When I first saw Kevlin’s session description I was very skeptical; I had initially chosen another session for this slot, but that speaker had an emergency so I decided to go. The reason for my skepticism was that the session description rather suggested an argument of “beat up the architect”. The session was nothing of the sort; it was very good.
There is always an architecture and one or more architects
Kevlin argued that there is always an architecture in your system. The big difference lies in how you think about it or come about it, and who is responsible for it. In smaller teams, and in some systems, there is no need for a designated “architect”, while in others it makes sense to have a person who focuses on the big picture and tries to hold all the pieces together. In his view, this architect should be involved in implementing the architecture in code, but not be put on the critical path. That is the only way for an architect to see and feel the pain of the decisions made.
Technical debt
Technical debt is something that all systems have, more or less. The big difference is in how you handle it and how it came about. He showed a four-quadrant diagram categorizing how debt usually comes about:
|             | Reckless                          | Prudent                                                    |
| Deliberate  | “We don’t have time for design”   | “We must ship now and will deal with consequences later”   |
| Inadvertent | “What’s layering?”                | “Now we know how we should have done it”                   |
It’s quite obvious which of these reasons are saner than the others. In Kevlin’s opinion, the only time technical debt is OK is when you have a process for handling bugs and technical debt directly after release, so they don’t stack up indefinitely.
Patterns
In his talk, Kevlin also discussed patterns and how they can get in the way of a good architecture. He talked about a workshop they had run to find patterns that resurfaced in more places than one in their own software, and to learn from that. He argued that the purpose of patterns is to “uncover better ways of developing software by seeing what others have already done”. A view I really like and hope I already live by: patterns are there to help you cut through the woods and get to the meat, not to win a code-beauty contest.
In summary
Kevlin’s view of an architect was more that of a leader: a leader who gives people a sense of progress and helps deliver software of good quality. It’s a healthy position. I liked his talk; it gave me inspiration, reinforced how I see my own role at Sogeti as Lead Solution Architect for the Microsoft Practice, and will certainly color how I work in that role. Thanks Kevlin.
NDC2010: Chris Sells on Data
Posted by Patrik Löwendahl in NDC2010 on June 16, 2010
First session over: Chris Sells on data. He kicked in a lot of open doors, and tried to sell the idea that reports of M’s and Oslo’s death are exaggerated and that they will be the next big thing. It’s over, let it go.
Chris’s position on data
Chris started off by stating his position on data: it can be stored in many forms, graphs, trees or tables, but as it turns out we as an industry more often than not revert to tables, since they bring the most general utility of the three. Later in the talk he spoke about NoSQL and how a lot of those technologies solve interesting problems, often around scale, but he warned the audience (engineers that they are) against thinking the new shiny toy comes without flaws or drawbacks. Every tool has its place in the ecosystem.
Data for everyone, really everyone
An interesting point he made, though, one that sends chills down my spine, is that the availability of and access to data is changing. It’s not the change itself that gives me the creeps; it’s how he and the team he works for envision it. Chris drew a parallel with Excel, how good it was and how it allowed everyone to be a “programmer”; his vision is that with Microsoft’s OData and things like Excel PowerPivot, everyone will be able to query data and put it into their programs. As if there isn’t enough Excel mess to clean up in the world?! But hey, at least it’s consultant-friendly.
Perspective
Chris concluded that how we think about data is changing, that how we expose and get exposed to data is changing, and that no matter what we do, data will be what’s important (he also said that behavior was “the 90’s”, meh?). I’d agree that data is important: how we store it is important and how we access it is important. But data isn’t just there to be entered, read or drawn diagrams from. A huge portion of data is there to support business processes and make them easier. Excel doesn’t help with that, and neither does OData (and certainly not M). So even with all these new shiny toys Microsoft will be putting out, we’ll still build our software the way we used to. Just with more options.
Common Service Host for Windows Communication Foundation
Posted by Patrik Löwendahl in WCF on June 14, 2010
After having to write new instance providers, host factories and service behaviors for almost every new WCF project, I decided to write a simple reusable component for WCF and dependency injection and put it on CodePlex, so that I never have to write that plumbing again.
The idea is simple: when creating service hosts you more often than not want a centralized controlling factory that handles your dependency wiring and the lifetime management of those dependencies. WCF requires you to add custom code to its initialization extension points to get this.
Enter the Common Service Host. Based on the Common Service Locator interface and model, it allows a single library, with classes for any DI framework, to automatically wire up your service instances. From the CodePlex documentation:
Example host configuration:
public class AppContainer : UnityContainerProvider
{
    public void EnsureConfiguration()
    {
        // Wire up the service's dependencies here.
        UnityContainer.RegisterType<IService1, Service1>();
    }
}
Example self-hosted usage:
using (var host = new CommonServiceHost<AppContainer>())
{
    host.Open();
}
Example usage for a .svc file:
<%@ ServiceHost Language="C#" Service="Sogeti.Guidelines.Examples.Service1" Factory="Sogeti.Guidelines.WCF.Hosting.CommonServiceHostFactory`1 [[Sogeti.Guidelines.Examples.AppContainer, Sogeti.Guidelines.Examples.Service]], Sogeti.Guidelines.WCF.Hosting" %>
Providers for Unity and Spring.NET are included in the current release.
Get your copy here: http://commonservicehost.codeplex.com
Architecture considerations: When do I benefit from services?
Posted by Patrik Löwendahl in Architecture, WCF on September 28, 2009
As a .NET developer it’s becoming increasingly tempting to create service layers for our applications and utilize some of the strengths of Windows Communication Foundation in our solutions. With the power WCF brings to the table, and all the messaging around SOA in general, it’s easy to get swept into it all and default to distribution.
In this post I will try to outline some scenarios where a service layer might be beneficial, and alternatives that might sometimes suit those scenarios better.
Other applications need to integrate with you
If there are other applications that want to integrate with you, chances are that services might be your solution. But before you start writing your service layer, you owe it to yourself to think about why the other application is integrating with you.
Pushing data to or from you
If other applications are solely interested in pushing data to you or pulling data from you, a service might not be the best answer. There are other tools, such as SQL Server Integration Services, that are better suited for data import/export of this kind.
If all you want to do is share some piece of information, like your product catalog, it might even be as simple as exposing an XML document. No need for a service layer. (Although WCF does a good job exposing XML documents with the REST Starter Kit.)
You want to expose a business process
This is the blockbuster feature of a service: create a service and expose your business processes. Does this mean you have to expose all of your business processes as services and have a service layer in your applications? Usually not. Usually it’s enough to create an integration point, an anti-corruption layer or a small service exposing exactly what you want integrated. There is no real need for a full-blown tier separation.
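As an illustration, a small integration point exposing a single process could be as simple as this; the contract and type names are hypothetical:

using System.Runtime.Serialization;
using System.ServiceModel;

// One focused contract: expose exactly the process you want integrated.
[ServiceContract]
public interface IOrderSubmission
{
    [OperationContract]
    SubmissionReceipt Submit(SubmitOrder order);
}

[DataContract]
public class SubmitOrder
{
    [DataMember] public string ProductNumber { get; set; }
    [DataMember] public int Quantity { get; set; }
}

[DataContract]
public class SubmissionReceipt
{
    [DataMember] public string OrderNumber { get; set; }
}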
Wait a minute, are there no reasons for a service tier?
Of course there are. There are a couple of really good reasons, most of them tied to the word “tier”. When do I need a separate tier? A couple of reasons:
Lightweight UIs like Silverlight or Flash
Advanced business logic is usually heavyweight. It involves a lot of code, and it usually doesn’t belong on the client side. For lightweight UIs, or Rich Internet Applications (RIAs), this is especially true. You want the full power of the CLR and the .NET Framework for your application, and to get that you’ll need to separate the application into at least two tiers.
Wanting “middleware” in your application
Often there is a need for integration with other systems, orchestration of some sort, or interesting conversation patterns like publish/subscribe. This is not a job for a client but for middleware, or a “server”. In this case, separating the back end into its own tier is a really good idea.
Summary
So use your service tier wisely. There isn’t one pattern and one usage for it, and often it’s not the only solution, or even the best one. The extra tier will bring extra complexity, so make sure it carries its own weight.
An architecture dilemma: Workflow Foundation or a hand-rolled state machine?
Posted by Patrik Löwendahl in Architecture, Methodology on September 8, 2009
Workflow Foundation is an interesting piece of technology, and for a recent architectural decision in a project I had time to examine the pros and cons of WF for a particular challenge.
This sprint, a story came up that will give super-users of a system the ability to define new states in a state machine and attach business rules for state transitions dynamically, through a user interface. These custom states are then attached to entities in the system.
Workflow Foundation is an excellent engine for this kind of flexibility: drop in a couple of custom activities and just create a new state machine each time you need changes. Well, that’s what it says on the box, but is it really that simple?
On the execution side of things, WF is an excellent choice: a good engine with a lot of built-in functionality. But what about the end-user side? Allowing users to easily define new states, attach rules to each state and attach the states to entity templates?
One thing that WF isn’t is user friendly. So what would it take to make WF user friendly? A custom designer that emits XOML, some training in the WF designer and in process orientation, and a container to run the state machine in. That is a lot of work.
WF is powerful, but when users are involved its weaknesses quickly become expensive.
A simpler solution, not as powerful on the execution side, is to handcraft a state machine and store the state in a simple table or two. Utilize the state machine pattern with a touch of the strategy pattern and you can come a long, long way.
So since the user experience was the top priority in our scenario, and we really only needed the state machine functionality, the decision more or less made itself. The first iteration was handcrafted, utilizing NHibernate to store some state in a database.
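To give a flavor of the approach before that post, here is a stripped-down sketch of my own; the real implementation differs, and the transition rules would be loaded from the database rather than hard-coded:

using System;
using System.Collections.Generic;

public class State
{
    public string Name { get; set; }
}

// A transition pairs two states with a rule (the strategy part):
// super-users can effectively add rows like this at runtime via the UI.
public class Transition
{
    public State From { get; set; }
    public State To { get; set; }
    public Func<Entity, bool> Rule { get; set; }
}

public class Entity
{
    public State CurrentState { get; set; }
}

public class StateMachine
{
    private readonly IList<Transition> transitions;

    public StateMachine(IList<Transition> transitions)
    {
        this.transitions = transitions;
    }

    // Move the entity to the target state if a matching transition
    // exists and its business rule allows it.
    public bool TryMove(Entity entity, State target)
    {
        foreach (var transition in transitions)
        {
            if (transition.From.Name == entity.CurrentState.Name
                && transition.To.Name == target.Name
                && transition.Rule(entity))
            {
                entity.CurrentState = target;
                return true;
            }
        }
        return false;
    }
}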
So, WF does solve a lot of things. But it comes with a cost and increased complexity. If you aren’t using WF’s full potential, chances are it’s too expensive for you.
My next post on this subject will present the solution in code.