Archive for category Methodology
Platform migrations – not really like a flock of sparrows.
Posted by Patrik Löwendahl in Architecture, Methodology on November 22, 2012
Migrations. Moving from one place to another. For birds in the Nordics it is as simple as flying south for the winter. For the client I am currently flying to, the trip will be a little more complicated.
In about an hour I will be landing in Brussels to share experiences, thoughts and ideas on migrating away from a global and distributed IBM Lotus Notes/Domino solution to a new platform.
There are quite a few things to think about, whether the migration is to an entirely new platform or just to a newer version of the current one.
Here are three of the things I will be sharing today:
Business case
Building a business case on mere cost cuts for licenses and hardware will most probably not motivate a migration. The return on investment in pure financials after a migration project will take years to realize. The business case only starts making sense when you add qualitative values on top of the quantitative ones. Things like:
- User experience: will it be faster to find documents? Will less time be spent performing tasks?
- Platform alignment and integrations.
- Ease of finding competent people.
One-to-one migrations
There is no such thing as a one-to-one migration on a feature level. The new solution will be different; take advantage of that. Don’t bend it over backwards to mimic what was in the old one.
Use the strengths of the new platform to deliver more value than is currently there. The focus should be on delivering the same capability, adapted and improved.
The big bang theory
Don’t do big bangs. Do a phased migration. It will let you learn from experience and adjust as you go. Plan for, and expect, co-existence. Find the key usage scenarios and migrate one or two of them. Adapt, improve, move on.
Don’t be fooled by how straightforward this advice sounds; there is more than one devil in the details here.
My experience is that any migration will be a bumpy ride. Following these three pieces of advice, however, will mean fewer bumps for the business and more tools in the project to parry them with.
Happy migrations!
Sometimes it is business that needs to understand development
Posted by Patrik Löwendahl in Architecture, Methodology, People on September 10, 2012
There is always that guy. The business-oriented guy, the guy who can’t understand why a few lines of code can take a whole day to produce. The guy who believes that pair programming is the equivalent of “get one, pay for two”. This is a story about that guy and how I made him understand.
A few years back I was involved in a project that had the attention of a vice president in a huge enterprise. The project had stalled, and the VP’s response was to micro-manage the developers’ tasks. One of the meetings I was asked to prepare for was about explaining why a switch in data access technology had to be made. A gruesome task: explaining technology limitations to someone with absolutely no technology background. In the end it succeeded, by turning technology limitations into pure numbers: bugs per line of code, the cost of a bug, hours spent on performance tweaking, and so on.
But that is not what this post is about. This post is about how I got him to understand that developers are not glorified copywriters with the task of writing as many letters per day as possible:
- “I don’t understand. How can you only produce 100 lines of code in a full day? And that’s with two developers at the same keyboard!”
- “You write business plans, right?”
- “Yes.”
- “And how long is that document, about 30 pages?”
- “Yes?!”
- “I can write 30 pages of text in Word in a day, maybe half a day. How come it takes you weeks to produce the business plan?”
- “Isn’t that obvious? We need to figure out what the business plan is about; the text is just documentation of our thinking.”
- “Exactly.”
From that point on there were no more discussions on lines of code, technical details or design/architectural decisions. From that point on it was only about features and velocity, process and predictability, and the most important feature of them all: delivery.
The benefits of minimizing the centralization of IT
Posted by Patrik Löwendahl in Architecture, Methodology on November 10, 2010
One of the lesser known ideas and practices behind what has come to be known as SOA was the physical organization of teams into service areas. The basic idea is that there shouldn’t be a centralized “IT department” managing all systems on everyone’s behalf; instead, every department should have its own dedicated IT that helps it conduct its business as effectively as possible.
At first glance this seems like someone didn’t think things through. After all, why should the finance department handle its own IT? They should focus on finances! This is true, but digging past that first shallow glance, there are actually some really interesting benefits here, none of them technical in nature.
It’s all about understanding the core
Centralized IT departments are often very good at performing their core business: IT. But if IT isn’t the organization’s core business, there is a lot to be learned and a gap to be bridged, and you really need dedicated IT staff who understand exactly what your department’s function is in the whole. This often leads to IT departments dividing into subgroups, with staff specialized in servicing a particular department. The collective knowledge such a group gathers over the years will often represent a full understanding of what the department actually does. This leads to shorter cycles, quicker feedback loops and more to-the-point implementations.
If you understand the finance department, it will be a lot quicker to understand new requirements. This is a first step towards making IT part of the business and not only a cost of performing it. It also goes a long way towards becoming more cost effective than a general development department ever can be.
It’s all about money streams
As business has evolved, a lot of organizations today can’t conduct their business without IT support. You often hear that the cost of IT is the cost of doing business. In a sense it is (though I’d say that the cost of IT is an investment that will pay its own way if done right, but that’s another discussion).
Add to this that all department heads have their own budget that they need to maintain and fulfill. Every IT project with a centralized IT department will quickly arrive at the same discussion: who will pay for this? If an investment needs to be made that the finance department will reap the benefits of, should the cost end up on the IT department’s bottom line?
A lot of organizations try to solve this by charging the departments for new projects and taking a fixed fee for maintenance. This solves part of the problem but creates new grounds for discussion.
When is the project done, and when do we go into maintenance mode? Is maintenance just about bug fixes, and if that is the case, what is a bug? Endless discussions. All based on the fact that the managers of each department have to report on their budgets.
Much like some discussions between clients and consultants.
If managing the budget were entirely up to the department itself, this would not be an issue. It would be up to the department to make the investments that bring their business forward. All costs associated with their IT support would be visible in their own budget, and manageable in their own budget. There would never be a discussion about investing X to get benefit Y; it would all boil down to the same bottom line, and ROI would be the responsibility of the same manager.
There are some IT functions for which this might not be feasible, like desktop maintenance; maybe the BizTalk server should be maintained centrally for all departments, and so on. But for the system support you need, and for any customization or development you need, this is a smooth way to go.
Conclusion
Organizing your development into areas of service might not be for everyone. But there are great benefits for those who do: shorter feedback loops for requirements, shorter decision paths, and visualization of actual costs and benefits in the right place; the bottom line of the department actually using the service.
Duoblog: Everybody wants choices but nobody wants to make a choice
Posted by Patrik Löwendahl in Column, Methodology, People on October 8, 2009
Johan Lindfors of Microsoft and I discussed on MSN the other day the growing opinion that software development and the .NET Framework are getting too complex. He suggested that we write a duoblog about it, an initiative started by Chris Hedgate. Don’t miss Johan’s view on the same subject here: http://blogs.msdn.com/johanl/archive/2009/10/08/duoblog-everybody-wants-choices-but-nobody-wants-to-make-a-choice.aspx
Over the last year or so I have read the Swedish magazine Computer Sweden (in Swedish), listened to developers on shows like DotNetRocks, and had discussions with several developers who have all had similar concerns about the future: “Software development is too complex, .NET is too complex. There is just too much to learn, too many choices I have to make.” This makes me a bit sad and frustrated at the same time. This is why.
Why are there all these choices in .NET?
Let’s turn the clock back a little bit. The year is 1999, and Visual Basic 6 and classic ASP are in their prime. While developers on this platform are building e-commerce sites and line-of-business applications with some success, they are still missing key components, and Microsoft plans to fill that gap with a new platform: .NET.
In 2001, .NET becomes a tremendous success. Advanced applications can be built more easily than before, and developers are satisfied, for the moment. With a better understanding of the .NET Framework, developers soon see even more opportunities where software can help businesses, and soon they crave more. This is only natural; with every technological advance we make, we look to the horizon for the next.
Microsoft continues to put out new functionality and add value to the .NET platform, trying to meet all the demands that arise. They learn a lot during this process, and in some cases they decide that old parts of the framework, like ASMX web services, won’t cut it going forward. They get replaced by newer and better technology. Being a good tool vendor, though, Microsoft leaves the old technology in the stack for backwards compatibility, not to be chosen over the new technology.
It’s now 2009, and Microsoft is very soon launching a new version of the platform, .NET 4.0 and Visual Studio 2010, with even more changes and choices, and there is no doubt there will be even more in the future. All in response to customer feedback.
In the meantime, business has changed.
During these 10 years, while the development platforms have evolved and developers have been given more tools, business has changed too. In 1999, IT and software were considered a cost of doing business. In 2009, for many companies, IT and software are their business. I’m not only talking about companies that build software or sell consultants; I’m talking about all sorts of businesses. Businesses and their view of IT have changed, driven by the same mechanics as the development platform: for every advancement in business process support by software, they want more.
Today there are higher expectations of software than there were 10 years ago. Microsoft is trying to help us developers meet these expectations by providing us with the tools we need: higher abstraction layers, more automation, and frameworks that solve specific problems.
So THAT is why we are where we are.
In some ways I agree: software development is complex, and the .NET Framework is big. But there is a reason for it. Business demands on software are more complex, and business itself is more complex than it was 10 years ago. But this is our job. Our job is to help business evolve, and if we don’t evolve with it we will be its stalling factor; we will fail to support its needs.
To help business with the best solution we need choices, we need even more choices. We need choices outside of the .NET space. We need to learn and understand when and where a certain choice is the best choice. This is our responsibility as software developers, this is why we are paid, this is our god damn JOB.
Software development is all about learning
So this is why I am sad and frustrated: developers seem not to understand the basics of the job requirements. As a software developer, my job always includes constant learning and constant improvement of my skills. If I can’t agree to that, I am a bad developer. This is not the tool vendor’s fault; it is because businesses change, improve and learn as well. If we don’t do that with them, we will be left behind.
Similar posts on this subject:
Excel as a software engineer, be a professional not an amateur
Managing Parent/Child relationships with NHibernate (Inverse management)
Posted by Patrik Löwendahl in Data Access, Methodology, ORM on September 22, 2009
When working with parent/child relationships in object models, it is important to know what kind of inverse management your ORM technology has. Inverse management means handling all the relationships and keys shared between the parent and the child. This post will help you understand how NHibernate manages these relationships and what options you have.
Standard parent / child with inverted properties
The standard parent / child object model usually looks something like the picture below:
figure 1, Standard parent / child
In figure 1 you see that the comment entity has an inverse property back to the product.
Note: NHibernate requires you to manually set the Product property on the comment to the correct object; it has no automatic inverse management of properties. This is usually done by adding an “AddComment” method on the product that encapsulates the logic needed to get the relationship right.
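In code, that pattern might look something like the following. This is a minimal sketch; the class and member names are assumptions based on the figures, not taken from a real solution:

using System.Collections.Generic;

public class Product
{
    private IList<Comment> _comments = new List<Comment>();

    // Members are virtual so NHibernate can create lazy-loading proxies.
    public virtual IList<Comment> Comments
    {
        get { return _comments; }
    }

    public virtual void AddComment(Comment comment)
    {
        comment.Product = this;   // keep the inverse property in sync
        _comments.Add(comment);   // keep the collection in sync
    }
}

public class Comment
{
    public virtual Product Product { get; set; }
    public virtual string Text { get; set; }
}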
Figure 2 shows how the corresponding foreign key constraint looks in the database for the standard parent / child:
figure 2, parent child database model
In this case the inverse property ensures that the comment object itself will “contain” a copy of the product id to insert into the database. You tell NHibernate about this relationship, and how to handle the keys, by setting up the mapping as in listing 1:
<class name="Product" table="Products">
  ...
  <bag name="Comments" inverse="true" cascade="all">
    <key column="ProductId" />
    <one-to-many class="Comment" />
  </bag>
</class>

<class name="Comment" table="Comments">
  ...
  <many-to-one name="Product" column="ProductId" />
</class>
listing 1, standard parent / child mapping
Using the above XML, NHibernate has enough information to figure out that the product id should be persisted into the Comments table together with the comment itself.
The bag mapping tells the product that there is an inverse property on the comment and instructs NHibernate to let the comment handle the relationship on its own.
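Put together, a save then looks something like this. Again a minimal sketch; sessionFactory is assumed to be an already configured NHibernate ISessionFactory:

using (var session = sessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    var product = new Product();
    product.AddComment(new Comment { Text = "Great product!" });

    // cascade="all" in the bag mapping makes this save the comments too.
    session.Save(product);
    transaction.Commit();
}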
Variation 1, no inverse property on the child
A common approach in object modeling is to use aggregate roots and just let the relationship flow from the parent to the child, with no inverse property back. This makes sense when you think about the objects’ behaviors: a comment will never stand on its own; it will always be accessed through the product.
Figure 3 illustrates what such a model looks like:
figure 3, Aggregate model
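In code, the aggregate model might look something like this (a minimal sketch with the same assumed names as above):

using System.Collections.Generic;

public class Product
{
    private IList<Comment> _comments = new List<Comment>();

    public virtual IList<Comment> Comments
    {
        get { return _comments; }
    }

    public virtual void AddComment(Comment comment)
    {
        _comments.Add(comment);   // no inverse property to set
    }
}

public class Comment
{
    // Note: no Product property pointing back to the parent.
    public virtual string Text { get; set; }
}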
This approach leaves NHibernate a bit high and dry. In this variation the comment can’t stand on its own and will not be able to deliver the product id to the database; it has to rely on the comments list on the product to provide it. NHibernate needs to be told that this is your intention, so the bag declaration has to be changed into:
<bag name="Comments" inverse="false" cascade="all">
NHibernate now knows that the comment entity doesn’t have a parent property pointing back.
There is a caveat with this, though: NHibernate waits a while before inserting the identity of the product into the comment. Figure 4 shows the statements NHibernate sends to the database:
figure 4, Statements sent to the database
As you can see, the product id is sent in a separate UPDATE statement after the comment rows have been inserted. This means that the ProductId column in the Comments table has to be nullable. As long as the save happens inside a transaction and the number of rows is small, this is a viable solution. Just be aware of the mechanics NHibernate uses.
Variation 2, the hybrid solution
If you don’t want the inverse property and can’t make the foreign key nullable, the two solutions above won’t help you. For this variation you need to put together a hybrid solution.
It is similar to the standard parent / child, but instead of a reference to the full entity we will only use a protected field on the comment. The field you want to add would look something like the following:
protected int _MAP_productId;
which is then mapped like a regular property, not an object reference:
<property name="_MAP_productId" access="field" />
Note: It’s usually a very good idea to give the field an awkward name like the one above; this ensures that developers after you will think twice before using it for any purpose other than mapping. This is also a place where I would consider adding a code comment.
To set the field you can either create a constructor or expose an internal property that the product can use. Don’t try to write to the field directly from the outside; NHibernate has issues with internal fields, and making the field public would just be ugly.
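The constructor approach could look something like this. A minimal sketch again; the names follow the convention above, and Product.Id is assumed to be the identity discussed below:

using System.Collections.Generic;

public class Comment
{
    // Only used by the NHibernate mapping; see the naming note above.
    protected int _MAP_productId;

    // NHibernate needs a no-argument constructor to materialize objects.
    protected Comment() { }

    internal Comment(int productId, string text)
    {
        _MAP_productId = productId;
        Text = text;
    }

    public virtual string Text { get; protected set; }
}

public class Product
{
    public virtual int Id { get; protected set; }

    private IList<Comment> _comments = new List<Comment>();

    public virtual IList<Comment> Comments
    {
        get { return _comments; }
    }

    public virtual void AddComment(string text)
    {
        // Requires that Id has already been assigned; see the two options below.
        _comments.Add(new Comment(Id, text));
    }
}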
The drawback of this approach is that NHibernate won’t be able to automatically set the identity on the comment. This means that you have one of two options for getting hold of the product id:
- Don’t use auto-generated ids; make sure you assign one to the product before adding any comments.
- Save the product first, before adding any comments to it. That way the id will be set in time.
I’m sure there is an extension point somewhere in NHibernate that would allow for the above variation to be automatically handled. I will get back to you when and if I find it.
Summary
The object model and the relational model are different schemas, and as such, compromises have to happen. NHibernate does a very good job of hiding those compromises in most cases, but when it comes to inverse management, you the developer need to take a stand on which compromise is the right one for your solution. Now you know your options; choose wisely.
Resources
NHibernate project website:
NHibernate documentation about parent / child:
https://www.hibernate.org/hib_docs/nhibernate/html/example-parentchild.html
NHProf application by Ayende that was used to inspect the queries sent:
An architecture dilemma: Workflow Foundation or a hand rolled state machine?
Posted by Patrik Löwendahl in Architecture, Methodology on September 8, 2009
Workflow Foundation is an interesting piece of technology, and in a recent architectural decision for a project I had time to examine the pros and cons of WF for a particular challenge.
This sprint, a story came up that would give super-users of a system the ability to define new states in a state machine, and to attach business rules for state transitions, dynamically through a user interface. These custom states are then attached to entities in the system.
Workflow Foundation is an excellent engine for this kind of flexibility: drop in a couple of custom activities and just create a new state machine each time you need changes. Well, that’s what it says on the box, but is it really that simple?
On the execution side of things, WF is an excellent choice: a good engine with a lot of built-in functionality. But what about the end user side? Allowing users to easily define new states, attach rules to each state and attach them to entity templates?
One thing that WF isn’t is user friendly. So what would it take to make WF user friendly? A custom designer that emits XOML, some training in the WF designer and in process orientation, and a container to run the state machine in. That is a lot of work.
WF is powerful, but when end users are involved, its weaknesses quickly become expensive.
A simpler solution, not as powerful on the execution side, is to handcraft a state machine and store the state in a simple table or two. Utilize the state machine pattern with a touch of the strategy pattern, and you can come a long, long way.
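As a rough illustration of the shape of such a solution, here is a minimal sketch. All names are hypothetical, the business rules are strategy objects guarding each transition, and the full solution is the subject of the next post:

using System;
using System.Collections.Generic;
using System.Linq;

public class State
{
    // Defined by super-users and stored in a table.
    public string Name { get; set; }
    public IList<Transition> Transitions { get; set; }

    public State()
    {
        Transitions = new List<Transition>();
    }
}

public class Transition
{
    public State Target { get; set; }

    // The strategy part: a business rule that guards the transition.
    public Func<object, bool> Rule { get; set; }

    public bool CanTransition(object entity)
    {
        return Rule == null || Rule(entity);
    }
}

public class StateMachine
{
    public State Current { get; private set; }

    public StateMachine(State initial)
    {
        Current = initial;
    }

    public void MoveTo(string stateName, object entity)
    {
        Transition transition = Current.Transitions
            .FirstOrDefault(t => t.Target.Name == stateName && t.CanTransition(entity));

        if (transition == null)
            throw new InvalidOperationException(
                "Transition from " + Current.Name + " to " + stateName + " is not allowed.");

        Current = transition.Target;   // this is the state that gets persisted
    }
}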
So, since user experience was the top priority for our scenario and we really only needed the state machine functionality, the choice was easy. The first iteration was handcrafted, utilizing NHibernate to store the state in a database.
So, WF does solve a lot of things. But it comes with a cost and increased complexity. If you aren’t using WF’s full potential, chances are it’s too expensive for you.
My next post on this subject will present the solution in code.
Excel as a software engineer, be a professional not an amateur
Posted by Patrik Löwendahl in Methodology, People on September 2, 2009
Jeff McArthur is a frequent blogger and tweets a lot. He’s one of those random (at least to me) guys you stumble across who writes good stuff you want to read.
Over the weekend he put out a lot of thoughts about the difference between a professional and an amateur. Most of them were in general terms, and they all got me thinking. What do I think defines a professional software engineer? What is our profession all about?
Some label most of us as technology geeks, and to some degree this is very true. I like technology. I like software and hardware, and I play games. A typical description of a geek.
But is our profession only about technology? It plays a huge role indeed; technology is part of our toolbox. But is it all?
I would argue that technology is just a small part of what we need to do to be considered a professional. Here is my list, in no particular order:
- Dedication – be dedicated to delivering the BEST solution, not just A solution.
- Continuously seek more knowledge – to be able to decide what the best solution is, you need a broad toolbox, with an understanding of and competence in many technologies, methods and tools.
- Strive to deliver quality – not just quantity. A good piece of software doesn’t only do the job; it’s also reliable, stable and maintainable.
- Pride – software engineering is a craftsmanship. You need to put pride in what you create.
- Question your skills – at all times. There is always something to perfect, something to get a little better at.
- Welcome feedback on your work – always look for opportunities to get your work criticized by everyone. Listen to it and change.
This may seem oddly general; being a professional in any trade means much of the above. But in our trade, where business and technology move faster than a high-speed train, we need to constantly be on our toes, putting in the effort to learn, evolve and excel. When we do that, we can start to count ourselves as professionals in the profession of Software Engineering.
The taxes and fines of system performance and how to handle them
Posted by Patrik Löwendahl in Methodology, Performance on August 22, 2009
“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” – Donald Knuth, “Structured Programming with go to Statements”
This is one of the most repeated truths in our field, and most of us follow it to the letter. I follow it. As an architect with deadlines and budgets, it’s important to focus on the right things. There is more to this truth than first meets the eye, though; some important amendments: the taxes and fines of system performance.
The tax of system performance is the cost we have to pay, in time or resources, to work on the performance of a given feature or subsystem in our solutions. This tax is not mandatory; it is very much voluntary, and you can choose to pay it when the feature is developed or ignore it altogether and just hope it was a tax you didn’t have to pay. I think this is what Donald Knuth wants us to do: don’t pay the tax if you don’t have to.
As usual, there is a downside to embracing this methodology in a fanatical fashion. If it turns out that the tax had to be paid after all, there will be a fine added to the tax, which increases the total cost. If this happens very late in the development cycle, the fine will be very expensive, sometimes larger than the original tax.
So, premature optimization is bad, but holding off on optimization might be just as bad or even worse. How should we as architects and developers handle this? How are we to know when the tax should be paid to avoid the fine, and when we can ignore it?
The answer to this is spelled performance budget, and it should be part of the description of every feature that goes into the project. This budget should clearly state what’s expected in terms of performance: time to first byte, response time, memory consumption and so forth. Adding this, and making sure it is validated before feature check-in, will quickly show you when you need to pay the tax and when you can ignore it without being fined.
A typical performance budget can look something like this:
As a user, given an address, I can create an order with address information so that I can invoice the client.
Performance budget:
- Saving the order cannot take more than 1 second.
- Saving the order cannot consume more than 1 KB of memory.
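One way to make the validation concrete is an automated test around the budget. A minimal sketch, assuming an NUnit-style test and hypothetical OrderService, Order and Address types; only the timing half of the budget is shown:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class OrderPerformanceBudgetTests
{
    [Test]
    public void Saving_an_order_stays_within_the_one_second_budget()
    {
        var service = new OrderService();                          // hypothetical service under test
        var order = service.CreateOrder(new Address("Some Street 1"));

        var stopwatch = Stopwatch.StartNew();
        service.Save(order);
        stopwatch.Stop();

        // The budget: saving the order cannot take more than 1 second.
        Assert.Less(stopwatch.ElapsedMilliseconds, 1000,
            "Saving the order exceeded the 1 second performance budget.");
    }
}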
As with any requirements, defining these is hard work, but it will help you avoid those fines and will cost you less in the end. And as with any feature description, the text is worthless if the validation never occurs. I will talk more about validating performance budgets, tracking performance degradation and preparing your systems for performance investigation in coming posts. So keep in touch.