Posts Tagged NDC2010
NDC2010: Michael Feathers – The deep synergy between good design and testability
Posted by Patrik Löwendahl in NDC2010 on June 18, 2010
In this session Michael Feathers effectively killed the argument “I don’t want tests to drive my design” with a simple example based on the code smell Iceberg Class.
Private methods are really hard to test, and an Iceberg Class has a whole bunch of them. Not only will such a class be hard to test; it is also a bad design decision, since it violates the Single Responsibility Principle.
A better design would be to break this out into smaller classes, each with a single responsibility.
That honors SRP and will be much easier to test.
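To make the smell concrete, here is a minimal C# sketch of the idea. The class and member names are mine, not from the talk:

```csharp
public class Invoice { }

// Iceberg Class: one public method above the surface, lots of
// private logic below it. None of the steps can be tested in isolation.
public class InvoiceProcessor
{
    public void Process(Invoice invoice)
    {
        Validate(invoice);
        decimal tax = CalculateTax(invoice);
        Persist(invoice, tax);
    }

    private void Validate(Invoice invoice) { /* ... */ }
    private decimal CalculateTax(Invoice invoice) { /* ... */ return 0m; }
    private void Persist(Invoice invoice, decimal tax) { /* ... */ }
}

// Refactored: each responsibility becomes a small public class
// that can be instantiated and tested on its own.
public class InvoiceValidator
{
    public void Validate(Invoice invoice) { /* ... */ }
}

public class TaxCalculator
{
    public decimal CalculateTax(Invoice invoice) { /* ... */ return 0m; }
}

public class InvoiceRepository
{
    public void Save(Invoice invoice, decimal tax) { /* ... */ }
}
```

With the second version a test can new up a TaxCalculator directly, instead of trying to reach the calculation through InvoiceProcessor’s single public method.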
So why is this true?
There are basically two reasons, related to each other, that drive this synergy. To begin with, when testing you are consuming and using the API, which quickly lets you feel the pain. The second reason is that tests help you understand code. Badly designed code will be hard to test, since it is also very hard to understand what it does. In this light the SOLID principles make a lot of sense: not only do they allow you to test your code, they also make it easier to understand.
With these facts, Michael stated a simple truth:
Solving Design Problems
Solves Test Problems
It’s not about letting a test tell you what to do; making code testable does not necessarily imply better design. But better design will make your code easier to test.
This is also why he has problems with “groping test tools”, tools that let you test private parts. He feels they are a “get out of jail free” card that removes an important force for making the design better: pain. Pain is useful as a learning tool in that it tells you what to avoid. Sure, removing test-pain will make the code easier to test, but will it make it easier to maintain or extend? Usually not.
He then concluded that testing isn’t hard, testing is easy in the presence of good design.
Reflections
This was a great talk! By examining the relationship between test pains, code smells and design principles, Michael made a clear point that there is a synergy. If you find it hard to test your code, there is usually some other problem with it.
He didn’t want to impose TDD on everyone, but he made his case that tests will help you understand your code better and act as a constraint that forces you toward good design. Something you should be doing anyway.
Catalogue of testing pains and relation to code smells
| Testing pain | Related code smell |
| --- | --- |
| State hidden within a method – you find yourself wishing you could access local variables in a test. | Methods are typically too long and violate the Single Responsibility Principle. |
| Difficult setup – instantiating a class involves instantiating 12 others. | The class isn’t factored into smaller pieces: too much coupling. |
| Incomplete shutdown – pieces of your code lose resources and you never realize it until you test. | Classes don’t take care of themselves; poor encapsulation. |
| State leaks across tests – the state of one test affects another. | Smells of improper use of singletons or other forms of global mutable state. |
| Framework frustration – testing in the presence of a framework is hard. | Insufficient domain separation; lacking separation of concerns. |
| Difficult mocking – you find yourself writing mocks for objects returned by other objects. | Law of Demeter violations. |
| Difficult mocking 2 – hard to mock particular classes. | Insufficient abstraction and/or Dependency Inversion violation. |
| Hidden effects – you can’t test a class because you have no access to the ultimate effect of its execution. | Insufficient separation of concerns; encapsulation violation. |
| Hidden inputs – there is no way to instrument the setup conditions for a test through the API. | Over-encapsulation; insufficient separation of concerns. |
| Unwieldy parameter lists – it’s too much work to call a method or instantiate a class. | Too many responsibilities in a class or method; SRP violation. |
| Insufficient access – you find yourself wishing you could test a private method. | Too many responsibilities in a class; SRP violation. |
| Test trash – many unit tests change whenever you change your code. | Open/Closed Principle violations. |
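One of these pains is easy to show in code. Here is a hedged C# sketch of the “difficult mocking” pain, a Law of Demeter violation; the classes are hypothetical examples of mine, not from the talk:

```csharp
// Hypothetical classes, only to illustrate the pain (not from the talk).
public class Address { public string Country { get; set; } }
public class Customer { public Address Address { get; set; } }

public class Order
{
    public Customer Customer { get; set; }

    // Law of Demeter-friendly: Order answers the question itself.
    public bool ShipsDomestically
    {
        get { return Customer.Address.Country == "NO"; }
    }
}

public class ShippingCalculator
{
    // Painful to test: reaches through Order -> Customer -> Address,
    // so a test needs a fake Order returning a fake Customer returning
    // a fake Address (mocks returning mocks).
    public decimal CostPainful(Order order)
    {
        return order.Customer.Address.Country == "NO" ? 49m : 149m;
    }

    // Easier to test: only Order itself needs to be set up or faked.
    public decimal Cost(Order order)
    {
        return order.ShipsDomestically ? 49m : 149m;
    }
}
```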
NDC2010: Michael Feathers – Testable C#
Posted by Patrik Löwendahl in NDC2010 on June 18, 2010
This session was basically about a simple rule that, regardless of whether you are using TDD, should be followed at all times:
Never hide a TUF in a TUC
What did Michael mean by that?
There are two things that can be “test unfriendly”: features and constructs. The common denominator of the two is that they are hard to replace or isolate, and thus impose a lot of constraints on what needs to be up and running in order to test.
He then went on to explain exactly what TUFs and TUCs to look out for.
Test Unfriendly Features (TUF)
A TUF is something that has a tight coupling to a resource that is hard to mock, fake or replace for testability. He listed a few:
- File access
- Long computations
- Network access
- Database access
- Global variable manipulation
- Use of 3rd party libraries or frameworks
The list is in no way complete, but it clearly shows a pattern and follows the idea that a unit test should be isolated from any external dependencies.
Test Unfriendly Constructs (TUC)
A TUC is a language feature that makes it hard to fake, mock or replace a piece of code. He was very clear that he wasn’t saying these features are bad in any way, just that by using them we should be aware that we might introduce testability issues, certainly in combination with TUFs.
He listed a few C# constructs that aren’t replaceable (or at least are hard to replace without tools like TypeMock):
- Private/static/sealed/non-virtual methods
- Object constructors
- Static constructors
- Static initialization expressions
- Sealed classes
- Static classes
- Const/readonly initialization expressions
When dealing with these TUCs, since they aren’t completely worthless, he gave a simple piece of advice: do not code with the idea “why would anyone ever want to replace this?”, but think instead in terms of “it is really important that nobody can change or easily access this”.
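To show what the rule means in practice, here is a hedged C# sketch of my own (not Michael’s example): a TUF (file access) hidden in a TUC (a private static method), and one way to move the TUF out where a test can replace it:

```csharp
using System.IO;

// The TUF (file access) is hidden in a TUC (a private static method):
// a test of Price() cannot avoid hitting the file system.
public class PricingBefore
{
    public decimal Price(string sku)
    {
        return BaseRate() * 1.25m;
    }

    private static decimal BaseRate()
    {
        return decimal.Parse(File.ReadAllText("rates.txt"));
    }
}

// One way out: move the TUF behind an interface a test can replace,
// so no private/static construct hides it anymore.
public interface IRateSource
{
    decimal BaseRate();
}

public class FileRateSource : IRateSource
{
    public decimal BaseRate()
    {
        return decimal.Parse(File.ReadAllText("rates.txt"));
    }
}

public class PricingAfter
{
    private readonly IRateSource _rates;

    public PricingAfter(IRateSource rates)
    {
        _rates = rates;
    }

    public decimal Price(string sku)
    {
        return _rates.BaseRate() * 1.25m;
    }
}
```

A test can now hand PricingAfter a stub IRateSource, while production code keeps using FileRateSource.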
Reflections
This was a good talk, not in terms of new things I didn’t know, but in introducing rules and arguments for building testable code. Not everyone will write code TDD-style, but if they at least follow the simple rule “don’t hide a TUF in a TUC”, it will be easy to add tests later on if there is something you are curious about or want to validate.
This was a great take away that I see I can put into use right away.
NDC2010: Greg Young – 7 reasons why DDD projects #fail
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
#1 Lack of intent
You build a system but you do not find out what the intention of the user is. You can see it in domain models that are empty of business logic. There are only four verbs, CRUD, which don’t tell you anything about what the user wanted to do.
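A hedged C# illustration of the difference (my own example, not Greg’s): a CRUD update hides intent, while intent-revealing methods capture it:

```csharp
// CRUD style: the user's intention is lost. Did the customer move,
// or was the address just mistyped? The model cannot tell.
public class CustomerRecord
{
    public string Address { get; set; }
}

// Intent-revealing style: each method captures what the user meant,
// so the model keeps the business meaning.
public class Customer
{
    public string Address { get; private set; }

    public void CorrectAddress(string correctedAddress)
    {
        Address = correctedAddress;
    }

    public void RelocateTo(string newAddress)
    {
        Address = newAddress;
        // A relocation could also trigger other behavior, such as
        // re-routing open shipments; a typo correction would not.
    }
}
```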
#2 Anemic Domain Model
A model that looks and smells like a real domain model, but when you look for behavior you can’t find it. You have domain services that become transaction scripts on top of your domain models. It’s not an anti-pattern, but it’s not DDD either. An Anemic Domain Model can work well for a team with little OO skill on a simple model, but usually it is a bad sign.
#3 DDD-Lite
Using DDD as a pattern language, you do not define contexts, you just implement patterns. If you are a .NET developer and you are running down this path, stop. You either need to do real DDD or something simpler; the middle road is problematic. DDD-Lite will give you all of the overhead and none of the benefits.
Naming things is really important; the value is not in the domain model itself. The domain model is only a representation of the ubiquitous language, so if the language is not there, the competitive advantage will not surface.
#4 Lack of isolation
Part of why building domain models is expensive is that we want them isolated from everything else. Without that isolation there will be abstraction leakage left and right.
For example, we might get back 300 fields in our XML but only use two of them. Our domain model should only have those two.
Another example: a unit of work is really an infrastructure concern, so why would you bring it into the domain? If you have your aggregate boundaries right, you don’t need a unit of work.
#5 Ubiquitous what?
The ubiquitous language is only ubiquitous within a context. If you aren’t using bounded contexts, you will fail at defining the language. The way to find contexts is to find the places where words have different meanings. We need to break things apart and focus on our context boundaries.
#6 Lack of refinement
Most projects do not come back to the domain model and do any refinement. The team learns more about the domain but does not go back and update the model with the new knowledge. A year into the project you will be translating the model for the domain experts, and the ubiquitous language will be lost.
Since the domain model is a tool to analyze the domain and solve problems, it has to be clean and reflect the level of understanding we have of the domain.
#7 Proxy Domain Expert (Business analyst)
With each level of translation, a little more gets lost. If we use a proxy domain expert, what’s really happening is that we have two ubiquitous languages: one between the BA and the real domain expert, and one between the BA and the developers. What are the chances they are the same?
A BA shouldn’t be in the middle; they should work as a facilitator and help the developers and the domain experts communicate and get closer to each other.
Reflections
I’ve been in projects where all seven of these failures have emerged in one way or another. It ties into what Eric Evans has been talking about a lot today, and into most of the SOLID principles. Make sure you have clear contexts where models make sense, and a clear separation of concerns, and you’ll be in a good starting position.
I also reflect on the fact that most of these failures are about communication and how things get communicated. So much in software development is really not about technology, yet being the engineers we are, we try to make it about that.
NDC 2010: Eric Evans – Folding together DDD into Agile
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
One of the most puzzling emails Eric has received was one claiming that his book really proved that up-front design was important. By and large this is a misconception of how modeling happens. A tremendous amount of knowledge comes from actually implementing the software: you have the most insight at the end of the project, and the most ignorance at the beginning.
On the other hand, agilistas often declare that modeling isn’t necessary at all, which is a misconception of the importance of modeling. In short, how much modeling matters is tied to what your goal is and what you think is important.
Different goals, different requirements on modeling
- If your goal is to just complete the next story, modeling isn’t that important. The span of your focus is just that story and a model doesn’t make sense. A lot of agile has this focus in Eric’s opinion.
- If your goal is to complete a releasable set of stories with an acceptable level of bugs, some modeling will be required. Each story has its own impact on the code, and for the stories to behave well together there has to be some design and thought.
- If your goal is to deliver a release that the team can continue to extend in the next release, you will need a reasonable amount of modeling and design. First release won’t see a huge difference, but the second release will come out faster.
- If your goal is to deliver a clear and cohesive user experience you’ll need a clear underlying model or it will be really hard to put a well designed user interface on top of it.
Modeling and TDD
With a green bar, TDD will give you an answer. It’s a correct answer, but the question is: is it the best model that delivers that answer? It turns out that finding the right answer is not quite good enough; we need to find the right answer in a way that makes sense to a business person.
DDD and The Agile Process
Eric introduced a modeling sub process that he calls “Whirlpool” which helps in defining when and where modeling happens:
It’s currently a draft and you can read more about it at http://domainlanguage.com/processdraft/
Reflections
It’s really interesting to see how Eric tackles both the idea that modeling should be done up front and the agile notion of doing very little modeling, and does so with a sketched and explained process. I don’t fully grasp how all the parts of the process work or fit together, but I for one will follow its progress and try to apply it in the next Scrum or Kanban project I get into.
NDC 2010: Eric Evans -What I learned since the book
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
This was one of the most rewarding sessions for me. Eric Evans explained what he has picked up and learned since he wrote the book: which parts he realized were more important than he initially thought, and which parts had been missing.
A missing building block: Domain Events
Something happened that domain experts care about.
What would make this clearer: a property or an event? Which would be easier to hook into?
In his book, events weren’t covered at all. In the DDD movement today, though, events are carefully explored and popular for expressing things that happen that domain experts care about. They lead to clearer and more expressive models: instead of adding more state, you communicate the state shift to other objects. This in turn opens up some architectural options that weren’t available before.
- Events representing state changes of entities (making the model clearer).
- Decoupling subsystems with event streams. Instead of creating tight coupling between subsystems, you just send an event to tell anyone who might be interested what’s going on.
- Enabling high-performance systems. Evans stated that before he saw what Greg Young did with CQRS and event streams, he didn’t think DDD was suitable for this kind of system. But with this take, it works.
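As a hedged C# sketch of the building block (the names and the cargo example are mine, not Eric’s), a domain event is an immutable record of something that happened, and the entity raises it instead of forcing others to poll its state:

```csharp
using System;

// A domain event: an immutable record of something that happened
// that domain experts care about.
public class CargoWasRerouted
{
    private readonly string _cargoId;
    private readonly string _newRouteId;

    public CargoWasRerouted(string cargoId, string newRouteId)
    {
        _cargoId = cargoId;
        _newRouteId = newRouteId;
    }

    public string CargoId { get { return _cargoId; } }
    public string NewRouteId { get { return _newRouteId; } }
}

// Instead of other objects polling the cargo's state, the entity
// raises the event and interested subsystems subscribe to it.
public class Cargo
{
    public event Action<CargoWasRerouted> Rerouted;

    public string Id { get; private set; }
    public string RouteId { get; private set; }

    public Cargo(string id, string routeId)
    {
        Id = id;
        RouteId = routeId;
    }

    public void Reroute(string newRouteId)
    {
        RouteId = newRouteId;

        var handler = Rerouted;
        if (handler != null)
        {
            handler(new CargoWasRerouted(Id, newRouteId));
        }
    }
}
```

The same event object could just as well be published on a queue to another subsystem, which is where the decoupling and CQRS-style options come from.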
What he learned about Aggregates
Aggregates are there to create a consistency boundary. In the book they are explained as a suitable boundary for transactions, distribution and concurrency. Over the years it has turned out that they are also suitable as boundaries for properties and invariants. Again this ties into the message he delivers over and over again: there is always more than one model.
Things learned by Strategic design
There are a couple of concepts explained in Strategic Design, and some have turned out to be more important than others. Eric emphasized that Core Domain, Context Map and Bounded Context are really important, but Large-Scale Structure hasn’t come up too often.
Context mapping, on the other hand, turned out to be much more important than he thought from the beginning. There are always multiple models; we need to accept that and focus on the mapping between them, instead of trying to make one model that fits all. The one thing to remember is to clearly communicate in which context a model is viable, and to have easy ways to state which context we are talking in.
This is where bounded context shines, they describe the conditions under which a particular model applies.
Missing Strategic Design concepts
Partners
Partners work on mutually dependent contexts in cooperation. This means that you share the responsibility for a context, and the teams will either succeed together or fail together.
Big Ball of Mud
Sometimes you just have to give up. Big Ball of Mud (BBoM) is the notion of code in such bad shape that trying to apply DDD to it doesn’t make sense at all. Michael Feathers talks at length about how to handle these scenarios. The attitude becomes: within this boundary I’m not going to do any fancy stuff, because it won’t work.
Object – relational
Storage should be handled as a separate context, and thus a separate model. Use the strengths of relational modeling and don’t try to fit your objects directly into tables and relations. This makes sense when you think about it: if you represent a piece of data with different models in separate contexts, why would storage be any different?
Eric also mentioned that the work being put into NoSQL is very interesting for DDD practitioners, since storage is just another context, and if that context can be expressed more easily and stay more out of the way of other contexts, that’s a good thing.
Reflections
I really liked this talk, much because it validates thoughts I have (and have imposed on my team) around “one model to rule them all”. There will always be different representations of a piece of data, and Evans confirms this with very convincing rhetoric, especially when it comes to putting the database in a separate context and thinking of it as a boundary.
It was also interesting to hear Eric’s thoughts on events and how they have become first-class citizens in Domain-Driven Design. The trend in both design and architecture today is to use anonymous push via events and queues, which is actually the driver behind a not-yet-published CodePlex project that helps in setting up domain events and/or pushing things out in integrations.
Watch this space for updates on that.
NDC2010: Eric Evans – Strategic Design
Posted by Patrik Löwendahl in NDC2010 on June 17, 2010
In almost every conversation I’ve had with Eric Evans and Jimmy Nilsson, and every talk I’ve heard Eric present, the same message has come up: “I wish I had put part 3 [strategic design] at the beginning of the book”. In this talk he explained why.
Bad strategies
He went on to mention a few of the mistakes people make:
- Building a platform to make the other (lesser) programmers more productive. If they are [lesser] chances are they can’t use the platform anyway.
- Cleaning up other people’s messes (invisible value). One irresponsible programmer can keep five really good programmers busy cleaning up.
- Rebuilding already-working software. It delivers visible value, but will it deliver any new, sexy value?
- The Enterprise Model
In Eric’s opinion a bad strategy has really bad consequences, and most of them are tied to information not floating to the surface. He gave a really good example: a programmer warns about potential data corruption that someone else introduced and sits up all night to fix it, and when no data corruption occurs, management turns to the fixer and says “see, no problems, you were just crying wolf”.
What’s a responsible designer to do?
In the talk, Eric highlighted three strategies from his book that we should focus on to create a great strategy.
Distilling the core Domain
There is always a reason for building custom software: something makes it hard to buy it off the shelf. This is the core domain. This is the difference between what you do and what others do. Sometimes it’s easy to find, other times it’s not. The point is that the core domain is where the interesting stuff happens, where the new sexy features get implemented.
Context Mapping / Bounded Context
For this part the message was clear: “There will always be more than one model”. He gave the example of the blind men and the elephant: not everyone will look at things the same way, and context is very important.
Anti-corruption layer
This is a strategy where you hide away the bad design behind a layer that straightens it out a bit, and then build your new features on top of that layer.
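As a hedged C# sketch of the idea (my own example, not from the talk), an anti-corruption layer translates a messy legacy API into the model your new code wants to see:

```csharp
// Legacy system with a design we don't want leaking into new code.
// (Hypothetical API, for illustration only.)
public class LegacyCrm
{
    // Returns "id;name;street;zip;city" as one semicolon-separated string.
    public string GetCust(int id)
    {
        return id + ";Acme;Main St 1;1234;Oslo";
    }
}

// The model our new features are written against.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

// The anti-corruption layer: the only place that knows about the
// legacy format. New code depends on this, never on LegacyCrm.
public class CustomerAdapter
{
    private readonly LegacyCrm _crm;

    public CustomerAdapter(LegacyCrm crm)
    {
        _crm = crm;
    }

    public Customer Find(int id)
    {
        var parts = _crm.GetCust(id).Split(';');
        return new Customer
        {
            Id = int.Parse(parts[0]),
            Name = parts[1],
            City = parts[4]
        };
    }
}
```

If the legacy system is ever replaced, only the adapter has to change.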
Good Strategic Goals
A picture says it all.
Reflections
Strategic design is my favorite part of Evans’ book. It helps us look ahead and make sure that our shiny new toy keeps shining. I’ve seen this talk before, but it’s like the book: every time I see it there is a new point made that I didn’t think about before. This time the focus on the core domain really sunk in, and I immediately realized why we’ve had problems in our current project and in a couple of earlier ones.
NDC2010: Summarizing day 1
Posted by Patrik Löwendahl in NDC2010 on June 16, 2010
So, day one has ended. It has been a full day, a day of surprisingly high quality. I say surprisingly since I came here with a feeling that the first day would be a filler, but I was mistaken.
The venue was great: enough space that it didn’t feel crowded, nor empty, with the 1450 participants. The overflow rooms were genius, and as usual the users found another way to utilize them by surfing through sessions instead of staying at a single one. Or, as Julie Lerman told me: “I’ve heard it referred to as the ADD (attention deficit disorder) room”.
Food was quickly served and garbage even quicker disposed of, so the logistics went very smoothly indeed.
For me the content was interesting, I got to dig deep into all things data and especially hear some bright people talk about the NoSQL movement.
NoSQL, well, it’s nothing new, that’s for sure. But it has been branded and has gotten loud advocates, some of whom proclaim the death of the RDBMS. Yet I can hear the RDBMS proclaim: “The reports of my death are greatly exaggerated”. One thing is certain: to date, the relational database is the only storage model that has stayed around through any and all other attempts at storage.
My biggest take away today though was that I finally had time to sit down and get all things HTML5 explained to me, beyond the freaking Video element. Remy Sharp did an excellent presentation and now I see the hype.
I think the approach of using shims is neat; that way it will be possible to extend browsers with more functionality as the standard evolves, without installing a new browser.
Put HTML5 together with JSON/REST-based databases like CouchDB, or even OData, and you’ll see loads of client/server apps written entirely in JavaScript, running locally and offline. An interesting trend, but I fear it will create a lot of strange and crappy solutions. Oh well, more consultant hours for me, I guess.
Closing day one; see you tomorrow for a full day of Eric Evans, Greg Young and Julie Lerman, and of course the NDC 2010 party. See you there if you are here.
Posts about day 1:
Kevlin Henney – Strategies against Architecture
NDC2010: Rob Conery – I can’t hear you, there is an ORM in my ear
NDC2010: Kevlin Henney – Strategies against Architecture
Posted by Patrik Löwendahl in NDC2010 on June 16, 2010
When I first saw Kevlin’s session description I was very skeptical; I had initially chosen another session in this slot, but that speaker had an emergency, so I decided to go. The reason for my skepticism was that the session description kind of made the argument “beat up the architect”. The session was nothing of the sort; it was very good.
There is always an architecture and one or more architects
Kevlin argued that there is always an architecture in your system. The big difference lies in how you think about it or come about it, and who is responsible for it. In smaller teams and some systems there is no need for a designated “architect”, while in others it makes sense to have a person who focuses on the big picture and tries to hold all the pieces together. In his view this architect should be involved in implementing the architecture in code, but not be put on the critical path. This is the only way for an architect to see and feel the pain of the decisions made.
Technical debt
Technical debt is something that all systems have to some degree. The big difference is how it came about and how you handle it. He showed a four-quadrant diagram that categorizes how debt usually comes about:

| | Reckless | Prudent |
| --- | --- | --- |
| Deliberate | “We don’t have time for design” | “We must ship now and will deal with the consequences later” |
| Inadvertent | “What’s layering?” | “Now we know how we should have done it” |
It’s quite obvious which of these quadrants are saner than the others. In Kevlin’s opinion, the only time technical debt is OK is when you have a process for handling bugs and technical debt directly after release, so they don’t stack up indefinitely.
Patterns
In his talk, Kevlin also discussed patterns and how they can get in the way of a good architecture. He described a workshop they had done to find patterns in their own software that resurfaced in more places than one, and to learn from them. He argued that the purpose of patterns is to “uncover better ways of developing software by seeing what others have already done”. A view I really like and hope I already live by: patterns are there to help you cut through the woods and get to the meat, not to win a code-beauty contest.
In summary
Kevlin’s view of an architect was more that of a leader: a leader who gives people a sense of progress and helps deliver software of good quality. It’s a healthy position. I liked his talk; it gave me inspiration, reinforced how I see my own role at Sogeti as Lead Solution Architect for the Microsoft Practice, and will certainly color how I work in that role. Thanks, Kevlin.
NDC2010: Rob Conery – I can’t hear you, there is an ORM in my ear
Posted by Patrik Löwendahl in NDC2010 on June 16, 2010
In this session Rob Conery tried to show off the shiny new toy “NoSQL” in as many forms as possible, discussing the pros and some of the cons. He based much of the talk on his experiences building TekPub and the requirements they had around it. He tried to illustrate that this isn’t a magical kingdom, and refrained from the usual snake-oil selling techniques. All in all a good presentation; content and comments below.
The talk
In his talk Rob showed two types of “NoSQL” databases: the graph (object) kind and the document kind. Rob characterized document databases as storage of a bunch of JSON documents, serialized to BSON and stored as key/value pairs. For TekPub they were using MongoDB, and that’s the tool he used in his recorded (o.O) demos. He had some neat .NET code that created session-like containers for MongoDB, and it looked simple to communicate with.
Graph (or object) databases Rob characterized as storage of serialized binary blobs, and he used db4o to demonstrate.
Both demos had abstractions that hid away how you really communicate with the databases, and Rob talked more about the strengths and low friction of the model. But it was really hard to see that from his demos, because of the abstraction level and the simple examples.
The conflict
A bit into the talk we hit a conflict. What about reporting, can we do that with NoSQL databases? With most of them you can’t, and Rob showed a strategy of storing copies of the data in a relational database to use as a report store. This made it very clear that most NoSQL stores are for specific purposes, while the RDBMS is general purpose.
As a developer you also need to think differently and accept Eventual Consistency.
The discussion
I think Rob did a good job of introducing NoSQL to those who haven’t seen it, and he also raised one of the problems with the technology today: reporting. Like most developers I like that there is new tech to solve my problems; NoSQL is interesting and, as I said before, I’ll play around with it. But as with most hypes, I’d be really careful about declaring the old king dead and a new king on the throne.
The NoSQL databases we see today are good for very specific scenarios; they scale well and have been serving large websites very well. They are immature, though: tooling, infrastructure support and the overall ecosystem are far behind the RDBMS products we have out there.
I like the idea of using “NoSQL” to feed web sites, similar to having data cached in memory or “read copies” for our presentation layer, while still pushing data into better-suited models for BI or for supporting business processes.
NDC2010: Hadi Hariri – CouchDB For .NET Developers
Posted by Patrik Löwendahl in NDC2010 on June 16, 2010
Hadi speaks in one of the scariest rooms ever. It’s built at the top of the sports arena, way, way up, with a ridiculously steep angle down to the stage. Butterflies in my stomach.
Prelude
Specifications today are written in the language of the client, but most of the data modeling is done in a relational model. Hadi talked about foreign keys and the idea that an invoice has to have a customer because the relational model requires a foreign key, among other things. This makes some changes inflexible, like a client wanting to invoice an anonymous customer.
We have an impedance mismatch between how we look at the data and how the client looks at it. When the client talks about invoices they see the actual invoice; when we see invoices we see the relational model.
In Domain-Driven Design, et al., we model the client’s perspective using objects and then use an ORM to map between the business perspective and the relational model, which in Hadi’s opinion brings a heap of other problems for the flexibility of responding to change and implementing features.
Hadi’s session showed how to work with document databases using CouchDB.
The Tool
CouchDB is a document database that stores complete documents as JSON. It is ACID-compliant for each document and uses REST to access the stored documents. The initial engine was built in C++ but was ported to Erlang because of Erlang’s strength in concurrent operations, which is what accessing and writing data is really about. It runs on a lot of platforms: Windows, Linux, Android, Maemo, and Browser Couch (HTML5 storage), and CouchDB is able to synchronize data between any number of instances on any type of platform.
CouchDB doesn’t have a schema. Well, there is a tiny one, with an ID (_id) and a revision (_rev), which allows you to store documents in any format you want and change that format during a document’s lifetime.
Since CouchDB is built as a REST-based database, everything is done through HTTP (GET, PUT, POST and DELETE) and JSON, which makes it very easy to access, either by running your own HTTP requests or through the browser-based admin app Futon.
Writing and versions
CouchDB uses an insert-only approach, so each write to a document creates a new version of it. This means you can track changes throughout a document’s lifetime, but it also means that there are no locks, and data consistency follows the Eventual Consistency paradigm.
CouchDB enforces optimistic locking through the rule that you have to pass the current revision number when updating.
Since CouchDB is distributed by nature, you can have many versions spread out in your ecosystem until synchronization occurs. This makes it scalable, but changes how we have to think about consistency.
When creating new documents, the engine wants you to specify an ID. Anything can be an ID, but the database uses UUIDs by default and lets you generate them easily.
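As a hedged sketch of what this looks like on the wire (the database name, document ID and revision values are made up), creating a document and then updating it with the required revision number:

```
# Create a document with an ID of our own choosing:
PUT /shop/invoice-1001
{"type": "invoice", "total": 149.0}
=> {"ok": true, "id": "invoice-1001", "rev": "1-abc123"}

# Updating without the current revision is rejected:
PUT /shop/invoice-1001
{"type": "invoice", "total": 199.0}
=> 409 Conflict

# Updating with the current _rev creates a new revision:
PUT /shop/invoice-1001
{"_rev": "1-abc123", "type": "invoice", "total": 199.0}
=> {"ok": true, "id": "invoice-1001", "rev": "2-def456"}
```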
Queries
Since the REST API only lets you fetch documents by ID, you can’t do this:
select * from customers where email =’’
This means that you have to use map/reduce, that is, create a map of the data (a view, stored as another document): a JavaScript function that executes against the documents and returns the data you need.
It also means that all queries have to be pre-created. There are no ad-hoc queries in CouchDB; you have to think about all of your queries in advance.
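A hedged sketch of such a view (CouchDB views are written in JavaScript inside a design document; the database, field and view names here are made up):

```json
{
  "_id": "_design/customers",
  "views": {
    "by_email": {
      "map": "function(doc) { if (doc.type === 'customer' && doc.email) { emit(doc.email, doc.name); } }"
    }
  }
}
```

The map function runs against every document and emits key/value pairs; the view can then be queried by key over REST, for example GET /shop/_design/customers/_view/by_email?key="bob@example.com" (keys are JSON-encoded, hence the quotes).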
The perspective
NoSQL is a curious thing, and CouchDB falls into that category. There are great advantages to using document databases if documents are all you want, but it won’t support things like reporting or BI.
CouchDB will probably find documents by ID really quickly, but I’m still skeptical about performance for more advanced queries, especially queries based on cross-references.
The architecture of CouchDB makes it simple to replicate and scale out instances across the internet, but I’d love to see some numbers on hardware versus performance. JavaScript and JSON are not fast, so maybe it needs more hardware to achieve the same performance as other options.
All in all, I’ll probably pick up the tech and play around with it a bit to find good scenarios where it is a perfect match.