In a system migration, one of the most common testing requirements is “the numbers must match the old system”. This sounds reasonable enough, especially in systems where reports go to external consumers: if the numbers change and you can’t explain the change in a manner acceptable to the consumer, your image suffers.
However, from an IT project perspective, this testing requirement is a surefire way to ruin the project’s reputation, annoy the customer and drive everyone on the project insane. Let me explain…
The Old System is not the New System
As obvious as it may seem, the problem with matching the old system is that what has been specified as the new system is highly unlikely to match the old one. The reasons for this are manifold, but common examples include forgotten practices and undocumented changes. Even with great business requirements you’ll still find this stuff. The older the system, the more skeletons live in the cupboard.
What happens when you start the reconciliation process is that you discover the business requirements you were given don’t produce the same results as the old system, despite nominally doing the same thing because:
- Implicit behavior not captured (e.g. “exclude anything over 5 years old”; the old system also threw away anything with a negative age)
- Explicit behavior not captured (e.g. product “B” is overridden to product “A” on Wednesdays)
- The old system is wrong (e.g. it just ignored orders with a negative value)
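The first of those bullets can be made concrete with a minimal, entirely hypothetical sketch (the record layout, field names and rules below are invented for illustration). Both functions honour the written requirement “exclude anything over 5 years old”, yet the totals diverge because of behaviour the old system never documented:

```python
# Hypothetical order records: (order_id, age_in_years, value)
orders = [
    ("A1", 2, 100.0),
    ("A2", 7, 250.0),   # over 5 years old: both systems exclude it
    ("A3", -1, 80.0),   # negative age: the old system silently dropped these
    ("A4", 4, -50.0),   # negative value: the old system just ignored these
]

def old_system_total(orders):
    # Undocumented behaviour: also drops negative ages and negative values.
    return sum(v for _, age, v in orders if 0 <= age <= 5 and v >= 0)

def new_system_total(orders):
    # Built strictly to the written requirement:
    # "exclude anything over 5 years old".
    return sum(v for _, age, v in orders if age <= 5)

print(old_system_total(orders))  # 100.0
print(new_system_total(orders))  # 130.0 -- reconciliation break
```

The new implementation is “correct” against the signed-off requirement, but the reconciliation still fails, which is exactly the gap between requirements and expectations discussed below.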
The underlying problem is that nobody has perfect knowledge of the old system. The new system may be perfectly understood as all the rules are spelled out in black and white, but is rarely a perfect reflection of the old one.
Managing the reconciliation
Of course, in many projects this reconciliation requirement is inescapable. By the time you reach this stage the requirements phase is over and done with, so whether the requirements gathering process was inadequate, the requirements weren’t fully reviewed, or something else went wrong, managing the reconciliation process is all that lies within your control. I believe it is best done by adhering to three simple rules:
- Testing to the requirements (not expectations)
- Strict change control for any deviation from requirements
- An open-ended test period for reconciliation
What this means is that, firstly, the written, signed-off requirements are what you develop, deliver and test against to claim project success (importantly, and knowingly, not business success). If the business expected a different outcome, that is immaterial in terms of the project’s accountability. This is often frustrating to the business, but vital to the project, so that those involved can safely say they have done what was asked of them with the resources provided (if they did… it can equally be used by the business to highlight a poorly performing project team).
The second point means that any deviation from the written, signed-off requirements is properly captured and costed. I’ll be the first to admit that neither of these policies will make a project lead terribly popular, but both are for the benefit of the project and the business: the cost of insufficient or missed requirements is spelled out as a business and project cost, not simply a poor project delivery cost. It raises the visibility of these changes to the business, and helps prevent the business from offloading the costs (in monetary and image terms) onto the project team.
The third point is that when estimating for the reconciliation testing, you must include a large contingency period and must not be held to a fixed cost and time for it. During reconciliation you will likely discover new or incomplete requirements, which means further cycles of development and testing. There is no way of knowing in advance exactly what this will involve – any guess is a stab in the dark. I once saw a six-month project overrun by 18 months as this phase ran its course, all at the consultancy’s expense.
Justifying wearing the pain
The benefit of these strict policies is not around costs and timelines, which will probably drift by roughly the same amount whether changes are handled reactively (“fix them as they come”) or under the strict approach I propose. The benefit is in ensuring the reason for the drift is understood and the pain is shared, not allocated 100% to the project team.
If the changes are managed reactively, then in the short term the project delivery team feel they are being helpful and accommodating. In the long term, however, the customer starts to perceive two things. First, that all delays in the project are the delivery team’s fault, since they are the ones always taking longer to implement the requirement – even though the delay is a non-technical one caused by changed requirements. Second (and more dangerous) is the belief that change comes at zero cost to them, so they have no hesitation in adding extra components or requirements, further delaying delivery and making the project team look even worse.
Applying strict change control is not about pushing back against the business to prevent them making changes – it is about making visible to them the cost of those changes. It’s a lot easier to face up to a stakeholder who asks why a project is running late if you can quantify the delays in terms of specific problems and shared responsibilities.
Yes, it’s Project Self Defence
One initial comment I had on this approach was that if the project delivers, but it’s wrong, then it’s still wrong. And I agree – this is purely Project Self Defence.
First, it’s about managing cost and budget – you can do what you are asked with the resources and time you estimated. You cannot necessarily do what the business expects with those resources. Any gap between requirements and expectations needs to be managed, and its cost understood and shared. This is especially true of that painful “match the old system” testing period, which can go on for a very long time.
Second, it’s about managing the perception and image of the project. If you run late or over cost because of accommodating change, the project team suffers all the reputational damage. If you can call out that the delays are for identifiable reasons where the responsibility is shared, then the delivery problems become a joint concern with more buy-in from the business.
Hopefully you’ll now think twice before accepting that testing requirement…