BI Monkey goes independent!

After many years working as a consultant for a number of providers small and large, and servicing clients across a broad range of industries, I have now taken the plunge and decided to operate independently.

I’ll be providing independent Business Intelligence Consulting to organisations across Australia, focusing on:

  • Strategy Creation and Review
  • Solution Architecture
  • Data Warehousing and ETL
  • Microsoft Business Intelligence technical support
  • Microsoft Business Intelligence training
  • Agile Enablement

Full details can be found at my “Consulting and Technical Services” page.

This decision was partly informed by reading James Serra’s series, starting from his master post “Blueprint for consulting riches” – which is ironic given his recent move to Microsoft as a full-time employee. Either way, his series is well worth a read for those mulling over their approach to work.


PowerPoint and productivity

Dilbert 24/12/2005
Eight slides of productivity! Impressive!


An in-joke for one of my fellow leaders in the BI industry…




Do you know what motivates you?

Do you know what motivates you at work? Is it the glory, the cash, the dramatic road warrior lifestyle? Or do you blindly “do stuff” and enjoy some of it, and other bits not so much?

Occasionally, amongst the mass of management reading I do, I come across something that helps me understand how I operate and improves how I perform through greater self-awareness. I recently read “Drive” by Daniel Pink, and suggest you do too – it will help you get to grips with how you are motivated at work.

Welcome to Motivation 3.0

Central to the book is the theory of Motivation 3.0. To understand how we got there, we need to know about 1.0 and 2.0. Motivation 1.0 was pretty simple – eat, find shelter, or die. Good caveman-grade stuff. Moving to 2.0, we enter the industrial age, where performance is rewarded and disobedience punished.

Daniel Pink’s theory is that we have now moved to 3.0, as 2.0 only works for jobs with a fixed path to completion and no room for creativity, such as data entry or widget making. Work is increasingly creative – BI is definitely short on routine, easily defined work – and he proposes that you cannot give rewards for being creative, because that makes creativity work, which then demotivates you from being creative… a bit of a fatal blow in the modern workplace.

So Motivation 3.0 gives the worker the inner drive to solve creative problems through 3 things:

  • Mastery – striving to be a master of your trade
  • Autonomy – freedom to pursue your own path to your objectives
  • Purpose – being part of something bigger than making money

All of these require the employer to have faith in employees to do the right thing and work for the goals of the company without the traditional constraints of Motivation 2.0 – i.e. punishment and reward. It ultimately drives towards the Results-Only Work Environment, where hours are less important than what you deliver in the time you spend. Imagine a world without the 9-5 obligation, where half your day isn’t wasted because you just aren’t in the zone (or “in Flow”, as some researchers call it) and you may as well have been at the beach.

It’s a short and interesting read, backed up with research, examples and stories that will prove thought provoking, and may change the way you go about your job.

Update 15/01/2013 – thanks to one of my colleagues, here’s a great TED Talk from the author on some of the key themes:


How to create a great technical presentation

We’ve all been there… in a technical presentation, wanting desperately for it to end because you’re just not engaged, you’ve lost track of what you are there for and the door is just sooooo tempting.

Want to avoid being the person on stage, increasing the gravitational pull of the door? Here’s my top tip: on the drawing board, don’t think about what features you want to demonstrate.

Did I just say don’t think about features?

Yes I did. Because people aren’t interested in features in isolation. I went to a dire presentation at SQL PASS where some well respected figure was demonstrating exciting new SQL2012 features by repeatedly saying “and you click this and look this happens then you click this and this happens and you click this and this happens and then you click this and this happens and that’s PowerView”. I walked out of that session half way through, none the wiser about PowerView.

It’s a common enough mistake amongst us techy folk – we are so impressed by the gadgetry and trickery that we lose sight of what it’s actually being used for. I remain a shameless ETL geek, but I can safely say that whenever I’m presenting the MS BI stack, my beloved SSIS rarely gets a look in. And why? Because the end user cares little about how their data got to them in a usable state. They care about how they can use their data, even if the DW / ETL story is going to chew up 75% of what they are going to spend.

So what should I think about?

People use software to do work. The features of that software enable people to do their work faster and more effectively. If you want to engage your audience you need to think about their business process first. You need to start your demo by thinking about the user’s story. You can then tell that story with your features – and win their engagement as they see how their process will change.

In a client-specific situation, you tune that story to the client. In a general presentation you have to find a hook more or less everyone can get into – which is why movie data keeps cropping up in some of the broader Microsoft demos. Almost everyone knows a few famous actors and movies and can connect with the story you are trying to tell.

Here is a simple example: In Reporting Services, you can change the background colour of a row of data using a formula. Neat feature. Demo that feature in isolation and most business users will give you a big fat “like, whatever.” Tell a business user that when a value indicates they need to take action, we can highlight that data so it jumps out at them – they get the value. They’ll keep listening to you.
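As a rough sketch of what sits behind that demo (the field and colour names here are made up for illustration), the BackgroundColor property of a row in Reporting Services takes an expression rather than a fixed value:

```vb
' Highlight the row when the measure demands action
=IIF(Fields!StockOnHand.Value < Fields!ReorderPoint.Value, "Tomato", "Transparent")
```

One expression, and the rows that need attention jump out at the user – that’s the story, not the formula.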

Obviously this post is taking a very simplistic view – but next time you’re doing a presentation, start with your story first and there’s a good chance it’ll be much more interesting.



Death by reconciliation

In a system migration, one of the most common testing requirements is “the numbers must match the old system”. Which sounds reasonable enough – especially for systems whose reports go to external consumers, where your image suffers if you can’t explain a change in the numbers in a manner acceptable to the consumer.

However, from an IT project perspective, this testing requirement is a surefire way to ruin the project’s reputation, annoy the customer and drive everyone on the project insane. Let me explain…

The Old System is not the New System

As obvious as it may seem, the problem with matching the old system is that what has been specified as the new system is highly unlikely to match the old one. The reasons for this are manifold, but common examples include forgotten practices and undocumented changes. Even with great business requirements you’ll still find this stuff. The older the system, the more skeletons live in the cupboard.

What happens when you start the reconciliation process is that you discover the business requirements you were given don’t produce the same results as the old system, despite nominally doing the same thing because:

  • Implicit behaviour not captured (e.g. “exclude anything over 5 years old” – the old system also threw away anything with a negative age)
  • Explicit behaviour not captured (e.g. product “B” is overridden to product “A” on Wednesdays)
  • The old system is wrong (e.g. it just ignored orders with a negative value)
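The first of these can be sketched in a few lines (the filters and data here are invented to illustrate the gap, not taken from any real system): the written requirement and the legacy behaviour nominally do the same thing, yet the row counts refuse to reconcile.

```python
# Toy order book: (order_id, value, age_in_years)
orders = [(1, 250.0, 2), (2, 90.0, 7), (3, 40.0, -1), (4, -15.0, 3)]

def new_system(orders):
    """What the written requirements say: exclude anything over 5 years old."""
    return [o for o in orders if o[2] <= 5]

def old_system(orders):
    """What the legacy code actually did: it also silently threw away
    negative ages and negative-value orders."""
    return [o for o in orders if 0 <= o[2] <= 5 and o[1] >= 0]

print(len(new_system(orders)))  # 3 rows survive the specified filter
print(len(old_system(orders)))  # 1 row survives the legacy filter - the numbers don't match
```

Multiply that two-row gap by a few hundred thousand orders and several dozen rules, and you have a reconciliation phase.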

The underlying problem is that nobody has perfect knowledge of the old system. The new system may be perfectly understood as all the rules are spelled out in black and white, but is rarely a perfect reflection of the old one.

Managing the reconciliation

Of course, in many projects this reconciliation requirement is inescapable, and by the time you reach this stage the requirements phase is over and done with. Whether the requirements gathering process was inadequate, the requirements weren’t fully reviewed – or whatever – the management of the reconciliation process is all that lies in your control, and I believe it is best done by adhering to three simple rules:

  1. Test against the requirements (not expectations)
  2. Apply strict change control for any deviation from requirements
  3. Allow an open-ended test period for reconciliation

What this means is that firstly the written, signed-off requirements are what you develop, deliver and test against to claim project success (importantly, and knowingly, not business success). If the business expected a different outcome that is immaterial in terms of the project’s accountability. This is often frustrating to the business, but vital to the project so that those involved can safely say they have done what was asked of them with the resources provided (if they did…   it can equally be used by the business to highlight a poorly performing project team).

The second point means that any deviation from the written, signed-off requirements is properly captured and costed. I’ll be the first to admit that neither of these policies will make a project lead terribly popular, but it is for the benefit of the project and the business. The reason behind this is that the cost of insufficient / missed requirements is spelled out as a business and project cost, not simply a poor project delivery cost. It raises the visibility to the business of these changes, and helps prevent the business being able to offload the costs (in monetary and image terms) to the project team.

The third point means that when estimating for the reconciliation testing, you must have a large contingency period and must not be held to a fixed cost and time for it. In the reconciliation period it’s likely you will discover new or incomplete requirements, which means cycles of more development and testing. There is no way of knowing in advance exactly what this will be – any guess will be a stab in the dark – and I once saw a 6 month project overrun by 18 months as this phase ran its course, all at the consultancy’s expense.

Justifying wearing the pain

The benefit of these strict policies is not around costs and timelines, which will probably drift by roughly the same amount whether they are managed on a reactive “fix them as they come” basis or under the strict approach I propose. The benefit is in ensuring the reason for the drift is understood and the pain is shared, not allocated 100% to the project team.

If the changes are managed reactively, then in the short term the project delivery team feel they are being helpful and accommodating. In the long term, however, the customer starts perceiving a couple of things. First, that all delays in the project are the delivery team’s fault, as they are the ones always taking longer to implement the requirement – even though the delay is a non-technical one to do with changed requirements. Second (and more dangerous) is the belief that change comes at zero cost to them – so they have no hesitation in adding extra components or requirements, further delaying delivery and making the project team look even worse.

Applying strict change control is not about pushing back against the business to prevent them making changes – it is about making visible to them the cost of those changes. It’s a lot easier to face up to a stakeholder who asks why a project is running late if you can quantify the delays in terms of specific problems and shared responsibilities.

Yes, it’s Project Self Defence

One initial comment I had on this approach was that if the project delivers, but it’s wrong, then it’s still wrong. And I agree – this is purely Project Self Defence.

First, it’s about managing cost and budget – you can do what you are asked with the resources and time you estimated. You cannot necessarily do what the business expects with those resources. Any gap between requirements and expectations needs to be managed and the cost understood and shared. Especially that painful “match the old system” testing period which can go on for a very long time.

Second it’s about managing perception and image of the project. If you run late / over cost because of accommodating change then the project team suffers all the reputational damage. If you can call out that the delays are for identifiable reasons where the responsibility is shared, then the delivery problems become a joint concern with more buy in from the business.

Hopefully you’ll now think twice before accepting that testing requirement…


Human Infrastructure & Analyst First

Recently I was watching a vendor vs vendor punchup on LinkedIn – various salespeople, vested interest consultants and fanboys all trying to declare their database was clearly far better than the other. To me it looked suspiciously like a bunch of car salesmen desperately trying to convince someone their vehicle was the superior one because of x,y or z feature.

The Stig

Rather driven by having recently attended an early meeting of Analyst First, I was somewhat bemused at the complete sidelining of the human component. Hands down, I will agree a Porsche 911 is technically faster than a Subaru Impreza. However, stick them both on the same track, put me in the Porsche and the Stig in the Impreza – and I wouldn’t put great odds on me crossing the finishing line first (or, to be honest, at all – I’m no race driver and would probably end up in a ditch with spinning wheels).


In any tooling choice, it is smarter to pick a toolset with which you can comfortably match people’s skills and experience. I will build you a great Microsoft BI solution, because I know the toolset intimately and will squeeze every possible drop of value out of it. I will make a middling Cognos solution because I roughly know what it does and should do in theory (I will also complain vociferously about anything MS BI can do better that it can’t). I will build you a terrible Jaspersoft solution because I don’t even know how to turn it on.

The impact of a few shortfalls here and there in capabilities of a toolset your team are familiar with will be minor compared to the impact of them blindly feeling their way through a new toolset with a set of preconceptions based on how their previous one worked.

Analyst First

Which is kind of where Analyst First comes in. They represent a component of the Analyst community here in Australia with a very focused aim: to equip the man, not man the equipment. What does that mean in practice? It means not spending the big bucks on analytics software and expecting the analytical manna to start falling from heaven, but instead spending it on the people who know the raindance, so to speak. Their proposition is simple and quite reasonable: a good analyst first and foremost needs skills – not tools – to do their jobs well.

Rolling back to the car analogy, there is no point buying a learner driver a Porsche – spend the money on driving lessons first. The learner will benefit more from it, and will also not suffer from the false sense of security a powerful car can give you. I’m fast! I’m safe! I’m wrapped around a lamppost! Oops. Analytics is a tricky occupation – it’s very easy for powerful tools to give you an answer, and for the inexperienced analyst to believe it must be right because the expensive tool made the answer (and made it look pretty to boot).

I’ve done just enough Data Mining to know that the wrong answers can leap off the page and look very convincing until you look under the hood as to why you get that answer. One example was that I had a strongly predictive indicator come out of my data. It predicted with about 95% accuracy that if factor Y was present, the customer fell into category X. Convincing stuff. Until I got under the hood and discovered that factor Y was only ever entered into the system for customers in category X. It went from being 95% predictive to 0%.
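That kind of leakage is easy to reproduce. Here’s a minimal sketch (with invented data, not my client’s) of a factor that is only ever recorded after a customer has already been classified, so it “predicts” the category perfectly while telling you nothing:

```python
import random

random.seed(42)

# Hypothetical customer records. Factor Y is only ever entered into the
# source system *after* a customer has been put into category X, so it
# leaks the answer rather than predicting it.
customers = []
for _ in range(1000):
    category = "X" if random.random() < 0.3 else "other"
    factor_y = category == "X" and random.random() < 0.95  # data-entry habit
    customers.append((factor_y, category))

# The "model": predict category X whenever factor Y is present.
with_y = [c for c in customers if c[0]]
hits = sum(1 for c in with_y if c[1] == "X")
print(f"P(category X | factor Y) = {hits / len(with_y):.0%}")  # looks perfectly predictive
```

Any mining tool will surface factor Y as a star predictor; only a human who knows the data-entry process can spot that it’s worthless for predicting anything in advance.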

Human Infrastructure

Tying these two related topics together is the concept of Human Infrastructure, one that is often neglected in project plans and budgets. BI – and its cleverer, if scruffier and more academically inclined, relative, Analytics – is not just another system that needs a mundane user guide stating “to get outcome A, press button B”. To get value out of data you don’t just need to know how to use the tool – you also need to understand how to analyse data. This is a mishmash of competencies around maths, stats and logic, to name a few, none of which can be bypassed through use of a tool.

I often hear that users don’t want to know about the details of calculations and aggregations, and that BI should just serve it up on a plate. This worries me: if your end users aren’t motivated enough (or, as few people will dare say out loud, smart enough) to understand how an outcome arose, but are prepared to make decisions on it, then they will make bad decisions. Witness the subprime crisis, driven by people selling instruments devised by clever quants regardless of their own ability to understand them.

The bottom line: Worry less about tools, and more about the people that are going to use them.


Exceptional programmers are 100 times better than average ones

… at least, this is a claim made by Mark Zuckerberg of Facebook:

“Someone who is exceptional in their role is not just a little better than someone who is pretty good,” he argued when asked why he was willing to pay $47 million to acquire FriendFeed, a price that translated to about $4 million per employee. “They are 100 times better.”

The source of this quote is a blog post, Great People Are Overrated, which actually argues the counterpoint – that a solid team is more important than a few superstars. The comments are actually more enlightening than the article itself and largely agree with Zuckerberg’s position – I suggest reading through some of them.

As a total aside, the comments also led me to an article about a book I’ve added to my must-read list – The Mythical Man-Month – which covers various software project management issues, some of which relate to the above. The Wikipedia article has a good summary of the key points (not least that nine women can’t make a baby in one month).

So, Superstars or Average Joes?

The answer probably lies somewhere in the middle. You need the superstars to provide vision, solve problems and create genuinely innovative and effective solutions. You need the Joes to implement, fix and maintain – to do the stuff that would waste a superstar’s energy and focus. This also gives the Joes a chance to develop, learn – and potentially become stars themselves (the caveat being that there will always be Joes who stay Joes – and perhaps these should be fired if you want to truly excel as an organisation). Also, once an organisation reaches a certain size it needs structure, process and systems to operate without collapsing, and this is a Joe job, not a Superstar one.

Something that perhaps isn’t drawn out by the article is the issue of domains of stardom. Superstars will invariably be able to grasp the concepts of other domains they turn their focus to, and may even have the capacity to excel in them, but they tend to be Superstars in their own domain. Loath as I am to use sports analogies, a Superstar football player will probably make a pretty good rugby player – but not a Superstar – until they decide that’s what they want to do and focus exclusively on it. In my own little world, I’d happily state I’m an SSIS Star – not the best, but outclassing most. However I’m an average SSAS guy, and wouldn’t want to trade MDX punches with the likes of Boyan Penev.

The reason I draw out the domain issue is one of ego. People who are very good at one thing often get confused and think their domain expertise qualifies them to speak out on other issues. Witness, for example, Linus Pauling’s absurd position on Vitamin C – a medical issue upon which, as a great chemist, he was utterly unqualified to comment.

A key message to take away is that you need Superstars to succeed and excel. A team of Joes will never make your organisation great, just functional.

What is the ideal Superstar : Average Joe ratio?

The ideal ratio will be weighted by the size of your organisation. In a small outfit you need to be made up of near 100% greatness so that you can drive, expand and succeed. In a bigger one, you are compelled by supply to bring on Joes, purely because there aren’t enough Stars around – and no Superstar wants to do donkeywork. Besides, there is donkeywork to be done and you don’t want to waste your best people on it. You can, however, multiply the value of the Superstars by getting them to create solutions and solve problems without getting slowed down by the detail of actual implementation.

Superstars grow and drive your business. Joes maintain it. Is growth worth “X” times more than maintenance? To see how business answers that question, simply ask yourself why the salespeople get paid more than a grunt developer…


Too Many Hats

I’m coming to the end of a project and contemplating some of the lessons learned during its near 9-month duration. One key lesson came from the middle of the project, when a resourcing issue meant my role was expanded to cover more activities – from pure architect to hybrid architect / project manager / senior developer. Not a problem in itself – as a consultant you expect to wear many hats – technical expert, customer relationship manager, architect, sales guy – it’s all part of the fun of consulting. However, I found myself on the hook at one point for a hat too many – and that hat was the easiest for me to wear: the developer hat.

The difference with the developer hat is two-fold. First up, you have hard deliverables. If you are a bit late with a project plan, or your architecture isn’t quite complete, life can usually move on. If your code is late, you start holding up other developers and raising red markers on the project plan. Of course, the code you get allocated as a senior guy is not the easy stuff but the bits where dragons can be found – so the odds on it being late go up. Which leads to the second aspect – you fall under pressure because your code is late, and under pressure you focus on what you are comfortable dealing with – coding – and everything else gets left by the wayside.

Consequently the other activities on my plate began to lose focus – notably the project management ones (organisation is not my strong point anyway) – and the project began to suffer, because soft deliverables can slip… but only so far. Thus the pressure rose and my head stuck itself deeper into the tasks I could deal with and knew I would get called out on.

The tl;dr summary of this is that when doing multiple roles on a project:

  • Hard deliverables add significant extra pressure to your role
  • It is easy to unintentionally put more effort into the role you are most comfortable with (especially under pressure)


COSH – The Cost of Substandard Hardware

Now, some of you may note that “Substandard” is probably a more polite term than most would use in the acronym “COSH” – but this is a family friendly blog (just in case there are any 8-year-old ETL developers out there).

I’m sure all of us, at some point in our development careers, have been given the worst PC in the building, dev servers made of bricks, and the wonky chair that needs a degree in engineering to sit on without sustaining injury. On a personal level this sucks, as you can’t work as fast as your brain wants, which rapidly becomes frustrating. The thrill of a dangerous chair wears off pretty quickly, too.

However, looking at it from above, this also attracts a cost to the project as a whole. For example, say you have all your development machines hosted virtually. Makes sense – it’s easy to reproduce your dev environments for everybody, and it avoids having to get lots of stuff installed on desktops – all round a sensible approach. This works in practice, but there are two things you really have to think about:


First: if your host goes down, then your developers are offline. Let me spell out how much that costs you in cold, hard maths:

Number of developers * Developer cost per hour * Hours of downtime = Cost of failure.

So, assuming a standard Australian dev resource @ $100ph, that means if you have a team of 4 devs, a day of downtime costs this:

4 * $100 * 8 = $3200 / day

Plus of course that doesn’t allow for the cost of delaying the project by one day. Suddenly that backup host doesn’t seem so expensive. Or the cost of some Infrastructure consultants to make sure that the homebrewed Hyper-V setup is actually configured properly, which leads me on to…


Second: poorly performing hardware can slow developers down. Beautiful though your documentation and planning may be, much development is still only practical through a repeated “run-check-fix-rerun” cycle, which is also the only way to unit test. If your dev hardware runs at half the speed it could (say, relative to production), then that slows your developers down. It’s not quite as clear cut as above, as developers develop as well as run – I’d say roughly 50% of dev time is spent running; feel free to make your own judgment, but here’s the maths:

Number of developers * Developer cost per day * Developer Run Time *  (Relative performance -1) = Cost of poor performance.

So, assuming a standard Australian dev resource at $100 per hour ($800 per day), a team of 4 devs running 50% of the time on half-speed hardware incurs this daily slowdown cost:

4 * 800 * 0.5 * (2-1) = $1600 / day

Not quite as expensive as downtime, but a subtle, creeping cost nonetheless.
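The two formulas above reduce to a couple of throwaway functions, using the same rates and ratios as the worked examples (tune the assumptions to your own shop):

```python
def downtime_cost(devs, rate_per_hour, hours_down):
    """Cost of failure: developers * hourly cost * hours offline."""
    return devs * rate_per_hour * hours_down

def slowdown_cost(devs, rate_per_day, run_fraction, relative_performance):
    """Cost of poor performance: only the fraction of the day spent running
    code is slowed, and only by the (relative performance - 1) overhead."""
    return devs * rate_per_day * run_fraction * (relative_performance - 1)

print(downtime_cost(4, 100, 8))       # 3200  - a full day with the host down
print(slowdown_cost(4, 800, 0.5, 2))  # 1600.0 - hardware running at half speed
```

Plug in your own team size and day rate, and it quickly becomes clear how little downtime or slowdown it takes to pay for a backup host.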


The above examples apply to any setup, not just virtual – slow desktops, flaky database servers, slow networks – they can all have an impact. You may disagree with my maths (I’m happy to take your views on my formulas), but what I’m trying to illustrate is that there is more to poorly performing development environments than just annoyed developers. It costs the project time and money – and these costs can easily get big enough to warrant spending on expertise or hardware to remediate.


Waterfall and the Illusion of Control

I recently overheard a PM at my client site say the following:

We’re great at delivering projects on time and on budget. We’ve delivered three such projects! The original project, the second project to address all the scope we cut in the first phase to deliver it on time and on budget, and the third project to address all the scope we cut in the second phase. By phase three we had finally delivered the original project!

This was obviously to a large extent tongue in cheek, but it exposes a significant weakness in overplanning a project, which I think is Waterfall’s biggest problem. There are three variables in any project: schedule, scope and budget. Any project manager’s job is to tame these beasts and bring them in line with The Plan. The problem is that adhering to The Plan becomes paramount in a Waterfall-driven project, because the Project Manager is held accountable to it. At a high level, in my experience most Project Owners, when whipping the PMs, will measure them against (in order) budget, schedule and scope. See what was last on that list? Scope – the useful bit of the project that the business actually uses. But if they have managed to do the project (as an abstract concept, anyway) on budget and on time, they have delivered the illusion of control.

To me, any project should put scope at the top of the list, as scope = business value. Budget should be mapped to scope areas so you can get value out of what you pay for. For example, if you have a shiny UI that is pretty much neutral in terms of benefit relative to cost, then as soon as it starts overrunning, can it. If you have a core DW platform whose benefits outweigh its cost tenfold, then allow it to overrun, as long as you are still going to get payback. Schedule is, to me, almost irrelevant – it should be along the lines of “When do we want it? Now!” If you don’t want it now… well, why are you building it?

What’s the solution? I’m not going to jump up and start shouting Agile, but at least Agile puts scope back at the top of the list. In big projects it may not be the right approach – but it’s a set of processes to consider. Ultimately I understand the need for control to be in place – after all, the business’s wallet is not infinitely deep and managers rarely have the patience of saints – but I think overly planned approaches result in diminished delivered value.
