The fine art of starting to adopt Agile with a Zero sprint

Agile methodologies have a patchy track record in BI/DW projects. A lot of this has to do with how the methodologies are adopted – as I’ve alluded to in prior posts, there are a heap of obstacles in the way that are cultural, process and ability based.

I was discussing agile adoption with a client who readily admitted that their last attempt had failed completely. The conversation turned to the concept of the Zero sprint, and he admitted that part of the reason for the failure was that they had allowed zero time for their Zero sprint.

What is this Zero sprint anyway?

The reality of any technical project is that there are always certain fundamental decisions and planning processes that need to be gone through before any meaningful work can be done. Data Warehouses are particularly vulnerable to this – you need servers, an agreed design approach and a set of ETL standards before any valuable work can be done – or at least not without incurring so much technical debt that your project gets sunk after the first iteration, cleaning up after itself.

So the Zero Sprint is all that groundwork that needs to be done before you get started. It feels counter-agile, as you can easily spend a couple of months producing nothing of any direct value to the business/customer. The business will of course wonder where the productivity nirvana is – and what is particularly galling is that you need your brightest and best on it to make sure a solid foundation gets put in place, so it’s not a particularly cheap phase either.

How to structure and sell the Zero sprint

The structure part is actually pretty easy. There’s a set of things you need to establish which will form a fairly stable product backlog. Working out how long they will take isn’t that hard either as experienced team members will be able to tell you how long it takes to do pieces like the conceptual architecture. It just needs to be run like a long sprint.

Selling it as part of an Agile project is a bit harder. Because you end up not delivering any business-consumable value, you need to be very clear about what you will deliver, when you will deliver it and what value it adds to the project. It starts smelling a lot like Waterfall at this point, so if the business is skeptical that anything has changed, you have to manage their expectations well. Be clear that once the initial hump is passed, the value will flow – whereas if you skip it, value will flow earlier than they expect, but soon afterwards the pipes will clog with technical debt (though you may want to use different terminology!).


Managing complex security in SSAS Tabular – Yeah Nah edition

A couple of times recently I have come up against requirements that called for some fairly complex logic to apply security. One involved some fairly gnarly relationships coming from multiple directions; the other involved grinding through hierarchies from parent nodes down to the permitted viewable children.

The problem in both cases is that though the logic can sometimes be written (albeit usually in an ugly-as-hell manner), the functions needed to do so perform atrociously. For complex relationships you are obliged to take in context after context, changing filters and doing all sorts of DAX voodoo – and as we know by now, avoiding relationships is good for performance. Hierarchies can be managed through the PATH function, but that is a text operation and far from speedy.

Let’s give a quick example of some complex security – consider the below data model:

Complex Model

Here the security controls who can see what has been spent on a Task in the FactTable object. Who can see what depends on their Role and/or the Unit they are in. There is also a 1:many relationship between a person and the logins they can use.

So for dynamic security you need to navigate from the User Id to the Person and assess what Unit they are in for the Unit-based permissions. You also need to assess what Role they hold to get the Role-based permissions.

I took one look at this and shuddered at the messy DAX I was going to have to write, plus how terribly it would perform.

Do it in the Cube? Yeah Nah.

So I thought “Yeah nah” and decided the cube was the wrong place to be doing this. Ultimately all I was trying to get towards was to pair a given login with a set of tasks that login would have permissions against. This is something that could easily be pushed back into the ETL layer. The logic to work it out would still be complex, but at the point of data consumption – the bit that really matters – there would be only minimal thinking by the cube engine.

So my solution enforces security through a role that scans a two-column table containing all valid pairings of login and permitted task – very fast to execute when browsing data and a lot easier to code for. The hard work is done in loading that table, but the cube’s application of security is fast and easy to follow. The hierarchy equivalent is a pairing of User Id with every node in the hierarchy that user is permitted to see.
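
As a rough sketch of what that role filter can look like – the table and column names below are hypothetical, assuming the ETL loads a two-column SecurityTaskPermission table of Login/TaskKey pairs and that FactTable carries a matching TaskKey – the row filter on FactTable reduces to a single lookup:

// Hypothetical row filter for the role; object names are assumptions, not from the original model
=CONTAINS(
    SecurityTaskPermission,
    SecurityTaskPermission[Login], USERNAME(),
    SecurityTaskPermission[TaskKey], FactTable[TaskKey]
)

All the cube engine does at query time is check whether the current USERNAME() and task pairing exists in that table – no relationship hopping, no PATH grinding.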

As a final note, for those non-Aussie readers the expression “Yeah nah” is a colloquialism that implies that the speaker can’t be bothered with the option in front of them. For example: “Do you want a pie from the Servo, Dave?” “Yeah nah.”


Dynamic Time Variant calculations in DAX

A common requirement in any set of calculations is to create a range of time variants on any measure – Prior Period, Year to Date, Prior Year to Date, Prior Quarter…   you think of a time slice and someone will find it useful.

However, the downside is that you end up maintaining lots of calculations in the model that are all largely doing the same thing. Any good coder likes to parameterise and make code reusable. So how could we do this in Tabular? There is a way: a very specific variant of the idea of Parameter Tables.

Disconnect your Dimensions!

Step one is to unhook your Date dimension from your fact table. This may seem counter-intuitive, but it frees you to use the Date dimension as a source of reference data that doesn’t filter your data when you select a date – which simplifies the subsequent calculations significantly. You also need to add to the Date dimension all the dates you will need to perform your calculations – year starts, prior year starts, period starts, etc. This isn’t compulsory, but you’ll be grateful later on when you need these dates and don’t have to calculate them on the fly, trust me. Your Date table (I’m going to stop calling it a dimension – it isn’t one any more) will end up looking something like this:

Date Table

In practice you would hide all the columns apart from the Date as this is the only one that actually gets used by users.
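
As a rough sketch of those pre-calculated columns – assuming a “period” means a calendar month, which is an assumption on my part – the columns referenced later could be built as calculated columns along these lines (one formula per column):

PeriodStart              = DATE(YEAR(Dates[Date]), MONTH(Dates[Date]), 1)
PriorPeriodStart         = EDATE(Dates[PeriodStart], -1)
YearStart                = DATE(YEAR(Dates[Date]), 1, 1)
SamePeriodPriorYearStart = DATE(YEAR(Dates[Date]) - 1, MONTH(Dates[Date]), 1)
PriorYearStart           = DATE(YEAR(Dates[Date]) - 1, 1, 1)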

Time for the Variants

Next, we need to create a simple filter table to apply the Time Variant calculations. All it needs is a numeric identifier per variant and a variant name, like so:

Variants Table

This – quite clearly – isn’t the clever bit. The thing to observe about all of these variants is that each one defines a date range. So what we need to do is calculate the applicable Start and End dates of that range. This is where we are grateful we pre-calculated all those dates in our Date table. We add two measures, StartDate and EndDate, which detect which Time Variant is being calculated and then work out the appropriate date based on the currently selected date. The DAX for StartDate looks like this:

StartDate:=
SWITCH(MIN([VariantID]),
1,MIN(Dates[PeriodStart]),
2,MIN(Dates[PriorPeriodStart]),
3,MIN(Dates[YearStart]),
4,MIN(Dates[SamePeriodPriorYearStart]),
5,MIN(Dates[PriorYearStart])
)

We use a SWITCH statement against the VariantID to detect which Variant we are trying to get the date range start for, then pick the right date from the Date Table. Pre-calculating these in the Date table keeps this part simple.
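
The EndDate measure follows exactly the same pattern – a SWITCH over the VariantID returning the end of each range. A minimal sketch is below; the *End column names are hypothetical, assuming the Date table also carries the matching end dates, and the to-date style variants simply use the selected date itself as the end of the range:

EndDate:=
SWITCH(MIN([VariantID]),
1,MAX(Dates[Date]),
2,MAX(Dates[PriorPeriodEnd]),
3,MAX(Dates[Date]),
4,MAX(Dates[SamePeriodPriorYearEnd]),
5,MAX(Dates[PriorYearEnd])
)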

Add it all up

The final part is to pull these dates into the measure:

TotalTransactionAmount:=
SUMX(
    CALCULATETABLE(
        Transactions,
        DATESBETWEEN(Transactions[TransactionDate], [StartDate], [EndDate])
    ),
    Transactions[TransactionAmount]
)

This works by using the DATESBETWEEN function to apply a custom date range filter to the Transactions table – a range we create dynamically through our StartDate and EndDate measures.

Our end result:

Time Variant Results

We can see above that, for a single selected date, we can generate a range of Start and End dates and apply them to our single summarising measure to create multiple Time Variants.

The sample workbook is here: DAX Time Variants


Great PASS BIDW VC Video on how Vertipaq Compression works in SSAS Tabular / PowerPivot

Embedded below is a great video from Marco Russo on how the compression engine works in SSAS Tabular / PowerPivot:

This is from the SQL PASS BI Virtual Chapter YouTube channel – well worth nosing around now that they post all their sessions there (handy for those of us in Oz, where the live session timings are usually not great).


Shrink Tabular column space used by over 50% using a simple trick

I’ve recently wrapped up writing the draft of a PowerPivot book (news on that once it’s published) and as part of making sure I “knew my onions” I spent a bit of time working my way around the compression engine. I came across this post – Optimizing High Cardinality Columns in VertiPaq – by Marco Russo, and it sparked my interest in seeing how it could be applied to a couple of common data types: financial amounts and date/times. This first led to me getting distracted building a tabular model to see how much memory columns (and other objects) used. Now I’m getting back to what took me down that path in the first place: seeing how different data type constructions affect memory usage.

How PowerPivot compresses Data

As an introduction, it really helps to understand how PowerPivot compresses data in the first place*. The key tool it uses is a dictionary, which assigns an integer key to each data value. When the data is stored, it is actually the key that gets stored rather than the value. When presenting the data, it retrieves the keys and shows the user the corresponding values from the dictionary.

To illustrate, in this list of Names and Values:

Names and Values

We have several repetitions of Name. These get stored in the dictionary as follows:

Names Dictionary

Then, internally PowerPivot stores the data of Names/Values like this:

PowerPivot Stored Data

This results in high compression because a text value takes up much more space than an integer key in the database. The effect multiplies the more repetitive (i.e. lower cardinality) the data is. High cardinality data – typically numeric values and timestamps – does not compress as well, because the number of dictionary entries is often not much less than the number of actual values.

* Quick caveat: this is the theory, not necessarily the practice. The actual compression algorithms used are proprietary to Microsoft so they may not always follow this pattern.

Splitting Data – the theory

The key to Marco’s approach is to split data down into forms with lower cardinality. So what does that mean?

For a financial amount, the data will be in the form nnnnnnn.dd – i.e. integer and fraction: dollars and cents, pounds and pence, etc. The key thing is that the cents/pence/“dd” portion is very low cardinality – there are only one hundred variations. Also, stripping out the “dd” portion will probably reduce the cardinality of the remaining number overall. For example, consider these four unique numbers:

  • 4.95
  • 4.50
  • 7.95
  • 7.50

That is four distinct numbers… but two integer parts and two fraction parts. At this small scale it makes no difference, but for thousands of values it can make a big impact on cardinality.
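
Just to make the split concrete: the post does this upstream in the generating T-SQL, but the same logic expressed as DAX calculated columns – purely a hedged sketch, with the table name Trans assumed and amounts assumed non-negative – is simply:

// Hypothetical calculated columns; integer part plus a two-decimal fraction part
TranAmountInteger  = INT(Trans[TranAmount])
TranAmountFraction = ROUND(Trans[TranAmount] - INT(Trans[TranAmount]), 2)

In practice you would push this into the ETL so the model only ever sees the two low-cardinality columns.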

For a DateTime the data will be in the form dd/mm/yy : hh:mm:ss.sss. You can separate out the time component or round it down to reduce cardinality. Your use case will determine what makes sense, and we will look at both below.
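
Similarly, a hedged sketch of the date/time split (same assumed table name, and again better done upstream in the ETL) – the date part plus the time expressed as whole seconds since midnight, stored as an integer:

// Hypothetical calculated columns; date at midnight plus integer seconds-since-midnight
TranDate       = DATE(YEAR(Trans[TranDateTime]), MONTH(Trans[TranDateTime]), DAY(Trans[TranDateTime]))
TranTime_num_s = HOUR(Trans[TranDateTime]) * 3600
               + MINUTE(Trans[TranDateTime]) * 60
               + SECOND(Trans[TranDateTime])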

Splitting Data – the practice

Any good theory needs a test, so I created a one million row data set with the following fields:

  • TranCode: A 3 character Alpha transaction code
  • TranAmount: A random number roughly between 0.00 and 20,000.00
  • TranAmountInteger: The Integer part of TranAmount
  • TranAmountFraction: The Fraction part of TranAmount
  • TranDateTime: A random date in 2014 down to the millisecond
  • TranDate: The date part of TranDateTime
  • TranTime_s: The time part of TranDateTime rounded to the second expressed as a time datatype
  • TranTime_ms: The time part of TranDateTime rounded to the millisecond expressed as a time datatype
  • TranTime_num_s: The time part of TranDateTime rounded to the second expressed as an integer datatype
  • TranTime_num_ms: The time part of TranDateTime rounded to the millisecond expressed as an integer datatype
  • TranTime_s_DateBaseLined: The time part of TranDateTime rounded to the second expressed as a datetime datatype, baselined to the date 01/10/1900
  • TranTime_ms_DateBaseLined: The time part of TranDateTime rounded to the millisecond expressed as a datetime datatype, baselined to the date 01/10/1900

The generating code is available here. I’ve used some T-SQL Non Uniform Random Number functions to get more “realistic” data, as early drafts of this test were delivering odd results – the data was so uniformly distributed that VertiPaq couldn’t compress it effectively.

You may be wondering why I’ve produced TranTime as both time and datetime datatypes – the short answer is that SQL Server time datatypes get treated as text in the tabular model, so I wanted to check whether that made a difference, as I was getting some strange results for the split times.

I then imported the table into a tabular model, processed it, and used the DISCOVER_OBJECT_MEMORY_USAGE DMV to work out the space consumed by each column. The results were as follows:

Split Column Memory Usage

There was a clear saving from splitting the financial amounts into integer and fraction parts – the split columns saved around 50% of the space.

DateTime behaved very oddly. Rounding down the precision from milliseconds to seconds brought big savings – which makes sense as the cardinality of the column went from 1,000,000 to 60,000. However splitting it out to just the time component actually increased space used.

I tried fixing this by baselining the time component to a specific date – so all millisecond/second components were added to the same date (01/01/1900) – but this made basically no difference.

A more effective variation was to just capture the number of milliseconds / seconds since the start of the date as an integer, which saved about 89% and 92% of space respectively.

Splitting Data – the advice

There are certain costs associated with doing so – such as losing the ability to do a DISTINCTCOUNT on the original values – but if your model is pushing memory limits, then splitting decimal numbers into their integer and fraction parts (especially currency fields) can make a big difference. My experiments showed around 50%, and that was using fairly random data; real life tends to be a bit more ordered, so you can hope for greater savings.

Fundamentally it looks like DateTime values compress poorly, and Time values even more so. A better solution – at least from a compression standpoint – is to store the date value as a Date datatype in the model, and have any time component stored as integers. How this impacts performance when bringing these together at runtime using the DATEADD function is a matter for you to test!


Productivity issues for Agile in BI/DW – Part 2: Technology

Agile in a BI/DW environment faces a unique set of challenges that make becoming productive more difficult. These issues fall into a couple of categories. First are the difficulties in getting the team to the productivity nirvana promised, which I covered in this post. Second are the difficulties posed by technology and process, which I’ll talk about today.

Some obstructions cannot be moved by thought alone.

Solving problems by thought alone

At a very high level, agility in traditional coding environments runs like this: the user states requirements, the coder develops an application that meets those requirements, test, showcase, done.

In BI/DW environments the process is less contained and has a lot of external dependencies. A user requesting a metric on a report is not just a matter of coding to meet that requirement – we need to find the data source, find the data owner, get access to the data, process it, clean it, conform it and then finally put it on the report. Depending on the size and complexity of the organisation, this can take anywhere between days and months to resolve.

Agile development as it is traditionally understood – short sprints and close user engagement – works well for reporting and BI when the data has already been loaded into the Warehouse. If you are starting from scratch, your user will often have become bored and wandered off long before you give them any reporting.

(Yes, once again, nobody cares about the back end because it’s boring and complicated)

Rather than move the mountain to Mohammed…

There are some steps you can take to mitigate this. The product backlog is your friend here. Often with some relatively light work on the backlog you can identify which systems you are going to hit and broadly what data you will need from those systems.

On a large scale project you may find that you have multiple systems to target, all of which will vary in the time from discovery to availability in the DW. Here I generally advocate switching to a Kanban-type approach (i.e. task by task rather than sprint based), where you try to move your tasks forward as best you can, and when you get blocked on one system, you move on to another while you wait for it to unblock.

As systems get delivered into the EDW you can start moving to delivering BI in a more interactive, sprint-based fashion. I generally advocate decoupling the BI team from the DW team for this reason: the DW team works on a different dynamic and timescale to the BI team (though note I count building Data Marts as a BI function, not a DW function). You do run the risk of building Data Warehouse components that are not needed, but knowing you will discard some effort is part of Agile thinking, so it shouldn’t be a big concern.

Once again it’s about people

You may notice that none of the issues I’ve raised here are set-in-stone technical issues. It’s still about people – the ability of external people to react to or accommodate your needs, the capacity of users to stay engaged in protracted development processes, the flexibility of project sponsors not to insist on a rigid scope.

Good people who can be flexible and accommodate change are the keystone to agile success. No tool or process will ever trump these factors.


Productivity issues for Agile in BI/DW – part 1: People

Agile in a BI/DW environment faces a unique set of challenges that make becoming productive more difficult. These issues fall into a couple of categories. First are the difficulties in getting the team to the productivity nirvana promised. Second are the difficulties in simply being productive. Today I’ll focus on the first case.

Productivity nirvana is hard to find.

Nirvana

A core principle of Agile is the cross-functionality of teams – so if there is slack in demand for one type of resource in a sprint, that resource can help out where there is stress on another. A coder may pick up some test work, a web developer may help with some database design, a tester may help with some documentation, and so on. The end result is that the team can pretty much jump into each other’s shoes for basic tasks and only lean on the specialists for the tricky bits.

In BI/DW this cross-skilling is harder to pull off. The technical specialisation is more extreme – people tend to sit in the ETL, Cube or Report developer buckets, and it’s taken them quite a while to get there. There is occasional crossover between a couple of technologies (usually at the BI end, between Cube and Report), but true polymaths are very rare. Plus the skills required to be good at any of these technologies tend to need very different mindsets – ETL developers tend to be methodical, logical thinkers with a strong eye for detail and a love of databases, whereas report developers are often more creative and engage more with people (the business). This makes hopping into other team members’ shoes quite hard.

Meditations on the path

These things can be overcome to an extent by limiting the domains where cross-skilling is expected. In smaller teams this can be done by focusing the areas where the team supports each other away from the technical – for example, testing and documentation can be pretty process driven, and an ETL developer can easily test a report. Expectations around cross-skilling need to be reined in and the sprint planned with that in mind. This isn’t to say that cross-skilling can’t arise – but the time to get there is going to be a lot longer.

In larger teams you can look at dividing up the teams into areas where cross-skilling is more practical. Typically I like to partition the DW and BI teams, though I take the perspective that your data mart ETL developer is part of the BI team, which means you do need a fairly flexible player in that BI ETL role.

Once again it’s about people

A topic I like to hammer home is that most of your project concerns are not technical or process driven – it’s all about people, specifically people’s ability and willingness to adapt and learn. Picking team members who can adapt, are willing to adapt and can see the value to themselves in doing so is going to get you to the productivity nirvana that much faster.

As always, thoughts, comments and war stories welcome!


IAPA 2014 Salary Survey

The IAPA salary survey came out a couple of months back, and though it is Analytics focused it has some interesting results for those of us in the BI world. My key takeaways follow.

From a purely self-interested point of view, Analytics is a well paid profession and it’s getting more so. Further, recruiters are reporting that finding people is getting harder, which indicates the talent pool is not all that deep and has been sucked fairly dry already – something I experience regularly when trying to find BI talent.

If you want a job in the field, you’re best off being in Sydney or Melbourne. There also appears to be a minimum education level of a bachelor’s degree, with most professionals holding a master’s or higher. Marketing is one of the biggest employers of analysts.

For those in the field there seems to be a mid career slump in satisfaction (around the ten year mark). Fresh starters are all excited and lifers seem happy too, but somewhere in the middle the enthusiasm fades.

Despite all the market enthusiasm, a significant proportion of respondents reported an ongoing challenge in getting their organisation to value or act on analytics findings – supporting Eugene Dubossarsky’s claim that businesses invest heavily in vanity analytics so they can claim “me too”, rather than to derive real value.

Technical takeaways: for all the noise, Big Data is still a Small Concern and regular-sized analytical problems are prevalent. Excel is the #1 tool used to work with data, and if you are more of an integrator, good SQL skills are king.

Last of all, there still seems to be a heavy focus on social media analytics – despite its dubious value – but it pays better. Something which underscores the vanity analytics claims further.


Agility in BI

Over the last couple of years I have been increasingly exposed to Agile methodologies for delivering BI projects. I’ve even, once, accidentally got myself trained as a Scrum Master. The topic itself isn’t exactly new (I blogged about it at the end of 2009), but adoption is starting to pick up as recognition grows that it can significantly improve the results of BI projects. What follows are some early stage ramblings on my experience so far, as I try to pull together a more coherent story – I think it’s one that needs to be told and sold.

Agile Principles

For those of you unfamiliar with Agile, it stems from the Agile Manifesto drawn up by some experienced developers who figured there was a better way than waterfall. The aim is to deliver faster, earlier and closer to user expectations. This is achieved by focusing on high value deliverables, delivering them piecemeal in descending order of value and engaging very closely with the business. All sounds pretty sensible so far.

Agile People

There are a large variety of Agile methodologies in play – Scrum, Extreme Programming, Lean Startup… and none of them matter. Or at least, which one you pick doesn’t really matter. What matters more is whether your team does – or doesn’t – have the ability to work in an agile fashion. What I mean by this is that team members are self-starting, self-managing individuals with a low level of ego – for example, they are happy to pitch in for the team and test another developer’s work, rather than go “No, I code C++ and that’s all I will do.” Good Agile teams are made up of members who understand they are in a team, and that a team is not a set of discrete Lego bricks of skills but a fuzzy stew of ingredients where contributions blend into each other.

Now “the team” may just be your little pod of developers going out on a limb and trying something new. Or, as in a case I worked with last year, it may be a colossal financial services organisation where “the team” also encompassed very senior executives. Agility is a fragile thing and can just as easily be broken by a bottom-rung developer not pulling their weight as by a senior manager not understanding what the team is doing and trying to influence them to behave in non-agile ways.

Adoption Issues

The take-up of Agile is often fraught with frustration and disappointment that a nirvana of productivity doesn’t arise as soon as the Agile team forms. The reasons behind this are manifold – not least because it takes a while for the team to start working together under the new approach. Most methodologies acknowledge that initially productivity will be worse, rather than improved, until the team has formed, stormed and normed its way to a higher productivity level. Many businesses struggle with the idea that they must let the patient get worse before they can get better.

Something else I have seen frequently is that the business engagement is poor – and not for want of trying. Before Agile is attempted, the business will often complain that IT doesn’t involve them enough. Once it is attempted, they then grumble that it involves them too much. This is incredibly frustrating to see, as it underscores the business perception that IT is the servant and they are the master, rather than a valued partner trying to help them achieve their goals.

The Road to Success

I’m still gathering my thoughts on this, but I suspect part of the road to success in doing BI in an Agile way is going to involve raising some red flags in the same way you do for the classic modes of BI project failure (e.g. no business sponsor). Too often I’ve seen Agile fail because it is driven out of a development team wanting to do better but unable to get the support needed to make it happen. The other killer is the “WAgile” trap, where the team tries to operate as an Agile unit under a Waterfall project management approach.

I’m also keen to hear any readers’ war stories in the comments.


Top Six Worst Practices in BI – Vendor Nonsense

A while ago I was pointed at this TDWI white paper – “Top Six Worst Practices in Business Intelligence” – which turned out to be a classic TDWI vendor driven pile of steaming misinformation. Verbatim, here are their claims:

  1. Buying what analysts want without considering the needs of other users
  2. Deploying new BI tools without changing the Excel mindset into a BI platform mindset
  3. Making a BI purchasing decision based on one hot feature, or buying a feature list rather than laying the foundation for a comprehensive BI strategy
  4. Lack of a concrete data quality strategy or treating data quality as an afterthought
  5. Not taking a “mobile-first” approach, or not considering the needs of mobile BI users
  6. Ignoring new data, new sources, and new compliance requirements

Just in case anyone is in danger of believing this, I thought I’d give a rebuttal as I have a headache and am feeling grumpy.

1. Buying what analysts want without considering the needs of other users

The paper makes the somewhat bizarre (and unsupported) claim that:

Most companies make business intelligence (BI) purchasing decisions based on input from just one group of users – the business analysts. This single perspective, however, creates many problems.

I can safely say that in any purchasing decision I’ve ever come across, the business analysts’ input has been somewhere between nada and zip. The reality is that it’s driven by corporate policies and what legacy systems are already in place, licensed and supported – and once in a blue moon some idiot in management sees Tableau or Qlikview and buys that without considering any of the underlying problems it will cause.

There is a grain of truth in this point – any purchasing decision that doesn’t consider the end users’ preferred way to use information is doomed to failure. The back end is irrelevant – end users do not care about the Data Warehouse platform, the ETL tool or even the Cube platform you use. Just the front end. And most of the time, that front end is Excel.

2. Deploying new BI tools without changing the Excel mindset into a BI platform mindset

This is a vendor problem, not a user problem: users prefer Excel to their tool. Sorry about that to anyone who isn’t Microsoft. It’s also kind of odd, because this point contradicts their first one.

3. Making a BI purchasing decision based on one hot feature

Yes. This explains Tableau and Qlikview’s popularity. However the solution is not – as the erstwhile vendor claims – their product. In fact, I’m not even sure why this is on the list. Technical issues are rarely the source of BI project failure, so it doesn’t really matter what product you choose – and I’m sure every vendor in the world will recoil in horror at this uncomfortable truth. What matters are people and data. The tool connecting the two is often inconsequential.

4. Lack of a concrete data quality strategy or treating data quality as an afterthought

This I agree with. Data Quality is a huge pain point.

5. Not taking a “mobile-first” approach

If it’s relevant to your organisation. In my current organisation it is utterly irrelevant. In many projects I’ve worked on it’s been a nice to have that got discarded quickly due to its poor value. If it affects adoption, then it’s relevant and must be considered. If it won’t, it doesn’t need to be thought about.

6. Ignoring new data, new sources, and new compliance requirements

I’m ambivalent about this one. The implication is less about ignoring new data and more about being unable to adapt to it. Rigid, locked-down BI systems rapidly become irrelevant because they cannot change as the business changes. However this is as much a function of people and process as it is of technology.

How about ONE worst practice in Business Intelligence?

Try this instead: “Believing your BI implementation’s success or failure will be impacted by technology more than by the people using it”.

…as a cheeky second practice – “Believing anything vendors tell you”.

Merry Christmas and a Happy New Year!
