SQL PASS Day #2

So, on to day 2 of SQL PASS, and a very SSIS-focused one – mainly because I attended Matt Masson’s SSIS session and learned about a whole bunch of new features that have made it into the next version.

I took away the following interesting points from that session:

  1. CDC (Change Data Capture) is supported more effectively through some new components – a CDC Control Task, a CDC Source and a CDC Splitter
  2. ODBC improvements mean better performance for non-SQL Server databases
  3. Connection Managers get a few new features – Offline, Expression and Project indicators. Offline Connection Managers are now detected via a timeout and, importantly, halt validation of any related components (so you no longer get those delays while SSIS tries to validate components and flows hooked to dead connections)
  4. The Flat File Connection Manager can now handle a variable number of columns (i.e. it won’t crash)
  5. Pivot gets a UI – hurrah! (Note: SCD still sucks)
  6. Project Parameters can be configured at design time through Visual Studio Configurations
  7. Breakpoints are now available in the Script Component so we can see what data is causing our components to blow up
  8. Data Taps – data viewers for live execution that dump out to .csv files
  9. Package Execution via PowerShell or even T-SQL! (a rough sketch of the T-SQL route follows this list)
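
On point 9: the T-SQL route goes through the new SSISDB catalog stored procedures. A minimal sketch, assuming a project already deployed to the catalog (the folder, project and package names are made up for illustration):

```sql
-- Create an execution for a package deployed to the SSIS catalog
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name     = N'DemoFolder',          -- hypothetical folder
    @project_name    = N'DemoProject',         -- hypothetical project
    @package_name    = N'LoadCustomers.dtsx',  -- hypothetical package
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

-- Optional: set the logging level (object_type 50 = system parameter, 1 = Basic)
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id    = @execution_id,
    @object_type     = 50,
    @parameter_name  = N'LOGGING_LEVEL',
    @parameter_value = 1;

-- Kick the package off
EXEC SSISDB.catalog.start_execution @execution_id;
```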

Matt also gave a preview of Barcelona in action, and it looks pretty neat.

I also attended a DQS session that showed a few new features in the UI. Elad Ziklik highlighted that the CTP3 release should be viewed more as a capability preview than a test drive of a functional product – so I’m looking forward to a new CTP.

One more day down – one to go…!


SQL PASS Day #1

So the BI Monkey was at PASS and intended to blog on his phone or at one of the many internet pods provided, but sadly WordPress and IE / Android aren’t friends, so this is going to be a retrospective.

So, here are my takeaways from day 1:

From the keynote:

Denali becomes SQL Server 2012 and is slated for release in the first half of next year (way to allow yourself some leeway on the final release date!)

Crescent becomes Power View – and will work on multiple mobile platforms – Apple & Android via Browser, WP7 via a richer app

Hadoop is going to be supported on Windows Server – the first CTP is due next year. If you have access to the PASS DVDs / sessions, see Dr DeWitt’s presentation on Hadoop – very enlightening

From wandering around the summit:

As per Elad Ziklik of the DQS team, performance is one of their focus areas for the next release.

I got to finally meet Ivan Peev of CozyRoc after many years of email and phone conversations, and he told me about their new SAS Connector – which allows reading and writing to SAS datasets without an actual SAS install.

I also got to meet Matt Masson, guru of the SSIS team, who told me about Project Barcelona – a tool that will do data lineage, metadata management and impact analysis via a crawler, as opposed to a manually maintained metadata set.

I also got to sit in on a customer feedback session about BI in the Cloud – unfortunately all under NDA – but it was a great forum to discuss and help direct Microsoft’s Cloud BI ambitions.

I also had a chat with fellow Aussie BI guy Roger Noble, who told me about a use for the Term Extraction transformation in SSIS – using it to scan through documents and auto-tag them as they were uploaded to SharePoint – which is pretty cool.

So, that was Day 1… Day 2 to follow!


Simple Data Quality Scoring with SSDQS & SSIS

A common requirement in Data Warehousing is to apply a Data Quality “score” to records as they come in. The score is then used to identify and filter or fix bad data coming in depending on its assigned quality.

A practical example of this might be that in a Customer Address record, a missing Postcode might attract a high score as it’s a very important field, whereas a badly formatted work, home or mobile telephone number may attract a lower score as it may not be as important to the business. Cumulatively, though, if all three numbers are badly formatted, it may be necessary to give them a combined high score so the record gets examined.

An example of this is below. A failed Postcode gets a score of 3, and a failed telephone number gets a score of 1. Anything with a score of 3 or above therefore has either a failed Postcode or three failed telephone numbers, and can be subject to special handling.

Fig 1: Data Quality Scoring Example

From the example above we can see this is a fairly arbitrary process in terms of how scores are calculated and used. SSDQS itself doesn’t natively support assigning a score or weight to a failed data item, but it does provide a flexible engine to help us decide what counts as a failed data item. SSIS can then react to this pass / fail behaviour and apply a score.

Setting up an SSDQS Knowledge Base for Scoring

Given that the basis for scoring is pretty binary in nature, I set up a simple KB that had domains that would either pass or fail a piece of data. I first created a data set with three data fields:

  • Year – Values ranging from 1970 to 2025
  • Value – Values ranging from 0 to 100
  • Code – Values A,B,C,D,E

I then set up a KB to evaluate the fields as follows:

  • Year – Valid from 1975 to 2020
  • Value – Valid from 10 to 95
  • Code – Valid values A,C,E

Note that I did not set up any Domain Values or do any training – I just set up the KB, Domains and Domain Rules. All I want to use DQS for is to identify records that are invalid for SSIS to use in scoring.

Using SSIS to Score SSDQS output

Next I hooked up my SSIS DQS Cleansing Component to push the source data through the Knowledge Base, and got the status of each of the columns after they passed through. As there are no preloaded valid values in the Domains, the status comes back as either “Invalid” (it failed the Domain rule) or “Unknown” (in this configuration, this translates to a correct value).

DQS output Data Viewer

The DQS Cleansing Component doesn’t support scoring in itself. This has to be added using a Derived Column on an item-by-item basis. Using a simple IF / THEN / ELSE expression, I assign a score of 1 to each failed column based on its status, as below:

Applying Score using a Derived Column

Because of the pipeline nature of SSIS, I then need to add a second Derived Column transform downstream to weight and add the scores together to create a final, record-level score:

Aggregating and Weighting Score using a Derived Column
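
For anyone who prefers reading logic to screenshots, here is roughly what the two Derived Columns are doing, written out as T-SQL purely for illustration. The package itself uses SSIS expressions; the table name, status column names and weights below are hypothetical:

```sql
-- Per-column score: 1 point for each column DQS flagged as Invalid,
-- then weight and sum into a record-level score (Year weighted more heavily here).
SELECT  [Year], [Value], [Code],
        CASE WHEN Year_Status  = 'Invalid' THEN 1 ELSE 0 END AS YearScore,
        CASE WHEN Value_Status = 'Invalid' THEN 1 ELSE 0 END AS ValueScore,
        CASE WHEN Code_Status  = 'Invalid' THEN 1 ELSE 0 END AS CodeScore,
        (CASE WHEN Year_Status  = 'Invalid' THEN 1 ELSE 0 END) * 3 +
        (CASE WHEN Value_Status = 'Invalid' THEN 1 ELSE 0 END) * 1 +
        (CASE WHEN Code_Status  = 'Invalid' THEN 1 ELSE 0 END) * 1 AS TotalScore
FROM    dbo.DqsCleansedOutput;   -- hypothetical staging of the DQS component output
```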

This results in a final Data Quality “Score” assigned to each record:

Weighted Scoring output Data Viewer

What you then do with these scores is up to you. In my example package, I used a Conditional Split to send records with a score over a certain threshold to a different destination:

DQS Scoring example Data Flow

Improving the Scoring process

The example I’ve created is quite simplistic – it has hard-coded weightings and redirection thresholds, and can only react to two (of a possible three) record statuses. The process could be made more flexible using metadata-driven weightings and thresholds (provided as package inputs).

Beyond that, you have the option to handle the clean and dirty data more appropriately – by pushing dirty data into a cleanup process, halting ETL processes, and so on.

The key takeaway here is that DQS enables you to create a scoring process that is independent of the actual Data Quality rules that pass or fail a piece of data. The DQS Knowledge Base is your flexible definition of what qualifies as a good or bad record, instead of having to hard-code it using SQL or Derived Columns, which could get messy very quickly.


Columnstore Indexes revisited

Having now researched Columnstore Indexes further, I thought I’d share the key learning I’ve picked up on this feature – which now sounds even more powerful than I’d originally thought.

The most important thing to take away is that a Columnstore Index should actually cover the entire table. Its name is a little misleading – the feature is less of an index, and more of a shadow copy of the table’s data, compressed with the Vertipaq voodoo. I suspect they have used the term index because the Columnstore doesn’t cover all data types – the important ones are there, but some extreme decimals and blobs are excluded – for a full list see the MSDN documentation. So for any big table, whack a Columnstore index across the entire table.
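
As a minimal sketch (with an invented table and column list), that looks like:

```sql
-- One columnstore index listing every supported column of the large table
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactSales_All
ON dbo.FactSales (DateKey, ProductKey, StoreKey, Quantity, SalesAmount);
```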

Next up is to understand how to use them and how to detect when they are or are not being used. The key thing is to only use them in isolation (e.g. summary queries) or for Inner Joins. Outer Joins don’t work right now, though there are cunning workarounds that apply if you are Outer Joining to summary data – see Eric Hanson’s video referenced below somewhere around the 50 minute mark.

You can detect when they are being used from the Execution Mode shown in the query plan. This is new in Denali and is either Row or Batch: Row means traditional SQL Server execution, and Batch means the Columnstore is being used.
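
For example, a summary query over an Inner Join is the sort of shape that should hit the Columnstore and run in Batch mode. This is an illustrative schema only; the confirmation comes from checking the Execution Mode property of the scan operator in the actual plan:

```sql
-- Aggregate over an inner join against the columnstore-indexed fact table
SELECT  d.CalendarYear,
        SUM(f.SalesAmount) AS TotalSales
FROM    dbo.FactSales AS f
        INNER JOIN dbo.DimDate AS d
            ON d.DateKey = f.DateKey
GROUP BY d.CalendarYear;
```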

So, the key takeaways:

  • For any large table put a Columnstore index across the entire table
  • Only join using Inner Joins
  • Spot the use of the Columnstore in Query plans via the Execution Mode of Batch

Useful reference material:


SQL Server Data Quality Services & SSIS – Performance

This is a snippet of a post on the performance of the DQS engine when called from SSIS. I’ve created a simple number-based Domain rule and replicated it 5 times in my knowledge base. My package then feeds copies of the same set of data (5,000 rows) into the DQS component and runs it through 1 to 5 domains.

The performance profile is as below:

SSIS DQS Component Performance

There seems to be a fairly linear relationship between the number of domains being processed and execution time. Note that I’ve created a dummy value for “0” to indicate what the start-up time of the DQS component might be, as it’s impossible to have a DQS Cleansing Component in the flow with no columns mapped.

I’d ignore the actual numbers – this is on a development VM which is definitely not configured for performance – and I’m aware the DQS team are working on performance issues (though by the looks of it, they had better be working hard).


SQL Server Data Quality Services & SSIS

So far in my posts on SSDQS we’ve looked at the Data Quality Services Client and building SSDQS Knowledge Bases. In practice, when handling bulk data, you need to reference these Knowledge Bases in routine loads, and to nobody’s surprise, SSIS is the tool for the job.

The DQS Cleansing Component

So, in our (shiny, new) SSIS Toolbox we have a new component to connect to DQS – the DQS Cleansing Component:

SSIS DQS Cleansing Component

The DQS cleansing component pushes a data flow to the DQS Engine for validation. This requires a special Connection Manager, the DQS Cleansing Connection Manager, which as we can see below is a simple creature:

SSIS DQS Cleansing Connection Manager

The sole option at this point is to choose which DQS Server to point at. So, let’s look at what we get in the SSIS Component once we use the Connection Manager:

SSIS DQS Cleansing Component Connection Manager options

Once again – still nice and simple – choosing your Connection Manager allows you to then pick from a list of Published Knowledge Bases. Once a KB is selected, a list of the available Domains is populated, though there is nothing you can do with this list other than review it. So next we move to the Mapping tab:

SSIS DQS Cleansing Component Mapping Tab

The usual suspects are there – pick your input columns in the top half of the tab and they become available for mapping in the lower half. Each input column can be mapped to a single Domain (I can’t quite see how Composite Domains work in this context). You then get three output streams – the Output, Corrected Output and Status Output. The Output is just the column passed through, Corrected is the column value corrected by the DQS Engine, and the Status is the record status (which comes out as Correct, Corrected or Unknown, corresponding to the DQS Data Quality Project statuses). In the Advanced Editor you can also switch on Confidence and Reason outputs, which relate to matching projects.

Note that there is only a single output for the DQS Cleansing Component – if you want to send OK, Error and Invalid records to different locations, you will need to do so with a downstream Conditional Split component.
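
To make the shape of that split concrete, here is the routing logic written as T-SQL purely for illustration. The real implementation is a Conditional Split using SSIS expressions against the status column; the table and column names here are hypothetical:

```sql
-- Route each row based on the status the DQS engine assigned
SELECT  *,
        CASE Record_Status
            WHEN 'Correct'   THEN 'CleanDestination'
            WHEN 'Corrected' THEN 'CleanDestination'
            ELSE                  'ReviewDestination'  -- Unknown / anything else
        END AS RouteTo
FROM    dbo.DqsCleansedOutput;   -- hypothetical landing of the component's single output
```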

Summary

So we’ve had a quick look at the basics of automating DQS activities using SSIS, and how SSIS plugs in to the DQS Server. Subsequent posts will start digging into some practical implementation including performance.

Some further reading can be found here:


SQL Server Data Quality Services – Composite Domains

One of the things I skimmed over in previous posts was the concept of Composite Domains. A Composite Domain is a combination of domains that are assessed together, with interdependencies between their contents.

At a very simple level, a Composite Domain addresses these kind of problems:

  • If City = “London”, Country must equal “England”
  • If Wealth Category = “Millionaire”, Bank Balance must be greater than 1 million

They allow us to validate separate data items in combination, which lets us write more complex rules beyond the already capable single-field ones.

Implementing Composite Domains

Below are a couple of screenshots showing the setup of Composite Domains. First we have the definition of which fields need to be included:

DQS Client - Composite Domain Properties

This is fairly straightforward – just pick at least two fields that are interrelated from the available Domain list. The other screen of interest is under the Rules tab:

DQS Client - Composite Domain Rules

Here it can be seen that a Composite Domain rule can only evaluate two components at a time. This is a deliberate limitation, so if you wanted to validate three fields in combination, you would have to do it via a set of rules that cross over. If you were looking at validating the rule:

If City = “London”, Country = “UK” & Region = “Europe”

You would have to do it through the following rules

If City = “London”, Country = “UK”
If City = “London”, Region = “Europe”

The rules allow for AND / OR at the field level, so you could have rules that read

If City = “London” OR “Birmingham”, Country = “UK”

If Bank Balance >= 2 million and < 1 billion, Wealth Category = “Multi Millionaire”

There seems to be scope for improvement here – the rule capability is a little simplistic, but I imagine it will meet most scenarios, and it does make cross-field validation possible.

Other features and summary

There are two features I skipped over – Reference Data (saved for a bigger future post on the whole concept) and the Value Relations tab, which at this point in time seems not to be working and is just a statistical summary of the values found in the data.

There’s not much to close out on otherwise – Composite Domains allow fields to influence other fields from a data quality perspective. The documentation on this feature is sparse at this point so hopefully we’ll get more information soon.


SQL Server Data Quality Services – Domain Management

In the previous post we looked at creating a Knowledge Base through the Knowledge Discovery process. This gave us a first glimpse of what can be done in terms of managing incoming values and providing corrections. In this post we will look at the more advanced capabilities for managing incoming data quality issues, such as format rules, reference data, etc.

In a quick update to yesterday’s post, the helpful team at the MSDN DQS Forum have answered my query about the difference between Invalid and Error values:

An Invalid value is a value that does not belong to the domain and doesn’t have a correction. For example, the value 12345 in a City domain. An Error value is a value that belongs to the domain but is a syntax error. For example, Shicago instead of Chicago in a City domain.

The functional difference: when the system sets statuses for values, it does so according to the above semantics. If a value failed a domain rule, its status will be Invalid. If the system detects a syntax error and an associated correction, the erroneous value status will be Error.

However, DQS does not enforce these semantics on manual operations. So you can enter a correction for an Invalid value without changing its status, and you can remove a correction from an Error value without changing the status as well.

So the upshot is Invalid = Unusable, Error = Correctable.

Domain Management: Domain Rules

If we open our previously created Knowledge Base in Domain management mode, we get a list of Domains, as below:

Domain Management - Domain List

And, to the right hand side we see our options for setting the Data Quality rules that apply to the domain in question:

Domain Management - Domain Rules

The full list of tabs is:

  • Domain Properties
  • Reference Data
  • Domain Rules
  • Domain Values
  • Term-Based Relations

Domain Properties

Domain properties are the same as those available when creating a Domain – with the restriction that you cannot change the data type of the domain. The values available are:

  • Domain Name
  • Domain Description
  • Data Type (fixed after creation)
  • Use Leading values – a checkbox
  • Format output to – which gives a dropdown of formats suitable to the data type

Nothing terribly exciting here, so moving on…

Domain Values

This was covered in my previous post, but as a lightning recap here is where you can:

  • Flag data values as Correct, Invalid or Error
  • Provide corrected values for Errors
  • Add / Delete Values manually

So, let’s move on to the new features.

Domain Rules

Now we are getting into the syntax based validation that DQS can apply.

Data Quality Client - Domain Rules

In the picture above I have generated a simple rule with a couple of elements – that Country must be at least three characters long and must not contain a full stop, in an attempt to filter out any abbreviations. Now it’s worth pointing out at this stage that anything that fails a Domain Rule is considered Invalid – i.e. it is an unusable value that cannot be corrected. So by applying this rule I will render any abbreviations in my Domain Values list Invalid – and the DQS client is kind enough to warn me of this if I apply the rule:

Data Quality Client - Domain Rule Warning

The warning tells me I will increase my Invalid value count and consequently decrease my Valid value count.

It’s important to note that the rules will NOT apply to any cases where there is already a correction defined in the Domain Values. For example, the value “UK” which I corrected to “United Kingdom” is unaffected by this rule, but “HK” is rendered invalid as I have not corrected HK to “Hong Kong”.
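
For anyone who thinks in SQL, the rule above boils down to roughly this check. DQS evaluates it inside the Knowledge Base rather than in the database; the table name is invented for illustration:

```sql
-- Country must be at least 3 characters long and contain no full stop
SELECT  Country,
        CASE WHEN LEN(Country) >= 3 AND CHARINDEX('.', Country) = 0
             THEN 'Passes rule'
             ELSE 'Invalid'
        END AS RuleResult
FROM    dbo.SampleCountries;
```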

So what rule types are available? Broadly it covers:

  • Length
  • Value
  • Contains
  • Begins with
  • Ends with
  • Pattern Matching
  • Regular Expression matching

As I showed in my example, you can combine rule types with AND / OR operators, so they become quite flexible. You can also have multiple rules per domain if you want to simplify managing multiple independent conditions.

Handily, you can also test a rule during development – just above the rules list are a couple of buttons for test runs – one allows you to test the rule against existing Domain Values, and the other allows you to test it against a sample data set.

Term Based Relations

This is not the most obviously named component, but it is effectively a Find and Replace engine. So you could repair common typos – e.g. change “teh” to “the” – or expand common abbreviations – e.g. change “Inc.” to “Incorporated”.

Domain Management - Term Based Relations

This is only available for String Domain types, and doesn’t appear as an option for other types. MSDN Documentation here.
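
To make the Find and Replace analogy concrete, the second example above is essentially this kind of operation, sketched in T-SQL against a hypothetical table (DQS applies it to domain values rather than to your database):

```sql
-- Expand the common abbreviation "Inc." to "Incorporated"
UPDATE dbo.CompanyNames
SET    CompanyName = REPLACE(CompanyName, 'Inc.', 'Incorporated')
WHERE  CompanyName LIKE '%Inc.%';
```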

Reference Data

Data Quality Services also allows you to use external sources to validate data against – so far this only covers the Azure DataMarket – but the team have hinted that other options will become available. As this feature seems to be half-baked at this point, I’m not going to dig much further.

Summary

This post has skimmed over some of the features of DQS once you have an active Knowledge Base and Domains set up. The headlines of what we can see are:

  • Domain Values: Validation and correction of values against a known list
  • Domain Rules: Validation against formatting, patterns and values using a set of rules
  • Term Based Relations: Effectively “Find and Replace”
  • Reference Data: Validation against external data sources

There are some things I’ve skipped over for future posts, such as Composite Domains and Reference Data in depth. But so far, still looking good!


Columnstore indexes in Denali (aka: “Apollo”)

James Serra has a great post on a new feature in SQL Server Denali – Columnstore indexes.

The tl;dr version is this:

Columnstore indexes use the Vertipaq compression engine (that’s the shiny compression engine in PowerPivot) to further compact indexes, making querying them between 10 and 100 times faster.

The most significant limitation is that tables become read-only when they have a columnstore index (no Inserts / Updates / Deletes etc) – though James notes you can work around this by using Partitions if you are dealing with tables that are just additive. Otherwise indexes will need to be dropped and recreated as data changes.
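
A rough sketch of that partition-based workaround for additive tables, with invented object names (note the staging table must match the target table’s structure and constraints for the switch to succeed):

```sql
-- 1. Load the new rows into an empty staging table that has no columnstore index yet
INSERT INTO dbo.FactSales_Staging (DateKey, ProductKey, SalesAmount)
SELECT DateKey, ProductKey, SalesAmount
FROM   dbo.FactSales_New;

-- 2. Build the columnstore index on the staging table
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactSales_Staging
ON dbo.FactSales_Staging (DateKey, ProductKey, SalesAmount);

-- 3. Switch the staging table in as the latest (empty) partition of the fact table
ALTER TABLE dbo.FactSales_Staging
SWITCH TO dbo.FactSales PARTITION 42;
```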

So – a powerful new indexing feature which, with careful management, can have a serious positive impact on the performance of your Data Warehouse.


SQL Server Data Quality Services – Creating a Knowledge Base

So far we’ve opened up the client and taken a look at the areas we’ll be working with. In this post I’ll look at setting up a Knowledge Base from scratch, using some sample data I’ve mocked up based on some Netflix catalogue data.


Creating a Knowledge Base

To create a new KB, just select “New Knowledge Base” from the client front screen and give it a name and description. You can either clone one already on the server or import one, but in this case I’m just going to start from scratch. There are three possible modes in which you can create a new KB:

  • Domain Management – Creating from scratch with no guidance
  • Knowledge Discovery – Using a sample data set to guide building your KB with a view to using it for data cleansing rules
  • Matching Policy – Using a sample data set to guide building your KB with a view to using its record matching capabilities
SSDQS Client: Create new Knowledge Base

As I’m not attempting to do any matching, I’m using the Knowledge Discovery approach. (Note: because I’m using an Excel source I need Excel installed on the machine).

Mapping Columns to Domains

Once I’ve picked my data source, I need to map columns:

Data Quality Client - Field Mapping

At this stage, there is no facility to automatically create domains, so before I map the fields to a Domain, I need to create each of them:

Data Quality Client - New Domain

Now, a (known) defect appears to be that fields from Excel are automatically treated as text – so if they contain numbers, you cannot map them to a Numeric domain. So I’ll skip over evaluating Year for now.

Analysing and Managing the results

Once all the mapping is done, click Next and you can upload the file for analysis. My sample file is 5,000 records, and it took DQS a few minutes just to upload – so big sets may take a while. Anyway, the output of this is an analysis of the data, displayed in a profiler screen:

Data Quality Client - Discovery Output

This breaks down the records by the following criteria:

  • New records – in this case, all of them as it’s a first pass
  • Unique – how many unique records
  • Valid – how many valid records – again, as it’s a first pass, everything is valid

So, we skip on to the next stage – to manage the results of the findings. This is the first time we start to see what DQS can offer us in terms of cleansing:

Data Quality Client - Managing Domain Values

Here we are reviewing the values found for Country, and can manage the values that come through, flagging them as Correct, Error or Invalid – and assign a corrected value to incorrect ones.

I’m trying to see if there is a functional difference between Error and Invalid. As per documentation:

The status of the value, as determined by the discovery process. You can change the type by clicking the down arrow and selecting a different type. A green check indicates that the value is correct or corrected; a red cross indicates that the value is in error; and an orange triangle with an exclamation point indicates that the value is not valid.
A value that is not valid does not conform to the data requirements for the domain. A value that is in error can be valid, but is not the correct value for data reasons.

Update: the difference has been clarified here in the DQS Forum.

Skipping over the semantic issues, what we see here is a list of the values that the DQS Client has found in the Knowledge Discovery analysis of the data. We can then flag these values as Invalid or Error as we see fit – or leave them at their default value of Correct. Once we have flagged them as not correct, it is then possible to enter the correct value in the “Correct To” column. Handily, the client then groups your corrected values under the correct value in the list.

The final thing is to click Next and publish the Knowledge Base (i.e. store the results back on the DQS Server).

Summary

So in this post we have quickly reviewed the creation of a Knowledge Base through the Knowledge Discovery mode. This has allowed us to create a set of values in our Knowledge Base using some sample data and then apply some corrections to those values, using a simple GUI to manage the results.

In the next post I will look at working in more depth with this created Knowledge Base using the “Domain Management” mode.
