The Term Extraction Transformation and “Animal Farm”

Fig 1: The Term Extraction Transformation

In this post I will be covering the Term Extraction Transformation. The sample package can be found here for 2005 and guidelines on use are here. Today’s exercise will be a fun one, as I’m going to apply the transformation to George Orwell’s book Animal Farm, a copy of which I obtained in text form from Project Gutenberg Australia.

What does the Term Extraction Transformation do?

In simplest terms, it can extract individual nouns and collections of nouns and adjectives from text (these are the “Terms”) and return them with a frequency count or score. In my example, a common Noun term is “Animal”, and a common Noun Phrase term is “Animal Farm”.

Because it uses an internal dictionary to normalise terms and identify repeated elements, such as removing plurals, it only works for English text. The dictionary is not exposed and cannot be edited, nor can the component be pointed at a custom dictionary of your choosing, so like the Fuzzy Lookup it is a bit of a black box in terms of your ability to tweak its operation: the algorithms and dictionary are fixed. I’ll pick up some flaws with this later. The only real control you have over the content of the output is the Exclusion List, which allows you to feed a list of terms to ignore into the component.

Configuring the Term Extraction Transformation

Fig 2: The Term Extraction Tab

The first thing to configure is the input column on the “Term Extraction” tab. This transformation accepts a single input column, which must be either a Unicode Text Stream or a Unicode String. In the example package I’ve simply used a Data Conversion task to convert my non-Unicode input stream prior to the Term Extraction. You can also assign custom names to the Term and Score output columns.
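If your source is a SQL query rather than a file, you can skip the Data Conversion task and cast to Unicode in the source query itself. A minimal sketch, assuming a hypothetical dbo.BookLines table holding the text:

```sql
-- Cast the non-Unicode text column to nvarchar so the Term Extraction
-- Transformation will accept it directly (table and column names hypothetical).
SELECT CAST(LineText AS nvarchar(4000)) AS LineText
FROM dbo.BookLines;
```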

Fig 3: The Exclusion Tab

Next up is to specify your Exclusion List, if you are using one. This must be in the form of a single column in a table in either a SQL Server or Access database (apparently Excel is also an undocumented option). In my example I have used the Name column of the AdventureWorks Departments table, so the names of any departments that appear in the text won’t appear in the output. Admittedly this is unlikely in Animal Farm, but if you were mining your own website you might choose to ignore your company name, as it will appear often and may tell you nothing.
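If you don’t have a handy table to borrow, the exclusion list is trivial to roll yourself, since all the component wants is a single column of terms. A sketch, with hypothetical names:

```sql
-- One term per row; any extracted term matching a value in this column
-- is dropped from the transformation's output.
CREATE TABLE dbo.TermExclusions (Term nvarchar(128) NOT NULL PRIMARY KEY);

INSERT INTO dbo.TermExclusions (Term)
VALUES (N'Acme Widgets'), (N'home page'), (N'copyright');
```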

Fig 4: The Advanced Tab

The final page is the most important in terms of affecting the output. Term Type controls whether the component returns Nouns, Noun Phrases, or both. Score Type controls whether the score returned is a simple frequency count or the TFIDF (Term Frequency * Inverse Document Frequency), where the TFIDF of a term T = (frequency of T) * log((#rows in input) / (#rows having T)). I’m sure that’s a useful number to someone. Parameters sets the minimum frequency a term must have before it will be output; obviously a setting of 1 would return every single noun and/or noun phrase found. Maximum length of term sets the maximum number of words in a term. Finally, Options sets the case sensitivity of the search.
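To make the TFIDF formula concrete, here is the same calculation worked in T-SQL against a hypothetical dbo.ExtractedTerms table holding one row per term occurrence, tagged with the input row it came from:

```sql
-- TFIDF of T = (frequency of T) * LOG((#rows in input) / (#rows having T))
-- dbo.ExtractedTerms(RowId, Term) is an assumed landing table.
DECLARE @TotalRows float;
SELECT @TotalRows = COUNT(DISTINCT RowId) FROM dbo.ExtractedTerms;

SELECT Term,
       COUNT(*) AS Frequency,
       COUNT(*) * LOG(@TotalRows / COUNT(DISTINCT RowId)) AS TFIDF
FROM dbo.ExtractedTerms
GROUP BY Term
ORDER BY TFIDF DESC;
```

Note that a term appearing in every input row scores a TFIDF of zero however frequent it is, which is rather the point: terms that are everywhere tell you nothing.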

The Term Extraction Transformation’s dictionary limits

The problem with this component stems from its black-box dictionary, which limits how well it can handle data. As an example, despite the claim that it removes plurals, if you look at the results of the example package, both Commandment and Commandments appear as distinct terms. If you extend this to the real world (say, mining emails or web pages), misspellings are common and product names are often nonsensical from a dictionary point of view; a custom dictionary would allow you to work around that. As it is, you end up having to fix the terms after extracting them.
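You can see the plural problem for yourself with a quick check against the output. A rough sketch, assuming the Term and Score columns were landed in a hypothetical dbo.TermResults table:

```sql
-- Find terms that also appear in a naively pluralised form; each match
-- is a pair the internal dictionary failed to fold together.
SELECT s.Term AS Singular, s.Score AS SingularScore,
       p.Term AS Plural,   p.Score AS PluralScore
FROM dbo.TermResults s
JOIN dbo.TermResults p ON p.Term = s.Term + N's';
```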

If a custom dictionary could be added, or the internal one extended via the reverse of an exclusion table, this component would become far more useful. I’ve added a Connect article suggesting this; please vote it up if you think it will improve your lot. Update 21/10/2010: the SSIS team are not implementing this feature, which is a shame.

When would you use the Term Extraction Transformation?

Douglas Laudenschlager comments here on some scenarios, envisaged by Microsoft Research in China, for using terms extracted from text data in mining. It should be applied in situations where you need to trawl through large amounts of (English) text to pull out common terms. One use I attempted when learning SSIS was to try to emulate the Quackometer, a web-based tool that analyses web pages and tries to determine whether their content is valid science or junk science. I did this by pulling down the web pages as text, running them through the Term Extraction and then trying to detect common valid and junk science terms (using an Exclusion List to remove common HTML terms). I never finished it, but it remains a lurking project which may yet reappear on these pages.

MSDN Documentation for the Term Extraction Transformation can be found here for 2008 and here for 2005.


A caution on using Dimensional DSVs in Data Mining – part 2

As a follow-up to this post, I have found that using a table external to the one being mined to provide a grouping not only fails to actually group within the model, it also confuses the Mining Legend in the Mining Model Viewer.

What I was seeing in the Mining Legend for a node in a Decision Tree was like this:

Total Cases: 100

Category A: 10 Cases

Category B: 25 Cases

Category C: 0 Cases

Category D: 9 Cases

… so the Total Cases and the cases displayed didn’t tie up (10 + 25 + 0 + 9 = 44, not 100). Digging further with the Microsoft Mining Content Viewer and looking at the NODE_DISTRIBUTION, I saw that there were multiple rows for each category, and the Mining Legend was just picking one of those values.
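If you prefer querying to clicking, the same nested NODE_DISTRIBUTION rows can be pulled out with a DMX content query run against the Analysis Services database (model name hypothetical):

```sql
-- FLATTENED expands the nested NODE_DISTRIBUTION rowset, making the
-- duplicate per-category rows visible for each node.
SELECT FLATTENED NODE_CAPTION,
       (SELECT ATTRIBUTE_VALUE, [SUPPORT] FROM NODE_DISTRIBUTION) AS Dist
FROM [MyDecisionTree].CONTENT;
```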

So if you find yourself looking at a node and wondering why the numbers don’t add up, it’s because your grouping hasn’t been used by the model.


A caution on using Dimensional DSVs in Data Mining

If you are using a dimensional-style DSV in a Data Mining project, such as below:
Fig 1: A Dimensional DSV

Be aware that if you include a column from a Dimension table in your Mining Structure, the model will actually treat each key entry on the source table as a distinct value, rather than each distinct value in the Dimension table. I found this out because I added a grouping category to one of my dimension tables (a simple high/medium/low group) and there were multiple values in the attribute states for each grouping, as below:

Fig 2: Mining Legend

To work around this you will need to add a Named Calculation to bring the grouping onto the main table, or convert the main table to a Named Query.
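A Named Calculation is just a SQL expression the DSV evaluates against the main table, so the grouping can be recreated there directly. A sketch of the expression body, with an illustrative column and thresholds:

```sql
-- Expression body of a Named Calculation on the main (case) table;
-- the column name and band boundaries here are hypothetical.
CASE WHEN SalesAmount >= 5000 THEN 'High'
     WHEN SalesAmount >= 1000 THEN 'Medium'
     ELSE 'Low'
END
```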


Quick book review: Data Mining with SQL Server 2005

I’ve just about squeezed all I can from Data Mining with SQL Server 2005 by ZhaoHui Tang and Jamie MacLennan – both of whom were part of the Data Mining development team for SQL Server 2005.

This book provides a lot of what seems to be absent from BOL and MSDN: it goes through most facets of Data Mining using SQL Server reasonably thoroughly, but from a very technical angle. It is littered with big chunks of code and feels and reads like technical documentation most of the way through. It doesn’t provide much insight into how to carry out effective Data Mining or interpret results; what little is there is useful, but it’s a slog to find.

As a technical reference I’d recommend it, not least because of the dearth of decent documentation. If you’re a beginner trying to work out how to use the product to get results, you need to look elsewhere.


Cannot View Data Mining Model in BIDS – function does not exist

I’d been running some Naive Bayes Data Mining models without problems as part of initiating a Data Mining exercise, so it was time to move on and cut the data some different ways. I set up a Decision Tree model and it processed fine, but when I tried to view it, a message box appeared telling me it wasn’t going to co-operate:

The tree graph cannot be created because of the following error:

‘Query (1,6) The [System].[Microsoft].[AnalysisServices].[System].[DataMining].[DecisionTrees].[GetTreeScores] function does not exist.’

Fortunately someone had hit this before, as the solution is rather obscure. The install I am working against is non-standard, being split across two drives. What had happened was that the path to the Data Mining DLLs set up by the install process didn’t actually match where they were placed.

So when I looked at the assembly location (SSMS > AS Server > Assemblies > System > Properties), the Source Path referenced a DLL that didn’t actually exist; it appears this incorrect path does not raise an error when the server starts. To fix it, I located where the DLL really was, then updated the config files where this path is stored (System.0.asm.xml and VBAMDX.0.asm.xml) to point to that path.

After a restart of the server, the models reprocessed and I could happily view the output!


The Percentage Sampling Transformation

Fig 1: The Percentage Sampling Transformation

In this post I will be covering the Percentage Sampling Transformation. The sample package can be found here for 2005 and guidelines on use are here.

What does the Percentage Sampling Transformation do?

This component is very simple: it splits a dataset by randomly directing rows to one of two possible outputs (as you can see in example 2 in the package, you can use just a single output if you want). All you need to decide is in what proportion (as a whole percentage) you want the rows split between the two output data flows. In the picture below you can see the configuration options: the percentage split, the names of the two outputs and the Random Seed.

Fig 2: The Percentage Sampling Transformation Options

The effect of the Random Seed can be seen in the sample package: if you run it multiple times you will get different results for the split each time, because the package decides the seed on each run based on the tick count of the operating system (and no, I don’t know what that is either!). Note that in the example, even though the percentage sample is set to 30%, it’s unusual for the output rows to be split exactly 30:70. This is because each row is allocated to an output by a throw of the randomiser’s dice. If you set a value for the Random Seed you fix the results of the throws and will always get the same rows sent to the same outputs, though there is still no guarantee the split will be 30:70. As the dataset you split gets bigger, the deviation from the requested split becomes less significant.
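This is not the component’s actual internals, but a T-SQL sketch of the same per-row allocation makes the behaviour obvious (table name hypothetical):

```sql
-- Each row gets its own pseudo-random roll, so roughly 30% land in
-- 'Selected', but the exact count varies from run to run, just like the
-- transformation without a fixed Random Seed.
SELECT *,
       CASE WHEN ABS(CHECKSUM(NEWID())) % 100 < 30
            THEN 'Selected' ELSE 'Unselected'
       END AS SampleOutput
FROM dbo.SourceRows;
```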

Fig 3: Percentage Sampling Transformation Results

Where would you use this transformation?

The main use for this, as far as Microsoft is concerned, is carving up data sets for Data Mining into training and test cases. But anywhere you need to divide a dataset truly randomly, e.g. separating out customers for a different target mailing, this is the component for the job.

MSDN Documentation for the Percentage Sampling Transformation can be found here for 2008 and here for 2005.


Microsoft’s secret forecasting tool – the Office Suite

Last night I attended an IAPA presentation on basic forecasting concepts and the tools used, presented by the ever-interesting Eugene Dubossarsky (of Presciient, an analytics consultancy). I will skip over the forecasting content because, for the Microsoft BI community, the interesting part is which tool he used for most basic forecasting activities. It was Excel. Then, when he needed to do more advanced work, he used Excel. Only when he needed to do trickier stuff with larger amounts of data did he pull in a more heavyweight tool: Access.

That’s right: the Office suite covers the majority of a forecaster’s needs. SQL Server and Analysis Services didn’t get a look-in until the really heavyweight analytics processes began. For his own purposes, however, Eugene much prefers R, an open source stats program that is free, very powerful and now a serious competitor to SAS, much to their annoyance. Microsoft are rumoured to be talking to the people behind R, and an acquisition would make sense for both sides: R is not user friendly, which Microsoft could help with, and adding the capabilities of R would allow Microsoft to take a slug at SAS’s BI market.

So, this shows that most users still aren’t fully aware of, let alone using, Excel’s capabilities; otherwise they wouldn’t be paying analytics consultants to use it for them. Microsoft are always pushing Excel further, so now I’ll cover two features of Excel that even power users may not be aware of. It’s easy to forget that the 2007 Office suite wasn’t just a new, pretty interface; it also added huge BI capabilities.

The Data Mining Add-In for Excel (download for SQL Server 2008 or 2005)

This Add-In allows you to leverage the Data Mining capabilities of Analysis Services through Excel. You can use Excel as the front end for creating and working with Data Mining models that live on your server. However, what really makes it interesting for Excel users is that it allows you to perform Data Mining on your spreadsheet data.

There is a Virtual Lab here explaining and demonstrating its use.

Project Gemini

This feature is slated for the next release of Excel: an in-memory tool for analysing large amounts of data in an OLAP style, but without all the fiddly data modelling normally required. It is a clear slug at other players in the in-memory market, such as QlikTech. The models created can also be ported back to SSAS with minimal effort. For more details, read this commentary from the OLAP Report.

Microsoft has one of the most powerful BI tools in the world in Excel; users just need to be made aware!
