After many years working as a consultant for providers small and large, servicing clients across a broad range of industries, I have taken the plunge and decided to operate independently.
I’ll be providing independent Business Intelligence Consulting to organisations across Australia, focusing on:
- Strategy Creation and Review
- Solution Architecture
- Data Warehousing and ETL
- Microsoft Business Intelligence technical support
- Microsoft Business Intelligence training
- Agile Enablement
Full details can be found on my “Consulting and Technical Services” page.
This decision was partly informed by reading James Serra’s series on the subject, starting from his master post “Blueprint for consulting riches” – which is ironic given his recent move to Microsoft as a full-time employee. Either way, his series is well worth a read for anyone mulling over their approach to work.
The Gartner Magic Quadrant for Business Intelligence and Analytics Platforms is now available.
Good news for Microsoft again – it remains in the Leaders quadrant, though, in line with the other MegaVendors, it has slipped a little due to a weak story around data discovery. It remains a well-loved platform among users, developers and architects alike, and is increasingly becoming the standard enterprise product. For those of us working in the field it remains a safe bet from a career point of view for a good few years yet.
On the downside, the same bugbears we have long complained about remain – no credible mobile story, metadata management is non-existent (hello, Project Barcelona – no news for two years now?) and PowerView, while shiny, is no match for the likes of QlikView or Tableau (regardless of how ugly they are behind the shiny screens, the front end is what users see and judge you on).
Anyway, not too shabby a report card – a decent score, with the usual caveat of “could try harder”. But the other big kids (IBM Cognos, SAS, Oracle) are doing much the same, so there is not much to worry about.
An in-joke for one of my fellow leaders in the BI industry…
I’ll be presenting at TechEd Australia 2013 on “Big Data, Small Data and Data Visualisation via Sentiment Analysis with HDInsight”
In the session I’ll be looking at HDInsight – Microsoft’s implementation of Hadoop – and how to leverage that to perform some simple Sentiment Analysis, then link that up with structured data to perform some Data Visualisation using the Microsoft BI stack, especially PowerView.
Hopefully this will also tie in with the release of a White Paper on the subject so anyone with deep technical interest can get hands on with the experience.
I’m excited to get a chance to present again – look forward to seeing you there!
This Thursday 22nd August Sydney BI Social presents “Rapid Fire Mini Sessions” – presented by a range of experienced BI professionals giving a quick overview of a topic they are experts in. The sessions are:
Session 1: Power BI in action, the exciting new BI functionality in Excel 2013 and Office 365
Session 2: 5 tips for better data visualisation
Session 3: A day in the life of an SQL DBA
Also – for those with an eye further on the horizon, on Weds Sep 18th we have Stephen Samild presenting on “The Data to Decision Landscape”.
This Wednesday 17th June Sydney BI Social presents “BI & NoSQL” – presented by Stephen Young, CEO of GraphBase and architect of the GraphBase DBMS. Steve will give an overview of the various classes of NoSQL database, their advantages and disadvantages, with an emphasis on Graph Databases and the novel ways that they can be used for Business Intelligence purposes.
The thing about Big Data is, well… it’s big. Which has impacts on how long it takes to move your data about and the space it needs to be stored in. As a novice, I had assumed you had to decompress your data to process it, and I also had to tolerate the huge volumes of output my (admittedly not very efficient) code produced.
As it turns out, not only can you process input in a compressed format, you can also compress the output – as detailed in the Hadoop Streaming documentation. So now my jobs start smaller and end smaller, without a massive performance overhead.
So how does it work? To read compressed data you have to configure nothing at all – it just works, as long as Hadoop recognises the compression codec. To compress the output, you need to tell the job to do so: using the “-D” option you can set generic command options that configure the job. A sample job – formatted for HDInsight – is below, with the two “-D” options being the key part:
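A representative streaming invocation might look like the following – the mapper, reducer and paths are placeholders for whatever your job actually uses; the two “-D” properties are the generic options that switch on output compression:

```shell
:: Hypothetical Hadoop streaming job on HDInsight (Windows-style line
:: continuations). The -D generic options must appear before the
:: streaming-specific options.
hadoop jar %HADOOP_HOME%\lib\hadoop-streaming.jar ^
    -D mapred.output.compress=true ^
    -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec ^
    -mapper "python mapper.py" ^
    -reducer "python reducer.py" ^
    -input /example/input ^
    -output /example/output
```

With those two properties set, the job’s output files land in HDFS already gzipped, and a later job can read them back with no extra configuration.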
This tells the job to compress the output, and to use GZip as the compression technique.
And now, my jobs are still inefficient but at least take up less disk space!