Monthly Archives: January 2018

Power BI Desktop files are smaller now

I was working on a demo for my upcoming Pluralsight course when I noticed that my Power BI Desktop files were noticeably smaller than they used to be.

It used to be that you could look at the data model and see a version number.

[Screenshot: the data model inside a .pbix file, with a readable version number]

But now, it’s almost entirely unintelligible. The only thing you can read is “This backup was created using xpress 9 compression.”

[Screenshot: the new file contents, unreadable apart from the xpress9 compression message]

When the same data is imported into Power BI Desktop, the new compression is dramatically more efficient: 184 KB versus 2,288 KB.

[Screenshot: file size comparison, 184 KB versus 2,288 KB]

What I haven’t figured out yet is whether this affects in-memory use or just the size on disk. Still, it’s nice to see Microsoft continuing to make improvements.

Power BI Precon: Implementing the other 90%

Last year I got to do a precon on Power BI for the Pittsburgh SQL Saturday. This year I’m honored to be presenting at Cleveland and Cincinnati. This time, I thought it would make sense to have a blog post summarizing what’s covered.

One of the things I found frustrating when I first learned about Power BI was all of the behind-the-scenes stuff. It was easy to find information about charts and graphs, but much harder to learn how everything fits together. This precon focuses on two main areas: data wrangling and administration.

Session 1: Database Theory

Because Power BI is aimed in large part at business users, many of the people using it don’t have a traditional data background. That makes it worth touching on fundamentals such as primary keys, normalization, and star schemas.

The most important thing to understand when modeling for Power BI is that it’s optimized for star schemas in particular and for filtering/aggregating in general. The fact that it uses a columnar engine means it can handle a certain amount of flattening/denormalization gracefully, because it gets really good compression.

Session 2: Power Query

One of the things that can be confusing is that Power BI has two different data manipulation languages, M and DAX (three if you count R!). So a question that comes up a lot is when to use which language.

Power Query is designed primarily for business users, which makes sense given that it started as an Excel add-in. In fact, the official Microsoft litmus test is that it was designed for users who get value from the Excel formula bar in their work. As a result, it has a strong GUI component, but it’s fairly basic in a lot of ways.

The way I like to think of it is: “Anything you could pay someone minimum wage to do in Excel, you can automate in Power Query.” Power Query is all about basic cleanup and data prep. You aren’t going to be adding a lot of meaning to the data.

Session 3: DAX

DAX is the language you are going to use to model your data and add meaning to it. DAX is deceptively simple, looking very similar to Excel formulas. In reality, the learning curve on DAX can be quite painful, because it requires thinking in terms of columns and filters, not in terms of rows.

Session 4: Data Gateways

Data gateways are how you bridge the Power BI service in the cloud to whatever data lives on-premises. Installation and configuration are pretty simple overall, and gateways allow for scheduled refreshes of your data up to the cloud.

One thing that’s worth knowing is the alternative query methods available with gateways. By using DirectQuery or live connections, you can query live data without having to export it all to the cloud.

Related course: Leveraging Timely On-premises Data with Power BI

Session 5: Licensing and deployment

With Power BI, generally you are going to be buying Pro licenses for all of your users, at $10 per user per month. There are other licensing scenarios, such as Power BI Report Server and Power BI Premium, but you are probably going to be going with the Pro license.

There are so many ways to publish Power BI reports:

  1. Personal workspaces
  2. App Workspaces
  3. Organizational content packs
  4. Publish to web
  5. SharePoint
  6. Power BI Premium
  7. Power BI Embedded
  8. Power BI Report Server

It can get a bit difficult to keep up with all of the options.

Session 6: Security and Auditing

There are three big pieces to securing Power BI: what data can be accessed, what reports can be accessed, and what people can share. In addition to that, there are interesting row-level security features built in to both Power BI and SSAS.

In terms of auditing, much of that is going to be based on the Office 365 Unified Audit Log, which requires some work to enable. There are also things you can do with PowerShell and with auditing data gateways.

Overview

Overall, I’m pretty proud of the contents. This is the kind of precon I wish I could have attended three years ago, so I would have had an idea of what I was doing.

Building a DBA Salary Calculator, Part 0: Initial findings

I’m planning on building a salary calculator based on the salary survey data I played around with earlier. This time I’m planning on going a lot deeper.

I want your help and feedback! I want to know what would make a calculator most useful to you. Feel free to poke holes in my methodology and tell me how a real data scientist would handle this project.

In this post, I’m going to outline some initial findings, as well as how I’m planning to approach this project. All of the information below is based on a narrow subset of the data:

  • USA, with a full postal code
  • DBA as the job title
  • Salary between $15,000 and $165,000

Regarding zip codes, some people entered only a portion of their zip for privacy’s sake. In the final analysis, I plan on taking into account the ~200 US individuals who did that.

Initial findings

The data isn’t very predictive

I’m using something called multiple linear regression to build a formula that predicts your salary based on specific variables. Unfortunately, the highest coefficient of determination (R²) I’ve been able to get is 0.37, which means, as far as I understand it, that at most the model explains 37% of the variation.
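
To make this concrete, here’s a minimal sketch of the approach in R (the language I plan to reimplement everything in later). The data frame and column names are hypothetical stand-ins, not the actual survey fields.

```r
# Minimal sketch only: `survey` and its columns are hypothetical stand-ins
# for the real survey fields.
model <- lm(salary ~ years_experience + region + telecommute_days +
              is_independent + hours_per_week + years_education,
            data = survey)

summary(model)$r.squared      # coefficient of determination (R^2)
summary(model)$adj.r.squared  # adjusted for the number of predictors
```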

Additionally, the spread on the results isn’t great either. The standard deviation, a measure of spread, is about $25,000 on the original subset of data. That means, assuming a roughly normal distribution, we’d expect 68% of salaries to be within +/- $25,000 of the average and 95% to be within +/- $50,000 of the average. So what happens when we apply our model?

When we apply the model, we get something called residuals, which are basically the differences between what we predicted and what the actual salaries were. The standard deviation of those residuals is about $20,000, which means our confidence range is going to be roughly +/- $20,000 to $40,000. That doesn’t seem like a great range to me.
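
Continuing the sketch from above, this is roughly how that spread gets measured (again, hypothetical column names):

```r
# Residuals: actual salary minus what the model predicted.
resid_sd <- sd(residuals(model))   # about $20,000 on this subset

# If the residuals are roughly normal, about 68% of predictions land within
# one standard deviation of the actual salary, and about 95% within two.
mean(abs(residuals(model)) <= resid_sd)
mean(abs(residuals(model)) <= 2 * resid_sd)
```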

There are a few strong indicators

Let’s take a look at what we get when we do a multiple regression with the Excel Analysis ToolPak add-in:

[Screenshot: regression output from the Analysis ToolPak]

The two biggest factors by far seem to be how long you’ve worked and where you live. In fact, we can explain 30% of the variance using those two variables alone:

[Screenshot: regression output using only years of experience and location]

The two other variables that are very strong are whether you telecommute and whether you are independent. When we add those, our adjusted R² goes up to 33%.
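
As a sketch (same hypothetical column names as before), comparing the two-variable model against one that adds telecommuting and independence looks something like this:

```r
# Two-variable model: experience and location only.
base_model <- lm(salary ~ years_experience + region, data = survey)

# Add telecommuting and independent status on top of it.
fuller_model <- update(base_model, . ~ . + telecommute_days + is_independent)

summary(base_model)$adj.r.squared    # roughly 0.30 on my subset
summary(fuller_model)$adj.r.squared  # roughly 0.33 on my subset
```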

After that, we have a handful of variables with less than a 5% chance of being spurious (p < 0.05):

  • Gender. It’s still a bit early to jump to conclusions, but it looks like being female might cost you $6,000 per year. This is after controlling for years of experience, education, hours worked, and whether this is your first job. Gender could still be tied to other factors, like a gap in your career or whether you negotiate pay raises.
  • First job. I identified a “first job” as having identical values for years of experience and years in the current job (there’s a sketch of how I derive that flag after this list). If you haven’t changed jobs, it could be costing you $4,000, which lines up with my personal experience.
  • Hours worked per week. This is basically what you would expect.
  • Education. This is the number of years of education you received outside of high school.
  • Build scripts and automation. One of the tasks people could check off was whether they automate their work. Out of all the tasks people could list, this one seems to have the biggest impact.
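
Here’s the sketch I promised for the “first job” flag, plus how I’m reading significance off the model. As before, the column names are hypothetical, not the real survey fields.

```r
# Flag a "first job": years of experience equals years in the current job.
survey$first_job <- as.integer(survey$years_experience == survey$years_in_job)

# The coefficient table gives a p-value per term; I'm watching the terms
# where Pr(>|t|) comes in below 0.05.
coef(summary(model))
```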

There are some interesting correlations

Part of doing a multiple regression is making sure your variables aren’t too strongly correlated with each other, or “collinear.” As part of checking that, it’s possible to find some interesting correlations.
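
A quick way to check for that in R is to look at pairwise correlations and variance inflation factors. This is a sketch with the same hypothetical column names; note that vif() comes from the third-party car package.

```r
# Pairwise correlations between the numeric predictors.
numeric_cols <- c("years_experience", "years_in_job", "hours_per_week",
                  "years_education", "telecommute_days")
round(cor(survey[, numeric_cols], use = "pairwise.complete.obs"), 2)

# Variance inflation factors: values well above ~5 suggest a predictor is
# largely explained by the others (requires the `car` package).
car::vif(model)
```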

  • If you are on call, you are less likely to have post-secondary education. You are also probably overworked and learning PowerShell (no surprise there).
  • Certifications correlate negatively with being a dev DBA instead of a production DBA.
  • If this is your first job, you are less likely to be working more than 40 hours per week. Maybe that $4,000 pay cut is worth it. ;)
  • Independents also work fewer hours per week. So maybe your second job should be going independent.
  • If you telecommute, you might make $2,000 more per year for every day of the week you telecommute, but you are going to be working more hours as well.

Plans moving forward

So here is the current outline for this blog series:

  1. Identifying features (variables)
  2. Data cleanup
  3. Extracting features
  4. Removing collinear features
  5. Performing multiple regression
  6. Coding a calculator in JavaScript
  7. Reimplementing everything in R

So let me know what you think. I plan on making all of the data and code freely available on GitHub.

T-SQL Tuesday #98: Learning Troubleshooting from Games

 


This week’s T-SQL Tuesday is about a time that you solved a difficult technical problem. Unfortunately my brain doesn’t store events that way, so I can’t think of any good stories. Instead, I want to talk briefly about how games have made me a better programmer and a better troubleshooter.

Map-making in TFC

[Image: Team Fortress Classic box art]

The first game I want to talk about is Team Fortress Classic, a team-based shooter from the late ’90s. I used to play this game all the time, but I did even more than that: I would make custom levels to play on with other people.

Mapmaking for TFC was generally a simple process. You would create simple polyhedrons and then apply textures/patterns to them. Then you would place non-terrain objects, called entities, inside your terrain. Everything is pretty straightforward…until you get a leak.

A leak is when the outside of the level is reachable from the inside of the level. Imagine you are building a spaceship or a submarine: if you have a leak, it just won’t work. The challenge is that the level editor won’t tell you where the leak is.¹ So how do you solve it?

In my case, I would encase half the level in solid rock, so to speak: I’d just make a big cube and cover up half of the level. If the level compiled, I knew my leak was somewhere in that covered half. Then I just kept repeating with smaller and smaller cubes.

I do the same thing all the time in my professional life. I’ll comment out whole swaths of code. I’ll jump halfway into the data pipeline to see where the error starts. TFC taught me to keep cutting the problem in half until I find it.

Guessing the secrets of the universe with Zendo


Undoubtedly proof that I was destined to be a programmer, one of my favorite board games ever is Zendo. (It was actually one of my nicknames in college.) It’s got a silly theme about discerning whether something has the Buddha nature. In reality, it boils down to one player making up a rule and everyone else trying to figure out what that rule is.

If it sounds easy, I dare you to play something similar over at the New York Times. Chances are you are going to get it wrong.

The biggest thing Zendo taught me was to fight against confirmation bias. It taught me to ask, “What would prove my theory wrong?” Good troubleshooting involves guessing a cause, determining a test that will give you new information, and then running that test.

That test might be running a simpler version of a query that’s failing. It might mean adding a breakpoint to your code and inspecting variables.

Learning how to think systematically about this sort of thing has been tremendously useful.

Learning outside of programming

Troubleshooting is very often a set of skills and approaches that don’t have anything to do with technology per se. I think looking at how we can build those skills in other places, like games, can be very useful.

Speaking of other sources, there are two books I can recommend wholeheartedly. The first is How to Solve It, which is about solving mathematical problems, but it provides a number of ways to break down a problem or approach it from different angles. The second is called Conceptual Blockbusting. It focuses on the nebulous issue of how we think about problem solving. It’s very much a book about thinking, and I definitely enjoyed it.

Footnotes

¹ Only after writing this blog post did I find an article explaining how to get the level editor to tell you exactly where the leak is. Sigh.