Wednesday, April 23, 2014

SQLSaturday #305 Dallas Business Analytics Edition

I am excited about being selected to speak at and attend the May 3rd SQLSaturday BA Edition in Dallas, TX. There are a number of sessions I want to attend, including Eric Mezell’s An Introduction to Hadoop, Andy Egger’s Applied Predictive Analytics, David Browne on Enterprise Business Intelligence (during lunch) and Garrett Edmonson’s Measuring Data Warehouse Performance. The full schedule is here -

I hope to make some time to visit with old friends in the Dallas area like Brian Smith (MS) and Tim Costello. I hope this is a precursor to Dallas hosting a week-long PASS Business Analytics conference in the future. California was a little too far to travel this year, even though I should have gone.

I am presenting one of the first BI presentations I ever gave – Dimensional Modeling 101. When I moved over to Business Intelligence, I had a strong background in data normalization. Transitioning to a Dimensional Model was quite difficult, and I wish I had heard someone explain it the way Warren Thornthwaite did at a past PASS Summit before I started. He did a great job using PASS registration data to create a simple Data Mart and Analysis Services cube while previewing the results in Excel. That hooked me.

Thanks to all the sponsors for helping make this a free event for all attendees.


Friday, April 11, 2014

Call for Speakers – SQLSaturday #324 Baton Rouge

Baton Rouge is now on its 6th SQLSaturday, and the event grows year after year. It all started with Patrick LeBlanc back in 2009 with #17, and with help from local User Groups (William Assaf, Sparkhound), it has expanded its topics and attendees. This event is also called Baton Rouge Tech Fest because there are more than just SQL Server related topics. Last year, we had a CIO/Manager track, Windows Phone, SharePoint and .Net.

So, this is a Call for Speakers for the next SQLSaturday, August 2nd, 2014, in Baton Rouge at the Business Center on LSU’s campus. Most speakers get at least one session, so do not be shy about submitting. If you need help, practice by giving a Lightning Round at the local User Group (SQL and .Net) meeting the 2nd Wednesday of every month. There is even a Java user group meeting at the same time.

Pictures from previous SQLSaturdays in Baton Rouge


Tuesday, March 25, 2014

5 Tools Every DBA Should Know About

After monitoring enterprise SQL Server instances for over 10 years now, there are 5 tools I use every day for spot-check views of a system. They are presented here in no particular order, because each has its own area where it is number one. Over the years, those around me have gained a better understanding of a SQL Server instance and the fine-tuning needed to optimize the usage of these tools. Though most are free, there is a performance cost to using these assistants, and you had better have a good understanding of their output.

Performance Dashboard

This feature was added in SQL Server 2005, and versions are available for 2008 and 2012. The 2008 version is actually the 2005 version with a modification to the script. It provides a central location in Management Studio to view what has gone on in the last 15 minutes, plus drill-thru reports for current OS and SQL statistics along with historical statistics. The dashboard is divided into 4 areas – CPU Utilization, Current Waits, Current Activity and Historical/Miscellaneous Information. The links provided in each area drill into reports (rdl files) that pull statistics from DMOs (Dynamic Management Objects), the DMFs and DMVs you might be familiar with.


The top-left quadrant is for CPU Utilization. The blue part of the percentage bar is the amount of CPU used by the SQL Server process, while the rest is CPU usage outside of SQL Server – the Windows OS and other applications. Clicking the blue part of the CPU bar will show currently running queries ordered by CPU usage. This is convenient information about what is taking up SQL CPU time right now. The Current Waits section lists the waits categorized by type, with drill-thru capabilities to see the queries with waits. Blocking is one common wait I see. The lower-left Current Activity table lets you see the requests happening, along with all current connections from the User Sessions link. The 4th section has links to more historical statistics like IO by database file, waits summed up by category, and reports about queries sorted by Reads, Writes, CPU, Duration and more. The database IO usage can help in conversations with SAN Administrators.



SP_WhoIsActive

Adam Machanic has done a great job with this tool, and I suggest visiting the 29 days of SP_WhoIsActive posts to really understand what he has done. I would take my time reading his blog posts and definitely let each day sink in before going to the next one. This tool gives the user the ability to see what is running, with all kinds of statistics like CPU, reads, writes, and the list goes on and on. I have 2 shortcut keys in Management Studio (SSMS) mapped to 2 different versions: one has the Query Plan in a column and the other does not. I also filter out some logins (sa) and databases (master & msdb).

Ctrl-6 – no Query Plan and some filtering


Ctrl-7 – include Query Plan with different filtering

The filters help clear out the noise of queries executing from certain applications. Being able to drill into the query plan from the selected query is great because you can see the execution plan right in SSMS.
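As a sketch, the two shortcut versions might look something like this (the parameter names come from SP_WhoIsActive's documented interface; the specific filter values are just an illustration):

```sql
-- Ctrl-6 style: no query plan, filter out the sa login
EXEC dbo.sp_WhoIsActive
    @not_filter = 'sa',
    @not_filter_type = 'login';

-- Ctrl-7 style: include the query plan, filter out a system database
EXEC dbo.sp_WhoIsActive
    @get_plans = 1,
    @not_filter = 'msdb',
    @not_filter_type = 'database';
```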


I have to say I still run sp_Who3 ‘Active’ first before going to SP_WhoIsActive. I remember first using it in SQL Server 2000, but Denny Cherry might have created it before then. The query uses DBCC INPUTBUFFER to get the actual SQL statement associated with the SPID of concern, which is not available in some cases with DMOs.
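For reference, a minimal sketch of pulling the input buffer for a session yourself (57 is a made-up SPID):

```sql
-- Show the last statement sent by session 57 (substitute the SPID of concern)
DBCC INPUTBUFFER (57);
```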


Built-In Reports

Back to Microsoft and the ability to use reports right from Management Studio. My favorites are Disk Usage and Disk Usage by Top Tables. Right-clicking on a database, then selecting Reports/Standard Reports/Disk Usage from the context menu gives a summary of the database data and log files. You can see how full these files are and the space free. If the files have expanded recently, you can see the growth amounts and the time each took. We recently used this to see that a new Distribution database for replication was expanding by 1MB at a time, 507 times in a single minute, which showed up as IO waits.


I really like the Schema Changes and Backup and Restore Events reports, but as you can see below there are many to help with exploring an instance and its databases.




Plan Explorer

Plan Explorer is a tool to view Query/Execution Plans. It integrates into Management Studio (SSMS) so it can be launched while viewing a query plan, maybe after using SP_WhoIsActive. The ability to sort the plan statistics by different columns is great. It also compresses the graphical display for easier reading. Highlights in various colors indicate high object usage percentages so you can drill to the iterators that are the heavy hitters. See below for a view (Fit to Screen) in SSMS followed by the Plan Explorer view.





It should not be hard to see the difference this tool makes when viewing execution plans.



Profiler

Even though Extended Events will be replacing Profiler in a few versions, Profiler is still used every day around the world. Once you master the events to trap and the columns to view, this tool becomes a great way to see what is going on. Profiler lets you save templates once you have the columns and events set up the way you like. Recalling these templates helps you quickly start a trace.


Caution should be noted here: Profiler can cause performance problems on heavily used systems. I once froze a production system with 16 processors and 128GB of RAM by running a client-side Profiler trace and trying to capture the Query Plan event. The boss was not happy. This is where I learned you can script the Profiler trace out and use it as a server-side trace, saving the trace to a file to be read later. We also removed capturing the Query Plan event, which I do not suggest using in a trace.
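A sketch of reading such a server-side trace file after the fact with fn_trace_gettable, instead of watching Profiler live (the file path is hypothetical):

```sql
-- Read a saved server-side trace file; DEFAULT reads all rollover files
SELECT TextData, Duration, Reads, Writes, StartTime
FROM sys.fn_trace_gettable(N'E:\Traces\MyTrace.trc', DEFAULT)
ORDER BY StartTime;
```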


Sunday, March 9, 2014

5 Things a developer should know about databases


Database Normalization

Theoretical versus real-world

Developers do not have a strong need to know the theoretical definitions of 1st through 5th normal form, but they do need an understanding of some design practices. The keys are:

  1. No Duplicate data in columns in one table
  2. Relationships built as foreign keys
  3. Primary Key to identify a row
  4. Learn Many to Many relationships (this could be a whole blog)
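A minimal sketch of point 4: a many-to-many relationship is resolved with a junction table whose primary key combines the two foreign keys (Student/Class are hypothetical names chosen for illustration):

```sql
CREATE TABLE dbo.Student (
    StudentID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Student PRIMARY KEY,
    StudentName varchar(40) NOT NULL
);

CREATE TABLE dbo.Class (
    ClassID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Class PRIMARY KEY,
    ClassName varchar(40) NOT NULL
);

-- Junction table: one row per enrollment, no duplicate pairs allowed
CREATE TABLE dbo.StudentClass (
    StudentID int NOT NULL
        CONSTRAINT FK_StudentClass_Student REFERENCES dbo.Student (StudentID),
    ClassID int NOT NULL
        CONSTRAINT FK_StudentClass_Class REFERENCES dbo.Class (ClassID),
    CONSTRAINT PK_StudentClass PRIMARY KEY (StudentID, ClassID)
);
```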

Always look for a natural key when using an identity for the Primary Key

The first of these is the primary key. Over the years, I have finally grasped using an Identity column as the primary key, but I always design a table with a natural key as well. The following example shows the difference in a Customer table.

CREATE TABLE [dbo].[Customer](
    [CustomerID] [int] IDENTITY(1,1) NOT NULL,
    [CustomerName] [varchar](40) NOT NULL,
    [AddressLine1] [varchar](60) NULL,
    [City] [varchar](30) NOT NULL,
    [StateAbbreviation] [varchar](2) NOT NULL,
    [PostalCode] [varchar](15) NOT NULL,
 CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
);

CREATE UNIQUE NONCLUSTERED INDEX [IX_Customer_CustomerName]
    ON [dbo].[Customer] ([CustomerName] ASC);

The key is understanding that the Identity column is not the column used to locate a customer – the CustomerName is really how we identify a customer. From my experience, the identity column came into software development for 3 reasons:

  1. The int column takes up less space in non-clustered indexes
  2. It helps with lookups in Objects for Object Oriented coding
  3. The joins are faster and helps with related tables when used as part of the related table’s primary and/or foreign key

Lookup versus Parent-Child tables

The Customer table can be considered a Lookup table for customers that have Orders. The relationship between an Order Header and Order Detail would be a Parent-Child relationship. The Order Header and Detail tables still need a natural key, just like the Customer table, even though Identity is used heavily in these tables.


The example above shows OrderID as the primary key of the OrderHeaders table, but there is also a natural key to identify rows: CustomerID + OrderDate. The OrderID column is carried down into the OrderDetail table, where it is combined with ProductID to form that table’s primary key.
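A sketch of how those tables might be declared, assuming the Customer table from earlier (the Quantity column and the constraint names are illustrative):

```sql
CREATE TABLE dbo.OrderHeaders (
    OrderID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_OrderHeaders PRIMARY KEY,
    CustomerID int NOT NULL
        CONSTRAINT FK_OrderHeaders_Customer REFERENCES dbo.Customer (CustomerID),
    OrderDate datetime NOT NULL,
    -- Natural key: CustomerID + OrderDate identifies a row in this model
    CONSTRAINT UQ_OrderHeaders_Natural UNIQUE (CustomerID, OrderDate)
);

CREATE TABLE dbo.OrderDetail (
    OrderID int NOT NULL
        CONSTRAINT FK_OrderDetail_OrderHeaders REFERENCES dbo.OrderHeaders (OrderID),
    ProductID int NOT NULL,
    Quantity int NOT NULL,
    -- OrderID carried down from the parent, combined with ProductID
    CONSTRAINT PK_OrderDetail PRIMARY KEY (OrderID, ProductID)
);
```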


Data Types

char vs. varchar

Remember that a char column uses its full declared size in the data pages of every row, while varchar only takes up the space used by the actual data in the column for that row.

Money not decimal

The money data type is a much better option for dollar amounts in a database than trying to specify the size and number of decimal places in a decimal data type.

nvarchar or nchar

The nvarchar and nchar types store Unicode characters, while char and varchar are for non-Unicode characters. The nchar/nvarchar data types take up 2 times the space for a column in each row. In 25 years of IT work, I have never seen a need for Unicode characters, though some would say differently.

integers – tiny, small, int and bigint

Integer data types come in various sizes that take up different amounts of space in the rows of a table. If you know a lookup table will only ever have 10 rows, you do not need int or bigint for the ID column; go ahead and use smallint. See Books Online for more information.
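For illustration, a hypothetical small lookup table using the smaller type (per-value storage: tinyint = 1 byte, smallint = 2, int = 4, bigint = 8):

```sql
-- A status table with only a handful of rows does not need a 4-byte int key
CREATE TABLE dbo.OrderStatus (
    OrderStatusID smallint NOT NULL CONSTRAINT PK_OrderStatus PRIMARY KEY,
    StatusName varchar(30) NOT NULL
);
```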


Learn Primary Key defaults to Clustered

As we saw in the diagram above, CustomerID became the primary key and the clustered index of the Customer table without clustered even being specified. That is the SQL Server default. You can change this to a non-clustered primary key index, but you should always have a clustered index on a table in a transactional (OLTP) database. You could just as easily have specified the Unique Index on CustomerName as clustered, but be warned: the clustered index columns are included in all non-clustered indexes on a table.
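A sketch of the alternative arrangement just described, with a non-clustered primary key and the clustered index moved to the natural key (the table and index names are made up):

```sql
CREATE TABLE dbo.Customer2 (
    CustomerID int IDENTITY(1,1) NOT NULL,
    CustomerName varchar(40) NOT NULL,
    -- Primary key no longer has to be the clustered index
    CONSTRAINT PK_Customer2 PRIMARY KEY NONCLUSTERED (CustomerID)
);

-- The clustered key (CustomerName) is carried into every non-clustered index,
-- so a wide clustered key makes all other indexes on the table wider too
CREATE UNIQUE CLUSTERED INDEX IX_Customer2_CustomerName
    ON dbo.Customer2 (CustomerName);
```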

Non-clustered indexes

A non-clustered index is usually created to improve the performance of queries that filter on a column other than the primary key. In the example above, a query with CustomerName in its WHERE clause can return results faster because of the Unique Non-Clustered index. Another place to look for non-clustered index candidates is the foreign key columns of related tables.

Overuse of Include clause

Covering indexes used to require putting the additional columns in the index key itself to eliminate a lookup in a query plan. The INCLUDE clause in SQL Server now lets you add those columns at the leaf level of the index, so they do not take up space in the tree of the index. Watch out not to overuse this option, because you can end up with more space used by indexes than by the table itself. Also, I have seen INCLUDE indexes cause deadlocking, which is not a good thing.
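A sketch of an INCLUDE (covering) index on the Customer table from earlier (the index name is illustrative):

```sql
-- Covers queries that filter on City and select PostalCode:
-- PostalCode lives only at the leaf level, not in the index tree
CREATE NONCLUSTERED INDEX IX_Customer_City
    ON dbo.Customer (City)
    INCLUDE (PostalCode);
```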


Do not use SELECT * (specify columns)

Using SELECT * causes a couple of issues. First, it might require a full scan of the table (Clustered Index Scan), which has higher IO and memory usage than specifying just the columns you need. Second, if someone adds columns to the table, it can break views and code that are not equipped to handle the additional columns.

Learn difference between LEFT and INNER

Joins in T-SQL go back to the idea of database normalization. An INNER JOIN returns only the rows that match in both tables. A LEFT JOIN returns all rows from the table specified in the FROM clause and fills in the columns from the joined table where there is a match; rows without a match have NULLs in those columns.
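A sketch of the difference, against the hypothetical Customer and OrderHeaders tables sketched earlier:

```sql
-- INNER JOIN: only customers that have at least one order
SELECT c.CustomerName, o.OrderDate
FROM dbo.Customer AS c
INNER JOIN dbo.OrderHeaders AS o ON o.CustomerID = c.CustomerID;

-- LEFT JOIN: every customer; OrderDate is NULL for customers with no orders
SELECT c.CustomerName, o.OrderDate
FROM dbo.Customer AS c
LEFT JOIN dbo.OrderHeaders AS o ON o.CustomerID = c.CustomerID;
```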

New Features of SQL Server

Window Functions

Every new version of SQL Server comes out with improvements and additions. Yes, Microsoft does actually improve our lives. I am extremely excited to use ROW_NUMBER() with OVER (PARTITION BY ...) to find the latest or earliest rows in a data set. This eliminates the need for a MAX on a date in a sub-query joined back to the main query to find matching data that is not related to the keys of the tables. There are even more functions like RANK, DENSE_RANK and NTILE.
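A sketch of finding the latest order per customer with ROW_NUMBER(), assuming the OrderHeaders table sketched earlier:

```sql
-- Latest order per customer, no MAX(OrderDate) sub-query needed
WITH Numbered AS (
    SELECT CustomerID, OrderID, OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerID
                              ORDER BY OrderDate DESC) AS rn
    FROM dbo.OrderHeaders
)
SELECT CustomerID, OrderID, OrderDate
FROM Numbered
WHERE rn = 1;   -- rn = 1 is the newest row in each customer's partition
```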

Common Table Expressions (replaces cursors in some cases)

If you want to learn something really cool, try common table expressions. They can eliminate the use of CURSORs to loop through rows with T-SQL code and will get you on the path to understanding set-based querying of your database.
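A sketch of a recursive CTE walking an org chart set-based, where older code would loop level by level with a cursor (dbo.Employee and its columns are hypothetical):

```sql
WITH OrgChart AS (
    -- Anchor: the top of the hierarchy (no manager)
    SELECT EmployeeID, ManagerID, EmployeeName, 0 AS Depth
    FROM dbo.Employee
    WHERE ManagerID IS NULL
    UNION ALL
    -- Recursive member: each pass pulls in the next level of reports
    SELECT e.EmployeeID, e.ManagerID, e.EmployeeName, oc.Depth + 1
    FROM dbo.Employee AS e
    INNER JOIN OrgChart AS oc ON e.ManagerID = oc.EmployeeID
)
SELECT EmployeeID, EmployeeName, Depth
FROM OrgChart;
```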

There are many more features of SQL Server that a developer should learn; these are just some of the common gaps I see in new developers’ experience, and usually the first things I help with as a co-worker. Happy programming!!!

Tuesday, January 28, 2014

Jan PASS Data Architecture VC presents Ike Ellis on Tips & Tricks

Please join the PASS Data Architecture virtual chapter on Thursday Jan 30th at noon central for SQL Server Tips and Tricks presented in 5 minute lectures.

Next Meeting

Thu, Jan 30 2014 12:00 (GMT-06:00) Central Time (US & Canada)


This is a SQL presentation for the YouTube generation.  Perfect for half-attentively listening in on your lunch break.  It's really twenty separate five minute lectures.  We'll cover tips and tricks on a range of topics, including data visualization, performance, architecture, and ETL processes.
About Ike Ellis:
Ike Ellis is a 17-year veteran with SQL Server.  He's written millions of lines of code, designed many databases, and troubleshot hundreds of performance problems (some of which he wasn't even the cause of.)  He's a SQL Server MVP, TechEd speaker, RedGate speaker, user group chairperson, and makes a mean guacamole.  Right now, he's neck-deep in several business intelligence projects.  He's designed dashboards, written ETL frameworks, created reports, and all that goes with it.  He doesn't know everything, but he loves to share and teach the things he does know.   Come join him at the SQL Pass Book Readers Virtual Chapter

Wednesday, January 15, 2014

PASS BI/DW Virtual Chapter–Analysis Services Attributes and Hierarchies

Please join the PASS BI/DW VC today January 15th at 7PM Central for a presentation on Attributes and Hierarchies in Analysis Services 2012. This is a presentation I have shown at 2 SQLSaturdays and the Baton Rouge SQL Server User Group as well as 24 Hours of PASS.

Next Meeting

Wed, Jan 15 2014 7PM Central

Thu, Jan 16 2014 12:00 (GMT+10:00) Canberra, Melbourne, Sydney

Attributes & Hierarchies in Analysis Services 2012



The session explains Attributes and Hierarchies in the Analysis Services 2012 Semantic Model. They are used to slice (filter) and dice (group) the measures and dimensions used to view analytical data. The slicing and dicing of data involves fields, columns, or whatever you want to label them; in the SQL Server Semantic Model, these are called Attributes and Hierarchies. The configuration of these slicers and dicers is important in SQL Server Analysis Services. This session will demonstrate the difference between an attribute and the columns in a hierarchy in a dimension, as well as how to configure them for the best performance and viewing.

Thomas LeBlanc (MCSA BI 2008, MCITP 2005/2008 DBA & MCDBA 2000) is a Data Warehouse Architect in Baton Rouge, LA. He has been in the IT field since 1989, starting as a COBOL programmer, graduating to dBase, FoxPro and Visual FoxPro, upgrading to Visual Basic versions 3-6 and even some .Net (C#). Designing and developing normalized databases became his passion. Dimensional Modeling is now his future. Full-time DBA work started in 2001 for Thomas while working at a paper mill. DBA-related work involved supporting 600+ GB databases with replication to a Data Warehouse and DB Mirroring to a disaster recovery site. Performance tuning and reviewing database design and code are an everyday occurrence. Using the Microsoft BI stack for analytical reporting has sharpened his skills with SSIS, SSAS, SSRS and PerformancePoint.

Wednesday, January 8, 2014

Jan 8th – Baton Rouge SQL Server User Group - Azure

Please join us tonight, Wednesday Jan 8th starting at 5:45PM with networking and food. The information below is from the BRSSUG Website:

January '14 Baton Rouge User Groups Meeting - Azure!

Topic: Microsoft Azure!

This is an all user groups on deck meeting! SQL, .net, SharePoint, IT Pro, La-WIT are all invited!

Date: Wednesday, January 8, 2014

Location: The Louisiana Technology Park -

Sponsored By: Microsoft 


5:45 - 6:15 pm: Networking and Refreshments

6:15 - 6:30 pm: Announcements

6:30 - 8:00 pm: Microsoft!

Main Topic: Microsoft Azure Info for ALL User Groups

Main Topic Speakers: Ryan Roussel, Microsoft and Bobby Lee, Microsoft

Main Topic Summary:

Azure is an exciting platform and between IAAS, PAAS, big data, storage, media, mobile services, etc. the possibilities in Azure truly seem limitless.  This user group session will focus on an introduction to Azure with a focus on the agenda topics below.  The session will include lots of live Azure demonstrations and answering questions from the group.  

Every person in attendance will have free access to Azure so they can do everything demoed at the user group meeting on their own after the session!

· Review Azure IAAS (Infrastructure As A Service) – Windows and Linux
· Review Azure PAAS (Platform As A Service) and Azure for the web 

    o How do developers leverage Azure for .Net, JAVA, Node.js, PHP, Python, Ruby, etc.
· Review Azure Mobile Services – Azure development for mobile apps on iOS, Android, Windows Phone
· Azure storage options review Azure storage (tables, blob, queue)
    o Developer perspective, when to look at Azure Storage, IAAS database (SQL or Oracle) for SQL Database (Azure PAAS database)
· Free Azure access, review options:
    o MSDN users get $150-$200 of free Azure capacity per month. Activate today, no credit card needed:

    o Free Azure access for Non-MSDN users, NO credit card needed: