Tuesday, March 25, 2014

5 Tools Every DBA Should Know About

After monitoring enterprise SQL Server instances for over 10 years now, there are 5 tools I use every day for spot-check views of a system. The order here is not a countdown from 5 to 1; they are presented in no particular order, because each is number one in its own area. Over the years, those around me have gained a better understanding of a SQL Server instance and of the fine tuning needed to get the most out of these tools. Though most are free, there is a performance cost to using these assistants, and you had better have a good understanding of their output.

Performance Dashboard

This feature was added in SQL Server 2005, but versions are available for 2008 and 2012. The 2008 version is actually the 2005 version with a modification to the script. It is a central location in Management Studio to view what is going on in the last 15 minutes, plus drill-thru reports for current OS and SQL statistics along with historical statistics. The dashboard is divided into 4 areas – CPU Utilization, Current Waits, Current Activity and Historical/Miscellaneous Information. The links provided in each area of the dashboard drill into reports (rdl files) that get their statistics from DMOs (Dynamic Management Objects), the DMFs and DMVs you might already be familiar with.

[screenshot: Performance Dashboard]

The top-left quadrant is for CPU Utilization. The blue part of the percentage bar is the amount of CPU used by the SQL Server process, while the rest is CPU usage outside of SQL Server – the Windows OS and other applications. Clicking the blue part of the CPU bar shows the currently running queries ordered by CPU usage. This is convenient information about what is taking up SQL CPU time right now. The Current Waits section lists the waits categorized by type, with drill-thru capabilities to see the queries doing the waiting. Blocking is one common wait I see. The lower-left Current Activity table lets you see the requests happening, along with all current connections from the User Sessions link. The 4th section has links to more historical statistics like IO by database file and waits summed up by category, along with reports about queries sorted by Reads, Writes, CPU, Duration and more. The database IO usage can be a big help when working with SAN administrators.
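Since the dashboard reports pull from DMOs, you can run a quick query against the same sources yourself. This is just a sketch of mine (not the query the dashboard actually runs) listing current requests by CPU:

SELECT r.session_id,
       r.status,
       r.cpu_time,         -- CPU time in milliseconds
       r.logical_reads,
       r.wait_type,
       t.text AS sql_text  -- the statement behind the request
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID  -- leave out this query itself
ORDER BY r.cpu_time DESC;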

 

SP_WhoIsActive

Adam Machanic has done a great job with this tool, and I suggest visiting the 29 days of SP_WhoIsActive posts to really understand what he has done. I would take my time reading his blogs and definitely let each day sink in before going to the next post. This tool gives you the ability to see what is running, with all kinds of statistics like CPU, reads, writes, and the list goes on and on. I have 2 shortcut keys in Management Studio (SSMS) mapped to 2 different versions: one has the Query Plan in a column and the other does not. I also filter out some logins (sa) and databases (master & msdb).

Ctrl-6 – no Query Plan and some filtering

[screenshot: sp_WhoIsActive output]

Ctrl-7 – include Query Plan with different filtering

The filter helps clear out the noise of executing queries from certain applications. Being able to drill into the query plan from a selected query is great because you can open the execution plan right in SSMS.
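As a rough sketch of what my Ctrl-7 version looks like (the parameter names come from the sp_WhoIsActive documentation; the filter value is just an example), the call is along these lines:

EXEC dbo.sp_WhoIsActive
    @get_plans = 1,              -- adds a query_plan column to the output
    @not_filter = 'sa',          -- value to exclude
    @not_filter_type = 'login';  -- exclude by login name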

NOTE: SP_Who3

I have to say I still run sp_Who3 'Active' first before going to SP_WhoIsActive. I remember first using it in SQL Server 2000, but Denny Cherry might have created it before then. The query uses DBCC INPUTBUFFER to get the actual SQL statement associated with the SPID of concern, which is not available in some cases with DMOs.
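If all you need is the last statement a session sent, DBCC INPUTBUFFER can also be run on its own (the SPID below is made up; substitute the one you are chasing):

-- show the last batch submitted by session 53
DBCC INPUTBUFFER (53);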

 

Built-In Reports

Back to Microsoft and the ability to run reports right from Management Studio. My favorites are Disk Usage and Disk Usage by Top Tables. Right-clicking on a database, then selecting Reports/Standard Reports/Disk Usage from the context menu, gives a summary of the database data and log files. You can see how full the files are and how much free space remains. If the files have expanded recently, you can see the growth events with the amounts and the time each one took. We recently used this to see that a new Distribution database for replication was expanding 1MB at a time, 507 times in a single minute, which showed up as IO waits.

[screenshots: Disk Usage reports]
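If you want the same numbers without opening the report, a rough equivalent of my own (not the query the report runs) against sys.database_files looks like this:

-- file size and space used for the current database, in MB
-- (size and SpaceUsed are stored in 8KB pages, so divide by 128)
SELECT name,
       size / 128.0                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;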

I really like the Schema Changes History and Backup and Restore Events reports, but as you can see below there are many to help with exploring an instance and its databases.

[screenshot: the list of standard reports]

 

PlanExplorer

PlanExplorer is a tool for viewing Query/Execution Plans. It integrates into Management Studio (SSMS), so it can be launched while viewing a query plan, maybe after using SP_WhoIsActive. The ability to sort the plan's statistics by different columns is great. It also compresses the graphical display for easier reading. Highlights in various colors indicate the objects with high usage percentages so you can drill to the iterators that are the heavy hitters. See below for a view (Fit to Screen) in SSMS followed by the PlanExplorer view.

[screenshot: the plan viewed in SSMS]

[screenshot: the same plan viewed in PlanExplorer]

It should not be hard to see the difference this tool makes when viewing execution plans.
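If you do not already have the plan open, one way to grab one is straight from the plan cache. Here is a minimal sketch (the LIKE filter is just a made-up example); the query_plan column can be clicked open in SSMS and then sent to PlanExplorer:

-- find a cached plan by a fragment of its SQL text
SELECT TOP (10)
       st.text       AS sql_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%Customer%';  -- hypothetical search string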

 

Profiler

Even though Extended Events will be replacing Profiler in a few versions, Profiler is still used every day around the world. Once you master the events to trap and the columns to view, this tool becomes a great way to see what is going on. Profiler lets you save templates once you have the columns and events set up the way you like. Recalling these templates helps you quickly start a trace.

[screenshot]

A note of caution here: Profiler can cause performance problems on heavily used systems. I once froze a production system that had 16 processors and 128GB of RAM by running a client-side trace and trying to capture the Query Plan event. The boss was not happy. This is where I learned you can script the Profiler trace out and run it as a server-side trace, saving the output to a file to be read later. We also removed capturing the Query Plan event, which I do not suggest using in a trace.

[screenshot]
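Once the server-side trace has written its file, fn_trace_gettable will read it back as a normal result set (the path below is made up):

-- load a server-side trace file so it can be queried like a table
SELECT TextData, Duration, CPU, Reads, Writes, StartTime
FROM sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT)
ORDER BY Duration DESC;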

Sunday, March 9, 2014

5 Things a developer should know about databases

 

Database Normalization

Theoretical versus real-world

There is not a strong need for developers to know the theoretical definitions of 1st through 5th normal form, but there does need to be an understanding of some design practices. The keys are:

  1. No Duplicate data in columns in one table
  2. Relationships built as foreign keys
  3. Primary Key to identify a row
  4. Learn Many to Many relationships (this could be a whole blog)

Always look for a natural key when using identity for the Primary Key

The first thing to get right is the primary key. Over the years, I have finally grasped the value of using an Identity column as the primary key, but I always design a table with a natural key as well. The following example shows the difference in a Customer table.

CREATE TABLE [dbo].[Customer](
    [CustomerID] [int] IDENTITY(1,1) NOT NULL,
    [CustomerName] [varchar](40) NOT NULL,
    [AddressLine1] [varchar](60) NULL,
    [City] [varchar](30) NOT NULL,
    [StateAbbreviation] [varchar](2) NOT NULL,
    [PostalCode] [varchar](15) NOT NULL,
 CONSTRAINT [PK_Customer_CustomerID] PRIMARY KEY CLUSTERED 
( [CustomerID] ASC) 
) ON [PRIMARY] 
GO
 
-- the natural key: a customer is identified by name, not by the identity value
CREATE UNIQUE NONCLUSTERED INDEX [unc_Customer_CustomerName] 
    ON [dbo].[Customer]
([CustomerName] ASC)
ON [PRIMARY]
GO

The key is understanding that the Identity column is not the column used to locate a customer; the Customer Name is really how we identify a customer. From my experience, the identity column came into software development for 3 reasons:



  1. The int column takes up less space in non-clustered indexes
  2. It helps with lookups in Objects for Object Oriented coding
  3. Joins are faster, and it helps with related tables when used as part of the related table’s primary and/or foreign key

Lookup versus Parent-Child tables


The Customer table can be considered a Lookup table for customers that have Orders. The relationship between an Order Header and Order Detail is a Parent-Child relationship. The Order Header and Detail still need a Natural Key just like the Customer table, even though Identity is used heavily in these tables.


[diagram: OrderHeaders and OrderDetails tables]


The example above shows OrderID as the primary key of the OrderHeaders table, but there is also a natural key to identify rows: CustomerID + OrderDate. The OrderID column is carried down into the OrderDetails table, where it is combined with ProductID to form that table’s primary key.
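Here is a rough sketch of how those two tables might look (the table and column names follow the diagram; the data types and constraint names are my own guesses):

CREATE TABLE [dbo].[OrderHeaders](
    [OrderID]    [int] IDENTITY(1,1) NOT NULL,
    [CustomerID] [int] NOT NULL,
    [OrderDate]  [datetime] NOT NULL,
 CONSTRAINT [PK_OrderHeaders_OrderID] PRIMARY KEY CLUSTERED ([OrderID] ASC),
 -- the natural key: one order per customer per date
 CONSTRAINT [unc_OrderHeaders_CustomerID_OrderDate] UNIQUE NONCLUSTERED ([CustomerID] ASC, [OrderDate] ASC),
 CONSTRAINT [FK_OrderHeaders_Customer] FOREIGN KEY ([CustomerID]) REFERENCES [dbo].[Customer] ([CustomerID])
)
GO

CREATE TABLE [dbo].[OrderDetails](
    [OrderID]   [int] NOT NULL,   -- carried down from OrderHeaders
    [ProductID] [int] NOT NULL,
    [Quantity]  [int] NOT NULL,
 CONSTRAINT [PK_OrderDetails_OrderID_ProductID] PRIMARY KEY CLUSTERED ([OrderID] ASC, [ProductID] ASC),
 CONSTRAINT [FK_OrderDetails_OrderHeaders] FOREIGN KEY ([OrderID]) REFERENCES [dbo].[OrderHeaders] ([OrderID])
)
GO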


 


Data Types



char vs. varchar


Remember that a char column uses its full declared size on the data pages of every row, where varchar only takes up the space used by the actual data in the column for that row.


Money not decimal


The money data type is a much better option for dollar amounts in a database than trying to specify the precision and number of decimal places of a decimal data type.


nvarchar or nchar


The nvarchar and nchar types are for storing Unicode characters, while char and varchar are for non-Unicode characters. The nchar/nvarchar data types take up 2 times the space per column in each row. In 25 years of IT work, I have never seen a need for Unicode characters, though some would say differently.


integers – tiny, small, int and bigint


The integer data types come in various sizes that take up different amounts of space in each row of a table. If you know that a lookup table only has 10 possible rows, you do not need to use int or bigint for the ID column; go ahead and use smallint (or even tinyint). See Books Online for more information.
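A small sketch of what those choices look like together (the table is made up for illustration):

-- tinyint (1 byte) covers up to 255 rows; smallint (2 bytes) up to 32,767
CREATE TABLE [dbo].[OrderStatus](
    [OrderStatusID] [tinyint]     NOT NULL,
    [StatusName]    [varchar](30) NOT NULL,  -- varchar: only the bytes actually used
    [HandlingFee]   [money]       NULL,      -- money for dollar amounts
 CONSTRAINT [PK_OrderStatus_OrderStatusID] PRIMARY KEY CLUSTERED ([OrderStatusID] ASC)
)
GO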


Indexes



Learn Primary Key defaults to Clustered



As we saw in the example above, CustomerID is the primary key and the clustered index of the Customer table; even if CLUSTERED is not specified, that is the default in SQL Server. You can change this to a non-clustered primary key index, but you should always have a clustered index on a table in a transactional (OLTP) database. You could have easily specified the Unique Index on CustomerName as clustered instead, but be warned: the clustered index key columns are included in every non-clustered index on the table.


Non-clustered indexes


A non-clustered index is usually created to improve the performance of queries that request data by a column other than the primary key. In the example above, a query with CustomerName in its WHERE clause can return results faster because of the Unique Non-Clustered index. Another place to look for non-clustered index candidates is the foreign key columns of related tables.
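Using the OrderHeaders sketch from earlier, an index on its foreign key column would look something like this (the index name is my own):

-- supports joins and lookups from Customer to its orders
CREATE NONCLUSTERED INDEX [ix_OrderHeaders_CustomerID]
    ON [dbo].[OrderHeaders] ([CustomerID] ASC)
GO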



Overuse of Include clause


Creating covering indexes used to require putting the additional columns in the index key itself to eliminate a lookup in a query plan. The INCLUDE clause in SQL Server now lets you add those columns at the leaf level of the index, so they do not take up space in the tree of the index. Watch out not to overuse this option, because you can end up with more space used by indexes than by the table itself. I have also seen INCLUDE indexes contribute to deadlocking, which is not a good thing.
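A quick sketch against the Customer table from earlier (the included columns are just an example):

-- CustomerName is the index key; City and PostalCode ride along
-- at the leaf level only, via INCLUDE, to cover the query
CREATE NONCLUSTERED INDEX [ix_Customer_CustomerName_Covering]
    ON [dbo].[Customer] ([CustomerName] ASC)
    INCLUDE ([City], [PostalCode])
GO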



T-SQL



Do not use SELECT * (specify columns)


Using SELECT * causes a couple of issues. First, it might cause a full scan of the table (a Clustered Index Scan), which has higher IO and memory usage than specifying just the columns you need. Second, if someone adds columns to the table, it can break views and code that are not equipped to handle the additional columns.


Learn difference between LEFT and INNER


Joins in T-SQL go back to the idea of Database Normalization. An INNER JOIN returns only the rows that match in both tables. A LEFT JOIN returns all rows from the table specified in the FROM part of the T-SQL and fills in the columns from the joined table where there is a match; rows without a match will have NULLs in those columns.
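Using the Customer and OrderHeaders tables sketched earlier, the difference looks like this:

-- INNER JOIN: only customers that have at least one order
SELECT c.CustomerName, o.OrderDate
FROM [dbo].[Customer] AS c
INNER JOIN [dbo].[OrderHeaders] AS o ON o.CustomerID = c.CustomerID;

-- LEFT JOIN: every customer; OrderDate is NULL when there are no orders
SELECT c.CustomerName, o.OrderDate
FROM [dbo].[Customer] AS c
LEFT JOIN [dbo].[OrderHeaders] AS o ON o.CustomerID = c.CustomerID;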

New Features of SQL Server



Window Functions


Every new version of SQL Server comes out with improvements and additions. Yes, Microsoft does actually improve our lives. I am extremely excited to use ROW_NUMBER() with OVER (PARTITION BY ...) to find the latest or earliest rows in a data set. This eliminates the need to use MAX on a date in a sub-query joined back to the main query to find matching data that is not related to the keys of the tables. There are even more functions like RANK, DENSE_RANK and NTILE.
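For example, a sketch of finding each customer's latest order from the hypothetical OrderHeaders table:

-- number each customer's orders newest-first, then keep only the latest
SELECT CustomerID, OrderID, OrderDate
FROM (
    SELECT CustomerID, OrderID, OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerID
                              ORDER BY OrderDate DESC) AS rn
    FROM [dbo].[OrderHeaders]
) AS ranked
WHERE rn = 1;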


Common Table Expressions (replaces cursors in some cases)


If you want to learn something really cool, try common table expressions. They can eliminate the use of CURSORS to loop through rows with T-SQL code and will get you on the path to understanding set-based querying of your database.
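The same "latest order per customer" idea written as a CTE instead of a cursor or derived table (again using the hypothetical OrderHeaders table):

-- the CTE gives the windowed result a name we can filter in one pass
WITH RankedOrders AS (
    SELECT CustomerID, OrderID, OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerID
                              ORDER BY OrderDate DESC) AS rn
    FROM [dbo].[OrderHeaders]
)
SELECT CustomerID, OrderID, OrderDate
FROM RankedOrders
WHERE rn = 1;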


There are many features of SQL Server a developer should learn, and these are just some of the common gaps I see in new developers' experience, usually the first things I end up helping with as a co-worker. Happy programming!!!