pointers, solutions and scripts for the SQL DBA
not intended to replace msdn, common sense or oxford commas

12/04/2012

On the Advantages of DateTime2(n) over DateTime

Starting with SQL 2008, we database developers started becoming more familiar with datetime2.  Sometimes folks need convincing though, so here goes.

Here's a brief review of how the datetime2 data type converts a varchar representing a time value at precisions out to one ten-millionth of a second.  Run this script yourself or view the results in the image below:

declare @datetime varchar(50) = '01/01/2012 11:11:11.1111111'

select        @datetime
select        convert(datetime2(0), @datetime)
select        convert(datetime2(1), @datetime)
select        convert(datetime2(3), @datetime)
select        convert(datetime2(4), @datetime)
select        convert(datetime2(7), @datetime)
select        convert(datetime2, @datetime) --default is 7


Want to do the same conversion with datetime or smalldatetime?  Can't.

Msg 241, Level 16, State 1, Line 10
Conversion failed when converting date and/or time from character string.

The old data types can't handle that much precision.  Gotta dial it down for the old datetime and smalldatetime types.  How quaint.

declare @datetime varchar(50) = '01/01/2012 11:11:11.111'
select        convert(datetime, @datetime)
select        convert(smalldatetime, @datetime)
select        convert(datetime2(0), @datetime)
select        convert(datetime2(1), @datetime)
select        convert(datetime2(3), @datetime)


Note how the old data types are incapable of storing precision out to one one-thousandth of a second.

How about date ranges?

datetime: 1753-01-01 through 9999-12-31
smalldatetime: 1900-01-01 through 2079-06-06
datetime2: 0001-01-01 through 9999-12-31


Now the kicker. What's the cost of storing all that extra precision in datetime2? Nothing. In fact, you can get more precision than datetime and fewer bytes per row per field by specifying a precision value for columns declared as datetime2(n).

For example, datetime2(2) stores hundredths of a second - realistically the same usable precision as datetime, which rounds the third decimal place to increments of .000, .003 or .007. And datetime2(2) is two bytes smaller than datetime, making it ideal.

Don't need seconds, just hours and minutes? Stick with smalldatetime, 4 bytes, as opposed to datetime2(0) at 6 bytes.
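You can verify the storage sizes yourself with a minimal sketch like the one below (assuming SQL 2008 or later for sysdatetime()). If memory serves, DATALENGTH() reports the bytes of the value itself; the precision byte for a datetime2 column lives in the column metadata, not in each row.

```sql
-- Compare storage bytes across the date/time types.
declare @now datetime2(7) = sysdatetime()

select datalength(convert(smalldatetime, @now)) as [smalldatetime] -- 4 bytes
,      datalength(convert(datetime,      @now)) as [datetime]      -- 8 bytes
,      datalength(convert(datetime2(0),  @now)) as [datetime2(0)]  -- 6 bytes
,      datalength(convert(datetime2(2),  @now)) as [datetime2(2)]  -- 6 bytes
,      datalength(convert(datetime2(4),  @now)) as [datetime2(4)]  -- 7 bytes
,      datalength(convert(datetime2(7),  @now)) as [datetime2(7)]  -- 8 bytes
```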

Storage requirements 

smalldatetime:
4 bytes - precision to the minute (seconds are always :00)

datetime2(n):
6 bytes for precisions less than 3 - precision up to one hundredth of a second
7 bytes for precisions 3 and 4 - precision up to one ten thousandth of a second
8 bytes for precisions > 4 - precision up to one ten millionth of a second (within 100 nanoseconds)

datetime:
8 bytes - precision to roughly three milliseconds (values are rounded to increments of .000, .003 or .007 seconds)

Clearly, datetime2 is an upgrade in range of values, precision (no rounding!) and storage size over datetime.

And that's only if you need to store both date and time info. Since SQL 2008, we've also been able to store just a date in the date data type (3 bytes), and a discrete time of day in the time data type (3 to 5 bytes, depending on precision).
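A quick sketch of splitting one datetime2 value into those discrete types:

```sql
-- Pull the date and time portions out of a single datetime2 value.
declare @dt datetime2(7) = '2012-01-01 11:11:11.1111111'

select convert(date, @dt) as [just_the_date] -- 3 bytes
,      convert(time, @dt) as [just_the_time] -- 5 bytes at the default precision of 7
```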

Oh yeah, and even though datetime is not deprecated, this friendly yellow box might make you think so.

Note
Use the time, date, datetime2 and datetimeoffset data types for new work. These types align with the SQL Standard. They are more portable. The time, datetime2 and datetimeoffset types provide more seconds precision. datetimeoffset provides time zone support for globally deployed applications.

.NET developers?  datetime and datetime2(n) both map to System.DateTime.  No worries there.  More info here: http://msdn.microsoft.com/en-us/library/bb675168.aspx

12/03/2012

Management Studio Database Diagram Owners

If you're working as a developer without sysadmin privileges, and you're not the database owner, diagrams you create with Management Studio's underrated Database Diagrams tool will look like this


where the diagram is owned by the developer using the SQL login 'jdoe'.  With many diagrams created by multiple developers, this gets ugly, confusing and just nonsensical.  Sysadmins don't have this problem, which is why, like me, you may have used Database Diagrams for years without encountering this issue.

There is no way to change this or rename it from the Management Studio GUI, but a simple script can fix the problem.

Find the diagram you want to rename, and the new principal you want to be the "owner".

select * from dbo.sysdiagrams
select * from sys.database_principals

(abbreviated results shown below)

name principal_id diagram_id
Diagram_0 1 1

name principal_id type type_desc
dbo 1 S SQL_USER
jdoe 6 S SQL_USER

This script can be executed by the developer to change the owner of the diagram from jdoe to dbo.

  update dbo.sysdiagrams
  set    principal_id  = 1 --dbo
  where  principal_id  = 6 --jdoe
  and    diagram_id    = 1

And now the diagram isn't owned by one of your developers.
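To confirm, you can join the two catalogs from the earlier step and check that the diagram's owner now resolves to dbo:

```sql
-- Verify the new owner of each diagram.
select d.name, d.diagram_id, p.name as owner_name
from   dbo.sysdiagrams d
join   sys.database_principals p
on     p.principal_id = d.principal_id
```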



11/30/2012

PowerPivot for Excel "Not loaded. The Managed Add-in Loader failed to initialize."

Error when trying to enable the PowerPivot for Excel add-in for Office 2010 reads:
"Not loaded.  The Managed Add-in Loader failed to initialize."



If the installation of the PowerPivot for Excel add-in succeeded, the add-in should appear in the Excel Add-ins options panel.  If not, you might see the above error after clicking Manage -> "COM Add-ins" -> Go.

Unfortunately, the help provided in this social.msdn thread proved to be accurate.
http://social.msdn.microsoft.com/Forums/en/sqlkjpowerpivotforexcel/thread/3ec71eec-288b-4b2c-a404-a1740103e483

Don't waste your time like I did uninstalling the PowerPivot for Excel Add-in, repairing office, reinstalling the Office Shared Features or Office Tools... and multiple orders of the above.  Just uninstall the add-in, uninstall Office (altogether, it seems, is necessary), reinstall Office with the Office Shared Features all enabled, reinstall the Add-in.

It is silly, but that's the only way I got it to work.

10/14/2012

Database Design Contest at the 2012 AITP Region 3 Student Conference


Had a great Friday in Lafayette, LA on the campus of the University of Louisiana-Lafayette.  Thirty-one different teams submitted database designs based on my problem statement, which required an ERD and a sample data set.  The database design contest was sponsored by the Baton Rouge SQL Server User Group, a chapter of PASS.

When judging, I didn't see any identifying info besides a randomly-assigned number. I was happy to announce that the winner of the contest got a perfect grade on a 14-factor grading system, and that the top six places all got more than a 90% grade.  A big thanks to my Baton Rouge colleague Kristen Reeves, a former speaker at the BRSSUG, for helping me grade the entries.

Congratulations to all the contestants at the 2012 AITP Region 3 Student Conference Database Design contest!

10/11/2012

Performance Tuning Presentation from October 2012 BRSSUG Meeting

Thanks to the 15+ folks who attended the Baton Rouge SQL Server User Group meeting last night!  I hope everyone enjoyed "everyone gets a book" night!

I presented on SQL Server Performance tuning, which began with Michelle Ufford's article from GoDaddy on how she saw massive performance gains just by correcting columns to more appropriate data types.  Here's a link to that article: https://web.archive.org/web/20130629210916/http://inside.godaddy.com/scaling-database-data-types/

Then we went through database compression, indexing and execution plans, index INCLUDE structure, the missing indexes feature, how foreign keys can affect performance, isolation modes and MAXDOP.

I've uploaded all my sample scripts in a .zip file to the brssug.org website here for you to download.

Thanks for attending, see you November 14!

UPDATE 20141104: Link to a cached copy of Ufford's article.

9/26/2012

How BETWEEN is inclusive

This is an old and simple one, but an important subtlety that T-SQL developers must be aware of.

Using the BETWEEN syntax is easy enough in a WHERE clause, and it's pretty obvious for numbers.

use testing
go

create table decbetweentesting
(      id int identity(1,1) not null primary key
,      testdec decimal(19,5) not null
)
go
insert into decbetweentesting (testdec) values (.99), (1), (1.01), (2), (2.01)
go
select * from decbetweentesting where testdec between '1' and '2'

And the long form equivalent

select * from decbetweentesting where testdec >= '1' and testdec <= '2'


id testdec
2 1.00000
3 1.01000
4 2.00000


id testdec
2 1.00000
3 1.01000
4 2.00000


Easy, right?

So don't assume that same simple logic holds for dates.


create table datebetweentesting
(      id int identity(1,1) not null primary key
,      testdate datetime not null
)
go
insert into datebetweentesting (testdate) values ('12/29/2011 23:59:59'), ('12/30/2011 00:00:00'), ('12/30/2011 00:01:00'), ('12/31/2011 00:00:00'), ('12/31/2011 00:01:00'), ('1/1/2012 00:00:00'), ('1/1/2012 00:01:00')
go
select * from datebetweentesting where testdate between '12/30/2011' and '12/31/2011'
select * from datebetweentesting where testdate >= '12/30/2011' and testdate <= '12/31/2011'


id testdate
2 2011-12-30 00:00:00.000
3 2011-12-30 00:01:00.000
4 2011-12-31 00:00:00.000


id testdate
2 2011-12-30 00:00:00.000
3 2011-12-30 00:01:00.000
4 2011-12-31 00:00:00.000



This simple query, intended to get everything through the end of 12/31/2011, will IGNORE anything that happened on 12/31/2011 after midnight.  Almost certainly not what you want.

But, don't fall for this:


select * from datebetweentesting where testdate between '12/30/2011' and '1/1/2012'
select * from datebetweentesting where testdate >= '12/30/2011' and testdate <= '1/1/2012'

 id testdate
2 2011-12-30 00:00:00.000
3 2011-12-30 00:01:00.000
4 2011-12-31 00:00:00.000
5 2011-12-31 00:01:00.000
6 2012-01-01 00:00:00.000

id testdate
2 2011-12-30 00:00:00.000
3 2011-12-30 00:01:00.000
4 2011-12-31 00:00:00.000
5 2011-12-31 00:01:00.000
6 2012-01-01 00:00:00.000


That version returns data from the new year.

In other words, to catch the end of 2011 and nothing in 2012, don't use BETWEEN.


select * from datebetweentesting where testdate >= '12/30/2011' and testdate < '1/1/2012'



id testdate
2 2011-12-30 00:00:00.000
3 2011-12-30 00:01:00.000
4 2011-12-31 00:00:00.000
5 2011-12-31 00:01:00.000


In fact, I rarely use BETWEEN at all, because of this common misperception among developers about whether its boundaries are inclusive.
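Instead, I reach for the half-open range pattern: greater-than-or-equal-to the start of the period, strictly less-than the start of the NEXT period.  A sketch against the table above:

```sql
-- Half-open range: works correctly at any time precision,
-- so it won't break if the column is later changed to datetime2(7).
declare @start datetime = '12/30/2011'
declare @end   datetime = dateadd(day, 2, @start) -- first instant of 1/1/2012

select *
from   datebetweentesting
where  testdate >= @start
and    testdate <  @end
```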


9/08/2012

SQL Server Best Practices for DMV's - Houston Tech Fest 2012

It is a pleasure meeting everyone today at Houston Tech Fest!  The new facility is miles away - literally and figuratively - from the previous Houston Tech Fest facility.

Here's a link below for downloading the presentation files and sample scripts from my presentation on "SQL Server Best Practices for DMV's" at Houston Tech Fest 2012 at the Reliant Center.

Download the .zip file here.

8/20/2012

SQL Agent Alerts - Severity Condition is NOT "greater than or equal to"

I ran into some bad training that had convinced a client sysadmin that SQL Agent Alerts behave as "greater than or equal to."

As in, if I want to catch and notify an operator of all high-severity errors, 17 through 25, all I need to do is set up an Alert for Severity 17.  This is false.

Here's a quick proof:

USE [msdb]
GO

--This query will not run in a batch for obvious reasons.  The high-severity error messages will kill the batch.
--Run each statement manually and observe the results.

EXEC msdb.dbo.sp_add_alert @name=N'020',
              @message_id=0,
              @severity=20,
              @enabled=1,
              @delay_between_responses=0,
              @include_event_description_in=1,
              @job_id=N'00000000-0000-0000-0000-000000000000'
GO
--This query will return date and time 0 | 0.  The new alert exists.
select name, last_occurrence_date, last_occurrence_time from msdb.dbo.sysalerts where name = '020'
GO
RAISERROR (N'testingonly.  This is message %s %d.', -- Message text.
           21, -- Severity,
           1, -- State,
           N'number', -- First argument.
           5) WITH LOG; -- Second argument.
GO
--This query will still return date and time 0 | 0.  Alert for Severity 20 has NOT been triggered by a Severity 21 error.
select name, last_occurrence_date, last_occurrence_time from msdb.dbo.sysalerts where name = '020'
GO
RAISERROR (N'testingonly.  This is message %s %d.', -- Message text.
           20, -- Severity,
           1, -- State,
           N'number', -- First argument.
           5) WITH LOG; -- Second argument.
GO
--This query will return a valid date and time.  Alert for Severity 20 has been triggered by a Severity 20 error.
select name, last_occurrence_date, last_occurrence_time from msdb.dbo.sysalerts where name = '020'
GO


As you run the above script statement-by-statement (NOT in a batch), you'll see the occurrence data update only for an error whose severity EQUALS the defined alert severity.  In this case, the Severity 21 error did NOT trigger the Severity 20 alert.

Here's a basic script from my toolbox to set up Alerts for all significant error severities, to notify your operator via email with details.  In this script, the operator is called "DBAs", and ideally would be a distribution group for all active members of the Database Administrator role in your IT department.

Make sure that Database Mail is set up, that SQL Agent is configured to use the appropriate Database Mail profile, and that the failsafe operator is enabled to email your DBA team.
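If the "DBAs" operator doesn't exist yet, here's a sketch of creating it and designating it as the failsafe.  The email address is a placeholder, and sp_MSsetalertinfo is the undocumented procedure the SSMS GUI calls for the failsafe setting, so treat that part as an assumption and verify on your version.

```sql
USE [msdb]
GO
-- Create the operator; point the address at your DBA distribution group.
EXEC msdb.dbo.sp_add_operator @name = N'DBAs',
              @enabled = 1,
              @email_address = N'dbas@yourcompany.com'
GO
-- Designate the failsafe operator, notified by email (method 1).
EXEC master.dbo.sp_MSsetalertinfo @failsafeoperator = N'DBAs',
              @notificationmethod = 1
GO
```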

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 17',
              @message_id=0,
              @severity=17,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 17', @operator_name=N'DBAs', @notification_method = 1
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 18',
              @message_id=0,
              @severity=18,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 18', @operator_name=N'DBAs', @notification_method = 1
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 19',
              @message_id=0,
              @severity=19,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 19', @operator_name=N'DBAs', @notification_method = 1
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 20',
              @message_id=0,
              @severity=20,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 20', @operator_name=N'DBAs', @notification_method = 1
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 21',
              @message_id=0,
              @severity=21,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 21', @operator_name=N'DBAs', @notification_method = 1
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 22',
              @message_id=0,
              @severity=22,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 22', @operator_name=N'DBAs', @notification_method = 1
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 23',
              @message_id=0,
              @severity=23,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 23', @operator_name=N'DBAs', @notification_method = 1
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 24',
              @message_id=0,
              @severity=24,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 24', @operator_name=N'DBAs', @notification_method = 1
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'errorseverity 25',
              @message_id=0,
              @severity=25,
              @enabled=1,
              @delay_between_responses=600,
              @include_event_description_in=1
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'errorseverity 25', @operator_name=N'DBAs', @notification_method = 1
GO


More info:
Error severity info: http://msdn.microsoft.com/en-us/library/ms164086.aspx

8/16/2012

Actual Emails: The Customer Doesn't Want Full Recovery Mode


Got this question in an email.  I'll paraphrase.

William 
Hoping you can advise me on best way to handle this. 
My client doesn’t want to use SQL backups. Instead, they are using VM and a third party backup software by choice.

Without a DBA on staff, they would prefer to deal with SQL as little as possible.

They do not plan on doing any transaction logs and are not concerned with restore from point of failure.
... 
Goal would be to not have transaction logs or somehow keep those logs small.   
Colleague

Here's how I replied.

So, the first question to ask a client who doesn't want transaction log backups is, "why?" 
The transaction log is not something to be afraid of. Backing up the transaction log is a simple process that also clears out the log. Back up the log at regular intervals (somewhere between 15 and 60 minutes is typical). Then you get all the advantages of point-in-time recovery: if you have a 5:01pm disaster, you can restore all the way to 5:00pm, for example. Plus, full recovery mode is required for most SQL-based HA/DR scenarios.
Using third party backup software is fine, but they must have purchased the SQL plugin.  This is usually an extra-cost add-on for whatever backup license they bought.  You CANNOT back up an .mdf file with a file system backup and expect that to work. SQL MUST BACK UP SQL. 
That SQL plugin does exactly that for the third party software - sends commands to SQL to perform a backup. It's fine if they don't want to use sql maintenance plans or jobs to perform the backups, but they must use the sql plugin for any 3rd party enterprise backup software.
With a database in full recovery mode, you must perform full and transaction log backups, or the log will grow.
Typically, you do a nightly full backup and hourly tran log backups. The reason people get scared/confused/frustrated with transaction logs is that they don't realize you are supposed to back up the log (which empties it) regularly - at intervals of an hour or less. If they want the log to stay small, back it up every 15 minutes. BackupExec can do that. 
If they are really stubborn about not wanting to use all the DR capability that they paid for, putting the database in simple recovery mode is fine. Then only full SQL backups are needed. In simple recovery mode, transactions aren't retained in the log after they commit: a transaction commits, there's a checkpoint, and then it can be cleared from the log. A simple recovery mode database will have its transaction log grow to the size of the biggest concurrent set of transactions at any given time. 
Never truncate a log file. Perform a log file backup, which will empty the file. The file may still be large even though it is empty; in that case, if you absolutely need the space back, you can shrink the log file. But never shrink a data or log file on a schedule - it creates fragmentation, which hurts performance. 
Let me know if you have any questions! 
-William
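A postscript to make that last point concrete: a one-off remediation (not a scheduled job) might look like the sketch below.  The database name, backup path, logical log file name and 1024 MB target are all placeholders - get the real logical name from sys.database_files first.

```sql
-- Back up the log (which empties it), then reclaim disk only if you truly need it.
BACKUP LOG [YourDatabase]
TO DISK = N'X:\Backup\YourDatabase_log.trn'
GO
USE [YourDatabase]
GO
-- Find the logical name of the log file.
select name, type_desc from sys.database_files
GO
-- One-off shrink to a 1024 MB target. Never schedule this.
DBCC SHRINKFILE (N'YourDatabase_log', 1024)
GO
```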