pointers, solutions and scripts for the SQL DBA
not intended to replace msdn, common sense or oxford commas

Monday, December 16, 2013

Case-Sensitive Updates in a Case-Insensitive World

Assuming you're working with a case-insensitive ("CI" in the collation name) column, this UPDATE statement:

UPDATE tablefoo
set foo = 'Z'
where foo = 'z'

will update all rows where foo = 'z' or 'Z'. SQL Server is not case-sensitive by default, whether at the server, database, or column level. Each of those levels can have its own collation, and there are rules of precedence that determine which one is enforced.

For example, in the US, this is the default collation: SQL_Latin1_General_CP1_CI_AS, a case-insensitive, accent-sensitive collation.
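Before writing a collation-sensitive WHERE clause, it can help to confirm which collation is actually in effect at each level. A quick sketch (dbo.tablefoo is the example table from this post):

```sql
-- a sketch: check the effective collation at the server, database, and column level
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS DatabaseCollation;

SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.tablefoo');
```

Note that collation_name comes back NULL for non-character columns, which have no collation.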

So, given this data:
tablename: tablefoo
foo
z
Z
Z
z
Z
z
How can we update just the three rows where foo = 'z' but not the three rows where foo = 'Z'? Without ALTER'ing any objects to change their collation, obviously...

UPDATE tablefoo
set foo = 'Z'
where foo COLLATE SQL_Latin1_General_CP1_CS_AS = 'z'

And then, to make our UPDATE statement safe to re-run, we can add one more predicate:
UPDATE tablefoo
set foo = 'Z'
where foo COLLATE SQL_Latin1_General_CP1_CS_AS = 'z'
and foo COLLATE SQL_Latin1_General_CP1_CS_AS <> 'Z'
More reading on collations:
http://technet.microsoft.com/en-us/library/ms180175.aspx


Thursday, December 12, 2013

It's Dangerous to ISNUMERIC, Take This Try_Convert

There's an issue with some characters in T-SQL when validating numeric values out of raw varchar fields, and it can be easily solved with a function new to SQL Server 2012.

Consider this sample setup, where we are stuck importing values from a varchar data source into an integer data destination, a common task in heterogeneous ETL activities:
create table tablefoo
(id int identity(1,1) not null primary key,
foo varchar(20) not null)
go
--add some valid integers
insert into tablefoo (foo) values
(123),
(456)

--add some dollar values
insert into tablefoo (foo) values
('$123'),
('$456')

--add some alphanumeric strings
insert into tablefoo (foo) values
('abc123'),
('def456')

--add scientific notation
insert into tablefoo (foo) values
('1e234')
go
select * from tablefoo 

/*
id foo
1 123
2 456
3 $123
4 $456
5 abc123
6 def456
7 1e234
*/
Let's try to validate only proper integer values in tablefoo.foo for insertion into a bigint column.
--this fails! why?
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
The error message is:
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to bigint.
Let's dig into the data case-by-case...
--this succeeds as designed, returning NULL for the alphanumeric value.
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = 'abc123'

--This fails because the dollar sign passes the ISNUMERIC check 
-- BUT is not valid for an int field 
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = '$123'

--This also fails because 1e234 is valid scientific notation 
-- BUT is not valid for an int field.
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = '1e234'
Our culprit was the curious behavior of certain characters (in this case, "$" and "e") which are valid to the ISNUMERIC built-in function, commonly used to detect numeric strings.

How to clean this up?
--this returns only integer values properly for the whole table
-- and is much prettier.
select 
 c.foo
, ISNUMERIC(c.foo) 
, TRY_CONVERT(bigint, c.foo )
from tablefoo c
Note the above values of ISNUMERIC alongside the error-free conversion by the TRY_CONVERT function. So not only is TRY_CONVERT downright sexy with its efficiency and compactness, it can help you avoid errors commonly encountered when using ISNUMERIC with characters like the dollar sign ($), decimal point (.), scientific notation (e), and comma (,).
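For the actual import, the same function works as a filter. A sketch, assuming a hypothetical destination table:

```sql
-- a sketch: keep only the rows that convert cleanly to bigint;
-- TRY_CONVERT returns NULL for '$123', 'abc123', '1e234', etc.
SELECT TRY_CONVERT(bigint, c.foo) AS foo
INTO #cleanfoo   -- hypothetical destination
FROM tablefoo c
WHERE TRY_CONVERT(bigint, c.foo) IS NOT NULL;
```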


Tuesday, December 03, 2013

Upcoming Special Events for the Baton Rouge SQL Server User Groups

I'm happy to announce a big slate of upcoming events for Baton Rouge User Groups and especially the SQL Server User Group in the next three months:

December 11: Holiday Party at On the Border - free for user group attendees, door prizes from sponsors! http://www.brssug.org/group-news/december13batonrougeusergroupsholidayparty


January 8 '14: Two speakers from Microsoft will present to the User Groups on all the new things happening with Azure. This is going to be big! http://www.brssug.org/group-news/january14batonrougeusergroupsmeeting


February 12 '14: The day of the user groups meeting, we'll have two national Business Intelligence speakers presenting a paid all-day training at the Louisiana Tech Park, then hanging around to present at the SQL Server User Group meeting that night!


Tim Mitchell’s Real World SSIS: A Survival Guide:https://www.eventbrite.com/e/real-world-ssis-a-survival-guide-tickets-9555728441


Bill Pearson's Practical Self-Service BI with PowerPivot for Excel:http://www.eventbrite.com/e/practical-self-service-bi-with-powerpivot-for-excel-tickets-9450327183


Please visit our website for complete details: www.brssug.org
Info on the Tech Park Location is here:www.brssug.org/where

Wednesday, November 20, 2013

Speaker Idol Competition at the Hammond .NET User Group

Here's the link to download the sample scripts and meme-tastic slidedeck from my presentation "SQL Server Indexes in Your Ill-Begotten Databases".

Thanks to everyone who attended, presented, judged and laughed!

Tuesday, November 05, 2013

Potential Backup Software Issues with Read-Only Databases in FULL Recovery

If you decide to park a database in read-only mode for a long period of time, go ahead and alter it into SIMPLE recovery mode too.
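A minimal sketch of that change, with "foo" as a placeholder database name (change the recovery model first, then flip to read-only):

```sql
-- park a database read-only for the long haul: SIMPLE recovery first,
-- so the log chain stops expecting transaction log backups
ALTER DATABASE foo SET RECOVERY SIMPLE;
ALTER DATABASE foo SET READ_ONLY WITH NO_WAIT;
```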

The message returned by SQL when you try to take a transaction log backup is:
 "BACKUP LOG cannot modify the database because the database is read-only. The backup will continue, although subsequent backups will duplicate the work of this backup."
 which is fairly self-explanatory to a human.

But to an automated system, such as a third-party enterprise backup platform, this message is not always interpreted correctly.

 eVault, for example, does not detect the read-only condition or the message from SQL and instead returns this generic error message:

 "A gap was detected in log files for database foo. You will need to re-run a Full backup."

That message is misleading and incorrect, and it pointed the troubleshooting efforts of backup administrators in the wrong direction.

Sunday, November 03, 2013

SQL Saturday #255 Dallas 2013 - SQL Server Permissions and Security Principals

500+ IT pros at UT-Arlington this weekend for SQL Saturday #255 Dallas!

Notes from today:

On a more serious note:

Someone at #SQLSat255 asked one of my female developer coworkers today if she was a "booth babe". Someone else assumed she was in "sales", not technical. Damnit, guys. Totally not cool. We can do much better.

Friday, November 01, 2013

PowerShell: Delete Unneeded Rows/Columns In Excel

I had a project to automate the import of a large number of Excel files via SSIS into SQL 2012. Many of the files had different headers, but they all had a column header row that preceded the data.

On some of the files, the header row started on row 2, 3, 4, or 5, and on some others, the first two or three columns were entirely blank. What a pain, right?

One saving advantage that I identified in this proprietary set of Excel files: each of the datasets begins with a column headed "Identifier", so that is the cell that I want to end up in cell [A1]. That was definitely key to making this work and to easily identifying where my header row began.

I automated cleaning up these spreadsheets with the following PowerShell script:
Param([string]$wbname)

Try
{
 $err1 = 0;
 #testing only #$wbname = "E:\foo.xls"

 $xl = New-Object -comobject Excel.Application
 $xl.DisplayAlerts = $False

 $wb = $xl.Workbooks.open($wbname)
 $sheet = $wb.Sheets.Item(1)

#delete leading blank columns: while the first three cells of column A are all blank, remove the column
while( ($sheet.Cells.Item(1,1).Text -eq "") -and
       ($sheet.Cells.Item(2,1).Text -eq "") -and
       ($sheet.Cells.Item(3,1).Text -eq "") )
{
    [void]$sheet.Cells.Item(1,1).EntireColumn.Delete()
}

#delete leading rows: while neither of the first two cells in row 1 is "Identifier", remove the row
while( ($sheet.Cells.Item(1,1).Value() -NotLike "Identifier") -and
       ($sheet.Cells.Item(1,2).Value() -NotLike "Identifier") )
{
    [void]$sheet.Cells.Item(1,1).EntireRow.Delete()
}

#cleanup
$wb.Close($true)
$xl.Quit()
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($xl)
Remove-Variable xl

}
Catch
{
    $err1 = 1;
}
Finally
{
    if($err1 -eq 1)
    {
        #Write-Output "99";
 #Write-Output $_.Exception.Message;

 $stream = [System.IO.StreamWriter] "e:\error.txt"
 $stream.WriteLine($Error[0].InvocationInfo.PositionMessage)
 $stream.WriteLine($wbname)
 $stream.close()


 [Environment]::Exit("99");
    }
    Else
    {
        #Write-Output "0";
        [Environment]::Exit("0");
    }

}
The two while loops are where the customization lives for my particular set of problematic Excel spreadsheets, but the general structure of the code is where you can adapt it for yourself.

The last four lines are cleanup - while the last might not be very important, the second-to-last line is pretty important to make sure the stuck COM object goes away. Explained much better here: http://technet.microsoft.com/en-us/library/ff730962.aspx

In order to put this in sequence in my SSIS package, I used the Execute Process Task. Screenshots below:

Unfortunately, the client server I was working on here only had PowerShell 1.0 installed, but from what I understand, this should also apply to PowerShell 2.0 and 3.0. I'm open to anyone who has any insight there.

The argument passed to the PowerShell file includes the full path to the Excel file. (I removed the paths from this actual client implementation.)

I'm open to feedback on this, and I wonder if anyone else has encountered a similar issue. Obviously, the best solution would have been to modify the process that generates these maddeningly inconsistent Excel files.

Tuesday, October 22, 2013

SQL Server Configuration Manager (SQLServerManager11.msc) Hidden after Windows 8.1 Upgrade

You may have noticed, like I did, that after upgrading-in-place from Windows 8 Enterprise to Windows 8.1 Enterprise, my local installation of SQL 2012 SP1 no longer listed SQL Server Configuration Manager under the "SQL Server 2012" list in the Start applications screen.

A picture is worth a thousand words, so to confirm I posted screenshots of my Start Menu and Search menu.

Even searching for "SQL Server Configuration Manager" in the start menu returned no local results.

Note that in the screenshots to the left of the Apps By Category start menu, there is no familiar SQL Server Configuration Manager icon listed with the SQL 2012 Category. Searching for "Configuration" returns other local configuration programs, such as MDS Configuration Manager and SSRS Configuration Manager, but not SQL Server Configuration Manager.

I found SQL Server Configuration Manager right where it belongs on disk, "C:\Windows\SysWOW64\SQLServerManager11.msc", the same location as for previous versions, but it has vanished from the local program listing.

The workaround is very basic: when you find it in Explorer, right-click and "Pin to Start". Then you will be able to search for it as "SQLServerManager11", and it will appear in the Desktop category of the Start Menu.

UPDATE: MSDN now includes this text to remedy this exact problem:
To access SQL Server Configuration Manager Using Windows 8
Because SQL Server Configuration Manager is a snap-in for the Microsoft Management Console program and not a stand-alone program, SQL Server Configuration Manager does not appear as an application when running Windows 8. To open SQL Server Configuration Manager, in the Search charm, under Apps, type SQLServerManager12.msc for SQL Server 2014 (SQLServerManager11.msc for SQL Server 2012 or SQLServerManager10.msc for SQL Server 2008), and then press Enter.

You can find Configuration Manager versions in these standard locations.
More information here: https://msdn.microsoft.com/en-us/library/ms174212.aspx
SQL Server 2016: C:\Windows\SysWOW64\SQLServerManager13.msc
SQL Server 2014: C:\Windows\SysWOW64\SQLServerManager12.msc
SQL Server 2012: C:\Windows\SysWOW64\SQLServerManager11.msc
SQL Server 2008: C:\Windows\SysWOW64\SQLServerManager10.msc

Thursday, October 17, 2013

"SQL Server: Bolts to Buzzwords" presentation for the Hammond .NET User Group at Southeastern Louisiana University

One of my favorite groups to speak for (aside from the Baton Rouge SQL Server User Group) is the Hammond .NET User Group at Southeastern Louisiana University in Hammond, LA. A mostly undergraduate crowd of enthusiastic future developers (and perhaps some developers-turned-DBAs, like myself) make for a fun audience that asks great questions.

Here's my slidedeck for an introductory presentation on databases and SQL Server, perfect for a college crowd, most of whom hadn't yet completed a capstone project requiring a database solution. In fact, the first line of the slidedeck details the highly important lesson of how to pronounce "SQL" - as in "sequel", NOT "ess kew el".

Here's the download link for the PowerPoint slidedeck

Saturday, September 28, 2013

Houston Tech Fest 2013 - SQL Admin Best Practices with DMV's

Here is the downloadable file with all my sample scripts and slidedeck from my SQL Admin Best Practices with DMV's presentation from the 2000+ person Houston Tech Fest 2013 at the Reliant Center in Houston!

Thanks for attending! If you attended my talk, shoot me some feedback here: http://speakerrate.com/william.assaf

Download scripts and slidedeck

Wednesday, September 04, 2013

Replicated Stored Procedure Execution

One feature that many of my clients are unaware of is that the batched execution of a stored procedure can be replicated under certain circumstances. This can result in a significant performance improvement for stored procedures on replicated databases.

This MSDN article does a good job of breaking down the basics, but here's a demonstrable example you can use.

The rules for batching up the stored procedure executions are fairly strict, and require the SERIALIZABLE isolation level for the stored procedure's transaction. The goal is to see a single 100-row transaction, not 100 single-row transactions. The latter, inefficient behavior is the default, and it's what we want to avoid. We'll prove it below.

Step one: Execute the below script block in an existing replicated database. (Not production, obviously.) Or, set up a new simple transactional replication between two databases, then run the below script block.
--Create testing table
CREATE TABLE [dbo].[TestArticle](
 [id] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
 [text1] [nvarchar](100) NULL,
 [text2] [nvarchar](100) NULL,
 [text3] [nvarchar](150) NULL,
 [text4] [nvarchar](150) NULL,
 [text5] [nvarchar](200) NULL,
 [int1] [int] NULL,
 [int2] [int] NULL,
 [int3] [int] NULL,
 [int4] [int] NULL,
 [int5] [int] NULL,
 CONSTRAINT [PK_TestArticle] PRIMARY KEY CLUSTERED ( [id] ASC )
)
--Populate with random data
declare @x int
select @x = 1
WHILE @x < 100
BEGIN
 --Insert filler data
 INSERT INTO dbo.TestArticle
 (text1,text2,text3,text4,text5,int1,int2,int3,int4,int5)
 SELECT
     replicate(char(round(rand()*100,0)),round(rand()*100,0))
 , replicate(char(round(rand()*200,0)),round(rand()*100,0))
 , replicate(char(round(rand()*300,0)),round(rand()*100,0))
 , replicate(char(round(rand()*400,0)),round(rand()*100,0))
 , replicate(char(round(rand()*500,0)),round(rand()*100,0))
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 select @x = @x + 1
END
go
CREATE procedure [dbo].[addheaprows] 
as
 --Insert filler data
 INSERT INTO dbo.testarticle
 (text1,text2,text3,text4,text5,int1,int2,int3,int4,int5)
 SELECT top 100
     replicate(char(round(rand()*100,0)),round(rand()*100,0))
 , replicate(char(round(rand()*200,0)),round(rand()*100,0))
 , replicate(char(round(rand()*300,0)),round(rand()*100,0))
 , replicate(char(round(rand()*400,0)),round(rand()*100,0))
 , replicate(char(round(rand()*500,0)),round(rand()*100,0))
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 , round(rand()*10000,0)
 FROM dbo.testarticle
GO

Step two: Add both the table and the stored procedure to a new publication as articles. The stored procedure must be added as an article with Replicate option = 'Execution in a serialized transaction of the SP'.
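If you'd rather script the article than click through the dialog, the wizard's "Execution in a serialized transaction of the SP" option corresponds to the 'serializable proc exec' article type. A sketch (publication name is a placeholder):

```sql
-- a sketch: add the stored procedure as an article whose execution
-- replicates as a serialized transaction
EXEC sp_addarticle
    @publication   = N'YourPublication',   -- placeholder publication name
    @article       = N'addheaprows',
    @source_owner  = N'dbo',
    @source_object = N'addheaprows',
    @type          = N'serializable proc exec';
```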

Step three: Set up a default "TSQL" trace on the Subscriber instance.

Step four: Execute the below on the Publisher instance so that you can see the replicated stored procedure execution in action.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN testrepl
exec dbo.addheaprows
COMMIT TRAN testrepl

--In the trace, you should see one trace event for SQL:BatchStarting, textdata =

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; EXEC "dbo"."addheaprows"  ;SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
 
Step five: Now let's look at what happens normally without any modifications. Start up a default "TSQL" trace on the Subscriber instance and execute the below on the Publisher instance.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN testrepl
exec dbo.addheaprows
COMMIT TRAN testrepl


--In the trace, you will see 100 trace events for RPC:Starting, textdata = 


exec [sp_MSins_dbotestarticle] '8C39EB22-71BF-488A-8066-00C4AFABA497', .... 

exec [sp_MSins_dbotestarticle] .... 

exec [sp_MSins_dbotestarticle] .... 

exec [sp_MSins_dbotestarticle] .... 

....
This is the default behavior, and it's a bit shocking to see for the first time.

Remember our rules about the SERIALIZABLE isolation level - this approach isn't necessarily appropriate for all environments. The change could lead to an increase in locking during the execution of your procedure, so educate yourself on heightened isolation levels and why they are important.

Tuesday, September 03, 2013

Actual Emails: TL;DR: Stop using varchar(max)

Wrote this email recently to a crew of developers who were shooting themselves in the foot with a database rich in varchar(max) data types.

Hey folks-
TL;DR: Stop using varchar(max). We’re not storing books.
We need to review and avoid the varchar(max) data type in our tables. Here’s a short treatise as to why.
In SQL Server, varchar(max) is intended to replace the old text data type, which was different from varchar(n) because it was designed to store massive amounts of data. Massive meaning greater than 8,000 characters, and all the way up to 2 GB worth of data. That's what the varchar(max), varbinary(max), and nvarchar(max) data types are optimized for: HUGE blocks of data in a single cell. We should only use them if we're actually intending to store massive text and use the full-text indexing engine (a completely separate and specific topic for text blocks).
This is an oversimplification, but varchar(max) is designed to store data differently, specifically for large text blocks. It appears to behave the same as a varchar(n) field, and that's deceptive when we are throwing only 100-200 characters into each row's field.
The big drawbacks biting us right now about varchar(max) have to do with indexing, and this is regardless of how much data is actually in a varchar(max) field. A varchar(max) column can’t be the key of a nonclustered index, even if it never stores more than 8000 characters, and can’t have ONLINE index maintenance performed.  As a result, it is generally a giant pain for indexing, a pain you only want to put up with if you absolutely have to.
Furthermore, we’re doing ourselves a disservice for performance, straight up. Unless you’re storing books, (max) hurts performance. Check out this blog post: http://rusanu.com/2010/03/22/performance-comparison-of-varcharmax-vs-varcharn/ 
In short, varchar(max) is burdensome overkill for our datasets.
So here’s the solution… Change varchar(max) to varchar(n), where n is an generous but appropriate number for that column’s data. If Excel creates varchar(max) columns for us when performing a data import wizard, change them to varchar(8000), which is the highest number you can assign to a varchar field.  Or better yet, once the data is in SQL, use this simple syntax to find out the max length of a column and then pad it.
For example: select MAX(LEN([yourcolumn])) from yourtable 
Problem is, our SSIS packages are all very picky about data types and will break if we just change them. So, after making these table changes, you'll need to open your SSIS package, open the data flow destination or other affected object, hit OK to apply the new metadata, then save and redeploy. No other changes necessary.
This all came up because we have data quality issues with the fields Foo and Bar. Both of those columns are varchar(max). I’m dumping the varchar(max) data into temp tables with varchar(200) to get the queries to return in a reasonable amount of time. 
Let me know if you have any questions!
 William 

I like to use the word treatise to prepare my audience for verbosity.
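The right-sizing steps from the email can be sketched as follows (table and column names are hypothetical):

```sql
-- find the longest value actually stored in the column
SELECT MAX(LEN(yourcolumn)) FROM dbo.yourtable;

-- then right-size the column, padding generously above the observed max;
-- repeat the column's existing NULL/NOT NULL setting so it isn't changed
ALTER TABLE dbo.yourtable ALTER COLUMN yourcolumn varchar(200) NOT NULL;
```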

Tuesday, August 20, 2013

Pointers For SQL Saturday Attendance

I recently got an email from another SQL Saturday organizer asking for "pointers for getting more attendees" to their upcoming event. Having just topped 750 preregistered and 560 in the door for SQL Saturday Baton Rouge 2013, I was honored to be asked and to reply:
Good luck on your upcoming event!
Push your fliers/announcements out to local universities and technical colleges; those are always a big source of attendees for us. Introduce yourself to faculty and department heads. We even had a master's degree program at LSU make attendance at our SQL Saturday a requirement of the program. We were able to advertise the event in junior- and senior-level computer science and Business Intelligence classes.
Contact your local TV and radio shows. I have made appearances on local morning "chat" TV and radio shows, something that really helped get the "SQL Saturday" brand out there. Also look at the local calendars of business reports, chambers of commerce, social magazines, and local business magazines. Proactively contact your local business/IT incubators, technology parks, training providers, recruiting companies to get the word out (and to sponsor, if they are not already!) Those kind of companies usually pride themselves on being "in tune" with the local IT industry. 
Ask your speakers to blog about the event, especially if they are locals. Contact your local Microsoft PFE's (Premier Field Engineers) - they have local clients who will probably be interested and they can help you recruit attendees. 
We actually don't have a lot of luck getting numbers of people on Facebook or G+ or LinkedIn to "join" or "I'm Attending!" an event listing, but wouldn't hurt to make sure you have events listed all over social networks. 
Hit all the local user groups, not just the SQL group but also the SharePoint and .Net groups and get their mailing lists aware of your event. 
Finally, be prepared for a lot of walk-up traffic. We have many folks walk in without pre-registering each year. Road signs will pull people in, so make sure you get your signage up around major nearby thoroughfares early on Saturday morning. We print out 100+ blank name badges and raffle tickets to handle all the people who don't print their SpeedPASS or don't register at all each year.
Whew, is that enough? Ha! Let me know if you have any questions, and good luck with your event!

Friday, August 16, 2013

Cannot find NT Service\MSSQL$SQL2012?

If you cannot find the virtual account used by the default settings installation of your SQL 2012 instance (in this case running on Windows 7), try searching for:
nt service\mssql

You must have the "Built-in security principals" object type selected, and you must be searching on your local machine name (not in the domain).


Frustratingly, searching for the virtual account does not work for these strings:
  • MSSQL$SQL2012
  • MSSQL
  • nt service
  • nt service\
  • NT
Which is misleading...



Thursday, August 15, 2013

Distribution setup SQL Server Agent error: "RegCreateKeyEx() returned error 5, 'Access is denied.'"

In the Configure Distribution Wizard, the step "Configuring SQL Server Agent to start automatically" errors with the following text:
TITLE: Configure Distribution Wizard
------------------------------
An error occurred configuring SQL Server Agent.
------------------------------
ADDITIONAL INFORMATION:
RegCreateKeyEx() returned error 5, 'Access is denied.' (Microsoft SQL Server, Error: 22002)


This is a very minor error, and not difficult to work around at all. The wizard is attempting to change the SQL Server Agent service "Start Mode" to Automatic. You can do this via the SQL Server Configuration Manager instead.

In the Sysinternals Process Monitor, you may see: Operation: RegCreateKey Result: ACCESS DENIED Path: "HKLM\System\CurrentControlSet\Services\SQLAgent$SQL2012"

If you encounter this error, select "No" in the "SQL Server Agent Start" page in the Configure Distribution Wizard (as shown below), and then set your agent service to Automatic Start Mode via the SQL Server Configuration Manager.

The third step of the wizard, which failed before, will simply not run.

Why the failure actually occurs, I never did figure out, and I'm open to feedback, but this seems like a vestigial step in a wizard that otherwise has no negative impact. Running SSMS with "Run as Administrator" does not appear to fix this error either. I'd love to know why this error happens in the first place.

Tuesday, August 13, 2013

Baton Rouge Startup Weekend

I'm spreading the word about an event this weekend called Startup Weekend. It's an event for developers, designers, and entrepreneurs to come together and work on the ideas they have. Startup Weekend is a lot of crazy ideas, a little bit hackathon, and a little bit business venture incubation and evaluation. This is the third such event in Baton Rouge.

It's a great release and experience for "professional software developers" to spend the weekend as "wild west programmers", kicking out validation-less UIs, radical interfaces, and apps with only the fun functionality done, in contrast to their day jobs of tedious construction and meticulous polishing. And the devs get to pick which project they want to be a part of and join that team for the weekend, which may not always be the case at work. It's also a great opportunity for team leads and senior devs to see their staff and junior devs work in a free and fun environment.

The deal is, it's a Friday-evening-to-Sunday-evening event. Friday is pitch night and team building. Saturday is the solidify, design, evaluate, and git-er-done day, with experienced investors and technical coaches on hand if a team isn't itself technical. Sunday is final changes and preparation for the presentation at the end of the event.

Startup Weekend is worldwide, with a goal to create an environment where passionate people can come together to get things done; to learn, network, bridge the gap between trades, expose potential, and see actual results. Startup Weekend holds 125+ events in 170+ cities around the world throughout the year. Our event will be held on August 16-18, 2013, and we have confirmed a venue at Bon Carre / Louisiana Technology Park. Our participant count is expected to be around 75, but last-minute registrations could raise that figure closer to 100.

Clayton White - South Coast Angel Fund
Manuel Valencia - Vice President of Strategy @ Excelerant
Jennifer Fowler - LSU Student Incubator Director @ Louisiana Business & Technology Center
Stephen St.Cyr - Owner @ Vivid Ink Graphics
John Edel - Managing Member @ Edel Patents LLC
Jeremy Switzer - Head of Development @ LocalMed, LLC
Mike Melancon - Software Architect @ Ultix Technologies, Inc.
Chris Sutherland - Software Architect @ VoterVoice LLC

If you have any questions or would like to discuss the opportunity further in a technical sense you can contact me, if you or other members of the company want non-technical information you can reach out to Wendy Overton, the main organizer at wendy@startupweekend.org. And of course the website: http://batonrouge.startupweekend.org.

This info was passed along to me by Phillip Jackson, a fellow tech community organizer in Baton Rouge.

Sunday, August 04, 2013

Recap from SQL Saturday Baton Rouge 2013

As the head of the planning committee for SQL Saturday Baton Rouge this year, I had the distinct pleasure of welcoming several hundred colleagues from around the world to Baton Rouge on August 3 for a day of free training, networking, professional development and fun.

The day-ending raffle in the Auditorium of the LSU Business Education Complex (photo credit: Cody Arnould)
from L-R: Mike Huguet, Justin Yesso, Seth Valdetero, Dave Baxter, William Assaf, James Coolman
According to the official SQLSaturday.com website admin page, we had 750 people register for the event in total, a staggering and exciting number for us to prepare for.


But as any SQL Saturday organizer knows, the conversion from "planning to attend" registrations to on-site attendees is not 100%. That's a topic for a future blog post, but based on my experience over the past four years watching pre-registration trends, doubling the number of pre-registered attendees roughly 29-31 days before the event is a strong indicator of actual headcount.

Our final headcount onsite at Baton Rouge SQL Saturday was 560, give or take a few stragglers, just 12 folks short of breaking SQL Saturday Atlanta's record of 571. This puts us on par with what we pulled in last year to Baton Rouge SQL Saturday as well.

With pride, I still can call Baton Rouge SQL Saturday one of the largest in the world, or in the universe, according to our keynote speaker Ob Soonthornsima, CIO and Chief Security Officer of Blue Cross Blue Shield Louisiana, our Diamond Sponsor this year.

Having spent countless hours over the past three months preparing for the event (with a lot of help from my friends), I breathe a sigh of relief but also of excitement. We were that close to breaking the record, and yet the fundamental parts of the event were in place to hold even more. LSU's brand new Business Education Complex is a fantastic host, with 15 classrooms seating 60 or more and a large auditorium for keynotes and raffles, all in very close proximity.

Sparkhound, home of my managers, coworkers and best friends, was our Platinum sponsor this year and led the charge with nine speaker sessions by Sparkhound Baton Rouge employees, at least one in each time slot, including back-to-back-to-back sessions in our CIO/CTO/IT Manager track from three members of Sparkhound upper management. 

Also huge thanks to my wife Christine Assaf (hrtact.com), who spoke twice, in the Career Track and CIO track, and also spent her free time between talks in nine 15-minute coaching sessions for attendees of her sessions. Christine and entrepreneur/author Tonia Askins each offered free one-on-one coaching for attendees via a signup sheet after their sessions on "Mastering your Resume and Interview" and "Exploring Entrepreneurship" respectively, a great idea that future SQL Saturday organizers should consider.

Also big thanks to Microsoft as usual, who was represented very well at our conference, including the entire speaker lineup of the Windows Server Infrastructure track with Nicholas Jones, Randy Nale, Steven Rachui, and Matt Hester. Mike Huguet of Microsoft was a member of the planning committee and also spoke three times, including a lunch session on "What's it like to Work for Microsoft?"

I want to extend a huge thank you to everyone on the planning committee who helped make this conference a great one this year, to our twenty-one generous sponsors, and to 560 of my closest friends and colleagues this past weekend at SQL Saturday Baton Rouge!

I received a lot of help from these good friends and members of the SQL Saturday Baton Rouge Planning Committee, in no specific order:
Mike Huguet of Microsoft
Laurie Firmin of Ameritas
Jacques Steward of Turner Industries
Stacy Vicknair of Sparkhound
Kenny Neal of Amedisys
Adrian Aucoin of Corporate Services
James Coolman of Sparkhound (and family!)
Justin Yesso of Blue Cross Blue Shield
Diann Kelley of Sparkhound
Cody Gros of Sparkhound
Beth Curry of Antares
Kyle Summers of Sparkhound
Christine Assaf of Waste Management Inc
Cherie Sheriff of GE Capital
Cody Arnould of Sparkhound
Paul Barbin of Amedisys
and countless volunteers as well, thanks!

See you next year!


Tuesday, June 11, 2013

SQL Saturday Baton Rouge 2013

SQL Saturday is a global event to bring Information Technology speakers and professionals together for a community-driven, community-attended free day of technical training. The Baton Rouge SQL Saturday 2013 event is sponsored by the Baton Rouge SQL Server and .net User Groups, and will be our fifth annual event on campus at LSU.

We expect over 500 regional IT professionals and national speakers to join us. This free conference is open to the public and is perfect for students, CIOs, database administrators, developers, IT managers, server admins and job-seekers. Please visit sqlsaturday.com for complete details on this free community event.

What does SQL Saturday Baton Rouge look like?



We've got 60 sessions in 13 tracks, covering a broad range of technologies, coming to our event at the LSU Business Education Complex on August 3.

SQL DBA Foundations
SQL Dev
Business Intelligence Foundations
Business Intelligence Advanced
Dev .net Foundations
Dev .net/ALM
Dev Web/Mobile
Dev App Support
SharePoint
Powershell
Windows Server Infrastructure
Career Development
CIO/CTO/IT Manager

Friday, May 03, 2013

Use the Default System Trace to Find Autogrowth Events

The SQL Server Management Studio built-in standard report for "Disk Usage" is very handy to view the performance-sucking autogrowth events for a database... until you want to see if autogrowth events have been occurring in any database on the server.

Autogrowth events are very problematic and can cause sudden, unpredictable and difficult-to-troubleshoot application timeouts and performance drops. This applies to any SQL Server database file, including those in SharePoint databases, system databases (such as the tempdb), and any user database data file or log file.


[As an aside, your friends in SharePoint know all about Autogrowth settings for their content databases as well.  Friend of the blog Drew at WerePoint posted this relevant blog entry recently about the timing of growth and recycle-bin emptying events. http://werepoint.blogspot.com/2013/05/on-proper-care-and-feeding-of-content.html ]


Ideally, as professional DBAs, we don't want to ever see an autogrowth event occur on our databases.  By proactively monitoring the space in our database files, we can grow database files on our own terms, on our own schedule (outside of busy hours), and without interrupting a user transaction.

Autogrowth events occur when a data file or log file has exhausted its reservation. SQL Server is forced to request that the operating system add more space to the file. During this request and the subsequent file growth, very little can happen in the data file. This typically occurs during a user transaction - bad news!

The increment of the autogrowth might be defaulted to just 1 MB on your system, which means a single large transaction could result in a series of sequential autogrowth events before the transaction finally commits.  In the end, the argument over fixed autogrowth size versus percentage-based autogrowth settings is academic - a vigilant DBA should try to avoid autogrowth events in the first place.

But, while we're on the topic: 10% is a completely reasonable growth rate for data and log files for most databases. Depending on your available storage space, a fixed growth rate (100 MB-500 MB) may be more appropriate.
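As a sketch of what growing a file "on our own terms" looks like - the database and logical file names here are hypothetical - a scheduled maintenance window might run something like this:

```sql
--Hypothetical database and logical file names.
--Grow the data file proactively, during a maintenance window...
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_data, SIZE = 20480MB);
--...and leave a fixed autogrowth increment on the log as a safety net only.
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 256MB);
```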

Instant File Initialization can also make a big difference in the time required to grow a database file (as well as to restore database backups), though it applies only to data files - log files must still be zero-initialized. Note that this may need to be enabled at the virtual guest and SAN level for some virtual environments.
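One quick way to check whether Instant File Initialization is in effect - note this DMV column only exists on later versions (SQL Server 2016 SP1 and up); on older versions, check whether the service account holds the "Perform Volume Maintenance Tasks" right:

```sql
--SQL Server 2016 SP1 and later only.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```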

Here's a script to view the available space in every database file in a SQL Server instance, using the hacky old sp_MSforeachdb undocumented stored procedure:
exec sp_MSforeachdb  'use [?]; 
SELECT 
  ''DatabaseName_____________'' = d.name
, Recovery = d.recovery_model_desc
, ''DatabaseFileName_______'' = df.name
, ''Location_______________________________________________________________________'' = df.physical_name
, df.File_ID
, FileSizeMB = CAST(size/128.0 as Decimal(9,2))
, SpaceUsedMB = CAST(CAST(FILEPROPERTY(df.name, ''SpaceUsed'') AS int)/128.0 as Decimal(9,2))
, AvailableMB =  CAST(size/128.0 - CAST(FILEPROPERTY(df.name, ''SpaceUsed'') AS int)/128.0 as Decimal(9,2))
, ''Free%'' = CAST((((size/128.0) - (CAST(FILEPROPERTY(df.name, ''SpaceUsed'') AS int)/128.0)) / (size/128.0) ) * 100. as Decimal(9,2))
 FROM sys.database_files df
 cross apply sys.databases d
 where d.database_id = DB_ID() 
 and size > 0
'
One important piece of information we can get from the default trace when we look for autogrowth events is the time and duration of each autogrowth event.  I've personally used these two pieces of information to prove that random application timeouts experienced by a client were because of database file autogrowth events stalling their transaction for whole seconds while more space was added.

How far back does the default trace go? It depends on how much activity is being written to the default trace (which obviously needs to be enabled for this to work). The current active default trace file keeps up to 20 MB of data, and five default trace files of 20 MB each are retained. This script only reads the latest file.
DECLARE @tracepath nvarchar(500)

SELECT 
 @tracepath = path 
 --select *
FROM sys.traces 
WHERE is_default = 1

--The trace automatically finds _n files, trim off the _nnn portion of the file name.
 select @tracepath = substring(@tracepath, 0, charindex('\log_', @tracepath,0)+4) + '.trc'

 print @tracepath

 SELECT 
  DBName    = g.DatabaseName
 , DBFileName   = mf.physical_name
 , FileType   = CASE mf.type WHEN 0 THEN 'Row' WHEN 1 THEN 'Log' WHEN 2 THEN 'FILESTREAM' WHEN 4 THEN 'Full-text' END
 , EventName   = te.name
 , EventGrowthMB  = convert(decimal(19,2),g.IntegerData*8/1024.) -- Number of 8-kilobyte (KB) pages by which the file increased.
 , EventTime   = g.StartTime
 , EventDurationSec = convert(decimal(19,2),g.Duration/1000./1000.) -- Duration is reported in microseconds; convert to seconds.
 , CurrentAutoGrowthSet= CASE
         WHEN mf.is_percent_growth = 1
         THEN CONVERT(char(2), mf.growth) + '%' 
         ELSE CONVERT(varchar(30), convert(decimal(19,2), mf.growth*8./1024.)) + 'MB'
        END
 , CurrentFileSizeMB = convert(decimal(19,2),mf.size* 8./1024.)
 , @tracepath 
 FROM fn_trace_gettable(@tracepath, default) g
 cross apply sys.trace_events te 
 inner join sys.master_files mf
 on mf.database_id = g.DatabaseID
 and g.FileName = mf.name
 WHERE g.eventclass = te.trace_event_id
 and  te.name in ('Data File Auto Grow','Log File Auto Grow')
 order by StartTime desc

 



EDIT: Smarter way to search the trace file _n files. This has also been reflected in the "autogrow events.sql" file in the toolbox.

Hunt Down Tables Without Clustered Indexes

This one's a gimme, but an underrated utility script when I enter a new environment and look for low-hanging fruit for easy improvements.

You'd be surprised how many vendors ship applications without a single index, much less properly aligned clustered indexes.

Heaps are bad.  You can't defragment a heap by performing index maintenance, despite old myths about the create-and-drop-a-clustered-index trick.

Similarly, the myth of a heap performing better on pure inserts vs a table with a clustered index is an academic one.  Heaps are impossible to order, which means selecting from a heap is always a scan.

If your table will be constantly written to and rarely read, put a clustered index on it.  The writes will be sequential, resulting in fewer page breaks and less fragmentation, which mean more efficient storage and maintenance.

EDIT: Page "breaks" (I meant splits) aren't possible on heaps, but forwarding pointers are, and they create a maze of bad performance in heaps with nonclustered indexes. Thanks to commenters for pointing this out!

And when you have to look in that table for a stray logging record, it won't be a full table scan. If your table will be constantly written to and never read, why is it in your database?  Such a table doesn't exist.
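As a quick sketch of that advice (the table and column names here are hypothetical), giving a write-heavy logging table a narrow, sequential clustered key takes two statements:

```sql
--Hypothetical write-heavy logging table.
CREATE TABLE dbo.applog
( logid int identity(1,1) not null
, logdate datetime not null
, logtext varchar(100) not null);

--A narrow, unique, unchanging, sequential clustered key: new rows land at the
--end of the index, and the occasional lookup can seek instead of scanning.
ALTER TABLE dbo.applog
 ADD CONSTRAINT pk_applog PRIMARY KEY CLUSTERED (logid);
```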

This script is an easy one to use for finding heaps in a database, along with the row count and size of jumbled data that is haunting your database.  It is in my DBA toolbox folder alongside many other scripts.

--Doesn't work on 2000 databases or databases in 2000 compatibility mode.  Need to change the db_id() syntax if so.

select 
 [Database Name] = db_name()
, [Table Name] = s.name + '.' + o.name
, p.row_count
, SizeMb= (p.reserved_page_count*8.)/1024.
from 
 sys.objects o
inner join  sys.schemas s on o.schema_id = s.schema_id
inner join  sys.dm_db_partition_stats p  on p.object_id = o.object_id 
inner join  sys.indexes si on si.object_id = o.object_id and si.index_id = p.index_id --count only the heap itself, not its nonclustered indexes
WHERE si.type_desc = 'Heap'
and  o.is_ms_shipped = 0
order by SizeMb desc


As an aside, but a different blog post altogether - a good clustered key is narrow, unique, unchanging and sequential - which is why IDENTITY integer columns are the perfect clustered key.  Compound clustered keys can be a big waste of space.

Thursday, March 21, 2013

TRY_CONVERT broken by WITH SCHEMABINDING in SQL Server 2012 Data Tools build

Here's a bug you might find if attempting to schemabind an object that uses the new TRY_CONVERT function.

To reproduce, run the following code in a SSDT database solution:

CREATE VIEW [dbo].[aview] WITH SCHEMABINDING 
AS 
SELECT id = TRY_CONVERT(int, id) 
FROM dbo.arealtable

The database project should build with this error:
Error 3 SQL70561: Cannot schema bind view '[dbo].[aview]' because name 'int' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself. C:\Users\...\Documents\Visual Studio 2010\Projects\...\Views\dbo.aview.sql
This view is perfectly valid in SQL Server, and can be created without a problem. In a Data Tools database solution, it prevents the solution from being built with the above error.
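To see for yourself that the engine accepts it, this repro (the table is a hypothetical stand-in) runs without error directly in SSMS against a SQL Server 2012 instance:

```sql
CREATE TABLE dbo.arealtable (id varchar(20) not null);
GO
--Creates without error outside of SSDT.
CREATE VIEW dbo.aview WITH SCHEMABINDING
AS
SELECT id = TRY_CONVERT(int, id)
FROM dbo.arealtable;
GO
```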


In the SSDT query editor, "try_convert(int" is underlined in red because SSDT isn't recognizing the syntax correctly.  Again, this only appears when the view is created with schemabinding.

Note that in my bug report, the project's target platform is SQL 2012 and the project's database settings compatibility mode is SQL 2012, using Microsoft Visual Studio 2010 Version 10.0.40219.1 SP1 and SQL Server Data Tools 10.3.21101.1.

This has been marked as "resolved" on Microsoft Connect, and the fix "will appear in an upcoming release of DACFx/SSDT", as of March 2013: https://connect.microsoft.com/SQLServer/feedback/details/772804/try-convert-broken-by-with-schemabinding-in-sql-server-2012-data-tools-build#details

UPDATE: Confirmed that this is now resolved in recent releases of SSDT.

Friday, March 08, 2013

Conversation History: The Continuing Case Against GUIDs

We had a cooperative relational database design exercise at the office last week as part of our regular department "Lunch and Learn" series, and inevitably one topic that came up is the use of GUIDs as the unique key for a table.

And before long, I had been goaded onto my soapbox to deliver my hellfire-and-brimstone sermon against the use of GUIDs as the primary key, much less the clustered index, of a table. (If you haven't heard this, you need to attend more Baton Rouge SQL Server User Group meetings.)

The case for GUIDs traces back to an oil rig case study, an actual design example we (at Sparkhound) encountered here in the Gulf South. There are hundreds of oil rigs out in the middle of the Gulf of Mexico, each with a sometimes-available Internet connection, and the need to sync back home to an onshore server.  By using GUIDs, each server can write a unique key to the same table, so the argument goes, without any conflicts.

(This is not my complete soap-box against GUIDs.  Another day.)

Why not use a compound key of integers instead?  You must still identify which oil rig is sending the data; combining that rig identifier with a rig-side identity column provides a compound primary key with guaranteed uniqueness at half the cost (4 bytes x 2) of a GUID (16 bytes).  That storage space adds up, and has a big impact on nonclustered indexes, fragmentation, page splits, etc.
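A minimal sketch of that alternative (all names here are hypothetical): each rig writes its assigned rig ID plus a rig-side identity, and the pair is guaranteed unique across all rigs:

```sql
CREATE TABLE dbo.rigreading
( rigid int not null                      --assigned per rig, identifies the sender
, readingid int identity(1,1) not null    --generated rig-side, unique within a rig
, readingvalue decimal(9,2) not null
, CONSTRAINT pk_rigreading
   PRIMARY KEY CLUSTERED (rigid, readingid) --8 bytes total vs a 16-byte GUID
);
```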

After that lunch and learn ended, one of our bright new hires engaged me afterwards...

Friday, March 01, 2013

YYYYMM Math

I had a project recently where I had to do math on a number that represented a date, YYYYMM.

For example, February 2013 would be stored as an integer, 201302.

As an aside, YYYYMM makes WAY more sense than ever storing anything MMYYYY.  Why?  For one, it sorts correctly.  And the biggest reason - no need to worry about leading zeroes, and therefore the data can be stored as NUMERIC, not VARCHAR data.

I needed to do month math on this data. For example, give me the previous month, six months ago, one year ago, nine months in the future, etc.  This gets tricky because though it is tempting to do -1, -6, -12, +9, the rollover of years does not work at all.

The following function solves the problem.
CREATE FUNCTION dbo.[YearMonth_Math]
(
@Yearmonth int, @months smallint
)
RETURNS INT
AS
BEGIN
RETURN
 CASE WHEN @months < 0 THEN CASE --This case handles flipping January to the previous year's December.
 WHEN convert(int, Right(@Yearmonth,2)) <= abs(@months) --will we need to flip a year?
 THEN convert(int,
  convert(char(4), Left(@Yearmonth,4) + (((@Months)) / 12)  - case when right(@YearMonth, 2) + (@months % 12) < 1 Then 1 else 0 end)
  + convert(char(2), right('0' +  convert(varchar(3), (right(@yearmonth, 2) + (@months % 12) 
  + case when right(@YearMonth, 2) + (@months % 12) < 1 Then 12 else 0 end)),2))
  )
  --Otherwise, this previous month calculation is very simple.
  ELSE @Yearmonth + @months
 END
 WHEN @months >0
 THEN CASE --This case handles flipping December to the next year's January.
  WHEN 12 - convert(int, Right(@Yearmonth,2)) <= @months --will we need to flip a year?
  THEN convert(int,
   convert(char(4), left(@YearMonth,4) + ((@months + right(@yearMonth,2) -1) / 12) ) +   
   convert(char(2), right('0' + convert(varchar(3), (right(@YearMonth, 2) + (@months % 12)  
   - case when right(@YearMonth, 2) + (@months % 12) > 12 THen 12 else 0 end)),2))
   )
   --Otherwise, this previous month calculation is very simple.
   ELSE @Yearmonth + @months
  END
 ELSE @YearMonth
 END
END
Here's some testing to verify all the different possibilities.
select  dbo.[YearMonth_Math] (201212,1), 201301
select  dbo.[YearMonth_Math] (201212,2), 201302
select  dbo.[YearMonth_Math] (201212,7), 201307
select  dbo.[YearMonth_Math] (201212,12), 201312
select  dbo.[YearMonth_Math] (201212,13), 201401
select  dbo.[YearMonth_Math] (201212,24), 201412
select  dbo.[YearMonth_Math] (201212,25), 201501
select  dbo.[YearMonth_Math] (201212,36), 201512
select  dbo.[YearMonth_Math] (201201,-1), 201112
select  dbo.[YearMonth_Math] (201201,-2), 201111
select  dbo.[YearMonth_Math] (201201,-7), 201106
select  dbo.[YearMonth_Math] (201201,-12), 201101
select  dbo.[YearMonth_Math] (201201,-13), 201012
select  dbo.[YearMonth_Math] (201201,-24), 201001
select  dbo.[YearMonth_Math] (201201,-25), 200912
select  dbo.[YearMonth_Math] (201212,-1), 201211
select  dbo.[YearMonth_Math] (201212,-2), 201210
select  dbo.[YearMonth_Math] (201212,-7), 201205
select  dbo.[YearMonth_Math] (201212,-12), 201112
select  dbo.[YearMonth_Math] (201212,-13), 201111
select  dbo.[YearMonth_Math] (201212,-24), 201012
select  dbo.[YearMonth_Math] (201212,-25), 201011
select  dbo.[YearMonth_Math] (201212,-36), 200912
select  dbo.[YearMonth_Math] (201206,1), 201207
select  dbo.[YearMonth_Math] (201206,-1), 201205
select  dbo.[YearMonth_Math] (201206,2), 201208
select  dbo.[YearMonth_Math] (201206,-2), 201204
select  dbo.[YearMonth_Math] (201201,1), 201202
select  dbo.[YearMonth_Math] (201201,13), 201302
select  dbo.[YearMonth_Math] (201201,25), 201402
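As a footnote, the same month math can be sketched without the CASE gymnastics by bouncing through an actual date value with DATEADD (style 112 is yyyymmdd, so char(6) keeps just the yyyymm portion):

```sql
DECLARE @Yearmonth int = 201212, @months smallint = 13;

--Convert YYYYMM to the first of that month, shift by @months, convert back.
SELECT CONVERT(int, CONVERT(char(6),
    DATEADD(MONTH, @months,
        CONVERT(datetime, CONVERT(char(6), @Yearmonth) + '01')), 112));
--201401
```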

Thursday, February 28, 2013

"An item with the same key has already been added."

In SQL Server Reporting Services, when adding a new dataset to a report, you may see an error that looks like this:

"An item with the same key has already been added."

Based on some quick googling, it looks like there are several potential causes for the error, but here's why I received it today.

The stored procedure I had declared as the query source had the same column name declared twice.  Even if you use different data from different tables, even if you declare them differently, SSRS needs all column names in the final dataset to be unique.
select
tablea.samecolumnname
,tableb.samecolumnname
,samecolumnname = tablec.columnname
,tabled.columnname as samecolumnname
...
That error message isn't helpful, but that was the solution for me, for what was a clumsy mistake to begin with.
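The fix, sketched against the same hypothetical tables (this is a fragment, like the snippet above), is simply to alias each output column uniquely:

```sql
select
 tablea_samecolumnname = tablea.samecolumnname
,tableb_samecolumnname = tableb.samecolumnname
,tablec_columnname = tablec.columnname
,tabled.columnname as tabled_columnname
```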

Tuesday, February 19, 2013

Using Foreign Keys to Determine Table Insertion Order

Here's a script to determine, based on your database's foreign key relationships, what the insertion order would be for, say, a testing dataset.

      with cteFK (pktable, fktable) as ( 
       select             
            pktable = s1.name + '.' + o1.name 
       ,    fktable = isnull(s2.name + '.' + o2.name, '')        
       from sys.objects o1       
       left outer join sys.sysforeignkeys fk on o1.object_id = fk.fkeyid        
       left outer join sys.objects o2 on o2.object_id = fk.rkeyid        
    left outer join sys.schemas s1 on o1.schema_id = s1.schema_id
    left outer join sys.schemas s2 on o2.schema_id = s2.schema_id
       where o1.type_desc = 'user_table'       
       and o1.name not in ('dtproperties','sysdiagrams')        
       group by s1.name + '.' + o1.name 
      , isnull(s2.name + '.' + o2.name, '')       
), cteRec (tablename, fkcount) as  ( 
       select tablename = pktable 
       ,    fkcount = 0
       from cteFK    
       
       UNION ALL       
       
    select tablename = pktable 
       , fkcount = 1
       from cteFK  
       cross apply cteRec        
       where cteFK.fktable = cteRec.tablename    
       and cteFK.pktable <> cteRec.tablename
) 
select
 TableName
, InsertOrder = dense_rank() OVER ( ORDER BY max(fkcount) asc )
from (       
       select
              tablename = fktable
       ,      fkcount = 0 
       from cteFK 
       group by fktable    
        
       UNION ALL    
 
       select tablename = tablename, fkcount = sum(ISNULL(fkcount,0))   
       from cteRec      
       group by tablename
     ) x 
where x.tablename <> ''
group by tablename 
order by InsertOrder asc, TableName asc

Use the sample script from the previous post on how to "Script Out Foreign Keys With Multiple Keys" for an example of building a complicated set of foreign key relationships to test this script out.

Here's the results from that example:

Similarly, this script would generate an order for you to unravel the data - the numbers descending would allow you to delete in the proper order,
delete from fktable11
delete from fktable10
delete from fktable9
delete from fktable8
delete from fktable6
delete from fktable4
delete from fktable2
delete from fktable7
delete from fktable5
delete from fktable3
delete from fktable1
... or drop the tables in the proper order.
drop table fktable11
drop table fktable10
drop table fktable9
drop table fktable8
drop table fktable6
drop table fktable4
drop table fktable2
drop table fktable7
drop table fktable5
drop table fktable3
drop table fktable1
UPDATED 20140507: changed old system reference objects (sysobjects) to new system reference objects (sys.objects)
UPDATED 20140624: added "and cteFK.pktable <> cteRec.tablename", see comments for explanation.

Monday, February 18, 2013

Script Out Foreign Keys With Multiple Keys

It's easy enough to use sys.foreign_keys and sys.foreign_key_columns to identify foreign keys.  But what if you want to script out your foreign keys (and only your foreign keys)... that have compound primary keys?

For example,
--Script 1
create table dbo.fktable1(
  id1 int not null
, id2 int not null
, id3 int not null
, text1 varchar(20) not null
, CONSTRAINT pk_fktable1 primary key (id1, id2, id3))

create table dbo.fktable2(
  id int not null identity(1,1) primary key
, id1 int not null
, id2 int not null
, id3 int not null
, text1 varchar(20) not null
, CONSTRAINT [FK_fktable1_fktable2] 
  FOREIGN KEY (id1, id2, id3) 
  REFERENCES dbo.fktable1 (id1, id2, id3))

Combining those multiple records in the sys.foreign_key_columns into a concatenated string in order to get this is tricky:

--Script 2
ALTER TABLE [dbo].[fktable2]  WITH CHECK 
  ADD  CONSTRAINT [FK_fktable1_fktable2] 
  FOREIGN KEY([id1], [id2], [id3])
  REFERENCES [dbo].[fktable1] ([id1], [id2], [id3])


Here's how I recently did this.  It actually turned out to be more complicated than I thought, certainly more complicated than your standard throw-strings-together-based-on-system-tables script.  This is because we need to recurse over the multi-key values that are both referenced and referencing in foreign keys. 
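One common sketch for the concatenation step (not necessarily the exact approach used here) is FOR XML PATH(''), which rolls the parent columns of each foreign key into a single comma-separated string, in key order:

```sql
SELECT fk.name,
  parent_columns = STUFF((
    SELECT ', ' + QUOTENAME(c.name)
    FROM sys.foreign_key_columns fkc
    INNER JOIN sys.columns c
      ON c.object_id = fkc.parent_object_id
      AND c.column_id = fkc.parent_column_id
    WHERE fkc.constraint_object_id = fk.object_id
    ORDER BY fkc.constraint_column_id
    FOR XML PATH('')), 1, 2, '')
FROM sys.foreign_keys fk;
```

The same subquery pattern, repeated for the referenced side (referenced_object_id/referenced_column_id), yields both halves of the FOREIGN KEY ... REFERENCES statement.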

Tuesday, February 05, 2013

Adding the Clustered Key To Your Nonclustered Index? Part 2

Of course, a rather obvious answer presents itself the next day.  After reviewing the actual environment that my colleague was working in, it popped quickly into mind.

What if the table has a compound primary key?  See comments for a play-by-play.

drop table dbo.testclusteredinclude
go
create table dbo.testclusteredinclude
(             id1 int not null
,             id2 int not null
,             id3 int not null
,             text1 varchar(30) Not null
,             constraint pk_testclusteredinclude primary key  (id1, id2, id3)
)
go
insert into dbo.testclusteredinclude (id1, id2, id3, text1) values (1,2,3,'test1'); --put in our seed row

--filler data of 10000 rows
with cte3pk (id1, id2, id3) as
(select id1=2,id2=3,id3=4
union all
select id1+1, id2+1, id3+1
from cte3pk
where id1 <= 10000
)
insert into dbo.testclusteredinclude (id1, id2, id3, text1)
select id1, id2, id3, 'test2' from cte3pk
OPTION (MAXRECURSION 10000);
go
alter index all on dbo.testclusteredinclude  rebuild
go

--turn on show actual exec plan

--Second key of the Clustered Key can benefit, this easy to understand.
-- We're simply giving SQL a narrower set of data, via the nonclustered index.
select id2, text1 from dbo.testclusteredinclude where
id2 = 1001
go
create nonclustered index idx_nc_testclusteredinclude_id2_text1
on dbo.testclusteredinclude (id2, text1)
go
select id2, text1 from dbo.testclusteredinclude where
id2 = 1001
go
drop index idx_nc_testclusteredinclude_id2_text1 on dbo.testclusteredinclude
go

--Still, putting a subsequent key of a compound clustered key in the include column doesn't help.
-- SQL can still return id2, even when the index doesn't explicitly contain it (idx_nc_testclusteredinclude_text1),
-- because the clustered key columns are already present in every nonclustered index.
select id2, text1 from dbo.testclusteredinclude where
text1 = 'test2'
go
create nonclustered index idx_nc_testclusteredinclude_text1
on dbo.testclusteredinclude (text1)
go
select id2, text1 from dbo.testclusteredinclude where
text1 = 'test2'
go
create nonclustered index idx_nc_testclusteredinclude_text1_inc_id2
on dbo.testclusteredinclude (text1) include (id2)
go
select id2, text1 from dbo.testclusteredinclude where
text1 = 'test2'
go

drop index idx_nc_testclusteredinclude_text1 on dbo.testclusteredinclude
drop index idx_nc_testclusteredinclude_text1_inc_id2 on dbo.testclusteredinclude
go

One final note - none of the SELECT statements above generate any missing index suggestions in SQL 2012 SP1 or SQL 2008 SP2, even though without any nonclustered indexes they all generated Clustered Index Scans.