pointers, solutions and scripts for the SQL DBA
Not intended to replace common sense


2/24/2014

Challenges with Downgrading TFS 2010 from SQL Enterprise to SQL Standard Edition

I recently had the challenge of migrating an installation of Microsoft Team Foundation Server 2010 from a shared SQL Server 2008 R2 Enterprise instance to its own SQL Server 2008 R2 Standard instance on a new server.

Because of some Enterprise edition-specific feature usage, two problems turned up during the restore:
  • Use of database compression on some TFS 2010 tables 
  • Use of perspectives in the TFS 2010 SSAS database "TFS_Analytics" 
Neither feature is available in Standard edition, and conveniently the error messages raised when restoring the SQL and SSAS databases from Enterprise to Standard clearly indicate as much. After making the client aware, the decision was made to try to remove these features from their TFS installation.

After removing these Enterprise features from copies of the databases, I was able to back up and restore the copied databases (during the migration outage) to the new server without any more edition failures.
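
As an aside: on SQL 2008 and higher, you can inventory the edition-specific features in use in a database ahead of time with the sys.dm_db_persisted_sku_features DMV. A quick check like this, run in each user database, would have flagged the compression usage before the restore attempt:

--list Enterprise-only features persisted in this database
--(returns rows like 'Compression' or 'Partitioning'; no rows = safe to downgrade)
select feature_name, feature_id
from sys.dm_db_persisted_sku_features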

Here's how:

Remove Compressed Indexes

You may encounter this error if you attempt to restore any database that uses data compression from Enterprise to Standard edition:

cannot be started in this edition of SQL Server because part or all of object 'foo' is enabled with data compression or vardecimal storage format

Here is a script to look through all tables for compressed partitions (either heaps or indexes) and REBUILD them with DATA_COMPRESSION = NONE. This must obviously be run on the source instance before the migration backups are taken, since data compression requires Enterprise Edition of SQL 2008 or higher.

--Enable WITH (ONLINE = ON) if possible

Declare @sqltext nvarchar(1000), @tbname nvarchar(300), @index_id int, @index_name sysname

--find every compressed partition, whether a heap or an index
Declare tbl_csr cursor
FOR
select name = '['+s.name+'].['+o.name+']', p.index_id, i.name
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
inner join sys.indexes i on i.index_id = p.index_id and i.object_id = o.object_id 
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Open tbl_csr

Fetch Next from tbl_csr into @tbname, @index_id, @index_name

 While (@@FETCH_STATUS=0)
 Begin

   If @index_id = 0 
    begin
     --index_id 0 is a heap, so rebuild the table itself
     set @sqltext=N'ALTER TABLE '+@tbname + N' REBUILD WITH (DATA_COMPRESSION = NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild heap ' + @tbname 
    end
   else
    begin
     --index_id 1 is the clustered index; higher values are nonclustered indexes
     set @sqltext=N'ALTER INDEX ' + QUOTENAME(@index_name) + N' ON '+@tbname + N' REBUILD WITH (DATA_COMPRESSION = NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild index ' + @index_name + ' on ' + @tbname 
    end

   Fetch next from tbl_csr into @tbname, @index_id, @index_name

 End

Close tbl_csr
Deallocate tbl_csr

Below is a demo you can use to simulate the script: it finds clustered indexes, nonclustered indexes and heaps to rebuild appropriately, while also ignoring XML indexes (which could present a problem if you took a blanket ALTER INDEX ALL ... REBUILD approach).
use adventureworks
go

--test lab
if not exists (select 1 from sys.schemas where name = 'testschema')
exec (N'create schema testschema')

go
if exists (select 1 from sys.objects where name = 'testfeature_index') 
drop table testschema.testfeature_index
go
create table testschema.testfeature_index (id int not null identity(1,1) primary key , bigint1 bigint not null, xml1 xml null)
insert into testschema.testfeature_index (bigint1) values (123456789123456789),(1234567891234567891),(1234567891234567892),(1234567891234567893)

go
set nocount on 
insert into testschema.testfeature_index (bigint1)
select bigint1+5 from testschema.testfeature_index 
go 10
set nocount off
alter index all on testschema.testfeature_index rebuild with (data_compression = page)
create nonclustered index idx_nc_testfeature1 on testschema.testfeature_index (bigint1) with (data_compression = page)
create primary xml index idx_nc_testfeaturexml1 on testschema.testfeature_index (xml1) 
create xml index idx_nc_testfeaturexml2 on testschema.testfeature_index (xml1)  USING XML INDEX idx_nc_testfeaturexml1 FOR PATH
go
if exists (select 1 from sys.objects where name = 'testfeature_heap') 
drop table testschema.testfeature_heap
go
create table testschema.testfeature_heap (id int not null identity(1,1)  , bigint1 bigint not null)
insert into testschema.testfeature_heap (bigint1) values (123456789123456789),(1234567891234567891),(1234567891234567892),(1234567891234567893)
go
set nocount on 
insert into testschema.testfeature_heap (bigint1)
select bigint1+5 from testschema.testfeature_heap 
go 10
set nocount off
go
alter table testschema.testfeature_heap rebuild  with (data_compression = PAGE)
create nonclustered index idx_nc_testfeature1 on testschema.testfeature_heap (bigint1) with (data_compression = page)

go

--Enable WITH (ONLINE = ON) if possible

select name = '['+s.name+'].['+o.name+']', case p.index_id when 0 then 'Heap' when 1 then 'Clustered Index' else 'Index' end
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'
go
Declare @sqltext nvarchar(1000), @tbname nvarchar(300), @index_id int, @index_name sysname

--find every compressed partition, whether a heap or an index
Declare tbl_csr cursor
FOR
select name = '['+s.name+'].['+o.name+']', p.index_id, i.name
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
inner join sys.indexes i on i.index_id = p.index_id and i.object_id = o.object_id 
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Open tbl_csr

Fetch Next from tbl_csr into @tbname, @index_id, @index_name

 While (@@FETCH_STATUS=0)
 Begin

   If @index_id = 0 
    begin
     --index_id 0 is a heap, so rebuild the table itself
     set @sqltext=N'ALTER TABLE '+@tbname + N' REBUILD WITH (DATA_COMPRESSION = NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild heap ' + @tbname 
    end
   else
    begin
     --index_id 1 is the clustered index; higher values are nonclustered indexes
     set @sqltext=N'ALTER INDEX ' + QUOTENAME(@index_name) + N' ON '+@tbname + N' REBUILD WITH (DATA_COMPRESSION = NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild index ' + @index_name + ' on ' + @tbname 
    end

   Fetch next from tbl_csr into @tbname, @index_id, @index_name

 End

Close tbl_csr
Deallocate tbl_csr


go

select name = '['+s.name+'].['+o.name+']', case p.index_id when 0 then 'Heap' else 'Index' end
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Remove Perspectives from SSAS Database

You may encounter this issue when restoring any SSAS database from Enterprise to Standard edition, not just in TFS.

Errors related to feature availability and configuration: The 'Perspectives' feature is not included in the '64 Bit Standard Edition' SKU.

The solution is multi-step but straightforward.

Here's a breakdown of the steps. The XMLA code to accomplish this will follow:

  1. Backup the SSAS database (TFS_Analytics) on the Enterprise SSAS instance.
  2. Restore the SSAS database with a new name (TFS_Analytics_std) to a new DbStorageLocation on the Standard SSAS instance.
  3. In Management Studio Object Explorer, script out the database as an ALTER statement. Find the <Perspectives> section of the code. (Note - "Perspectives" is plural.) Drag and select to the </Perspectives> tag. Be sure to capture all the <Perspective>...</Perspective> sections. Delete.
    • Easier way? Collapse the box to the left of the <Perspectives> tag. Select the collapsed line for the <Perspectives> tag. Delete.
  4. Execute the XMLA script. (Hit F5.) This will remove the perspectives from the database.
  5. Backup the TFS_Analytics_std database to a new location.
  6. Restore the backup of the TFS_Analytics_std database to the Standard Edition server, renaming the database back to TFS_Analytics.
Code examples below. Be aware that you may need to apply the <AllowOverwrite>true</AllowOverwrite> element to overwrite any .abf files during a backup, or databases during a restore. For safety reasons, this option has been set to false for these code examples.


  1. On the old Enterprise server, backup the SSAS database (TFS_Analytics).
    <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Object>
      <DatabaseID>Tfs_Analysis</DatabaseID>
     </Object>
     <File>M:\MigrationBackups\TFS_analysis_migrationbackup_2014.abf</File>
     <AllowOverwrite>false</AllowOverwrite>
    </Backup> 
  2. On the old Enterprise server, restore the SSAS database with a new name (TFS_Analytics_std) to a new DbStorageLocation.
    <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <File>M:\migrationbackups\TFS_analysis_migrationbackup_2014.abf</File>
     <DatabaseName>TFS_Analysis_std</DatabaseName>
     <DbStorageLocation xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">m:\migrationbackups\</DbStorageLocation>
    </Restore>
    
  3. In Management Studio Object Explorer, script out the database as an ALTER statement.
  4. Find the <Perspectives> section of the code and remove it.
  5. Execute the XMLA script. (Hit F5.) This will remove the perspectives from the database.
  6. On the old Enterprise server, backup the TFS_Analytics_std database to a new location.
    <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>Tfs_Analysis_std</DatabaseID>
      </Object>
      <File>M:\MigrationBackups\TFS_analysis_migrationbackup_2014_std.abf</File>
    </Backup>
    
    
  7. Create a new XMLA script on the target Standard Edition server. Restore the backup of the TFS_Analytics_std database to the Standard Edition server, renaming the database back to "TFS_Analytics".

    If there are no other Enterprise-only features in use in the SSAS database, this backup should restore successfully.

    Note also that the restore here occurs across the wire, using the UNC path to a temporary folder share. The SSAS service account on the new Standard edition server must have permissions to view this folder share.
    <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <File>\\serverpathnamehere\MigrationBackups\TFS_analysis_migrationbackup_2014_std.abf</File>
     <DatabaseName>TFS_Analysis</DatabaseName>
     <AllowOverwrite>false</AllowOverwrite>
    </Restore>

12/12/2013

It's Dangerous to ISNUMERIC, Take This Try_Convert

There's an issue with some characters in T-SQL when validating numeric values out of raw varchar fields, and it can be easily solved with a function new to SQL Server 2012.

Consider this sample setup, where we are stuck importing values from a varchar data source into an integer data destination, a common task in heterogeneous ETL activities:
create table tablefoo
(id int identity(1,1) not null primary key,
foo varchar(20) not null)
go
--add some valid integers
insert into tablefoo (foo) values
(123),
(456)

--add some dollar values
insert into tablefoo (foo) values
('$123'),
('$456')

--add some alphanumeric strings
insert into tablefoo (foo) values
('abc123'),
('def456')

--add scientific notation
insert into tablefoo (foo) values
('1e234')
go
select * from tablefoo 

/*
id foo
1 123
2 456
3 $123
4 $456
5 abc123
6 def456
7 1e234
*/
Let's try to validate only proper integer values in tablefoo.foo for insertion into a column with a data type bigint.
--this fails! why?
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
The error message is
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to bigint.
Let's dig into the data case-by-case...
--this succeeds as designed, returning NULL for the alphanumeric value.
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = 'abc123'

--This fails because the dollar sign passes the ISNUMERIC check 
-- BUT is not valid for an int field 
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = '$123'

--This also fails because 1e234 is valid scientific notation 
-- BUT is not valid for an int field.
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = '1e234'
Our culprit was the curious behavior of certain characters (in this case, "$" and "e"), which are valid to the ISNUMERIC built-in function, commonly used to detect numeric strings.

How to clean this up?
--this returns only integer values properly for the whole table
-- and is much prettier.
select 
 c.foo
, ISNUMERIC(c.foo) 
, TRY_CONVERT(bigint, c.foo )
from tablefoo c
Note the above values of ISNUMERIC and the error-less conversion of the TRY_CONVERT function. So not only is TRY_CONVERT downright sexy with its efficiency and compactness, it can help you avoid errors commonly encountered while using ISNUMERIC with characters like the dollar sign ($), decimal point (.), scientific notation (e) and comma (,).
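
To put a bow on it, here's a minimal sketch of the final filtered load. The destination table tablefoo_dest is hypothetical, standing in for whatever your integer destination actually is:

--insert only the values that actually convert to bigint
--(tablefoo_dest is a hypothetical destination table)
create table tablefoo_dest
(id int not null primary key,
foo bigint not null)
go
insert into tablefoo_dest (id, foo)
select c.id, TRY_CONVERT(bigint, c.foo)
from tablefoo c
where TRY_CONVERT(bigint, c.foo) is not null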


10/17/2013

"SQL Server: Bolts to Buzzwords" presentation for the Hammond .NET User Group at Southeastern Louisiana University

One of my favorite groups to speak for (aside from the Baton Rouge SQL Server User Group) is the Hammond .NET User Group at Southeastern Louisiana University in Hammond, LA. A mostly undergraduate crowd of enthusiastic future developers (and perhaps some developers-turned-DBAs, like myself) make for a fun audience that asks great questions.

Here's my slidedeck for an introductory presentation on databases and SQL Server, perfect for a college crowd, most of whom hadn't yet completed a capstone project requiring a database solution. In fact, the first line of the slidedeck details the highly important lesson of how to pronounce "SQL" - as in, "sequel" NOT "ess kew el".

Here's the download link for the PowerPoint slidedeck

3/08/2013

Conversation History: The Continuing Case Against GUIDs

We had a cooperative relational database design exercise at the office last week as part of our regular department "Lunch and Learn" series, and inevitably one topic that came up was the use of GUIDs as the unique key for a table.

And before long, I had been goaded onto my soapbox to deliver my hellfire-and-brimstone sermon against the use of GUIDs as the primary key, much less the clustered index, of a table. (If you haven't heard this, you need to attend more Baton Rouge SQL Server User Group meetings.)

The case for GUIDs traces back to an oil rig case study, an actual design example we (at Sparkhound) encountered here in the Gulf South. There are hundreds of oil rigs out in the middle of the Gulf of Mexico, each with a sometimes-available Internet connection and the need to sync back home to an onshore server.  By using GUIDs, each server can write a unique key to the same table, so the argument goes, without any conflicts.

(This is not my complete soap-box against GUIDs.  Another day.)

Why not use a compound key of integers?  Surely you must still identify which oil rig is sending the data, so pairing that rig identifier with a rig-side identity column would provide a compound primary key with guaranteed uniqueness at half the cost (4 bytes x 2 = 8 bytes) of a GUID (16 bytes).  That storage space adds up, and has a big impact on nonclustered indexes, fragmentation, page splits, etc. (A sketch of the idea follows.)
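
Here's a minimal sketch of that compound-key design; the table and column names are hypothetical, not from the actual case study:

--hypothetical compound-key alternative to a GUID primary key:
--RigID identifies the rig, MeasurementID is generated rig-side,
--and together they are unique across all rigs (8 bytes vs 16)
create table dbo.RigMeasurement
(RigID int not null
,MeasurementID int not null identity(1,1)
,MeasurementValue decimal(9,2) not null
,constraint PK_RigMeasurement primary key clustered (RigID, MeasurementID)
)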

After that lunch and learn ended, one of our bright new hires engaged me afterwards...

7/12/2012

SQL Server DBA Tool Kit

Right next to my batarang and bat-zip-line-hook-shooter-gun on my utility belt is my SQL Server DBA Tool Kit, full of scripts I've picked up along the way.

I take little credit for some of these, as I've picked them up from various sources.  I have modified them, for example, adding the ability to see the cached execution plan to MS PSS's "worst queries" query, or adding a blocking chain and cached execution plan to the typical dm_exec_sessions and dm_exec_requests queries out there to replace sp_who2.

Some of my handwritten queries are in there too, for example the query to find the largest objects in the database and their compression state, a proof of sql_modules vs INFORMATION_SCHEMA.routines, and the "sessions and requests" with blocking chain query that I've presented on before and use daily (a simplified sketch of it is below).  Nothing novel or ground-breaking, but real, practical utility queries I use personally as a SQL Server consultant.
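
As a rough illustration, a stripped-down version of that sessions-and-requests query might look like this sketch; the real toolkit version also pulls the cached execution plan and a proper blocking chain:

--a simplified sp_who2 replacement showing who is blocked by whom;
--the full toolkit version adds the cached execution plan and more
select
  s.session_id
, blocked_by = r.blocking_session_id
, s.login_name
, s.host_name
, r.status
, r.command
, r.wait_type
, query_text = t.text
from sys.dm_exec_sessions s
left outer join sys.dm_exec_requests r on r.session_id = s.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) t
where s.is_user_process = 1
order by r.blocking_session_id desc, s.session_id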

I presented my toolkit last night to the Baton Rouge SQL Server User Group and it turned out to be one of our best meetings in a while - over an hour of solid, experience-based conversation about queries, best practices, and real-world experiences; a wealth of information.  Thanks to everyone who attended and helped make it a great meeting - it certainly wasn't all me and my fancy tool kit.

You can download the toolkit.zip as presented last night, July 11, at the Baton Rouge SQL Server User Group website here: http://www.brssug.org/group-news/july12batonrougesqlserverusergroupmeeting

Thanks to all who attended last night, and see you at Baton Rouge SQL Saturday on August 4!

8/05/2011

WRKF Feature Story on SQL Saturday #64 in Baton Rouge on Aug 6

Check out this link for a feature story on SQL Saturday Baton Rouge from local radio station WRKF:

http://www.wrkf.org/batonrouge&newsID=1794

9/02/2010

Presentation on SQL DR Overview

Here's the link to the Lightning Round presentation I made to the Baton Rouge IT Professionals User Group on August 31.  The session ended with a demo on how to set up basic maintenance plans in SQL Server Management Studio 2008 R2.  I opened for a great presentation by Chris Eveler from Dell on EqualLogic servers!

8/10/2010

Backup SQL with Powershell (long)

Had to set up backups for a SQL Express 2005 database.  Why some vendors insist on creating software dependent on SQL Express, and then not giving their clients any way to back up said data, is mind-bogglingly stupid.

Here's a PowerShell script I wrote to accept a server name, database, and target backup folder.  It performs a single full backup when called. 

This script, which includes error handling, sub-folder creation and more robust error reporting, is a lot more involved than any examples I found online; I had to amalgamate the knowledge of dozens of Google and Bing searches. 

Some key syntax features of Powershell that I learned about and implemented here:
  1. The InnerException loop is the only way to get the actual SQL Error out.  Otherwise, you're just stuck with Error 3041, "BACKUP failed to complete the command", which is not informative at all.  This is probably the most critical part of this, the ability to get meaningful errors into a text file log in the event of a backup failure.
  2. The Test-Path verifies if the container exists, so that you can create a directory if it doesn't.
  3. The $SMOBackupObject.Database property wants a Database name (string), not a Database object.  That really threw me off for an hour or so.  My fault, of course.
  4. I used the SilentlyContinue ErrorActionPreference so that the user would see a clean error message (in my if $error.count section) instead of a messy red error from PowerShell, which would not be informative anyway (see #1).
  5. I used both the Trap syntax (to access the Exception object) and the if $error.count, because I wanted to have it return something to the end user regardless. 
  6. The out-null parts also hide messy returns to the end-user.
  7. The $error.clear() is also pretty important, as it clears out the errors that had run beforehand, if the user wants to try running the script again.  Not sure why this is necessary; it's not intuitive that errors persist like that.  Open to any advice.

    function SQLFullBackup ([string]$SQLServerName, [string]$SQLDBname, [string]$BackupFolder) 
    {  
     
        # SQL Server Backup
        # Executes a single FULL backup of a given database 
     
        $error.clear()
        $ErrorActionPreference = 'SilentlyContinue'
        $BackupDate = Get-Date -format yyyyMMddHHmmss
     
        # Load the SMO assemblies before any SMO objects are created.
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
     
        # SPECIFY SQLServer\Instance name with this variable
        $SQLServer =  New-Object ("Microsoft.SqlServer.Management.Smo.Server") $SQLServerName
     
        # Walk the InnerException chain to get the actual SQL error,
        # and write it to a log file in the backup folder.
        Trap [Exception] {
            $err = $_.Exception
            while ( $err.InnerException )
                {
                $err = $err.InnerException
                $errorfullmessage += $err.Message + " "
                };
     
            $errorfullmessage | out-File ($BackupFolder + "\BackupHistory_Error_" + $SQLDBName + "_" `
                + $BackupDate + ".txt");
        }
     
        # Create the subfolder if it does not already exist.
        if ((Test-Path -path ($BackupFolder + "\" + $SQLDBName) -pathtype container) -ne $True)
        {
            New-Item -path $BackupFolder -name $SQLDBName -type directory | out-null
        }
     
        $SMOBackupObject = New-Object ("Microsoft.SqlServer.Management.Smo.Backup")
        $SMOBackupObject.Database = $SQLDBName   # a database name (string), not a Database object
        $SMOBackupObject.Action = "Database" # File, Log
        $SMOBackupObject.Incremental = $false
        $SMOBackupObject.Devices.AddDevice($BackupFolder + "\" + $SQLDBName + "\" + $SQLDBName + "_" `
                + $BackupDate + ".bak", "File")
        $SMOBackupObject.SqlBackup($SQLServer) 
     
        if($error.count -gt 0)
        {   
            "Backup of database " + $SQLDBName + " failed.  See log file in " + $BackupFolder + "\BackupHistory_Error_" `
            + $SQLDBName + "_" + $BackupDate + ".txt"
        }
        Else
        {
            "Backup of database " + $SQLDBName + " successful."
        }
     
    };
     
Here are the function calls, per database, for the above function:

    SQLFullBackup ".\SQLEXPRESS" "express" "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup"
    SQLFullBackup ".\SQLEXPRESS" "express2" "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup"
     
Here's sample output of a successful call and a failed call.


    PS C:\Windows\system32>C:\powershell_scripts\sqlfullbackup calls.ps1
    Backup of database express successful.
    Backup of database express2 failed.  See log file in 
    c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup\BackupHistory_Error_express2_20100810094357.txt 

Here are the files created:
    BackupHistory_Error_express2_20100810094357.txt
    \express\express_20100810094357.bak

And the error log actually contains an informative SQL error.  In this case:

    An exception occurred while executing a Transact-SQL statement or batch. 
    Could not locate entry in sysdatabases for database 'express2'. No entry found with that name. 
    Make sure that the name is entered correctly.
    BACKUP DATABASE is terminating abnormally. 


How did this DBA first get into PowerShell?  Here's a blog post about it.  

4/05/2010

TFS Sidekicks

I do a lot of DB Pro solution and DBA code reviews at my office and on client sites. This free sidekick add-on for Visual Studio has been invaluable.

http://www.attrice.info/downloads/index.htm

Whenever I want to look at files included in a TFS changeset, or all files checked in to TFS for one work item, or a comparison across changesets, I find this Visual Studio plugin extremely handy.

For example, I use it for Code Reviews, to compare changes across all Changesets for a given Work Item. It has a quick function to compare the latest version of a file to the latest version of that file before this work item. Ridiculously efficient - I've forgotten how I did this before.

Very quick install, low-impact. I imagine it could be very handy for lots of different Source Control-related sources and diffs in Visual Studio that you just can't do with Visual Studio alone. Anyone else use it?

2/21/2010

A message on default constraints

One of my assignments as a consultant DBA for a local Fortune 500 company is to support development of a large project. I was in a meeting on naming conventions and best practices from a DBA standpoint with the development leads this past week when the practice of naming default constraints came up.

The benefit of providing user-defined names for your default constraints is that SQL won't generate a standardized name with a random string at the end. That random string can be a real pain when it comes to database object comparison; plus, it looks ugly and isn't nearly as informative as a user-defined name could be.

Here's the email sent to the developers, who were unsure of the syntax and the deprecated behavior of default objects. Names have been changed to protect the innocent tables and columns.

You are correct that creating a standalone DEFAULT object (CREATE DEFAULT) is deprecated behavior and will not be supported. Creating the constraints in the CREATE TABLE script is the preferred method. Constraints can still be named, however.

CREATE TABLE [dbo].[TABLECD](
[CD] [char](1) NOT NULL DEFAULT ' ',
[id] [varchar](15) NOT NULL DEFAULT ' ',
[ADDUSER] [int] NOT NULL DEFAULT ((0)),
[ADDDATE] [datetime] NOT NULL DEFAULT GETDATE(),
[EDITUSER] [int] NOT NULL DEFAULT ((0)),
[EDITDATE] [datetime] NOT NULL DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEUSER] [int] NOT NULL DEFAULT ((0)),
[DELETEDATE] [datetime] NOT NULL DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEFLAG] [char](1) NOT NULL DEFAULT ' '
)

When the above script is run, below is what actually gets created. Notice the names generated.

DEFAULT on column ADDDATE DF__TABLECD__ADDDA__59FA5E80 (getdate())
DEFAULT on column ADDUSER DF__TABLECD__ADDUS__59063A47 ((0))
DEFAULT on column CD DF__TABLECD__CD__571DF1D5 (' ')
DEFAULT on column DELETEUSER DF__TABLECD__DELET__5CD6CB2B ((0))
DEFAULT on column DELETEDATE DF__TABLECD__DELET__5DCAEF64 (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column DELETEFLAG DF__TABLECD__DELET__5EBF139D (' ')
DEFAULT on column EDITDATE DF__TABLECD__EDITD__5BE2A6F2 (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column EDITUSER DF__TABLECD__EDITU__5AEE82B9 ((0))
DEFAULT on column id DF__TABLECD__id__5812160E (' ')

Here's a version of the script that sets user-provided constraint names:

CREATE TABLE [dbo].[TABLECD](
[CD] [char](1) NOT NULL CONSTRAINT [DF_TABLECD_CD] DEFAULT ' ',
[id] [varchar](15) NOT NULL CONSTRAINT [DF_TABLECD_id] DEFAULT ' ',
[ADDUSER] [int] NOT NULL CONSTRAINT [DF_TABLECD_ADDUSER] DEFAULT ((0)),
[ADDDATE] [datetime] NOT NULL CONSTRAINT [DF_TABLECD_ADDDATE] DEFAULT GETDATE(),
[EDITUSER] [int] NOT NULL CONSTRAINT [DF_TABLECD_EDITUSER] DEFAULT ((0)),
[EDITDATE] [datetime] NOT NULL CONSTRAINT [DF_TABLECD_EDITDATE] DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEUSER] [int] NOT NULL CONSTRAINT [DF_TABLECD_DELETEUSER] DEFAULT ((0)),
[DELETEDATE] [datetime] NOT NULL CONSTRAINT [DF_TABLECD_DELETEDATE] DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEFLAG] [char](1) NOT NULL CONSTRAINT [DF_TABLECD_DELETEFLAG] DEFAULT ' '
)

And here's the output. Notice the constraint names are not truncated or appended with random strings.

DEFAULT on column ADDDATE DF_TABLECD_ADDDATE (getdate())
DEFAULT on column ADDUSER DF_TABLECD_ADDUSER ((0))
DEFAULT on column CD DF_TABLECD_CD (' ')
DEFAULT on column DELETEDATE DF_TABLECD_DELETEDATE (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column DELETEFLAG DF_TABLECD_DELETEFLAG (' ')
DEFAULT on column DELETEUSER DF_TABLECD_DELETEUSER ((0))
DEFAULT on column EDITDATE DF_TABLECD_EDITDATE (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column EDITUSER DF_TABLECD_EDITUSER ((0))
DEFAULT on column id DF_TABLECD_id (' ')


"There are no constraints on the human mind, no walls around the human spirit, no barriers to our progress except those we ourselves erect." -Ronald Reagan

2/12/2010

SQL + Vark

In honor of Google's acquisition of Aardvark, thought I'd share a SQL-related Vark from a while back.

Here's how I spent part of my morning in an empty house.

http://vark.com/t/83c38c
Question
Dec 05, 2009
Sent to Aardvark via IM
Friend 1
22 / M / Omaha, NE
In database design, are relational schemas created with ER diagrams usually in 3NF? Is there a case where ERD schemas wouldn't be in 3NF?

Answer 1
+ 11 minutes
Aardvark found someone who knows about database

William A. (<--- that's me)
I am a database architect. For a relational system, yes, diagrams are created with at least third normal form. It is a part of the design process to consider business rules, usage and business entities... strictly designing a system to a normalization spec is not realistic. When designing a data warehouse however, converting higher-normalization relational data into a less normalized form may lead to better performance in a system that will be primarily for reads, not writes.

Friend 1
Is there something about the ERD approach to designing relational schemas that makes it generally create schemas that satisfy third normal form, or is it just something you keep in mind during the design process?

William A.
So, I wasn't aware ERD was an "approach". Typically a CERD is followed by a PERD. That is, a Conceptual ERD, where relationships are specified as M-M, and objects are in business terms that a client would be able to easily recognize and comprehend. A Physical ERD is not easily consumable by a client or business user because it translates the CERD into database tables and foreign key relationships. The PERD contains join tables that break many-to-many relationships into a pair of 1-M relationships, for example. So building an ERD is, to me, separate from proper normalization methodology.

Friend 1
Yeah, ERD schema design and 3NF normalization are usually not thought of simultaneously, which is why I'm having a hard time finding any resources that relate the two. Much harder is figuring out how well ERD creates schemas which satisfy 3NF, whether that was the intention of the design or not.

William A.
To me, ERD is not a design process or architecture methodology. To me, it is a document. Yes, an ERD can display a schema that has been created in 3NF well, though typically only a PERD would be capable of displaying proper normalization. I'd suggest papers or books by CJ Date or EF Codd.

Friend 1
I'll look into it, thanks for your help.

William A.
good luck

He'll need it.

Aardvark is a really neat social tool that is sort of like the oft-advertised KGB, except the answers come from other users who have signed up for knowledge in general categories. I signed up for Aardvark after hearing a story about it on NPR, and had a delightful conversation about Xbox 360s and the weather with a fellow named Mert in Turkey. As you can see, the above conversation was brought to me because I signed up as knowledgeable in the area of databases.

I've also Vark'd (is that a word?) about Conan O'Brien's TV ratings and martial arts schools for my kid. But since this is a SQL blog, I figured I'd share this one.

UPDATE: Sadly, Google killed Aardvark in September 2011.  Oh well.

1/29/2010

Upgrade SQL 2005 to 2008 checklist

Pre-upgrade checklist
• Make sure you don't have any sa accounts with blank passwords; see the check script after this list. This is a really good idea regardless.
• On the SQL server, get your .NET installation up to .NET Framework 3.5 SP1 or later. The best way to do this is to install Visual Studio 2008 SP1.
• For Server 2003, install Windows Installer v4.5 from the SQL 2008 media. Not needed for Server 2008.
• Make sure that all DBAs, sysadmins, and dev team leads know that they will need to install SSMS 2008. (It's still a good idea to keep SSMS 2005 and BIDS 2005 installed at your workstation if you have SQL 2005 servers in your environment, especially for replication and SSIS packages.)
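
For that first checklist item, a quick check like this (PWDCOMPARE is available in SQL 2005 and 2008) should flag any SQL logins with blank passwords:

--flag SQL logins whose password is blank
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE('', password_hash) = 1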
General tips

• The easiest way I can think of to do this is to install a SQL 2008 instance and detach/attach the 2005 databases to the new instance. Obviously, this creates a server with a different instance name. This may or may not be a big deal, if you can change your connection strings easily enough.

  In all likelihood, though, you'll instead want to upgrade your SQL 2005 instance in-place. This can be done through the wizard, using the option "Upgrade from SQL Server 2000 or SQL Server 2005."

• Remember that upgraded, attached or copied databases from SQL 2005 may still be in SQL 2005 compatibility mode. Be sure to do your post-implementation testing against databases in SQL 2008 compatibility mode! A script for this follows.
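
A minimal sketch of that post-upgrade check and fix ("MyDatabase" is a hypothetical name):

--check compatibility levels (90 = SQL 2005, 100 = SQL 2008)
SELECT name, compatibility_level FROM sys.databases

--move a database to SQL 2008 compatibility mode
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100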
If you have anything to add, please do so in the comments below so that others who find this blog post can benefit.

Links for more reading:
http://msdn.microsoft.com/en-us/library/ms144267.aspx
http://msdn.microsoft.com/en-us/library/bb677619.aspx
Clustered environment: http://msdn.microsoft.com/en-us/library/ms191295.aspx
http://www.sqlservercentral.com/articles/SQL+Server+2008/67066/

10/16/2009

Try, try again with strange SQL 2008 error on Win7/Server 2k8

"Invoke or BeginInvoke cannot be called on a control until the window handle has been created."

Got this error when trying to install SQL 2008 Service Pack 1 on Windows Server 2008.

Went on a wild search trying to find yet another solution to a SQL 2k8 install problem, including downloading a CU, but then I found this:
Generally, if you just rerun it, you won't hit the issue again.
It worked.

Sigh.

Trying again. It's not just for bicycle riding any more!

"The definition of insanity is doing the same thing over and over again and expecting different results." - NOT BY Einstein, Ben Franklin, Mark Twain, or anybody with a brain.

2008 install problem with Framework 3.5

Problem installing SQL 2008 on a new virtual server. It gave an error almost immediately after running setup.exe, without even getting to the links screen.

"Microsoft .NET Framework 3.5 installation has failed.

SQL Server 2008 Setup requires .NET Framework 3.5 to be installed."

I found the solution deep in a thread on MSDN. The point of this blog is to make things like this easier to find on the web, so here goes.

1. Copy your media to a folder on the server. You need to update a file in there.
2. Download a fresh copy of dotNetFx35setup.exe from Microsoft.
3. Copy the new file and overwrite the old in this location in the media: ..\x64\redist\DotNetFrameworks\

7/17/2009

correct the name of a SQL server in SQL 2005

Renamed a virtual server, but your SQL server instance still reflects the old name?

Verify what SQL Server thinks its name is with:

select @@servername

Rename the server with:

sp_dropserver [sqlserveroldname<\instancename>]
go
sp_addserver [sqlservernewname<\instancename>], local

Restart the SQL Server service.

Verify the new name with select @@servername again.

Note: this isn't as huge a deal in SQL 2005 as it was in previous versions. You'll notice, for example, that you can still access your SQL server even if it thinks it has the wrong name, and the SSMS object explorer looks accurate. But for remote logins, and I would assume linked server connections, having those names out of sync could be fun.

MSDN: http://msdn.microsoft.com/en-us/library/ms143799.aspx

EDIT: Note that you cannot rename the instance name of a named instance, only the server name.

5/21/2009

clear out DTA indexes and statistics

Here's a helpful script I keep around on my portable HD. When the Database Tuning Advisor fails or is aborted for whatever reason, it can leave behind its hypothetical indexes and stats. This script will remove those leftover hypothetical indexes and stats from your database. It uses the is_hypothetical flag to find indexes, and the _dta prefix to find stats. This assumes that when you implement DTA-recommended stats, you change their name (always a best practice). If you have implemented DTA-recommended stats and not changed their name, this script will remove those too.


WHILE EXISTS (SELECT * FROM sys.indexes WHERE is_hypothetical = 1)
BEGIN
DECLARE @sql varchar(max)
--drop one hypothetical DTA index at a time;
--include the schema name so the DROP works outside the dbo schema
SELECT @sql = 'drop index [' + indexname + '] on [' + schemaname + '].[' + tablename + ']'
FROM (
SELECT TOP 1 indexname = i.name, schemaname = schema_name(o.schema_id), tablename = o.name
FROM sys.indexes i
INNER JOIN sys.objects o
ON i.object_id = o.object_id
WHERE i.is_hypothetical = 1 and o.type = 'u'
) x
SELECT @sql
EXEC (@sql)
END
GO

--[_] escapes the underscore wildcard, so only names literally starting with _dta match
WHILE EXISTS (SELECT * FROM sys.stats i WHERE OBJECTPROPERTY(i.[object_id],'IsUserTable') = 1 AND i.[name] LIKE '[_]dta%' and user_created = 0)
BEGIN
DECLARE @sql varchar(max)
--drop one DTA-created statistic at a time;
--include the schema name so the DROP works outside the dbo schema
SELECT TOP 1 @sql = 'drop statistics [' + object_schema_name(i.[object_id]) + '].[' + object_name(i.[object_id]) + '].[' + i.[name] + ']'
FROM sys.stats i
WHERE
OBJECTPROPERTY(i.[object_id],'IsUserTable') = 1 AND
i.[name] LIKE '[_]dta%' and user_created = 0
SELECT @sql
EXEC (@sql)
END


See also:
I originally posted this on tsqlscripts.com (drink!)
http://www.graytechnology.com/Blog/post.aspx?id=e21cbab0-8ae2-478e-a027-1b3b14e7d0b9
http://weblogs.sqlteam.com/mladenp/archive/2007/07/16/60257.aspx
http://www.sqlservercentral.com/Forums/Topic398549-360-1.aspx


    "The great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact." -T.H. Buxley