
Friday, March 24, 2023

SQL Server 2022 Administration Inside Out

SQL Server 2022 Administration Inside Out is now available for pre-order from Microsoft Press. 

This book is far more than a quick update. We've rewritten and refreshed large portions of the book to cover modern security and Azure integration features. We reorganized content, especially around Azure SQL DB, Azure SQL MI, and all the new performance features and integrations.

The biggest thanks, for countless personal hours of deep edits and rewrites, go to Randolph West. Major contributions came from the book team: Joseph D'Antoni, Louis Davidson, Meagan Longoria, Elizabeth Noble, and Melody Zacharias. Thanks also and congrats to technical editors William Carter and Josh Smith on their first big book experience. And thanks to our Pearson editor Loretta Yates for infinite patience and tireless updates.

The book should be on shelves and available in digital formats on May 8.

https://www.microsoftpressstore.com/store/sql-server-2022-administration-inside-out-9780137899883

Monday, September 14, 2020

[Updated] Prepping the local SSD D: in an Azure VM for SQL Server TempDB

One of the few good, safe uses for the local SSD volume of an Azure VM is for SQL Server tempdb data and log files. 

Note: this blog post has been updated, specifically to solution 3b below. See comments.

Typically this is the D: volume of an Azure VM, though not always in the case of migrated VMs.

Moving your SQL Server instance's tempdb files to the D: volume is recommended for performance, as long as the tempdb files fit on the D: that has been allocated, based on your VM size. 

When the contents of D: are lost due to deallocation, as expected, the subfolder you created for the tempdb files (if applicable) and the NTFS permissions granting the SQL Server service account rights to that folder are no longer present. Even putting the tempdb data and log files in the root of D: isn't a solution, because after deallocation the NTFS permissions on the root of D: won't exist either. In either case, SQL Server will be unable to create the tempdb files and will not start.

In the Windows Application Event log of the Azure VM, you'll see something like:
Event ID: 17204
FCB::Open failed: Could not open file D:\tempdb.mdf for file number 1.  OS error: 2(The system cannot find the file specified.).
Event ID: 5123
CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file 'D:\tempdb.mdf'.
On Azure Windows VMs, the local SSD temporary disk is typically D:, and I'll assume it is for you as well. On Azure Linux VMs, the local SSD temporary disk is /dev/sdb by default. The solutions below are for the Windows OS.

Solutions


There are three solutions to this problem in Windows. To test any of these solutions, don't just restart your VM; you have to deallocate it (stop it in the Azure Portal, then start it again). That's the only way to be sure the contents of D: have been lost. Keep in mind that data on the temporary disk may also be lost during Azure maintenance, which is mostly out of your control, so it is important to solve this issue when using the D: for tempdb.

Note that simply shutting down the Azure VM from inside the operating system does not deallocate the VM; it only stops the VM. During a normal, successful reboot of the VM OS, data on the D: will not be lost. A reboot initiated from inside an RDP session, for example, shouldn't cause the contents of D: to be lost. You must stop the VM from the Azure Portal to deallocate it. This is the difference between a VM status of "Stopped" and "Stopped (deallocated)". 
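For repeatable testing, the same deallocate/start cycle can be scripted with the Azure CLI. A quick sketch, where the resource group and VM names are placeholders for your own:

az vm deallocate --resource-group rg-sql-demo --name vm-sql-demo
az vm start --resource-group rg-sql-demo --name vm-sql-demo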

1. Windows Scheduled Task Calling PowerShell

The first solution appears to be the one Microsoft documented years ago here, but it is no longer necessary. To be clear, Solution 3 below, using the IaaS Agent, is the recommended and best option now.

The old workaround was something like this: a simple PowerShell script running in a Windows scheduled task, triggered at startup. It also requires setting the SQL Server service to "Automatic (Delayed Start)". 

The script creates the subfolder on D: for the tempdb, then grants permissions to the SQL Server service account, before SQL Server tries to start.

This type of script plus the delayed startup is an awkward but necessary complication; still, this solution works fine.
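For illustration only, a minimal sketch of what such a startup script might look like is below. It assumes the default instance's virtual service account NT SERVICE\MSSQLSERVER and a D:\TempDb folder; substitute your own service account and path.

# Sketch: recreate the tempdb subfolder on the ephemeral D: and grant the SQL Server service account
# full control, before the (delayed-start) SQL Server service attempts to start.
# The service account name and folder path below are assumptions, not part of the original post.
$tempDbPath = 'D:\TempDb'
if (-not (Test-Path -Path $tempDbPath)) {
    New-Item -Path $tempDbPath -ItemType Directory | Out-Null
}
$acl  = Get-Acl -Path $tempDbPath
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList 'NT SERVICE\MSSQLSERVER', 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow'
$acl.AddAccessRule($rule)
Set-Acl -Path $tempDbPath -AclObject $acl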

2. Add SQL service account to local Administrators group

The second solution is to place the tempdb files on the root of D: but then make the SQL Server service account a local Windows administrator, so that it has permissions to write to the root of D:.  This violates most security best practices (though it is more common than you'd think). 

Key here is placing tempdb in the root of D:. Even as a member of the local Administrators group on the server, SQL Server will not create subfolders upon startup, and a subfolder on D: won't exist after deallocation. 

I'm not a fan, but it's a solution, and it is certainly the simplest, even though it violates the principle of least privilege.

3a. Azure SQL Server IaaS Agent Extension, installed with VM (recommended)

A third and best solution involves the Azure SQL VM Resource Provider. It installs the SQL Server IaaS Agent Extension, which connects the SQL instance to the Azure portal for all sorts of easy integrations, bringing a lot of manageability to the Azure Portal. It is generally highly recommended. 

The SQL Server IaaS Agent Extension should handle preparing D: for the tempdb, solving startup issues for an Azure VM with SQL Server. This definitely works for VM images with SQL Server from the Azure marketplace, which by default configure tempdb on the local SSD D: anyway. I have tested this out with the newest "SQL Server 2019 on Windows Server 2019" image available in the Azure Marketplace. It works. For example:


"If you create the VM from Azure portal or Azure quickstart templates and place Temp DB on the Local Disk then you do not need any further action." Turns out, Solution 3 here is very similar to Solution 1. When installed from the marketplace with the SQL Server IaaS Agent installed, the Azure VM has a Windows Scheduled task triggered At Startup to call "C:\Program Files\Microsoft SQL Server IaaS Agent\Bin\SqlIaaSExtension.SqlServerStarter.exe".  

I speculate that this app runs on startup to prep the D: for tempdb, handling the permissions on the  D:\TempDb subfolder for the SQL Server service account, which is not a member of the local admins group by default. In this case, unlike Solution #1 above, the SQL Server service is not configured to "Automatic (Delayed Startup)", it is still configured to "Automatic", as usual.

Again, to test this out, don't just restart your VM; you have to deallocate it (stop it in the Azure Portal, then start it again). That's the only way to be sure the contents of D: have been lost and recreated.

I have split this solution into two parts because 3a only appears to work if SQL and the SQL VM Resource Provider are pre-installed via the image gallery. 

3b. Azure SQL Server IaaS Agent Extension installed manually

But what about an existing VM, one not created with the SQL Server IaaS Agent Extension pre-installed from the Azure VM image gallery? [UPDATE August 2023]: This now works. The IaaS Agent now configures tempdb on the ephemeral local SSD D: drive, even if registered on the VM after SQL Server was installed. This solution should work the same as 3a, above: Manage SQL Server virtual machines in Azure by using the Azure portal - SQL Server on Azure VMs.

We can install the SQL Server IaaS Agent Extension ourselves by registering the VM with the SQL VM resource provider. Once the RP is associated with your Azure subscription, adding the VM is easy. Install the SQL Server IaaS Agent Extension in lightweight management mode, without restarting the VM from the Azure CLI:
az sql vm create --name vm-iaasag-postsql --resource-group vm-iaasag-postsql_group --location eastus2 --license-type PAYG
Options for PowerShell are available in the same link.

However, you'll notice the SQL Server IaaS Agent Extension in lightweight management mode doesn't actually create the executable found in "C:\Program Files\Microsoft SQL Server IaaS Agent\Bin\". For that to be present, you need to move up to full management mode. You can upgrade from lightweight management mode or go straight to full management mode, but either way full management mode costs a SQL Server service restart.

Currently though, if I create a new Windows VM without SQL Server, install a SQL instance, install the SQL Server IaaS Agent Extension in full management mode, and even schedule "C:\Program Files\Microsoft SQL Server IaaS Agent\Bin\SqlIaaSExtension.SqlServerStarter.exe" in a scheduled task triggered at startup (just like a VM from the image gallery has), it doesn't appear to prep the local SSD D:\ drive for TempDB upon startup after deallocation. The SQL Server service still fails to start because it cannot create TempDB. 

So, the SQL Server IaaS Agent Extension appears to prep the local SSD D:\ drive for the TempDB files, but only if SQL and the SQL VM RP are pre-installed via the image gallery. I'm still actively working on a solution to this and reaching out to Microsoft. I will update this blog post at a later date.

Thursday, May 28, 2020

SOLVED! Untrusted SAS certs after SQL startup

If you've tried doing Backup to URL with SQL Server using a Shared Access Signature (SAS) credential and received this error:

Error: 18204, Severity: 16, State: 1.
BackupDiskFile::CreateMedia: Backup device 'https://account.blob.core.windows.net/container/folder/msdb_log_202005170801.trn' failed to create. Operating system error 50(The request is not supported.).
Cannot open backup device 'https://account.blob.core.windows.net/container/folder/msdb_log_202005170801.trn'. Operating system error 50(The request is not supported.). [SQLSTATE 42000] (Error 3201)
BACKUP LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013).
NOTE: The step was retried the requested number of times (1) without succeeding. The step failed. 


You may have received the same error I encountered.

This error popped up only after startup of SQL Server. To resolve the problem, we'd drop and recreate the credential, using the same SAS key in the same script, and the backups would start working again. This affected all types of SQL database backups.

The issue was not our syntax (note the options; the syntax here is standard, but you do need to provide MAXTRANSFERSIZE = 4194304 and BLOCKSIZE = 65536 to back up to Azure blob storage).
BACKUP DATABASE DB_name
TO URL = 'https://account.blob.core.windows.net/container/server-name/msdb_log_202005170801.trn'
WITH COMPRESSION, CHECKSUM, FORMAT, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536;


The issue was not the SAS key itself, which we tried creating via Storage Explorer, with PowerShell, with various expiration dates, with shared policies, etc.

Rather, the issue was with our SAS-based credential. Here was our syntax:
CREATE CREDENTIAL [https://account.blob.core.windows.net/container/server-name]
WITH IDENTITY='Shared Access Signature'
, SECRET='sv=2019-02-02&xxxxx';
Again, the syntax was correct in the sense that backups would start working again right away. But after a reboot, the credential would no longer be available to SQL. 

On the Azure Storage side, the logged failure was:

AuthenticationErrorDetail:Signed expiry time [Fri, 01 May 2020 13:30:52 GMT] has to be after signed start time [Tue, 12 May 2020 22:30:57 GMT]
Why would the exact same SAS key expire after reboot, but then start working again when it was dropped/created? And why wouldn't ALTER'ing the key work?

Our ticket with Microsoft moved through the storage team, the SQL team, and eventually the SQL engine engineering team, who did a complete stack dump to catch the error.

The issue turned out to be related to the hyphen in our credential creation above; note that I included "server-name" because our server names have hyphens in them. Turns out, this was unnecessary and caused SQL to choke on trusting the credential after startup. Here's the explanation from Microsoft support after analysis of the debugger dump:
When you create the credential, the data as we expected is stored in cache and is also persisted to disk but since the data is available in cache, we directly access with full URL name and it works. But after the SQL Server service is restarted and since the credential wont be available in cache, we try to query it from metadata and the way we do it there seems to be an issue on how we fragment the URL and we incorrectly look for container with foldername causing the issue. This is also why Alter wont work and only DROP\CREATE works since it’s in cache again.
We changed the credential to omit the server-specific folder name. Note that we had originally included the full path to the backup folder because I had read a little too much into this portion of the documentation: "The CREDENTIAL NAME argument requires that the name match the container path." 

This credential creation worked, and kept working after SQL startup.
CREATE CREDENTIAL [https://account.blob.core.windows.net/container] WITH IDENTITY='Shared Access Signature'
, SECRET='sv=2019-02-02&xxxxx';

Note the credential name ends with the container path but doesn't include the server name-specific subfolder, and still does not end with a '/'. 

The support escalation engineer marked the ticket as a bug and filed a defect with the engineering team, so we may have a solution in place for this soon. The end.




Oh, also, when we were first starting to use Backup to URL, we got this on some of our older instances of SQL Server:
"Backup/Restore to URL device error: Error while decoding the storage key."
The problem was trying to use a SAS key on a version of SQL Server prior to 2016. Not supported! Gotta use the old syntax and a credential based on the storage account name and access key. 
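For reference, a hedged sketch of that older approach (SQL Server 2012/2014 era) is below. The credential name, storage account, and container are placeholders, and the secret is the storage account access key rather than a SAS token.

CREATE CREDENTIAL [LegacyBackupToUrl]
WITH IDENTITY = 'mystorageaccount'   -- the storage account name, not a URL
, SECRET = '<storage account access key>';

BACKUP DATABASE DB_name
TO URL = 'https://mystorageaccount.blob.core.windows.net/container/DB_name_full.bak'
WITH CREDENTIAL = 'LegacyBackupToUrl', COMPRESSION, CHECKSUM, FORMAT;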

Wednesday, January 22, 2020

Our SQL Server 2019 Administration Inside Out Fun Author Survey

SQL Server 2019 Administration Inside Out
Honored again to be leading the talented SQL Server 2019 Administration Inside Out author team, and to share some fun insights from the team as the book goes through its final edits.

Technical book writing is such a mixed bag of emotions - long hours writing, existential doubt and validation, progress in seeing chapters move through editing stages, Sisyphean edits, wrangling metadata and chapter status, pride in the finished product, and finally some post-traumatic stress as the next major release of SQL Server approaches. It's been a pleasure to be the ersatz leader of the team. Even though I'm not the first name on the book, a lot of work from everyone on the cover goes into producing the content, including our two beloved technical editors, Meagan Longoria and Louis Davidson.

So a "fun" look back at our time on the book, spent mostly in Q4'19, seems appropriate as we near publishing.

A little bit on the cover order. We broke up the "shares" of workload on the book by chapter. Authors who revised or added to existing chapters received one share. For net-new chapters (for example, our entirely new final chapter on the Big Data and Machine Learning features of SQL Server 2019), an author received three shares. We combined all those shares and came up with a cover order, descending. It doesn't necessarily translate to chapter count or effort, and I can attest personally that with all the new changes necessary for SQL Server 2019, existing chapters' revisions were considerably more work than we thought! It's exciting that the book has a lot of new value even for existing chapters, and that's something to note for next time as we plan workloads...

Similar to our fun author survey from the 2017 book, the author and tech editor team put together answers to a few questions. This was a nice break while the chapters were progressing through their final edits. The 2019 edition of this book series should be on shelves in Q1'20 and is available for pre-order now everywhere books are sold.

As for this book's fun author survey, I found it interesting how half this team writes with music and half the team must not, how some of us write/edit in odd nooks and some in their home office enclaves, how some of us know how to spend free time and some of us have forgotten how, and how all of us love the new scalar function inlining feature of SQL Server 2019...

1. What music if any did you listen to while writing or editing?


Wednesday, June 05, 2019

Actual Emails: Will MSOLEDB work for connecting to an Availability Group?

Scenario:

We need to configure an existing legacy application from an external vendor to talk to our new SQL Server 2017 Availability Group, which spans multiple subnets. In the end, that last bit is the key. The old data provider MSOLEDB will work for connecting to single-subnet Availability Group listeners, but won't work consistently when connecting to a multisubnet Availability Group's listener. The key is the ability to specify MultiSubNetFailover=True in the connection parameters.

Client's software vendor:
The connection used is the Microsoft OLE DB Provider for SQL Server that is supplied by Microsoft to create the Data Link Properties. When configuring our Data Link, we use the "Microsoft OLE DB Provider for SQL Server". The connection string is formatted: 
Provider=SQLOLEDB.1;Password="whatever";Persist Security Info=True;User ID=username;Initial Catalog=Test;Data Source=ServerName 
Answer:
Good info, but we do need to make a change here. SQLOLEDB is the provider from back in the SQL 2000 era, and I don't recommend its use for new development. It has been replaced by the SQL Server Native Client (SNAC), which has since been replaced by MSOLEDBSQL (linked below). It should be easy and transparent to upgrade the provider from SQLOLEDB with no negative impact.
Here’s why we need to upgrade the data provider to talk to our SQL Availability Group. The SQL Server Listener for a multi-subnet Availability Group actually has two IP’s. When you perform at a command line:
Nslookup SQLListenerNameWhatever 
You get back an IP in each subnet (in our case, two), for each replica SQL instance in the Availability Group. 
When a connection string uses MultiSubNetFailover=True and connects to the Availability Group listener name (not the IP or either SQL Server instance name), BOTH IPs are tried simultaneously and immediately, and the driver talks only to the IP that replies: the primary replica. After a failover, the other IP begins to reply immediately, so there is no delay in reconnecting when a failover occurs. 
Without specifying MultiSubNetFailover=True, your application will (essentially randomly) pick ONE of the two IPs for the listener and try it. There is no way to "rig" one IP to be first consistently over time. If it picks the primary replica, everything works! If it picks the IP for the current secondary replica… your application's connection timeout will have to expire before it tries the next IP. This is why I'm bringing this up - the application will time out upon SQL login without MultiSubNetFailover=True.
This hasn’t been an issue with your other clients if they aren’t using a multisubnet availability group. If they have an Availability Group all inside only one subnet, then the Listener only has one IP in DNS, and MultiSubNetFailover=True isn’t required.
You should be fine to install the MSOLEDBSQL provider released in 2018 and use that in your data link. Obviously it should be tested for due diligence, but it should work. At the very least, you could instead try the SQL Server Native Client 11 (SQLNCLI11), which was released for SQL Server 2012, and it also should work just fine for either OLE DB or ODBC.
Let me know if you have any more questions.
Connection string information for SQL Server Availability Groups:
  • In SSIS, the new MSOLEDBSQL OLE DB driver appears as "Native OLE DB\Microsoft OLE DB Driver for SQL Server". The old OLE DB driver is "Native OLE DB\Microsoft OLE DB Provider for SQL Server". 
Note: you also need to make sure your Windows cluster has RegisterAllProvidersIP set to 1 (on) for a multi-subnet Availability Group!
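For illustration, a data link connection string using the newer provider might look something like the following; the listener, database, and credentials are placeholders. Note that OLE DB connection strings typically express the option as MultiSubnetFailover=Yes, while ADO.NET uses MultiSubnetFailover=True.

Provider=MSOLEDBSQL;Data Source=SQLListenerNameWhatever;Initial Catalog=Test;User ID=username;Password="whatever";MultiSubnetFailover=Yes;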

Edit March 2023: Added latest OLE DB/ODBC information.

Monday, April 29, 2019

Four Data Integration Design Questions to Ask

We get requests to move data between custom systems regularly, even within systems. I was advising a client on something fairly simple (a collection of tables out of one vendor system to feed another), and I thought I'd make a blog post out of the array of questions that always come up.

Regardless of the strategy for data movement, whether it be:
  • SQL Server Integration Services (SSIS) locally or in Azure Integration Runtime (IR)
  • Stored procedures
  • SQL replication
  • Secondary readable Availability Groups
  • Azure Data Factory 2.0 (not 1.0, oh goodness, never 1.0)
  • Transactional movement featuring message queues or APIs
  • Any streaming solution
  • ETL or ELT
  • Any other kind of transformation I'm forgetting to mention
The following questions should be asked before designing a data movement plan.

(There are no correct answers to these questions of course, but you must be able to determine the answers from the business case.)

1. What is the latency requirement for the changes from the data source(s) to be copied to the destination?
Common answers: Instantly, no longer than 5 min, or 30 min, or nightly.

2. How many rows are expected to change in the source(s) in a given time period? 
Common answers: Anywhere from a few rows per month to all or most of the rows in a table every day.

3. What types of data changes are performed in the source(s)? 
Is the source data inserted, updated, and/or deleted? 

4. Do we have a reliable way to identify "the delta"? 
How do we know which rows have changed, including hard deleted rows (vs soft deleted rows)?

Let's dive more into the last question, because this is where the design meets the implementation method. There's a reason we always design tables with an IDENTITY column and some basic auditing fields.

First off, a truncate/insert strategy is not scalable. I have redesigned more of these strategies than I can remember, often because of database developer myopia. A truncate/reinsert strategy, even a bulk insert strategy, will inevitably outgrow the time boundary identified in Question 1. Don't waste your time and resources on such a strategy; you need to identify a way to find out what changed in the data source.

But what if we don't or can't trust the application to always modify a "ChangeDate"? That is certainly the easiest way to know if a row has changed, but what if the original table wasn't designed with such a field? We should consider whether we can alter the data source(s) with useful, built-in SQL Server features like Change Tracking (CT), Change Data Capture (CDC), or a more recently introduced feature called Temporal Tables. The latter can provide a reliable, built-in modified date and row history, transparent to applications. All of these strategies are well documented and have easy-to-use labs available.
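As a quick illustration, here is a minimal Change Tracking sketch; the SalesDB database, dbo.Orders table, and OrderID key are hypothetical stand-ins for your own schema.

ALTER DATABASE SalesDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);

-- On each pull, ask only for rows changed since the version you stored last time.
DECLARE @last_sync_version bigint = 0;  -- persist this value between pulls
SELECT ct.OrderID, ct.SYS_CHANGE_OPERATION, o.*
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
LEFT JOIN dbo.Orders AS o ON o.OrderID = ct.OrderID;  -- deleted rows have no matching row
SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- store this as the next @last_sync_version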

Each of these solutions is very useful and recommended in its use case, and much preferred over a trigger-based system which will add complexity and overhead to transactions. A "pull" of recent changes is much preferred for most scenarios over a "push" of each change inside the transaction.

Caveats remain, however (and this came up with a recent client): the impact on future updates and patches for these databases must account for implementations of CT, CDC, or Temporal Tables. The same caveats apply to replication (useful in spots) and database triggers. Don't enable these SQL features without consulting with and advising the maintaining developers on the potential impact and need for testing.

One more crucial factor often overlooked as part of Question 4 is intermediate transactions, especially in the case of less-than-instant data movement. If a row changes from status 1, to status 2, to status 3, can we just send over the row state with status 3? Or must we apply an insert for status 1, an update for status 2, and then another update for status 3 to the destination? This could be a major problem if the destination has an indirect dependency on evaluating the status changes; for example, to calculate the durations between statuses.

I once designed a data warehouse for tracking the performance of auditors, and we were analyzing the workflow for the bottlenecks in a 20-step process. Each of the 20 steps and its corresponding row state and time stamp were the whole point of the analysis. This demanded some sort of row-versioning in the data source. Not all change detection strategies work for this, however. Change Tracking, for example, would not suffice. Know your solutions!

You shouldn't move forward with any data movement design before answering these questions.

Are there any other common questions you'd ask for before deciding on a plan for a project like this?

Thursday, August 20, 2015

Your Devs' Questions Answered With sys.dm_server_services

It starts with "I'm a developer...

"... with no RDP access, how can I tell if the SQL Server Agent service is running?"
"... and I don't think CDC is working. Is the SQL Agent on?"
"... how can I tell how long has the SQL Server been running? Did somebody reboot it last night?"
"... so hey I need to know if the SQL Server service set to automatic startup."

Here's an easy script that anyone can use; it can be especially helpful for developers without RDP access to the Windows Server that hosts the SQL Server service. It'll answer these developer questions and more, and it offers a lot of other utility for you as a DBA.
SELECT servicename        -- Ex: SQL Server (SQL2K8R2)
, startup_type_desc       -- Manual, Automatic
, status_desc             -- Running, Stopped, etc.
, process_id
, last_startup_time       -- datetime
, service_account
, filename
, is_clustered            -- Y/N
, cluster_nodename
FROM sys.dm_server_services

You don't need to be a member of the sysadmin server role to run this, but you do need VIEW SERVER STATE. Developers can be given this permission, even in production, as it gives them access to many key dynamic management views (DMVs), including DMVs for diagnostics and performance tuning.
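Granting the permission is a one-liner; the login name below is a placeholder for your developer's login or a role they belong to.

USE master;
GO
GRANT VIEW SERVER STATE TO [DOMAIN\DeveloperLogin];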

MSDN reference: https://technet.microsoft.com/en-us/library/Hh204542(v=SQL.110).aspx

Saturday, September 13, 2014

Houston TechFest 2014: SQL Admin Best Practices with DMV's

Awesome crowd this morning for my SQL Admin Best Practices with DMV's presentation at Houston TechFest 2014 - thanks very much for attending! There were some exceptionally useful questions raised today; great job by my audience.

Here is the .zip file as promised for my presentation slide deck and .sql files, including the new content for SQL 2014. Download here

Tuesday, September 02, 2014

The Nine SQL Server Careers

Microsoft SQL Server is a mature and broad technology platform that supports a diverse set of careers - this blog post is an attempt to provide technical detail to my personal theory on careers in the Microsoft SQL Server world.

A mid-tier SQL Server professional who may carry the title "Database Administrator" could find career traction with high-level skill in only three or four of these categories. It would be a rare accomplishment to find someone with honest expertise in all nine of these buckets, and most mid-tier SQL Server professionals have significant experience with no more than six of these roles.

It is also important for any IT professional to be aware of his/her limitations. We should all "know what we don't know," so this blog post is an effort to quantify these items. It is also very likely that the modern "DBA" possesses skill sets in .NET and other surrounding technologies which I do not aim to include here.


Tuesday, August 26, 2014

Painful or Helpful? No SSMS Multiserver Results if One Instance Errors

I've been going back and forth with this question for a few days now.

If one instance in a multiserver query presents an error, no resultsets from any of the other instances are displayed, even if an "n row(s) affected" message is displayed in the Messages tab.

This is particularly painful when querying dynamic management objects across server groups, because new DMVs and new DMV columns are introduced in every version, sometimes with service packs.
  • Is this a feature preventing you from using code that won't work against the group as a whole, saving you from accidentally assuming that all instances returned data?
  • Or, is this a pain that causes you to need version-specific registered server groups to get at data from some instances?
There is no right answer to this question, only the opportunity to be aware of this behavior.

There is an old MSConnect item from 2007 for a similar issue that was marked as "won't fix" in 2011, so it is not a bug. I am not sure if there is any beneficial intent from MS to prevent you from getting any data when one instance in a group has a syntax error.

Imagine if, within a large list of production SQL Servers, a handful are of a prior SQL version where some utility scripts won't work.

Should I then create another registered server group for instances that can run sys.dm_os_volume_stats, or a group for instances that support the new columns added to sys.dm_exec_query_stats? There's no correct answer here either; in the absence of an option to allow partial resultsets to be returned from multiserver queries, you will need to consider what suits your needs best.

Here are the steps to reproduce a simple scenario:

1. Add a SQL 2008 instance and a SQL 2008 R2 instance to a new Registered Servers group.
2. Create a new multiserver query.
3. Attempt to query a DMF or DMV that was introduced in SQL 2008 R2, such as sys.dm_os_volume_stats. (Sample script below.)
4. The Messages tab includes one error and one rows-affected message, like below. No Results tab is returned, so the rows are not visible even though the query worked successfully on the SQL 2008 R2 instance.
sqldemo1\sql2008(domain\user): Msg 208, Level 16, State 1, Line 2
Invalid object name 'sys.dm_os_volume_stats'.
sqldemo1\sql2008r2(domain\user): (2 row(s) affected)
Sample script:
select distinct
   vs.volume_mount_point,
   vs.file_system_type,
   drive_size_GB = convert(decimal(19,2), vs.total_bytes/1024./1024./1024.),
   drive_free_space_GB = convert(decimal(19,2), vs.available_bytes/1024./1024./1024.),
   drive_percent_free = convert(decimal(9,2), vs.available_bytes * 100.0 / vs.total_bytes)
from sys.master_files as f
cross apply sys.dm_os_volume_stats(f.database_id, f.file_id) as vs

I have confirmed this behavior in Microsoft SQL Server Management Studio 12.0.2000.8, 11.0.3128.0 and 10.50.4000.0.

Monday, February 24, 2014

Challenges with Downgrading TFS 2010 from SQL Enterprise to SQL Standard Edition

I recently had the challenge of migrating an installation of Microsoft Team Foundation Server 2010 from a shared SQL Server 2008 R2 Enterprise instance to its own SQL Server 2008 R2 Standard instance on a new server.

Because of some Enterprise edition-specific feature usage, this turned out to have two problems during the restoration:
  • Use of database compression on some TFS 2010 tables 
  • Use of perspectives in the TFS 2010 SSAS database "TFS_Analytics" 
Neither feature is available in Standard edition, and conveniently, the error messages when restoring the SQL or SSAS databases from Enterprise to Standard clearly indicated this. After making the client aware, the decision was made to try to remove these features from their TFS installation.

After removing these Enterprise features from copies of the databases, I was able to back up and restore the copied databases (during the migration outage) to the new server without any more edition failures.

Here's how:

Remove Compressed Indexes

You may encounter this error if you attempt to restore any database that uses data compression from Enterprise to Standard edition:

cannot be started in this edition of SQL Server because part or all of object 'foo' is enabled with data compression or vardecimal storage format

Here is a script to look through all tables for compressed partitions (either heaps or indexes) and REBUILD them with DATA_COMPRESSION = NONE. This obviously only works on Enterprise Edition of SQL 2008 or higher.

--Enable WITH (ONLINE = ON) if possible

Declare @sqltext nvarchar(1000), @tbname sysname, @index_id int, @index_name nvarchar(100)

Declare tbl_csr cursor
FOR
select name = '['+s.name+'].['+o.name+']', p.index_id, i.name
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
inner join sys.indexes i on i.index_id = p.index_id and i.object_id = o.object_id 
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Open tbl_csr

Fetch Next from tbl_csr into @tbname, @index_id, @index_name

 While (@@FETCH_STATUS=0)
 Begin

   If @index_id =0 
    begin
     --catches heaps
     set @sqltext=N'ALTER Table '+@tbname + N' REBUILD WITH (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild heap ' + @tbname 
    end
   else
    begin
     set @sqltext=N'ALTER INDEX ' + @index_name + ' ON '+@tbname + N' REBUILD WITH  (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild index ' + @tbname 
    end

   Fetch next from tbl_csr into @tbname, @index_id, @index_name

 End

Close tbl_csr
Deallocate tbl_csr

Below is a demo you can use to simulate the script as it finds clustered indexes, nonclustered indexes, and heaps to rebuild appropriately, while also ignoring XML indexes (which could present a problem if you take a blanket ALTER INDEX ALL ... REBUILD).
use adventureworks
go

--test lab
if not exists (select 1 from sys.schemas where name = 'testschema')
exec (N'create schema testschema')

go
if exists (select 1 from sys.objects where name = 'testfeature_index') 
drop table testschema.testfeature_index
go
create table testschema.testfeature_index (id int not null identity(1,1) primary key , bigint1 bigint not null, xml1 xml null)
insert into testschema.testfeature_index (bigint1) values (123456789123456789),(1234567891234567891),(1234567891234567892),(1234567891234567893)

go
set nocount on 
insert into testschema.testfeature_index (bigint1)
select bigint1+5 from testschema.testfeature_index 
go 10
set nocount off
alter index all on testschema.testfeature_index rebuild with (data_compression = page)
create nonclustered index idx_nc_testfeature1 on testschema.testfeature_index (bigint1) with (data_compression = page)
create primary xml index idx_nc_testfeaturexml1 on testschema.testfeature_index (xml1) 
create xml index idx_nc_testfeaturexml2 on testschema.testfeature_index (xml1)  USING XML INDEX idx_nc_testfeaturexml1 FOR PATH
go
if exists (select 1 from sys.objects where name = 'testfeature_heap') 
drop table testschema.testfeature_heap
go
create table testschema.testfeature_heap (id int not null identity(1,1)  , bigint1 bigint not null)
insert into testschema.testfeature_heap (bigint1) values (123456789123456789),(1234567891234567891),(1234567891234567892),(1234567891234567893)
go
set nocount on 
insert into testschema.testfeature_heap (bigint1)
select bigint1+5 from testschema.testfeature_heap 
go 10
set nocount off
go
alter table testschema.testfeature_heap rebuild  with (data_compression = PAGE)
create nonclustered index idx_nc_testfeature1 on testschema.testfeature_heap (bigint1) with (data_compression = page)

go

--Enable WITH (ONLINE = ON) if possible

select name = '['+s.name+'].['+o.name+']', case p.index_id when 0 then 'Heap' when 1 then 'Clustered Index' else 'Index' end
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'
go
Declare @sqltext nvarchar(1000), @tbname sysname, @index_id int, @index_name nvarchar(100)

Declare tbl_csr cursor
FOR
select name = '['+s.name+'].['+o.name+']', p.index_id, i.name
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
inner join sys.indexes i on i.index_id = p.index_id and i.object_id = o.object_id 
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Open tbl_csr

Fetch Next from tbl_csr into @tbname, @index_id, @index_name

 While (@@FETCH_STATUS=0)
 Begin

   If @index_id =0 
    begin
     --catches heaps
     set @sqltext=N'ALTER Table '+@tbname + N' REBUILD WITH (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild heap ' + @tbname 
    end
   else
    begin
     set @sqltext=N'ALTER INDEX ' + @index_name + ' ON '+@tbname + N' REBUILD WITH  (DATA_COMPRESSION=NONE)' --, ONLINE = ON
     exec sp_executesql @sqltext
     print 'rebuild index ' + @tbname 
    end

   Fetch next from tbl_csr into @tbname, @index_id, @index_name

 End

Close tbl_csr
Deallocate tbl_csr


go

select name = '['+s.name+'].['+o.name+']', case p.index_id when 0 then 'Heap' else 'Index' end
from sys.partitions p
inner join sys.objects o on o.object_id = p.object_id
inner join sys.schemas s on s.schema_id = o.schema_id
where p.data_compression <> 0
and o.type_desc <> 'INTERNAL_TABLE'

Remove Perspectives from SSAS Database

You may encounter this issue when restoring any SSAS database from Enterprise to Standard edition, not just in TFS.

Errors related to feature availability and configuration: The 'Perspectives' feature is not included in the '64 Bit Standard Edition' SKU.

The solution is multi-step but straightforward.

Here's a breakdown of the steps. The XMLA code to accomplish this will follow:

  1. Backup the SSAS database (TFS_Analytics) on the Enterprise SSAS instance.
  2. Restore the SSAS database with a new name (TFS_Analytics_std) to a new DbStorageLocation on the Standard SSAS instance.
  3. In Management Studio Object Explorer, script out the database as an ALTER statement. Find the <Perspectives> section of the code. (Note - "Perspectives" is plural.) Drag and select to the </Perspectives> tag. Be sure to capture all the <Perspective>...</Perspective> sections. Delete.
    • Easier way? Collapse the box to the left of the <Perspectives> tag. Select the collapsed line for the <Perspectives> tag. Delete.
  4. Execute the XMLA script. (Hit F5.) This will remove the perspectives from the database.
  5. Backup the TFS_Analytics_std database to a new location.
  6. Restore the backup of the TFS_Analytics_std database to the Standard Edition server, renaming the database back to TFS_Analytics.
Code examples below. Be aware that you may need to apply the <AllowOverwrite>true</AllowOverwrite> element to overwrite any .abf files during a backup, or databases during a restore. For safety reasons, this option has been set to false for these code examples.


  1. On the old Enterprise server, back up the SSAS database (TFS_Analytics).
    <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <Object>
      <DatabaseID>Tfs_Analysis</DatabaseID>
     </Object>
     <File>M:\MigrationBackups\TFS_analysis_migrationbackup_2014.abf</File>
     <AllowOverwrite>false</AllowOverwrite>
    </Backup> 
  2. On the old Enterprise server, restore the SSAS database with a new name (TFS_Analytics_std) to a new DbStorageLocation.
    <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <File>M:\migrationbackups\TFS_analysis_migrationbackup_2014.abf</File>
     <DatabaseName>TFS_Analysis_std</DatabaseName>
     <DbStorageLocation xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">m:\migrationbackups\</DbStorageLocation>
    </Restore>
    
  3. In Management Studio Object Explorer, script out the database as an ALTER statement.
  4. Find the <Perspectives> section of the code and remove it.
  5. Execute the XMLA script. (Hit F5.) This will remove the perspectives from the database.
  6. On the old Enterprise server, back up the TFS_Analytics_std database to a new location.
    <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>Tfs_Analysis_std</DatabaseID>
     </Object>
      <File>M:\MigrationBackups\TFS_analysis_migrationbackup_2014_std.abf</File>
    </Backup>
    
  7. Create a new XMLA script on the target Standard Edition server. Restore the backup of the TFS_Analytics_std database to the Standard Edition server, renaming the database back to "TFS_Analytics".

    If there are no other Enterprise-only features in use in the SSAS database, this backup should restore successfully.

    Note also that the restore here occurs across the wire, using the UNC path to a temporary folder share. The SSAS service account on the new Standard edition server must have permissions to view this folder share.
    <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
     <File>\\serverpathnamehere\MigrationBackups\TFS_analysis_migrationbackup_2014_std.abf</File>
     <DatabaseName>TFS_Analysis</DatabaseName>
     <AllowOverwrite>false</AllowOverwrite>
    </Restore>
    

Thursday, December 12, 2013

It's Dangerous to ISNUMERIC, Take This Try_Convert

There's an issue with some characters in T-SQL when validating numeric values out of raw varchar fields, and it can be easily solved with a function new to SQL Server 2012.

Consider this sample setup, where we are stuck importing values from a varchar data source into an integer data destination, a common task in heterogeneous ETL activities:
create table tablefoo
(id int identity(1,1) not null primary key,
foo varchar(20) not null)
go
--add some valid integers
insert into tablefoo (foo) values
(123),
(456)

--add some dollar values
insert into tablefoo (foo) values
('$123'),
('$456')

--add some alphanumeric strings
insert into tablefoo (foo) values
('abc123'),
('def456')

--add scientific notation
insert into tablefoo (foo) values
('1e234')
go
select * from tablefoo 

/*
id foo
1 123
2 456
3 $123
4 $456
5 abc123
6 def456
7 1e234
*/
Let's try to validate only proper integer values in tablefoo.foo for insertion into a column with a data type bigint.
--this fails! why?
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
The error message is
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to bigint.
Let's dig into the data case-by-case...
--this succeeds as designed, returning NULL for the alphanumeric value.
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = 'abc123'

--This fails because the dollar sign passes the ISNUMERIC check 
-- BUT is not valid for an int field 
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = '$123'

--This also fails because 1e234 is valid scientific notation 
-- BUT is not valid for an int field.
select 
      CASE   WHEN ISNUMERIC(c.foo) = 1 THEN CONVERT(bigint, c.foo ) ELSE NULL END
from tablefoo c
where foo = '1e234'
Our culprit was the curious behavior of certain characters (in this case, "$" and "e") which are valid for the ISNUMERIC built-in function, commonly used to detect numeric strings.

How to clean this up?
--this returns only integer values properly for the whole table
-- and is much prettier.
select 
 c.foo
, ISNUMERIC(c.foo) 
, TRY_CONVERT(bigint, c.foo )
from tablefoo c
Note the above values of ISNUMERIC and the error-less conversion of the TRY_CONVERT function. So not only is TRY_CONVERT downright sexy with its efficiency and compactness, it can help you avoid errors commonly encountered while using ISNUMERIC with characters like the dollar sign ($), decimal point (.), scientific notation (e) and comma (,).
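As a follow-on sketch, the same pattern filters out the junk when loading a numeric destination; tablefoo_clean here is a hypothetical bigint destination table, not part of the setup above.

--only rows that convert cleanly to bigint are loaded; '$123', 'abc123', and '1e234' are skipped
insert into tablefoo_clean (foo)
select TRY_CONVERT(bigint, c.foo)
from tablefoo c
where TRY_CONVERT(bigint, c.foo) is not null;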


Thursday, October 17, 2013

"SQL Server: Bolts to Buzzwords" presentation for the Hammond .NET User Group at Southeastern Louisiana University

One of my favorite groups to speak for (aside from the Baton Rouge SQL Server User Group) is the Hammond .NET User Group at Southeastern Louisiana University in Hammond, LA. A mostly undergraduate crowd of enthusiastic future developers (and perhaps some developers-turned-DBAs, like myself) make for a fun audience that asks great questions.

Here's my slidedeck for an introductory presentation on databases and SQL Server, perfect for a college crowd, most of whom hadn't yet completed a capstone project requiring a database solution. In fact, the first line of the slidedeck details the highly important lesson of how to pronounce "SQL" - as in, "sequel" NOT "ess kew el".

Here's the download link for the PowerPoint slidedeck

Friday, March 08, 2013

Conversation History: The Continuing Case Against GUIDs

We had a cooperative relational database design exercise at the office last week as part of our regular department "Lunch and Learn" series, and inevitably one topic that came up was the use of GUIDs as the unique key for a table.

And before long, I had been goaded onto my soapbox to deliver my hellfire-and-brimstone sermon against the use of GUIDs as the primary key, much less the clustered index, of a table. (If you haven't heard this, you need to attend more Baton Rouge SQL Server User Group meetings.)

The case for GUIDs traces back to an oil rig case study, an actual design example we (at Sparkhound) encountered here in the Gulf South. There are hundreds of oil rigs out in the middle of the Gulf of Mexico, each with a sometimes-available Internet connection and the need to sync back home to an onshore server. By using GUIDs, each server can write a unique key to the same table, so the argument goes, without any conflicts.

(This is not my complete soap-box against GUIDs.  Another day.)

Why not use a compound key of integers? You must still identify which oil rig is sending the data, so a rig identifier combined with a rig-side identity column would provide a compound primary key with guaranteed uniqueness at half the cost (4 bytes x 2) of a GUID (16 bytes). That storage space adds up, and has a big impact on nonclustered indexes, fragmentation, page splits, etc.
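A hypothetical sketch of that compound-key design (table and column names invented for illustration):

CREATE TABLE dbo.RigMeasurement (
    RigID int NOT NULL,                        -- identifies which oil rig sent the row
    MeasurementID int IDENTITY(1,1) NOT NULL,  -- generated rig-side
    MeasurementValue decimal(19,4) NOT NULL,
    CONSTRAINT PK_RigMeasurement PRIMARY KEY CLUSTERED (RigID, MeasurementID)  -- 8 bytes vs. 16 for a GUID
);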

After that lunch and learn ended, one of our bright new hires engaged me afterwards...

Thursday, July 12, 2012

SQL Server DBA Tool Kit

Right next to my batarang and bat-zip-line-hook-shooter-gun on my utility belt is my SQL Server DBA Tool Kit, full of scripts I've picked up along the way.

I take little credit for some of these, as I've picked them up from various sources.  I have modified them, for example, adding the ability to see the cached execution plan to MS PSS's "worst queries" query, or adding a blocking chain and cached execution plan to the typical dm_exec_sessions and dm_exec_requests queries out there to replace sp_who2.

There are some of my handwritten queries in there, for example the query to find the largest objects in the database and their compression state, a proof of sql_modules vs INFORMATION_SCHEMA.routines, and the "sessions and requests" with blocking chain query that I've presented on before and use daily.  Nothing novel or ground-breaking, but real practical utility queries I use personally as a SQL Server consultant.

I presented my toolkit last night to the Baton Rouge SQL Server User Group and it turned out to be one of our best meetings in a while - over an hour of solid, experience-based conversation about queries, best practices, and real experiences that was a wealth of information. Thanks to everyone who attended and helped make it a great meeting - it certainly wasn't all me and my fancy tool kit.

You can view my SQL toolbox on Github.


EDIT: 20140813 Updated toolbox link.
EDIT: 20140929 Updated toolbox link.
EDIT: 20170830 Updated toolbox link to include Github.

Thursday, September 02, 2010

Presentation on SQL DR Overview

Here's the link to the Lightning Round presentation I made to the Baton Rouge IT Professionals User Group on August 31. The end was a demo on how to set up basic maintenance plans in SQL Server Management Studio 2008 R2. I opened for a great presentation by Chris Eveler from Dell on EqualLogic servers!

Tuesday, August 10, 2010

Backup SQL with Powershell (long)

Had to set up backups for a SQL Express 2005 database. Why some vendors insist on creating software dependent on SQL Express, and then not giving their clients any way to back up said data, is mind-bogglingly stupid.

Here's a PowerShell script I wrote to accept a server name, database, and target backup folder.  It performs a single full backup when called. 

This script, which includes error handling, sub-folder creation, and more robust error reporting, is a lot more involved than any examples I found online; I had to amalgamate the knowledge of dozens of google bing searches. 

Some key syntax features of Powershell that I learned about and implemented here:
  1. The InnerException loop is the only way to get the actual SQL Error out.  Otherwise, you're just stuck with Error 3041, "BACKUP failed to complete the command", which is not informative at all.  This is probably the most critical part of this, the ability to get meaningful errors into a text file log in the event of a backup failure.
  2. The Test-Path verifies if the container exists, so that you can create a directory if it doesn't.
  3. The $SMOBackupObject.Database property wants a database name (string), not a Database object.  That really threw me off for an hour or so.  My fault, of course.
  4. I used the SilentlyContinue ErrorActionPreference so that the user would see a clean error message (in my if $error.count section) instead of a messy red error from PowerShell, which would not be informative anyway (see #1).
  5. I used both the Trap syntax (to access the Exception object) and the if $error.count, because I wanted to have it return something to the end user regardless. 
  6. The out-null parts also hide messy returns to the end-user.
  7. The $error.clear() is also pretty important, as it clears out the errors that had accumulated beforehand, if the user wants to try running the script again.  I'm not sure why this is necessary; it's not intuitive that errors persist like that.  Open to any advice.

    function SQLFullBackup ([string]$SQLServerName, [string]$SQLDBname, [string]$BackupFolder) 
    {  
     
        # SQL Server Backup
        # Executes a single FULL backup of a given database 
     
        $error.clear()
        $ErrorActionPreference = 'SilentlyContinue'
        $BackupDate = Get-Date -format yyyyMMddHHmmss
     
        # SPECIFY SQLServer\Instance name with this variable
        $SQLServer =  New-Object ("Microsoft.SqlServer.Management.Smo.Server") $SQLServerName
     
     
        Trap [Exception] {
            $err = $_.Exception
            while ( $err.InnerException )
                {
                $err = $err.InnerException
                $errorfullmessage += $err.Message + " "
                };
     
            $errorfullmessage | out-File ($BackupFolder + "\BackupHistory_Error_" + $SQLDBName + "_" `
                + $BackupDate + ".txt");
        }
     
        #Create the subfolder if it does not already exist.
        if ((Test-Path -path ($BackupFolder + "\" + $SQLDBName) -pathtype container) -ne $True)
        {
            New-Item -path $BackupFolder -name $SQLDBName -type directory | out-null
        }
     
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
     
        $SMOBackupObject = New-Object ("Microsoft.SqlServer.Management.Smo.Backup")
        $SMOBackupObject.Database = $SQLDBName
        $SMOBackupObject.Action = "Database" # File, Log
        $SMOBackupObject.Incremental = $false
        $SMOBackupObject.Devices.AddDevice($BackupFolder + "\" + $SQLDBName + "\" + $SQLDBName + "_" `
                + $BackupDate + ".bak", "File")
        $SMOBackupObject.SqlBackup($SQLServer) 
     
        if($error.count -gt 0)
        {   
            "Backup of database " + $SQLDBName+ " failed.  See log file in " + $BackupFolder + "\BackupHistory_Error_" `
            + $SQLDBName + "_" + $BackupDate + ".txt"
        }
                Else
        {
            "Backup of database " + $SQLDBName + " successful."
        }
     
    };
     
Here are the function calls, per database, for the above function:

SQLFullBackup ".\SQLEXPRESS" "express" "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup"
SQLFullBackup ".\SQLEXPRESS" "express2" "c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup"

Here's sample output of a successful call and a failed call.

PS C:\Windows\system32>C:\powershell_scripts\sqlfullbackup calls.ps1
Backup of database express successful.
Backup of database express2 failed.  See log file in
c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\backup\BackupHistory_Error_wexpress2_20100810094357.txt

Here are the files created:
BackupHistory_Error_express2_20100810094357.txt
\express\express_20100810094357.bak

And the error log actually contains an informative SQL error. In this case:

An exception occurred while executing a Transact-SQL statement or batch.
Could not locate entry in sysdatabases for database 'express2'. No entry found with that name.
Make sure that the name is entered correctly.
BACKUP DATABASE is terminating abnormally.

How did this DBA first get into Powershell? Here's a blog post about it.

Monday, April 05, 2010

TFS Sidekicks

I do a lot of DB Pro solutions and DBA code reviews at my office and on client sites. This free Sidekicks add-on for Visual Studio has been invaluable.

http://www.attrice.info/downloads/index.htm

Whenever I want to look at files included in a TFS changeset, or all files checked in to TFS for one work item, or a comparison across changesets, I find this Visual Studio plugin extremely handy.

For example, I use it for code reviews, to compare changes across all changesets for a given work item. It has a quick function to compare the latest version of a file to the latest version of the file before this work item. Ridiculously efficient; I've forgotten how I'd do this before.

Very quick install, low impact. I imagine it could be very handy for lots of different source control-related searches and diffs in Visual Studio that you just can't do with Visual Studio alone. Anyone else use it?

Sunday, February 21, 2010

A message on default constraints

One of my assignments as a consultant DBA for a local Fortune 500 company is to support development of a large project. I was in a meeting on naming conventions and best practices from a DBA standpoint with the development leads this past week when the practice of naming default constraints came up.

The benefit of providing user-defined names on your default constraints is that SQL won't create a standardized name with a random string at the end. That random string can be a real pain when it comes to database object comparison; plus, it looks ugly and isn't nearly as informative as a user-defined name could be.

Here's the email sent to the developers, who were unsure of the syntax and the deprecated behavior of default objects. Names have been changed to protect the innocent tables and columns.

You are correct, creating a stand-alone DEFAULT object is deprecated behavior and will not be supported. Creating the constraints in the CREATE TABLE script is the preferred method. Constraints can still be named, however.

CREATE TABLE [dbo].[TABLECD](
[CD] [char](1) NOT NULL DEFAULT ' ',
[id] [varchar](15) NOT NULL DEFAULT ' ',
[ADDUSER] [int] NOT NULL DEFAULT ((0)),
[ADDDATE] [datetime] NOT NULL DEFAULT GETDATE(),
[EDITUSER] [int] NOT NULL DEFAULT ((0)),
[EDITDATE] [datetime] NOT NULL DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEUSER] [int] NOT NULL DEFAULT ((0)),
[DELETEDATE] [datetime] NOT NULL DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEFLAG] [char](1) NOT NULL DEFAULT ' '
)

When the above script is run, below is what actually gets created. Notice the names generated.

DEFAULT on column ADDDATE     DF__TABLECD__ADDDA__59FA5E80   (getdate())
DEFAULT on column ADDUSER     DF__TABLECD__ADDUS__59063A47   ((0))
DEFAULT on column CD          DF__TABLECD__CD__571DF1D5      (' ')
DEFAULT on column DELETEUSER  DF__TABLECD__DELET__5CD6CB2B   ((0))
DEFAULT on column DELETEDATE  DF__TABLECD__DELET__5DCAEF64   (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column DELETEFLAG  DF__TABLECD__DELET__5EBF139D   (' ')
DEFAULT on column EDITDATE    DF__TABLECD__EDITD__5BE2A6F2   (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column EDITUSER    DF__TABLECD__EDITU__5AEE82B9   ((0))
DEFAULT on column id          DF__TABLECD__id__5812160E      (' ')

Here's a version of the script that sets user-provided constraint names:

CREATE TABLE [dbo].[TABLECD](
[CD] [char](1) NOT NULL CONSTRAINT [DF_TABLECD_CD] DEFAULT ' ',
[id] [varchar](15) NOT NULL CONSTRAINT [DF_TABLECD_id] DEFAULT ' ',
[ADDUSER] [int] NOT NULL CONSTRAINT [DF_TABLECD_ADDUSER] DEFAULT ((0)),
[ADDDATE] [datetime] NOT NULL CONSTRAINT [DF_TABLECD_ADDDATE] DEFAULT GETDATE(),
[EDITUSER] [int] NOT NULL CONSTRAINT [DF_TABLECD_EDITUSER] DEFAULT ((0)),
[EDITDATE] [datetime] NOT NULL CONSTRAINT [DF_TABLECD_EDITDATE] DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEUSER] [int] NOT NULL CONSTRAINT [DF_TABLECD_DELETEUSER] DEFAULT ((0)),
[DELETEDATE] [datetime] NOT NULL CONSTRAINT [DF_TABLECD_DELETEDATE] DEFAULT CONVERT(datetime, '1899/12/31'),
[DELETEFLAG] [char](1) NOT NULL CONSTRAINT [DF_TABLECD_DELETEFLAG] DEFAULT ' '
)

And here's the output. Notice the constraint names are not truncated or appended with random strings.

DEFAULT on column ADDDATE     DF_TABLECD_ADDDATE     (getdate())
DEFAULT on column ADDUSER     DF_TABLECD_ADDUSER     ((0))
DEFAULT on column CD          DF_TABLECD_CD          (' ')
DEFAULT on column DELETEDATE  DF_TABLECD_DELETEDATE  (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column DELETEFLAG  DF_TABLECD_DELETEFLAG  (' ')
DEFAULT on column DELETEUSER  DF_TABLECD_DELETEUSER  ((0))
DEFAULT on column EDITDATE    DF_TABLECD_EDITDATE    (CONVERT([datetime],'1899/12/31',0))
DEFAULT on column EDITUSER    DF_TABLECD_EDITUSER    ((0))
DEFAULT on column id          DF_TABLECD_id          (' ')


“There are no constraints on the human mind, no walls around the human spirit, no barriers to our progress except those we ourselves erect.” -Ronald Reagan