Friday, December 16, 2011

View execution plan for a sproc in the wild

Got stored procedure problems?  I feel sorry for you son.

Here's a quick query to pull the cached execution plan out of memory.

I emphasize cached because the plan isn't going to stay there forever.

I used this query to find the execution plan of a stored proc that had been running in a SQL job.  Instead of pulling down the sproc and trying to estimate the query plan, I chose to see what the cached plan looked like.
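
A minimal sketch of the kind of query I mean, assuming SQL Server 2008 or later; usp_MyProc is a hypothetical procedure name:

-- Pull the cached plan for a stored procedure out of memory.
-- The plan only shows up here while it is still in cache.
SELECT DatabaseName = DB_NAME(ps.database_id)
     , ProcName     = OBJECT_NAME(ps.object_id, ps.database_id)
     , ps.execution_count
     , ps.last_execution_time
     , QueryPlan    = qp.query_plan  -- click the XML in SSMS for the graphical plan
FROM sys.dm_exec_procedure_stats ps
CROSS APPLY sys.dm_exec_query_plan (ps.plan_handle) qp
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = 'usp_MyProc'  -- hypothetical name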

Open_transaction_count in sys.dm_exec_sessions

This news may be a few months old, but it is worth noting that SQL Server 2012 adds a column called open_transaction_count to sys.dm_exec_sessions.

This should do away with any need for you to use sys.sysprocesses for session information.
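
For example, a quick sketch of spotting sessions that are holding open transactions (SQL Server 2012 or later):

SELECT session_id, login_name, [status], open_transaction_count
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0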

Wednesday, October 19, 2011

Using sys.dm_exec_requests to find a blocking chain

A query that calls sys.dm_exec_requests and pulls the statement text out of the request is a handy script for any DBA's toolbox.  A simple version of the query performs a CROSS APPLY to dm_exec_sql_text and pulls out the request's in-progress TSQL statement using the Unicode character offsets.

This is simple enough, and to an extent it does a good job as a queryable replacement for sp_who2 and DBCC INPUTBUFFER.
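
A sketch of that simple version, with the column list trimmed for readability:

-- Current statement-in-progress for each request, via the Unicode offsets.
SELECT r.session_id
     , r.status
     , r.blocking_session_id
     , r.wait_type
     , stmt = SUBSTRING(est.[text], r.statement_start_offset/2 + 1,
              CASE WHEN r.statement_end_offset = -1
                   THEN LEN(CONVERT(nvarchar(max), est.[text]))
                   ELSE (r.statement_end_offset - r.statement_start_offset)/2 + 1
              END)
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text (r.sql_handle) est
WHERE r.session_id <> @@SPID  -- ignore myself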

In an environment that suffers from frequent spells of query blocking, the ability to identify which session is leading the way becomes important in finding the query or queries to troubleshoot.

Whether the end result of this query is a simple KILL command, and/or a walk down the hall to investigate the "business case", would be a situational decision.

I wanted a blocking chain - a list of all other spids that a session was blocking.

Here's a query that dumps the output of dm_exec_requests into a temporary table, then uses FOR XML PATH to conveniently transform the list of blocked sessions into a comma-delimited list in the row of the blocker.  The sort order is the length of this comma-delimited list, putting the cause of a pileup right at the top.

Doing this in a temp table was necessary because the blocking_session_id is a highly volatile column.  As a result, the output of this query is several ms behind "live".  After a few executions, the spid hanging out at the top is easy enough to spot.

Here's the script that this nightmare kind of scenario - a pileup of sessions behind a single blocker - prompted into existence:

 declare @showallspids bit 
 select @showallspids =1

 create table #ExecRequests  (
  id int IDENTITY(1,1) PRIMARY KEY
 , session_id smallint not null 
 , request_id int null
 , request_start_time datetime null
 , login_time datetime not null
 , login_name nvarchar(256) null
 , client_interface_name nvarchar(64)
 , session_status nvarchar(60) null
 , request_status nvarchar(60) null
 , command nvarchar(32) null
 , sql_handle varbinary(64) null
 , statement_start_offset int null
 , statement_end_offset int null
 , plan_handle varbinary (64) null
 , database_id smallint null
 , user_id int null
 , blocking_session_id smallint null
 , wait_type nvarchar (120) null
 , wait_time_s int null
 , wait_resource nvarchar(120) null
 , last_wait_type nvarchar(120) null
 , cpu_time_s int null
 , tot_time_s int null
 , reads bigint null
 , writes bigint null
 , logical_reads bigint null
 , [host_name] nvarchar(256) null
 , [program_name] nvarchar(256) null
 , blocking_these varchar(1000) NULL
 , percent_complete int null
 , session_transaction_isolation_level varchar(20) null
 , request_transaction_isolation_level varchar(20) null
 )

 insert into #ExecRequests (session_id,request_id, request_start_time, login_time, login_name, client_interface_name, session_status, request_status, command,sql_handle,statement_start_offset,statement_end_offset,plan_handle,database_id,user_id,blocking_session_id,wait_type,last_wait_type,wait_time_s,wait_resource,cpu_time_s,tot_time_s,reads,writes,logical_reads,[host_name], [program_name] , session_transaction_isolation_level , request_transaction_isolation_level )
       select s.session_id,request_id, r.start_time, s.login_time, s.login_name, s.client_interface_name, s.status, r.status,command,sql_handle,statement_start_offset,statement_end_offset,plan_handle,r.database_id,user_id,blocking_session_id,wait_type,r.last_wait_type, r.wait_time/1000.,r.wait_resource ,r.cpu_time/1000.,r.total_elapsed_time/1000.,r.reads,r.writes,r.logical_reads,s.[host_name], s.[program_name], s.transaction_isolation_level, r.transaction_isolation_level
 from sys.dm_exec_sessions s 
 left outer join sys.dm_exec_requests r on r.session_id = s.session_id
 where 1=1
 and s.session_id >= 50 --retrieve only user spids
 and s.session_id <> @@SPID --ignore myself
 and  (@showallspids = 1 or r.session_id is not null) 

 update #ExecRequests 
 set blocking_these = LEFT((select isnull(convert(varchar(5), er.session_id),'') + ', ' 
       from #ExecRequests er
       where er.blocking_session_id = isnull(#ExecRequests.session_id ,0)
       and er.blocking_session_id <> 0
       FOR XML PATH('') 
       ),1000)
 select * from 
 (
  select  
   timestamp = getdate()
  , r.session_id , r.host_name , r.program_name
  , r.session_status
  , r.request_status
  , r.blocking_these
  , blocked_by =  r.blocking_session_id
  , r.wait_type 
  , r.wait_resource
  , r.last_wait_type
  , DBName = db_name(r.database_id)
  , r.command
  , login_time
  , login_name
  , client_interface_name
  , request_start_time
  , r.tot_time_s, r.wait_time_s, r.cpu_time_s, r.reads, r.writes, r.logical_reads
  --, [fulltext] = est.[text]
  , offsettext = CASE WHEN r.statement_start_offset = 0 and r.statement_end_offset= 0 THEN NULL
         ELSE SUBSTRING (  est.[text]
             , r.statement_start_offset/2 + 1, 
              CASE WHEN r.statement_end_offset = -1 THEN LEN (CONVERT(nvarchar(max), est.[text])) 
               ELSE r.statement_end_offset/2 - r.statement_start_offset/2 + 1
              END )
       END
  , r.statement_start_offset, r.statement_end_offset
  , cacheobjtype = LEFT (p.cacheobjtype + ' (' + p.objtype + ')', 35)
  , QueryPlan  = qp.query_plan 
  , request_transaction_isolation_level = case request_transaction_isolation_level 
    when 0 then 'Unspecified'
    when 1 then 'ReadUncommitted'
    when 2 then 'ReadCommitted'
    when 3 then 'Repeatable'
    when 4 then 'Serializable'
    when 5 then 'Snapshot' end 
  , session_transaction_isolation_level = case session_transaction_isolation_level 
    when 0 then 'Unspecified'
    when 1 then 'ReadUncommitted'
    when 2 then 'ReadCommitted'
    when 3 then 'Repeatable'
    when 4 then 'Serializable'
    when 5 then 'Snapshot' end 
  , p.plan_handle
  , stat.execution_count, stat.total_worker_time, stat.last_worker_time, stat.total_elapsed_time, stat.last_elapsed_time, stat.total_physical_reads, stat.total_logical_writes, stat.total_logical_reads
  --next two lines are SQL 2012 only!
  --, stat.total_rows, stat.last_rows
  from #ExecRequests r
  LEFT OUTER JOIN sys.dm_exec_cached_plans p ON p.plan_handle = r.plan_handle 
  OUTER APPLY sys.dm_exec_query_plan (r.plan_handle) qp
  OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) est
  LEFT OUTER JOIN sys.dm_exec_query_stats stat on stat.plan_handle = r.plan_handle
  and r.statement_start_offset = stat.statement_start_offset  
  and r.statement_end_offset = stat.statement_end_offset
  
 
 ) a
 order by LEN(blocking_these) desc, blocking_these desc, blocked_by desc, session_id asc
 go
 drop table #ExecRequests  

Updated 20140813: latest tweaks and changes.
Updated 20140929: latest tweaks and changes.

Saturday, October 15, 2011

Slides and Sample code: SQL Server Admin Best Practices with DMV's from the Houston TechFest 2011

Here's my official blog page for my presentation on SQL Server Admin Best Practices with DMV's from the Houston TechFest 2011 on October 15.  Thanks for attending my session as part of a great event put on by Michael and the entire Houston TechFest team.  The University of Houston was a great host for the event!

Thanks also to all the sponsors and attendees.  In the speaker room I heard some discussion of HTF's decision to have back-to-back keynotes in the morning, not starting the actual sessions until 11AM and then breaking for lunch at noon.  I have to agree with what I heard - the vendors paid the money to put this free event on, so why not give them the freshest audience?  As a veteran of three SQL Saturdays in Baton Rouge, I have to admit it was an uncommon but inventive scheduling decision.  If it increases vendor satisfaction and retention for next year's event, nobody should mind, especially the attendees.

Download slides and sample code here.

You can give me a rating on SpeakerRate here.

Monday, October 10, 2011

Resource Governor clamps down on Greedy Reports

One of the things I run into every now and then is this provincial idea that reports can and should be read straight out of the transactional database.  From an architectural standpoint this is of course flawed at best, and at worst disastrous to the performance of both the read/write transactional system and the read-intensive reporting system.

At scale, architectural decisions to point heavy-duty reports - even reports that run nightly, monthly or annually - at the transactional database can be paralyzing to an enterprise.

Using various tools like replication, ETL via SSIS, ETL via SSIS with CDC, database mirroring, or snapshots to provide an off-site, contention-free reporting environment is an elementary architectural decision.  Simply put - don't mix your read/writes with your heavy reads.

But what if you're stuck with a host of hard-hitting reports running against your application database?

Long-term, look into providing the business a viable reporting alternative - a data warehouse with slowly changing dimensions for historical trending, Analysis Services cubes, self-service BI via PowerPivot...

Short-term, take some aggressive steps like using the SQL Server Resource Governor... within reason.  Introduced with SQL 2008, this tool lets you wield tyrannical power over connections.  Instead of using it as a punitive hammer on your offending reports, or as a wide-sweeping penalizing stroke, use the Resource Governor as an insulator for your key apps.  Take the approach of protecting the application's read/writes from the reports' massive reads and the resulting resource stranglehold.

Take this configuration for example.  This is a step-by-step on how to set up Resource Governor.  Note a large number of commented-out example configuration options.



use master
go
--http://msdn.microsoft.com/en-us/library/bb933944(v=SQL.100).aspx
CREATE WORKLOAD GROUP GovGroupReports;
go
CREATE FUNCTION dbo.fnResourceGovernorClassifier() RETURNS sysname 
WITH SCHEMABINDING
AS
BEGIN
      -- Note that any request that does not get classified goes into the 'default' group.
      -- This function is run in the context of the new connection.
      --http://msdn.microsoft.com/en-us/library/bb933865.aspx
     
      DECLARE @grp_name sysname
     
      IF (
                  SUSER_SNAME() IN ('ReportUser')
            AND IS_MEMBER ('db_datareader') = 1
            --AND HOST_NAME() IN ('ReportingServer')
            --AND SUSER_SNAME() IN ('ReportUser')
            --AND IS_SRVROLEMEMBER('bulkadmin') = 1
            --AND IS_MEMBER ('DOMAIN\READONLYGROUP') = 1
            --AND APP_NAME() LIKE 'Report Server%'
      )
      BEGIN
            SET @grp_name = 'GovGroupReports';
      END

      RETURN @grp_name

END;
GO
-- Register the classifier function with Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= dbo.fnResourceGovernorClassifier);
--ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION= null);
GO
CREATE RESOURCE POOL GovPoolMaxMemory
WITH (--http://msdn.microsoft.com/en-us/library/bb934024.aspx
            MAX_MEMORY_PERCENT = 75
      --,   MIN_CPU_PERCENT = 0
      --,   MAX_CPU_PERCENT = 75
      --,   MIN_MEMORY_PERCENT = 0
);
GO
ALTER WORKLOAD GROUP GovGroupReports
WITH (--http://msdn.microsoft.com/en-us/library/bb895361.aspx
            MAX_DOP = 1
      --,   IMPORTANCE = { LOW | MEDIUM | HIGH }
      --,   REQUEST_MAX_MEMORY_GRANT_PERCENT = 25
      --,   REQUEST_MAX_CPU_TIME_SEC = 600
      --,   REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 600
      --,   GROUP_MAX_REQUESTS = 50

)
USING GovPoolMaxMemory
GO

-- Start Resource Governor
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

--Just in case
--ALTER RESOURCE GOVERNOR DISABLE

What we've accomplished with this Resource Governor implementation is to cap the connections we've identified by their use of the ReportUser login.  We could identify these connections a number of different ways.

Remember that connections we don't catch will always fall through the function into the default group, where the Resource Governor leaves them effectively unrestricted.  Only groups we name will be directly affected.

The Resource Pool GovPoolMaxMemory we've created is pretty self-explanatory - to simplify, it will cap the usage of all connections in the pool to 75% of the SQL Server's available memory.  This is probably still very generous - we may want to make sure that other connections (the transactional application) will have more than 25% of all available memory.  This strategy does prevent reports from hogging ALL of the memory, and remember that is our strategy - to protect, not to punish.

Similarly, the workload group we've created will restrict queries to one degree of parallelism.  MAXDOP is a topic for another blog post; I recommend reading up on it.  In many cases, this restriction can reduce CXPACKET wait types, further protecting the server from being crushed under the load of the reports.

After the governor is configured, use the query below to verify that new connections are dropping into the new bucket:  

SELECT
            es.session_id
      ,     es.login_time
      ,     es.login_name
      ,     [Group Name]            = wg.name
      ,     [Host Name]             = es.host_name
      ,     Application             = es.program_name
      ,     es.client_interface_name
      ,     es.memory_usage
      ,     es.total_elapsed_time
      ,     Pool_Max_Memory_kb      = rp.max_memory_kb
      ,     Pool_Used_Memory_kb     = rp.used_memory_kb
--SELECT *
FROM
      sys.dm_resource_governor_workload_groups wg
INNER JOIN
      sys.dm_exec_sessions es
ON es.group_id = wg.group_id
LEFT OUTER JOIN
      sys.dm_resource_governor_resource_pools rp
on rp.pool_id = wg.pool_id
WHERE
      wg.group_id > 2 --look only at connections being governed


"Serve the public trust, protect the innocent, uphold the law." -RoboCop

Thursday, October 06, 2011

Houston TechFest UG Leadership Summit

At the Houston TechFest 2011 Conference at the University of Houston I will be hosting a User Group Leadership Summit at NOON as part of the "Advanced" track.

I invite anyone currently involved with a local .net, IT Pro, SQL Server, Sharepoint, or any other user group, in any region, to attend.

We will have an open forum for discussion with voted topics; I will merely introduce myself and proctor, nothing more.

This will be my second year going to the Houston TechFest, and last year (though I spoke in the last slot - some time in the early evening, in a hot, dim, sleepy room) was a blast.  I was very impressed with the crowd, the speakers, and the vendors, and I am looking forward to seeing them all again.  As part of the committee that runs SQL Saturday Baton Rouge each year (#17, #28, #64 and again in 2012), I took some pointers from it.  HTF definitely covers a wider diversity of technologies and topics, but nothing a SQL Saturday can't hope to match.

I will be making the road trip to Houston with more than a dozen of my coworkers for the event, and I will be joined in speaker shirts by three of them: Ryan Richard, Mike Huguet and Kerry Huguet.  I will be presenting on "SQL Server Best Practices with DMVs" at 2:30pm in the SQL Server track.

Friday, September 30, 2011

New website, exciting future for BRSSUG

Just wanted to push out an announcement that the Baton Rouge SQL Server User Group has a new website.

www.brssug.org

We've got a great few months ahead of us with our buddies at the BR .net User Group!

October's meeting is on the topic of Change Data Capture (CDC) as a real-world replacement for transactional replication, presented by one of my colleagues Matt Maddox from Lamar Advertising.  November will feature Microsoft's own Patrick Leblanc talking about one of the many new features of SQL Denali.  December's meeting will be replaced by a User Group social.  In January, I'm very excited to host Devlin Liles, who will speak to the joint audience of BRSSUG and BRDNUG.

BRSSUG is coming off a 20-person meeting and a very applied, original talk on Data Mining in SQL Analysis Services by Sona Pal, a recent graduate of LSU.  We're definitely on the upswing!

Monday, September 12, 2011

Striped backups in SQL Server

Some DBA's don't realize that SQL Server can perform a database backup that breaks up the files - stripes them - in a single command.  SQL can stripe the backup into as many as 64 files, creating a multi-file backup set.
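
A sketch of the syntax - the file paths here are hypothetical, and keep in mind that every stripe is required in order to restore the set:

-- One BACKUP command, three stripes; SQL Server writes to all files simultaneously.
BACKUP DATABASE MyDatabase
TO  DISK = 'D:\Backup\MyDatabase_1.bak'
  , DISK = 'E:\Backup\MyDatabase_2.bak'
  , DISK = 'F:\Backup\MyDatabase_3.bak'
WITH CHECKSUM, STATS = 10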

Friday, September 02, 2011

Moving the SQL replication snapshot folder

I recently addressed an issue where a client server's data volume was getting dangerously tight on free space.  It contained data files (.mdf's) as well as the replication snapshot folder, which hosted dozens of publications.

Wednesday, August 24, 2011

Moving Heaps to another Filegroup

For large databases on a server with access to multiple physical volumes (more spindles) it is a good practice to move large objects out of the PRIMARY filegroup.
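
For a heap, one common approach is the create-and-drop clustered index trick.  A sketch, with hypothetical table, column and filegroup names:

-- Creating a clustered index ON the target filegroup physically moves the heap's pages;
-- dropping the index afterward returns the table to a heap, now living on SECONDARY_FG.
CREATE CLUSTERED INDEX IX_MoveHeap ON dbo.MyHeapTable (SomeColumn)
ON [SECONDARY_FG]
GO
DROP INDEX IX_MoveHeap ON dbo.MyHeapTable
GO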

Monday, August 08, 2011

Info and Usage of new COLUMNSTORE indexes

COLUMNSTORE indexes are one of the new features of SQL Denali, which is currently in CTP3 and available for public consumption.

COLUMNSTORE indexes are nonclustered indexes that store each column's data by itself, in its own set of pages - a strategy that makes certain types of isolated access to that data more efficient.  This cost benefit scales with massive rowcounts.

COLUMNSTORE indexes are intended for data warehouses or dedicated OLTP reporting systems, NOT traditional OLTP applications.  There is a staggering limitation in the use of COLUMNSTORE indexes that forces your hand when considering where to use them:

COLUMNSTORE indexes make your table read-only
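
A sketch of the syntax as of Denali CTP3 - the table and column names here are hypothetical:

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (SaleDate, StoreID, ProductID, Quantity, Amount)
GO
-- From this point on, INSERT/UPDATE/DELETE against dbo.FactSales will fail
-- until the columnstore index is disabled or dropped.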

Sunday, August 07, 2011

SQL Server Maintenance Best Practices with DMV's - SQL Saturday 64

First off, big thanks to the 375+ people who showed up for SQL Saturday 64 in Baton Rouge on August 6. It was a fantastic event.

Major thank-you's to Mike Huguet, Laurie Firmin, Thomas LeBlanc, Mark Verret, Stacy Vicknair, Justin Yesso, all our volunteers, and the great sponsors we had both local and nationwide.

A final thank you to our speakers: you were part of one of the biggest and most accomplished lineups of SQL and .net speakers I've ever seen at a SQL Saturday.  I don't want to name-drop, but check out the schedule for all the recognizable faces from the user community at large.  WOW.

My employer Sparkhound had a whopping six speakers at the conference: Stacy Vicknair (VS Lightswitch), Justin Yesso (SSRS for .net developers), Ryan Richard (MVVM), Mike Huguet (Open XML), Victor Chataboon (Sharepoint admin) and myself.  This is just a sample of the capable, communicative and competent coworkers I have the pleasure of working with.

Here's the link to download the slidedeck and all sample scripts (including the few that we skipped) from my presentation on SQL Admin Best Practices with DMV's at SQL Saturday #64 yesterday in Baton Rouge:

Download scripts here

Sunday, July 31, 2011

SQLSaturday Baton Rouge #64


Free SQL Server and .Net training!  Chance to win great prizes!  What more could a SQL Server Professional ask for? If this sounds good to you then don’t miss your opportunity to attend SQL Saturday! #64, the largest FREE training event dedicated exclusively to SQL Server, .Net, Development and Business Intelligence in Louisiana.
SQL Saturday! #64? What is it? When is it? Where is it? How do I get involved? Let me fill you in.
What:
An all-day FREE training event with SQL Server and .Net sessions spread out over six tracks covering Business Intelligence, Database Administration, SQL Development, and .Net.
When:
Saturday, August 6, 2011. Online registration is now open, but it is filling up fast so reserve your spot now. Attendee check-in will begin at 7:00am until 8:45am with opening comments from 8:30am to 9:00am and the first sessions beginning at 9:15am. A full list of session tracks and schedule is available.
Where:
Louisiana State University, College of Business
1109 Patrick F. Taylor Hall
We’ve posted a Google map on the Location page, but just in case here is the location and text directions!
Link to directions: http://www.eng.lsu.edu/misc/directions.html
Lodging:
The Cook Hotel
3848 West Lakeshore Drive
Baton Rouge, LA  70808
Call:  (225) 383 2665
Note:  Mention SQL Saturday when making reservations for event rates.  All reservations must be received before 7/13/2011.
www.thecookhotel.com
Hotel Map: link
How: All you have to do to get involved is register online to reserve your spot. Show up and attend the sessions that you like.
So what are you waiting for? Register today to be a part of SQL Saturday! #64!
SQL Saturday! #64 is presented by the Baton Rouge Chapter of the Professional Association for SQL Server (PASS) and brought to you by these sponsors.
Twitter info:
The event hash tag is #sqlsat64.  Follow the official SQL Saturday Baton Rouge twitter account at @SQLSatBR!  The Baton Rouge chapter of PASS tweets at @BRSSUG.

Thursday, July 14, 2011

Finding Missing Indexes in Your Ill-Begotten Databases

Last night I presented at (and won!) the Baton Rouge SQL Server and .net User Groups Speaker Idol Competition Summer 2011.  My topic was a humor-ridden presentation of nonclustered indexing strategies for SQL Server and how indexes impact application performance, geared towards a majority-developer audience.

Thanks to all who attended and voted, and a big congrats on a job well done to the other contenders!

Click here to download the slidedeck and sample scripts, in a zip file.

Monday, June 06, 2011

Forward Engineering from Visio 2010

As we noted in a previous blog post, the lack of a so-called "enterprise architect" edition in versions of Visio since 2007 is a pain.  The Visio 2007 solution is decent - though it requires a specific version of .net 1.1, XP compatibility mode, and some custom .xslt.  Visio 2010 has a great third-party Forward Engineering solution from Alberto Ferrari, which is really, really, really easy.

Install the codeplex project, install .net 4.0 if you haven't already, activate the add-on (not an obvious step) and you're done.  It integrates into the Office 2010 ribbon and you're a couple clicks away.

Here's the project: http://forwardengineer.codeplex.com/

Unfortunately, this solution also doesn't pick up the "notes" for Visio shapes, which would be slick if created as Extended Properties, but I can't complain.

"All truths are easy to understand once they are discovered; the point is to discover them." -Galileo Galilei

Saturday, June 04, 2011

DMV Overview Presentation from SQL Saturday #77 Pensacola 2011

Here's the link to a .zip file of my slidedeck and all sample demo files from today's SQL Saturday Pensacola.  This is my DMV Overview presentation from room 2171 at 1:45pm. 

Download here

Thursday, May 19, 2011

New Features of SQL 2008 R2 SP1 CTP Presentation at the Baton Rouge SQL Server User Group

Here's my slide deck and three sample .sql files from the May 2011 Baton Rouge SQL Server User Group meeting where I presented on the new features of SQL 2008 R2 SP1 CTP.  Thanks for all who attended, see you next time!

Wednesday, April 13, 2011

Lightning Round - SQL Azure Overview

Here is the slidedeck for my April 13 presentation at the joint Baton Rouge SQL Server and .net User Group meeting on the campus of LSU.

Friday, March 18, 2011

Float like an approximation; stings like a bee

I distinctly remember having to tell my CIO and a VP of Finances in 2005 that the reason their reports were off by one cent from the ledger was because the person who designed their reporting database had chosen to use the float datatype instead of the decimal datatype.  The implications of that one cent variation were astounding to me, but I wasn't a business finances guy.
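
Here's a quick sketch of the kind of penny drift involved.  Float is a binary approximation, so repeatedly accumulating a value like 0.01 drifts, where decimal stays exact (the loop and datatypes here are just for illustration):

DECLARE @f float, @d decimal(19,4), @i int
SELECT @f = 0, @d = 0, @i = 0
WHILE @i < 10000
BEGIN
    SET @f = @f + 0.01  -- 0.01 has no exact binary representation
    SET @d = @d + 0.01  -- exact decimal arithmetic
    SET @i = @i + 1
END
SELECT FloatTotal = @f, DecimalTotal = @d
-- FloatTotal lands slightly off of 100; DecimalTotal is exactly 100.0000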

Thursday, March 17, 2011

Slidedeck: Using SSRS 2008 R2 to natively report from Sharepoint lists

Included in the below file is the slide deck from my lightning round presentation at the Baton Rouge SQL Server User Group on March 16, 2011 on using SSRS 2008 R2 to natively report from Sharepoint lists.  Thanks for attending!

Click here to download (PPT 1mb)

Thursday, March 03, 2011

TSQL: Get the first date of a quarter

Here’s a script to find the first calendar date in the quarter, given any date.

So, given March 3, 2011, we want to return January 1, 2011, the first date of Quarter 1.
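
The heart of it is the classic DATEADD/DATEDIFF rounding trick - a minimal sketch:

DECLARE @AnyDate datetime
SET @AnyDate = '20110303'
-- Count the whole quarters between the base date (day 0) and @AnyDate,
-- then add that many quarters back to day 0: the result snaps to the
-- first day of @AnyDate's quarter.
SELECT FirstDateOfQuarter = DATEADD(qq, DATEDIFF(qq, 0, @AnyDate), 0)
-- Returns 2011-01-01 00:00:00.000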

Friday, February 18, 2011

"Heterogeneous queries and use of OLEDB providers are not supported in fiber mode."

This is a classic.


I've seen this in environments where someone is trying desperately to resolve SQL Server performance issues.  Out of the blue, linked server/DTC queries will begin to fail with the following error message:


Heterogeneous queries and use of OLEDB providers are not supported in fiber mode.
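
The cause is the "lightweight pooling" (fiber mode) server option, typically switched on in that desperate bid for performance.  A sketch of checking and reverting it - note that the change only takes effect after a service restart:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'lightweight pooling'     -- run_value = 1 means fiber mode is on
EXEC sp_configure 'lightweight pooling', 0  -- turn it off
RECONFIGURE
GO
-- Restart the SQL Server service for fiber mode to actually go away.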

Monday, February 14, 2011

Using a DDL trigger to push changes over a linked server

Here's a script I put together to automatically propagate schema changes to keep two SQL Server databases in sync using a DDL trigger.  This could just as easily be applied to two databases connected by linked server connections.

This isn't the only solution for schema-only change propagation; I'll have a blog post on that later and update this one when I do.
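
A minimal sketch of the idea - the server name, database name and event list here are hypothetical, and the linked server needs RPC Out enabled:

CREATE TRIGGER trg_PropagateDDL
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
BEGIN
    -- Capture the DDL statement that fired this trigger...
    DECLARE @eventData xml, @sql nvarchar(max)
    SET @eventData = EVENTDATA()
    SET @sql = @eventData.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')

    -- ...and replay the same statement on the remote database over the linked server.
    EXEC [RemoteServer].[RemoteDatabase].sys.sp_executesql @sql
END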

Monday, January 31, 2011

Deadlock info from XEvents XML

Here's a handy script I developed to take advantage of the built-in XEvents System Health session in SQL 2008.

XEvents is the new infrastructure for diagnostics data collection, set to usurp SQL Trace in utility for this purpose.  XEvents are already up and running in SQL 2k8 and 2k8R2, but without an official GUI.  The Denali RCs currently sport an XEvents interface in Management Studio, which will officially get the ball rolling.

Right now though, you can still use XEvents, and the pre-packaged System Health trace session that is running by default.

Here's a great way to get some fairly-easy to read (for XML) diagnostic info on deadlocks that happened on your server... without enabling a Trace flag or a SQL Trace.

declare @xml xml

select @xml = CAST(t.target_data as xml)
from sys.dm_xe_session_targets t
inner join sys.dm_xe_sessions s
    on s.address = t.event_session_address
inner join sys.server_event_sessions es
    on s.name = es.name
where s.name = 'system_health'
and t.target_name = 'ring_buffer'

select @xml.query('/RingBufferTarget/event[@name="xml_deadlock_report"]')

You can clearly see the victim and its blocker, the InputBuffer strings for each (which won't see through a stored procedure, unfortunately) and the same stuff you'd be used to seeing in a Trace - spid, hostname, clientapp, date/time.  Use any XML viewer to view the output, including SSMS's native ability to view XML.

Sunday, January 30, 2011

Slides and sample code from DMV Overview presentation at SQL Saturday Houston, January 2011

Thanks to a great room at 8:30AM at SQL Saturday Houston in Bammel, TX on January 29, 2011.

I had to leave town before lunch but it was a pleasure meeting you all.

Click here to download a .zip file with the slidedeck and all sample files - even the ones we didn't get to during the presentation.

Please hit me with a rating at SpeakerRate: http://speakerrate.com/speakers/11375

Wednesday, January 26, 2011

"The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020."

I know this doesn't make a lot of sense on its own, but from what I can tell, an OLE DB Destination in Fast Load mode can fail if an unconnected ADO NET destination in the same Data Flow also fails.

Got this error in SSIS with an OLE DB Destination:
"The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020."
This error only popped up when an unrelated transformation in the same data flow had an error.  I stopped receiving this error when I changed from "Table or View - fast load" to "Table or View".

That's the short of it.  Here's more:

This is a puzzling error, apparently caused by an error in a transformation with an ADO NET Destination that runs concurrently and in parallel - that is, unconnected - to my OLE DB Destination.

I don't really have the time to figure out all the different permutations that allow this error to keep coming up, but it's safe to say it isn't really an error in its own right... it seems to be caused by other errors.  The important thing to note is that an error in one transformation was able to cause an unrelated transformation to fail.  This can be very misleading when trying to debug a screen full of red blocks.

The error that came up was a run-of-the-mill, aggravating data type error caused by SSIS being picky -
[SLRP ADO [4902]] Error: An exception has occurred during data insertion, the message returned from the provider is: The given value of type String from the data source cannot be converted to type nvarchar of the specified target column.
I'll keep an eye out for more stuff like this in SSIS; open to feedback.