Friday, December 16, 2011
Here's a quick query to pull the cached execution plan out of memory.
I italicized "cached" because it's not going to stay there forever.
I used this query to find the execution plan of a stored proc that had been running in a SQL job. Instead of pulling down the sproc and trying to estimate the query plan, I chose to see what the cached plan looked like.
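The query itself didn't survive the formatting here, but a minimal sketch of pulling a cached plan looks like the following. The procedure name "MyStoredProc" is a placeholder; substitute the name of the sproc you're hunting for.

```sql
-- Hedged sketch: find the cached plan for a stored procedure by name.
-- "MyStoredProc" is a placeholder - substitute your procedure's name.
SELECT  st.[text]
,       qp.query_plan   -- click the XML in SSMS to open the graphical plan
,       cp.usecounts
,       cp.cacheobjtype
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text (cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan (cp.plan_handle) qp
WHERE st.[text] LIKE '%MyStoredProc%';
```

Remember that a plan only shows up here while it's still in cache - a restart, memory pressure, or a recompile can evict it.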
This should eliminate any need for you to use sys.sysprocesses for session information.
Wednesday, October 19, 2011
This is simple enough and does a good job as a queryable sp_who2 and DBCC INPUTBUFFER replacement to an extent.
In an environment that suffers from frequent spells of query blocking, the ability to identify which query is at the head of the chain becomes important in finding the query or queries to troubleshoot.
Whether the end result of this query is a simple KILL command, and/or a walk down the hall to investigate the "business case", is a situational decision.
I wanted a blocking chain - a list of all other spids that a session was blocking.
Here's a query that dumps the output of dm_exec_requests into a temporary table, then uses FOR XML PATH to conveniently transform the blocked_by list into a comma-delimited list in the row of the blocker. The sort order is the length of this comma-delimited list, putting the cause of a pileup right at the top.
Doing this in a temp table was necessary because the blocking_session_id is a highly volatile column. As a result, the output of this query is several ms behind "live". After a few executions, the spid hanging out at the top is easy enough to spot.
Here's an example of the nightmare kind of scenario that prompted this script into existence:
declare @showallspids bit
select @showallspids = 1

create table #ExecRequests (
  id int IDENTITY(1,1) PRIMARY KEY
, session_id smallint not null
, request_id int null
, request_start_time datetime null
, login_time datetime not null
, login_name nvarchar(256) null
, client_interface_name nvarchar(64)
, session_status nvarchar(60) null
, request_status nvarchar(60) null
, command nvarchar(32) null
, sql_handle varbinary(64) null
, statement_start_offset int null
, statement_end_offset int null
, plan_handle varbinary(64) null
, database_id smallint null
, user_id int null
, blocking_session_id smallint null
, wait_type nvarchar(120) null
, wait_time_s int null
, wait_resource nvarchar(120) null
, last_wait_type nvarchar(120) null
, cpu_time_s int null
, tot_time_s int null
, reads bigint null
, writes bigint null
, logical_reads bigint null
, [host_name] nvarchar(256) null
, [program_name] nvarchar(256) null
, blocking_these varchar(1000) NULL
, percent_complete int null
, session_transaction_isolation_level varchar(20) null
, request_transaction_isolation_level varchar(20) null
)

insert into #ExecRequests
(session_id, request_id, request_start_time, login_time, login_name, client_interface_name, session_status, request_status, command, sql_handle, statement_start_offset, statement_end_offset, plan_handle, database_id, user_id, blocking_session_id, wait_type, last_wait_type, wait_time_s, wait_resource, cpu_time_s, tot_time_s, reads, writes, logical_reads, [host_name], [program_name], session_transaction_isolation_level, request_transaction_isolation_level)
select s.session_id, request_id, r.start_time, s.login_time, s.login_name, s.client_interface_name, s.status, r.status, command, sql_handle, statement_start_offset, statement_end_offset, plan_handle, r.database_id, user_id, blocking_session_id, wait_type, r.last_wait_type, r.wait_time/1000., r.wait_resource, r.cpu_time/1000., r.total_elapsed_time/1000., r.reads, r.writes, r.logical_reads, s.[host_name], s.[program_name], s.transaction_isolation_level, r.transaction_isolation_level
from sys.dm_exec_sessions s
left outer join sys.dm_exec_requests r on r.session_id = s.session_id
where 1=1
and s.session_id >= 50 --retrieve only user spids
and s.session_id <> @@SPID --ignore myself
and (@showallspids = 1 or r.session_id is not null)

update #ExecRequests
set blocking_these = LEFT((select isnull(convert(varchar(5), er.session_id),'') + ', '
                           from #ExecRequests er
                           where er.blocking_session_id = isnull(#ExecRequests.session_id, 0)
                           and er.blocking_session_id <> 0
                           FOR XML PATH('')
                          ), 1000)

select * from (
  select timestamp = getdate()
  , r.session_id
  , r.host_name
  , r.program_name
  , r.session_status
  , r.request_status
  , r.blocking_these
  , blocked_by = r.blocking_session_id
  , r.wait_type
  , r.wait_resource
  , r.last_wait_type
  , DBName = db_name(r.database_id)
  , r.command
  , login_time
  , login_name
  , client_interface_name
  , request_start_time
  , r.tot_time_s, r.wait_time_s, r.cpu_time_s, r.reads, r.writes, r.logical_reads
  --, [fulltext] = est.[text]
  , offsettext = CASE WHEN r.statement_start_offset = 0 and r.statement_end_offset = 0 THEN NULL
                      ELSE SUBSTRING ( est.[text]
                                     , r.statement_start_offset/2 + 1
                                     , CASE WHEN r.statement_end_offset = -1
                                            THEN LEN (CONVERT(nvarchar(max), est.[text]))
                                            ELSE r.statement_end_offset/2 - r.statement_start_offset/2 + 1
                                       END )
                 END
  , r.statement_start_offset, r.statement_end_offset
  , cacheobjtype = LEFT (p.cacheobjtype + ' (' + p.objtype + ')', 35)
  , QueryPlan = qp.query_plan
  , request_transaction_isolation_level = case request_transaction_isolation_level
        when 0 then 'Unspecified' when 1 then 'ReadUncommitted' when 2 then 'ReadCommitted'
        when 3 then 'Repeatable' when 4 then 'Serializable' when 5 then 'Snapshot' end
  , session_transaction_isolation_level = case session_transaction_isolation_level
        when 0 then 'Unspecified' when 1 then 'ReadUncommitted' when 2 then 'ReadCommitted'
        when 3 then 'Repeatable' when 4 then 'Serializable' when 5 then 'Snapshot' end
  , p.plan_handle
  , stat.execution_count, stat.total_worker_time, stat.last_worker_time, stat.total_elapsed_time, stat.last_elapsed_time, stat.total_physical_reads, stat.total_logical_writes, stat.total_logical_reads
  --next two lines are SQL 2012 only!
  --, stat.total_rows, stat.last_rows
  from #ExecRequests r
  LEFT OUTER JOIN sys.dm_exec_cached_plans p ON p.plan_handle = r.plan_handle
  OUTER APPLY sys.dm_exec_query_plan (r.plan_handle) qp
  OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) est
  LEFT OUTER JOIN sys.dm_exec_query_stats stat on stat.plan_handle = r.plan_handle
       and r.statement_start_offset = stat.statement_start_offset
       and r.statement_end_offset = stat.statement_end_offset
) a
order by LEN(blocking_these) desc, blocking_these desc, blocked_by desc, session_id asc
go
drop table #ExecRequests
Updated 20140813: latest tweaks and changes
Updated 20140929: latest tweaks and changes
Saturday, October 15, 2011
Thanks also to all the sponsors and attendees. In the speaker room I heard some discussion of HTF's decision to have back-to-back keynotes in the morning, not starting the actual sessions until 11AM and then breaking for lunch at noon. I have to agree with what I heard - the vendors paid the money to put this free event on, why not give them the freshest audience? As a veteran of three SQL Saturdays in Baton Rouge, I have to admit it was an uncommon but inventive scheduling decision. If it increases vendor satisfaction and retention for next year's event, nobody should mind, especially the attendees.
Download slides and sample code here.
You can give me a rating on SpeakerRate here.
Monday, October 10, 2011
At scale, architectural decisions about where to place heavy-duty reports - even reports that run nightly, monthly or annually - can be paralyzing to an enterprise.
Using various tools like replication, ETL via SSIS, ETL via SSIS with CDC, database mirroring, or snapshots to provide an off-site, contention-free reporting environment is an elementary architectural decision. Simply put - don't mix your read/writes with your heavy reads.
But what if you're stuck with a host of hard-hitting reports running against your application database?
Long-term, look into providing the business a viable reporting alternative - a data warehouse with slowly changing dimensions for historical trending, Analysis Services cubes, self-service BI via PowerPivot...
Short-term, take some aggressive steps like using the SQL Server Resource Governor... within reason. Introduced with SQL 2008, this tool lets you wield tyrannical power over connections. Instead of using it as a punitive hammer on your offending reports, or as a wide-sweeping penalizing stroke, use the Resource Governor as an insulator for your key apps. Take the approach of protecting the application's read/writes from the reports' massive reads, and the resulting resource stranglehold.
Take this configuration for example. This is a step-by-step on how to set up Resource Governor. Note a large number of commented-out example configuration options.
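The shape of such a protective setup can be sketched as follows. This is a hedged illustration, not the post's exact script: the pool, group, and login names are placeholders, and the limits are examples you'd tune for your environment.

```sql
-- Hedged sketch: cap the reporting workload so it can't starve the app.
-- Names (ReportPool, ReportGroup, ReportUser) are placeholders.
CREATE RESOURCE POOL ReportPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);
GO
CREATE WORKLOAD GROUP ReportGroup
    WITH (MAX_DOP = 2)
    USING ReportPool;
GO
-- Classifier function (created in master): route the reporting login
-- into the throttled group; everything else stays in the default group.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_SNAME() = 'ReportUser'
                THEN N'ReportGroup'
                ELSE N'default' END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The point of this design: the limits only bite under contention, so the reports run full-speed off-hours but can't strangle the app's read/writes at 9AM.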
Thursday, October 06, 2011
I invite anyone currently involved with a local .net, IT Pro, SQL Server, Sharepoint, or any other user group, in any region, to attend.
We will have an open forum for discussion with voted topics; I will merely introduce myself and proctor, nothing more.
This will be my second year going to the Houston TechFest, and last year (though I spoke in the last slot - some time in the early evening, in a hot, dim, sleepy room) was a blast. I was very impressed with the crowd, the speakers, and the vendors, and I am looking forward to seeing them all again. As part of the committee that runs SQL Saturday Baton Rouge each year (#17, #28, #64 and again in 2012) I took some pointers from it. HTF definitely covers a wider diversity of technologies and topics, but nothing a SQL Saturday can't hope to be.
I will be making the road trip to Houston with more than a dozen of my coworkers for the event, and I will be joined in speakers shirts by three of my coworkers: Ryan Richard, Mike Huguet and Kerry Huguet. I will be presenting on "SQL Server Best Practices with DMVs" at 2:30pm in the SQL Server track.
Friday, September 30, 2011
We've got a great few months ahead of us with our buddies at the BR .net User Group!
October's meeting is on the topic of Change Data Capture (CDC) as a real-world replacement for transactional replication, presented by one of my colleagues Matt Maddox from Lamar Advertising. November will feature Microsoft's own Patrick Leblanc talking about one of the many new features of SQL Denali. December's meeting will be replaced by a User Group social. In January, I'm very excited to host Devlin Liles, who will speak to the joint audience of BRSSUG and BRDNUG.
BRSSUG is coming off a 20-person meeting and a very applied, original talk on Data Mining in SQL Analysis Services by Sona Pal, a recent graduate of LSU. We're definitely on the upswing!
Monday, September 12, 2011
Friday, September 02, 2011
Wednesday, August 24, 2011
Monday, August 08, 2011
COLUMNSTORE indexes are nonclustered indexes that store data column by column, each column by itself in its own pages - a strategy that makes certain types of isolated access to this data more efficient. This cost benefit scales with massive rowcounts.
COLUMNSTORE indexes are intended for data warehouses or dedicated OLTP reporting systems, NOT traditional OLTP applications. There is a staggering limitation in the use of COLUMNSTORE indexes that forces your hand when considering where to use them:
COLUMNSTORE indexes make your table read-only
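A hedged sketch of what this looks like in practice, using SQL Server 2012 syntax - the table and column names below are placeholders, not from any real schema:

```sql
-- Hedged sketch: create a columnstore index on a fact table.
-- Table/column names are illustrative placeholders.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
ON dbo.FactSales (SaleDate, StoreID, ProductID, Quantity, Amount);

-- From this point the table is read-only. To load new rows you must
-- disable (or drop) the index, load, then rebuild:
ALTER INDEX IX_FactSales_CS ON dbo.FactSales DISABLE;
-- ...perform inserts/updates here...
ALTER INDEX IX_FactSales_CS ON dbo.FactSales REBUILD;
```

That disable/load/rebuild cycle is why these indexes fit nightly-loaded warehouse tables and not tables taking continuous OLTP writes.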
Sunday, August 07, 2011
Major thank-you's to Mike Huguet, Laurie Firmin, Thomas LeBlanc, Mark Verret, Stacy Vicknair, Justin Yesso, all our volunteers, and the great sponsors we had both local and nationwide.
A final thank you to our speakers: you were part of one of the biggest and most accomplished lineups of SQL and .net speakers I've ever seen at a SQL Saturday. I don't want to name-drop, but check out the schedule for all the recognizable faces from the user community at large. WOW.
My employer Sparkhound had a whopping six speakers at the conference: Stacy Vicknair (VS Lightswitch), Justin Yesso (SSRS for .net developers), Ryan Richard (MVVM), Mike Huguet (Open XML), Victor Chataboon (Sharepoint admin) and myself. This is just a sample of the capable, communicative and competent coworkers I have the pleasure of working with.
Here's the link to download the slidedeck and all sample scripts (including the few that we skipped) from my presentation on SQL Admin Best Practices with DMV's at SQL Saturday #64 yesterday in Baton Rouge:
Download scripts here
Friday, August 05, 2011
Sunday, July 31, 2011
Thursday, July 14, 2011
Thanks to all who attended and voted, and a big congrats on a job well done to the other contenders!
Click here to download the slidedeck and sample scripts, in a zip file.
Monday, June 06, 2011
Install the codeplex project, install .NET 4.0 if you haven't already, activate the add-on (not an obvious step) and you're done. It integrates into the Office 2010 ribbon and you're a couple clicks away.
Here's the project: http://forwardengineer.codeplex.com/
Unfortunately, this solution also doesn't pick up the "notes" for Visio, which would be slick if created as Extended Properties, but I can't complain.
"All truths are easy to understand once they are discovered; the point is to discover them." -Galileo Galilei
Saturday, June 04, 2011
Thursday, May 19, 2011
Wednesday, April 13, 2011
Friday, March 18, 2011
Thursday, March 17, 2011
Click here to download (PPT 1mb)
Thursday, March 03, 2011
Friday, February 18, 2011
I've seen this in environments where someone is trying desperately to resolve SQL Server performance issues. Out of the blue, linked server/DTC queries will begin to fail with the following error message:
Heterogeneous queries and use of OLEDB providers are not supported in fiber mode.
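This error surfaces when the server is running in fiber mode - the "lightweight pooling" configuration option, which someone often flipped on in a desperate attempt at a performance fix. A hedged sketch of checking and reverting the setting (note the change requires a service restart to take effect):

```sql
-- Check whether fiber mode (lightweight pooling) is enabled.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'lightweight pooling';   -- run_value = 1 means fiber mode

-- Turn it off; a SQL Server service restart is required afterward.
EXEC sp_configure 'lightweight pooling', 0;
RECONFIGURE;
```

Linked server and DTC queries depend on OLEDB providers, which are not supported under fiber scheduling - hence the sudden failures.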
Monday, February 14, 2011
Monday, January 31, 2011
XEvents is the new infrastructure for diagnostics data collection, set to usurp SQL Traces in utility for this purpose. XEvents are already up and running in SQL 2k8 and 2k8R2 but without an official GUI. The Denali RC's currently sport an XEvents interface in Management Studio, which will officially get the ball rolling.
Right now though, you can still use XEvents, and the pre-packaged System Health trace session that is running by default.
Here's a great way to get some fairly easy-to-read (for XML) diagnostic info on deadlocks that happened on your server... without enabling a Trace flag or a SQL Trace.
You can clearly see the victim and what was blocking it, the InputBuffer strings for each (which won't see through a stored procedure, unfortunately) and the same stuff you'd be used to seeing in a Trace - spid, hostname, clientapp, date/time. Use any XML viewer to view the output, including SSMS's native ability to view XML.
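A minimal sketch of such a query against the system_health session's ring buffer target (SQL 2008/2008 R2) - the XQuery path assumes the default session and target names:

```sql
-- Hedged sketch: pull xml_deadlock_report events out of the default
-- system_health Extended Events session's ring buffer.
SELECT xed.query('.') AS DeadlockReport
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions s
    JOIN sys.dm_xe_session_targets st
        ON s.[address] = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes
    ('//RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);
```

Each row is one deadlock graph; click the XML in SSMS to read it. Keep in mind the ring buffer is finite, so older deadlocks age out.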
Sunday, January 30, 2011
I had to leave town before lunch but it was a pleasure meeting you all.
Click here to download a .zip file with the slidedeck and all sample files - even the ones we didn't get to during the presentation.
Please hit me with a rating at SpeakerRate: http://speakerrate.com/speakers/11375
Wednesday, January 26, 2011
Got this error in SSIS with an OLE DB Destination:
"The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020."
This error only popped up when an unrelated transformation in the same data flow had an error. I stopped receiving this error when I changed from a "Table or View - fast load" to "Table or View".
That's the short of it. Here's more:
This is a puzzling error, apparently caused by an error in a transformation with an ADO NET Destination that runs concurrently and in parallel - that is, unconnected - to my OLE DB Destination.
I don't really have the time to figure out all the different permutations that allow this error to keep coming up, but safe to say it's not really an error... it seems to be caused by other errors. The important thing to note is that an error in one transformation was able to cause an unrelated transformation to fail. This can be very misleading when trying to debug a screen full of red blocks.
The error that came up was a run-of-the-mill, aggravating data type error caused by SSIS being picky:
[SLRP ADO ] Error: An exception has occurred during data insertion, the message returned from the provider is: The given value of type String from the data source cannot be converted to type nvarchar of the specified target column.
Will keep an eye out for more stuff like this in SSIS; open to feedback.