
Sunday, May 23, 2021

SQL Server Admin Best Practices with DMVs at GroupBy Americas 2021

On Tuesday I'm looking forward to presenting again at GroupBy Americas on a topic that was voted on by the attending public. I've been giving this presentation, "SQL Server Admin Best Practices with DMVs", for more than a decade, and I've been tweaking and tuning it ever since. 

This talk covers basic roll-your-own maintenance strategies using data inside SQL Server, including wait stats, query stats and the Query Store, fragmentation and index maintenance, index DMVs for new and existing indexes, and a ton more helpful DMVs introduced in SQL Server 2017 and 2019. 

You can register for this talk here: groupby.org.

You can download my presentation here, and the toolbox of sample scripts referenced throughout can be found here.


Saturday, September 13, 2014

Houston TechFest 2014: SQL Admin Best Practices with DMV's

Awesome crowd this morning for my SQL Admin Best Practices with DMV's presentation at Houston TechFest 2014, thanks very much for attending! There were some exceptionally useful questions raised today, great job to my audience.

Here is the .zip file as promised for my presentation slide deck and .sql files, including the new content for SQL 2014. Download here

Saturday, September 28, 2013

Houston Tech Fest 2013 - SQL Admin Best Practices with DMV's

Here is the downloadable file with all my sample scripts and slidedeck from my SQL Admin Best Practices with DMV's presentation from the 2000+ person Houston Tech Fest 2013 at the Reliant Center in Houston!

Thanks for attending! If you attended my talk, shoot me some feedback here: http://speakerrate.com/william.assaf

Download scripts and slidedeck

Friday, May 03, 2013

Hunt Down Tables Without Clustered Indexes

This one's a gimme, but an underrated utility script when I enter a new environment and look for low-hanging fruit for easy improvements.

You'd be surprised how many vendors ship applications without a single index, much less properly aligned clustered indexes.

Heaps are bad.  You can't defragment a heap by performing index maintenance, despite the old myth about the create-then-drop-a-clustered-index trick.

Similarly, the myth of a heap performing better on pure inserts vs a table with a clustered index is an academic one.  Heaps are impossible to order, which means selecting from a heap is always a scan.

If your table will be constantly written to and rarely read, put a clustered index on it.  The writes will be sequential, resulting in fewer page breaks and less fragmentation, which means more efficient storage and maintenance.

EDIT: Page "breaks" (I meant splits) aren't possible on heaps, but forwarding pointers are, and they create a maze of bad performance in heaps with nonclustered indexes. Thanks to the commenters for pointing this out!

And when you have to look in that table for a stray logging record, it won't be a full table scan. If your table will be constantly written to and never read, why is it in your database?  Such a table doesn't exist.

This script is an easy one to use for finding heaps in a database, along with the row count and size of jumbled data that is haunting your database.  It is in my DBA toolbox folder alongside many other scripts.

--Doesn't work on 2000 databases or databases in 2000 compatibility mode.  Need to change the db_id() syntax if so.

select 
 [Database Name] = db_name()
, [Table Name] = s.name + '.' + o.name
, p.row_count
, SizeMb = (p.reserved_page_count*8.)/1024.
from 
 sys.objects o
inner join  sys.schemas s on o.schema_id = s.schema_id
inner join  sys.indexes si on si.object_id = o.object_id
inner join  sys.dm_db_partition_stats p  on p.object_id = o.object_id 
 and p.index_id = si.index_id --heaps are index_id = 0; this join keeps nonclustered indexes on the heap from inflating the counts
WHERE si.type_desc = 'Heap'
and  o.is_ms_shipped = 0
order by SizeMb desc
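
As a follow-up to the heaps you find with the script above, here's a quick sketch of mine (not part of the toolbox script) that checks sys.dm_db_index_physical_stats for forwarded records, to see how bad the forwarding-pointer problem already is:

--Forwarding pointers in heaps in the current database.
--DETAILED mode reads every page, so be careful running this on large databases during business hours.
select
 [Table Name] = object_name(ips.object_id)
, ips.forwarded_record_count
, ips.avg_fragmentation_in_percent
, ips.page_count
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'DETAILED') ips
where ips.index_type_desc = 'HEAP'
order by ips.forwarded_record_count desc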


As an aside, but a different blog post altogether - a good clustered key is narrow, unique, unchanging and sequential - which is why IDENTITY integer columns are the perfect clustered key.  Compound clustered keys can be a big waste of space.

Monday, February 04, 2013

Adding the Clustered Key To Your Nonclustered Index?

A colleague of mine was understandably confused when using the almost-perfect missing index engine in SQL 2008. The engine recommended he include the clustered index key in a new nonclustered index. Read that again - you're right, it doesn't make a lot of sense.

From my colleague:
Why would you ever need to have your primary key or clustered index key as a key field on a non-clustered index? That don't make any sense to me. If the SQL engine needs to make a lookup on the clustered index, have the primary key as an include on your non-clustered index. ...I researched it and the only difference is that my current nonclustered has the primary key as a key field and now it's suggesting making it an include column.

I've got a wild hunch that the clustered key might make sense to have as a key in the nonclustered index as long as it isn't the first key, and the query is ordering on a nonclustered key. However, I can't think of a representative situation to prove that, so I might be wrong. I'm open to suggestions for a reproducible lab sample; perhaps your starting point would be the sample script below.

That said, I don't think it would make any sense to put the clustered key in the INCLUDE of a nonclustered index.  I'm puzzled as to why the missing index feature would suggest this.

Here's a lab for proof. In this scenario, the index was 19.320312 MB without the include, and 19.320312 MB with the include. SQL doesn't store an extra copy of the clustered key in the nonclustered index just because you asked for it in the INCLUDE or in the key. That's because the clustered key is already a part of any nonclustered index.

This reinforces the point that the clustered key should always be as unique (so it's helpful) and narrow (so it doesn't waste space) as possible. That's why for almost all tables, an integer IDENTITY(1,1) key is a perfect clustered index and primary key: 4 bytes per row. And that's why for all tables, a GUID is a less efficient choice for your clustered key: 16 burdensome bytes per row... which is then lugged around by all your nonclustered indexes...  </GUIDrant>
use somedatabasenotinproduction 
go
create table dbo.testclusteredinclude 
(      id int identity(1,1) not null primary key 
,      text1 varchar(30) not null) 
go
insert into dbo.testclusteredinclude (text1) values ('test1'), ('test2') 
go 
--double the 'test2' rows with each pass to pad out the table
insert into dbo.testclusteredinclude (text1) 
select text1 from dbo.testclusteredinclude where text1 = 'test2' 
go 20 
--turn on show actual exec plan 
select id, text1 from dbo.testclusteredinclude where text1 = 'test1' 
go 
--nonclustered index without the clustered key in the INCLUDE
create nonclustered index idx_nc_testclusteredinclude_text1  
on dbo.testclusteredinclude (text1) 
go 
select id, text1 from dbo.testclusteredinclude where text1 = 'test1' 
go 
drop index idx_nc_testclusteredinclude_text1 
on dbo.testclusteredinclude 
go 
--same index again, this time with the clustered key in the INCLUDE
create nonclustered index idx_nc_testclusteredinclude_text1 
on dbo.testclusteredinclude (text1)  include (id) 
go 
select id, text1 from dbo.testclusteredinclude where text1 = 'test1'
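
To compare the index sizes in this lab yourself, here's one way to check - a quick sketch of mine that sums reserved pages per index from sys.dm_db_partition_stats (run it after each CREATE INDEX):

--Reserved size of each index on the lab table, in MB.
select
 i.name
, i.type_desc
, SizeMb = sum(p.reserved_page_count) * 8. / 1024.
from sys.indexes i
inner join sys.dm_db_partition_stats p
 on p.object_id = i.object_id
 and p.index_id = i.index_id
where i.object_id = object_id('dbo.testclusteredinclude')
group by i.name, i.type_desc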
 

Thursday, October 11, 2012

Performance Tuning Presentation from October 2012 BRSSUG Meeting

Thanks to the 15+ folks who attended the Baton Rouge SQL Server User Group meeting last night!  I hope everyone enjoyed "everyone gets a book" night!

I presented on SQL Server Performance tuning, which began with Michelle Ufford's article from GoDaddy on how she saw massive performance gains just by correcting columns to more appropriate data types.  Here's a link to that article: https://web.archive.org/web/20130629210916/http://inside.godaddy.com/scaling-database-data-types/

Then we went through database compression, indexing and execution plans, index INCLUDE structure, the missing indexes feature, how foreign keys can affect performance, isolation levels, and MAXDOP.

I've uploaded all my sample scripts in a .zip file to the brssug.org website here for you to download.

Thanks for attending, see you November 14!

UPDATE 20141104: Link to a cached copy of Ufford's article.

Saturday, September 08, 2012

SQL Server Best Practices for DMV's - Houston Tech Fest 2012

It is a pleasure meeting everyone today at Houston Tech Fest!  The new facility is miles away - literally and figuratively - from the previous Houston Tech Fest facility.

Here's a link below for downloading the presentation files and sample scripts from my presentation on "SQL Server Best Practices for DMV's" at Houston Tech Fest 2012 at the Reliant Center.

Download the .zip file here.

Thursday, July 12, 2012

SQL Server DBA Tool Kit

Right next to my batarang and bat-zip-line-hook-shooter-gun on my utility belt is my SQL Server DBA Tool Kit, full of scripts I've picked up along the way.

I take little credit for some of these, as I've picked them up from various sources.  I have modified them, for example, adding the ability to see the cached execution plan to MS PSS's "worst queries" query, or adding a blocking chain and cached execution plan to the typical dm_exec_sessions and dm_exec_requests queries out there to replace sp_who2.

There are some of my handwritten queries in there, for example the query to find the largest objects in the database and their compression state, a proof of sql_modules vs INFORMATION_SCHEMA.routines, and the "sessions and requests" with blocking chain query that I've presented on before and use daily.  Nothing novel or ground-breaking, but real practical utility queries I use personally as a SQL Server consultant.

I presented my toolkit last night to the Baton Rouge SQL Server User Group and it turned out to be one of our best meetings in a while - over an hour of solid, experience-based conversation about queries, best practices, and real-world examples that was a wealth of information.  Thanks to everyone who attended and helped make it a great meeting - it certainly wasn't all me and my fancy tool kit.

You can view my SQL toolbox on Github.


EDIT: 20140813 Updated toolbox link.
EDIT: 20140929 Updated toolbox link.
EDIT: 20170830 Updated toolbox link to include Github.

Saturday, April 21, 2012

SQL Saturday Houston #107 - SQL Server Best Practices with DMV Presentation Files

Thanks for joining me for a standing-room-only presentation this morning at 9:45am at SQL Saturday #107 Houston.  This is my second year speaking at SQL Saturday Houston and the event was one of the best I've been to, kudos to the organizers.  I was especially impressed that all ten tracks were solidly SQL Server related - very little was not related to SQL Server, and that isn't easy to pull off.  Bravo!

Click here to download the slidedeck and all sample scripts - including the ones we skipped - from my DMV presentation.  Thanks for all the great questions and feedback during the session, it was one of the most enjoyable I've had at a SQL Saturday event - including Baton Rouge's, and despite the fact that my presentation mouse batteries died!

Friday, April 20, 2012

SQL Script Toolbox for Developers

I work with a motley crowd of developers and have the opportunity to give lunch and learn training sessions on SQL Development.

Most recently, I put together a .zip file of T-SQL scripts that developers would find handy.  You can view the toolbox contents here, with .zip files included that are automatically generated.

Here's a summary:

Missing Indexes - missing indexes.sql, missing index setup demo.sql - more about this here
Index Usage - index usage.sql
Sessions and Requests - sessions and requests.sql - more about this here
Most expensive queries - worst queries.sql - from MS PSS
Tables without clustered indexes - tables without clustered indexes.sql
Deadlocks XEvents - deadlocks in xevents.sql -  more about this here
Dependencies - dependencies.sql -  more about this here
Stored Proc Execution plans - cached exec plans.sql -  more about this here
Volume Stats - volume stats.sql
Modules vs. Routines - modules vs routines.sql -  more about this here
Much more...

EDIT: 20131017 refreshed the toolbox.zip file with latest updates from my personal toolbox.
EDIT: 20140813 refreshed the toolbox.zip file with latest updates from my personal toolbox.
EDIT: 20140929 refreshed the toolbox.zip file with latest updates from my personal toolbox.
EDIT: 20180815 refreshed the Toolbox reference to Github repo

Thursday, April 19, 2012

Using sys.dm_exec_requests to find a blocking chain and much more

As with every good DBA's toolbox, my blocking list query using sys.dm_exec_requests is evolving.

I've added the following features:
  • The query execution plan of the active request in the QueryPlan column to the right.
  • Variable at the top which changes the relationship of the join between sys.dm_exec_sessions and  sys.dm_exec_requests. 
    • When set to 0, this query now displays all sessions, even those without active requests.  I recently found this helpful when researching sleeping sessions that were blocking active sessions.
    • When set to 1, this query displays as it used to - only session and active request data.  
  • Percent_Complete column - great for finding the progress of backups and restores 
  • A few other minor helpful columns
Update March 4, 2014:
  • Added Transaction Isolation Level for Session and Request
  • Added Status for Session and Request
  • Added Login_name, Client_Interface_name
Update June 24, 2016:
  • Added resource governor group and pool identifiers
  • Bug fix for cases where the offset text wouldn't display anything
 declare @showallspids bit 
 select @showallspids =1

 create table #ExecRequests  (
  id int IDENTITY(1,1) PRIMARY KEY
 , session_id smallint not null 
 , request_id int null
 , request_start_time datetime null
 , login_time datetime not null
 , login_name nvarchar(256) null
 , client_interface_name nvarchar(64)
 , session_status nvarchar(60) null
 , request_status nvarchar(60) null
 , command nvarchar(32) null
 , sql_handle varbinary(64) null
 , statement_start_offset int null
 , statement_end_offset int null
 , plan_handle varbinary (64) null
 , database_id smallint null
 , user_id int null
 , blocking_session_id smallint null
 , wait_type nvarchar (120) null
 , wait_time_s int null
 , wait_resource nvarchar(120) null
 , last_wait_type nvarchar(120) null
 , cpu_time_s int null
 , tot_time_s int null
 , reads bigint null
 , writes bigint null
 , logical_reads bigint null
 , [host_name] nvarchar(256) null
 , [program_name] nvarchar(256) null
 , Governor_Group_Id int null
 , blocking_these varchar(1000) NULL
 , percent_complete int null
 , session_transaction_isolation_level varchar(20) null
 , request_transaction_isolation_level varchar(20) null
 )


 insert into #ExecRequests (session_id,request_id, request_start_time, login_time, login_name, client_interface_name, session_status, request_status, command,sql_handle,statement_start_offset,statement_end_offset,plan_handle,database_id,user_id,blocking_session_id,wait_type,last_wait_type,wait_time_s,wait_resource,cpu_time_s,tot_time_s,reads,writes,logical_reads,[host_name], [program_name] , session_transaction_isolation_level , request_transaction_isolation_level , percent_complete , Governor_Group_Id)
 select s.session_id,request_id, r.start_time, s.login_time, s.login_name, s.client_interface_name, s.status, r.status,command,sql_handle,statement_start_offset,statement_end_offset,plan_handle,r.database_id,user_id,blocking_session_id,wait_type,r.last_wait_type, r.wait_time/1000.,r.wait_resource ,r.cpu_time/1000.,r.total_elapsed_time/1000.,r.reads,r.writes,r.logical_reads,s.[host_name], s.[program_name], s.transaction_isolation_level, r.transaction_isolation_level, r.percent_complete, s.group_id 
 from sys.dm_exec_sessions s 
 left outer join sys.dm_exec_requests r on r.session_id = s.session_id
 where 1=1
 and s.session_id >= 50 --retrieve only user spids
 and s.session_id <> @@SPID --ignore myself
 and  (@showallspids = 1 or r.session_id is not null) 

 print 'insert done'

 update #ExecRequests 
 set blocking_these = LEFT((select isnull(convert(varchar(5), er.session_id),'') + ', ' 
       from #ExecRequests er
       where er.blocking_session_id = isnull(#ExecRequests.session_id ,0)
       and er.blocking_session_id <> 0
       FOR XML PATH('') 
       ),1000)
 
 print 'update done'

 select * from 
 (
  select  
   timestamp = getdate()
  , r.session_id , r.host_name , r.program_name
  , r.session_status
  , r.request_status
  , r.blocking_these
  , blocked_by =  r.blocking_session_id
  , r.wait_type 
  , r.wait_resource
  , r.last_wait_type
  , DBName = db_name(r.database_id)
  , est.objectid
  , r.command
  , login_time
  , login_name
  , client_interface_name
  , request_start_time
  , r.tot_time_s, r.wait_time_s
  , r.cpu_time_s --cpu_time is not accurate prior to SQL Server 2012 SP2.  http://blogs.msdn.com/b/psssql/archive/2014/11/11/how-come-sys-dm-exec-requests-cpu-time-never-moves.aspx
  , r.reads, r.writes, r.logical_reads
  , r.percent_complete --progress of backups, restores, DBCC, etc.
  --, [fulltext] = est.[text]
  , offsettext = CASE WHEN r.statement_start_offset = 0 and r.statement_end_offset= 0 THEN left(est.text, 4000)
         ELSE SUBSTRING (  est.[text]
             , r.statement_start_offset/2 + 1, 
              CASE WHEN r.statement_end_offset = -1 THEN LEN (CONVERT(nvarchar(max), est.[text])) 
               ELSE r.statement_end_offset/2 - r.statement_start_offset/2 + 1
              END )
       END
  , r.statement_start_offset, r.statement_end_offset
  , cacheobjtype = LEFT (p.cacheobjtype + ' (' + p.objtype + ')', 35)
  , QueryPlan  = qp.query_plan 
  , request_transaction_isolation_level = case request_transaction_isolation_level 
    when 0 then 'Unspecified'
    when 1 then 'ReadUncommitted'
    when 2 then 'ReadCommitted'
    when 3 then 'Repeatable'
    when 4 then 'Serializable'
    when 5 then 'Snapshot' end 
  , session_transaction_isolation_level = case session_transaction_isolation_level 
    when 0 then 'Unspecified'
    when 1 then 'ReadUncommitted'
    when 2 then 'ReadCommitted'
    when 3 then 'Repeatable'
    when 4 then 'Serializable'
    when 5 then 'Snapshot' end 
  , p.plan_handle
  , stat.execution_count, total_worker_time_s = stat.total_worker_time/1000./1000., last_worker_time_s = stat.last_worker_time/1000./1000., total_elapsed_time_s = stat.total_elapsed_time/1000./1000., last_elapsed_time_s = stat.last_elapsed_time/1000./1000., stat.total_physical_reads, stat.total_logical_writes, stat.total_logical_reads
  , Governor_Group_Name = wg.name
  , Governor_Group_ID  = r.Governor_Group_Id
  , Governor_Pool_Name = wp.name 
  , Governor_Pool_ID  = wg.Pool_id
  --next two lines are SQL 2012 only!
  --, stat.total_rows, stat.last_rows
  from #ExecRequests r
  LEFT OUTER JOIN sys.dm_exec_cached_plans p ON p.plan_handle = r.plan_handle 
  OUTER APPLY sys.dm_exec_query_plan (r.plan_handle) qp
  OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) est
  LEFT OUTER JOIN sys.dm_exec_query_stats stat on stat.plan_handle = r.plan_handle
  and r.statement_start_offset = stat.statement_start_offset  
  and r.statement_end_offset = stat.statement_end_offset
  LEFT OUTER JOIN sys.resource_governor_workload_groups  wg
  on wg.group_id = r.Governor_Group_Id
  LEFT OUTER JOIN sys.resource_governor_resource_pools wp
  on wp.pool_id = wg.Pool_id
  
 
 ) a
 order by LEN(blocking_these) desc, blocking_these desc, blocked_by desc, session_id asc
 
 print 'done'
 go
 drop table #ExecRequests  
 


Thursday, March 29, 2012

SQLSaturday #107 Houston

About 3 weeks to go until SQLSaturday #107 at YES Prep Public Schools - North Central Campus, 13703 Aldine-Westfield, Houston, TX, 77039!  Register today if you haven't already!

Remember, in addition to traditional DBA topics, the event will have some topics of interest to developers that work with SQL Server. We also have a full slate of BI sessions (Data Warehouse, Reporting, Integration Services, etc.) as well as several sessions which will focus on the newly released SQL Server 2012.

I will be giving my SQL Admin Best Practices with DMV's presentation in the DBA Track at 9:45 AM.

Friday, December 16, 2011

View execution plan for a sproc in the wild

Got stored procedure problems?  I feel sorry for you, son.

Here's a quick query to pull the cached execution plan out of memory.

I italicized cached because it's not going to stay there forever.

I used this query to find the execution plan of a stored proc that had been running in a SQL job.  Instead of pulling down the sproc and trying to estimate the query plan, I chose to see what the cached plan looked like.
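
Something along these lines will do it - this is a sketch using sys.dm_exec_procedure_stats, with usp_YourProcHere standing in as a placeholder procedure name:

--Sketch: pull the cached execution plan for a stored procedure.
--usp_YourProcHere is a placeholder; substitute the real procedure name.
select
 [Procedure] = sc.name + '.' + pr.name
, ps.execution_count
, ps.last_execution_time
, qp.query_plan --click the XML in SSMS to open the graphical plan
from sys.procedures pr
inner join sys.schemas sc on sc.schema_id = pr.schema_id
inner join sys.dm_exec_procedure_stats ps
 on ps.object_id = pr.object_id
 and ps.database_id = db_id()
cross apply sys.dm_exec_query_plan(ps.plan_handle) qp
where pr.name = 'usp_YourProcHere'

If nothing comes back, the plan has already aged out of the cache - hence the emphasis on cached above.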

Open_transaction_count in sys.dm_exec_sessions

This news may be a few months old, but it is worth noting that there is now a column called open_transaction_count in sys.dm_exec_sessions in SQL Server 2012.

This should do away with any need to use sys.sysprocesses for session information.
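
For example, finding sessions that are sitting on open transactions becomes a quick one-liner (SQL Server 2012 and later):

--Sessions with open transactions (SQL Server 2012+).
select session_id, login_name, [status], open_transaction_count
from sys.dm_exec_sessions
where open_transaction_count > 0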

Wednesday, October 19, 2011

Using sys.dm_exec_requests to find a blocking chain

A query that calls sys.dm_exec_requests and pulls the statement text out of the request is a handy script for any DBA's toolbox.  A simple version of the query performs a cross apply to dm_exec_sql_text and pulls out the request's TSQL statement-in-progress using the unicode character offsets.

This is simple enough and does a good job as a queryable sp_who2 and DBCC INPUTBUFFER replacement to an extent.

In an environment that suffers from frequent spells of query blocking, the ability to identify which query was leading the way becomes important in finding the query or queries to troubleshoot.

Whether the end result of this query is a simple KILL command and/or a walk down the hall to investigate the "business case" is a situational decision.

I wanted a blocking chain - a list of all other spids that a session was blocking.

Here's a query that dumps the output of dm_exec_requests into a temporary table, then uses FOR XML PATH to conveniently transform the list of blocked session ids into a comma-delimited list in the row of the blocker.  The sort order is the length of this comma-delimited list, putting the cause of a pileup right at the top.

Doing this in a temp table was necessary because the blocking_session_id is a highly volatile column.  As a result, the output of this query is several ms behind "live".  After a few executions, the spid hanging out at the top is easy enough to spot.

Here's an example of the nightmare kind of scenario that prompted this script into existence:




 declare @showallspids bit 
 select @showallspids =1

 create table #ExecRequests  (
  id int IDENTITY(1,1) PRIMARY KEY
 , session_id smallint not null 
 , request_id int null
 , request_start_time datetime null
 , login_time datetime not null
 , login_name nvarchar(256) null
 , client_interface_name nvarchar(64)
 , session_status nvarchar(60) null
 , request_status nvarchar(60) null
 , command nvarchar(32) null
 , sql_handle varbinary(64) null
 , statement_start_offset int null
 , statement_end_offset int null
 , plan_handle varbinary (64) null
 , database_id smallint null
 , user_id int null
 , blocking_session_id smallint null
 , wait_type nvarchar (120) null
 , wait_time_s int null
 , wait_resource nvarchar(120) null
 , last_wait_type nvarchar(120) null
 , cpu_time_s int null
 , tot_time_s int null
 , reads bigint null
 , writes bigint null
 , logical_reads bigint null
 , [host_name] nvarchar(256) null
 , [program_name] nvarchar(256) null
 , blocking_these varchar(1000) NULL
 , percent_complete int null
 , session_transaction_isolation_level varchar(20) null
 , request_transaction_isolation_level varchar(20) null
 )

 insert into #ExecRequests (session_id,request_id, request_start_time, login_time, login_name, client_interface_name, session_status, request_status, command,sql_handle,statement_start_offset,statement_end_offset,plan_handle,database_id,user_id,blocking_session_id,wait_type,last_wait_type,wait_time_s,wait_resource,cpu_time_s,tot_time_s,reads,writes,logical_reads,[host_name], [program_name] , session_transaction_isolation_level , request_transaction_isolation_level )
       select s.session_id,request_id, r.start_time, s.login_time, s.login_name, s.client_interface_name, s.status, r.status,command,sql_handle,statement_start_offset,statement_end_offset,plan_handle,r.database_id,user_id,blocking_session_id,wait_type,r.last_wait_type, r.wait_time/1000.,r.wait_resource ,r.cpu_time/1000.,r.total_elapsed_time/1000.,r.reads,r.writes,r.logical_reads,s.[host_name], s.[program_name], s.transaction_isolation_level, r.transaction_isolation_level
 from sys.dm_exec_sessions s 
 left outer join sys.dm_exec_requests r on r.session_id = s.session_id
 where 1=1
 and s.session_id >= 50 --retrieve only user spids
 and s.session_id <> @@SPID --ignore myself
 and  (@showallspids = 1 or r.session_id is not null) 

 update #ExecRequests 
 set blocking_these = LEFT((select isnull(convert(varchar(5), er.session_id),'') + ', ' 
       from #ExecRequests er
       where er.blocking_session_id = isnull(#ExecRequests.session_id ,0)
       and er.blocking_session_id <> 0
       FOR XML PATH('') 
       ),1000)
 select * from 
 (
  select  
   timestamp = getdate()
  , r.session_id , r.host_name , r.program_name
  , r.session_status
  , r.request_status
  , r.blocking_these
  , blocked_by =  r.blocking_session_id
  , r.wait_type 
  , r.wait_resource
  , r.last_wait_type
  , DBName = db_name(r.database_id)
  , r.command
  , login_time
  , login_name
  , client_interface_name
  , request_start_time
  , r.tot_time_s, r.wait_time_s, r.cpu_time_s, r.reads, r.writes, r.logical_reads
  --, [fulltext] = est.[text]
  , offsettext = CASE WHEN r.statement_start_offset = 0 and r.statement_end_offset= 0 THEN NULL
         ELSE SUBSTRING (  est.[text]
             , r.statement_start_offset/2 + 1, 
              CASE WHEN r.statement_end_offset = -1 THEN LEN (CONVERT(nvarchar(max), est.[text])) 
               ELSE r.statement_end_offset/2 - r.statement_start_offset/2 + 1
              END )
       END
  , r.statement_start_offset, r.statement_end_offset
  , cacheobjtype = LEFT (p.cacheobjtype + ' (' + p.objtype + ')', 35)
  , QueryPlan  = qp.query_plan 
  , request_transaction_isolation_level = case request_transaction_isolation_level 
    when 0 then 'Unspecified'
    when 1 then 'ReadUncommitted'
    when 2 then 'ReadCommitted'
    when 3 then 'Repeatable'
    when 4 then 'Serializable'
    when 5 then 'Snapshot' end 
  , session_transaction_isolation_level = case session_transaction_isolation_level 
    when 0 then 'Unspecified'
    when 1 then 'ReadUncommitted'
    when 2 then 'ReadCommitted'
    when 3 then 'Repeatable'
    when 4 then 'Serializable'
    when 5 then 'Snapshot' end 
  , p.plan_handle
  , stat.execution_count, stat.total_worker_time, stat.last_worker_time, stat.total_elapsed_time, stat.last_elapsed_time, stat.total_physical_reads, stat.total_logical_writes, stat.total_logical_reads
  --next two lines are SQL 2012 only!
  --, stat.total_rows, stat.last_rows
  from #ExecRequests r
  LEFT OUTER JOIN sys.dm_exec_cached_plans p ON p.plan_handle = r.plan_handle 
  OUTER APPLY sys.dm_exec_query_plan (r.plan_handle) qp
  OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) est
  LEFT OUTER JOIN sys.dm_exec_query_stats stat on stat.plan_handle = r.plan_handle
  and r.statement_start_offset = stat.statement_start_offset  
  and r.statement_end_offset = stat.statement_end_offset
  
 
 ) a
 order by LEN(blocking_these) desc, blocking_these desc, blocked_by desc, session_id asc
 go
 drop table #ExecRequests  

Updated 20140813: latest tweaks and changes
Updated 20140929: latest tweaks and changes

Saturday, October 15, 2011

Slides and Sample code: SQL Server Admin Best Practices with DMV's from the Houston TechFest 2011

Here's my official blog page for my presentation on SQL Server Admin Best Practices with DMV's from the Houston TechFest 2011 on October 15.  Thanks for attending my session as part of a great event put on by Michael and the entire Houston TechFest team.  The University of Houston was a great host for the event!

Thanks also to all the sponsors and attendees.  In the speaker room I heard some discussion of HTF's decision to have back-to-back keynotes in the morning, not starting the actual sessions until 11AM and then breaking for lunch at noon.  I have to agree with what I heard - the vendors paid the money to put this free event on, why not give them the freshest audience?  As a veteran of three SQL Saturdays in Baton Rouge, I have to admit it was an uncommon but inventive scheduling decision.  If it increases vendor satisfaction and retention for next year's event, nobody should mind, especially the attendees.

Download slides and sample code here.

You can give me a rating on SpeakerRate here.

Sunday, August 07, 2011

SQL Server Maintenance Best Practices with DMV's - SQL Saturday 64

First off, big thanks to the 375+ people who showed up for SQL Saturday 64 in Baton Rouge on August 6. It was a fantastic event.

Major thank-you's to Mike Huguet, Laurie Firmin, Thomas LeBlanc, Mark Verret, Stacy Vicknair, Justin Yesso, all our volunteers, and the great sponsors we had both local and nationwide.

A final thank you to our speakers, you were part of one of the biggest and most accomplished lineups of SQL and .net speakers I've ever seen at a SQL Saturday. I don't want to name-drop, but check out the schedule for the all the recognizable faces from the user community at large. WOW.

My employer Sparkhound had a whopping six speakers at the conference: Stacy Vicknair (VS Lightswitch), Justin Yesso (SSRS for .net developers), Ryan Richard (MVVM), Mike Huguet (Open XML), Victor Chataboon (Sharepoint admin) and myself.  This is just a sample of the capable, communicative and competent coworkers I have the pleasure of working with.

Here's the link to download the slidedeck and all sample scripts (including the few that we skipped) from my presentation on SQL Admin Best Practices with DMV's at SQL Saturday #64 yesterday in Baton Rouge:

Download scripts here

Saturday, June 04, 2011

DMV Overview Presentation from SQL Saturday #77 Pensacola 2011

Here's the link to a .zip file of my slidedeck and all sample demo files from today's SQL Saturday Pensacola.  This is my DMV Overview presentation from room 2171 at 1:45pm. 

Download here

Sunday, January 30, 2011

Slides and sample code from DMV Overview presentation at SQL Saturday Houston, January 2011

Thanks to a great room at 8:30AM at SQL Saturday Houston in Bammel, TX on January 29, 2011.

I had to leave town before lunch but it was a pleasure meeting you all.

Click here to download a .zip file with the slidedeck and all sample files - even the ones we didn't get to during the presentation.

Please hit me with a rating at SpeakerRate: http://speakerrate.com/speakers/11375

Saturday, October 09, 2010

DMV Overview Presentation from Houston TechFest 2010 on October 9

Here's the .zip file link to the slide deck and scripts (including the scripts I didn't get to) for my talk on DMV's from Houston TechFest 2010.

What a great event!  Good job to the event coordinators. 


Could have used more signage to the building, and more signage within the building. 


This is probably my best experience giving this talk, and I've given it at three SQL Saturdays and a SQL Server user group meeting too. Excellent questions, excellent discussion, and I don't mind at all that we went long.  It was my second straight event speaking in the last session slot, starting at 5pm.  First off, starting at 5pm sucks.  At a previous SQL Saturday, I had to turn off the lights because of a weak projector and I lost half my audience to the dark and late time. 

But despite all that lined up against me, I had more than a dozen folks in attendance and they were very engaged in the conversation.  Thanks and congrats to all of you in attendance for making it a great session.