Friday, May 03, 2013

Use the Default System Trace to Find Autogrowth Events

The built-in SQL Server Management Studio standard report for "Disk Usage" is very handy for viewing the performance-sucking autogrowth events in a database... until you want to see whether autogrowth events have been occurring in any database on the server.

Autogrowth events are very problematic and can cause sudden, unpredictable, and difficult-to-troubleshoot application timeouts and performance drops. This applies to any SQL Server database file, including those in SharePoint databases, system databases (such as tempdb), and any user database data file or log file.


[As an aside, your friends in SharePoint know all about Autogrowth settings for their content databases as well.  Friend of the blog Drew at WerePoint posted this relevant blog entry recently about the timing of growth and recycle-bin emptying events. http://werepoint.blogspot.com/2013/05/on-proper-care-and-feeding-of-content.html ]


Ideally, as professional DBAs, we don't want to ever see an autogrowth event occur on our databases.  By proactively monitoring the space in our database files, we can grow database files on our own terms, on our own schedule (outside of busy hours), and without interrupting a user transaction.
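For example, a data file can be pre-grown during a maintenance window with a plain ALTER DATABASE statement. This is only a sketch - the database and file names below are placeholders, and note that SIZE is the new total size of the file, not an increment:

-- Grow a data file on our own schedule instead of waiting for autogrowth.
-- [YourDatabase] and YourDatabase_Data are placeholder names.
USE [master];
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_Data', SIZE = 10240MB); -- new total size for the file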

Autogrowth events occur when a data file or log file has exhausted its reservation. SQL Server is forced to ask the operating system to add more space to the file. During this request and the subsequent file growth, very little can happen in that file - and it typically occurs in the middle of a user transaction. Bad news!

The autogrowth increment might be defaulted to just 1 MB on your system, which means a single large transaction could trigger a series of sequential autogrowth events before it finally commits.  In the end, the argument over fixed autogrowth size versus percentage-based autogrowth settings is academic - a vigilant DBA should try to avoid autogrowth events in the first place.

But, while we're on the topic, 10% is a completely reasonable growth rate for data and log files for most databases. Depending on your available storage space, a fixed growth rate (100 MB-500 MB) may be more appropriate.
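If a fixed increment does make sense in your environment, the current settings are easy to review and change. A sketch - the database name, file name, and 256 MB increment below are just examples:

-- Review the current autogrowth setting for every file on the instance.
SELECT DatabaseName  = d.name
     , [FileName]    = mf.name
     , GrowthSetting = CASE WHEN mf.is_percent_growth = 1
                            THEN CONVERT(varchar(11), mf.growth) + '%'
                            ELSE CONVERT(varchar(11), mf.growth*8/1024) + ' MB'
                       END
FROM sys.master_files mf
INNER JOIN sys.databases d ON d.database_id = mf.database_id
ORDER BY d.name, mf.name;

-- Switch a log file to a fixed 256 MB growth increment (placeholder names).
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_Log', FILEGROWTH = 256MB);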

Instant File Initialization can also make a big difference in the time required to grow a database file (as well as to restore database backups). Note that it applies only to data files - log files must still be zero-initialized - and that it may need to be enabled at the virtual guest and SAN level for some virtual environments.
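Instant File Initialization requires the SQL Server service account to hold the Windows "Perform volume maintenance tasks" (SeManageVolumePrivilege) right. On older versions you have to verify that at the Windows level; if you happen to be on SQL Server 2016 SP1 or later (an assumption about your version), the service DMV reports it directly:

-- SQL Server 2016 SP1 and later expose whether instant file initialization
-- is in effect for the database engine service account.
SELECT servicename, service_account, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';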

Here's a script to view the available space in every database file on a SQL Server instance, using the hacky old undocumented sp_MSforeachdb stored procedure:
exec sp_MSforeachdb  'use [?]; 
SELECT 
  ''DatabaseName_____________'' = d.name -- the trailing underscores just pad the header to widen the SSMS results grid column
, Recovery = d.recovery_model_desc
, ''DatabaseFileName_______'' = df.name
, ''Location_______________________________________________________________________'' = df.physical_name
, df.File_ID
, FileSizeMB = CAST(size/128.0 as Decimal(9,2)) -- size is stored in 8 KB pages; divide by 128 for MB
, SpaceUsedMB = CAST(CAST(FILEPROPERTY(df.name, ''SpaceUsed'') AS int)/128.0 as Decimal(9,2))
, AvailableMB =  CAST(size/128.0 - CAST(FILEPROPERTY(df.name, ''SpaceUsed'') AS int)/128.0 as Decimal(9,2))
, ''Free%'' = CAST((((size/128.0) - (CAST(FILEPROPERTY(df.name, ''SpaceUsed'') AS int)/128.0)) / (size/128.0) ) * 100. as Decimal(9,2))
 FROM sys.database_files df
 cross apply sys.databases d
 where d.database_id = DB_ID() 
 and size > 0
'
Two important pieces of information we can get from the default trace when we look for autogrowth events are the time and duration of each event.  I've personally used these two pieces of information to prove that random application timeouts experienced by a client were caused by database file autogrowth events stalling their transactions for whole seconds while more space was added.

How far back does the default trace go? It depends on how much activity is being written to the default trace (which obviously needs to be enabled for this to work). The current active default trace file holds up to 20 MB of data, and five default trace files of 20 MB each are retained. This script only reads the latest file.
DECLARE @tracepath nvarchar(500)

SELECT 
 @tracepath = path 
 --select *
FROM sys.traces 
WHERE is_default = 1

--The trace automatically finds the _n rollover files; trim off the _nnn portion of the file name.
 select @tracepath = substring(@tracepath, 0, charindex('\log_', @tracepath,0)+4) + '.trc'

 print @tracepath

 SELECT 
   DBName               = g.DatabaseName
 , DBFileName           = mf.physical_name
 , FileType             = CASE mf.type WHEN 0 THEN 'Row' WHEN 1 THEN 'Log' WHEN 2 THEN 'FILESTREAM' WHEN 4 THEN 'Full-text' END
 , EventName            = te.name
 , EventGrowthMB        = convert(decimal(19,2), g.IntegerData*8/1024.)  -- IntegerData = number of 8 KB pages by which the file grew
 , EventTime            = g.StartTime
 , EventDurationSec     = convert(decimal(19,2), g.Duration/1000./1000.) -- Duration is reported in microseconds; convert to seconds
 , CurrentAutoGrowthSet = CASE
                            WHEN mf.is_percent_growth = 1
                            THEN CONVERT(varchar(11), mf.growth) + '%' 
                            ELSE CONVERT(varchar(30), convert(decimal(19,2), mf.growth*8./1024.)) + 'MB'
                          END
 , CurrentFileSizeMB    = convert(decimal(19,2), mf.size*8./1024.)
 , TraceFile            = @tracepath 
 FROM fn_trace_gettable(@tracepath, default) g
 inner join sys.trace_events te 
 on g.EventClass = te.trace_event_id
 inner join sys.master_files mf
 on mf.database_id = g.DatabaseID
 and g.FileName = mf.name
 WHERE te.name in ('Data File Auto Grow','Log File Auto Grow')
 order by g.StartTime desc
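If that query comes back empty on a busy server, make sure the default trace hasn't been turned off. A quick sanity check using the standard catalog views (with the sp_configure calls to re-enable it left commented out):

-- Is the default trace enabled? (1 = yes; it is on by default.)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'default trace enabled'

-- Where do the default trace files live, and how many are kept?
SELECT id, path, max_size, max_files
FROM sys.traces
WHERE is_default = 1

-- To re-enable it ('default trace enabled' is an advanced option):
-- EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
-- EXEC sp_configure 'default trace enabled', 1; RECONFIGURE;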

 



EDIT: There is a smarter way to search the trace _n rollover files; this has also been reflected in the "autogrow events.sql" file in the toolbox.
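For reference, here is one possible way to walk the _n rollover files - a sketch only, not necessarily what's in the toolbox script. It assumes the default log_n.trc naming, tries each of the (up to five) files that may still exist, and collects just the autogrowth events into a temp table:

DECLARE @tracepath nvarchar(500), @base nvarchar(500), @n int, @i int, @file nvarchar(500)

SELECT @tracepath = path FROM sys.traces WHERE is_default = 1

-- Split '...\log_123.trc' into the base path ending in 'log_' and the current file number.
SELECT @base = SUBSTRING(@tracepath, 1, CHARINDEX('\log_', @tracepath) + 4)
     , @n    = CONVERT(int, REPLACE(SUBSTRING(@tracepath, CHARINDEX('\log_', @tracepath) + 5, 20), '.trc', ''))

IF OBJECT_ID('tempdb..#autogrow') IS NOT NULL DROP TABLE #autogrow
CREATE TABLE #autogrow
( DatabaseName nvarchar(256) NULL
, [FileName]   nvarchar(256) NULL
, StartTime    datetime NULL
, Duration     bigint NULL
, IntegerData  bigint NULL
, EventClass   int NULL )

SET @i = CASE WHEN @n > 5 THEN @n - 4 ELSE 1 END -- at most five rollover files are retained

WHILE @i <= @n
BEGIN
 SET @file = @base + CONVERT(nvarchar(10), @i) + N'.trc'
 BEGIN TRY
  INSERT #autogrow (DatabaseName, [FileName], StartTime, Duration, IntegerData, EventClass)
  SELECT DatabaseName, [FileName], StartTime, Duration, IntegerData, EventClass
  FROM fn_trace_gettable(@file, 1)  -- read just this one rollover file
  WHERE EventClass IN (92, 93)      -- Data File Auto Grow, Log File Auto Grow
 END TRY
 BEGIN CATCH
  PRINT @file + N' no longer exists; skipping it.'
 END CATCH
 SET @i = @i + 1
END

SELECT * FROM #autogrow ORDER BY StartTime DESC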
