This script is database-specific. Rather than using the server-wide DMVs alone, it joins to INFORMATION_SCHEMA.TABLES to generate index create scripts. This is a very fast way to enter an environment and take a peek at the indexing situation, something I do regularly as a DBA consultant.
Pardon blogspot's inane raw code posting ability.
SELECT improv = migs.avg_total_user_cost * (migs.avg_user_impact / 50)
, create_index_statement = 'CREATE NONCLUSTERED INDEX IDX_NC_' + t.table_name
+ replace(replace(replace(ISNULL(mid.equality_columns,'') , '], [' ,'_'),'[','_'),']','')
+ replace(replace(replace(ISNULL(mid.inequality_columns,''), '], [' ,'_'),'[','_'),']','')
+ ' ON ' + statement
+ ' (' + ISNULL (mid.equality_columns,'')
+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
+ ISNULL (mid.inequality_columns, '') + ')'
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '')
, unique_compiles, migs.user_seeks, migs.user_scans, last_user_seek, avg_total_user_cost, avg_user_impact, mid.equality_columns, mid.inequality_columns, mid.included_columns, t.table_catalog
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid
ON mig.index_handle = mid.index_handle
INNER JOIN INFORMATION_SCHEMA.tables t
on t.table_name = replace(replace(substring(substring(statement, charindex('.', statement)+2, len(statement)), charindex('.', substring(statement, charindex('.', statement)+1, len(statement))), len(statement)),'[',''),']','')
and t.table_schema = replace(replace(substring( substring( statement , charindex('.', statement)+2 , len(statement) ), 0, charindex('.', substring( statement , charindex('.', statement)+2 , len(statement) ) )),'[',''),']','')
and db_name() = t.table_catalog
and db_ID() = mid.database_id
WHERE 1 = 1
--AND (datediff(week, last_user_seek, getdate())) <> 1
AND migs.avg_total_user_cost * (migs.avg_user_impact / 75) > 5
AND migs.avg_user_impact > 50
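To make the output concrete: for a hypothetical table Sales.OrderDetail with an equality seek on ProductID, an inequality predicate on OrderDate, and CustomerID suggested as an included column, the create_index_statement column would emit something along these lines (the database, table, and column names here are invented purely for illustration):

```sql
-- Hypothetical example of what the create_index_statement column produces;
-- table and column names are made up for illustration.
CREATE NONCLUSTERED INDEX IDX_NC_OrderDetail_ProductID_OrderDate
ON [MyDb].[Sales].[OrderDetail] ([ProductID], [OrderDate])
INCLUDE ([CustomerID])
```

Note how the bracketed column names from equality_columns and inequality_columns get flattened into underscores for the index name, per the naming convention below.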
- The improv calc up top simply evaluates the relative value of a potential index against the others that show up in the report. It's not terribly scientific, but it weighs usage and value.
- The WHERE clause items can be used depending on the situation. If your server has been running for a while and you'd like to see what indexing needs recent changes have generated, use the date filter. Have a lot of ad hoc queries? Filter on unique_compiles > 1. The third WHERE clause is the same as the improv value above and helps me filter out things that are too small to care about. Lastly, I filter out anything that won't have at least a 50% impact; oftentimes I'll set this last value much higher.
- The join to INFORMATION_SCHEMA.TABLES generates a proper index create statement. It follows a naming convention I prefer: IDX_NC_ for nonclustered indexes, followed by the table name and the key fields. For covering indexes, I'll remove the key fields from the name and just put "covering". Your naming convention for indexes is probably different.
- Before creating any indexes from the missing index DMVs, do a little research on them, and keep in mind that they reset when the server restarts. It may be useful to retain these over time, spanning multiple resets.
- Oftentimes, you'll see many of the rows in this result overlap. The INCLUDEs might be in a different order, or one suggested index will have all but one of the fields another suggestion has. Intelligently combine these, keeping in mind that order DOES matter in the key fields.
- Lastly, once you've examined the existing indexing on the table, use the creation script provided in the create_index_statement column.
- This script, aside from a few tweaks here and there, is hardly an original creation of this blog. As part of your research on missing index DMVs, don't miss: http://blogs.msdn.com/bartd/archive/2007/07/19/are-you-using-sql-s-missing-index-dmvs.aspx
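Since the missing index DMVs are cleared on every instance restart, one way to retain the suggestions across resets, as noted above, is to snapshot the raw DMV rows into a permanent table on a schedule (a SQL Agent job works fine). A minimal sketch follows; the DBA utility database and the table name are assumptions, not part of the original script:

```sql
-- Minimal history table plus snapshot insert; the DBA database and
-- missing_index_history table names are assumptions for illustration.
IF OBJECT_ID('DBA.dbo.missing_index_history') IS NULL
    CREATE TABLE DBA.dbo.missing_index_history
    (
        captured_at         datetime       NOT NULL DEFAULT (getdate()),
        database_id         int            NOT NULL,
        statement           nvarchar(4000) NULL,
        equality_columns    nvarchar(4000) NULL,
        inequality_columns  nvarchar(4000) NULL,
        included_columns    nvarchar(4000) NULL,
        unique_compiles     bigint         NULL,
        user_seeks          bigint         NULL,
        user_scans          bigint         NULL,
        last_user_seek      datetime       NULL,
        avg_total_user_cost float          NULL,
        avg_user_impact     float          NULL
    );

-- Capture the current DMV contents; run on a schedule so the
-- suggestions survive instance restarts.
INSERT INTO DBA.dbo.missing_index_history
    (database_id, statement, equality_columns, inequality_columns,
     included_columns, unique_compiles, user_seeks, user_scans,
     last_user_seek, avg_total_user_cost, avg_user_impact)
SELECT mid.database_id, mid.statement, mid.equality_columns,
       mid.inequality_columns, mid.included_columns,
       migs.unique_compiles, migs.user_seeks, migs.user_scans,
       migs.last_user_seek, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid
    ON mig.index_handle = mid.index_handle;
```

With history accumulating, you can compare snapshots over time and weigh suggestions that keep reappearing more heavily than one-off spikes.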
"As I leave you I want you to know... just think how much you'll be missing." -Richard Nixon