Today I got a call from a client asking me to tune one of their queries. This is a regular occurrence for any DBA.
The thing about this query: it had to return only 2147 rows, but it was taking far too long to do so. I waited an hour for it to complete, then had to stop it, by which point it had returned only 15 rows. The query consisted of multiple joins, using just about every join type that exists.
So, as any DBA would, I tried running the query with query statistics and "Include Actual Execution Plan" enabled. But it was taking far too long to execute, so I had to rely on the estimated execution plan to see which operators it was using. Nothing in the plan looked wrong; based on the estimates, everything appeared fine.
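For reference, you can get the estimated plan from T-SQL without executing the query at all, using SET SHOWPLAN_XML (the statement below it is compiled but never run). This is a sketch; the SELECT is a placeholder for the actual problem query.

```sql
-- Return the estimated plan as XML without executing the statement
SET SHOWPLAN_XML ON;
GO

-- Placeholder for the real query being tuned
SELECT *
FROM dbo.SomeLargeTable;  -- hypothetical table name
GO

SET SHOWPLAN_XML OFF;
GO
```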
What can you do when you have to fix a query that you cannot even run?
I have set up index rebuilds and statistics updates on all the servers, with a frequency that depends on each server's load. So it was my mistake to assume that the statistics and fragmentation were fine.
I use the query below to check when statistics were last updated:
--Script to check the stats date for the current database
SELECT SCHEMA_NAME(o.Schema_ID) + N'.' + o.[NAME] AS [Object Name],
       o.[type_desc] AS [Object Type],
       i.[name] AS [Index Name],
       STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date],
       s.auto_created,
       s.no_recompute,
       s.user_created,
       s.is_incremental,
       s.is_temporary,
       st.row_count,
       st.used_page_count,
       sp.modification_counter,
       sp.rows
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON o.[object_id] = i.[object_id]
INNER JOIN sys.stats AS s WITH (NOLOCK)
    ON i.[object_id] = s.[object_id]
    AND i.index_id = s.stats_id
INNER JOIN sys.dm_db_partition_stats AS st WITH (NOLOCK)
    ON o.[object_id] = st.[object_id]
    AND i.[index_id] = st.[index_id]
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE o.[type] IN ('U', 'V')
  AND st.row_count > 0
ORDER BY STATS_DATE(i.[object_id], i.index_id) DESC
OPTION (RECOMPILE);
All the statistics had indeed been updated around midnight, but the modification counter was very high as a percentage of total rows. The next thing I checked was the fragmentation in the database, using the query below.
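To make the "percent of total rows modified" judgment easier, you can compute it directly from sys.dm_db_stats_properties. This is a small sketch, not part of my standard script:

```sql
-- Sketch: modification counter as a percentage of rows sampled by each statistic
SELECT OBJECT_NAME(s.[object_id]) AS [Object Name],
       s.[name] AS [Stats Name],
       sp.modification_counter,
       sp.rows,
       CAST(100.0 * sp.modification_counter / NULLIF(sp.rows, 0)
            AS decimal(10, 1)) AS [Pct Modified]
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY [Pct Modified] DESC;
```

A high percentage here means the statistics may be stale even if their last-updated date is recent, which is exactly what happened in this case.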
--Query to check the fragmentation level for all the indexes having 1000+ pages
SELECT DB_NAME(ps.database_id) AS [Database Name],
       SCHEMA_NAME(o.[schema_id]) AS [Schema Name],
       OBJECT_NAME(ps.[object_id]) AS [Object Name],
       i.[name] AS [Index Name],
       ps.index_id,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,
       ps.fragment_count,
       ps.page_count,
       i.fill_factor,
       i.has_filter,
       i.filter_definition,
       i.[allow_page_locks]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, N'LIMITED') AS ps
INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON ps.[object_id] = i.[object_id]
    AND ps.index_id = i.index_id
INNER JOIN sys.objects AS o WITH (NOLOCK)
    ON i.[object_id] = o.[object_id]
WHERE ps.database_id = DB_ID()
  AND ps.page_count > 1000
ORDER BY ps.avg_fragmentation_in_percent DESC
OPTION (RECOMPILE);
The query above showed that the indexes were highly fragmented. When I asked about the application's usage, I learned that this database is an ODS that gets loaded once every month, and a load had run just the previous day.
I updated the statistics and rebuilt/reorganized the indexes (I use Ola Hallengren's scripts for this). Once everything was updated, the execution time dropped from over an hour to about 1 second.
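If you don't have Ola Hallengren's scripts handy, the ad hoc equivalents look like this. This is a sketch against a hypothetical table dbo.MyTable; in practice, reorganize lightly fragmented indexes and rebuild heavily fragmented ones:

```sql
-- Reorganize: lighter, online operation for lower fragmentation levels
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;  -- hypothetical table name

-- Rebuild: heavier operation for high fragmentation
-- (also updates the statistics on the rebuilt indexes)
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Update all statistics on the table, reading every row
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```

Note that a rebuild already refreshes the statistics for the indexes it touches, so the explicit UPDATE STATISTICS mainly matters for column statistics and reorganized indexes.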
So next time, I suggest you check the statistics and indexes first (and I will do the same myself), even if you have all the maintenance jobs in place, especially for queries that you cannot even execute to get the actual execution plan.
I hope it helps!