SQL Server performance sudden degradation

From MSDN:

"Insert Operations Occur on Ascending or Descending Key Columns Statistics on ascending or descending key columns, such as IDENTITY or real-time timestamp columns, might require more frequent statistics updates than the query optimizer performs. Insert operations append new values to ascending or descending columns. The number of rows added might be too small to trigger a statistics update. If statistics are not up-to-date and queries select from the most recently added rows, the current statistics will not have cardinality estimates for these new values. This can result in inaccurate cardinality estimates and slow query performance.

For example, a query that selects from the most recent sales order dates will have inaccurate cardinality estimates if the statistics are not updated to include cardinality estimates for the most recent sales order dates.

After Maintenance Operations

Consider updating statistics after performing maintenance procedures that change the distribution of data, such as truncating a table or performing a bulk insert of a large percentage of the rows. This can avoid future delays in query processing while queries wait for automatic statistics updates."

You might run "EXEC sp_updatestats" from time to time as a scheduled job, or use the STATS_DATE function on all objects to see when their statistics were last updated; if too much time has passed since then, run UPDATE STATISTICS for that particular object. In my experience, even with automatic statistics enabled we are still forced to update stats manually from time to time, because of insert operations that didn't trigger the automatic update.
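For a single object, the check and the fix are short; a minimal sketch (dbo.SalesOrder is a hypothetical table name), before the fuller script below:

    -- when were the statistics on this table last updated?
    select name as StatsName, STATS_DATE(object_id, stats_id) as LastUpdated
    from sys.stats
    where object_id = OBJECT_ID('dbo.SalesOrder');

    -- refresh them manually if they look stale
    update statistics dbo.SalesOrder;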

To add my personal code (used in a weekly job that builds dynamic statements for stats update):

select distinct
        'update statistics [' + stats.SchemaName + '].[' + stats.TableName + ']'
            -- sample large tables rather than scanning them in full
            + case when stats.RowCnt > 50000 then ' with sample 30 percent;'
              else ';' end
        as UpdateStatement
    from (
        select
            ss.name SchemaName,
            so.name TableName,
            so.id ObjectId,
            st.name AS StatsName,
            STATS_DATE(st.object_id, st.stats_id) AS LastStatisticsUpdateDate,
            si.RowModCtr, -- rows modified since the last statistics update
            -- table row count; 0 becomes 1 so the division below cannot fail
            (select case si2.RowCnt when 0 then 1 else si2.RowCnt end
             from sysindexes si2
             where si2.id = si.id and si2.indid in (0, 1)) RowCnt
        from sys.stats st
            join sysindexes si on st.object_id = si.id and st.stats_id = si.indid
            join sysobjects so on so.id = si.id and so.xtype = 'U' -- user tables only
            join sys.schemas ss on ss.schema_id = so.uid
    ) stats
    where cast(stats.RowModCtr as float) / cast(stats.RowCnt as float) * 100 >= 10 -- more than 10% of the rows have changed
    or ( -- or statistics not updated for more than 3 months (and the table is not empty)
        datediff(month, stats.LastStatisticsUpdateDate, getdate()) >= 3
        and stats.RowCnt > 0
    )

Here I get all objects that either haven't had their statistics updated for more than 3 months, or have had more than 10% of their rows changed since the last statistics update.
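The query above only builds the commands; the job then runs each one. A minimal sketch of that execution step, assuming the results were first inserted into a temp table called #UpdateStatements (that name is mine, not part of the job):

    -- run each generated 'update statistics' command in turn
    declare @stmt nvarchar(max);
    declare stmt_cursor cursor local fast_forward for
        select UpdateStatement from #UpdateStatements;
    open stmt_cursor;
    fetch next from stmt_cursor into @stmt;
    while @@FETCH_STATUS = 0
    begin
        exec sp_executesql @stmt;
        fetch next from stmt_cursor into @stmt;
    end
    close stmt_cursor;
    deallocate stmt_cursor;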


If your top wait is SOS_SCHEDULER_YIELD, then it would appear you have some pressure on CPU. But this could be the result of something else, such as a design that is no longer sufficient for your queries. I know you said that you are only adding one day's worth of data, but you could have hit a tipping point.
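To confirm what your top waits actually are, a quick ranking from sys.dm_os_wait_stats works; a sketch (the excluded benign wait types are just a starting list, not exhaustive):

    -- top waits since the last service restart
    select top 10
        wait_type,
        wait_time_ms / 1000.0 as wait_time_sec,
        signal_wait_time_ms / 1000.0 as signal_wait_sec, -- time spent waiting on the runnable queue (CPU)
        waiting_tasks_count
    from sys.dm_os_wait_stats
    where wait_type not in (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH')
    order by wait_time_ms desc;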

How are your queries being issued? Is it dynamic SQL? Are you using stored procedures? Are you using sp_executesql? Is it possible that you have a case of parameter sniffing? What does your db design look like? What are the PK and FK relationships?
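If parameter sniffing is a suspect, one quick test is to force a fresh compile per execution and see whether the plan improves; a sketch (dbo.Orders and @CustomerId are hypothetical names):

    -- recompiling per call sidesteps a sniffed plan; if this runs fast, sniffing is likely
    exec sp_executesql
        N'select * from dbo.Orders where CustomerId = @CustomerId option (recompile);',
        N'@CustomerId int',
        @CustomerId = 42;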

Do you have an example of a good plan? If you are able to determine a good plan, you could use plan guides to force the query to execute in a specific way.
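As a sketch of what a plan guide looks like (the statement text, parameter, and hint are placeholders; for a SQL-type guide the @stmt text must match the submitted statement exactly):

    -- attach a hint to a specific parameterized statement without touching the application
    exec sp_create_plan_guide
        @name = N'Guide_RecentOrders',
        @stmt = N'select * from dbo.Orders where OrderDate >= @StartDate;',
        @type = N'SQL',
        @module_or_batch = NULL,
        @params = N'@StartDate datetime',
        @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';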

Can you give an example of a good plan gone bad?

Lastly, go grab a copy of sp_WhoIsActive (http://whoisactive.com/) from Adam Machanic and use it to learn more about the queries that are running. And if you want to capture the output from sp_WhoIsActive, see http://www.littlekendra.com/2011/02/01/whoisactive/
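Once installed, a typical call looks like this (@get_plans is one of its documented parameters and pulls the plan for each active request):

    -- show currently running requests with their query plans
    exec sp_WhoIsActive @get_plans = 1;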


My guess is that one or more of your tables have grown large enough that they no longer hit the 20% of changes needed to mark the current statistics as stale, so Auto Update Statistics never kicks in, yet there are enough inserts and updates that fresh statistics would help a lot. (The classic threshold is roughly 500 rows plus 20% of the table's rows, so a 10-million-row table needs about 2,000,500 modifications before an automatic update fires.) I found the same thing recently in a particular environment after upgrading from SQL 2000 to SQL 2008.

In addition to the other sites mentioned in the answers above, I would suggest checking out the following online resources.

1) Red-Gate has a number of free ebooks available for download, including "SQL Server Statistics" by Holger Schmeling, where you'll find the following quote:

http://www.red-gate.com/our-company/about/book-store/

"tables with more than 500 rows at least 20% of a column’s data had to be changed in order to invalidate any linked statistics"

2) SQL Sentry has a free Plan Explorer tool that helps track down issues within a SQL plan, such as an estimated row count that is far higher or lower than the actual number of rows for a given table in the query. Just save the actual execution plan from SSMS and then walk through the different parts of the plan using Plan Explorer. The same information is available in SSMS through the graphical execution plan, but the SQL Sentry tool makes it much easier to see.

http://www.sqlsentry.com/plan-explorer/sql-server-query-view.asp

3) Check the statistics update dates yourself for the tables in the queries you care most about, using STATS_DATE(). You can find a quick query to get the oldest stats in the following discussion.

http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
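A minimal version of that kind of query, ordering every statistic on user tables by age:

    -- oldest statistics first, across all user tables in the current database
    select object_name(st.object_id) as TableName,
           st.name as StatsName,
           STATS_DATE(st.object_id, st.stats_id) as LastUpdated
    from sys.stats st
    where objectproperty(st.object_id, 'IsUserTable') = 1
    order by LastUpdated asc;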

I hope this helps!

I think you'll especially enjoy the book from Red-Gate!

-Jeff