Dealing with CXPACKET waits - setting cost threshold for parallelism

CXPACKET is never a cause; it gets all the blame, but it's always a symptom of something else. You need to catch these queries in the act and figure out what "something else" is. It might be different from query to query, and turning off parallelism altogether is - as you've suggested - unnecessary overkill in most cases. But it is often the least amount of work, which is why it is such a prevalent "fix."
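If the "something else" turns out to be trivially cheap queries going parallel, a gentler alternative to disabling parallelism is raising the `cost threshold for parallelism` instance setting, so only genuinely expensive plans are eligible. A minimal sketch; the value 50 is only a commonly cited starting point, not a recommendation for your workload:

```sql
-- Raise the cost threshold so only more expensive plans go parallel.
-- 50 is illustrative; test against your own workload before settling on a value.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```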

If you can get an actual plan for a query that seems to be responsible for high CXPACKET waits, load it into SentryOne Plan Explorer. There's usually a reason behind this; we show which parallel operations led to thread skew, and you can easily correlate that to estimates that are off (we highlight operations with estimates that are off by at least a certain threshold). Usually the underlying problem is really bad/out-of-date (or unavailable) statistics.
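If you suspect stale statistics on a table feeding the skewed operator, you can check how fresh they are with `sys.dm_db_stats_properties`. A sketch; `dbo.YourTable` is a placeholder for the table you identified in the plan:

```sql
-- Check when each statistics object was last updated and how many
-- rows have changed since ('dbo.YourTable' is a placeholder).
SELECT s.name              AS StatsName,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE s.[object_id] = OBJECT_ID(N'dbo.YourTable');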

Unfortunately, what you'll find via sys.dm_exec_cached_plans are estimated plans. They won't tell you whether the plan actually went parallel when it was used, because the actual plan is not what's cached. You might expect to see both a serial and a parallel plan cached for the same query, but that's not how SQL Server handles plans that may go parallel at runtime. (Lots of information about that here.)
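That said, you can at least find which cached plans are *eligible* to run in parallel by inspecting the plan XML for parallel operators. A sketch, assuming the standard showplan XML namespace; remember this only reflects the cached, estimated shape of the plan:

```sql
-- Find cached plans that contain at least one parallel operator.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts,
       cp.objtype,
       st.[text],
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//RelOp[@Parallel="1"]') = 1;
```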


If you wish to see the execution plan of a query that is currently running, first get its plan handle:

SELECT plan_handle FROM sys.dm_exec_requests WHERE session_id = [YourSPID];

Then pass the resulting plan_handle into this query:

SELECT query_plan FROM sys.dm_exec_query_plan([plan_handle from above]);

That will show you the execution plan SQL Server is using for that query, and you can use it to see which thread you are waiting on.
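The two steps above can also be combined into one query, and you can see the individual waits per worker thread via `sys.dm_os_waiting_tasks`, which shows which workers are actually sitting in CXPACKET. A sketch; `[YourSPID]` is a placeholder for the session you are investigating:

```sql
-- Plan for a running session in one step ([YourSPID] is a placeholder).
SELECT er.session_id, qp.query_plan
FROM sys.dm_exec_requests AS er
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) AS qp
WHERE er.session_id = [YourSPID];

-- Per-thread waits for that session, to see which workers wait in CXPACKET.
SELECT wt.exec_context_id, wt.wait_type, wt.wait_duration_ms, wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id = [YourSPID];
```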

I have also found that turning off hyper-threading drastically reduced my CXPACKET wait times.

Hope that helps.


The above answer by Aaron is correct.

I'd just like to add that, if you're not already using the SQL Server Performance Dashboard Reports and the built-in Data Collector, you should start.

You could also take the following query, and modify it as you see fit:

DECLARE @MinExecutions int; 
SET @MinExecutions = 5 

SELECT EQS.total_worker_time AS TotalWorkerTime 
      ,EQS.total_logical_reads + EQS.total_logical_writes AS TotalLogicalIO 
      ,EQS.execution_count As ExeCnt 
      ,EQS.last_execution_time AS LastUsage 
      ,EQS.total_worker_time / EQS.execution_count as AvgCPUTimeMiS 
      ,(EQS.total_logical_reads + EQS.total_logical_writes) / EQS.execution_count  
       AS AvgLogicalIO 
      ,DB.name AS DatabaseName 
      ,SUBSTRING(EST.text 
                ,1 + EQS.statement_start_offset / 2 
                ,(CASE WHEN EQS.statement_end_offset = -1  
                       THEN LEN(convert(nvarchar(max), EST.text)) * 2  
                       ELSE EQS.statement_end_offset END  
                 - EQS.statement_start_offset) / 2 
                ) AS SqlStatement 
      -- Optional: include the query plan; uncomment the next line to show it, but the query will take much longer 
      --,EQP.[query_plan] AS [QueryPlan] 
FROM sys.dm_exec_query_stats AS EQS 
     CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) AS EST 
     CROSS APPLY sys.dm_exec_query_plan(EQS.plan_handle) AS EQP 
     LEFT JOIN sys.databases AS DB 
         ON EST.dbid = DB.database_id      
WHERE EQS.execution_count > @MinExecutions 
      AND EQS.last_execution_time > DATEADD(MONTH, -1, GETDATE()) 
ORDER BY AvgLogicalIO DESC 
        ,AvgCPUTimeMiS DESC