SQL Server Count is slow

I've been working with SSMS for well over a decade and only in the past year found out that it can give you this information quickly and easily, thanks to this answer.

  1. Select the "Tables" folder from the database tree (Object Explorer)
  2. Press F7 or select View > Object Explorer Details to open Object Explorer Details view
  3. In this view you can right-click on the column header to choose the columns you want to see, including table space used, index space used and row count (a roughly equivalent query is sketched after this list).
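
I don't know the exact query SSMS runs for this view, but a roughly equivalent T-SQL sketch for the row-count column (using sys.dm_db_partition_stats; treat it as an approximation) would be:

SELECT s.name AS schema_name, t.name AS table_name,
       SUM(p.row_count) AS row_count
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
INNER JOIN sys.dm_db_partition_stats AS p ON p.[object_id] = t.[object_id]
WHERE p.index_id IN (0, 1)  -- heap or clustered index only, so rows aren't double-counted
GROUP BY s.name, t.name
ORDER BY s.name, t.name;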

Note that support for this in Azure SQL Database seems spotty at best - my guess is that the queries SSMS issues are timing out, so it only returns a handful of tables on each refresh; the highlighted table, however, always seems to be returned.


COUNT will do either a table scan or an index scan, so for a high number of rows it will be slow. If you do this operation frequently, the best approach is to keep the count in a separate table, as sketched below.
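
One way to do that (just a sketch - the counter table and trigger names here are made up, and TRUNCATE TABLE will bypass the trigger, so you'd have to re-seed after one):

CREATE TABLE dbo.TableRowCounts
(
    TableName sysname NOT NULL PRIMARY KEY,
    RowCnt    bigint  NOT NULL
);

-- seed it once with the real count
INSERT INTO dbo.TableRowCounts (TableName, RowCnt)
SELECT N'myTable', COUNT_BIG(*) FROM dbo.myTable;
GO

CREATE TRIGGER dbo.trg_myTable_RowCount
ON dbo.myTable
AFTER INSERT, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- adjust the stored count by however many rows this statement added/removed
    UPDATE dbo.TableRowCounts
    SET RowCnt = RowCnt
               + (SELECT COUNT_BIG(*) FROM inserted)
               - (SELECT COUNT_BIG(*) FROM deleted)
    WHERE TableName = N'myTable';
END;
GO

Reading the count is then a single-row lookup: SELECT RowCnt FROM dbo.TableRowCounts WHERE TableName = N'myTable';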

If, however, you do not want to do that, you can create a dummy index (one that will not be used by your queries) and query its number of items, something like:

select 
    sum(p.row_count) as row_count  -- sum across partitions, in case the table is partitioned
from sys.dm_db_partition_stats as p
inner join sys.indexes as i 
  on p.index_id = i.index_id
  and p.object_id = i.object_id
where i.name = 'your index'
  and i.object_id = object_id('dbo.myTable')  -- index names are only unique per table
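
The dummy index itself could be as simple as this (the index name and column are just placeholders - pick a narrow column so the index stays small):

create index ix_myTable_rowcount_only on dbo.myTable (some_narrow_column)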

I suggest creating a new index because an index that is not used by your queries will not get locked during other operations.

As Aaron Bertrand said, maintaining such an extra index might be more costly than using an already existing one. So the choice is yours.


A very close approximation (ignoring any in-flight transactions) would be:

SELECT SUM(p.rows) FROM sys.partitions AS p
  INNER JOIN sys.tables AS t
  ON p.[object_id] = t.[object_id]
  INNER JOIN sys.schemas AS s
  ON s.[schema_id] = t.[schema_id]
  WHERE t.name = N'myTable'
  AND s.name = N'dbo'
  AND p.index_id IN (0,1);

This will return much, much more quickly than COUNT(*), and if your table is changing quickly enough, it's not really any less accurate: if your table has changed between when you started your COUNT (and locks were taken) and when the result was returned (when locks were released and all the waiting write transactions were finally allowed to write to the table), is that number really any more valuable? I don't think so.

If you have some subset of the table you want to count (say, WHERE some_column IS NULL), you could create a filtered index on that column, and structure the WHERE clause one way or the other, depending on whether the subset is the exception or the rule (so create the filtered index on the smaller set). So, one of these two indexes:

CREATE INDEX IAmTheException ON dbo.table(some_column)
  WHERE some_column IS NULL;

CREATE INDEX IAmTheRule ON dbo.table(some_column)
  WHERE some_column IS NOT NULL;

Then you could get the count in a similar way using:

SELECT SUM(p.rows) FROM sys.partitions AS p
  INNER JOIN sys.tables AS t
  ON p.[object_id] = t.[object_id]
  INNER JOIN sys.schemas AS s
  ON s.[schema_id] = t.[schema_id]
  INNER JOIN sys.indexes AS i
  ON p.index_id = i.index_id
  AND p.[object_id] = i.[object_id]
  WHERE t.name = N'myTable'
  AND s.name = N'dbo'
  AND i.name = N'IAmTheException'; -- or N'IAmTheRule'
  -- (no index_id IN (0,1) filter here: a filtered nonclustered index has index_id > 1)

And if you want to know the opposite, you just subtract this count from the result of the first query above.
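
For example (just a sketch, reusing the table and index names from above):

SELECT
    (SELECT SUM(p.rows)
     FROM sys.partitions AS p
     INNER JOIN sys.tables AS t ON p.[object_id] = t.[object_id]
     INNER JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
     WHERE t.name = N'myTable' AND s.name = N'dbo'
     AND p.index_id IN (0,1))
  -
    (SELECT SUM(p.rows)
     FROM sys.partitions AS p
     INNER JOIN sys.indexes AS i
     ON p.[object_id] = i.[object_id] AND p.index_id = i.index_id
     WHERE i.[object_id] = OBJECT_ID(N'dbo.myTable')
     AND i.name = N'IAmTheException') AS rows_where_some_column_is_not_null;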


(How large is a "large amount of data"? I should have asked that in a comment first, but maybe the exec below already helps you out.)

On my dev machine (Oracle), running COUNT(*) against a static table (meaning no one has been reading/writing/updating it in quite a while, so contention is not an issue) with 200 million rows takes about 15 seconds. Considering the sheer amount of data, that is still quite fast (at least to me).

Since you said NOLOCK is not an option, you could consider

exec sp_spaceused 'myTable'

as well.

But this boils down to nearly the same thing as NOLOCK (it ignores contention and in-flight deletes/updates, as far as I know).
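
If you need that number programmatically rather than reading it off the result grid, you can capture the output (a sketch; the column list mirrors sp_spaceused's single-object result set, with loose varchar types to avoid exact-type assumptions):

DECLARE @t TABLE
(
    [name]     sysname,
    [rows]     varchar(50),
    reserved   varchar(50),
    data       varchar(50),
    index_size varchar(50),
    unused     varchar(50)
);

INSERT INTO @t
EXEC sp_spaceused 'myTable';

SELECT CONVERT(bigint, [rows]) AS approx_row_count FROM @t;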