Is there a best number of rows at which a table should start using indexes?

An index is an on-disk structure associated with a table or view that speeds retrieval of rows from the table or view. An index contains keys built from one or more columns in the table or view. These keys are stored in a structure (B-tree) that enables SQL Server to find the row or rows associated with the key values quickly and efficiently.

I want to know whether there is a specific row count at which, once a table reaches it, we should start using an index.

The row count of a table does not dictate the use of an index. It's your query patterns that should dictate the need for creating indexes, so that data retrieval is as fast as possible.

The query optimizer relies on statistics to produce an optimal query plan.
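As a rough illustration, this is the sort of query you could run to see the statistics the optimizer works from (the table and statistics names here are placeholders, not from the question):

    -- Histogram and density information for a hypothetical dbo.Orders table
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');

    -- When the statistics were last updated and how many rows were sampled
    SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.Orders');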

From the SQL Server Index Design Guide:

The selection of the right indexes for a database and its workload is a complex balancing act between query speed and update cost.

Narrow indexes, or indexes with few columns in the index key, require less disk space and maintenance overhead.

Wide indexes, on the other hand, cover more queries. You may have to experiment with several different designs before finding the most efficient index. Indexes can be added, modified, and dropped without affecting the database schema or application design.
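To make the narrow-versus-wide distinction concrete, here is a hedged sketch (table and column names are hypothetical): a narrow index keys on a single column, while a wider covering index adds INCLUDE columns so certain queries can be answered from the index alone.

    -- Narrow index: small key, low disk and maintenance cost
    CREATE INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID);

    -- Wider covering index: a query such as
    --   SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = @id
    -- can be satisfied entirely from the index, at the cost of more space
    -- and more work on every INSERT/UPDATE
    CREATE INDEX IX_Orders_CustomerID_Covering
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue);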

You should use SQL Server's DMVs to tune your indexing strategy, or even use sp_BlitzIndex to get more insight.
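For example, a starting-point query against the index usage DMV might look like this (a sketch only; adapt the filtering to your workload):

    -- How existing indexes are actually used: reads (seeks/scans/lookups) vs. writes (updates)
    SELECT OBJECT_NAME(ius.object_id) AS table_name,
           i.name                     AS index_name,
           ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
    FROM sys.dm_db_index_usage_stats AS ius
    JOIN sys.indexes AS i
      ON i.object_id = ius.object_id AND i.index_id = ius.index_id
    WHERE ius.database_id = DB_ID()
    ORDER BY ius.user_updates DESC;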

Refer to:

  • Query Tuning Fundamentals: Density, Predicates, Selectivity, and Cardinality
  • Ten Common Threats to Execution Plan Quality

I don't know about a "best" number, but at a small row count there is no real value.

There is some overhead to using an index. Yes, an index seek is faster than a table scan, but the seek itself is not free, and index maintenance clearly has overhead.

If a table has a PK, then declare it as the PK, typically as the clustered index.

Consider a table of US states (50 rows):

    CREATE TABLE USStates (
        ID     tinyint IDENTITY PRIMARY KEY,  -- clustered PK
        Name   varchar(20),
        Region tinyint
    );

Region would be used to group states, like NE, SE, ...

I personally would never put indexes on Name or Region - a table scan is still very fast. Region would be a FK, but that does not automatically create an index (to my understanding). The whole table is only about 2 KB, so it fits on a single data page. If sorting on Name is used a lot, then yes, that index would be used, but I just don't think you could even measure the performance gain.

Over a million rows, then yes, start building indexes up front.

Between a thousand and a million rows, consider building indexes on a case-by-case basis.

Even at 10,000 rows there are going to be a lot of cases of obvious indexes. A column like AddDate that is not likely to change and is used a lot for search and sort, I would index and maintain (defragment). In a table with more than 10,000 rows that references State as a FK, I would index that column up front. But since you are asking the question, maybe wait and optimize for real-life queries.
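For instance (the table and column names below are hypothetical, just to show the shape of those obvious indexes):

    -- A date column that rarely changes but is searched and sorted on a lot
    CREATE INDEX IX_Customers_AddDate
        ON dbo.Customers (AddDate);

    -- A FK column referencing State; SQL Server does not create this index
    -- for you when the FK is declared, so add it yourself on a large table
    CREATE INDEX IX_Customers_StateID
        ON dbo.Customers (StateID);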

I would not want you to take the other extreme and put an index on every column just because it might be used. An index has overhead: it will slow down inserts and updates, and a highly fragmented index can be slower than a table scan.
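Checking and fixing fragmentation is routine maintenance; a minimal sketch, assuming a table and index named like the hypothetical example above:

    -- How fragmented are the indexes on this table?
    SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Customers'),
                                        NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

    -- Light-weight fix for moderate fragmentation
    ALTER INDEX IX_Customers_AddDate ON dbo.Customers REORGANIZE;

    -- Heavier fix (rebuilds the index) for badly fragmented indexes
    ALTER INDEX IX_Customers_AddDate ON dbo.Customers REBUILD;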

I get that a lot of users on this site want to optimize up front and have theoretical discussions. This is real-life advice for a newbie DBA.