What are logical reads in SQL Server, and how can I reduce their number?

Logical reads are, roughly speaking, how much data you read from the database. Let's take a small, contrived example:

select *
from
(
  select *
  from orders
  where client = 1234
) as o
where o.item = 9876;

Here you select all orders from client 1234, and only then keep those for item 9876. So (provided the optimizer doesn't see through this and rewrites your query internally) you read many more records in the first step than needed. Reduce logical reads (and the corresponding large intermediate result) by applying both criteria in one step:

select *
from orders
where client = 1234
and item = 9876;

(This may also affect physical reads, but doesn't necessarily have to. For instance, the first query may access 100 records and then reduce that to 10, whereas the second reads only those 10. But all 100 records may sit in one disk block, so both statements read one disk block, i.e. make one physical read. It can even be zero physical reads, by the way, in case the data happen to be in the DBMS cache already, i.e. in memory. This also tells us that physical reads can vary for a query, while logical reads remain the same as long as the query and the data are not altered.)
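You can watch both counters yourself with SET STATISTICS IO, which makes SQL Server report logical and physical reads per statement in the Messages tab. The table and values below are just the ones from the example above, and the numbers in the sample output are illustrative, not what you will necessarily see:

SET STATISTICS IO ON;

select *
from orders
where client = 1234
and item = 9876;

-- The Messages tab then shows something like:
-- Table 'orders'. Scan count 1, logical reads 3, physical reads 1, ...

SET STATISTICS IO OFF;

Run the same statement twice: the second run will typically show physical reads 0, because the pages are now in the buffer cache, while the logical reads stay the same.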


The Microsoft SQL Server documentation (Pages and Extents Architecture -> Reading Pages) has a good definition:

The I/O from an instance of the SQL Server Database Engine includes logical and physical reads. A logical read occurs every time the Database Engine requests a page from the buffer cache. If the page is not currently in the buffer cache, a physical read first copies the page from disk into the cache.

So, a logical read is when the query engine needs to read data. First, it looks in memory. If the page is already in SQL Server's memory, then it uses that. If it can't find it in memory, then that triggers a physical read and the data page is read from disk. A logical read without a subsequent physical read is a "cache hit," basically.

The buffer cache (also known as the buffer pool) is SQL Server's primary working memory for solving queries. When you set the amount of memory that SQL Server will use, you're controlling the size of the available buffer cache.
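If you're curious what's actually sitting in the buffer cache, you can inspect it through the sys.dm_os_buffer_descriptors DMV (this needs VIEW SERVER STATE permission; the column aliases are just my own names):

-- Count buffer-pool pages per database; each page is 8 KB
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;

This is just a way to see where your memory is going; it won't by itself tell you which queries are doing the reads.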

However, telling you what you need to do without seeing the query, knowing what the table contains, what the data look like, and how the data are indexed and organized is basically impossible.

Large numbers of logical reads may not necessarily be bad -- or, rather, not necessarily preventable. What's bad is an inordinate number of logical reads. If you're returning 3 rows of data, but the query engine had to scan 200 million rows of the table to do it, that's going to be very slow and you can probably improve that by rewriting the query or adding an index.

I would start by looking at how complex the queries in your stored procedure are. Notably, I'd look for missing indexes. If you're running SELECT * FROM BigTable WHERE ProductDate >= '01/01/2014', then I'd check that there is an index on ProductDate. If you're running SELECT * FROM BigTable ORDER BY ProductDate DESC, an index will still help, but you have to return the entire data set, so you end up reading the whole table anyway. Additionally, note that logical reads count page reads, not row reads, so if the matching ProductDate values are scattered evenly across the table's pages, you might need to read every page, or nearly every page, anyway.
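For the first query, a nonclustered index on ProductDate would let the engine seek to the matching rows instead of scanning the whole table. A minimal sketch (the index name is just an example; with SELECT * the engine will still do key lookups for the other columns, so for a wide table you'd weigh adding INCLUDE columns):

CREATE NONCLUSTERED INDEX IX_BigTable_ProductDate
    ON BigTable (ProductDate);

After adding it, compare the logical reads reported by SET STATISTICS IO before and after; a successful index change usually shows up as a large drop in that number.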

Beyond that, it could be that the statistics on the table are out of date. If you've added 20,000 rows to a table and SQL Server still thinks there are only 2,000 there, that will completely throw off the query planning.
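You can check how stale the statistics are and refresh them manually; sys.stats and STATS_DATE are built in, and BigTable here is just the example table name from above:

-- When were the statistics on BigTable last updated?
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('BigTable');

-- Rebuild them if they're stale
UPDATE STATISTICS BigTable;

Normally auto-update statistics takes care of this, but after a big bulk load it can lag behind, and a manual UPDATE STATISTICS is a cheap thing to try.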