Field = Parameter OR Parameter IS NULL Pattern

They’re all excellent. Really. They all have the same effect of putting two plans in the cache, which is what you want.

As you get more and more parameters, you will find the Dynamic SQL option is clearest, even though it looks scarier to beginners.

If this were a function, I’d suggest avoiding multi-statement options so that the query optimizer could do its stuff more nicely.
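For reference, here is a minimal sketch of what the dynamic SQL option can look like, assuming the question's single table and column (the procedure name [dbo].[GetDynamic] is mine): the predicate is appended only when a value is supplied, and sys.sp_executesql keeps the statement parameterized, so each distinct statement text gets its own cached plan.

CREATE OR ALTER PROCEDURE [dbo].[GetDynamic] @Parameter INT = NULL AS BEGIN
    DECLARE @sql NVARCHAR(MAX) = N'SELECT [Field] FROM [dbo].[Table]';

    --Append the predicate only when a value was supplied; each distinct
    --statement text gets its own plan in the cache
    IF @Parameter IS NOT NULL
        SET @sql += N' WHERE [Field] = @Parameter';

    --sp_executesql allows a declared parameter to go unreferenced, so the
    --same call works for both statement shapes
    EXEC sys.sp_executesql @sql, N'@Parameter INT', @Parameter = @Parameter;
END;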


If you are on a reasonably recent build of SQL Server, one other option that could be considered is:

SELECT [Field]
FROM   [dbo].[Table]
WHERE  [Field] = @Parameter
        OR @Parameter IS NULL
OPTION (RECOMPILE); 

This gets an optimal plan for the runtime value on every execution, at the expense of a compilation each time.

Your "Conditioned select" option is still vulnerable to parameter sniffing. If the procedure is first executed when @Parameter is null then the branch with the [Field] = @Parameter predicate will estimate 1 rows (rounded up from the 0 expected for a =NULL predicate).

In the specific example in your question, where you are selecting a single column and it is the same column you are filtering by, this is unlikely to present a problem, but it can in other cases.

For example, with the following repro the first call to [dbo].[Get] 1 takes 333,731 logical reads, because it reuses an inappropriate plan with key lookups that was compiled for the initial NULL execution. When the plan is removed from the cache and recompiled with 1 passed first, the logical reads fall to 4,330:

DROP TABLE IF EXISTS [Table];

GO

CREATE TABLE [Table]
(
[Field1]  INT INDEX IX,
[Field2]  INT,
[Field3]  INT
);

--Load ~1,000,000 rows of random data; [Field1] is limited to the values 0, 1 and 2
INSERT INTO [Table]
SELECT TOP 1000000 CRYPT_GEN_RANDOM(1)%3, CRYPT_GEN_RANDOM(4), CRYPT_GEN_RANDOM(4)
FROM sys.all_objects o1, sys.all_objects o2;

GO

--"Conditioned select": both branches are compiled together on the first
--execution, so the sniffed @Parameter value shapes the plan for both
CREATE OR ALTER PROCEDURE [dbo].[Get] @Parameter INT = NULL AS BEGIN;
    IF(@Parameter IS NOT NULL) BEGIN;
        SELECT *
        FROM [dbo].[Table]
        WHERE [Field1] = @Parameter;
    END;
    ELSE BEGIN;
        SELECT *
        FROM [dbo].[Table];
    END;
END;

GO

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

--First execution sniffs NULL and caches a key lookup plan for the [Field1] = @Parameter branch
EXEC [dbo].[Get];

--Reuses the cached plan: 333,731 logical reads
EXEC [dbo].[Get] 1;

DECLARE @plan_handle VARBINARY(64) = (SELECT plan_handle FROM sys.dm_exec_procedure_stats WHERE object_id = OBJECT_ID('[dbo].[Get]'));

--Remove the plan from the cache
DBCC FREEPROCCACHE (@plan_handle);

--Re-execute it with NOT NULL passed first: the fresh plan scans, and logical reads fall to 4,330
EXEC [dbo].[Get] 1;

Based on the previous answers and comments from Aaron Bertrand, Martin Smith, and Rob Farley, I wanted to put together a pro/con list for each approach, including the additional OPTION(RECOMPILE) approach:


Conditioned selects in same stored procedure

From Martin Smith's response:

Your "Conditioned select" option is still vulnerable to parameter sniffing. If the procedure is first executed when @Parameter is null then the branch with the [Field] = @Parameter predicate will estimate 1 rows (rounded up from the 0 expected for a =NULL predicate).

  • No recompile cost.
  • Plan cache reuse for every statement and the stored procedure.
  • Cached plans are vulnerable to parameter sniffing even when there is no significant variance in the result set when @Parameter is NOT NULL.
  • Does not scale well administratively as the number of parameters increases.
  • Intellisense on all T-SQL.

Dynamic SQL within stored procedure

From Rob Farley:

As you get more and more parameters, you will find the Dynamic SQL option is clearest, even though it looks scarier to beginners.

  • No recompile cost.
  • Plan cache reuse for every statement and the stored procedure.
  • Cached plans are vulnerable to parameter sniffing only when there is significant variance in the result set when @Parameter is NOT NULL.
  • Scales well administratively as the number of parameters increases.
  • Does not provide Intellisense on all T-SQL.

Separate stored procedures

  • No recompile cost.
  • Plan cache reuse for every statement and the stored procedure.
  • Cached plans are vulnerable to parameter sniffing only when there is significant variance in the result set when @Parameter is NOT NULL.
  • Does not scale well administratively as the number of parameters increases.
  • Intellisense on all T-SQL.
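
For completeness, a minimal sketch of the separate-stored-procedures approach (all procedure names here are illustrative): each query lives in its own module and therefore compiles its own plan, with a thin wrapper dispatching on the parameter, so the branches cannot sniff each other's values.

CREATE OR ALTER PROCEDURE [dbo].[Get_All] AS
    SELECT [Field] FROM [dbo].[Table];
GO

CREATE OR ALTER PROCEDURE [dbo].[Get_ByField] @Parameter INT AS
    SELECT [Field] FROM [dbo].[Table] WHERE [Field] = @Parameter;
GO

--Thin dispatcher: each sub-procedure is compiled on its own first
--execution with its own sniffed value
CREATE OR ALTER PROCEDURE [dbo].[Get_Dispatch] @Parameter INT = NULL AS
    IF @Parameter IS NULL
        EXEC [dbo].[Get_All];
    ELSE
        EXEC [dbo].[Get_ByField] @Parameter = @Parameter;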

OPTION(RECOMPILE)

From Martin Smith:

This gets an optimal plan for the runtime value on every execution, at the expense of a compilation each time.

  • CPU cost for recompile.
  • No plan cache reuse for statements with OPTION(RECOMPILE); only the stored procedure itself and statements without the hint are cached and reused.
  • Scales well administratively as the number of parameters increases.
  • Not vulnerable to parameter sniffing.
  • Intellisense on all T-SQL.

My Personal Takeaway

If there is no significant variance in the result set across different scalar values of @Parameter, Dynamic SQL is a top performer, has the least system overhead, and is only marginally worse with regard to administrative overhead than OPTION(RECOMPILE). In more complex scenarios, where variance in parameter values can cause significant changes in result sets, using Dynamic SQL with conditional inclusion or exclusion of OPTION(RECOMPILE) will be the best overall performer. Here is a link to Aaron Bertrand's article detailing the approach: https://blogs.sentryone.com/aaronbertrand/backtobasics-updated-kitchen-sink-example/
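
A minimal sketch of that combined approach (the @Recompile switch and the procedure name are my own illustration, not from the article): the hint is appended only when the caller, or some heuristic, expects the parameter value to skew the result set enough to warrant a fresh compile.

CREATE OR ALTER PROCEDURE [dbo].[Get_Flexible]
    @Parameter INT = NULL,
    @Recompile BIT = 0 --hypothetical switch: 1 for values known to skew the result set
AS BEGIN
    DECLARE @sql NVARCHAR(MAX) = N'SELECT [Field] FROM [dbo].[Table]';

    IF @Parameter IS NOT NULL
        SET @sql += N' WHERE [Field] = @Parameter';

    --Pay the compile cost only when it is likely to change the plan
    IF @Recompile = 1
        SET @sql += N' OPTION (RECOMPILE)';

    EXEC sys.sp_executesql @sql, N'@Parameter INT', @Parameter = @Parameter;
END;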