How does SQL Server determine precision / scale?

Determining the precision and scale that result from expressions is a rat's nest, and I don't think anyone understands the exact rules in every scenario, especially when mixing decimal (or float!) and int. See this answer by gbn.
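
For reference, the underlying formulas are documented in the "Precision, Scale, and Length (Transact-SQL)" topic. For two decimal operands e1 (precision p1, scale s1) and e2 (precision p2, scale s2):

e1 + e2:  precision = max(s1, s2) + max(p1 - s1, p2 - s2) + 1,   scale = max(s1, s2)
e1 * e2:  precision = p1 + p2 + 1,                               scale = s1 + s2
e1 / e2:  precision = p1 - s1 + s2 + max(6, s1 + p2 + 1),        scale = max(6, s1 + p2 + 1)

Both are capped at 38, and when the cap kicks in the scale gets reduced first, which is where much of the surprising truncation comes from. Part of the rat's nest is what p2 an int operand contributes: it is usually described as entering as decimal(10,0), but if that were always so, 0.15 / 360 would get scale max(6, 2 + 10 + 1) = 13, while the SQL_VARIANT probe further down reports a precision of only 6.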

You can, of course, tailor the expressions to give you what you want with much more verbose explicit conversions. This is probably overkill, but:

SELECT 
   CONVERT(DECIMAL(15,6), CONVERT(DECIMAL(15,6), 0.15) 
   * CONVERT(DECIMAL(15,6), 30) 
   / CONVERT(DECIMAL(15,6), 360)),
   CONVERT(DECIMAL(15,6), CONVERT(DECIMAL(15,6), 0.15) 
   / CONVERT(DECIMAL(15,6), 360) 
   * CONVERT(DECIMAL(15,6), 30));

Neither result is thrown off by floating-point rounding or by a wildly wrong derived precision/scale:

0.012500    0.012500
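
For contrast, here is a sketch of what the same two expressions do with no explicit conversions at all, using the literals from the question (the comments reflect the truncation visible in the SQL_VARIANT probe further down):

SELECT 0.15 * 30 / 360,  -- exactly 0.0125: the multiplication happens first, so nothing is lost
       0.15 / 360 * 30;  -- 0.012480: 0.15 / 360 is truncated to 0.000416 before the multiplication

The verbose version avoids this because 30 and 360 are forced to a known decimal type up front, instead of contributing whatever precision and scale an int happens to contribute.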

As Aaron Bertrand mentioned, expressions are very tricky to predict.

If you dare go there, you could try to gain some insight using the following snippet:

DECLARE @number SQL_VARIANT
SELECT @number = 0.15 / 360
SELECT @number
SELECT  
    SQL_VARIANT_PROPERTY(@number, 'BaseType') BaseType,
    SQL_VARIANT_PROPERTY(@number, 'MaxLength') MaxLength,
    SQL_VARIANT_PROPERTY(@number, 'Precision') Precision

This is the result:

------------
0.000416

(1 row(s) affected)

BaseType     MaxLength    Precision
------------ ------------ ----------
numeric      5            6

(1 row(s) affected)
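
If you dig further, SQL_VARIANT_PROPERTY also exposes a 'Scale' property, which the snippet above leaves out; precision alone doesn't tell you how many of those digits sit to the right of the decimal point. The same probe with scale included:

DECLARE @number SQL_VARIANT
SELECT @number = 0.15 / 360
SELECT
    SQL_VARIANT_PROPERTY(@number, 'BaseType') BaseType,
    SQL_VARIANT_PROPERTY(@number, 'Precision') Precision,
    SQL_VARIANT_PROPERTY(@number, 'Scale') Scale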

Notwithstanding the excellent answers already added to this question, there is an explicitly defined order of precedence for conversion of data types in SQL Server.

When an operator combines two expressions of different data types, the rules for data type precedence specify that the data type with the lower precedence is converted to the data type with the higher precedence. If the conversion is not a supported implicit conversion, an error is returned. When both operand expressions have the same data type, the result of the operation has that data type.

SQL Server uses the following precedence order for data types:

user-defined data types (highest)
sql_variant
xml
datetimeoffset
datetime2
datetime
smalldatetime
date
time
float
real
decimal
money
smallmoney
bigint
int
smallint
tinyint
bit
ntext
text
image
timestamp
uniqueidentifier
nvarchar (including nvarchar(max))
nchar
varchar (including varchar(max))
char
varbinary (including varbinary(max))
binary (lowest)

So, for instance, if you SELECT 0.5 * 1 (multiplying a decimal by an int), you get a result that is converted to a decimal value, since decimal has higher precedence than int.
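
That precedence is easy to confirm with the SQL_VARIANT trick shown earlier (the column aliases here are just labels of mine):

SELECT
    SQL_VARIANT_PROPERTY(0.5 * 1,   'BaseType') DecimalTimesInt,   -- numeric: decimal outranks int
    SQL_VARIANT_PROPERTY(0.5 * 1E0, 'BaseType') DecimalTimesFloat  -- float: float outranks decimal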

See http://msdn.microsoft.com/en-us/library/ms190309.aspx for further details.

Having said all that, SELECT @C * (@I * POWER(1 + @I, @N) / (POWER(1 + @I, @N) - 1)); should probably return a decimal value, since practically all of the inputs are decimal. Interestingly, you can force a correct-ish result by modifying that SELECT to:

DECLARE @N INT = 360;
DECLARE @I DECIMAL(38,26) = 0.15 * 30 / 360;
DECLARE @C DECIMAL(38,26) = 1000000;

SELECT @C *  @I *  POWER(1 + @I, @N)  / (POWER(1 + @I, @N) - 1);
SELECT @C * (@I *  POWER(1 + @I, @N)  / (POWER(1E0 + @I, @N) - 1));

This returns two different results:

[screenshot of the two query results]

I am at a loss to explain how that makes any difference, although clearly it does. My guess is that the 1E0 (an explicit float) in the POWER() call forces SQL Server to make a different choice of output type for the POWER function. If my supposition is correct, that would indicate a possible bug in the POWER function, since the documentation states the first input to POWER() is a float, or a number that can be implicitly converted to a float.
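
That guess is easy to probe with the same SQL_VARIANT technique (re-declaring the variables from above; the aliases are just labels of mine):

DECLARE @N INT = 360;
DECLARE @I DECIMAL(38,26) = 0.15 * 30 / 360;

SELECT
    SQL_VARIANT_PROPERTY(POWER(1   + @I, @N), 'BaseType') DecimalFirstArg, -- expected: numeric
    SQL_VARIANT_PROPERTY(POWER(1E0 + @I, @N), 'BaseType') FloatFirstArg;   -- expected: float

The same documentation page also says POWER returns the same data type as its first argument, which would square with the guess: feed it a decimal and a (possibly truncated) decimal rides through the rest of the expression, while the 1E0 version promotes everything to float, which sits above decimal in the precedence list.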