Increasing the precision of a calculation

I'll go over the examples in your comment, and urge you to re-read the content and documentation links in the referenced answer. Bottom line, Mathematica tries not to give answers with false precision (digits that appear significant but are not), and in general it "tracks" the precision of inputs, intermediate results, and calculations to ensure this. In your examples:

Row[{Precision[0.9*Sqrt[3`50]],
  Precision[Sqrt[3`50]],
  Precision[0.9]}, " : "]

(* MachinePrecision : 50.301 : MachinePrecision *)

Note, the lowest precision component is MachinePrecision, so Mathematica will give you at most a MachinePrecision result.

Row[{Precision[0.9`50*Sqrt[3]],
  Precision[Sqrt[3]],
  Precision[0.9`50]}, " : "]

(* 50. : ∞ : 50. *)

Here, the lowest-precision component has precision 50, so Mathematica will give you a result of at most precision 50.

Row[{Precision[3.0*2.0],
  Precision[3.0],
  Precision[2.0]}, " : "]

(* MachinePrecision : MachinePrecision : MachinePrecision *)

So, as with the rest, since the lowest precision component is MachinePrecision, that's the most Mathematica will produce in the result.

Lastly,

Row[{Precision[3.0`50*2.0],
  Precision[3.0*2.0`50],
  Precision[3.0`50*2.0`50]}, " : "]

(* MachinePrecision : MachinePrecision : 49.699 *)

So, of course, if you indicate to Mathematica with the backtick that only one of the components has a precision other than MachinePrecision (the default for inexact numbers, i.e., numbers entered with a decimal point and at most machine-precision significant digits), it will produce an answer with at most MachinePrecision. Note that indicating to Mathematica that both components are "known" to precision 50 gives the expected higher-precision result.

In general, if you want to get answers to a specific precision, it is easiest to use exact numbers wherever numbers appear explicitly, and to produce the desired result by wrapping the whole expression in N.
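For example, the first calculation above can be done by writing 0.9 as the exact rational 9/10 and asking N for 50 digits (a minimal sketch):

Precision[N[(9/10) Sqrt[3], 50]]

(* 50. *)

With every input exact, N is free to deliver the full 50 digits requested.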


In some ways this is an extension of my answer here. To understand how Precision works, it helps to understand how floating-point numbers work and how error is propagated. I don't wish to give a full explanation of the subject; it can be found elsewhere, such as in a book on numerical analysis. But I will show how one can work out the precision of an expression for oneself.

As explained in the tutorial Numerical Precision, an arbitrary-precision Real number in Mathematica represents a real value with a specified uncertainty, encoded in its Precision, lying in an interval $[x - dx, x + dx]$. The precision $p$ is related to the uncertainty $dx$ by $$ p = -\log_{10}\left(dx \,\big/\, \left|x\right|\right)\,.$$ The rules for computing the uncertainty of a calculation $y = f(x_1, x_2, \dots)$ basically come down to computing via differentials $$dy = \left|{\partial f\over\partial x_1}(x_1,x_2,\dots)\right|\;dx_1 + \left|{\partial f\over\partial x_2}(x_1,x_2,\dots)\right|\;dx_2 +\cdots$$ At least, computing the uncertainty this way seems to agree very closely with Mathematica.
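As a quick illustration of the first relation (my own sketch, not from the tutorial), the uncertainty implied by a declared precision can be read off directly from a number:

p = Precision[3.`2]
dx = Abs[3.`2] 10^-p    (* implied uncertainty |x| 10^-p *)
(*
  2.
  0.03
*)

Plugging $dx$ and $|x|$ back into $p = -\log_{10}(dx/|x|)$ recovers the declared precision of 2.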

Below are functions for calculating the uncertainty and precision of an expression evaluated at approximate real values, passed as replacement rules in the argument x0. Note that in Mathematica, MachinePrecision corresponds to completely untracked (unknown) precision; I represent that by an uncertainty of Infinity, and the examples below show that this mirrors how MachinePrecision numbers are treated in the calculation of the Precision of expressions.

If for a simple example we take the expression x y, the differential is given by

Dt[x y]
(*
  y Dt[x] + x Dt[y]
*)

Filling in the absolute values ourselves, we can see that the uncertainty depends on the magnitudes of x and y and their uncertainties, represented by Dt[x] and Dt[y]. The precision, roughly speaking, is the relative error this represents, expressed as the number of decimal digits that are taken to be correct. One of the OP's examples is of the form x / Sqrt[y], whose differential is

Dt[x/Sqrt[y]]
(*
  Dt[x]/Sqrt[y] - (x Dt[y])/(2 y^(3/2))
*)

Here, if we fill in the absolute values, we get

Dt[x/Sqrt[y]] /. Times[dx_Dt, rest__] :> Abs[Times[rest]] dx
(*
  Dt[x]/Sqrt[Abs[y]] + 1/2 Abs[x/y^(3/2)] Dt[y]
*)

which again shows how the uncertainty, and hence the precision, depends on the numbers.

Clear[uncertainty, precision];

(* uncertainty of a single number: untracked (Infinity) for MachinePrecision,
   0 for exact quantities, otherwise |x| 10^-p *)
uncertainty[x_Real /; Precision[x] === MachinePrecision] := Infinity;
uncertainty[x_Real] := Abs[x] 10^-Precision[x];
uncertainty[x_ /; Precision[x] == Infinity] := 0;

(* uncertainty of an expression at the values x0: the total differential with
   the absolute values of the coefficients filled in *)
uncertainty[expr_, x0_List] := 
  Expand@Dt[expr] /. Times[dx_Dt, rest__] :> Abs[Times[rest]] dx /. 
    Thread[Dt /@ First /@ x0 -> (uncertainty /@ Last /@ x0)] /. x0;

(* precision from the relative uncertainty: p = -Log10[dy/|y|] *)
precision[expr_, x0_List] := 
  -Log10[uncertainty[expr, x0]/Abs[ReplaceAll[expr, x0]]];

The examples below verify that the uncertainty follows the formula for the differential,

2.0 * uncertainty[3.0`3] + 3.0 * uncertainty[2.0`2]
uncertainty[x y, {x -> 3.0`3, y -> 2.0`2}]
(*
  0.066
  0.066
*)

and if MachinePrecision numbers enter into a calculation, the result has MachinePrecision (that is, an unknown uncertainty, represented here by Infinity).

uncertainty[x y, {x -> 3.0, y -> 2.0`2}]
(*
  Infinity
*)

The calculations of precision and the built-in Precision agree.

precision[x y, {x -> 3.0`3, y -> 2.0`2}] // FullForm
Precision[3.0`3 * 2.0`2] // FullForm
(*
  1.958607314841775`
  1.958607314841775`
*)

They also agree in the x / Sqrt[y] example,

precision[x/Sqrt[y], {x -> 0.9`15, y -> 3.`2}] // FullForm
Precision[x/Sqrt[y] /. {x -> 0.9`15, y -> 3.`2}] // FullForm
(*
  2.301029995663894`
  2.301029995663894`
*)

even when they appear not to agree. (SameQ, i.e. ===, treats two approximate numbers as identical when they differ only in their last binary digit, which is why the comparison below returns True even though the printed values differ slightly.)

precision[x/Sqrt[y], {x -> 0.9`15, y -> 3.`3}] // FullForm
Precision[x/Sqrt[y] /. {x -> 0.9`15, y -> 3.`3}] // FullForm
% === %%
(*
  3.3010299956631126`
  3.301029995663113`
  True
*)

@Rasher explained nicely how precision works and, implicitly, how to get more precision: specify the precision required and watch out for "poisoning".

However, this involves attention to detail every time one enters a number. I like the approach offered by Mr Wizard here, which I quote below (for a specific precision of 20 digits):

$PreRead = (# /. 
     s_String /; 
       StringMatchQ[s, NumberString] && 
        Precision@ToExpression@s == MachinePrecision :> s <> "`20." &);

3/1.5 + Pi/7

Precision[%]

Note that this affects all notebooks, because $PreRead is a global variable, even if the CellContext option is set to Notebook (rather than the default Global) under Options-Preferences->Global Preferences->Cell Options->Evaluation Options->CellContext. (This setting makes symbol definitions local to each notebook, i.e. functions, variables, etc. are not shared between notebooks.) Suggestions on limiting the scope to individual notebooks are welcome.
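One blunt option, not a true per-notebook scope but a way to switch the preprocessing off when you are done with it, is to clear the definition, which restores the default parsing everywhere:

$PreRead =.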

For the benefit of Mathematica beginners such as myself, a couple of additional comments may be helpful, subject to the caveat that, as a beginner, I will no doubt be sharing ignorance as well as information (a short session illustrating points 1-3 follows the list):

  1. The number 3 is given infinite precision as an integer and is distinguished from "3.", which, as a real, by default has MachinePrecision ($MachinePrecision ≈ 15.95 decimal digits on a typical modern PC). Rationals, as ratios of integers, also inherit infinite precision, but divide an integer by a real and the result will be "poisoned" by the limited precision of the latter: 31/3.1 is exactly 10, but in Mathematica it is only 10 to machine precision.
  2. There is a conceptual subtlety here: even if you enter e.g. 17.1 because for you 17.1 is an exact, infinitely precise number, Mathematica considers it akin to a measurement, which is necessarily approximate. If you want it to be treated with infinite precision you would have to express it as 171/10; if you just want lots of precision, SetPrecision etc. are the way to go. Whilst Sqrt[3] will propagate through functions as exactly Sqrt[3] until a numerical output is requested, Sqrt[17.1] becomes a numeric value immediately (unless of course it is subject to Hold...), so Sqrt[17.1] would "poison" higher-precision results. Sqrt[171/10], by contrast, would retain infinite precision (though expect simplifications along the way; in this case the exact answer is 3 Sqrt[19/10]).
  3. Note also that SetPrecision increases the number of binary digits and pads with binary zeroes, whereas the backtick notation such as "3`10" pads with decimal zeroes; the difference becomes visible for numbers such as 0.1 that are not exactly representable in binary.
  4. I found the documentation dry, terse, and more than a little obscure in places: it really does help to generate test cases that are slightly more complex than the examples given and to proceed step by step to more complex ones. Engineering rule: only change one variable at a time...
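A short session, as promised, illustrating points 1-3 (a sketch of my own; the long decimal shown for SetPrecision is the familiar binary double value of 0.1):

Precision[3]           (* integers are exact *)
Precision[3.]          (* a machine real *)
Precision[31/3.1]      (* the "exact" 10 is poisoned to machine precision *)

(*
  ∞
  MachinePrecision
  MachinePrecision
*)

Sqrt[171/10]           (* stays exact, and simplifies *)
Precision[Sqrt[17.1]]  (* becomes an approximate number immediately *)

(*
  3 Sqrt[19/10]
  MachinePrecision
*)

SetPrecision[0.1, 25]  (* pads the binary digits of machine 0.1 *)
0.1`25                 (* reads 0.1 as a 25-digit decimal *)

(*
  0.1000000000000000055511151
  0.1000000000000000000000000
*)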