Math operations using System.Decimal in C#

Well, Double uses binary floating-point math, which isn't what you're after unless you're doing trigonometry for 3D graphics or something.

If you need to do simple math operations like division, you should use System.Decimal.

From MSDN: The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.
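
To see the difference on a simple operation, here is a minimal sketch (the values are just illustrations, not from the original question): the same sum is inexact in binary floating point but exact in decimal.

    using System;

    class DecimalVsDouble
    {
        static void Main()
        {
            // double is binary floating point: 0.1 and 0.2 have no exact base-2 representation
            double dbl = 0.1 + 0.2;
            Console.WriteLine(dbl == 0.3);   // False

            // decimal stores base-10 digits, so the same sum is exact
            decimal dec = 0.1m + 0.2m;
            Console.WriteLine(dec == 0.3m);  // True
        }
    }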

Update: After some discussion, the problem is that you want to work with Decimals, but System.Math only takes Doubles for several key pieces of functionality. Sadly, you are working with high-precision numbers, and since Decimal is 128-bit (about 28-29 significant digits) while Double is only 64-bit (about 15-17), the conversion results in a loss of precision.
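
Here is a minimal sketch of that loss (the value is just an illustration): Math.Sqrt only accepts double, so the cast alone throws away digits before any math happens.

    using System;

    class PrecisionLoss
    {
        static void Main()
        {
            // decimal carries roughly 28-29 significant digits
            decimal d = 1.2345678901234567890123456789m;

            // double carries only ~15-17, so the round trip loses the tail
            decimal roundTripped = (decimal)(double)d;
            Console.WriteLine(d);             // 1.2345678901234567890123456789
            Console.WriteLine(roundTripped);  // roughly 1.23456789012346

            // any System.Math call forces the same conversion
            decimal root = (decimal)Math.Sqrt((double)d); // accurate only to double precision
            Console.WriteLine(root);
        }
    }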

Apparently there are some possible plans to make most of System.Math handle Decimal, but we aren't there yet.

I googled around a bit for math libraries and compiled this list:

  1. Mathdotnet, an open-source mathematical library (MIT/X11, LGPL & GPL) written in C#/.NET, aiming to provide a self-contained, clean framework for symbolic algebraic and numerical/scientific computations.

  2. Extreme Optimization Mathematics Library for .NET (paid)

  3. DecimalMath: a relative newcomer, this one advertises itself as "Portable math support for Decimal that Microsoft forgot and more". Sounds promising.


DecimalMath contains decimal-argument analogues of all the functions in the System.Math class.

Note: it is my library, and it also contains some examples.
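
As a rough usage sketch only: the namespace, class, and method names below (DecimalMath, DecimalEx.Sqrt, DecimalEx.Pow) are assumed here rather than taken from the library's documentation, so check the project itself for the real API.

    using System;
    using DecimalMath; // assumed namespace

    class DecimalMathUsage
    {
        static void Main()
        {
            // assumed decimal counterparts of Math.Sqrt and Math.Pow
            decimal root = DecimalEx.Sqrt(2m);
            decimal power = DecimalEx.Pow(2m, 10m);
            Console.WriteLine(root);
            Console.WriteLine(power);
        }
    }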


You haven't given us nearly enough information to answer the question.

decimal and double are both inaccurate. The representation error of decimals is zero when the quantity being represented is exactly equal to a fraction of the form x/10^n for suitable choices of x and n. The representation error of doubles is zero when the quantity is exactly equal to a fraction of the form x/2^n, again for suitable choices of x and n.

If the quantities you are dealing with are not fractions of that form then you will get some representation error, period. In particular, you mention taking square roots. Many square roots are irrational numbers; they have no fractional form, so any representation format that uses fractions is going to give small errors.
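
To make that concrete, here is a minimal sketch (not taken from any of the libraries above) of a decimal square root via Newton's method: you get roughly 28 significant digits instead of double's 15-17, but the result for 2 is still only an approximation, because sqrt(2) has no finite x/10^n form.

    using System;

    static class DecimalSqrtSketch
    {
        // Newton's method: repeatedly average the guess with value/guess until it settles.
        public static decimal Sqrt(decimal value)
        {
            if (value < 0) throw new ArgumentOutOfRangeException(nameof(value));
            if (value == 0) return 0;

            decimal guess = value / 2 + 0.1m; // crude but positive starting point
            for (int i = 0; i < 100; i++)
            {
                decimal previous = guess;
                guess = (guess + value / guess) / 2;
                if (Math.Abs(guess - previous) <= 1e-28m)
                    break;
            }
            return guess;
        }

        static void Main()
        {
            Console.WriteLine(Sqrt(2m)); // sqrt(2) to about 28 significant digits, still an approximation
        }
    }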

Can you explain what you are doing in hugely more detail?

Tags: C#, Decimal, Math