DotNumerics, AlgLib, dnAnalytics, Math.NET, F# for Numerics, MtxVec?

F# for Numerics is a product of my company, written in 100% F#. Our emphasis is on general techniques (everything from FFTs to random number generation) rather than specifically on linear algebra, although basic linear algebra routines are provided (Cholesky, LU, QR, and SVD on various matrix/element types), and we are particularly interested in ease of use from F#.

If you're after the full breadth of LAPACK, then my recommendations are ALGLIB if you're on a budget, or Extreme Optimization if you can afford it. ALGLIB is entirely managed code with an, umm, "quirky" API, so it is relatively slow to run and cumbersome to use. Extreme Optimization is a nicer API wrapping the Intel MKL plus some extra routines, so it is easier to use and much faster to run.

I should warn you that the general quality of .NET libraries (free, commercial and even the framework itself) is comparatively poor if you are coming from an open source background. I tried many of the other libraries that you mentioned and was not at all impressed with them.


I can also suggest looking at a newer .NET numerical library called FinMath, which I have used in my own development. It provides easy-to-use .NET class wrappers for much of the functionality of MKL (the Intel Math Kernel Library, on which it is based), such as linear algebra (BLAS and LAPACK), statistics, and FFT. In addition, it contains a number of advanced methods such as linear and quadratic programming solvers, cluster analysis, and others. It also includes various .NET-to-native-C marshaling optimizations, which lead to high performance in an easy-to-use, single-DLL solution.

Unfortunately, it is neither open source nor free, and in contrast to LAPACK, most methods support only double-precision floating-point reals. Also, wrappers are not provided for a few rarely used LAPACK methods.


A few notes of correction on previous replies: ALGLIB is not a translation of LAPACK, though its matrix core does include some routines also seen in LAPACK. It does a lot more, e.g. rational-function and radial-basis-function interpolation, convex non-linear programming as well as non-convex optimization, neural nets, a solver for differential equations, integration, and a subset of the functionality of FFTW. In that respect it is more akin to MKL - which it accepts a hook to, as a library plug-in, for some of its routines. In contrast, LAPACK focuses almost exclusively on linear and matrix algebra, which mostly corresponds to the LinAlg module in ALGLIB; there is nothing like optimization or iterative solvers in it. ALGLIB has a large number of iterative routines, particularly among its optimization routines.

ALGLIB is written in an internal language (AlgoPascal) and translated automatically to its target languages. In particular, there are a lot of translator artifacts in the C++ version; it is not fully nativized C++ but more like C with C++ capability simulated on top of it, much like what you would get from a cleaner, more intelligent version of cfront (or of f2c, or of some "Pascal to C" converter). A C++ API wrapper is then put on top of it all. The translation is good enough, but it sorely misses out on C++ language features it could take advantage of if the core package were translated directly to C++.

The AlgoPascal to C/C++ translator has not kept pace with the changes in the target languages. In addition to native multi-threading (including thread-local variables) in C and C++, there is now support for "tuples" in C++, which would really come in handy for the parameter-laden routines!

If you're working with ALGLIB, a quick speed-up would be to replace the floating-point comparison functions with macros or inline functions that equate them to C++'s native floating-point comparison operators. The reason functions were used for floating-point comparisons was to get around problems that older compilers had in implementing the comparisons in ways that worked consistently with the quirky architecture of the floating-point co-processor on Intel-based CPUs.

LAPACK includes a default implementation of BLAS, plus room to plug in an external, high-powered BLAS library, and it recommends plugging in an external library rather than using its own default BLAS if you can. Its core is in Fortran, though it comes with API shells for C and C++. It supersedes the older EISPACK and LINPACK libraries (according to the cover pages of the respective libraries on Netlib).

A front end for those latter two libraries was developed in the 1970s and early 1980s and distributed with version 10 UNIX. It was also written in Fortran, with its core functionality driven by an interactive CLI user interface. Its name is MatLab.

The MatLab you know of today, by MathWorks, is a direct descendant of this, though it has evolved substantially in terms of its source language (C, or almost certainly C++ by now), its user interface (a GUI written on Qt now), and its routines. Like LAPACK has done in recent times, it has also moved into the world of parallelism.

SciLab has more evolved versions of the old MatLab routines and also has a graphics and GUI core, and a large variety of code covering a wide range of fields (e.g. a multi-threaded C++ class for co-routines - which would come in handy for iterative solvers). It is both open and open-ended: I believe that it accepts contributions of new modules. Potentially that could even include a fully-nativized C++ version of ALGLIB or a C++ translation of LAPACK's core, if those were ever produced and provided.

The old MatLab had three main sets of files: "Mat" held the parser for the front-end language, "Lib" held the core library routines, and "Sys" held the system-dependent routines, including the CLI user interface. It was about 8000 lines in all. There wasn't much in it beyond what you would find in LINPACK, particularly since it was one of the LINPACK/EISPACK co-authors who wrote it.

Since it is now proprietary in its later, evolved forms, I can't say what these look like today in source form. However, there is also the freely distributed answer to MatLab: Octave. Roughly speaking, "Mat" is now "libinterp", "Lib" is "liboctave", "Sys" is "libgui", and the driver routine is in "src". Each of the lib directories is about 100,000-250,000 lines, with a lot more functionality in it, in an attempt to keep pace with MatLab. A small part of it is translated from Fortran, bearing the heavy imprint of translator artifacts. A quick glance shows that it looks more like LAPACK, in terms of functionality, than like MKL or ALGLIB; I don't see any iterative solvers, neural-net routines, or optimization routines in it.


One common pitfall in choosing a math library is hoping that there exists one math library for everything.

Before looking for a library, you should first ask, "What kind of math library do I want?" You will then have a list of criteria, such as: open source or not, high performance or not, portable or not, easy to use or not.

Following are my comments on the libraries in your list (I haven't used the last two):

1) DotNumerics (http://www.dotnumerics.com/)

They use a Fortran-to-C# translator that translates the LAPACK procedure code into C# classes. User-friendly C# wrappers are then written around the raw LAPACK classes.

2) Alglib (http://www.alglib.net/)

This library is available in several languages, such as Delphi, C++, and C#. I believe it has a longer history than any of the other libraries you listed.

Most of the functions are translated from LAPACK, and its interface is not especially user-friendly (but you do get the flexibility of a LAPACK-style interface). Using a LAPACK-style interface means that you need to know more about the matrices and their underlying operations.

3) dnAnalytics (http://dnanalytics.codeplex.com/)

This library is being merged into Math.NET now. It seems that the merge is not done yet; a few functions in dnAnalytics are still not available in Math.NET.

4) Math.NET (http://www.mathdotnet.com/)

Its implementation is from scratch, i.e. it is not a direct translation of LAPACK. They aim to provide a purely managed library for the .NET platform, which means that easy usage and portability are two primary goals. One concern is whether their own implementation is correct. One good thing is that this library is portable in the sense that you can use it on Mono, XNA, and Windows Phone with little effort.
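To give a feel for the "easy usage" goal, here is a minimal sketch. It assumes the present-day Math.NET Numerics linear-algebra API (the Matrix<double>.Build factory), which postdates the dnAnalytics merge described above; the exact calls are an assumption about the current library rather than the version discussed here.

    using System;
    using MathNet.Numerics.LinearAlgebra;

    class MathNetSketch
    {
        static void Main()
        {
            // Build a small dense system A x = b entirely in managed code.
            var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 4.0, 1.0 },
                                                                      { 1.0, 3.0 } });
            var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

            // Solve with the managed solver and print the result.
            var x = a.Solve(b);
            Console.WriteLine(x);
        }
    }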

The above libraries don't focus on F#. However, one of the team members of Math.NET works at MS Research Cambridge and is an F# expert. As Cuda said, they will work out an F# interface for the library, and they will also provide native wrappers. But maybe you will be waiting a long time, longer than "several months" :)

As for high performance: the above libraries don't provide native wrappers (at least not yet). If you want native performance plus .NET, you had better use a commercial library. There are, however, some open-source solutions:

1. http://ilnumerics.net/ This is a numpy-like solution for .NET. They P/Invoke to LAPACK DLLs (e.g. the non-optimized LAPACK from Netlib, or the optimized versions from AMD and Intel).

2. The math provider in F#. Read my answer in this question. Since the F# source code is now open source, I may revise the library and release my updates :)

Usually you don't need a big math library; you just need some specific functionality. For example, if you need a fast matrix multiplication procedure, using P/Invoke to call a platform-optimized BLAS DLL is the easiest way (a sketch follows below). If you are building education-oriented math software for kids, then the quality of Math.NET is enough. If you are in a company developing reliable math components, then why not use a commercial library backed by a high-quality team?
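To illustrate the P/Invoke route, here is a minimal sketch that calls dgemm through the standard CBLAS interface of a native BLAS. The library name "openblas" (and its presence on the search path) is an assumption; on Windows the DLL is usually named differently, and MKL or other BLAS builds export the same cblas_dgemm entry point.

    using System;
    using System.Runtime.InteropServices;

    static class NativeBlas
    {
        // Standard CBLAS enum values: row-major storage and "no transpose".
        const int CblasRowMajor = 101, CblasNoTrans = 111;

        // Binding to cblas_dgemm in a native BLAS; the library name is an assumption.
        [DllImport("openblas", EntryPoint = "cblas_dgemm",
                   CallingConvention = CallingConvention.Cdecl)]
        static extern void cblas_dgemm(int order, int transA, int transB,
            int m, int n, int k, double alpha,
            double[] a, int lda, double[] b, int ldb,
            double beta, double[] c, int ldc);

        static void Main()
        {
            // C = 1.0 * A * B + 0.0 * C, with A 2x3 and B 3x2, all row-major.
            double[] a = { 1, 2, 3, 4, 5, 6 };
            double[] b = { 7, 8, 9, 10, 11, 12 };
            double[] c = new double[4];

            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        2, 2, 3, 1.0, a, 3, b, 2, 0.0, c, 2);

            Console.WriteLine(string.Join(", ", c)); // 58, 64, 139, 154
        }
    }

The arrays are blittable, so the marshaler just pins them and passes pointers; that is where the "native performance with little overhead" claim comes from.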

Finding a perfect math library is hard. But finding a library solution to your problem is usually easy.