Mathematics for machine learning

For basic neural networks (i.e. if you just need to build and train one), I think basic calculus is sufficient, plus familiarity with gradient descent and more advanced optimization algorithms. For more advanced topics in NNs (convergence analysis, links between NNs and SVMs, etc.), somewhat more advanced calculus may be needed.
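To make the gradient descent point concrete, here is a minimal sketch on a toy quadratic. The function, starting point, learning rate, and iteration count are all arbitrary choices for illustration:

```python
# Gradient descent on f(x, y) = (x - 3)^2 + (y + 1)^2,
# whose minimum is at (3, -1).

def grad(p):
    """Gradient of f at point p = (x, y)."""
    x, y = p
    return (2 * (x - 3), 2 * (y + 1))

p = (0.0, 0.0)   # starting point
lr = 0.1         # learning rate (step size)
for _ in range(100):
    g = grad(p)
    p = (p[0] - lr * g[0], p[1] - lr * g[1])

print(p)  # approaches the minimum at (3, -1)
```

Training a neural network is the same loop, except the gradient comes from backpropagation over the loss, and fancier optimizers (momentum, Adam, etc.) modify the update rule.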

For machine learning more broadly, you mostly need probability and statistics: things like Bayes' theorem, conditional probability, expectations, and common distributions.
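As a quick illustration of Bayes' theorem, here is the classic diagnostic-test calculation. The numbers are made up for illustration:

```python
# Bayes' theorem: P(disease | positive test) = P(+|D) P(D) / P(+).
# All probabilities below are hypothetical.
p_disease = 0.01          # prior P(D)
p_pos_given_d = 0.95      # sensitivity P(+|D)
p_pos_given_not_d = 0.05  # false-positive rate P(+|not D)

# Total probability of a positive test
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Posterior probability of disease given a positive test
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))  # about 0.161
```

Even with a fairly accurate test, the posterior is only about 16% because the prior is so low; this kind of reasoning underlies naive Bayes classifiers and much of probabilistic ML.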

Since you are a biologist, I don't know whether you have studied linear algebra. Some basic ideas from it are definitely extremely useful: specifically, linear transformations, diagonalization, and the SVD (the latter is related to PCA, which is a pretty basic method for dimensionality reduction).
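A short NumPy sketch of the SVD–PCA connection, on toy random data: the right singular vectors of the centered data matrix are the principal directions, and the singular values give the explained variances.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # toy data: 100 samples, 5 features
Xc = X - X.mean(axis=0)         # PCA requires centered data

# SVD of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions; projecting onto them
# gives the PCA scores (here, the first two components).
scores = Xc @ Vt[:2].T

# Singular values relate to explained variance: var_i = S_i^2 / (n - 1),
# the eigenvalues of the sample covariance matrix.
explained_var = S**2 / (len(X) - 1)
print(scores.shape, explained_var[:2])
```

The same result comes from diagonalizing the covariance matrix, which is why diagonalization and SVD show up together in this context.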

The book Pattern Classification by Duda, Hart, and Stork has several appendices that describe the basic math needed to understand the rest of the book.


Take a look at the web page for Michael Steele's course Probability Inequalities and Machine Learning, and the various texts linked from there.


I believe the Deep Learning book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville covers the basics and also how you would use them in research. The chapters are also available online for free.