# 1 Introduction

This blog post opens a series of tutorials on formulating the numerical methods of linear algebra and analysis, and perhaps stochastics, in Haskell, in comparison with Python. Along the way, we will also analyze the algorithms and run benchmarks to evaluate:

- the performance overhead introduced by these high-level programming languages, compared with the standard libraries;
- the performance overhead of embedding third-party numerical libraries via the FFI, subprocess calls, and bindings, with direct calls to the original numerical libraries such as LAPACK as our baseline;
- the ease of expressing the algorithms, in terms of lines of code (LOC) and mental burden, particularly in Haskell.

A short note in this regard: most algorithms, as formulated in textbooks, are best suited to imperative languages such as FORTRAN, C, or Python, and translating them into a purely functional language is what constitutes the mental burden. Once you are used to the functional paradigm, it becomes easier; the functional paradigm fits best within a functional or modular software architecture.

Due to the complexity of this approach, I will adopt a Lean/Agile style of publishing here, extending the posts incrementally. If all goes well, I might **publish this as a book** on numerical Haskell subsequently, which I have been pondering for a while now. Of course, with the usual early-bird **discounts**, so stay tuned!

I plan to introduce an interface for comments and maybe integrate it with the most popular discussion boards such as Reddit by monitoring them for references to my domain name.

Moreover, a conventional blog post is usually intended to convey the major ideas in a more or less colloquial way, and it is hard to write informally about formal topics. I will try to strike a good balance between breadth of exposition and depth of formal detail, without overburdening the reader with details that, however important, would require an extra series of posts to explain. I will have to gloss over them, but I will try to remark on their importance and point the interested reader to reference material. Much of this will remain subjective, so feel free to point out any incomprehensible passages. I will try to adopt a Medium-like marker tool for referencing and commenting on passages, or simply cross-post over there if their terms of service permit it (I haven’t consulted them just yet).

And as a last side note: since I speak three languages daily, I might mix up idioms or other stylistic idiosyncrasies. Don’t hesitate to point them out; I’m eager to improve the clarity of my expression. I know how it can “brush one against the …” (German: …), unless you’re used to it.

# 2 Plan for This Series

**Numerical linear algebra** is the workhorse of virtually all numerical methods, as most problems are eventually reduced to a problem of linear algebra. Though it may seem boring to the uninitiated, efficient methods for solving the problems of linear algebra are in fact crucial to the overall efficiency of almost all numerical methods. For instance, simple post-iteration schemes (iterative refinement) can improve the quality of a solution dramatically. Similarly, the low-quality but simple explicit Euler method can be modified into the implicit Euler method, which yields a dramatically better solution at the cost of solving a system of linear equations in every iteration step; so if we manage to solve this linear system efficiently, we can justify the use of this more complex method of significantly greater quality!
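To make that trade-off concrete, here is a minimal sketch in Python (the function names, step size, and coefficient are illustrative choices, not code from this series) of explicit versus implicit Euler on the stiff scalar test equation y' = λy, where the “linear system” of the implicit step degenerates to a single linear equation:

```python
# Explicit vs. implicit Euler on the scalar stiff test equation y' = lam * y,
# y(0) = 1 (lam, h, and the step count are illustrative values).
def explicit_euler_step(lam, h, y):
    return y + h * lam * y               # y_{n+1} = y_n + h*lam*y_n

def implicit_euler_step(lam, h, y):
    # Solve the (here scalar) linear equation y_{n+1} = y_n + h*lam*y_{n+1},
    # i.e. (1 - h*lam) * y_{n+1} = y_n.
    return y / (1.0 - h * lam)

def run(step, lam, h, n, y0=1.0):
    y = y0
    for _ in range(n):
        y = step(lam, h, y)
    return y

lam, h, n = -50.0, 0.1, 10               # stiff problem: the true solution decays to 0
print(run(explicit_euler_step, lam, h, n))   # amplified by |1 + h*lam| = 4 per step: blows up
print(run(implicit_euler_step, lam, h, n))   # damped by 1/6 per step: decays like the true solution
```

On this stiff problem the explicit step amplifies the solution by a factor of 4 in magnitude at every step and blows up, while the implicit step, obtained by solving its linear equation, decays just as the true solution does.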

In fact, it may get a bit tedious and boring to be introduced to it head-on, lacking relevant motivation. So we will mix it in sparingly, in small pieces, and justify its overarching importance to our modern world.

We could start outright with partial differential equations, but this may seem overwhelming.

Our starting point is thus the **numerical solution of initial-value and boundary-value problems**. We will show that **they reduce to standard problems of numerical linear algebra and nonlinear optimization**.
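As a small foretaste of that reduction (a sketch under illustrative choices, not this series’ actual code): discretizing the boundary-value problem -u'' = f on [0, 1] with u(0) = u(1) = 0 by central finite differences yields a tridiagonal linear system, which the classic Thomas algorithm solves in linear time:

```python
import math

def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0] and sup[-1] unused)."""
    n = len(diag)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):                      # forward elimination
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = f on (0, 1) with u(0) = u(1) = 0, discretized on n interior points as
# (-u_{i-1} + 2 u_i - u_{i+1}) / h^2 = f(x_i)
n = 99
h = 1.0 / (n + 1)
xs = [(i + 1) * h for i in range(n)]
f = [h * h * math.pi ** 2 * math.sin(math.pi * x) for x in xs]  # h^2 * f(x_i)
u = solve_tridiag([-1.0] * n, [2.0] * n, [-1.0] * n, f)

# with f(x) = pi^2 sin(pi x), the exact solution is sin(pi x)
err = max(abs(ui - math.sin(math.pi * x)) for ui, x in zip(u, xs))
```

With f(x) = π² sin(πx) the exact solution is sin(πx), and the maximum error of the discrete solution shrinks as O(h²) — on this grid of 99 interior points it is already well below 10⁻³.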

We will conclude this series with **deep learning methods applied to ODE models**.

As a general note valid for all my posts here: even when one tries hard to stay correct, as is usual in life, mistakes are insidious and emerge at the least expected spots. Correctness of my words is important to me, so I will greatly appreciate it if you send me your corrections. Simply pointing out a potential mistake is already greatly appreciated!