On dynamic programming and finding Fibonacci in linear time.

As software engineers we have one job: to solve problems as efficiently as possible. Understanding algorithms and their complexities (time and space, specifically) isn't just a theoretical exercise; it's a critical skill that directly impacts the quality of the work we ship to prod. Understanding the basics of data structures and algorithms helps us balance tight deadlines against efficient solutions.

One classic example that illustrates this is the Fibonacci sequence. Fibonacci numbers can be calculated in a number of ways, from a simple recursive approach that quickly gets out of hand to computing them in constant time with Binet's formula.

Naive Recursive Approach

This one is an old friend to most of us. When I was learning the basics of programming, the concept of recursive functions was introduced to me with this example.

const fib = (n: number): number => {
  // Base case: the first two Fibonacci numbers are both 1.
  if (n <= 2) return 1;
  // Recursive case: each call spawns two more calls.
  return fib(n - 1) + fib(n - 2);
};

This is probably the first approach most of us reach for when facing similar problems. It solves the problem and runs well for small numbers, but each call to fib() triggers two more recursive calls, so the running time grows exponentially with n.
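To make that blow-up concrete, here is a small sketch that counts how many calls the naive version makes; the calls counter and the fibCounting name are just for illustration.

let calls = 0;

const fibCounting = (n: number): number => {
  calls += 1; // count every invocation, including repeated subproblems
  if (n <= 2) return 1;
  return fibCounting(n - 1) + fibCounting(n - 2);
};

fibCounting(20);
console.log(calls); // 13529 calls just to compute F(20) = 6765

In general the naive version performs 2·F(n) − 1 calls, which grows as fast as the Fibonacci numbers themselves.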

How can we improve it?

Introduction to Dynamic Programming

According to our good old friend Wikipedia, Dynamic Programming is both a mathematical optimization method and an algorithmic paradigm developed by Richard Bellman in the 1950s.

In short, it means simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner.

The two key attributes any problem needs in order for dynamic programming to be applicable:

  1. Optimal Substructure: A problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems. In our example, finding the nth Fibonacci number F(n) requires us to find the previous Fibonacci numbers, and finding those requires the same, thus forming an optimal substructure of nested problems.

  2. Overlapping subproblems: A problem is said to have overlapping subproblems if it can be broken down into subproblems which are reused several times, or if a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems. In our example, computing F(n) needs both F(n − 1) and F(n − 2), and computing F(n − 1) itself needs F(n − 2) again. The computation of F(n − 2) is therefore reused, and the Fibonacci sequence thus exhibits overlapping subproblems.

Knowing this, it's clear that with the naive recursion we're solving the same subproblems over and over again. A better approach is to solve each subproblem only once.

Memoization

Memoization is an optimization technique that stores the results of expensive calls to pure functions and returns the cached result when the same inputs occur again.

We can use it to store the result for each number, so that when the function is called again with the same argument it simply fetches the cached value in constant time.

const memo: Map<number, number> = new Map();

const fib = (n: number): number => {
  // Return the cached result if this value was already computed.
  if (memo.has(n)) return memo.get(n) as number;
  let result: number;
  if (n <= 2) {
    // Base case: the first two Fibonacci numbers are both 1.
    result = 1;
  } else {
    result = fib(n - 1) + fib(n - 2);
  }
  // Cache the result so it's never computed twice.
  memo.set(n, result);
  return result;
};

Since each Fibonacci number only needs to be calculated once, this approach reduces the time complexity from exponential to linear, at the cost of linear space for the cache.
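As a quick sanity check with the memoized fib above: the naive version would need roughly 25 billion calls for n = 50, while this one returns instantly.

console.log(fib(50)); // 12586269025, each subproblem solved exactly once
console.log(fib(50)); // a repeat call is just a single Map lookup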

We can further reduce this to constant time using an approximation based on Binet's formula, but I leave that as an exercise for the reader.
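For the impatient, one possible sketch (the fibBinet name is mine, and this is just one way to write it) uses the closed form F(n) = (φ^n − ψ^n) / √5 with φ = (1 + √5) / 2 and ψ = 1 − φ; floating-point rounding keeps it exact only up to roughly n = 70.

const fibBinet = (n: number): number => {
  const sqrt5 = Math.sqrt(5);
  const phi = (1 + sqrt5) / 2;
  // ψ^n shrinks toward zero, so rounding recovers the exact integer for small n.
  return Math.round((phi ** n - (1 - phi) ** n) / sqrt5);
};

console.log(fibBinet(10)); // 55
console.log(fibBinet(50)); // 12586269025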