A Linear Algebra Trick for Computing Fibonacci Numbers Quickly
I’ve been attending an online study group on the book “Thirty-three Miniatures: Mathematical and Algorithmic Applications of Linear Algebra”, led by Prof. Neeldhara Misra from IIT Gandhinagar. It’s the most unconventional math book I’ve come across. The first two chapters are all about how to compute Fibonacci numbers quickly. The usual (memoization-based, or iterative) method of computing Fibonacci numbers that we learn in our programming courses is linear in time. However, the book shows a technique for computing them in roughly logarithmic time. Some of you might already be aware of this technique, but it was news to me and I thought it was worth sharing with the rest of you.
The usual memoized recursive algorithm for computing Fibonacci numbers is this:
table = {}

def fib(n):
    if n in table:
        return table[n]
    if n == 0 or n == 1:
        result = n
    else:
        result = fib(n - 1) + fib(n - 2)
    table[n] = result
    return result
This is a linear time algorithm, because to compute the nth Fibonacci number we need to compute all of the previous n - 1 numbers in the sequence. This leads us to the usual question — can we do better?
Let’s try to find out. One issue with the above algorithm is that computing an arbitrary value in the sequence requires us to compute all of the previous values. Therefore, we need to arrive at a solution which is more general in nature and can compute the nth value directly.
One intuition we have is that we know the values of the base cases (0 and 1), so if we can express the computation of the nth Fibonacci number in terms of the base cases, we might achieve our goal. Let’s try to build on this intuition.
The base cases of the sequence are:
F(0) = 0
F(1) = 1
Computing F(2) using these two is trivial:
F(2) = F(1) + F(0)
Similarly, we can also express F(1) in terms of F(1) and F(0):
F(1) = 1 * F(1) + 0 * F(0)
We can conveniently express the computation of F(2) and F(1) in a single linear algebra operation, like so:
\(\begin{align}
\begin{bmatrix} F(2) \\ F(1) \end{bmatrix}
&=
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} F(1) \\ F(0) \end{bmatrix} \\
\text{or,}\quad
\begin{bmatrix} F(2) \\ F(1) \end{bmatrix}
&=
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
\end{align}\)
Let’s try to do the same thing for F(3):
F(3) = F(2) + F(1)
F(2) = F(1) + F(0)
And representing it as a matrix operation, we get this:
\(\begin{bmatrix} F(3) \\ F(2) \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} F(2) \\ F(1) \end{bmatrix}\)
However, we know the value of the vector [F(2), F(1)] from the previous step; substituting that value gives us the following:
\(\begin{bmatrix} F(3) \\ F(2) \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\)
Let’s simplify: denote the matrix \(\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\) by \(M\). This gives us:
\(\begin{bmatrix} F(3) \\ F(2) \end{bmatrix}
=
M^2
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\)
It looks like we’re building a pattern for computing the numbers in terms of the matrix M and the base cases F(1) and F(0). Let’s see if this pattern holds for F(4).
F(4) = F(3) + F(2)
F(3) = F(2) + F(1)
Again, representing these in the form of matrix operations:
\(\begin{align}
\begin{bmatrix} F(4) \\ F(3) \end{bmatrix}
&=
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} F(3) \\ F(2) \end{bmatrix} \\
\text{or,}\quad
\begin{bmatrix} F(4) \\ F(3) \end{bmatrix}
&=
M
\begin{bmatrix} F(3) \\ F(2) \end{bmatrix}
\end{align}\)
Substituting the value of the vector [F(3), F(2)] from the previous computation, we get the following result:
\(\begin{bmatrix} F(4) \\ F(3) \end{bmatrix}
=
M \cdot M^2
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\)
Which becomes:
\(\begin{bmatrix} F(4) \\ F(3) \end{bmatrix}
=
M^3
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\)
In general, we can see that this pattern repeats and generalizes to this form:
\(\begin{bmatrix} F(n+1) \\ F(n) \end{bmatrix}
=
M^n
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\)
You can also prove to yourself that this holds using mathematical induction.
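As a quick sanity check of this identity, here is a minimal sketch in pure Python (the helpers fib and mat_mult and the naive power loop are mine, not from the book) that compares the first column of \(M^n\) against an iteratively computed Fibonacci sequence:

```python
def fib(n):
    # Reference: plain iterative computation of F(n).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def mat_mult(a, b):
    # Multiply two 2x2 matrices represented as nested lists.
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

M = [[1, 1], [1, 0]]
for n in range(1, 10):
    p = M
    for _ in range(n - 1):   # naive M^n via n - 1 multiplications
        p = mat_mult(p, M)
    # M^n applied to [1, 0]^T picks out the first column: [F(n+1), F(n)]
    assert p[0][0] == fib(n + 1) and p[1][0] == fib(n)
```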
It looks like we have an algorithm for computing the nth Fibonacci number without needing to compute all of the previous n - 1 values in the sequence. However, it requires computing the nth power of the matrix M. The naive algorithm for computing powers is also linear in n, which means this algorithm for computing Fibonacci numbers is also linear, and we’re back to the same spot. But it turns out there are faster ways of computing powers; let’s take a look at them.
The naive algorithm for evaluating the nth power requires n multiplications and is therefore linear in time. But there are faster algorithms for doing this. For instance, if n is a power of two, then we can compute M^n by repeated squaring of M, using only log2(n) multiplications.
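Sketched in Python (an illustrative helper of my own, not from the book), repeated squaring for a power-of-two exponent looks like this; since it only uses `*`, the same loop works for scalars and for any type that defines multiplication:

```python
def power_by_squaring(x, n):
    # Assumes n is a power of two: square x a total of log2(n) times.
    while n > 1:
        x = x * x
        n //= 2
    return x

# 3 -> 9 -> 81 -> 6561: three squarings instead of seven multiplications
print(power_by_squaring(3, 8))
```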
In Volume 2 of The Art of Computer Programming, Knuth shows a more general algorithm for computing x^n. This algorithm relies on the fact that any number can be broken down into a sum of powers of two. For instance, 5 can be written as 2^2 + 2^0, and 9 can be written as 2^3 + 2^0. Therefore, we can write x^n as follows:
\(x^n = x^{k_1} \cdot x^{k_2} \cdot x^{k_3} \cdots x^{k_m}\)
where \(k_1, k_2, \ldots, k_m\) are powers of two which all add up to \(n\).
Though the idea is simple, the actual algorithm for doing this is slightly more involved, because there is no direct way to break a number down like this. Instead, we need to repeatedly divide the number by 2 and look at the remainder at each step. The third edition of The Art of Computer Programming, Volume 2 (page 462) lists the following algorithm for computing x^n:
If you are someone who prefers looking at code instead of abstract pseudocode, the linalg.matrix_power function in NumPy implements this same algorithm.
What’s the complexity of this algorithm? The loop runs no more than log2(n) steps, because we’re halving n at every step. During each step of the loop we multiply Z by itself, resulting in log2(n) multiplications. Additionally, every time the remainder is 1, we also multiply Z with Y, increasing the number of multiplications. Formally, the total number of multiplications performed by this algorithm is:
\(\lfloor \log_2(n) \rfloor + \nu(n)\)
where \(\nu(n)\) is the number of bits with value 1 in the binary representation of \(n\).
This is also the complexity of our Fibonacci number algorithm, because the only operation it performs that depends on n is computing M^n.
It seems we have a more efficient algorithm on our hands, but we should actually run and compare the algorithms before declaring victory. Before we do that, there is one more alternative way of computing Fibonacci numbers; let’s take a look at it.
There is a closed-form formula for directly computing the nth Fibonacci number, which uses the golden ratio:
\(F(n) = \frac{1}{\sqrt{5}} \left( \left( \frac{1 + \sqrt{5}}{2} \right)^n - \left( \frac{1 - \sqrt{5}}{2} \right)^n \right)\)
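A direct Python translation might look like this (a sketch using floating-point arithmetic; fib_closed_form is my name for it, and the result is rounded to the nearest integer since floats are approximate):

```python
import math

def fib_closed_form(n):
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # the golden ratio
    psi = (1 - sqrt5) / 2   # its conjugate
    # Round to the nearest integer to absorb small floating-point error.
    return round((phi ** n - psi ** n) / sqrt5)
```

For moderate n this agrees with the exact methods, though for very large n floating-point error eventually creeps in.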
This is probably a much more well-known formula, and it is also derived in The Art of Computer Programming, Volume 1 (page 99). The Thirty-three Miniatures book shows an alternative proof of this form using linear algebra. I won’t reproduce those proofs here in the interest of space and time; feel free to look them up in those books. I just wanted to show this technique so that we have a more complete benchmark to run. Let’s look at that in the next section.
So we have three different algorithms for computing Fibonacci numbers; it’s time we put them to the test and compared their performance. The following plot shows the performance of the three techniques.
Here:

- fib(n) refers to the usual linear time algorithm.
- fib_fast(n) refers to the faster algorithm we studied in the second section, which computes M^n.
- fib_closed_form(n) shows the performance of the closed-form formula based on the golden ratio.
This is a great example of the performance difference between linear and logarithmic time algorithms. Compared to the performance of the linear time algorithm, the other algorithms look like flat lines.
Another interesting thing to note here is the performance of the fib_fast and fib_closed_form algorithms. Both of these algorithms essentially compute the nth power of an expression, and therefore their complexities in terms of n are the same. However, the golden-ratio-based algorithm computes the nth power of a scalar value, which can be done much faster than computing the nth power of a 2×2 matrix; hence the difference in their actual runtime performance.
At the same time, it is also worth noting that the closed-form formula works with irrational numbers, which can only be represented approximately in computers, and thus in some rare cases the method may produce an incorrect result due to approximation errors. The matrix method has no such issues and will always give the right answer. Of course, when implementing any of these algorithms you need to handle the possibility of large numbers, which may cause overflow on fixed-width data types.
This brings us to the end of this whirlwind tour of Fibonacci number algorithms. When I first came across the linear algebraic formulation for computing Fibonacci numbers, I found it magical. The author of the Thirty-three Miniatures book, Matoušek, mentions that the original source of this technique is not known, so we may never know how it was discovered. I believe that when playing around with such sequences, you discover certain patterns which may lead you down the path of such discoveries, and there might not be an exact science behind how it was arrived at. I have tried to show one way it could have been arrived at, but there might be better ways. If any of you have better intuition behind it, please share it with me in the comments.
I would like to credit Prof. Neeldhara Misra for leading the session on this topic and for providing intuition behind the math involved. If you are interested in discussions around CS and math, you may want to follow her on X/Twitter.
I would also like to thank
for reviewing and giving feedback on this article. He writes a wonderful Substack on topics related to computer science; you may want to check it out and subscribe!