Fourier Transform 101 — Part 5: Fast Fourier Transform (FFT)

Sho Nakagome
sho.jp
6 min read · Mar 17, 2019


Disclaimer! It’s not Final Fantasy Tactics! I mean I like the game, but this one is cool too.

It’s been a while, but we have covered various types of Fourier Transforms so far. You can recap the summary from the link below. Today, we are going to cover something called the Fast Fourier Transform (FFT), which is nothing but the Discrete Fourier Transform in an optimized form for faster calculation.

I’m following the basic derivations of the equations from the awesome lectures taught by Dr. Wim van Drongelen. You can find his lectures on YouTube, and the video that covers the topic we are going to discuss today is posted here.

If you have some time, try going through his lectures. They are indeed awesome and to the point!

Computational complexity and why DFT is not used as often

Let’s start off by talking about the motivation behind FFT. If you remember the equation for Discrete Fourier Transform (DFT), it was something like below:
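As a reminder (in case the earlier post isn’t handy), the standard definition of the DFT is:

X(k) = \sum_{n=0}^{N-1} x(n) \, e^{-i 2\pi k n / N} = \sum_{n=0}^{N-1} x(n) \, W_N^{kn}, \qquad W_N = e^{-i 2\pi / N}, \qquad k = 0, 1, \dots, N-1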

If you unroll the summation, we have N computations to deal with.
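Written out for a single k, that sum is N complex multiply-and-add operations:

X(k) = x(0) W_N^{0} + x(1) W_N^{k} + x(2) W_N^{2k} + \dots + x(N-1) W_N^{(N-1)k}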

Now, let’s take a closer look at each of the X(k). The above was just for a particular X(k), but in reality, we have X(0), X(1), X(2), … X(N-1) to deal with. This is because, if you remember, the whole purpose of performing the Fourier Transform is to convert a signal from the time domain to the frequency domain. To do this, we need to go through the different frequencies to see whether the original signal has such components in it.

So, after examining the calculations, we can conclude the following.
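In short: N outputs X(0), …, X(N-1), each requiring N operations, gives

N × N = N² operations in total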

The total computation cost for DFT is O(N²), and this is bad news. If you are not familiar with a notion called “Big O”, it simply describes how the computation cost scales as N gets larger.

Basically, if the computation cost is O(N²), the cost grows quadratically with N, and we don’t want that. Here’s an easy example. Let’s say it takes 2 ms to compute N = 1. At a linear scale it would only take 200 ms to compute N = 100, which is fine. But at a quadratic scale this becomes 100² * 2 ms = 20,000 ms = 20 seconds for N = 100. That’s already quite a wait for only N = 100. What if N = 5,000? OMG, that’s 5,000² = 25,000,000, so it’s 25,000,000 * 2 ms = 50,000 seconds = 833.33 mins. That’s more than 13 hours!! Do you want to wait more than 13 hours just to finish one N = 5,000 DFT calculation? NO!!

That’s why some smart people came up with the idea of the Fast Fourier Transform (FFT). Just in case you are wondering, FFT has a computational cost of N log(N). I know, it’s not perfect, but see the power of log. Let’s take the example we just went through, N = 5,000. N log(N) = 18,494.85 (using log base 10 here; the base only changes the constant, not the scaling), so it’s 18,494.85 * 2 ms = 36,989.7 ms = approximately 37 seconds! Wow!! From more than 13 hours down to 37 seconds. Do you feel the power of log now?
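If you want to feel the difference yourself, here is a minimal sketch (not code from this series, just an illustration using NumPy with an arbitrary signal length of 4096) that compares a naive O(N²) DFT against NumPy’s built-in FFT:

```python
import time
import numpy as np

def naive_dft(x):
    """Direct O(N^2) DFT: an N-term sum for each of the N outputs."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    W = np.exp(-2j * np.pi * k * n / N)  # matrix of twiddle factors W_N^(kn)
    return W @ x

x = np.random.randn(4096)

start = time.perf_counter()
X_naive = naive_dft(x)
print(f"naive DFT:  {time.perf_counter() - start:.4f} s")

start = time.perf_counter()
X_fft = np.fft.fft(x)
print(f"np.fft.fft: {time.perf_counter() - start:.6f} s")

# Both should agree up to floating-point error.
print("max difference:", np.max(np.abs(X_naive - X_fft)))
```

Even with the naive version vectorized as a matrix multiplication, the FFT should come out far ahead, and the gap only widens as N grows.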

Now, let’s see how people broke this down to N log(N).

How to achieve N log(N)

If you look at the original equation of the DFT, there’s something in it called the “Twiddle Factor”.
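In that equation, the twiddle factor is the complex exponential that multiplies each x(n):

W_N^{kn} = e^{-i 2\pi k n / N}, \qquad W_N = e^{-i 2\pi / N}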

This is nothing but a periodic function, as you can see if you remember the famous Euler’s formula. This “periodicity” is the key.
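Writing it out with Euler’s formula makes the periodicity explicit: the twiddle factor just walks around the unit circle, so it repeats every N steps.

W_N^{kn} = \cos\!\left(\frac{2\pi k n}{N}\right) - i \sin\!\left(\frac{2\pi k n}{N}\right), \qquad W_N^{kn + N} = W_N^{kn}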

Let’s look at an example to see why such periodicity can reduce the computational complexity. Below is an example where N = 4. As k increases, you start to see that the twiddle factors overlap with each other. In other words, there’s repetition in the calculation, because when those terms overlap, it’s the exact same value you already calculated.
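Reconstructing the N = 4 case, these are the twiddle factors that appear in each X(k), for n = 0, 1, 2, 3 from left to right:

k = 0:   W_4^0   W_4^0   W_4^0   W_4^0
k = 1:   W_4^0   W_4^1   W_4^2   W_4^3
k = 2:   W_4^0   W_4^2   W_4^4   W_4^6
k = 3:   W_4^0   W_4^3   W_4^6   W_4^9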

In the above example, we have the following overlaps.
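Since W_4 repeats every 4 steps (W_4^4 = 1), the higher powers fold back onto lower ones:

W_4^4 = W_4^0, \qquad W_4^6 = W_4^2, \qquad W_4^9 = W_4^1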

Now you are getting there. See the following equations. These are nothing but the DFT, with each output unrolled to eliminate the summation Σ.
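For N = 4, unrolling the sum for each output gives:

X(0) = x(0) W_4^0 + x(1) W_4^0 + x(2) W_4^0 + x(3) W_4^0
X(1) = x(0) W_4^0 + x(1) W_4^1 + x(2) W_4^2 + x(3) W_4^3
X(2) = x(0) W_4^0 + x(1) W_4^2 + x(2) W_4^4 + x(3) W_4^6
X(3) = x(0) W_4^0 + x(1) W_4^3 + x(2) W_4^6 + x(3) W_4^9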

By using the overlap we just observed, we can replace some of the terms above.
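Substituting W_4^4 = W_4^0, W_4^6 = W_4^2, and W_4^9 = W_4^1, the last two equations become:

X(2) = x(0) W_4^0 + x(1) W_4^2 + x(2) W_4^0 + x(3) W_4^2
X(3) = x(0) W_4^0 + x(1) W_4^3 + x(2) W_4^2 + x(3) W_4^1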

The replaced terms are the ones where a higher power of W_4 was folded back onto its equivalent lower power (W_4^4 → W_4^0, W_4^6 → W_4^2, W_4^9 → W_4^1). This is the first part where the computational complexity can be reduced.

Now for the other part. Take a closer look at the individual terms in each of those unrolled sums. Don’t you see a pattern? Some of the terms are repeated across different X(k), right?

For example, x(0)W_4^0 appears in every equation, x(2)W_4^0 appears in both X(0) and X(2), and x(2)W_4^2 appears in both X(1) and X(3). YES, this is another part where we can save computation time, because in reality, when you compute something, you can simply save the result in memory. Later, you can just recall the value from memory without calculating the term again. This is the other part where you can reduce the computation in DFT.
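Here is a tiny sketch of that idea (not the final FFT algorithm, just the caching trick applied directly to the unrolled DFT; the function name dft_with_reuse is my own placeholder):

```python
import numpy as np

def dft_with_reuse(x):
    """O(N^2) DFT, but each product x[n] * W_N^(k*n mod N) is computed once and reused."""
    N = len(x)
    W = np.exp(-2j * np.pi / N)
    cache = {}                       # (n, exponent mod N) -> already-computed product
    X = np.zeros(N, dtype=complex)
    for k in range(N):
        for n in range(N):
            e = (k * n) % N          # periodicity: W_N^(kn) = W_N^(kn mod N)
            if (n, e) not in cache:  # reuse: compute each distinct product only once
                cache[(n, e)] = x[n] * W ** e
            X[k] += cache[(n, e)]
    return X

x = np.random.randn(4)
print(np.allclose(dft_with_reuse(x), np.fft.fft(x)))  # should print True
```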

Summary

To summarize, FFT is an optimized, faster equivalent of DFT. DFT has a computational complexity of O(N²), and FFT reduces this to N log(N). FFT does this by:

  1. Using the periodicity of the twiddle factor to avoid recalculating a term that gives the same value as one you already calculated before.
  2. Avoiding calculating the same term across different DFT outputs, so that you can simply recall the value you computed before (see the short sketch after this list).
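This post doesn’t derive the full algorithm, but for completeness, here is a minimal sketch of the classic radix-2 Cooley-Tukey FFT, one standard way the two ideas above are organized in practice. It recursively splits the signal into even- and odd-indexed samples and reuses each half-size result for two outputs (this sketch assumes N is a power of two):

```python
import numpy as np

def fft_radix2(x):
    """Minimal recursive radix-2 Cooley-Tukey FFT (assumes len(x) is a power of two)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    # Reuse: each half-size DFT is computed once and used for two outputs.
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    # Periodicity/symmetry of the twiddle factor: W_N^(k + N/2) = -W_N^k.
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + W * odd, even - W * odd])

x = np.random.randn(8)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))  # should print True
```

Each level of the recursion does O(N) work and there are log₂(N) levels, which is exactly where the N log(N) comes from.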

I hope this helps! See you next time!
