Fractional Derivatives
What’s the half derivative of a function?
What an absurd question - does it even make sense? I think so, but in order to build up some intuition let’s take a few steps back.
Imagine that you lived in the middle ages and you were comfortable
with the concepts of addition and multiplication. You even understand
exponents, as a shorthand for repeated multiplication. Then someone
asks you: what’s $2^{2.5}$?
Nonsense, right?
Well, as I’m sure you know, yes - you can. But think about it for a
second. What does it mean? What does it mean to multiply by $2$ two and a *half* times?
Which brings us to the (obvious because we already learned it) answer,
which is that $2^{2.5} = 2^2 \cdot 2^{0.5} = 4\sqrt{2}$, where $2^{0.5}$ is defined as the number that, multiplied by itself, gives $2$.
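A quick sanity check (a throwaway snippet, not part of the argument): fractional powers really do compose the way repeated multiplication suggests.

```python
import math

# 2**0.5 is "the number that, multiplied by itself, gives 2":
half = 2 ** 0.5
print(half * half)        # ~2.0 (up to floating point)

# and 2**2.5 is "multiply by 2 two and a half times":
print(2 ** 2.5)
print(2 ** 2 * 2 ** 0.5)  # same value: 4 * sqrt(2)
print(4 * math.sqrt(2))
```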
Let’s generalize a bit and talk about repeated function application.
Consider the function $f(x) = x + 1$. Applying it twice gives $f(f(x)) = x + 2$; let’s write that as $f^2(x) = x + 2$, and in general $f^n(x) = x + n$.
Ok, how about $f^{1/2}$, half an application of $f$? Following the pattern, a natural guess is $f^{1/2}(x) = x + \frac{1}{2}$.
Let’s check it: $f^{1/2}(f^{1/2}(x)) = (x + \frac{1}{2}) + \frac{1}{2} = x + 1 = f(x)$.
Ok, how about another one? If $g(x) = 2x$, then $g^n(x) = 2^n x$, so $g^{1/2}(x) = \sqrt{2}\,x$, and indeed $g^{1/2}(g^{1/2}(x)) = \sqrt{2} \cdot \sqrt{2}\,x = 2x = g(x)$.
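Here’s a numeric check of the “half an application” idea for the scaling function $g(x) = 2x$ (a sketch; the candidate half-iterate $x \mapsto \sqrt{2}\,x$ is the one suggested by the pattern above):

```python
import math

def g(x):
    return 2 * x

def g_half(x):
    # candidate "half application" of g: scale by sqrt(2)
    return math.sqrt(2) * x

# applying the half step twice should equal one full application
x = 3.0
print(g_half(g_half(x)))  # ~6.0
print(g(x))               # 6.0
```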
Alright, now let’s level up. Previously we were dealing with
functions from a number to a number, but functions can take other
types of things too. How about a function that takes a *function* as input and returns a new function? For example, consider the shift operator $S$, defined by $S(f)(x) = f(x + 1)$. Applying it twice shifts by two, $S^2(f)(x) = f(x + 2)$, and in general $S^n(f)(x) = f(x + n)$.
Can we guess the answer for $S^{1/2}$? Sure: $S^{1/2}(f)(x) = f(x + \frac{1}{2})$, a shift by half, since shifting by a half twice is the same as shifting by one.
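In code, “half a shift” is easy to play with. This sketch assumes the shift-operator example, with a hypothetical helper `shift` that I’m introducing just for illustration:

```python
def shift(f, amount=1.0):
    """Return the function x -> f(x + amount)."""
    return lambda x: f(x + amount)

def square(x):
    return x * x

# one full shift, done directly and as two half shifts
once = shift(square, 1.0)
twice_half = shift(shift(square, 0.5), 0.5)

print(once(3.0))        # (3 + 1)^2 = 16.0
print(twice_half(3.0))  # same thing
```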
Ok, now for the finale. What if our function takes the derivative of the input function? In other words, $D(f) = f'$. What, then, is $D^{1/2}$: an operator that, applied twice, takes one derivative?
Eek… that is a bit harder.
Let’s take a quick detour and draw an analogy to linear algebra,
specifically eigenvectors. If you want to multiply a vector $v$ by a matrix $A$ a total of $n$ times, i.e. compute $A^n v$, there’s a classic trick:

- Compute the eigenvectors of the matrix $A$. These are the vectors that, when multiplied by $A$, are just scaled by a constant (the constant being the eigenvalue).
- Decompose your vector $v$ into a linear combination (weighted sum) of those eigenvectors.
- Your answer is the linear combination of those eigenvectors, where each eigenvector is first scaled by its eigenvalue to the $n$th power.
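The recipe above can be sketched in a few lines of plain Python. The matrix is my own small example (not from the original text): $A = \begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$, whose eigenvectors $(1, 1)$ and $(1, -1)$ (eigenvalues $3$ and $1$) are easy to find by hand.

```python
def mat_vec(v):
    # multiply by A = [[2, 1], [1, 2]]
    return (2 * v[0] + v[1], v[0] + 2 * v[1])

def a_power(v, n):
    # Step 1: decompose v = a*(1, 1) + b*(1, -1)
    a = (v[0] + v[1]) / 2
    b = (v[0] - v[1]) / 2
    # Step 2: scale each weight by its eigenvalue to the nth power
    a, b = a * 3 ** n, b * 1 ** n
    # Step 3: recombine the eigenvectors
    return (a + b, a - b)

v = (3.0, 1.0)
# check against three literal multiplications
print(mat_vec(mat_vec(mat_vec(v))))  # (55.0, 53.0)
print(a_power(v, 3))                 # (55.0, 53.0)

# and nothing stops us from asking for n = 0.5, "half a multiplication"
print(a_power(v, 0.5))
```

Note that `a_power` never insists `n` is a whole number, which is exactly the loophole we’re about to exploit.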
I tried to explain why this works in depth here, but the quick summary is that we
found special inputs (the eigenvectors) which were particularly easy
to compute for our function (multiplication by $A$): applying the function to an eigenvector just scales it, so applying the function $n$ times scales it by the eigenvalue $n$ times, i.e. by $\lambda^n$.
One thing to mention is that this only works for linear functions,
i.e. functions $f$ satisfying $f(x + y) = f(x) + f(y)$ and $f(cx) = c\,f(x)$.
Does the derivative function have these properties? Actually yes: $(f + g)' = f' + g'$ and $(cf)' = c\,f'$. The derivative is a linear function (often called a linear operator). So, we can utilize the same trick.
Can you think of any functions which have a derivative that is equal to the function itself (or, maybe, a scaled version of it)?
Yep, you bet: $\frac{d}{dx} e^{kx} = k\,e^{kx}$. The exponentials $e^{kx}$ are eigenfunctions of the derivative operator, each with eigenvalue $k$.
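A quick numeric spot check that $\frac{d}{dx} e^{kx} = k\,e^{kx}$, using a crude symmetric finite difference (just a sketch, not a rigorous verification):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # symmetric finite difference: a rough numeric approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

k = 3.0
f = lambda x: math.exp(k * x)

x = 0.7
print(numeric_derivative(f, x))  # approximately k * e^(k*x)
print(k * f(x))
```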
So, if we could represent our input function as a weighted sum of exponentials, the eigenvector trick would carry over directly.
Oh, what’s that you say? The Fourier transform can convert any function into an integral (read: weighted sum) of complex exponential functions (sometimes called complex sinusoids)? $f(x) = \int_{-\infty}^{\infty} \hat{f}(\omega)\, e^{i\omega x}\, d\omega$.
So, we’ve rewritten our function as a weighted sum of eigenfunctions of the derivative operator. The weights are the Fourier coefficients $\hat{f}(\omega)$, and each eigenfunction $e^{i\omega x}$ has eigenvalue $i\omega$.
At this point, we’ve solved how to take the $n$th derivative of any function: decompose it into complex exponentials, multiply each weight by $(i\omega)^n$, and recombine. And nothing in that recipe requires $n$ to be a whole number. Let’s try it on the simplest example we can find.
Lucky for us, the Fourier transform of $\cos(x)$ is about as simple as it gets: just two complex exponentials, $e^{ix}$ and $e^{-ix}$. Picture $e^{ix}$ as a point rotating counterclockwise around the unit circle in the complex plane as $x$ increases.
So that’s a single complex exponential function. What if we add one more, $e^{-ix}$, which rotates at exactly the same rate but in the opposite direction, and then add the two values together?
The imaginary (vertical) components cancel each other out perfectly
and all we’re left with is a real number, which is twice a cosine.
Analytically, $\cos(x) = \frac{e^{ix} + e^{-ix}}{2}$.
Why multiply by $\frac{1}{2}$? Because each of the two rotating points contributes a horizontal component of $\cos(x)$, so their sum is $2\cos(x)$.
Let’s test our function for a few values of $x$: at $x = 0$ we get $\frac{1 + 1}{2} = 1 = \cos(0)$, and at $x = \pi$ we get $\frac{-1 + (-1)}{2} = -1 = \cos(\pi)$.
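The same check, automated with Python’s built-in `cmath` (a quick sketch; the imaginary part should come out to essentially zero):

```python
import cmath
import math

def cos_via_exponentials(x):
    # cos(x) = (e^{ix} + e^{-ix}) / 2; the imaginary parts cancel
    z = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
    return z.real

for x in [0.0, 1.0, math.pi / 3, math.pi]:
    print(cos_via_exponentials(x), math.cos(x))
```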
So far so good. How about its derivative?
Well, we know what it should come out to: $-\sin(x)$. Using the eigenvalue trick, we multiply each component by its eigenvalue, $i$ for $e^{ix}$ and $-i$ for $e^{-ix}$: $\frac{d}{dx}\cos(x) = \frac{i\,e^{ix} + (-i)\,e^{-ix}}{2}$. Is that really $-\sin(x)$?
Yes, and here’s one way to think about it (you could also plug in a few values of $x$ and check): since $i = e^{i\pi/2}$, multiplying by $i$ is a quarter-turn rotation. So $i\,e^{ix} = e^{i(x + \pi/2)}$ and $(-i)\,e^{-ix} = e^{-i(x + \pi/2)}$, which means $\frac{i\,e^{ix} + (-i)\,e^{-ix}}{2} = \frac{e^{i(x+\pi/2)} + e^{-i(x+\pi/2)}}{2} = \cos\left(x + \frac{\pi}{2}\right) = -\sin(x)$.
What this also makes apparent, though, is that taking the derivative of $\cos(x)$ simply shifts its phase by $\frac{\pi}{2}$: one derivative, one quarter turn.

Ok, this is interesting and all, but let’s solve the problem. For the half derivative, each eigenvalue gets raised to the $\frac{1}{2}$ power: $D^{1/2}\cos(x) = \frac{i^{1/2}\,e^{ix} + (-i)^{1/2}\,e^{-ix}}{2} = \frac{e^{i(x + \pi/4)} + e^{-i(x + \pi/4)}}{2} = \cos\left(x + \frac{\pi}{4}\right)$.
And, in general: $D^{a}\cos(x) = \cos\left(x + \frac{a\pi}{2}\right)$.
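We can check this numerically with `cmath`, relying on Python’s principal-branch complex powers for $i^a$ and $(-i)^a$ (a sketch of the eigenvalue recipe, not a general fractional-derivative library):

```python
import cmath
import math

def frac_derivative_of_cos(a, x):
    # a-th derivative of cos at x via the eigenvalue trick:
    # scale the e^{ix} component by i**a and the e^{-ix} component by (-i)**a
    z = ((1j ** a) * cmath.exp(1j * x) + ((-1j) ** a) * cmath.exp(-1j * x)) / 2
    return z.real  # imaginary parts cancel, up to rounding

x = 1.2
print(frac_derivative_of_cos(1.0, x), -math.sin(x))               # ordinary derivative
print(frac_derivative_of_cos(0.5, x), math.cos(x + math.pi / 4))  # half derivative
print(frac_derivative_of_cos(2.0, x), -math.cos(x))               # second derivative
```

Applying the half derivative twice should match a single full derivative, which is a nice extra sanity check to run.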
Ok, one last thing (I promise!). We’ve been focusing on fractional
derivatives, but how about negative ones? We have a general formula
in terms of $a$, so let’s just plug in $a = -1$: $D^{-1}\cos(x) = \cos\left(x - \frac{\pi}{2}\right) = \sin(x)$. And indeed, $\sin(x)$ is an antiderivative of $\cos(x)$: the $-1$th derivative is an integral.
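A last numeric spot check, directly on the closed-form rule $D^a\cos(x) = \cos(x + \frac{a\pi}{2})$ with $a = -1$ (just a sketch):

```python
import math

def frac_derivative_of_cos(a, x):
    # the general phase-shift formula for the a-th derivative of cosine
    return math.cos(x + a * math.pi / 2)

# a = -1 should give the antiderivative of cosine, i.e. sin(x)
for x in [0.0, 0.5, 2.0]:
    print(frac_derivative_of_cos(-1, x), math.sin(x))
```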
So, in conclusion, the $a$th derivative of $\cos(x)$ is $\cos\left(x + \frac{a\pi}{2}\right)$ for any real $a$, fractional and negative values included. And since the Fourier transform lets us write essentially any function as a weighted sum of complex exponentials, the same recipe makes the $a$th derivative of any such function trivial to compute.
- Note this is using the mathematician’s definition of trivial, i.e. “theoretically possible” ↩