Sunday Function

Here's a simple function that's not often found in mathematical physics, but it's a nice showpiece for some interesting behavior:

$$f(x) = x^x$$

[Figure: a plot of f(x) = x^x for positive real x.]

I've only plotted it for positive real x, because for x less than zero it starts spitting out complex numbers in a very unfriendly way. We're only considering it along the domain in which it's purely a real function. Just from the graph it's clear that this is a very fast-growing function. This isn't exactly a shock. 3^3 = 3x3x3 = 27, while 5^5 = 5x5x5x5x5 = 3125. Larger x produces much larger f(x) at an astonishing rate.
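Just to put numbers on that, here's a quick Python sketch (nothing fancy, just evaluating x^x at a few points):

    # Evaluate x^x at a few points to watch it take off.
    for x in [1, 2, 3, 5, 10, 20]:
        print(x, x**x)

By x = 20 the function is already around 10^26.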

But just what exactly is this rate? To find out, we'd want to find the derivative of this function and take a look at its behavior for large x. But the usual algorithms for differentiation aren't much help here: this function isn't a simple exponential or power, so the exponent and power rules can't do anything for us. We either have to be especially clever or, as I prefer, we can learn from the clever people who came before us and use their methods. The trick to this one is to take the logarithm of our function and then differentiate that. Just for the moment, switch notations for convenience and rename f(x) to y. The derivative of our function y will be denoted y':

$$y = x^x$$

Now take the natural logarithm of both sides, and remember the rules for exponents:

$$\ln y = x \ln x$$

Now, since there are no screwball power towers lurking about, we can take the derivative. Remember that the derivative of ln(y) with respect to x is 1/y multiplied by the derivative of y with respect to x, which earlier we decided to just call y'. The product rule gives us the derivative of the right side:

$$\frac{y'}{y} = \ln x + 1$$

Multiply both sides by y to get the derivative by itself:

$$y' = y\,(\ln x + 1)$$

And remember that y is just x^x:

$$y' = x^x\,(\ln x + 1)$$

Bam, we're done. The derivative of x^x is just the function itself times a slowly growing logarithmic factor, so its rate of growth increases even faster than the function does. This makes it clear that this Sunday Function grows faster than the famously quick exponential function, which after all only grows exactly in proportion to itself. In fact, x^x grows even faster than the factorial, which is usually the canonical "faster than exponential" example.
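If you'd rather not take the algebra on faith, here's a minimal numerical check in Python: it compares a centered finite-difference estimate of the derivative against x^x(ln(x) + 1), and then watches the ratio x^x/x! grow:

    import math

    def f(x):
        return x**x

    # Centered finite difference vs. the closed-form derivative x^x(ln(x) + 1)
    h = 1e-6
    for x in [0.5, 1.0, 2.0, 3.0]:
        numeric = (f(x + h) - f(x - h)) / (2 * h)
        exact = f(x) * (math.log(x) + 1)
        print(x, numeric, exact)

    # x^x outruns the factorial as well: the ratio climbs without bound
    for n in [5, 10, 20]:
        print(n, n**n / math.factorial(n))

The two derivative columns agree closely, and the factorial ratio keeps climbing.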

Though in mathematics there's nothing wrong with such a property at all, the laws of nature are generally structured in ways that don't produce functions like x^x. Even when super-exponential functions do appear in nature, they're usually in the denominator of some statistical-mechanics expression that tends rapidly to zero with increasing x. But rarity is no excuse to gloss over crazy functions like this. It's just when you think you'll never have a use for a particular kind of math that you end up needing it.


Nice post, Matt! You can do this without logarithms, too.

Suppose you have a function of two variables, say

f = f(x,y)

then

df/dx = ∂f/∂x + (∂f/∂y)(dy/dx)

by the normal chain rule.

Now set y = x (so dy/dx = 1), and you see that for a function where "x" appears in two places, you can take the derivative with respect to the first occurrence and add the derivative with respect to the second.

In this example, we have

f(x) = x^x

which is like

f(x,y) = x^y

with y=x

So

df/dx = y*x^(y-1) + ln(x)*x^y

and plugging in y = x:

df/dx = x*x^(x-1) + ln(x)*x^x = x^x(1 + ln(x))

You can also just rewrite the function as exp(x ln x) and differentiate using the normal chain rule and product rule.
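If anyone wants to check both routes symbolically, a short SymPy sketch does it (assuming sympy is available):

    import sympy as sp

    x = sp.symbols('x', positive=True)

    # Direct differentiation of x^x, and the exp(x*ln x) rewrite
    print(sp.diff(x**x, x))                 # x**x*(log(x) + 1)
    print(sp.diff(sp.exp(x*sp.log(x)), x))  # (log(x) + 1)*exp(x*log(x)), the same thing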

By the way, you might want to notice what the limit of the function is as x tends to zero. Why we don't define 0^0 is an interesting question, and the behavior of this function can be compared with, for example, 0^x to suggest an answer.
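Numerically the limit is easy to see; a quick Python sketch:

    # x^x heads toward 1 as x -> 0 from the right
    for x in [0.1, 0.01, 0.001, 1e-6]:
        print(x, x**x)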

We're not far away from the factorial. Stirling says ln x! ≈ x ln x, or thereabouts.

By Lassi Hippeläinen on 23 Feb 2010

@Johan: While x^x actually does have a well-defined limit as x -> 0, the reason we don't define 0^0 is because 0^x does not have a well-defined limit as x -> 0. Specifically, the limit you get depends on your direction of approach: it's 0 if you approach from the positive side and infinity if you approach from the negative side.

By Eric Lund on 23 Feb 2010

That, and the limits of x^0 and 0^x as x → 0 are different. It's one of those things where any value we force on 0^0 is going to end up conflicting with some other, more fundamental definition.
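The competing limits are easy to tabulate; a quick Python sketch:

    # As t -> 0+: t^0 stays pinned at 1, while 0^t stays pinned at 0
    for t in [0.1, 0.01, 0.001]:
        print(t, t**0, 0**t)

    # For what it's worth, Python itself picks a value at the corner:
    print(0**0)  # 1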

Regarding 0^0, I defer to Knuth.

http://www.jhauser.us/publications/HandlingFloatingPointExceptions.html discusses it a bit:

To quote from Concrete Mathematics (Graham, Knuth, and Patashnik):

Some textbooks leave the quantity 0^0 undefined, because the functions x^0 and 0^x have different limiting values when x decreases to 0. But this is a mistake. We must define x^0 = 1, for all x, if the binomial theorem is to be valid when x = 0, y = 0, and/or x = -y. The theorem is too important to be arbitrarily restricted! By contrast, the function 0^x is quite unimportant.