Basic Concepts: Energy

Having talked about force and fields, it seems fairly natural to move on to energy next. Of course, it also would've made sense to talk about energy first, and then fields and forces. These are interlocking concepts.

A concise one-sentence definition of energy might go something like:

The energy content of an object is a measure of its ability to change its own motion, or the motion of another object.

That's a little longer than the previous one-sentence descriptions, but I'm trying to avoid the recursion effect of the usual one-sentence definition, "Energy is ability to do work," which then requires a definition of work, which then requires a definition of force, and pretty soon you're playing six degrees of Wikipedia. Anyway, I think that the above captures the essence of energy without introducing new words requiring definition.

An object can have energy because it is moving, and it can have energy because it is stationary in a place where some interaction is likely to cause it to move. Massive objects have energy simply by virtue of having mass, and objects at finite temperature have energy because of thermal fluctuations. All of these forms of energy can be used to set a stationary object into motion, or to stop or deflect an object that is moving.

There are two basic types of energy associated with objects that are already in motion. The first is kinetic energy, which is the energy contained in an object with some mass either moving through space, or sitting in one place and spinning. A baseball flying through the air has kinetic energy, and the spinning wheel of a stationary bike has rotational kinetic energy.
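To put numbers on those two flavors, here's a minimal sketch in Python of the standard formulas-- KE = ½mv² for motion through space, and ½Iω² for spinning-- using made-up but plausible values for the baseball and the wheel:

```python
# Translational vs. rotational kinetic energy. All numbers here are
# illustrative guesses, not measurements.
m_ball = 0.145      # kg, a baseball
v_ball = 40.0       # m/s, a hard throw
ke_ball = 0.5 * m_ball * v_ball**2       # KE = (1/2) m v^2

m_wheel = 1.5       # kg, assumed bike-wheel mass
r_wheel = 0.33      # m, assumed wheel radius
omega = 30.0        # rad/s, assumed spin rate
I_wheel = m_wheel * r_wheel**2           # thin-hoop approximation: I = m r^2
ke_spin = 0.5 * I_wheel * omega**2       # rotational KE = (1/2) I omega^2

print(f"baseball: {ke_ball:.0f} J, spinning wheel: {ke_spin:.0f} J")
```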

There's also energy associated with the random motion of the particles making up a macroscopic object, generally referred to as thermal energy or possibly heat energy (which is arguably redundant, but whatever). This is really the sum of all of the kinetic energy of the individual atoms and molecules making up the object, but it behaves a little differently than the kinetic energy associated with an object whose center of mass is moving through space, so it gets a different name. It may not seem obvious that thermal energy can be used to change the motion of objects, but that's essentially how a steam generating plant works-- the thermal energy in a boiler converts water to steam, which turns a turbine, which generates electricity. Thermal energy turns into kinetic energy, which turns into electrical energy.
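For the simplest case-- a monatomic ideal gas, where the thermal energy really is just the summed kinetic energy of the atoms-- the average works out to (3/2)kT per atom. A minimal sketch, assuming room temperature:

```python
# Thermal energy of a monatomic ideal gas: (3/2) k T per atom, on average.
# (In liquids and solids, interatomic potential energy contributes too.)
k_B = 1.380649e-23      # J/K, Boltzmann's constant
N_A = 6.02214076e23     # atoms per mole, Avogadro's number
T = 300.0               # K, roughly room temperature

e_atom = 1.5 * k_B * T
e_mole = 1.5 * N_A * k_B * T
print(f"{e_atom:.2e} J per atom, about {e_mole:.0f} J per mole")
```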

Objects that are not actually moving can have the potential to start moving, which we describe in terms of potential energy. A heavy object on a high shelf has potential energy: it's not actually moving, but it has the potential to acquire a substantial amount of kinetic energy when you bump the shelf and it falls on your foot. Two charged objects held close to one another have potential energy: when you release them, they'll either rush together, or fly apart, depending on what their charges are.
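Both of those examples map onto textbook formulas: U = mgh for the object on the shelf, and U = kq₁q₂/r for the pair of charges. A small sketch with invented numbers-- the sign of the Coulomb term tells you whether the charges rush together or fly apart:

```python
# Gravitational and electrostatic potential energy, with invented numbers.
g = 9.81                 # m/s^2
m, h = 5.0, 2.0          # kg and m: heavy object, high shelf
U_shelf = m * g * h      # energy available to fall on your foot

k_e = 8.988e9            # N m^2 / C^2, Coulomb constant
q1, q2 = 1e-6, -1e-6     # C; opposite signs -> U < 0, they rush together
r = 0.01                 # m separation
U_charges = k_e * q1 * q2 / r

print(f"shelf: {U_shelf:.0f} J, charges: {U_charges:.2f} J")
```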

Energy is an essential concept in physics because of energy conservation, which does not mean turning off the heat when you're not at home. The law of conservation of energy says that the total energy (all forms) of a system of interacting objects is a constant, provided that nothing outside the system under consideration has a significant interaction with the objects in the system. The energy can shift from one form to another, but is neither created nor destroyed.

For example, the total energy of a system consisting of a mass hanging at the end of a string and the Earth exerting a gravitational force on the mass is a constant. As the mass swings back and forth as a pendulum, the energy changes from potential to kinetic and back, but the total remains the same. If I reach in and push the mass, though, that's an interaction from outside the system, and the energy of the pendulum-Earth system changes as a result (the energy of the larger pendulum-Earth-me system stays the same, though-- the energy I add to the motion of the pendulum comes out of the chemical energy stored in the katsudon I had for dinner).
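You can watch this trade-off numerically. Here's a minimal sketch-- an assumed 1 m pendulum released from half a radian, stepped forward with a simple energy-friendly (semi-implicit Euler) integrator-- showing that kinetic plus potential energy stays essentially constant:

```python
import math

# A swinging pendulum: kinetic + potential energy should stay constant.
m, L, g = 1.0, 1.0, 9.81    # kg, m, m/s^2 (assumed values)
theta, omega = 0.5, 0.0     # initial angle (rad) and angular velocity
dt = 1e-4                   # s, time step

def total_energy(theta, omega):
    ke = 0.5 * m * (L * omega)**2            # kinetic energy of the bob
    pe = m * g * L * (1 - math.cos(theta))   # height above the low point
    return ke + pe

E0 = total_energy(theta, omega)
for _ in range(100_000):    # ten seconds of swinging
    omega -= (g / L) * math.sin(theta) * dt  # update velocity first...
    theta += omega * dt                      # ...then position (symplectic)

print(f"start: {E0:.6f} J, end: {total_energy(theta, omega):.6f} J")
```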

Conservation of energy is one of the most important tools in physics because it can be used to reduce complicated physical situations to exercises in bookkeeping. If you know the total energy at the beginning of some problem, and can determine the energy of some of the objects at the end of the problem, you can find the rest of the energy by just subtracting the bit you know from the total you started with. A colleague of mine likes to make an analogy between energy and money, and I've stolen this from him for some intro classes: kinetic energy is like money in your pocket, potential energy is like money in the bank, and thermal energy is like money that you've spent. If you can balance a checkbook, you can do conservation of energy problems.
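In code, the bookkeeping is literally a subtraction. A toy example with invented numbers-- a sled starting at the top of a hill, where whatever energy doesn't show up as motion at the bottom must have been "spent" as friction heat:

```python
# Energy bookkeeping: total at the start = kinetic + thermal at the end.
m, g = 20.0, 9.81        # kg sled, m/s^2
h_top = 10.0             # m: all the energy starts "in the bank"
v_bottom = 12.0          # m/s, measured at the bottom of the hill

E_total = m * g * h_top               # the starting balance
ke = 0.5 * m * v_bottom**2            # "money in your pocket" at the end
thermal = E_total - ke                # the rest was "spent" on friction
print(f"total {E_total:.0f} J = kinetic {ke:.0f} J + thermal {thermal:.0f} J")
```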

Conservation of energy is a bedrock principle of the universe. In a very fundamental way, it's the result of the fact that the laws of physics are invariant in time-- that is, that they're the same now as they were in the past, and will be in the future. This connection between symmetry and conservation is called "Noether's Theorem," and is beyond the scope of this discussion. Suffice it to say that no matter what scale of physics you're working on, from subatomic to cosmological, energy will be conserved (albeit in an average sense, for certain classes of quantum problems, about which more later).

As a result, physicists like to talk about everything in terms of energy. We can describe forces in terms of the potential energy due to an interaction between two objects, and in this picture the force on a particle shows up as the tendency of objects to move toward points of lower potential energy (which you can find with a fairly trivial application of vector calculus). Above a certain fairly basic level of physics, most problems are described by writing down potential energy functions, or sketching potential energy curves-- in fact, that's really the only way to talk about interactions in quantum mechanics at the level where you're dealing with whole atoms and molecules. (When you get down to the real details of quantum systems at very small scales, you sort of abandon the potential energy picture again-- if you're talking in terms of the exchange of force-carrying bosons, there's no longer a potential energy to write down. But this, again, is beyond the scope of this discussion.)
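That "move toward lower potential energy" picture is compact enough to show in a few lines. A sketch, using a spring potential U = ½kx² as a stand-in example: the force is just the negative slope of U, estimated here with a finite difference:

```python
# Force from a potential: F = -dU/dx, here via a central difference.
def U(x, k=50.0):        # J; a spring potential, U = (1/2) k x^2
    return 0.5 * k * x**2

def force(x, dx=1e-6):
    return -(U(x + dx) - U(x - dx)) / (2 * dx)

print(force(0.1))        # about -5.0 N: pushed back toward the minimum at x = 0
```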

Another slippery thing about energy is its equivalence to mass, expressed through Einstein's famous equation:

E = mc²

Basically, when you start trying to work out the energy of an object moving at relativistic speeds, you find that it depends on the mass of the object (which is not surprising), and that it doesn't go to zero as the speed goes to zero (which is). Instead, you find that a stationary object has an energy equal to its mass times the speed of light squared, which is a really huge number. We don't really notice this under ordinary conditions, because the rest energy just provides a sort of constant offset, and we mostly look at changes in energy. Going from zero to one joule of kinetic energy has the same dynamical effect as going from one trillion joules of rest energy to one trillion joules of rest energy plus one joule of kinetic energy, so we mostly just ignore the rest energy in everyday circumstances.
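The size of that offset is easy to check. A quick sketch for a baseball (an arbitrary stand-in for "ordinary object"):

```python
# Rest energy versus everyday kinetic energy, for a baseball.
c = 2.998e8              # m/s, speed of light
m = 0.145                # kg
E_rest = m * c**2        # the enormous constant offset
E_kick = 1.0             # J, a gentle toss

print(f"rest energy: {E_rest:.2e} J, added kinetic energy: {E_kick} J")
print(f"fractional change: {E_kick / E_rest:.1e}")
```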

The equivalence of mass and energy has some interesting consequences, though. Small amounts of mass can be turned into large amounts of energy, which provides the basis for nuclear fusion-- two protons and two neutrons have a tiny bit more mass than the nucleus of one helium atom. When they fuse together to make a helium nucleus in the core of a star, that extra mass is converted into energy, which keeps the star hot, and provides heat and light for any planets in the immediate neighborhood. The amount of mass converted into energy in a single fusion reaction is pretty small, but there are an awful lot of protons in the Sun, and if you add together the energy of enough fusion reactions you get, well, solar flares a hundred times the size of the Earth.
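The arithmetic is worth doing once. Using standard (rounded) particle masses in atomic mass units, the mass that disappears in forming one helium-4 nucleus comes out to about 28 MeV of energy:

```python
# Mass defect of helium-4: 2 protons + 2 neutrons weigh more than the
# nucleus they fuse into. Masses in atomic mass units, standard rounded values.
u_to_kg = 1.66054e-27
c = 2.998e8
m_p, m_n = 1.007276, 1.008665    # proton and neutron
m_he4 = 4.001506                 # helium-4 nucleus

dm = (2 * m_p + 2 * m_n) - m_he4         # the mass that disappears...
E = dm * u_to_kg * c**2                  # ...and reappears as energy
print(f"defect: {dm:.6f} u -> {E:.2e} J ({E / 1.602e-13:.1f} MeV)")
```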

The equivalence of mass and energy is also what lets us do particle physics. Not only can you take mass and convert it to energy, but you can take energy and convert it to mass. If you take a spring, and compress it in your hands, the mass of that spring increases by an infinitesimal amount due to the potential energy you've added to the spring. That's not terribly interesting, or even detectable, but if you start with smaller masses, you can put this to use.

If you take a subatomic particle-- a proton, say-- and accelerate it to very high speeds, close to the speed of light, it acquires a large amount of kinetic energy. If that proton then collides with another particle-- an antiproton headed in the other direction, for example-- that energy can be converted into matter, in the form of particles and anti-particles of all different types. Conservation of energy just tells us that the total energy of the proton-antiproton system has to remain the same-- the initial kinetic energy can be converted into any other form of energy you might want, and mass is just a lumpy sort of energy. There are some rules-- the mass tends to come in the form of particles paired with anti-particles, which can annihilate with one another and return to energy (in the form of high-energy photons, usually)-- but the whole slew of mesons and baryons and leptons that you hear particle types nattering on about can be created in accelerator experiments, given enough energy.

This is why high-energy physicists are always looking for bigger colliders, too. The larger the collider, the more kinetic energy is given to the particles being accelerated, and the higher the mass of the particles that can be created during the collision. Increasing the energy of a particle accelerator increases the number of possible particles it can make and detect, and opens the way to new physics.
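The accounting is simple for a head-on collision between equal and opposite beams: the full center-of-mass energy, twice the beam energy, is available to turn into mass. A sketch with an assumed round-number beam energy (not any particular machine):

```python
# Energy available for making new particles in a symmetric head-on collider.
c = 2.998e8
GeV = 1.602e-10                  # joules per GeV

E_beam = 1000.0 * GeV            # assumed: 1 TeV per beam
E_available = 2 * E_beam         # equal and opposite momenta -> all of it counts
m_max = E_available / c**2       # heaviest batch of new particles possible

print(f"up to {m_max:.2e} kg of new particles (2000 GeV/c^2 worth)")
```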

The other cool thing about energy and energy conservation is that energy isn't always conserved. Conservation of energy can be violated, as long as the violation doesn't last very long. This is expressed in the energy-time uncertainty relation, ΔE Δt ≥ ħ/2, which is the equation on the right in the banner to this blog.

The easiest way to understand energy-time uncertainty is to think about the energy carried by the electromagnetic field. We know from Planck and Einstein that the energy of a photon is determined by the oscillation frequency of the field associated with that photon-- higher frequencies have more energy. If you want to measure the energy of a light field, then, what you're really trying to do is to measure the frequency of oscillation of that field, and the best way to do that is to measure the time required for some large number of oscillations. The more oscillations you measure, the smaller the uncertainty in the frequency, and thus the energy. But then, the more oscillations you measure, the more time you spend doing the measurement, and the greater the uncertainty in exactly when you can be said to have made that measurement. If you do a fast measurement, you get a large uncertainty in the energy, and if you do a slow measurement, you get a small uncertainty in the energy.
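The trade-off is just Δf ≈ 1/(measurement time) dressed up with Planck's constant. A rough sketch:

```python
# Energy-time uncertainty from frequency counting: watch longer, and the
# frequency (hence energy, E = h f) gets sharper, but "when" gets fuzzier.
h = 6.626e-34            # J s, Planck's constant

def dE(t_measure):
    df = 1.0 / t_measure         # rough frequency resolution of the count
    return h * df

for t in (1e-9, 1e-6, 1e-3):     # fast, medium, and slow measurements
    print(f"measure for {t:.0e} s -> energy uncertainty ~ {dE(t):.1e} J")
```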

At the quantum scale, this leads to the idea of "virtual particles." Particle-antiparticle pairs can pop into existence out of nowhere, as long as they go away very quickly-- in a time less than Planck's constant divided by the rest energy (give or take). An electron-positron pair can pop into existence for 10^-20 seconds or so, a proton-antiproton pair for about 10^-23 seconds, and a bunny-antibunny pair a whole lot less than that.
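Those lifetimes follow from Δt ≈ ħ/(rest energy of the pair); here's the back-of-envelope version. The prefactor is fuzzy (that's the "give or take" above), so treat the outputs as order-of-magnitude only:

```python
# Rough virtual-pair lifetimes: dt ~ hbar / (2 m c^2). Order of magnitude only.
hbar = 1.055e-34         # J s
c = 2.998e8              # m/s
pairs = {"electron-positron": 9.109e-31,    # kg, electron mass
         "proton-antiproton": 1.673e-27}    # kg, proton mass

for name, m in pairs.items():
    dt = hbar / (2 * m * c**2)   # pair rest energy is 2 m c^2
    print(f"{name}: ~{dt:.0e} s")
```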

This might seem like an interesting curiosity with no practical consequences, given how short these times are, but that's not the case. It means that, in a certain sense, empty space is actually positively boiling with particles popping in and out of existence. No one pair sticks around for all that long, but the instants during which they exist are enough to show some effects-- an electron moving along through space is constantly being tugged on by interactions with virtual particles, and these interactions change the way that the electron interacts with an electromagnetic field. That, in turn, leads to a very small change in the energy levels of a hydrogen atom, which we can measure with enough precision to clearly see the effect, called the "Lamb Shift."

So, not only is energy useful when it's conserved, we can see the effects when it isn't, even though the violations only last a very short time. Amazing stuff, energy. And that's pretty much everything I can think of to say about it.


This is probably either nitpicky or perhaps wrong, but your definition doesn't work without appealing to entropy, true? A closed system at a uniform finite temperature can't do any work, but does have energy. Over time the thermal energy of the universe is getting more and more evenly distributed; once there is no longer any energy gradient there is no more ability to do work (at least at any macroscopic scale). I can't put the heat in my living room (assuming uniform temperature) to any useful purpose unless I allow that energy to move to another area (like outside, where it's freakin' cold).

I suppose this breaks down when one invokes Maxwell's demon and such.

Please disabuse me of any errors here - my physics education ended 11 years ago and since then I've been slowly becoming a [shudder] social scientist [/shudder].

"objects at finite temperature have energy because of thermal fluctuations."

It might be worth pointing out to outsiders here that "finite temperature" means "temperature greater than absolute zero". In particular, finite is being used to contrast with zero, rather than to infinite.

A statistical mechanics nitpick: you compare thermal energy to atomic kinetic energy. But thermal energy is really more like "all the microscopic energies we're sweeping under the rug and ignoring", and as such, includes both the kinetic and potential energies of the atoms. See, e.g., the heat capacity of a solid: it receives contributions from both the kinetic and the potential terms. Often one's thermodynamic intuition is based on gases, which are rarefied enough that the interaction potential is negligible, so you only need to pay attention to the kinetic energy. But that ignores the thermodynamics of condensed matter.

By Ambitwistor on 30 Jan 2007

Noether's theorem 1:1 couples symmetries to observables: A divergence-free current (conserved property) arises if the Lagrangian or the action is invariant under continuous (as such or by Taylor series) transformation. Of some two dozen symmetries in physics, only parity is outside Noether's theorem. Parity attacks Lorentz, Poincaré, and diffeomorphism symmetries. Metric gravitation is parity-even. Einstein-Cartan, Weitzenböck; affine, teleparallel; non-commutative spacetime, etc. gravitations are parity-odd.

Only parity is disjoint in gravitation theories. Gravitation is empirically blind to every physical and chemical variable imagined during 420+ years of testing. Parity has never been tried (two days in commercial hardware). Resolution of gravitation theories requires another, non-Noether approach.

This is probably either nitpicky or perhaps wrong, but your definition doesn't work without appealing to entropy, true? A closed system at a uniform finite temperature can't do any work, but does have energy. Over time the thermal energy of the universe is getting more and more evenly distributed; once there is no longer any energy gradient there is no more ability to do work (at least at any macroscopic scale).

I'm not sure you need to specifically invoke entropy, but you definitely need a thermal gradient in order to do useful work with thermal energy-- you need at least one object that is much hotter (or colder) than its surroundings. Despite taking thermo and stat-mech twice, I retain very little knowledge of either, so I'm a little hazy on entropy in a macroscopic thermodynamic context.

It might be worth pointing out to outsiders here that "finite temperature" means "temperature greater than absolute zero". In particular, finite is being used to contrast with zero, rather than to infinite.

Good catch.
"Finite" is another jargon term, like "order of magnitude," that I've internalized to the point that I no longer notice myself using it.

Often one's thermodynamic intuition is based on gases, which are rarefied enough that the interaction potential is negligible, so you only need to pay attention to the kinetic energy. But that ignores the thermodynamics of condensed matter.

That's exactly what happened-- when I think of anything even vaguely thermodynamic, I think in terms of a gas. I apologize unreservedly to all the interaction potentials whose contributions I slighted in my thoughtlessness.

I was about to point out the same thing as Evan above: From the Second Law of Thermodynamics, you can't do work with the thermal energy of a homogeneous closed system. "A single heat reservoir can't do work", "entropy increases with time", "heat always flows from hotter to colder bodies", "the Carnot cycle is the most efficient heat engine possible", are all (I believe equivalent) formulations of the Second Law.

I found your one-sentence definition of energy very concise and profound in meaning. As a physics student, it's by far my favourite definition of energy so far. So thank you. May I request a one-sentence definition of momentum also?

From the Second Law of Thermodynamics, you can't do work with the thermal energy of a homogeneous closed system.

Do truly homogeneous closed systems occur in reality, or are they idealizations?

I don't think the requirement of an energy gradient for work is reserved for thermal energy alone. Two objects at rest on the ground can be said to be in equilibrium with each other, as can two objects travelling on parallel paths through space. Both of these systems have energy in them, but are doing no useful work. An object with a different velocity (an energy gradient) would be required.

Georg: or, as Flanders and Swann put it:

"Heat won't pass from a cooler to a hotter
You can try it if you like but you far better notter
'Cause the cold in the cooler will get hotter as a ruler
'Cause the hotter body's heat will pass to the cooler
And that's a physical law!"

Another nitpick. You say:

"Conservation of energy is a bedrock principle of the universe. (...) Suffice to say that no matter what scale of physics you're working on, from subatomic to cosmological, energy will be conserved."

Actually, it is not clear whether energy is conserved in a cosmological context, and in which sense. As Einstein's equations couple the energy-momentum tensor of matter to the gravitational field, matter can lose (or gain) energy over time. For example, photons in an expanding universe lose energy and become redshifted (this is not, as many popularizations suggest, simply a Doppler effect). Of course you could say that the energy is being transferred to the gravitational field, but the problem is that in GR there is no formal definition for what "energy of a gravitational field" means, except in restricted situations like asymptotically flat spaces. So perhaps one ought to say that the energy of matter is simply not conserved when the background space is dynamical.

(Following the standard and confusing GR convention, I use "matter" to mean "everything that is not gravity", including radiation.)

In response to the several questions above about closed systems, in statistical thermodynamics, you have to be more careful about what you call a closed system, because there are many.

1. Constant number, volume, and energy: NVE ensemble, this is the idealized microcanonical ensemble. This is a system where the only governing equations are either the Schroedinger equation or Newton's (Hamilton's, Lagrangian, pick a method) equations, as we write them generically in intro physics courses for arbitrary numbers of particles. Such a system can do no external work, because by construction, it's not coupled to anything that the system can act upon in any way. The question of whether or not it can actually evolve in entropy (aka entropy production) is a significant part of nonequilibrium statistical mechanics.

2. NPH (constant number, pressure, and enthalpy), NVT (number, volume, and temperature) and several other ensembles are all closed to particles, but open to energy and thermal work exchange in one form or another.

3. At equilibrium, ensemble equivalence in the infinite number, infinite volume limit means that you use whichever construction is most convenient to solve the problem you're asking. Because it's so restrictive in the variables that you can do calculus on, we almost never actually use the NVE (microcanonical) ensemble except to derive ideal gas / ideal crystal results, or to talk generically about the number of states of a system, or in many cases these days to talk about Lyapunov exponents and non-linear dynamics.

Most importantly for Chad's overall discussion, in all of these types of ensembles, there is an energy that is conserved, but there is only one where the energy that is conserved is exclusively the Hamiltonian energy, and that's the NVE ensemble, which again is exactly the same thing as saying that we're solving the generic Hamiltonian in the form that we usually first see it for 2 particles. For all the other ensembles, the energies that are interesting are more generally "Free Energies", i.e., Temperature, Gibbs Free Energy, Enthalpy; Entropy in this sense is a Free Energy as well, since there are ways to construct constant-Entropy ensembles.

It might be worth mentioning, in the context of virtual particles and energy uncertainty, that the Casimir force (force between closely spaced conductors) is an observable effect due to the fact that vacuum is not really empty but filled with virtual photon pairs. This is interesting in that it is a macroscopic measurement of such vacuum fluctuations.

Why is it "ΔT ΔE ≥ ħ/2" and not "ΔT ΔE ≤ ħ/2"? It would seem to me that the greater-than would mean that the amount by which energy is not conserved could be arbitrarily high.