Clarke and Lagrange

I’ve spent a lot of time in airplanes and airports recently, the one benefit of which is catching up on my reading. A couple of days ago, I finally read Arthur C. Clarke’s 1961 novel A Fall of Moondust, which I bought back in April, I believe, when it was on sale for \$2 in the Kindle Store. Because of my recent equation-filled post, I smiled when the story mentioned satellites at the Earth-Moon Lagrange points. But I was puzzled to find that Clarke’s nomenclature and calculations didn’t match my expectations.

First, there’s the nomenclature. In the story, there’s a satellite on a line between the Earth and the Moon. It’s used for observation of and communication with the near side of the Moon because it stays in place relative to the Moon’s orbit. This is clearly the first Lagrange point, or L1, of the Earth-Moon system, and yet Clarke gives the satellite the name Lagrange II:

High above the surface of the Moon, from an antenna which, curiously enough, was aimed directly at the face of the Earth, a radio pulse launched itself into space. In a sixth of a second it had flashed the fifty thousand kilometres to the relay satellite known as Lagrange II, directly in the line between Moon and Earth. Another sixth of a second and the pulse had returned, much amplified, flooding Earthside North from Pole to Equator.

Later, he mentions a second satellite that occupies a similar spot above the far side of the Moon. This is the L2 point, but he calls the satellite Lagrange I:

When a ship’s down on the Moon, it can be spotted very quickly from one of the satellites—either Lagrange II above Earthside, or Lagrange I over Farside.

The choice of names can’t be coincidental. Clarke must have intended them to be named after their respective points. He even says Lagrange II is

[b]alanced here between Earth and Moon, in a cosmic tight-rope act made possible by one of the obscurer consequences of the Law of Gravitation…

So did Clarke get the nomenclature backwards or has the nomenclature been changed in the past 55 years? I’ve never seen any reference to Lagrange points using Clarke’s numbering, but I still can’t believe he got it wrong. This isn’t just any science fiction author we’re talking about here. It’s the guy who invented the geosynchronous orbit.

With that mystery unresolved, I decided to check up on the 50,000 km distance mentioned in the first quote.

Rewriting one of the equations from my earlier post with a slight change in notation, the equation for the distance from the moon to the L1 point is

$$\frac{1 - \mu}{(1 - \rho_m)^2} - \frac{\mu}{\rho_m^2} = 1 - \mu - \rho_m$$

where $\rho_m$ is the distance from the center of the Moon to the L1 point expressed as a fraction of the distance from the Earth to the Moon, and $\mu$ is the mass of the Moon expressed as a fraction of the sum of the masses of the Earth and Moon. From NASA’s Moon Fact Sheet, we calculate $\mu = 0.0122$ and can then solve the equation to get $\rho_m = 0.1509$.

The mean center-to-center distance from Earth to Moon is 384,400 km, so the distance from the center of the Moon to L1 is 58,000 km. Subtracting off the radius of the Moon, that puts L1 56,300 km above the Moon’s surface.
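
As a check on that arithmetic, here’s a quick numerical solution of the L1 equation in Python. The masses are my values from the fact sheet, and the plain bisection loop is just a stand-in for a proper root finder:

```python
# Quick numerical check of the Earth-Moon L1 distance.
# Masses (kg) are from NASA's Moon Fact Sheet; the bisection
# bracket is hand-picked so the function changes sign.
m_earth = 5.972e24
m_moon = 7.342e22
mu = m_moon/(m_earth + m_moon)    # mass fraction, about 0.0122

def f(rho):
    # L1 condition with rho measured from the Moon as a
    # fraction of the Earth-Moon distance
    return (1 - mu)/(1 - rho)**2 - mu/rho**2 - (1 - mu - rho)

a, b = 0.05, 0.5                  # f(a) < 0, f(b) > 0
for _ in range(60):
    mid = (a + b)/2.0
    if f(a)*f(mid) <= 0:
        b = mid
    else:
        a = mid
rho_m = (a + b)/2.0

R = 384400.0                      # mean Earth-Moon distance, km
d_center = rho_m*R                # Moon's center to L1
d_surface = d_center - 1737.4     # subtract the Moon's mean radius
```

This lands on $\rho_m \approx 0.151$, about 58,000 km from the Moon’s center and 56,300 km above its surface, matching the numbers above.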

I guess I’m a little surprised Clarke would round this number to 50,000, but “fifty thousand” does have a nice sound to it.

Function strip

This is the computer I’m typing on.1

This was the first good MacBook Air, replacing that weird thing with the flip-out door. I bought it in the fall of 2010, shortly after it was released, and it was a revelation. So thin, so light, so quiet, so quick to boot up. It instantly became my favorite Mac, displacing even the SE/30.2

But it’s thisclose to six years old, and even the best computers don’t last forever. It had a near-death experience last summer, but has somehow managed to heal itself. In the last month I’ve noticed an intermittent problem with the power cord. It’s still running Yosemite, because I figured it wasn’t worth the bother to upgrade the OS on a computer I wouldn’t be using much longer.

Frankly, the Air should have been replaced at least a year ago, but I was waiting for the release of a Retina Air. The MacBook killed that idea and didn’t replace it with a computer I wanted. The relatively low power of the MacBook doesn’t bother me, but the 12″ screen does. While this is my portable computer for business travel, it’s also my only computer for home. I don’t want a screen smaller than 13″.

That leaves the MacBook Pro, last updated when Steve Jobs was still CEO. That might be an exaggeration, but I’m very leery of buying computers that seem to be on the verge of a big upgrade. I’m still traumatized by my purchase of an LC II back in the early 90s—it was almost immediately replaced by the LC III, which was 50% faster and cost 40% less. Curse you, 90s Apple!

Anyway, Mark Gurman says there’s a new MacBook Pro coming, which means everyone else who’s been saying there’s a new MacBook Pro coming must have been right. And he says the row of function keys across the top will be replaced by a touch-sensitive OLED strip, which means everyone else who’s been saying the row of function keys across the top will be replaced by a touch-sensitive OLED strip must have been right.

Lots of people are bemoaning this loss of real keys with tactile feedback. When I mentioned on Twitter that no one touch-types up there, I got some guff from poor, self-deluded souls who swear up and down that they do.

“I use the media playback and volume controls all the time without looking,” was the most common claim. I’m sure you do. Even I can do that, but it isn’t touch-typing, and it doesn’t need to be done without looking. Hitting the media and brightness keys is a context shift. However brief it is, or however brief you imagine it to be, it isn’t done in the flow of creation the way touch-typing is.

I’m more sympathetic to vi users3 who are worried about the loss of the Escape key. Even though switching from insert mode to command mode is, by definition, a context shift, it’s a very minor one, done in service to the overall act of writing or programming.

But I, for one, welcome our touch-sensitive OLED overlords. The flexible, fungible function strip could be a boon to user interfaces, providing both a gentle assistance to new and fearful users and a great customization tool for power users.

But there is this nagging thought in the back of my head. Can Apple pull this off? Does it still have the UX chops to figure out the right way to implement what could be a very powerful addition to the Mac? So much of what’s good about Apple products, both hardware and software, seems to be based on wise, user-centric decisions made years ago. Can it still make those decisions?

This worry is not unwarranted. Some recent versions of Apple’s Mac software—iTunes and the iWork suite, for example—have been regressions. They’ve managed to be both more confusing to average users and less powerful for advanced users.

The Apple Watch is another example. Despite the brave face put on by members of the Apple press, the lack of outright praise meant the watch wasn’t nearly what it could have been, what it should have been. This has been made even more clear by the reaction to watchOS 3. It wouldn’t seem like such a great leap forward if the earlier versions hadn’t been so backward.

On the other hand, the story of watchOS 3 is an indication that Apple still has the goods, that it can still make good decisions, even if it means reversing much-hyped earlier decisions. That’s the Apple I hope to see in the new MacBook Pro.

1. And the screenshot is from the indispensable MacTracker app, free in both the Mac and iOS App Stores. ↩︎

2. If you ask me to list my favorite Macs, I’ll still put the SE/30 at the top, just to keep its memory alive. But it’s a lie. ↩︎

3. I know it’s common now to refer to these people as Vim users, not vi users, but the Escape key has been a critical part of vi use since the Bill Joy days. It’s a testament to Vim’s dominance that relatively few people even know that other versions of vi exist. ↩︎

Lagrange points redux

About a year ago, inspired by this GIF from the DSCOVR satellite, I wrote a post about the first Lagrange point of the Sun-Earth system, which is where DSCOVR is located. I wanted someday to return to the topic and get the locations of all the Lagrange points. This is the day.

Background

Lagrange

Lagrange points are points in the orbital plane of a planet1 that orbit the sun with the same period as the planet. You might think you could put a satellite at any point along a planet’s orbital path and Kepler’s laws would ensure that it has the same period as the planet. But Kepler’s laws apply only to a two-body system. This is a three-body problem, in which the satellite’s motion is influenced by the gravitational pulls of both the sun and the planet. While there is no solution to the general three-body problem, the Lagrange points—so named because they were worked out by the 18th century natural philosopher Joseph-Louis Lagrange—represent special cases where the solution is possible.

In last year’s post, I showed how to find the first Lagrange point, L1, by balancing the two gravitational forces acting on it to create a centripetal acceleration that keeps a satellite at L1 in place. This approach works, but it’s a very non-Lagrangian way of solving the problem.

Lagrange was all about energy. He took Newtonian mechanics and recast it to eliminate the need to balance forces and inertias. In Lagrangian mechanics, you get solutions by taking derivatives of the kinetic and potential energy functions. It’s an elegant technique, well suited to the explosion of analysis on the Continent back at that time.

Two-body results

Let’s start by assuming we’ve already solved the two-body problem of a sun and its planet in a circular orbit. We’ll take their masses to be $m_s$ and $m_p$, respectively, and the distance between their centers to be $R$. We’ll then introduce a nondimensional quantity, $\mu$, to represent the planet’s fraction of the total mass, $M$. Thus,

$$M = m_s + m_p, \qquad \mu = \frac{m_p}{M}, \qquad m_p = \mu M, \qquad m_s = (1 - \mu) M$$

The center of mass of the two-body system—which astronomers call the barycenter because it sounds more scientific—is on a line between the two bodies a distance $\mu R$ from the sun and $(1 - \mu)R$ from the planet. Both the sun and the planet revolve about the barycenter with an angular speed $\omega$, where

$$\omega^2 = \frac{G M}{R^3}$$

The period is related to the angular speed through the relation

$$T = \frac{2\pi}{\omega}$$

which leads to the well-known expression for Kepler’s Third Law, which states that the square of the period is proportional to the cube of the distance:

$$T^2 = \frac{4\pi^2}{G M}\, R^3$$
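
As a quick sanity check on these relations, here’s the Earth’s orbital period computed from $\omega^2 = GM/R^3$. The constants are rounded textbook values of my own choosing, not anything from the fact sheets:

```python
from math import pi, sqrt

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30      # kg, the sun plus a negligible planet
R = 1.496e11      # m, one astronomical unit

omega = sqrt(G*M/R**3)       # angular speed, rad/s
T_days = 2*pi/omega/86400    # period in days; should be close to 365
```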

With these preliminaries out of the way, let’s move on to finding the Lagrange points. I want to start by pointing you to an excellent online resource, Richard Fitzpatrick’s Newtonian Dynamics, which is available in both PDF and HTML format. Fitzpatrick, who teaches at the University of Texas at Austin (hook ’em), does a very nice job of explaining both the two-body problem and the restricted three-body problem. There’s one trick in particular that I stole directly from him to simplify a potential energy expression.

Energy

Reference frame

Here is our system of sun (yellow), planet (blue), and satellite (black) laid out on an $x\text{-}y$ coordinate system. We put the origin at the barycenter and the $x\text{-axis}$ on the line between the sun and the planet. Furthermore, we’re going to have our coordinate system rotate at a constant angular speed of $\omega$, precisely matching the movement of the sun and the planet about the barycenter. This will be our reference frame for the analysis. The advantage of using a rotating reference frame is that the sun and planet are, by definition, motionless in this frame, and our search for Lagrange points is reduced to finding points where the satellite will be motionless, too.

You may object to using a rotating reference frame.

A rotating reference frame isn’t inertial. That’s true.

You can’t do an analysis in a non-inertial reference frame. That’s not true.

Non-inertial reference frames are perfectly fine as long as you account for the acceleration terms correctly. This is the deeper truth behind d’Alembert’s Principle. Most of us learn d’Alembert’s Principle as simply moving the acceleration term in Newton’s Second Law over to the other side of the equation and treating it as an additional force.

But d’Alembert works in an energy context, too.

Potential energy

In our rotating frame of reference, the potential energy of the satellite has three terms.

$$U = -\frac{G m_s m}{r_s} - \frac{G m_p m}{r_p} - \frac{1}{2}\, m\, \omega^2 r^2$$

The first two terms are the gravitational potential energy due to the sun and the planet, respectively, and the third term is the centrifugal potential energy due to the rotating frame. The third term wouldn’t appear in a potential energy expression written for an inertial frame.2

In the expression for $U$,

• $m$ is the mass of the satellite,
• $r$ is its distance from the center of rotation (the barycenter),
• $r_s$ is its distance from the sun, and
• $r_p$ is its distance from the planet.

See the figure above for details.

Non-dimensional form

The first thing to do is substitute our previous expressions for $m_s$, $m_p$, and $\omega^2$ into the expression for $U$.

$$U = -\frac{(1 - \mu)\, G M m}{r_s} - \frac{\mu\, G M m}{r_p} - \frac{G M m\, r^2}{2 R^3}$$

We’re starting to see some common terms we can factor out. We can do even better if we rewrite the $r$ terms using nondimensional variables,

$$\rho = \frac{r}{R}, \qquad \rho_s = \frac{r_s}{R}, \qquad \rho_p = \frac{r_p}{R}$$

which allows us to write $U$ this way:

$$U = \frac{G M m}{R}\left[-\frac{1 - \mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{\rho^2}{2}\right]$$

All of the terms with units have been factored out of the brackets into a constant scaling term. Finding the stationary points of $U$ now reduces to finding the stationary points of the nondimensional expression within the brackets, which we’ll call $u$:

$$u = -\frac{1 - \mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{\rho^2}{2}$$

In effect, we’ve switched from the $x\text{-}y$ coordinate system of the figure above to the $\xi\text{-}\eta$ system shown below.

Using $\rho$, $\rho_s$, and $\rho_p$ makes for a compact expression, but it isn’t convenient for plotting, which is what I want to do to help find the stationary points3 of $u$. We need to express $u$ in terms of $\xi$ and $\eta$, which we get from the Pythagorean formulas

$$\rho^2 = \xi^2 + \eta^2, \qquad \rho_s^2 = (\xi + \mu)^2 + \eta^2, \qquad \rho_p^2 = (\xi - 1 + \mu)^2 + \eta^2$$

So we end up with this,

$$u = -\frac{1 - \mu}{\sqrt{(\xi + \mu)^2 + \eta^2}} - \frac{\mu}{\sqrt{(\xi - 1 + \mu)^2 + \eta^2}} - \frac{\xi^2 + \eta^2}{2}$$

which is a nasty mess, but we have computers to keep track of everything, so there’s no need to worry about losing terms.
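
If you want to let the computer keep track of it, here’s that bookkeeping as a short Python function. I’m writing $u$ with the same sign as the potential energy, with the sun at $\xi = -\mu$ and the planet at $\xi = 1 - \mu$:

```python
from math import sqrt

def u(xi, eta, mu):
    # Nondimensional potential in the rotating frame, with the
    # sun at xi = -mu and the planet at xi = 1 - mu.
    rho_s = sqrt((xi + mu)**2 + eta**2)      # distance to the sun
    rho_p = sqrt((xi - 1 + mu)**2 + eta**2)  # distance to the planet
    rho_sq = xi**2 + eta**2                  # squared distance to the barycenter
    return -(1 - mu)/rho_s - mu/rho_p - rho_sq/2.0
```

Evaluating it at, say, the L4 coordinates for $\mu = 0.1$ gives about $-1.455$.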

Plotting the potential energy

Here’s the contour plot of $u$ as a function of $\xi$ (abscissa) and $\eta$ (ordinate). I’m plotting it for $\mu = 0.1$, because that’s a value that allows us to see all the Lagrange points. (For the Earth-Sun system, $\mu = 0.000003$, which would put L1 and L2 so close to the Earth itself we wouldn’t be able to distinguish them at this scale.)

The dirty yellow dot is the sun, the blue dot is the planet, the × is the barycenter, and the various crosses are the stationary points of $u$. You can click on the plot to see a bigger version.

The contour lines represent equal spacing in the value of $u$. They range from dark blue for the lowest points to dark red for the highest. We see that L1, L2, and L3 are colinear with the sun and planet and are at saddle points. L4 and L5 are at local maxima. The coordinates of the points, which I calculated using techniques we’ll get into later, are as follows:

Point    $\xi$     $\eta$
L1       0.609     0.000
L2       1.260     0.000
L3      -1.042     0.000
L4       0.400     0.866
L5       0.400    -0.866

The $\xi$ coordinates of L1, L2, and L3 pretty much have to be calculated numerically. There’s no nice closed-form solution to get those values. But there is a simple, non-computational way to get the positions of L4 and L5, and the clue is in the values you see in the table.
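
Here’s a sketch of that numerical calculation for $\mu = 0.1$. The function `g` is the slope of $u$ along the $\xi$ axis (with $\eta = 0$), and the bisection brackets are hand-picked to isolate each root:

```python
mu = 0.1

def g(xi):
    # Slope of u along the xi axis (eta = 0)
    rho_s = abs(xi + mu)        # distance to the sun
    rho_p = abs(xi - 1 + mu)    # distance to the planet
    return ((1 - mu)*(1.0/rho_s**3 - 1)*(xi + mu)
            + mu*(1.0/rho_p**3 - 1)*(xi - 1 + mu))

def bisect(f, a, b):
    # Plain bisection; assumes f changes sign between a and b
    for _ in range(80):
        mid = (a + b)/2.0
        if f(a)*f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b)/2.0

xi_L1 = bisect(g, -0.09, 0.89)   # between the sun and the planet
xi_L2 = bisect(g, 0.91, 3.0)     # beyond the planet
xi_L3 = bisect(g, -3.0, -0.11)   # beyond the sun
```

The three roots round to the 0.609, 1.260, and -1.042 in the table.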

That 0.866 you see for the $\eta$ value is the sine of 60°, and the 0.400 is exactly 0.1 less than the cosine of 60°. Remember that the sun is 0.1 to the left of the origin and the planet is 0.9 to the right of the origin. Putting this all together, we see that L4 is at the intersection of a 60° line up and out from the sun and a 60° line up and back from the planet. Similarly for L5, except that the lines are 60° down instead of up. Which means that L4 and L5 form equilateral triangles with the sun and the planet.
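
A few lines of Python confirm the construction: starting 60° up and out from the sun lands you a unit distance from both bodies.

```python
from math import sqrt, cos, sin, radians

mu = 0.1
xi = -mu + cos(radians(60))   # 60° up and out from the sun at (-mu, 0)
eta = sin(radians(60))        # the 0.866 from the table
rho_s = sqrt((xi + mu)**2 + eta**2)      # distance to the sun
rho_p = sqrt((xi - 1 + mu)**2 + eta**2)  # distance to the planet
```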

This is not a coincidence that just happens to work out when $\mu = 0.1$. It’s true regardless of the mass distribution between the sun and the planet. In the next section, we’ll prove that, but the math gets messy. If you want to just take it on faith, skip this next section.

Equilateral triangle diversion

For the fearless few, we’re going to use that trick I found in Richard Fitzpatrick’s book. There’s nothing especially hard in this; it’s just a lot of tedious algebra, and I’m going to show all the steps. Textbooks usually don’t for reasons of space, but there’s a lot of space on a web page.

Recall that

$$\rho_s^2 = (\xi + \mu)^2 + \eta^2, \qquad \rho_p^2 = (\xi - 1 + \mu)^2 + \eta^2$$

If we multiply the first of these by $1 - \mu$ and the second by $\mu$ and add them together, we get (after some cancellation)

$$(1 - \mu)\rho_s^2 + \mu\rho_p^2 = \xi^2 + \eta^2 + \mu(1 - \mu) = \rho^2 + \mu(1 - \mu)$$

Therefore

$$\rho^2 = (1 - \mu)\rho_s^2 + \mu\rho_p^2 - \mu(1 - \mu)$$

We can substitute this into the compact expression for $u$ to get

$$u = -\frac{1 - \mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{(1 - \mu)\rho_s^2 + \mu\rho_p^2 - \mu(1 - \mu)}{2}$$

or, after rearranging

$$u = -(1 - \mu)\left(\frac{1}{\rho_s} + \frac{\rho_s^2}{2}\right) - \mu\left(\frac{1}{\rho_p} + \frac{\rho_p^2}{2}\right) + \frac{\mu(1 - \mu)}{2}$$

Equations for the stationary points

What good is this? Well, although it may not seem like it, it actually makes it a little easier to take the partial derivatives of $u$ with respect to $\xi$ and $\eta$ in order to find the stationary points. We’ll use the chain rule to do it:

$$\frac{\partial u}{\partial \xi} = \frac{\partial u}{\partial \rho_s}\frac{\partial \rho_s}{\partial \xi} + \frac{\partial u}{\partial \rho_p}\frac{\partial \rho_p}{\partial \xi}, \qquad \frac{\partial u}{\partial \eta} = \frac{\partial u}{\partial \rho_s}\frac{\partial \rho_s}{\partial \eta} + \frac{\partial u}{\partial \rho_p}\frac{\partial \rho_p}{\partial \eta}$$

The partial derivatives with respect to $\rho_s$ and $\rho_p$ are simple:

$$\frac{\partial u}{\partial \rho_s} = (1 - \mu)\left(\frac{1}{\rho_s^2} - \rho_s\right), \qquad \frac{\partial u}{\partial \rho_p} = \mu\left(\frac{1}{\rho_p^2} - \rho_p\right)$$

The easy way to get the partials of $\rho_s$ and $\rho_p$ with respect to $\xi$ and $\eta$ is to take the total differentials of the expressions for $\rho_s^2$ and $\rho_p^2$:

$$2\rho_s\, d\rho_s = 2(\xi + \mu)\, d\xi + 2\eta\, d\eta, \qquad 2\rho_p\, d\rho_p = 2(\xi - 1 + \mu)\, d\xi + 2\eta\, d\eta$$

Dividing the top equation by $2 \rho_s$ and the bottom by $2 \rho_p$ gives us

$$d\rho_s = \frac{\xi + \mu}{\rho_s}\, d\xi + \frac{\eta}{\rho_s}\, d\eta, \qquad d\rho_p = \frac{\xi - 1 + \mu}{\rho_p}\, d\xi + \frac{\eta}{\rho_p}\, d\eta$$

which means

$$\frac{\partial \rho_s}{\partial \xi} = \frac{\xi + \mu}{\rho_s}, \qquad \frac{\partial \rho_s}{\partial \eta} = \frac{\eta}{\rho_s}, \qquad \frac{\partial \rho_p}{\partial \xi} = \frac{\xi - 1 + \mu}{\rho_p}, \qquad \frac{\partial \rho_p}{\partial \eta} = \frac{\eta}{\rho_p}$$

Now we have all the pieces needed to build the equations for the stationary points:

$$\frac{\partial u}{\partial \xi} = (1 - \mu)\left(\frac{1}{\rho_s^2} - \rho_s\right)\frac{\xi + \mu}{\rho_s} + \mu\left(\frac{1}{\rho_p^2} - \rho_p\right)\frac{\xi - 1 + \mu}{\rho_p} = 0$$

$$\frac{\partial u}{\partial \eta} = (1 - \mu)\left(\frac{1}{\rho_s^2} - \rho_s\right)\frac{\eta}{\rho_s} + \mu\left(\frac{1}{\rho_p^2} - \rho_p\right)\frac{\eta}{\rho_p} = 0$$

Simplifying a bit we get

$$(1 - \mu)\left(\frac{1}{\rho_s^3} - 1\right)(\xi + \mu) + \mu\left(\frac{1}{\rho_p^3} - 1\right)(\xi - 1 + \mu) = 0$$

$$(1 - \mu)\left(\frac{1}{\rho_s^3} - 1\right)\eta + \mu\left(\frac{1}{\rho_p^3} - 1\right)\eta = 0$$

Solving the equations

The second equation is the key. First, we can factor out the $\eta$:

$$\eta\left[(1 - \mu)\left(\frac{1}{\rho_s^3} - 1\right) + \mu\left(\frac{1}{\rho_p^3} - 1\right)\right] = 0$$

This means that either

$$\eta = 0$$

which is what leads us to L1, L2, and L3 (we’ll get to that later), or

$$(1 - \mu)\left(\frac{1}{\rho_s^3} - 1\right) + \mu\left(\frac{1}{\rho_p^3} - 1\right) = 0$$

Let’s explore this condition. We’ll move the terms that don’t involve $\rho_s$ or $\rho_p$ to the other side of the equation.

$$\frac{1 - \mu}{\rho_s^3} + \frac{\mu}{\rho_p^3} = 1$$

An obvious solution to this equation is $\rho_s = \rho_p = 1$, which will work for all values of $\mu$. What we don’t know, though, is whether that’s the only solution for $\eta \ne 0$. To see if it is, we have to combine this result with the first stationary equation.

Let’s start by solving for $\rho_s^3$. We can multiply through by $\rho_s^3 \rho_p^3$ to get rid of the fractions:

$$(1 - \mu)\rho_p^3 + \mu\rho_s^3 = \rho_s^3 \rho_p^3$$

And then solve for $\rho_s^3$:

$$\rho_s^3 = \frac{(1 - \mu)\rho_p^3}{\rho_p^3 - \mu}$$

We plug this into the first stationary equation to get

$$(1 - \mu)\left[\frac{\rho_p^3 - \mu}{(1 - \mu)\rho_p^3} - 1\right](\xi + \mu) + \mu\left(\frac{1}{\rho_p^3} - 1\right)(\xi - 1 + \mu) = 0$$

which simplifies first to

$$\frac{\mu(\rho_p^3 - 1)}{\rho_p^3}(\xi + \mu) + \frac{\mu(1 - \rho_p^3)}{\rho_p^3}(\xi - 1 + \mu) = 0$$

and then to

$$(\rho_p^3 - 1)(\xi + \mu) - (\rho_p^3 - 1)(\xi - 1 + \mu) = 0$$

Once again, we can factor out a common term and simplify:

$$(\rho_p^3 - 1)\left[(\xi + \mu) - (\xi - 1 + \mu)\right] = 0$$

With this, we can say either

$$\rho_p^3 - 1 = 0$$

or

$$(\xi + \mu) - (\xi - 1 + \mu) = 0$$

But the second of these is impossible because the $\xi$ and $\mu$ terms cancel, leaving $1 = 0$. So the only solution for $\eta \ne 0$ is

$$\rho_p^3 = 1$$

and therefore $\rho_p = 1$, which means $\rho_s = 1$, confirming our guess about the equilateral triangle solution for L4 and L5.

Determining the colinear positions

OK, now that we’ve confirmed the equilateral triangle positions for L4 and L5, let’s explore the colinear positions, L1, L2, and L3.

The two equations that must be satisfied for every Lagrange point are

$$(1 - \mu)\left(\frac{1}{\rho_s^3} - 1\right)(\xi + \mu) + \mu\left(\frac{1}{\rho_p^3} - 1\right)(\xi - 1 + \mu) = 0$$

$$(1 - \mu)\left(\frac{1}{\rho_s^3} - 1\right)\eta + \mu\left(\frac{1}{\rho_p^3} - 1\right)\eta = 0$$

(If you’re wondering where these equations came from, it’s because you skipped over the previous section. The path to enlightenment is not easy, grasshopper.)

An obvious condition that solves the second equation is $\eta = 0$. That’s the value of $\eta$ for L1, L2, and L3. All we need to do then is pull three solutions for $\xi$ out of the first equation. We’ll refer to this layout of the points, with the sun at $\xi = -\mu$ and the planet at $\xi = 1 - \mu$, to specialize the equation for each of the points:

L1

For very small values of $\mu$, $\rho_p$ will also be small, so it’s convenient to put the whole equation in terms of $\rho_p$. With $\eta = 0$ and $\rho_s = 1 - \rho_p$ at this position, the first stationary equation becomes

$$\frac{1 - \mu}{(1 - \rho_p)^2} - (1 - \mu)(1 - \rho_p) - \frac{\mu}{\rho_p^2} + \mu\rho_p = 0$$

Expanding and collecting terms gives

$$\rho_p^5 - (3 - \mu)\rho_p^4 + (3 - 2\mu)\rho_p^3 - \mu\rho_p^2 + 2\mu\rho_p - \mu = 0$$

or

$$\rho_p^3\left[\rho_p^2 - (3 - \mu)\rho_p + 3 - 2\mu\right] = \mu(1 - \rho_p)^2$$

Most numerical equation solving routines will have no trouble with this equation, but as I said earlier, there is no simple closed-form solution for it. We can, however, take advantage of the fact that $\rho_p$ is relatively small when $\mu$ is very small to get a closed form approximate solution:

$$\rho_p^3 \approx \frac{\mu}{3(1 - \mu)}$$

or

$$\rho_p \approx \left[\frac{\mu}{3(1 - \mu)}\right]^{1/3}$$

Notice that $\mu$ and $\rho_p$ are at different levels of “small.” The cube/cube root relationship means that $\mu$ is much smaller than $\rho_p$.

For $\mu = 0.1$, a numerical solution of the exact expression gives $\rho_p = 0.291$, which corresponds to $\xi = 0.609$ as given in the table above. The approximate solution is $\rho_p = 0.333$, which is pretty far off, mainly because $\rho_p$ just isn’t small enough.
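
Here’s one way to get those two numbers, bisecting the exact equation and comparing with the cube root approximation. The bracket endpoints are hand-picked to straddle the root:

```python
mu = 0.1

def f(rho):
    # L1 stationary condition in terms of rho_p
    return ((1 - mu)/(1 - rho)**2 - (1 - mu)*(1 - rho)
            - mu/rho**2 + mu*rho)

a, b = 0.01, 0.8        # f(a) < 0, f(b) > 0
for _ in range(60):
    mid = (a + b)/2.0
    if f(a)*f(mid) <= 0:
        b = mid
    else:
        a = mid
rho_exact = (a + b)/2.0
rho_approx = (mu/(3*(1 - mu)))**(1.0/3.0)
```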

L2

The determination of L2 follows the same pattern. For this position, with the point beyond the planet,

$$\rho_s = 1 + \rho_p, \qquad \xi = 1 - \mu + \rho_p$$

so

$$\frac{1 - \mu}{(1 + \rho_p)^2} - (1 - \mu)(1 + \rho_p) + \frac{\mu}{\rho_p^2} - \mu\rho_p = 0$$

After expanding, collecting, and rearranging as we did above, we get

$$\rho_p^5 + (3 - \mu)\rho_p^4 + (3 - 2\mu)\rho_p^3 - \mu\rho_p^2 - 2\mu\rho_p - \mu = 0$$

As with L1, this can be solved numerically without much trouble, but there is a decent closed-form approximation for small $\mu$ and $\rho_p$. It’s the same as the approximation for L1:

$$\rho_p^3 \approx \frac{\mu}{3(1 - \mu)}$$

or

$$\rho_p \approx \left[\frac{\mu}{3(1 - \mu)}\right]^{1/3}$$

This puts the L2 position about as far outside the planet’s orbit as L1 is inside the planet’s orbit.

For $\mu = 0.1$, a numerical solution of the exact expression gives $\rho_p = 0.360$, which corresponds to $\xi = 1.260$ as given in the table above. The approximate solution is $\rho_p = 0.333$, which again is pretty far off.

L3

Finally, we have L3, where we have to be careful with the signs. Because they’re distances, $\rho_s$ and $\rho_p$ are positive, but the coordinate $\xi$ is negative.

In this case, we’ll write the first stationary equation in terms of $\rho_s$. With $\eta = 0$ and $\rho_p = 1 + \rho_s$ at this position, it becomes

$$\frac{1 - \mu}{\rho_s^2} - (1 - \mu)\rho_s + \frac{\mu}{(1 + \rho_s)^2} - \mu(1 + \rho_s) = 0$$

In this case, $\rho_s$ is going to be close to 1, so we can introduce a small value, $\delta$, such that $\rho_s = 1 - \delta$. That turns the stationary equation into

$$\frac{1 - \mu}{(1 - \delta)^2} - (1 - \mu)(1 - \delta) + \frac{\mu}{(2 - \delta)^2} - \mu(2 - \delta) = 0$$

which looks like a real mess, but as before we expand, collect, and rearrange to get

$$3(1 - \mu)\delta - \frac{7}{4}\mu + \frac{5}{4}\mu\delta + \cdots = 0$$

where the dots stand for terms of higher order in $\delta$.

Ignoring the higher-order terms in $\delta$, we get the approximation

$$\delta \approx \frac{7}{12}\, \frac{\mu}{1 - \mu}$$

In this case, $\mu$ and $\delta$ are at about the same order of “small.”

Using this approximation, the $\xi$ coordinate is

$$\xi = -(1 + \mu - \delta) \approx -\left(1 + \mu - \frac{7}{12}\, \frac{\mu}{1 - \mu}\right)$$

For $\mu = 0.1$, a numerical solution of the exact expression gives $\delta = 0.0584$ which corresponds to $\xi = -1.042$ as given in the table above. The approximate solution is $\delta = 0.0648$. The percent error in this approximation for $\delta$ is comparable to that of the earlier approximations for $\rho_p$.
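
The same sort of bisection check works here, solving the exact equation for $\delta$:

```python
mu = 0.1

def f(delta):
    # L3 stationary condition with rho_s = 1 - delta, rho_p = 2 - delta
    rho_s = 1 - delta
    rho_p = 2 - delta
    return ((1 - mu)/rho_s**2 - (1 - mu)*rho_s
            + mu/rho_p**2 - mu*rho_p)

a, b = 0.0, 0.5         # f(a) < 0, f(b) > 0
for _ in range(60):
    mid = (a + b)/2.0
    if f(a)*f(mid) <= 0:
        b = mid
    else:
        a = mid
delta_exact = (a + b)/2.0
delta_approx = (7.0/12.0)*mu/(1 - mu)
```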

The Sun-Earth Lagrange points

As mentioned earlier, $\mu = 0.000003$ for the Sun-Earth system. With such a small value of $\mu$, the approximations developed above should be pretty accurate. Let’s see.

• For L1, a numerical solution of the exact equation gives $\rho_p = 0.00997$, while the approximate solution is $\rho_p = 0.01000$.
• For L2, a numerical solution of the exact equation gives $\rho_p = 0.01003$, while the approximate solution is $\rho_p = 0.01000$.
• For L3, both the numerical solution of the exact equation and the approximate solution are $\delta = 1.75\times10^{-6}$. You have to go four more decimal places to see a difference.
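
The L1 and L2 values are easy to reproduce. Here’s a sketch that bisects the exact equations for $\mu = 0.000003$; the brackets are hand-picked:

```python
mu = 0.000003

def bisect(f, a, b):
    # Plain bisection; assumes f changes sign between a and b
    for _ in range(80):
        mid = (a + b)/2.0
        if f(a)*f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b)/2.0

def f1(rho):
    # L1: inside the planet's orbit
    return (1 - mu)/(1 - rho)**2 - (1 - mu)*(1 - rho) - mu/rho**2 + mu*rho

def f2(rho):
    # L2: outside the planet's orbit
    return (1 - mu)/(1 + rho)**2 - (1 - mu)*(1 + rho) + mu/rho**2 - mu*rho

rho1 = bisect(f1, 1e-4, 0.5)     # about 0.00997
rho2 = bisect(f2, 1e-4, 0.5)     # about 0.01003
rho_approx = (mu/(3*(1 - mu)))**(1.0/3.0)   # about 0.01 for both
```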

As expected, the approximations are quite good. Probably not good enough for NASA, but good enough for a blog post.

The real value of the approximate formulas is not for computation, it’s for insight. By seeing how $\rho_p$ and $\delta$ scale with $\mu$, we get a sense of how the positions of the colinear Lagrange points change with changing mass distributions.

Stability

It’s often said that L4 and L5 are the stable Lagrange points. This seems wrong, because those points are at local maxima of the potential energy, not local minima, and stability is associated with minima. My understanding is that the stability comes from Coriolis forces, which tend to keep objects in orbit around L4 and L5. We didn’t include a Coriolis term in our potential energy expression because our analysis was designed to find places where the satellites would be stationary in our rotating frame of reference. Coriolis forces arise only when a body is moving relative to the rotating frame.

I may look into redoing the analysis with a Coriolis term. Check back in another year.

Update 08/18/2016 8:23 AM
The Trojan asteroids are clustered around the L4 and L5 positions of the Sun-Jupiter system. They got a mention from Jason Snell and Stephen Hackett on this week’s episode of their Liftoff podcast, which I just listened to this morning. The plan of the proposed Lucy space mission is to visit five of the Trojan asteroids.

A tip from Jeff Youngstrom on Twitter led me to this remarkable page by Petr Scheirich, which has a wealth of graphics related to comets and asteroids, including this animation of the Trojan (green) and Hilda (red) groups as viewed in a reference frame that rotates with Jupiter.

The animation covers, I believe, one Jovian year. The in-and-out movement of Jupiter represents its elliptical orbit from perihelion to aphelion, and you can track the orbits of at least some of the green dots around the L4 and L5 positions.

1. Although we tend to be most interested in the Sun-Earth Lagrange points, there are similar points for every sun-planet combination and for every planet-moon combination, too. ↩︎

2. And it’s not a coincidence that it looks like a kinetic energy term with the sign changed. D’Alembert strikes again! ↩︎

3. Stationary points are where the function is at a local maximum, minimum, or saddle point. They’re the points where the slopes of the function’s surface are zero. ↩︎

Combinatorics and itertools

Yesterday I ran into an interesting little two-stage combinatorics problem. I solved the math portion of the problem (how many possibilities are there?) with a couple of elementary calculations, and I solved the enumeration portion of the problem (list all the possibilities) by learning how to use a couple of functions in the Python itertools library.

This is the problem:1

Six books are lying on a table in front of you. How many ways can you arrange the books, considering both the left-to-right order of the books and whether they’re set with the front cover facing up or down?

Here are a couple of example arrangements. Other manipulations of the books, like spinning them around or setting them on their spine, are not considered.

The key to solving this problem is recognizing that it’s essentially an overlay of two simple problems. The first is the ordering of the books, which is a permutation problem. Six items can be ordered in $6! = 720$ different ways. The second is the up-or-down sequence of the six books, which can be done in $2^6 = 64$ different ways.

Because each of the 720 orderings of the books can have 64 up/down orientation sequences, the total number of arrangements of this type is $720\times64=46,080$.
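
Python will happily confirm the arithmetic:

```python
from math import factorial

orderings = factorial(6)     # 720 left-to-right orders
orientations = 2**6          # 64 up/down patterns
arrangements = orderings*orientations
```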

That’s the easy part. The harder part is listing all the arrangements. My first thought was to write a recursive function or two, but I stopped myself, figuring there must be a Python library that’s already solved this problem. And so there is; I just had to learn how to use it.

The first thing I learned was that the itertools library is, as its name implies, all about iterators. These are Python objects that represent a stream of data, but which don’t provide the entire stream at once. Printing an iterator object gets you a description like this,

<itertools.permutations object at 0x103b9e650>


not the full sequence. You have to step (or iterate) through an iterator to get its contents.
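
For example, you can pull items off an iterator one at a time with next, or drain everything that’s left with list, after which the iterator is empty:

```python
from itertools import permutations

p = permutations('ABC')
first = next(p)     # pull one item off the iterator
rest = list(p)      # drain what's left; p is now empty
```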

We’ll be using the itertools permutations and product functions.2

python:
from itertools import permutations, product


To make things fit here in the post, I’m going to reduce the size of the problem to three books, which we’ll call A, B, and C. We can define them as the characters in a string. Similarly, we’ll call the orientations of the books u and d.

python:
books = 'ABC'
orient = 'ud'


Let’s start by solving the book-ordering problem by itself. The simplest form of the permutations function does exactly what we want:

python:
for b in permutations(books):
    print b


The output is

('A', 'B', 'C')
('A', 'C', 'B')
('B', 'A', 'C')
('B', 'C', 'A')
('C', 'A', 'B')
('C', 'B', 'A')


OK, maybe this isn’t exactly what we want. We gave it a string and it gave us back a sequence of tuples instead of a sequence of strings. Still, it is the $3! = 6$ permutations we wanted, and we can work with it.

The up/down orientation problem can be solved with product, which returns an iterator of the Cartesian product of the inputs. In a nutshell, what this means is that if you give it a pair of lists, product will return an iterator that walks through every possible pairwise combination of the inputs’ elements. Similarly, if you give it three lists, it returns all the possible triplets.

For our three-book problem, we want something like this:

python:
for p in product(orient, orient, orient):
    print p


The output is the sequence of $2^3 = 8$ possibilities:

('u', 'u', 'u')
('u', 'u', 'd')
('u', 'd', 'u')
('u', 'd', 'd')
('d', 'u', 'u')
('d', 'u', 'd')
('d', 'd', 'u')
('d', 'd', 'd')


But product doesn’t have to be used in such a naive way. When the same input is repeated, you can tell it so.

python:
for p in product(orient, repeat=3):
    print p


Even better, we can make it clear that the number of repeats is equal to the number of books.

python:
for p in product(orient, repeat=len(books)):
    print p


The answer is the same sequence as above.

Now that we know how to solve the two individual problems, we can take the Cartesian product of them to get all the arrangements of the combined problem.

python:
for c in product(permutations(books), product(orient, repeat=len(books))):
    print c


This gives all $6\times8 = 48$ arrangements for this smaller problem.

(('A', 'B', 'C'), ('u', 'u', 'u'))
(('A', 'B', 'C'), ('u', 'u', 'd'))
(('A', 'B', 'C'), ('u', 'd', 'u'))
(('A', 'B', 'C'), ('u', 'd', 'd'))
(('A', 'B', 'C'), ('d', 'u', 'u'))
(('A', 'B', 'C'), ('d', 'u', 'd'))
(('A', 'B', 'C'), ('d', 'd', 'u'))
(('A', 'B', 'C'), ('d', 'd', 'd'))
(('A', 'C', 'B'), ('u', 'u', 'u'))
(('A', 'C', 'B'), ('u', 'u', 'd'))
.
.
.
(('C', 'A', 'B'), ('d', 'd', 'u'))
(('C', 'A', 'B'), ('d', 'd', 'd'))
(('C', 'B', 'A'), ('u', 'u', 'u'))
(('C', 'B', 'A'), ('u', 'u', 'd'))
(('C', 'B', 'A'), ('u', 'd', 'u'))
(('C', 'B', 'A'), ('u', 'd', 'd'))
(('C', 'B', 'A'), ('d', 'u', 'u'))
(('C', 'B', 'A'), ('d', 'u', 'd'))
(('C', 'B', 'A'), ('d', 'd', 'u'))
(('C', 'B', 'A'), ('d', 'd', 'd'))


The presentation is a mess, but we can clean that up easily enough by changing the print statement.

python:
for c in product(permutations(books), product(orient, repeat=len(books))):
    print ' '.join(x + y for x,y in zip(*c))


This gives us a more readable output.

Au Bu Cu
Au Bu Cd
Au Bd Cu
Au Bd Cd
Au Cu Bu
Au Cu Bd
.
.
.
Cu Bu Au
Cu Bd Au
Cd Bu Au
Cd Bd Au
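
The zip(*c) in that print statement is doing the real work. Each c is a pair of tuples (an ordering and an orientation sequence), and the star unpacks the pair so zip can match each book with its orientation. A quick demonstration:

```python
# One item from the combined iterator: an ordering and
# an orientation sequence
c = (('A', 'B', 'C'), ('u', 'd', 'u'))
pairs = list(zip(*c))   # each book paired with its orientation
label = ' '.join(x + y for x, y in zip(*c))
```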


The permutations and product functions can take any sequence type as their arguments, so we could define books and orient this way,

python:
books = ("Cat's Cradle", "Slaughterhouse Five", "Mother Night", "Mr. Rosewater", "Breakfast of Champions", "Monkey House")
orient = ("up", "down")


and save the sequence of arrangements for the full problem as a CSV file:

python:
f = open('arrangements.csv', 'w')
for c in product(permutations(books), product(orient, repeat=len(books))):
    f.write(','.join(x + ' ' + y for x,y in zip(*c)) + '\n')
f.close()


This gives us the full solution: a 6-column, 46,080-row table with entries that look like this:

Cat's Cradle down


The file can be imported into a spreadsheet or a Pandas data frame for later processing.

Sometimes just knowing how many arrangements are possible is all you need, but when you have to provide the arrangements themselves, itertools has you covered.

1. OK, this isn’t actually the problem. I’m loath to pose it the way it was posed to me because that might betray a confidence. But the mathematical features of the problem as posed here are an exact match to those of the original problem. ↩︎

2. I used Python interactively to explore the itertools functions and solve the problem, so instead of presenting a full script, I’m going to give the solution as a series of steps with narrative in between. I used Jupyter in console mode, but there are other ways to use Python interactively. ↩︎