As you may recall, I had beaten my family by getting this Wordle in three guesses:
At the time, I went with TOTAL as my third guess because I couldn’t think of another word that fit the restrictions of the first two guesses. But a quick review with grep,

egrep [^irepch][^irepcha]t[^irepch]{2} wordle.txt | egrep a

revealed 22 acceptable guesses,
altos dotal lotsa outta
antas gotta lotta sutta
antsy gutta lytta total
astun jotas motza untax
autos kutas notal
botas lotas oktas
of which TOTAL and ANTSY seemed like the only likely Wordle solutions.
Two things are annoying about using grep the way I did: the black letters have to be repeated in every negated character class, and a second pass through grep is needed to pick out the words that contain the yellow A. There might be a clever way to avoid the second pass through grep, but if there is, it would make the regular expression even longer than it is. Gruber’s script avoids both the repetition in the calls to grep and the building of the regex:
fives -without IREPCH -with a23 ..t..
As you can see, the argument to fives
is a simple regex with the correct (green) letters given in their positions and periods elsewhere. This is supplemented by the -without
option, which is a list of the (black) letters that can’t appear in the solution, and the -with
option, which gives the (yellow) letters which must appear and a list of positions where they cannot appear.
It wasn’t clear from this example how fives
handles situations in which we have more than one yellow letter. It could be that you should use one -with
for each yellow letter; it could be that the argument to -with
combines the information for all the yellow letters. Either way, -with
is a very smart way of handling the yellow letters. It combines the positive—this is a letter that must be in the solution—with the negative—these are the positions it cannot be.
Because Gruber didn’t send me fives
itself, I was compelled to write my own utility, with the less clever name of wordle
. Here’s how I’d use it to get the possible third guesses for the game above:
wordle -g ..t.. -b irepch -y a23
As you can see, I can’t be bothered with long option names, so I used the colors as their mnemonic. Also, I put the green letter string as an option instead of the argument to wordle
itself. That seemed more symmetric.
As you might expect, while Gruber wrote fives
in Perl, I wrote wordle
in Python. Here it is:
python:
1: #!/usr/bin/env python
2:
3: import re
4: import itertools
5: from docopt import docopt
6: import os
7: import sys
8:
9: # Usage message
10: usage = '''Usage:
11: wordle [-g GREEN -y YELLOW -b BLACK]
12:
13: Print possible Wordle guesses based on the current state of green,
14: yellow, and black letters.
15:
16: Options:
17: -g GREEN correct letter pattern [default: .....]
18: -y YELLOW string of present letters and positions [default: ]
19: -b BLACK string of absent letters [default: ]
20:
21: GREEN is a 5-character string like '..e.t', where the correct
22: letters are at the solved positions and the periods are at the
23: unsolved positions.
24:
25: BLACK is a list of letters that aren't in the word.
26:
27: YELLOW is a string of yellow letters followed by their positions.
28: For example, if your previous guesses have yellow Rs in the second
29: and fourth positions and a yellow E in the third position, the
30: argument would be 'r24e3'.
31: '''
32:
33: # Get all the words as a string with each word on its own line
34: wordle = open(os.environ['HOME'] + '/blog-stuff/wordle/wordle.txt').read()
35:
36: # Process the options
37: args = docopt(usage)
38:
39: # Green letters
40: green = args['-g']
41: greenPositions = [ i for i, v in enumerate(green) if v != '.' ]
42: greenPositions = set(greenPositions)
43:
44: # Black letters
45: black = args['-b']
46:
47: # Yellow letters. In the dictionary, the keys are the letters, and
48: # the values are sets of yellow positions.
49: yellow = {}
50: for m in re.finditer(r'([a-z])(\d+)', args['-y']):
51: yellow[m.group(1)] = set( int(i) - 1 for i in m.group(2) )
52:
53: # Dictionary of impossible positions for the yellow letters. Like
54: # the yellow dictionary above, but with the green letter positions
55: # added.
56: impossible = {}
57: for k in yellow.keys():
58: impossible[k] = yellow[k] | greenPositions
59:
60: # Base regex patterns for each character position. Start with the
61: # green positions, and then turn the periods into negated character
62: # classes from the black and yellow letters.
63: basePattern = list(green)
64: unsolved = sorted(list(set(range(5)) - greenPositions))
65: for i in unsolved:
66: basePattern[i] = '[^' + black + ']'
67: for k in yellow.keys():
68: if i in yellow[k]:
69: basePattern[i] = basePattern[i].replace(']', k + ']')
70: if basePattern[i] == '[^]':
71: basePattern[i] = '.'
72:
73: # Starting point for permuting the yellow letters
74: start = list(yellow.keys()) + ['~']*(5 - len(yellow.keys()))
75:
76: # Set of regexes for searching the wordle string. Each regex is
77: # based on the basePattern but with some of the negated character
78: # classes replaced by possible permutations of the yellow letters.
79: regexes = set()
80:
81: def possible(s):
82: for k in yellow.keys():
83: if s.index(k) in impossible[k]:
84: return False
85: return True
86:
87: for s in filter(possible, set(itertools.permutations(start))):
88: newPattern = basePattern[:]
89: for k in yellow.keys():
90: newPattern[s.index(k)] = k
91: regexes |= {'^' + ''.join(newPattern) + '$'}
92:
93: # Accumulate Wordle words that match
94: matches = set()
95: for r in regexes:
96: for m in re.finditer(r, wordle, re.M):
97: matches |= {m.group(0)}
98:
99: # Print out the matches in alphabetical order
100: print('\n'.join(sorted(list(matches))))
The overall idea behind wordle is to create a set of regexes, each of which does the following:

1. Puts the green letters in their known positions.
2. Excludes the black letters from the unsolved positions.
3. Excludes each yellow letter from the positions where it has already appeared.
4. Puts the yellow letters into one permutation of their remaining possible positions.

The first three of these are basically what I was doing by hand in my grep solution. The fourth is a sort of combinatoric way of dealing with the presence of the yellow letters in positions where they haven’t yet been. For the example we’ve been looking at, the regular expressions wordle searches on are
^a[^irepcha]t[^irepch][^irepch]$
^[^irepch][^irepcha]t[^irepch]a$
^[^irepch][^irepcha]ta[^irepch]$
As you can see, the A is put sequentially in all of its possible positions: first, fourth, and fifth. Because this example has only one letter, it’s very simple, but wordle
can handle multiple yellow letters.
Suppose my first guess was LATER. What could the next guess be? Here’s the wordle
command I’d run:
wordle -g ..t.. -b er -y l1a2
Note that the -y
argument combines the yellow letters and their positions into a single string. The results are
altho cital ictal total
altos dital notal vital
aptly dotal octal
and the regexes that were searched were
^alt[^er][^er]$
^[^erl][^era]tal$
^[^erl]lt[^er]a$
^a[^era]tl[^er]$
^a[^era]t[^er]l$
^[^erl][^era]tla$
^[^erl]lta[^er]$
As you can see, the L and A are both prevented from being in the first and second positions, respectively, and are otherwise placed in all of their possible permutations.
Let’s go through the code and see what it does.
First, the options are handled by docopt
, a lovely library that parses the options from the usage message instead of creating a usage message from an options specification. It’s my favorite way of writing scripts that need options.
The newline-separated list of possible Wordle guesses is stored in $HOME/blog-stuff/wordle/wordle.txt, which is the file Line 34 reads to get the text we’re going to search.
Lines 40–42 parse the -g
option and create both a green
regex and a greenPositions
set of known character positions. Line 45 parses the -b
option and creates the black
string of letters that cannot be in the solution. We’ll use that to create a negated character class.
Lines 49–51 parse the -y
option and build a yellow
dictionary from it. yellow
’s keys are the yellow letters, and its corresponding values are sets of the positions of the yellow letters. Note that because Python lists are zero-based and most human beings are one-based, the positions in yellow
are one less than what’s given in the -y
argument. Note also that I’m using sets instead of lists to avoid repeating positions in the next step.
Lines 56–58 create an impossible
dictionary that extends the yellow
dictionary to include the green positions.
Lines 63–71 build a list of regexes for each character based on the green letters, the black letters and where the yellow letters can’t be. It’s basically the regex I would use in a grep
-based solution, except each character position is an item in a list. For the game at the beginning of the post, basePattern
would be
['[^irepch]', '[^irepcha]', 't', '[^irepch]', '[^irepch]']
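That value can be checked by running the pattern-building loop on its own, with the inputs hard-coded for the game above:

```python
green = '..t..'
black = 'irepch'
yellow = {'a': {1, 2}}   # 0-based positions parsed from 'a23'

greenPositions = {i for i, v in enumerate(green) if v != '.'}
basePattern = list(green)
for i in sorted(set(range(5)) - greenPositions):
    basePattern[i] = '[^' + black + ']'
    for k in yellow:
        if i in yellow[k]:
            # add the yellow letter to the negated class for this position
            basePattern[i] = basePattern[i].replace(']', k + ']')
    if basePattern[i] == '[^]':
        basePattern[i] = '.'

print(basePattern)
```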
Lines 74–91 use basePattern
and what we know about the yellow letters to build the set of regular expressions we’re going to search for. The start
variable in Line 74 is a five-character list that begins with the yellow characters and is filled out with tildes (any non-letter character would do for this fill). In Line 87, we generate all the permutations of this list using the permutations
function from the itertools
library. This will always yield 120 permutations (that’s 5 factorial), but many of the permutations will be, for our purposes, identical and can be eliminated by converting the iterator into a set.
Let’s use our LATER example to see how this works. Recall that if our first guess in the game above had been LATER, we would have executed wordle
this way:
wordle -g ..t.. -b er -y l1a2
That would give us a start
list of ['l', 'a', '~', '~', '~']
. A call to
itertools.permutations(start)
would yield these 120 lists (where I’ve collapsed the lists into strings to make it easier to read):
al~~~ a~~~l l~~a~ ~a~~l ~l~~a ~~la~
al~~~ a~~~l l~~a~ ~a~~l ~l~~a ~~la~
al~~~ a~~~l l~~~a ~a~~l ~l~~a ~~l~a
al~~~ a~~~l l~~~a ~a~~l ~l~~a ~~l~a
al~~~ la~~~ l~~~a ~a~~l ~~al~ ~~l~a
al~~~ la~~~ l~~~a ~a~~l ~~al~ ~~l~a
a~l~~ la~~~ l~~~a ~la~~ ~~al~ ~~l~a
a~l~~ la~~~ l~~~a ~la~~ ~~al~ ~~l~a
a~l~~ la~~~ ~al~~ ~la~~ ~~al~ ~~~al
a~l~~ la~~~ ~al~~ ~la~~ ~~al~ ~~~al
a~l~~ l~a~~ ~al~~ ~la~~ ~~a~l ~~~al
a~l~~ l~a~~ ~al~~ ~la~~ ~~a~l ~~~al
a~~l~ l~a~~ ~al~~ ~l~a~ ~~a~l ~~~al
a~~l~ l~a~~ ~al~~ ~l~a~ ~~a~l ~~~al
a~~l~ l~a~~ ~a~l~ ~l~a~ ~~a~l ~~~la
a~~l~ l~a~~ ~a~l~ ~l~a~ ~~a~l ~~~la
a~~l~ l~~a~ ~a~l~ ~l~a~ ~~la~ ~~~la
a~~l~ l~~a~ ~a~l~ ~l~a~ ~~la~ ~~~la
a~~~l l~~a~ ~a~l~ ~l~~a ~~la~ ~~~la
a~~~l l~~a~ ~a~l~ ~l~~a ~~la~ ~~~la
When creating permutations, each tilde is considered a separate item, which is why so many of these lists look the same. There are 6 identical lists (3 factorial) for each unique position of L and A. We don’t need the duplicates and can get rid of them with
set(itertools.permutations(start))
to reduce the number of lists down to just 20:
al~~~ la~~~ ~al~~ ~l~a~ ~~la~
a~l~~ l~a~~ ~a~l~ ~l~~a ~~l~a
a~~l~ l~~a~ ~a~~l ~~al~ ~~~al
a~~~l l~~~a ~la~~ ~~a~l ~~~la
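The reduction from 120 to 20 is easy to verify with the same start list:

```python
import itertools

start = ['l', 'a', '~', '~', '~']
perms = list(itertools.permutations(start))
unique = set(perms)
print(len(perms))   # 120 permutations in all
print(len(unique))  # 20 after the duplicates collapse
```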
Of course, even these 20 lists are more than we need because many of them are impossible. Anything with an L in the first position or an A in the second should be filtered out. That’s what the possible
function in Lines 81–85 and the filter
function in Line 87 are for. possible
returns False
for all the permutations that have one or more letters in an impossible
position and True
for all the others. Ultimately, the
for s in filter(possible, set(itertools.permutations(start))):
loop that starts in Line 87 goes through seven of these permutations and combines them with basePattern
to return the regexes
set that we showed above:
^alt[^er][^er]$
^[^erl][^era]tal$
^[^erl]lt[^er]a$
^a[^era]tl[^er]$
^a[^era]t[^er]l$
^[^erl][^era]tla$
^[^erl]lta[^er]$
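That count of seven can be checked directly; here the impossible dictionary is hard-coded from -g ..t.. -y l1a2 rather than parsed:

```python
import itertools

yellow = {'l': {0}, 'a': {1}}        # 0-based yellow positions from 'l1a2'
greenPositions = {2}                 # the T in ..t..
impossible = {k: v | greenPositions for k, v in yellow.items()}

start = ['l', 'a', '~', '~', '~']

def possible(s):
    # reject any permutation that puts a yellow letter where it can't go
    return all(s.index(k) not in impossible[k] for k in yellow)

survivors = [s for s in set(itertools.permutations(start)) if possible(s)]
print(len(survivors))  # 7
```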
Finally, Lines 94–97 search the wordle
string for each of these patterns in turn and collect all of them into the matches
set. Line 100 then sorts the collection of matches alphabetically and prints them out, one per line.
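The anchored patterns work against the newline-separated word string because of the re.M flag, which makes ^ and $ match at line boundaries. A toy stand-in word list shows the idea:

```python
import re

# a tiny stand-in for the contents of wordle.txt
wordle = 'altos\nantsy\nhello\ntotal\n'
regexes = ['^a[^irepcha]t[^irepch][^irepch]$',
           '^[^irepch][^irepcha]ta[^irepch]$']

matches = set()
for r in regexes:
    for m in re.finditer(r, wordle, re.M):
        matches |= {m.group(0)}

print(sorted(matches))  # ['altos', 'antsy', 'total']
```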
I could have written a script that hewed more closely to the logic of my grep
pipeline. Such a script would have created basePattern
, searched on that, and then searched that intermediate result for words that have all of the yellow letters. It would have been easier to write and might have run faster, too. But I wanted to do something new. While I’ve used the itertools
library before, I’ve never used permutations
. And the filter
function was new to me, too, despite it being built-in. I had fun writing wordle
and learned some things I may find useful in the future.
While I’m happy with my script, I hope John Gruber publishes his. He used to publish more scripty articles, and I miss them. He’s told me he might write about it, or at least publish it as a Gist, and I look forward to that. Don’t pester him about it, though—he had a tough weekend.
[If the formatting of equations looks odd in your feed reader, visit the original article]
You will not or attempt to (and will not allow others to)
…
c) use or access the Licensed Materials to create or attempt to create a substitute or similar service or product to the Twitter Applications;
…
Of note is that this section of the agreement is entitled “Reverse Engineering and other Restrictions,” and Twitter reverse engineered it just yesterday to add this new restriction.
Michael Tsai had a good question today:
I don’t understand what this means for API users such as NetNewsWire that are not trying to create their own client. What counts as a substitute?
Because you can only read tweets through NetNewsWire that certainly shouldn’t count as a “substitute or similar service or product.” But who knows what’s going through the mind of Twitter nowadays?
On this week’s episode of Upgrade, Jason and Myke (mostly Myke) talk about how it would be a perfectly reasonable business decision for Twitter to shut down its API entirely. That goes too far, and I suspect they didn’t really mean it, because a lot of the API is valuable to Twitter’s most popular tweeters. Large, well-known organizations (like media companies) that tweet a constant stream of links to and excerpts from their own websites need the API to automate that process and keep it running smoothly. Twitter can’t kill the API without pissing off their most valuable content producers.
Which, I assume, is why the new section of the Developer Agreement is written the way it is. Rewriting the API to make Tweetbot impossible while still giving CNN the freedom to spit out automatically generated tweets every few minutes would be really hard. Adding a provision that allows Twitter itself to decide what is and is not a “substitute or similar service or product” is really easy.
There are plenty of Twitter accounts I still want to read that haven’t set up parallel accounts on Mastodon. For example, @BridgesCanal is just photos of bridges over English canals. Most of them are quaint old masonry arch bridges that look like something you’d see in a period drama on PBS. They’re soothing to look at, and they speak to the structural engineer in me.
I read these accounts via RSS. Once upon a time, RSS feeds were part of Twitter, but those days have long since passed. Now you have to use either the Twitter features built into your RSS reader or use a specialized service like RSS.app.^{1}
As a NetNewsWire user, I set up its Twitter extension, which uses my Twitter account’s credentials, to access these accounts’ timelines and present them to me as if they were any other RSS subscription.
If you’re not a NetNewsWire user, your feed reader probably has the same feature but in a slightly different guise. I know Feedbin has it. I would have checked out Feedly, too, but it now presents itself as this weird AI-driven thing that frankly scares me a little.
Oh, and if you’re one of those people who chucked RSS years ago on the “Twitter is my RSS” theory, you can always come back.
Update 1/16/2023 4:44 PM
I saw today, via this post by John Voorhees, that Federico Viticci wrote about using RSS to follow Twitter last week on Club MacStories. I missed Federico’s article because I dropped my subscription to Club MacStories shortly after I got my M1 MacBook Air and moved away from the iPad and Shortcuts.
I wouldn’t be surprised if Federico also mentioned this on Connected or MacStories Unwind. If so, I missed it, as I haven’t listened to an Apple-focused podcast in several weeks. Too many other things going on in my life recently. I’ve been skipping a lot of my RSS feed, too, although luckily I did see John’s post.
Anyway, I imagine that most of the people who read me also read/listen to Federico and may have wondered why I didn’t link to him. It’s because I didn’t know until now.
Nota bene: I don’t have an account with RSS.app and know nothing about it other than that it can make RSS feeds from Twitter timelines. ↩
We all accuse each other of cheating from time to time, especially when the day’s winner gets the word in two guesses. But this was, I think, the first time repeated letters were used as evidence. As evidence goes, it’s not bad. I normally would try to use five unique letters at this stage of the game. The only reason I didn’t was that I spent a good ten minutes after the second guess trying to come up with another word that had five unique letters and used what I knew from the board. Because I couldn’t, I went with TOTAL.
Later, when I opened my MacBook Air for the day (I always play Wordle on my phone), I decided to see if there were words I could have played other than TOTAL. As in this year-old post, I used grep
to explore, entering this at the command line:
egrep [^irepch][^irepcha]t[^irepch]{2} wordle.txt | egrep a
The pattern used by the first call to egrep^{2} searches wordle.txt—the file with all 12,972 legal guesses—for words with a T in the third position, none of the black letters (I, R, E, P, C, or H) in the remaining positions, and no A in the second position, where a yellow A had already appeared. The results from this first call will include words that have no As. The second call filters those out, resulting in
altos dotal lotsa outta
antas gotta lotta sutta
antsy gutta lytta total
astun jotas motza untax
autos kutas notal
botas lotas oktas
I can’t imagine any of these being a solution other than TOTAL and ANTSY. I confess that ANTSY never occurred to me. ALTOS and AUTOS are both reasonable words, but they violate the unwritten rule of no plurals. If any of the others were an actual solution, I—and most Wordle users, I suspect—would be really pissed. I mean, LOTSA?
By the way, the command shown above outputs the 22 words on separate lines. To get them in the more compact four-column form, I piped the output through the reshaping command I recently learned, rs
:
egrep [^irepch][^irepcha]t[^irepch]{2} wordle.txt | egrep a | rs -t -g4 0 4
The 0 4
tells rs
to output four columns and as many rows as necessary (the zero is kind of a placeholder that rs
ignores). The -t
tells rs
to print the output column-by-column instead of row-by-row. And the -g4
tells rs
to use a four-space gutter to separate the columns.
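rs is a BSD utility that isn’t everywhere; a rough Python stand-in for rs -t -g4 0 4 (column-major fill, four columns, four-space gutter) might look like this, using the 22 words from above:

```python
words = ['altos', 'antas', 'antsy', 'astun', 'autos', 'botas',
         'dotal', 'gotta', 'gutta', 'jotas', 'kutas', 'lotas',
         'lotsa', 'lotta', 'lytta', 'motza', 'notal', 'oktas',
         'outta', 'sutta', 'total', 'untax']

cols = 4
rows = -(-len(words) // cols)   # ceiling division: 22 words -> 6 rows
lines = []
for r in range(rows):
    # column-major: column c holds words[c*rows : (c+1)*rows]
    row = [words[c * rows + r] for c in range(cols) if c * rows + r < len(words)]
    lines.append('    '.join(row))

print('\n'.join(lines))
```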
Longtime readers may remember that I copied the pre-New York Times version of Wordle and installed it on a server I control. If you play it at the Times site, what you see here will have nothing to do with game you played yesterday. ↩
egrep
is the same as grep
, except it uses “extended” regular expressions. Extended regular expressions are reasonably close to Perl-compatible regular expressions, which is what I’m used to. ↩
In Cook’s post, he starts with a triangle whose vertices are defined by the three complex variables \(z_1\), \(z_2\), and \(z_3\), where the real part of each variable is the x-coordinate and the imaginary part is the y-coordinate. The area of the triangle can be computed through this non-obvious formula:
\[A = \frac{i}{4} \, \begin{vmatrix} z_1 & \overline{z}_1 & 1 \\ z_2 & \overline{z}_2 & 1 \\ z_3 & \overline{z}_3 & 1 \end{vmatrix}\]where the straight lines on either side of the matrix represent the determinant, and a bar over a variable represents the complex conjugate.
Although this is a compact and easy-to-remember formula, I knew it wasn’t the one I learned back in my finite element analysis class, primarily because we didn’t use complex variables in that class. But we did use area coordinates, and that’s where I learned how to get the area of a triangle from a (different) determinant.
Area coordinates—also called natural coordinates—for triangles are a set of three variables, \(\xi_1\), \(\xi_2\), and \(\xi_3\), that conveniently define a point within a triangle. Given the point P shown below, its coordinates are
\[\xi_1 = \frac{A_1}{A}, \quad \xi_2 = \frac{A_2}{A}, \quad \xi_3 = \frac{A_3}{A}\]where \(A\) is the area of the entire triangle, and
\[A_1 + A_2 + A_3 = A\]The advantage of using area coordinates is shown in the drawing on the right.^{2} For each vertex, one of the coordinates is one and the other two are zero. Along each side, one of the coordinates is zero and the other two add to one. Another nicety of area coordinates is that the centroid of the triangle is at \(\left( \frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right)\).
Of course, using three coordinates to define a point in a plane involves one degree of redundancy. If you know two of the coordinates, the third can always be calculated through
\[\xi_1 + \xi_2 + \xi_3 = 1\]which is a direct consequence of the area summation formula above.
If you’re dealing with just one triangle, there’s not much value in using area coordinates. But if you’re doing a finite element analysis with triangular elements, you can easily have thousands of elements, so the consistency of area coordinates can be a big help—if you can easily transform between the \(xy\) and area coordinate systems. And you can. Here’s the relationship:
\[\begin{Bmatrix} 1 \\ x \\ y \end{Bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{bmatrix} \begin{Bmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{Bmatrix}\]Note that the top row of the transformation matrix represents the constraint equation among the area coordinates.
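As a quick sanity check of the forward transformation (the vertex coordinates below are an arbitrary example, not from the post), multiplying the matrix by the centroid’s area coordinates should return 1 along with the centroid’s x and y:

```python
# vertices of an arbitrary right triangle with legs 4 and 3
x = [0.0, 4.0, 0.0]
y = [0.0, 0.0, 3.0]
xi = [1/3, 1/3, 1/3]   # area coordinates of the centroid

one = sum(xi)                              # top row: constraint equation
px = sum(xv * t for xv, t in zip(x, xi))   # second row: x-coordinate
py = sum(yv * t for yv, t in zip(y, xi))   # third row: y-coordinate

print(one, px, py)  # 1 and the centroid, (4/3, 1), up to roundoff
```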
The inverse transformation isn’t as nice, but it’s not too bad:
\[\begin{Bmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{Bmatrix} = \frac{1}{2A} \begin{bmatrix} x_2 y_3 - x_3 y_2 & y_2 - y_3 & x_3 - x_2 \\ x_3 y_1 - x_1 y_3 & y_3 - y_1 & x_1 - x_3 \\ x_1 y_2 - x_2 y_1 & y_1 - y_2 & x_2 - x_1 \end{bmatrix} \begin{Bmatrix} 1 \\ x \\ y \end{Bmatrix}\]If you remember how matrix inversion works, you may suspect that \(2A\) is the determinant of the original transformation matrix. And you’d be right. Therefore, the area can be calculated through this simple determinant:
\[A = \frac{1}{2} \begin{vmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{vmatrix}\]It’s as easy to remember as the formula Cook wrote about and easier to compute—no need to carry around two terms for each variable.
It’s not too hard to show that this determinant and Cook’s are equivalent, especially if you have a computer algebra system. Both areas can be reduced to
\[A = \frac{1}{2} \left[ \vphantom{y^2} x_1\: (y_2-y_3) + x_2\: (y_3-y_1) + x_3\: (y_1-y_2) \right]\]which takes up less space vertically but isn’t really any simpler than the area coordinates-based determinant form.
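Both forms are easy to check numerically; the vertex coordinates here are an arbitrary example:

```python
# vertices of an arbitrary test triangle
x1, y1 = 1.0, 1.0
x2, y2 = 5.0, 2.0
x3, y3 = 2.0, 6.0

# half the determinant of [[1, 1, 1], [x1, x2, x3], [y1, y2, y3]],
# expanded along the top row
A_det = 0.5 * ((x2 * y3 - x3 * y2) - (x1 * y3 - x3 * y1) + (x1 * y2 - x2 * y1))

# the fully expanded form
A_exp = 0.5 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

print(A_det, A_exp)  # both 9.5
```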
Amazon has the fourth and current edition (2001) of Concepts and Applications of Finite Element Analysis, written by Robert Cook (no relation to John, as far as I know) and various coauthors. It’s still in print after 20 years without a revision, a testament to its value. I have the third edition (1989) and learned from the second edition (1981). ↩
I wish the triangle in the figure, which I scanned from Chapter 5 of Robert Cook’s book, didn’t look so much like an equilateral triangle. Area coordinates can be used on any triangle. ↩
Of course, if any of these were true, I wouldn’t be writing this post.
Let’s start with the drawing files I had. They had come to me via email and cloud links, and I had saved them all in a folder cleverly named “drawings.” Although their file names weren’t consistent, they did start with the file number, a string of 7–10 digits, and they were all PDFs. So I was able to put them all on the clipboard with this command:
ls *.pdf | egrep '^\d{7,10}' | pbcopy
I used egrep
to get what the grep
man page calls “extended” regular expressions and what I call “not brain dead” regular expressions.
If you’re wondering why I had files that weren’t drawings in the drawings
folder, I can only say in my defense that the non-drawing files I put in that folder—like material specifications—were close enough to drawings that I thought they belonged there.
I then pasted the list of drawing files into a fresh BBEdit document. Although there are ways to do the next few filtering steps directly from the command line, I find that things typically go faster if I can see the results (and can undo when I make a mistake). The list of drawings looked like this (except there were about 100 drawings, not 7):
0123456_B - Base Weldment.pdf
12345678_C - Base Assembly.pdf
23456789_C_1_2 - Carriage Casting.pdf
3002345678_C_Steel Bushing.pdf
3102345678 Wheel Bearings.pdf
5123456789 rev B.pdf
6987654321_A_2_2.pdf
Most of the file names had an underscore and a capital letter following the drawing number. These were the revision codes^{1} and were necessary to include in my list. So the first drawing above would go in my list as
0123456 Rev. B
As you can see, though, some files didn’t indicate any revisions and some had an explicit “rev” in their name. I wanted to handle all of them, if possible.
I brought up BBEdit’s Find dialog and built the regular expression necessary to do most of the extraction.
The Find regex,
^(\d{7,10})((_| rev )([A-Z]))?(.+)$
collects the drawing number in the first set of parentheses and uses alternation to collect the optional revision code after either an underscore or “rev”. The final set of parentheses collects the rest of the file name, including the extension. Looking back on it, that last bit didn’t need to be in parentheses, but they didn’t hurt.
The Replace string,
\1 Rev. \4
puts the drawing number and revision code in the format I want.
After clicking the Replace All button, I got this,
0123456 Rev. B
12345678 Rev. C
23456789 Rev. C
3002345678 Rev. C
3102345678 Rev.
5123456789 Rev. B
6987654321 Rev. A
which is obviously wrong for the drawings that had no revision code but worked for all the others. I’m sure there’s a clever way to make a regex that avoids the “dangling Rev,” but clever takes time, and BBEdit already has a simple way to delete strings at the end of a line. It’s the Prefix/Suffix Lines… command in the Text menu.
This removes the “ Rev.” suffix from all the lines that end with it, leaving the others untouched.
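The same two-step cleanup can be sketched in Python, using the fake file names from above (the logic mirrors the BBEdit steps but isn’t a byte-for-byte match):

```python
import re

names = [
    '0123456_B - Base Weldment.pdf',
    '3102345678 Wheel Bearings.pdf',
    '5123456789 rev B.pdf',
]
pat = re.compile(r'^(\d{7,10})((_| rev )([A-Z]))?(.+)$')

cleaned = []
for n in names:
    m = pat.match(n)
    # the Find/Replace step; group 4 is None when there's no revision code
    line = '{} Rev. {}'.format(m.group(1), m.group(4) or '')
    # the Prefix/Suffix Lines step: strip the dangling ' Rev. '
    if line.endswith(' Rev. '):
        line = line[: -len(' Rev. ')]
    cleaned.append(line)

print(cleaned)  # ['0123456 Rev. B', '3102345678', '5123456789 Rev. B']
```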
Now we move on to the unfortunate Word document with multilevel lists that contained, in addition to all the drawings the client thought I had, a long list of other documents. I opened the document, copied all the text, and pasted it into a new BBEdit document. To get rid of all the lines that didn’t include a drawing number, I used BBEdit’s Process Lines Containing… command and set it up to delete lines that did not contain a string of 7 to 10 digits.
Because Word creates nested lists by changing the left margin and indentation, there were no leading tabs in any of the remaining lines. But every line had some combination of leading numbers, letters, periods, and closing parentheses followed by a tab. Like this:
1) 0123456B
d) 12345678C - Base Assembly.pdf
3. Carriage Casting Drawing 23456789C
a. 3002345678C
t. Drawing 3102345678
aa. Dwg 5123456789B
cc. 6987654321A
While this is a mess of inconsistency, it’s not that hard to pull out the drawing number and revision code.
The Find regex is
^.+?(\d{7,10})([A-Z])?.*$
and the Replace string is
\1 Rev. \2
As before, this left several “dangling Revs,” but I was able to use the Prefix/Suffix Lines… command to get rid of them.
With two files of clean drawing lists in hand, I used the comm
command, which has become a favorite, to get the list of drawings that I had but were not in the client’s list,
comm -23 in-folder.txt client-list.txt
and vice-versa,
comm -13 in-folder.txt client-list.txt
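Since each list has unique lines, plain set differences in Python do the same job as the two comm calls (the sample entries are made up):

```python
mine = {'0123456 Rev. B', '12345678 Rev. C', '23456789 Rev. C'}
client = {'12345678 Rev. C', '99999999 Rev. A'}

only_mine = sorted(mine - client)     # like comm -23
only_client = sorted(client - mine)   # like comm -13

print(only_mine)    # ['0123456 Rev. B', '23456789 Rev. C']
print(only_client)  # ['99999999 Rev. A']
```

Unlike comm, sets don’t need the input files to be sorted first.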
BBEdit is great at this sort of interactive data cleaning. It puts the power of traditional command-line text filtering tools into a nice GUI. For example, the Process Lines Containing… command I used above is basically a sed
command, but with an easy way to undo my mistakes.
Engineering drawings get changed as a design evolves and those changes are tracked through revision codes. Sometimes the revision codes are numbers, but in my experience letters are more common. ↩
I’ve had a long and difficult relationship with outlining and outlining apps. I much prefer outlining to mind-mapping (so don’t write to me with mind-mapping recommendations), but I keep running into practical problems when using outlining apps. I categorize these problems as me problems and them problems.
The me problems have to do with converting an outline into a finished piece of writing. I’ve always had this silly belief that I should be able to convert an outline into the skeleton of a report (or a blog post or whatever, but it’s usually a report) more or less automatically and then flesh it out into a final product. This doesn’t work because, except for the items at the top level, the various items and subitems in outlines don’t correspond perfectly to sections and subsections of a report. Some outline items are subsections, but most are paragraphs or lists within a subsection. There’s no general way of knowing what an outline item is; its level doesn’t offer enough information to slot it into the proper place in the report.
My solution to the me problem has been to stop trying to do the conversion automatically. I now write my reports from scratch starting with a blank text file while referring to my outline. The outline could be in a window on my Mac or open on my iPad propped up next to the Mac. If the outline happens to have paragraphs or lists that would fit nicely in the final report, I copy and paste them. Otherwise, I just type away, following the outline’s structure.
I confess this way of working still nags at me. Surely, the back of my brain says, there must be a way to avoid the repetition. But the front of my brain argues back that years of trying have never led to that magical solution. There’s no way to avoid the actual work of writing.
The them problems are about sharing my outlines with the people I’m working with. Quite often, when a report is in its early stages, sharing an outline with a colleague and going through its structure is a good way to organize and divvy up the work. But the people I collaborate with are seldom Mac users, and even if they were, they’re unlikely to have the same outlining software I have. When I was using OmniOutliner, I’d print my outline to a PDF document and share that. But getting my outline into a form I liked for review and sharing was never as easy as I thought it should be. I like my outlines to be very spare and unadorned as I’m working on them, but to have numbered sections and specific types of spacing when printed or displayed for review. OmniOutliner, being a WYSIWYG app, forced me to change my outline before printing to PDF and then change it back afterward. I suppose I could have automated a lot of this, but it just seemed wrong to have to do so.
In some ways, Bike seems worse than OmniOutliner for both the me and them problems. It’s Mac-only, so I can’t just open a Bike outline on my iPad to look at while I write. It doesn’t even have a Print menu item, so I can’t turn my outlines into PDFs for reviewing and sharing. But what it does have is a file format that makes it easy to get around these deficiencies.
A Bike outline in its native format is just an HTML file. Here’s a screenshot of a simple example:
And here’s the source of the Example.bike file:
xml:
<?xml version="1.0" encoding="UTF-8"?>
<html>
<head>
<meta charset="utf-8"/>
</head>
<body>
<ul id="le4V0ize">
<li id="a0">
<p>First item</p>
<ul>
<li id="7R9">
<p>Subitem 1</p>
</li>
<li id="cCF">
<p>Subitem 2</p>
<ul>
<li id="J8w">
<p>Subsubitem A</p>
</li>
<li id="A-L">
<p>Subsubitem B</p>
</li>
</ul>
</li>
<li id="WxM">
<p>Subitem 3</p>
</li>
</ul>
</li>
<li id="iU">
<p>Second item</p>
<ul>
<li id="8C3">
<p>Subitem 1</p>
</li>
<li id="jIH">
<p>Subitem 2</p>
</li>
</ul>
</li>
<li id="aaD">
<p>Third item</p>
</li>
</ul>
</body>
</html>
As you can see, it’s basically just a bunch of nested unordered lists. You can open a Bike file in a browser, and it’s perfectly readable, albeit a bit on the vanilla side.
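Because a Bike file is well-formed XML, it's easy to process with standard tools. Here's a sketch, using Python's standard library, that walks the nested lists and prints an indented outline. The miniature document embedded in the string is made up for illustration; it just mimics the structure of the example above.

```python
import xml.etree.ElementTree as ET

# A miniature Bike-style file with the same structure as the example above.
BIKE = '''<?xml version="1.0" encoding="UTF-8"?>
<html>
  <head><meta charset="utf-8"/></head>
  <body>
    <ul id="le4V0ize">
      <li id="a0"><p>First item</p>
        <ul>
          <li id="7R9"><p>Subitem 1</p></li>
        </ul>
      </li>
      <li id="iU"><p>Second item</p></li>
    </ul>
  </body>
</html>'''

def outline(ul, depth=0):
    # Collect each <li>'s <p> text, indented by nesting depth.
    lines = []
    for li in ul.findall('li'):
        lines.append('  ' * depth + li.find('p').text)
        for sub in li.findall('ul'):
            lines.extend(outline(sub, depth + 1))
    return lines

root = ET.fromstring(BIKE)
print('\n'.join(outline(root.find('body/ul'))))
```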
Since vanilla is not what I want, I wrote a short script to add a CSS section to the file that gives me the style I want.
Here’s the script, called bike2html:
python:
1: #!/usr/bin/env python3
2:
3: import sys
4: from docopt import docopt
 5:
6:
7: usage = """Usage:
8: bike2html [options] BIKEFILE
9:
10: Convert a Bike outline to HTML with hierarchically numbered items.
11:
12: Options:
13: -t TTTT title [default: Outline]
14: -h show this help message
15:
16: """
17:
18: # Handle the command line option.
19: args = docopt(usage)
20: title = args['-t']
21: bike = args['BIKEFILE']
22:
23: # CSS to insert after <head>
24: css = ''' <style type="text/css">
25: body {
26: font-family: Helvetica, Arial, Sans-Serif;
27: font-weight: normal;
28: font-size: 12pt;
29: line-height: 1.8em;
30: margin: 0;
31: }
32: h1 {
33: font-weight: bold;
34: font-size: 20pt;
35: line-height: 2em;
36: text-align: center;
37: }
38: ul {
39: list-style-type: none;
40: counter-reset: item;
41: }
42: li {
43: margin-top: .9em;
44: counter-increment: item;
45: }
46: li::before {
47: display: inline;
48: content: counters(item, ".");
49: padding-right: .75em;
50: }
51: li > p {
52: display: inline;
53: }
54: @page {
55: size: Letter;
56: margin: 1in 1in .75in .5in;
57: }
58: </style>
59: '''.splitlines(keepends=True)
60:
61: # Convert the input (first argument) to HTML with CSS
62: with open(bike) as f:
63: htmlLines = f.readlines()
64:
65: # Don't include the <?xml> line
66: del htmlLines[:1]
67:
68: # Put the <style> section after <head> and the title after <body>
69: headLine = htmlLines.index(' <head>\n')
70: htmlLines[headLine+1:headLine+1] = css
71: bodyLine = htmlLines.index(' <body>\n')
72: htmlLines[bodyLine+1:bodyLine+1] = [f' <h1>{title}</h1>\n']
73:
74: print(''.join(htmlLines), end='')
As you can see, much of the script is the CSS <style>
section (Lines 24–59) that gets inserted into the file in Line 70. I use docopt
to handle command-line options; currently, the only option is -t
, which I use to set the title of the outline (with an <h1>
tag) in Line 72. The script also deletes the <?xml>
declaration at the top of the original Bike file.
The only clever part of the script is the CSS that does the item numbering. That’s in Lines 38–50. I’m not sure where I learned how to do nested counters, but it was probably this Mozilla Developer Network page. You’ll note that even though Bike defines its outline items with <ul>
tags, you can still assign numbers to them without changing them to <ol>
tags.
Using bike2html is easy:
bike2html -t 'Example' Example.bike > Example.html
I suppose I should make the script smarter by using the filename as the default title.
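That tweak would only take a line or two. Here's a sketch of the idea (the function name is hypothetical, not part of the script above): with pathlib, the filename's stem makes a sensible fallback title.

```python
from pathlib import Path

def default_title(bikefile, title=None):
    # Use the -t value if given; otherwise fall back to the filename stem.
    return title if title is not None else Path(bikefile).stem

print(default_title('Example.bike'))                # Example
print(default_title('Example.bike', 'My Outline'))  # My Outline
```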
I can send Example.html
to anyone, and they’ll be able to open it. The nice thing about the “1.2.3” style of numbering the items is that it makes it easy for everyone who has the outline to refer to particular items on the phone or in an email.
You may be wondering how I can show Example.html
on my iPad as I’m writing a report. Unlike Safari on the Mac, Safari on the iPad cannot open local files. There are three ways to get around this:
Update 11/25/2022 4:46 PM
Thanks to Andrew Kerr (on Mastodon!) for reminding me of WorldWideWeb. I bought WWW when it came out and used it for this very purpose a couple of months ago. Not sure why I stopped using it; it’s ideal for viewing this sort of static page.
Continuity allows me to select and copy text on the iPad and paste it on my Mac. It’s a nice way to work.
I should mention that I do enjoy outlining in Bike. It doesn’t have a huge number of features, but the features it has are what I need. I can see why other people might find it off-putting for a writing app to not have a Print command, but it’s just right for me.
[If the formatting of equations looks odd in your feed reader, visit the original article]
The puzzle is this:
What are the chances that there are two people in London with the same number of hairs on their head?
There’s a bit of misdirection in posing the problem in probability terms, as it might lead the listener to think he’s being asked about the chances of two randomly selected people in London having the same number of hairs on their heads. A more straightforward—and therefore less tricky—question would be
Are there two people in London with the same number of hairs on their heads?
The answer comes from being able to estimate, within an order of magnitude or so, the number of people in London and the number of hairs on people’s heads. The former is a specific number that’s continually changing, so no one knows it except as a range. And the latter is a range by definition.
I think most people know that the population of London is at least several million. The tougher estimate is of the range of hairs on people’s heads. Ben suggests up to around 100,000, based on a hair density of 100 hairs per square centimeter over a 30 cm × 30 cm area. This hair density is equivalent to a hair spacing of about 1 mm, which seems reasonable to me.
(I’m no hair follicle expert, but any parent who’s gotten a message from their child’s school about instances of head lice showing up in class knows what it’s like to go through their kid’s scalp hair by hair.)
So with the range of hair counts at least an order of magnitude less than the population of London, there have to be at least two people with the same number of hairs. The “chances” asked for is 100%. Puzzle solved.
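The pigeonhole arithmetic is easy to check. A sketch with assumed round numbers (London at roughly 9 million people, hair counts from 0 to 150,000—both figures are my assumptions, consistent with the estimates above):

```python
import math

population = 9_000_000   # rough population of London (assumption)
hair_counts = 150_001    # possible counts: 0 through 150,000 (assumption)

# Pigeonhole principle: some hair count must be shared by
# at least ceil(population / hair_counts) people.
print(math.ceil(population / hair_counts))  # 60
```

So not only must two Londoners share a hair count, at least sixty must.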
But Ben didn’t really ask for the solution itself. He wanted your gut reaction—before you did any hair-density calculations. And it’s probably not immediately obvious to most people that the number of hairs on someone’s head is well under a million.
This is where the psychology comes in. While watching the video, I thought of a mathematically very similar question, but one that would, I believe, get instant correct answers from almost everyone:
What are the chances that there are two people in London who were born on the same day?
The number of possible birth dates of living people has to be around 40,000, which is within an order of magnitude of the hairs-on-head number. So the answer to this question is also 100%, but I bet most people would answer it correctly without hesitation and without calculation.
Even people who’ve never seen a birth notice in a newspaper probably know they exist. And they know that it’s common for there to be multiple notices every day in a big city. And those who don’t know about birth notices probably know that it’s common for large hospitals to have more than one birth per day—and that big cities have many hospitals.
It’s the combination of the familiar—the hair on our heads—with the unfamiliar—how many hairs are there?—that makes Ben’s question interesting. My question, because it’s so easy, wouldn’t be interesting, even though it’s mathematically the same. Good thing Brady has Ben instead of me.
If you want to write your own autotooting script, you’d do well to start by reading this post at DEV instead of following the Getting Started section of the API docs. The DEV post’s author, Joseph, makes the very useful suggestion to get your script’s authorization credentials through the Mastodon web interface instead of interacting with the API directly. Had I done that to begin with, it would have saved me about a quarter’s worth of time and frustration.
You get the authorization credentials by navigating to the Development page of your Mastodon account profile, clicking the NEW APPLICATION button, and filling in a couple of fields. Don’t bother changing the redirects or the scopes; the default entries will be fine.
With that done, you’ll have an access token to use in your script. The part of the script that creates a new status is really short, thanks to the wonders of Kenneth Reitz’s Requests module. Here are the key parts of my autotooting script:
python:
1: #!/usr/bin/env python3
2:
3: import requests
4:
5: [ more imports ]
6:
7: # Mastodon information.
8: murl = 'https://mastodon.cloud/api/v1/statuses'
9: auth = {'Authorization': 'Bearer XXXXXXXXXXXXXXXXXXXX'}
10:
11: [ Stuff to collect the info I want in the toot. ]
12: [ The pieces of text I need are the summary ]
13: [ and URL of the post. They're stored in variables ]
14: [ named "summary" and "url." ]
15:
16: toot = {'status': f'''☃️ {summary}\n{url}'''}
17:
18: # Send the toot and return its URL.
19: r = requests.post(murl, data=toot, headers=auth)
20: print(r.json()['uri'])
The Mastodon URL you post to (Line 8) will depend on which server you’re hosted at. The bunch of Xs in Line 9 is replaced by the access token described above. The text of your toot goes into the status
field of a dictionary (Line 16) that’s passed as the data
parameter to the Requests post
command (Line 19). That’s all.
Well, there is one other thing you may want to know. As you’re writing and debugging your script, you probably don’t want your followers to keep seeing your test toots. But you do want to see them yourself so you can make sure they’re working. To do that, add a visibility
field to the toot
dictionary.
16: toot = {'status': f'''☃️ {summary}\n{url}''', 'visibility': 'direct'}
By setting the visibility
to direct
, it will act like a direct message to no one. Only you will see it. After debugging, just remove that part. Thanks to mdhughes for the tip.
I will not mention the game, because the results were disgusting. I guess schools that are academically inferior need to win football games to make their alumni feel better about themselves. ↩
Shortly after that post went up, I got an email from reader Jason Reene.
When I plug “tan^-1(cot(x))” into my TI-89 (still my go-to quick symbolic algebra tool) it returns “mod(-x, π) - π/2”
It took me a few minutes to convince myself that was an equivalent result.
It took me a lot longer to convince myself that it was equivalent, partly because I wasn’t sure how the TI-89 handles the mod
function when the dividend is negative and the divisor is positive, but mostly because I was having a hard time figuring out which quadrant mod(-x, π)
would be in for a given quadrant of x.
But it does work, even for angles well outside the (0, π/2) domain that my problem was restricted to. Here’s Mathematica’s plot of ArcTan[Cot[x]]
over (-π, π):
And here’s its plot of Mod[-x, Pi] - Pi/2
over the same domain:
Apparently, Mathematica treats the Mod
function the same way Jason’s TI-89 does. Not all programming languages do.
After making these plots, I started thinking about the tangent, inverse tangent, and modulo functions, and how their definitions could easily change the answers Jason and I got.
Let’s start with tangent and its inverse. Here’s the tangent plotted over a decent range of angles:
To see its inverse, we exchange the horizontal and vertical axes:
Because this has multiple values for every argument, we have to choose which one our inverse tangent function will return. This is called the principal value. As far as I know, every programming language and every calculator chooses the one I’ve made a darker blue—the one that returns a value between -π/2 and π/2. So if you ask your calculator for the inverse tangent of a positive number, it gives you an angle in the first quadrant; if you ask it to give you the inverse tangent of a negative number, it gives you an angle in the fourth quadrant.
And if you’ve forgotten what that “quadrant” stuff is, this should refresh your memory. The arrows point in the positive directions.
Let’s move on to modulo. This is basically what you learned as “remainder” when you were first doing division. And when you’re dealing with positive integers only, modulo is exactly what you learned back then. Extending modulo to non-integer numbers is straightforward, but the tricky bits come when either the dividend (the number you’re dividing) or the divisor (the number you’re dividing it by) are negative.
For example, if you want -7 mod 3, you could think of it as
-7 = 3 × (-2) – 1
so -7 mod 3 would be –1. Or you could think of it as
-7 = 3 × (-3) + 2
and -7 mod 3 would be 2. Both answers are valid, but if you’re designing a programming language or a calculator, you have to choose one or the other. For this problem, Perl, Python, and Ruby return 2, while AppleScript and JavaScript return -1.
There’s a lot more to modulo—we haven’t discussed the divisor being negative—but that’s enough for our purposes. You can look up the various definitions on Wikipedia.
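Conveniently, Python exhibits both conventions, so the -7 mod 3 example above can be demonstrated in one place:

```python
import math

# Python's % follows the sign of the divisor, so -7 mod 3 is 2...
print(-7 % 3)            # 2

# ...while math.fmod follows the sign of the dividend,
# matching what AppleScript and JavaScript return.
print(math.fmod(-7, 3))  # -1.0
```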
When the dividend is negative and the divisor positive, both Mathematica and the TI-89 return a positive result, which is why my Mod
graph above (made in Mathematica) and the result from Jason’s calculator agree.
To show how tricky the TI-89’s formula is, let’s see how it transforms an angle in the first quadrant. We’ll use π/6, the same angle we converted with the ArcTan[Cot[θ]]
formula in the last post. You can see the manipulations graphically in this image:
The hardest part to visualize is, of course, the modulo operation. It’s the smallest counterclockwise angle from an integer multiple of π to the purple line that was drawn in Step 2. In this case, the multiple of π we use is –π, the negative x-axis, and the CCW angle to the purple line is 5π/6.
As you can see, we do get to π/3, just not as easily as we did in the last post, where we just took the complement of the angle. On the other hand, this modulo formula works for angles outside of the first quadrant, and Jason’s TI-89 came up with it on its own. As you recall, Mathematica got stuck on ArcTan[Cot[θ]]
and wouldn’t reduce it further. We had to figure out that it was the complement of θ ourselves.
Thanks to Jason for an interesting view on this problem, and congratulations to the computer algebra people at Texas Instruments for a clever solution.
Quite often, the results seem more complicated than they should because you carry in your head certain underlying assumptions about the nature of the variables that you haven’t told the program about.
For example, I’ve seen Mathematica return this,
\[\sqrt{r^2}\]which seems weird until you realize that it doesn’t know that \(r\) is a positive—or at least nonnegative—number. But if I add this at the top of the notebook,
$Assumptions = r ≥ 0
Mathematica does what I expect and simplifies the root to just \(r\).
Once I learned about $Assumptions
, and the related Assuming
function and Assumptions
option, I got more of the results I was expecting from Mathematica. But I’m still struggling with trigonometric simplifications. Over the weekend, I was doing a definite integral, and got a result that included this term:
ArcTan[Cot[θ]]
First, I was a little surprised to see cotangent, a function I haven’t used since high school (or maybe even junior high). For whatever reason, the mathematical derivations I’ve seen since college have eschewed the upside-down trig functions—secant, cosecant, and cotangent—in favor of dividing by their “regular” counterparts. The only exception I can think of is the secant formula for eccentrically loaded columns.
Anyway, there certainly has to be a simpler way of expressing the arctangent of the cotangent of an angle. I tried everyone’s first line of defense, Simplify
, and just got ArcTan[Cot[θ]]
back. I then tried TrigExpand
, TrigReduce
, and TrigFactor
without much hope of success, as they’re geared toward rewriting powers of trig functions and trig functions applied to multiple angles. I got what I expected: still just ArcTan[Cot[θ]]
.
It didn’t take too much thought to get the simplified answer I was looking for: the complement of \(\theta\). Consider this plot of tangent (blue) and cotangent (red) in the first quadrant.^{1}
They are mirror images of each other about \(\pi/4\). So if we start at some angle, say \(\pi/6\), on the horizontal axis and go up to the red line, we’ll get its cotangent. Running horizontally over to the blue line gives us the arctangent of that value, which we read by dropping back down to the horizontal axis. Because of the mirror symmetry, we end up at the complement of our starting angle. Or, in algebraic terms,
\[\tan^{-1}(\cot \theta) = \frac{\pi}{2} - \theta\]Of course, I didn’t get this result through Mathematica; I got it through thinking.
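A quick numerical spot-check of that complement identity over the first quadrant—in Python rather than Mathematica:

```python
import math

# tan⁻¹(cot θ) should equal π/2 − θ for 0 < θ < π/2
for theta in [0.1, math.pi / 6, math.pi / 4, 1.0, 1.5]:
    lhs = math.atan(1 / math.tan(theta))
    assert math.isclose(lhs, math.pi / 2 - theta, abs_tol=1e-12)
print('identity holds')
```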
I was able to use Mathematica to confirm this result, but it was by a roundabout path. I plotted ArcTan[Cot[θ]]
over the first quadrant and got what looked like a nice straight line.
If it is a straight line, the slope should be –1 everywhere. To check this, I took the derivative:
Come on, Mathematica, you can do better than that. I applied TrigFactor
to the result and (finally) got –1. The y-intercept looks like it’s at π/2, which I confirmed through
I couldn’t just plug in 0 for θ, because the cotangent of 0 is undefined. And I had to use a directional limit, because the limit from below goes to –π/2.
Although I got Mathematica to confirm that ArcTan[Cot[θ]]
is π/2 – θ, I never got it to give me that answer itself. Maybe that’s because Mathematica thinks ArcTan[Cot[θ]]
is a superior answer; more likely, it’s because I haven’t been using it long enough to know its tricks.
The physical problem I was dealing with meant the angle was always going to be in the first quadrant—between 0 and π/2 radians or 0° and 90°. And in case you’re wondering, yes, I did include that restriction in my $Assumptions
declaration; but it didn’t help. ↩
As it happens, I was not surprised to see this. I first read about it in one of Isaac Asimov’s science books back when I was a teenager. It’s stuck with me all these years, mainly because I didn’t really understand it and couldn’t visualize it. But after reading Cook and Brannen and fiddling around in Mathematica, I finally get it, some 45 years later.
Because the orbits of both the Moon and the Earth have low eccentricities, we can model them both as circles. Using Brannen’s coordinate system, with the origin at the Sun, the x and y position of the Moon can be expressed in parametric form as
\[x(\theta) = d \cos \theta + \cos p\, \theta\] \[y(\theta) = d \sin \theta + \sin p\, \theta\]where we have taken the radius of the Moon’s orbit to be 1, and
With this coordinate system, \(x\) and \(y\) are measured not in miles or kilometers, but in multiples of the Moon’s orbital radius. That may seem weird, but it’s often convenient to put things in nondimensional form like this.
Mathematica knows the orbital radii of the Earth and Moon so we can get the value of \(d\) via
d = Entity["Planet", "Earth"]["AverageOrbitDistance"] /
Entity["PlanetaryMoon", "Moon"]["AverageOrbitDistance"]
which is 388.6. Similarly, we can get the orbital periods of the Earth and Moon using
TEarth = Entity["Planet", "Earth"]["OrbitPeriod"]
which gives 365.25636 days, and
TMoon = Entity["PlanetaryMoon", "Moon"]["OrbitPeriod"]
which gives 27.322 days. Thus,
p = TEarth / TMoon
is 13.369.
You may be questioning these numbers. After all, doesn’t the Gregorian calendar tell us a year is about 365.2425 days long?^{2} And isn’t the time between new moons more like 29½ days? Yes, but these common values are from an Earth-centric frame of reference. The values we need for our equations are the sidereal (relative to the stars) values. Luckily, Mathematica gives us the sidereal figures, rather than the tropical year (365.2422 days) or the synodic month (29.53 days).
So if we plug these values in for \(d\) and \(p\) and plot out a year’s worth of Moon positions, we get this shape:
This is roughly a circle, but you can see the waviness. It’s an SVG image, so you can zoom in as far as you like. Or you can look at this section of the path, where I’ve added a blue dashed line to represent the Earth’s orbit.
This certainly looks like a convex figure, but how can we be sure? Brannen does so by investigating the curvature of the path. If the curvature is positive throughout the path, the curve is convex.
(If you’re wondering—as I was—why that’s so, go back to the definition of curvature. Curvature is the rate of change of direction of a line tangent to the path with movement along the path. In our situation, the Moon is moving counterclockwise along the red path in the figure above. If the tangent to the path is rotating counterclockwise as the Moon moves, then the path is convex—it’s always bulging outward.)
Mathematica has a function, ArcCurvature
, that I thought would do what I wanted without fiddling with all the individual derivatives. But while ArcCurvature
does return the curvature for a curve expressed in parametric form, it gives the unsigned curvature. So instead of taking a Mathematica shortcut, we have to go the long way around.
For a path defined parametrically, the curvature is
\[\kappa = \frac{x^{\prime} y^{\prime\prime} - y^{\prime} x^{\prime\prime}}{\:\:\left({x^{\prime}}^2 + {y^{\prime}}^2 \right)^{3/2}}\]where the primes indicate differentiation with respect to \(\theta\).
Because the denominator is the power of a sum of squares, it’s always positive. So the sign of \(\kappa\) depends on the sign of the numerator. For our parametric equations, the numerator works out to be
\[d^2 + p^3 + d\,p\,(p+1)\, \cos (1 - p) \theta\]Since cosine can’t be less than –1, the smallest this expression can be is
\[d^2 + p^3 - d\,p\,(p+1)\]which factors to
\[(d - p) (d - p^2)\]Looking back at our values for \(d\) and \(p\) above, it’s clear that both of these terms are positive, so our curvature is positive for all values of \(\theta\). Hence, the Moon’s path around the Sun is convex.
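Plugging the numbers in makes the conclusion concrete. A quick check in Python, with \(d\) and \(p\) as computed above, confirms both the factoring and the positivity:

```python
d = 388.6   # ratio of orbital radii (Earth's to Moon's)
p = 13.369  # ratio of orbital periods (year to sidereal month)

# Minimum of the curvature numerator (cosine term at its minimum of -1)...
k_min = d**2 + p**3 - d * p * (p + 1)

# ...matches the factored form (d - p)(d - p²) and is comfortably positive.
assert abs(k_min - (d - p) * (d - p**2)) < 1e-6
print(k_min > 0)  # True
```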
Let’s move on to Isaac Asimov. What he did that’s stuck with me all these years is compare the gravitational pull of the Sun on the Moon to the gravitational pull of the Earth on the Moon. He started with Newton’s Law of Gravitation,
\[F = \frac{G \, m_1 \, m_2}{r^2}\]and compared the force associated with the Moon/Sun system to that of the Moon/Earth system. The ratio, after the common terms drop out, is
\[\frac{m_S/m_E}{\left( r_S/r_E \right)^2}\]where
You’d be hard-pressed to find a reference with a table that gives you the mean distance from the Moon to the Sun, but it is, of course, the same as the orbital radius of the Earth. So the ratio in the denominator is just the value of \(d\) we were using in our earlier calculations.
The mass ratio can be calculated from Mathematica’s built-in data:
mRatio = Entity["Star", "Sun"]["Mass"] /
Entity["Planet", "Earth"]["Mass"]
which works out to \(3.329 \times 10^5\). The force ratio, then, is
\[\frac{3.329 \times 10^5}{388.6^2} = 2.2\]In other words, the Sun exerts more than twice as much gravitational force on the Moon as the Earth does. Asimov argued that this means the Moon is really orbiting the Sun, with some perturbation by the Earth, and that’s why the Moon’s path looks like a slightly wobbly circle around the Sun.
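The same arithmetic in Python, using the ratios from above:

```python
m_ratio = 3.329e5  # Sun's mass / Earth's mass
d_ratio = 388.6    # Moon–Sun distance / Moon–Earth distance

# Ratio of the Sun's gravitational pull on the Moon to the Earth's
print(round(m_ratio / d_ratio**2, 1))  # 2.2
```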
But Asimov didn’t get into the kind of detail about the Moon’s orbit the way Cook and Brannen did, so I had a hard time getting this image of its path—which comes from Brannen’s paper—out of my head.
Now I know better.
I was going to leave it there, but curiosity got the better of me. I wanted to find the Asimov book with this material and see if my memory was right.^{3} I knew that the book was one of his collections of science essays, probably a collection of his monthly columns in The Magazine of Fantasy and Science Fiction. Because these collections were often themed and given titles like Asimov on Something, I searched for Asimov on Astronomy.
I got a hit right away. The Internet Archive has a scan of the book, and Chapter 9, “Just Mooning Around,” has the stuff I remembered, although he calculated the force ratio the other way around, with the force of the Earth on the Moon being 0.46 that of the Sun. Here’s his conclusion:
We might look upon the Moon, then, as neither a true satellite of the Earth, nor captured one, but as a planet, in its own right, moving about the Sun, in careful step with the Earth. To be sure, from within the Earth-Moon system, the simplest way of picturing the situation is to have the Moon revolve about the Earth; but if you were to draw a picture of the orbits of the Earth and Moon about the Sun, exactly to scale, you would see that the Moon’s orbit is everywhere concave toward the Sun. It is always “falling” toward the Sun.
Asimov on Astronomy came out in 1974. The “Just Mooning Around” column was first published in the May 1963 issue of F&SF and had been previously collected in Of Time and Space and Other Things. Since I was being a completist, I grabbed the magazine cover image from the Internet Speculative Fiction Database.
This is what the internet is for.
The DOI of Brannen’s paper is 10.1080/07468342.2001.11921888
, so those of you with accounts at academic libraries can use that to get a copy. The rest of us can use Sci-Hub (whose home page may have moved by the time you read this). ↩
That’s 365 days in a normal year, plus one for every leap year (0.25), minus one for three out of every four century years (0.0075). ↩
Since I’ve added this section, it’s a good bet that it was. ↩