I’ve been thinking a lot about the *NY Times* Connections game lately, mainly because my family keeps playing it wrong. They’d get mad at me if I told them that, so I’m telling you.

As I’ve said before, Connections is an obvious rip-off of the Connecting Wall segment of the BBC game show *Only Connect*. The main differences, apart from the Connecting Wall being much, much harder, are

- The Connecting Wall, being on TV, is timed. Connections is not.
- In the Connecting Wall, wrong guesses don’t count against you until you’ve solved two groups. Connections counts your wrong guesses right from the start.
- Three wrong guesses in the Connecting Wall and you’re out. It’s four wrong guesses in Connections.

These rule differences mean game play is different. In the early stages of the Connecting Wall, rapid guesses are a common way to eliminate red herrings. If there are five clues that could fit into a category, you can eliminate the red herring in no more than five quick guesses,^{1} and that’s a good strategy for figuring out groups. It’s after the first two groups are set that teams tend to take a more methodical approach to conserve wrong guesses.

In Connections, at least the way I think it should be played, the goal is not simply to get all four groups (I’ve never failed to do that) but to get them with no mistakes. Texting a four-row solution to your group of fellow players is the ideal. Just guessing from the start—as some people I know do—is an almost certain way to get less than a perfect score.

So unless I’m in a hurry or have given up, I don’t start submitting guesses until I’m sure of at least three, and ideally all four, of the categories. This strategy means I almost always get a better score than my hasty wife and kids, and the additional need to keep track of all the categories in my head is another weapon in the fight against cognitive decline.^{2}

By the way, if you’re outside the UK and would like to see *Only Connect*, the wheelsongenius YouTube account uploads episodes shortly after they air. The show is currently in Series 19, and wheelsongenius has playlists for several of the older series.

1. Combinatorics can work against you. If there are six clues that could fit in a category, it could take as many as 15 guesses to get through all the combinations. Keeping track of which two you’ve kept out of your previous guesses is basically impossible, even for the kind of quizzing champions that appear on *Only Connect*. ↩
2. I’m skeptical of the claim that doing puzzles and other brainwork can ward off the effects of aging, but it can’t hurt. ↩
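The 15 in that first footnote is the number of ways to choose 4 clues from 6 candidates. A quick Python check (an editorial aside, not part of the original post):

```python
from math import comb
from itertools import combinations

# Six clues could fit the category, but only four belong in it.
# Every distinct guess is a 4-element subset of the 6 candidates.
guesses = list(combinations(range(6), 4))
print(len(guesses), comb(6, 4))   # 15 15
```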


[If there are equations in this post, you will see them rendered properly in the original article.]

"}, {"title": "A shell script for blank calendars", "url": "https://leancrew.com/all-this/2023/09/a-shell-script-for-blank-calendars/", "author": {"name": "Dr. Drang"}, "summary": "In which I learn how to handle command-line options in bash. Will I ever use this again?", "date_published": "2023-09-16T21:34:13+00:00", "id": "https://leancrew.com/all-this/2023/09/a-shell-script-for-blank-calendars/", "content_html": "Generally speaking, I dislike writing shell scripts. The operators are cryptic (yes, I’ve enjoyed writing Perl), whitespace matters way more than it should (yes, I’ve enjoyed writing Python), and I just always feel I’m one step away from disaster. A lot of my scripts start out as shell scripts but get changed into Perl or Python once they get more than a few lines long. The script we’ll discuss below is an exception.

I wanted a script to help me print out blank monthly calendars. The program I’ve always used for this is `pcal`, which is pretty easy to use. For example,

```
pcal -e -S 10 2023 3
```

will create three monthly calendars starting with this coming October. The `-e` option tells `pcal` to make empty calendars,^{1} and the `-S` tells it not to include mini-calendars for the preceding and succeeding months.^{2} The result looks like this:

The thing about `pcal` is that the *p* stands for *PostScript*, a great file format but one that’s been superseded^{3} by PDF. So to get `pcal`’s output into a more modern format, I pipe its output to `ps2pdf`:

```
pcal -e -S 10 2023 3 | ps2pdf - -
```

The first hyphen tells `ps2pdf` to get the PostScript from standard input and the second hyphen tells it to write the resulting PDF to standard output. Of course, I really don’t want the PDF code spewing out into my Terminal, so I use Apple’s very handy `open` command to pipe it into Preview:

```
pcal -e -S 10 2023 3 | ps2pdf - - | open -f -a Preview
```

The `-f` option tells `open` to take what’s being piped in through standard input and the `-a Preview` tells it to open that content in the Preview application.

This isn’t the most complicated command pipeline in the world, but I have trouble remembering both the `-S` option and the order of the month, year, and count arguments. So I decided to whip up a quick little shell script to replace my faulty memory.

You should know first that my main use of this command is to print a few upcoming months for my wife. She’s always preferred paper calendars but decided last December that 2023 would be different, so I didn’t get a 2023 calendar for her for Christmas. Partway through the year, she changed her mind. There’s a lot less selection for calendars in spring, and it would kill her to waste money on a full year when she’d only use eight months, so she asked me to print her a few months at a time.

My first thought was to make a script that takes just two arguments: the starting month and the number of months—I could have the script figure out the year. That thinking led to this simple script, which I called `bcal`:

```bash
#!/usr/bin/env bash

y=$(date +%Y)
pcal -e -S $1 $y $2 | ps2pdf - - | open -f -a Preview
```

This worked fine, but you’ve probably already seen the problem. What happens at the end of the year, when it’s December and she wants calendars for the first few months of the following year?

I could use `date` to get the current month, `date +%m`, and if it’s 12, add one to `$y`. But what if I wanted to print out the upcoming January calendar in November? Instead of trying to have the program guess what I wanted, it seemed better for me to tell it what I wanted. That meant adding an option to `bcal` to let me tell it I wanted next year instead of this year.

At this point, I was tempted to give up on bash and move to Python. I know how to handle options, dates, and external calls in Python, so the switch would have been fairly easy. But I had an itch to learn how to do options in bash. Couldn’t be too hard, could it?

It wasn’t. The key command is `getopts`, and it’s easy to find examples of its use. And once I had `getopts` working, I expanded the script to add a help/usage message and one bit of error handling. Here’s the final version of `bcal`:

```bash
 1: #!/usr/bin/env bash
 2: 
 3: # Make PDF file with blank calendar starting on month of first argument
 4: # and continuing for second argument months
 5: 
 6: usage="Usage: bcal [-n] m c
 7: Arguments:
 8:   m  starting month number (defaults to this year)
 9:   c  count of months to print
10: Option:
11:   -n  use next year instead of this year"
12: 
13: # Current year
14: y=$(date +%Y)
15: 
16: # If user asks for next year (-n), add one to the year
17: while getopts "nh" opt; do
18:   case ${opt} in
19:     n) y=$((y + 1));;
20:     h) echo "$usage"; exit 0;;
21:     ?) echo "$usage"; exit 1;;
22:   esac
23: done
24: 
25: # Skip over any options to the required arguments
26: shift $(($OPTIND - 1))
27: 
28: # Exit with usage message if there aren't two arguments
29: if (($# < 2)); then
30:   echo "Needs two arguments"
31:   echo "$usage"
32:   exit 1
33: fi
34: 
35: # Make the calendar, convert to PDF, and open in Preview
36: pcal -e -S $1 $y $2 | ps2pdf - - | open -f -a Preview
```

Lines 17–23 handle the options. I decided on `-n` as the option for “next year” and you can see in the `case` statement that giving that option adds one to the current year. Any other options lead to the usage message and a halt to the script.

Line 26 uses `shift` to skip over the options to the required arguments. `$OPTIND` is the option index, which gets increased by one with each option processed by `getopts`, so this command makes `$1` point to the month and `$2` point to the count, just as if there were no options.

The error handling in Lines 29–33 is limited to just making sure there are two required arguments. If the arguments are letters or negative numbers, the script will continue through this section and fail in a clumsy way. I’m not especially worried about that because this is a script for me, and I’m unlikely to invoke it as `bcal hello world`.

Anyway, now I can get the next three months with

```
bcal 10 3
```

and the first two months of next year with

```
bcal -n 1 2
```

When Preview opens, it shows me a temporary file.

Usually I just print it out and the temporary file is deleted when I quit Preview. This is the nice thing about piping into `open`: the script doesn’t create any files that I have to clean up later. But I can save the file if I think there’s a need to.

I should mention that `pcal` can be installed through Homebrew, and `ps2pdf` is typically installed as part of the Ghostscript suite, which is also in Homebrew.

Now that I kind of know how to use `getopts`, I’ll probably extend my shell scripts before bailing out to Perl or Python. I’m not sure that’s a good thing.
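For comparison, here is a minimal sketch of what the Python version alluded to above might look like. This is not the author’s script; `build_pcal_args` and `bcal` are hypothetical names used for illustration, and the pipeline mirrors the shell version:

```python
import datetime
import subprocess

def build_pcal_args(month, count, next_year=False, today=None):
    """Return the pcal argument list: empty boxes (-e), no mini-calendars (-S)."""
    today = today or datetime.date.today()
    year = today.year + (1 if next_year else 0)
    return ["pcal", "-e", "-S", str(month), str(year), str(count)]

def bcal(month, count, next_year=False):
    """Run the same pipeline as the shell script: pcal | ps2pdf | open."""
    pcal = subprocess.Popen(build_pcal_args(month, count, next_year),
                            stdout=subprocess.PIPE)
    ps2pdf = subprocess.Popen(["ps2pdf", "-", "-"],
                              stdin=pcal.stdout, stdout=subprocess.PIPE)
    pcal.stdout.close()  # let pcal see SIGPIPE if ps2pdf exits early
    subprocess.run(["open", "-f", "-a", "Preview"], stdin=ps2pdf.stdout)
```

The option handling that took a `while getopts` loop in bash would be a one-line `argparse` flag here, which is probably why the switch keeps tempting him.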

1. By default, `pcal` looks in your home directory for a file named `.calendar` and parses it to print entries on the appropriate days. Back when I was a Linux user, this was how I kept track of my calendar. Whenever I added a new entry, I’d print out an updated calendar on the back of a sheet I pulled out of the recycling bin. It worked pretty well in those pre-smartphone days. ↩
2. English has more spelling anomalies than there are stars in the sky, but right now the one that’s bothering me the most is that *succeeding* has a doubled E and *preceding* doesn’t. ↩
3. No doubled E! ↩


"}, {"title": "Simple drawings with Mathematica", "url": "https://leancrew.com/all-this/2023/09/simple-drawings-with-mathematica/", "author": {"name": "Dr. Drang"}, "summary": "Since I don’t have a 3D drawing app.", "date_published": "2023-09-15T23:24:57+00:00", "id": "https://leancrew.com/all-this/2023/09/simple-drawings-with-mathematica/", "content_html": "A couple of days ago, I wrote a post that included this image:

\n\nBecause I don’t have a 3D drawing app, I did it in Mathematica. And because I’m new to Mathematica, I fumbled around a bit before figuring out what to do. I decided to write up what I learned so I could refer to it later, and I decided to post it here in case it’s of any value to anyone else.

The key function when creating 3D images (that aren’t plots) is `Graphics3D`. As you can see from the linked documentation, it can take an enormous number of arguments and options. The main argument is a list of the objects to be drawn, which in the drawing above consisted of the boxy representation of an iPhone and three arrows representing the x, y, and z axes (I added the axis labels “by hand” in Acorn).

One of the first things I learned was to create the objects separately instead of trying to build them within the call to `Graphics3D`. It’s certainly possible to make this image entirely within `Graphics3D`, but the function call becomes really long and confusing if you do it that way. I started by defining variables with the dimensions of the phone (in millimeters):

```
b = 71.5
h = 147.5
t = 7.85
```

In case you’re wondering, `b` is commonly used in my field for the width of objects—it’s short for *breadth*. We avoid `w` because we like to use it for weight.

The boxy iPhone is defined using the `Cuboid` function:

```
phone = Cuboid[{-b/2, -h/2, -t/2}, {b/2, h/2, t/2}]
```

The two arguments are its opposite corners.

In theory, I could use Mathematica’s own knowledge of its coordinate system to draw the axes, but it defaults to drawing axes along the edges of a box that encloses the object, and I didn’t find any handy examples of overriding that default. It was easier to define the axes using the `Arrow` function:

```
xaxis = Arrow[{{0, 0, 0}, {b/2 + 25, 0, 0}}]
yaxis = Arrow[{{0, 0, 0}, {0, h/2 + 25, 0}}]
zaxis = Arrow[{{0, 0, 0}, {0, 0, t/2 + 25}}]
```

The argument to `Arrow` is a list of two points: the “from” point and the “to” point. As you can see, each arrow starts at the origin (which is the center of the phone) and extends in the appropriate direction 25 mm past the edge of the phone. Why 25 mm? It looked about right when I tried it.

With the objects defined, I called `Graphics3D` to draw them:

```
Graphics3D[{Gray, phone, Black, Thick, xaxis, yaxis, zaxis},
 Boxed -> False, ImageSize -> Large]
```

(I’ve split the command into two lines here to make it easier to read, and I’ll do the same from now on.)

As you can see, the list of objects that makes up the first argument is interspersed with directives on how those objects are to be drawn. The first directive, `Gray`, applies that color to `phone`. Then `Black` overrides `Gray` and is applied to the three axes that follow. I added the `Thick` directive before the axes when I saw that they looked too spindly by default.

The `Boxed -> False` option stops Mathematica from its default of including a wireframe bounding box in the image. `ImageSize -> Large` does what you think—it makes the image bigger than it otherwise would be.

Here’s what Mathematica displays:

Mathematica obviously thinks the z direction should be pointing up. This makes sense, but it isn’t what I wanted. The notebook interface allows you to “grab” the image and rotate it into any orientation, so that’s what I did, putting it into the position you see at the top of the post. Then I right-clicked on the image and selected from the contextual menu. I opened the resulting image file in Acorn, added the axis labels, and uploaded the result to my web server.

After publishing the post, I returned to Mathematica to see if I could get it to clean a few things up. First, I wasn’t happy with the brownish color that appeared on certain edges, depending on the orientation. That was cleared up with the `Lighting -> "Neutral"` option. Then I wanted programmatic control over the orientation, which I got via `ViewPoint -> {-50, 30, 75}`, which sets the location of the virtual camera, and `ViewVertical -> {.1, 1, 0}`, which rotates the camera about the axis of its lens until the given vector is pointing up in the image.

Finally, I wanted to add the axis labels in Mathematica instead of relying on another program. This meant adding `Text` objects to the argument list, one for each axis. The final call to `Graphics3D` looked like this:

```
Graphics3D[{GrayLevel[.5], phone,
  Black, Thick, xaxis, yaxis, zaxis,
  FontSize -> 16,
  Text["x", {b/2 + 25, -7, 0}],
  Text["y", {-7, h/2 + 25, 0}],
  Text["z", {-5, -5, t/2 + 25}]},
 Boxed -> False, ImageSize -> Large,
 ViewPoint -> {-50, 30, 75}, ViewVertical -> {.1, 1, 0},
 Lighting -> "Neutral"]
```

Each `Text` object includes both the text and the point at which it is to be displayed. The `Text` items are preceded by a `FontSize` directive to make them big enough to see clearly. The `Black` directive earlier in the list was still in effect, so the text color was black.

Here’s the result:

As you can see, I’ve made the image more upright, and the neutral lighting has gotten rid of the weird brownish and bluish casts of the original. You may also note that I changed the original `Gray` directive to `GrayLevel[.5]`. This made no difference in the final output, but the `GrayLevel` argument did let me play around with different shades of gray before deciding that the 50% provided by `Gray` was just fine.

I still have a long way to go with Mathematica, but I’m making progress.


"}, {"title": "Testing MathML", "url": "https://leancrew.com/all-this/2023/09/testing-mathml/", "author": {"name": "Dr. Drang"}, "summary": "No real content in this post. I’m just checking to see if MathML equations render properly in RSS readers", "date_published": "2023-09-14T15:53:10+00:00", "id": "https://leancrew.com/all-this/2023/09/testing-mathml/", "content_html": "As I mentioned on Mastodon yesterday, I expect to be be including more equations in future posts, and I’d like the equations to appear readable in my RSS feed. This is a test to see if MathML will work.

I’ve been using MathJax (and its predecessor, jsMath) for many years, and it works quite well here on the blog itself, but because it formats the equations via JavaScript, the equations aren’t formatted in the RSS feed. The RSS feed just shows the LaTeX code for each equation—not bad for short equations, but increasingly hard to read as the equations get longer. If you’re an RSS subscriber, you’ve noticed that the following disclaimer appears at the bottom of each article in the feed:

> If there are equations in this post, you will see them rendered properly in the original article.

where “the original article” is a link to the blog, where MathJax can do its magic.

So I’m thinking about ways to get the equations to look right in RSS readers. One obvious way is to render them as images, upload them, and insert `<img>` tags at the appropriate spots,^{1} but this seems crude and very Web 1.0. Although I suppose I could render the equations as SVGs, which would allow users to zoom in without seeing jaggies.

MathML is the “right” way to do equations and is supported by all the browsers I can think of, so the math should look right for everyone who visits the blog directly.^{2} The question is whether it’ll be rendered properly in RSS readers. My guess is that it will be, since I believe that RSS readers use the same rendering engines used by browsers. But the only way to know for sure is to write a post with MathML and see how it looks. So here goes:

The general formula for the mass moment of inertia about the x-axis, \(I_{xx}\), is

\[I_{xx} = \int_V \rho \, (y^2 + z^2) \, dV\]

This can be specialized for certain geometries. For example, the moment of inertia of a thin rod about an axis through the rod’s center and perpendicular to it is

\[I_{xx} = \frac{1}{12} m L^2\]

Finally, for Dan Moren, the parallel axis theorem is

\[I_{xx}^P = I_{xx}^C + m d^2\]

where \(I_{xx}^C\) is the moment of inertia about an axis through the centroid of the body and \(I_{xx}^P\) is the moment of inertia about a parallel axis a distance \(d\) from the centroid.
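The thin-rod result follows from the general formula in one line. As a sketch (assuming a uniform rod of mass \(m\) and length \(L\), with \(x\) measured along the rod from its center, so the mass per unit length is \(m/L\)):

\[I = \int_{-L/2}^{L/2} \frac{m}{L} \, x^2 \, dx = \frac{m}{L} \left[ \frac{x^3}{3} \right]_{-L/2}^{L/2} = \frac{1}{12} m L^2\]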

After I publish this post, I’ll check my RSS feed in NetNewsWire and update the post with a note on how the equations looked.

**Update 14 Sep 2023 10:54 AM**
As I hoped, NetNewsWire shows the equations rendered properly (apart from some baseline misalignment for the inline math) in my RSS feed. I’m interested in hearing how other feedreaders perform.


"}, {"title": "iPhone 15 Pro facts and estimates", "url": "https://leancrew.com/all-this/2023/09/iphone-15-pro-facts-and-estimates/", "author": {"name": "Dr. Drang"}, "summary": "A couple of things I learned and one reasonable guess.", "date_published": "2023-09-13T19:40:26+00:00", "id": "https://leancrew.com/all-this/2023/09/iphone-15-pro-facts-and-estimates/", "content_html": "During yesterday’s keynote, I learned some things about the switch from stainless steel to titanium in the iPhone 15 Pro that I’d been guessing about before. I also did some quick and dirty calculations that might explain why Jason Snell thought the 15 Pro seemed distinctly lighter than the 14 Pro, even though the weight reduction is only 9–10%.

The titanium alloy used in the phone was revealed by Isabel Yang about 57 minutes into the presentation. She called it Grade 5 titanium, which is an ASTM designation. It’s also known as Ti-6Al-4V, because its major alloying elements are aluminum at 6% and vanadium at 4%. Allison Sheridan talked about its properties earlier this month, and I’ve been assuming that it would be the alloy Apple would choose ever since I heard they were switching to titanium for the band.

I guessed it would be Ti-6Al-4V because it’s the garden-variety alloy for titanium. A great material, but not exotic in any way. Apart from many aerospace applications, it’s also used in medical implants, so you know that skin contact won’t be a problem.

Shortly after the introduction of the alloy, Yang talked about how the titanium band is attached to the rest of the phone’s structure, which is aluminum. According to Apple’s newsroom:

> Using an industry-first thermo-mechanical process, the titanium bands encase a new substructure made from 100 percent recycled aluminum, bonding these two metals with incredible strength through solid-state diffusion.

In other words, the titanium and aluminum are welded together. Not the kind of welding you’re used to, to be sure, but still welding—solid-state welding with no melting of either material. The thermo part of the “thermo-mechanical process” is heating up the materials, and the mechanical part is smushing them together. In essence, this is the oldest form of welding, the kind the village smithy did under the spreading chestnut tree with a forge and a hammer.

I’m sure the process control needed to do solid-state welding with such thin parts is well beyond what other companies can achieve, and I can understand why Apple didn’t want to describe it using a term that conjures up images of sweaty guys in tilt-down helmets making sparks in a dusty manufacturing plant. But it’s still welding.

Finally, we come to Jason Snell’s surprise at how light the 15 Pro seemed when he played with it in the hands-on area. He mentioned this not only in his Macworld article, but also in the post-keynote episode of *Upgrade*. You wouldn’t expect a change from 206 g for the 14 Pro to 187 g for the 15 Pro to be that noticeable, but Greg Joswiak mentioned it in the keynote and Jason confirmed it. How can that be?

One answer is that people are just more sensitive than we give them credit for being. A 9–10% drop in weight may seem like a small amount to our brains but a large amount to our hands. But because it allowed me to do some simple calculations, I decided to look into another possibility.

Your ability to manipulate a phone is based primarily on its mass, but also on its moment of inertia. And since the reduction in mass when switching from stainless steel to titanium is occurring almost entirely at the perimeter of the phone, the moment of inertia should be reduced more than if the mass were reduced uniformly.

Let’s assume the two phones are the same size,^{1} 147.5 mm high by 71.5 mm wide (the 7.85 mm thickness can be ignored). We’ll set the origin at the geometric center of the phone and the x, y, and z axes will be associated with what would normally be called pitch, roll, and yaw. We’ll be doing enough approximating that there’s no point in trying to account for the phone’s rounded corners.

If the 187 g mass of the 15 Pro were distributed uniformly, its moment of inertia about the x-axis would be

\[I_{xx}^{(15)} = \frac{1}{12}(187 \;\mathrm{g})(147.5 \;\mathrm{mm})^2 = 339,035\; \mathrm{g \cdot mm^2}\]

If we assume the 14 Pro’s additional 19 g of mass is distributed uniformly around the perimeter, we can say that the long sides have

\[\frac{147.5 \;\mathrm{mm}}{2(147.5 \;\mathrm{mm} + 71.5 \;\mathrm{mm})} (19 \;\mathrm{g}) = 6.4 \;\mathrm{g}\]

of extra mass and the short sides have

\[\frac{71.5 \;\mathrm{mm}}{2(147.5 \;\mathrm{mm} + 71.5 \;\mathrm{mm})} (19 \;\mathrm{g}) = 3.1 \;\mathrm{g}\]

of extra mass. The moment of inertia of these four lines of additional mass about the x-axis is

\[I_{xx}^{(lines)} = 2 \left[ \frac{1}{12}(6.4 \;\mathrm{g})(147.5 \;\mathrm{mm})^2 + (3.1 \;\mathrm{g})\left(\frac{147.5 \;\mathrm{mm}}{2}\right)^2 \right]\]

\[I_{xx}^{(lines)} = 56,929 \;\mathrm{g \cdot mm^2}\]

You’ll note the use of the parallel axis theorem in the second term inside the brackets. I’m not calculating the moments of inertia of the top and bottom lines about their own axes because that’s too small to worry about.

Therefore, the moment of inertia of the 14 Pro is

\[I_{xx}^{(14)} = I_{xx}^{(15)} + I_{xx}^{(lines)} = 395,964 \;\mathrm{g \cdot mm^2}\]

and the reduction in the moment of inertia about the x-axis is

\[\frac{I_{xx}^{(14)} - I_{xx}^{(15)}}{I_{xx}^{(14)}} = \frac{56,929}{395,964} = 0.144\]

or 14–15%. This reduction, which is more than the mass reduction, would make the iPhone 15 Pro easier to turn, and that may add to the impression that it’s significantly lighter than the 14 Pro.

These calculations were fun, but the initial assumption, that the 15 Pro’s mass is uniformly distributed, is unquestionably wrong. How wrong depends on how non-uniform the mass distribution is, and if I knew that I wouldn’t have had to make the assumption in the first place. My guess is that the assumption is good enough for this kind of back-of-the-envelope calculation.

But even if the numbers are further off than I think, the concept is correct. Reducing the mass at the perimeter, which the change from stainless steel to titanium has done, has definitely reduced the moment of inertia more than a uniform reduction in mass would have. And that will make the 15 Pro easier to manipulate and will contribute—at least somewhat—to the impression of lightness.

You can, of course, do the same sort of calculation for the moments of inertia about the roll and yaw axes. This is left as an exercise for the reader.
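The back-of-the-envelope arithmetic above is easy to check in a few lines of Python. This is an editorial verification sketch, not part of the original post:

```python
# Sanity-check the moment-of-inertia numbers in the post.
h, b = 147.5, 71.5        # phone height and width, mm
m15, m14 = 187.0, 206.0   # iPhone 15 Pro and 14 Pro masses, g
extra = m14 - m15         # the 19 g attributed to the steel band

# Uniformly distributed 15 Pro about the x-axis (pitch)
I15 = m15 * h**2 / 12                      # ≈ 339,035 g·mm²

# Band mass apportioned to the four sides by length
m_long = h / (2 * (h + b)) * extra         # each long side, ≈ 6.4 g
m_short = b / (2 * (h + b)) * extra        # each short side, ≈ 3.1 g

# Long sides: rods of length h about their midpoints; short sides:
# point masses at h/2 from the center (parallel axis theorem)
I_lines = 2 * (m_long * h**2 / 12 + m_short * (h / 2)**2)

I14 = I15 + I_lines
reduction = (I14 - I15) / I14              # ≈ 0.144
```

(The unrounded values differ from the post’s by a few g·mm² because the post rounds the side masses to 6.4 g and 3.1 g before squaring.)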

1. Yes, I know the 15 Pro is slightly smaller, but I want to follow out the consequences of changing only the mass out at the perimeter. ↩


"}, {"title": "A football score matrix", "url": "https://leancrew.com/all-this/2023/09/a-football-score-matrix/", "author": {"name": "Dr. Drang"}, "summary": "Decades of determining storage requirements for finite element matrices finally pays off.", "date_published": "2023-09-11T23:56:35+00:00", "id": "https://leancrew.com/all-this/2023/09/a-football-score-matrix/", "content_html": "John Cook posted a fun article today about the all the possible football scores. The key is to recognize that a team’s score can be any non-negative integer other than one.^{1} If the most points a team can score is *M*, then Cook’s reasoning is

> Out of the M² pairs of two numbers coming from a set [of] M numbers, M of these pairs are tied, and in half of the rest the first number is higher than the second. So the number of possible scores, with each score bounded by M, is
>
> M + (M² − M)/2 = M(M + 1)/2.
>
> If M = 73 [the most points scored by a team in NFL history], there are 2,701 possible scores.

[Note that there are *M* possible scores even though 1 is impossible because 0 *is* possible.]

This is sound logic, but it isn’t how I would solve the problem. My first thought was to arrange the scores in an *M*×*M* matrix, with the columns representing the score of the winning (or tying) team and the rows representing the losing (or tying) team. Putting a checkmark at every possible score position and leaving the other positions blank (because the L team can’t score more than the W team), we get an upper triangular matrix:

This visual approach came to me because I’ve spent a lot of time dealing with upper (and lower) triangular matrices, and I don’t have to think much to come up with the formula for the number of nonzero terms:

\[\frac{M(M + 1)}{2}\]

You may recognize this as Gauss’s smartass formula for summing the first *M* natural numbers.
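Either way you count it, the 2,701 is easy to verify by brute force. A quick sketch (an editorial check, not from Cook’s article or the original post):

```python
# A team's score is any non-negative integer except 1, capped at M = 73.
M = 73
valid = [s for s in range(M + 1) if s != 1]        # 73 possible scores
# A final score pairs a winning (or tying) score with one no larger.
final_scores = {(w, l) for w in valid for l in valid if w >= l}
print(len(final_scores))   # 2701 = 73 * 74 / 2
```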

By the way, Cook was inspired to look into this problem by his Texans losing to the Ravens 25–9, a score that, improbably enough, had never happened before.

1. OK, there *is* a way for a team to score one point under the one-point safety rule, but we’re going to follow Cook’s argument and ignore that rule. ↩


"}, {"title": "Trump combinatorics", "url": "https://leancrew.com/all-this/2023/09/trump-combinatorics/", "author": {"name": "Dr. Drang"}, "summary": "A judge's order in the Georgia Trump trial leads to some big numbers.", "date_published": "2023-09-06T16:14:45+00:00", "id": "https://leancrew.com/all-this/2023/09/trump-combinatorics/", "content_html": "This afternoon, there will be a hearing in the Georgia state court case against Donald Trump and his 18 co-defendants. Judge Scott McAfee ordered the hearing yesterday and asked DA Fanni Willis’s office to make

> [a] good-faith estimate of the time reasonably anticipated to present the State’s case during a joint trial of all 19 co-defendants, **and alternatively any divisions thereof**, including the number of witnesses likely to be called and the number and size of exhibits likely to be introduced.

Emphasis added because that’s the point of this post.

Taken at face value, McAfee is asking Willis to make these estimates for a single trial, 19 separate trials, and every possibility in between. Since this is an impossible task because of the monstrous number of trial combinations, we don’t take him at face value. But what if we did? How many different ways could this case be split into separate trials?

Obviously, there’s just one way to have a single trial of all the defendants and just one way to have 19 trials, each with an individual defendant. Let’s consider the next arrangement on the complication scale: 18 trials. This would mean one trial with 2 defendants and 17 trials with individual defendants. The key to working out this figure is to determine the number of ways we can pair 2 defendants from the 19. For that we need the binomial coefficient:

\[\binom{19}{2} = \frac{19!}{2! \, 17!} = 171\]

The next most complicated arrangement is two trials. For this, we need to consider the nine ways to split up the defendants and the number of combinations associated with each of those splits.

| Split of defendants | Formula | Count |
|---|---|---|
| 1 and 18 | \(\dbinom{19}{1}\) | 19 |
| 2 and 17 | \(\dbinom{19}{2}\) | 171 |
| 3 and 16 | \(\dbinom{19}{3}\) | 969 |
| 4 and 15 | \(\dbinom{19}{4}\) | 3,876 |
| 5 and 14 | \(\dbinom{19}{5}\) | 11,628 |
| 6 and 13 | \(\dbinom{19}{6}\) | 27,132 |
| 7 and 12 | \(\dbinom{19}{7}\) | 50,388 |
| 8 and 11 | \(\dbinom{19}{8}\) | 75,582 |
| 9 and 10 | \(\dbinom{19}{9}\) | 92,378 |
| Total | | 262,143 |

Two of those numbers should be familiar.

At this point, I think it’s time to give up on the binomial coefficient. There may be a way to use it to work out the number of ways to have three trials, four trials, and so on up to 17 trials, but I don’t want to try it. More powerful tools are available, and we should take advantage of them.

\nThe Stirling numbers of the second kind are what we need. As the MathWorld article says, they are

> [t]he number of ways of partitioning a set of \(n\) elements into \(m\) nonempty sets…

The key words here are *partitioning* and *nonempty*. When we partition a set into subsets, the subsets do not intersect with each other and their union is the original set. Translated to our problem, that means each defendant is in one and only one trial. And the subsets are nonempty because we can’t have a trial with no defendant.
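To see what partitioning means here, this little sketch (my addition, not from the original post) enumerates the partitions of a three-element set. With 3 “defendants,” there are 5 possible groupings into trials:

```python
def partitions(items):
    '''Yield all ways to partition a list into nonempty blocks.'''
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        # Put the first item into each existing block...
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        # ...or give it a block of its own
        yield p + [[first]]

for p in partitions(['A', 'B', 'C']):
    print(p)   # 5 partitions in all
```

Each partition puts every defendant in exactly one block and allows no empty blocks, which is exactly the trial-splitting problem.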

The Stirling numbers of the second kind are in the OEIS, but the list on that page doesn’t go up high enough. The tables in Abramowitz & Stegun do, but there’s no way I can enter the numbers for \(n = 19\) without making several typos. So let’s fire up Mathematica and use its `StirlingS2` function:

```
Table[{n, StirlingS2[19, n]}, {n, 1, 19}]
```

yields

```
{{1, 1},
 {2, 262143},
 {3, 193448101},
 {4, 11259666950},
 {5, 147589284710},
 {6, 693081601779},
 {7, 1492924634839},
 {8, 1709751003480},
 {9, 1144614626805},
 {10, 477297033785},
 {11, 129413217791},
 {12, 23466951300},
 {13, 2892439160},
 {14, 243577530},
 {15, 13916778},
 {16, 527136},
 {17, 12597},
 {18, 171},
 {19, 1}}
```

\nwhere the first number in each line is the number of trials and the second is the number of ways to arrange the defendants in that many trials. We see that the values for 1, 2, 18, and 19 trials match what we came up with earlier, and now we have all the others, too. If your eyes are good, you can compare the numbers in the middle to the A&S table.

To get the total, we run

```
Total[Table[StirlingS2[19, n], {n, 1, 19}]]
```

to get 5,832,742,205,057, or over 5.8 trillion possibilities. I suggest we call this the *McAfee number*.
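The same numbers are easy to reproduce outside Mathematica. Here’s a quick Python check (my sketch, using the standard recurrence rather than any library):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    '''Ways to partition a set of n elements into k nonempty subsets.'''
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # Standard recurrence: the nth element either joins one of the k
    # existing subsets or forms a new singleton subset
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(19, 2))                             # 262143
print(sum(stirling2(19, k) for k in range(1, 20)))  # 5832742205057
```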


**Update 7 Sep 2023 1:53 PM**

Reader Rick Kaye, who has probably forgotten more combinatorics than I’ll ever know, emailed me to point out that the number of ways to partition a set into nonempty subsets is the Bell number. In Mathematica, it’s calculated with the `BellB` function, so

```
BellB[19]
```

returns 5,832,742,205,057, the same value I got by summing the Stirling numbers of the second kind. You can check this via

```
BellB[19] == Total[Table[StirlingS2[19, n], {n, 1, 19}]]
```

which returns `True`. Also, the Bell numbers are sequence A000110 in the OEIS, where you can look up the value directly. Thanks, Rick!


[If there are equations in this post, you will see them rendered properly in the original article.]

"}, {"title": "Tools, small and large", "url": "https://leancrew.com/all-this/2023/08/tools-small-and-large/", "author": {"name": "Dr. Drang"}, "summary": "Is it better to use minimalist tools or multipurpose ones? Yes.", "date_published": "2023-08-31T16:38:20+00:00", "id": "https://leancrew.com/all-this/2023/08/tools-small-and-large/", "content_html": "Last week, John D. Cook wrote an article that I kind of agree with and kind of disagree with. Weirdly, I think he kind of disagrees with it, too.

\nCook says “using a simple language can teach you that you don’t need features you thought you needed,” and he uses awk as the paradigm of this principle. He uses awk in a limited way to match the limits of the language:

> It has been years since I’ve written an awk program that is more than one line. If something would require more than one line of awk, I probably wouldn’t use awk. I’m not morally opposed to writing longer awk programs, but awk’s sweet spot is very short programs typed at the command line.


The only part of this that doesn’t apply to me is that I don’t think I’ve *ever* written an awk program longer than a single line. I try to use awk when its superpower—the automatic splitting of lines into fields—fits what I need to do.

But it’s in the next section of Cook’s post that we part ways. He argues that awk’s limited regular expression support^{1} is an advantage:

> At first I wished awk were more expressive in its regular expression implementation. But awk’s minimal regex syntax is consistent with the aesthetic of the rest of the language. Awk has managed to maintain its elegant simplicity by resisting calls to add minor conveniences that would complicate the language. The maintainers are right not to add the regex features I miss.


This is a reasonable argument for people who’ve never used regexes with a larger syntax, but I don’t know anyone who fits that description. Certainly not Cook and certainly not me. When Perl became the language of the web in the 90s, it put its regex flavor in front of the world, and the world responded by adopting it wherever it could. Pretty much the only programming tools that didn’t were those that existed before Perl: most prominently grep, sed, and awk. So if you want to use regular expressions with any of these tools, you have to ask yourself whether the simplicity of the language is worth accepting the straightjacket of a limited regex syntax.

\nAs much as I like awk, whenever I see my problem needing more than the most elementary of regular expressions, I abandon it for Perl and I don’t look back. Perl-compatible (or very nearly Perl-compatible) regular expressions are in all the other tools I use frequently—trying to remember the awk differences adds complexity to my use of it.

After reading Cook’s post, I thought *Wait a minute. Isn’t this the guy who recommended using `tcgrep` so you could stick with Perl regex syntax?* Yes it is. I think his argument in that earlier post applies just as well to awk as it does to grep.



"}, {"title": "Slugify (slight return)", "url": "https://leancrew.com/all-this/2023/08/slugify-slight-return/", "author": {"name": "Dr. Drang"}, "summary": "A Python function for turning a title into a slug for blog publishing.", "date_published": "2023-08-22T19:33:28+00:00", "id": "https://leancrew.com/all-this/2023/08/slugify-slight-return/", "content_html": "Earlier this year, I had some trouble publishing one of my posts. I think it was this one, and the problem was caused by the parentheses in the title. The code I’d written long ago to turn a title into the slug used in the URL wasn’t as robust as I thought it was. At the time, I made a quick change by hand to get the post published and made a note to myself to fix the code. Today I did. Twice.

\nThe word *slug* was apparently taken from the newspaper business and is defined this way:

> A slug is a few words that describe a post or a page. Slugs are usually a URL friendly version of the post title.


The URLs to individual posts here look like this:

```
https://leancrew.com/all-this/2023/08/slugify-slight-return/
```

which is the domain, a subdirectory, the year and month, and then the slug, which is based on the title. It’s supposed to be lower case, with all the punctuation stripped and all word separators turned into hyphens. Some people prefer underscores, but I like dashes.

I’ve had a `slugify` function in my blog publishing system for ages. In a long-ago post, I wrote about this early version of it:

```python
import re
from unidecode import unidecode

def slugify(u):
    "Convert Unicode string into blog slug."
    u = re.sub(u'[–—/:;,.]', '-', u)    # replace separating punctuation
    a = unidecode(u).lower()            # best ASCII substitutions, lowercased
    a = re.sub(r'[^a-z0-9 -]', '', a)   # delete any other characters
    a = a.replace(' ', '-')             # spaces to hyphens
    a = re.sub(r'-+', '-', a)           # condense repeated hyphens
    return a
```

This was written in Python 2. It had been updated to Python 3 and improved in the intervening years, but it was obviously still not bulletproof. Here’s the version I came up with this morning, including the necessary `import`s:

```python
 1:  import re
 2:  from unicodedata import normalize
 3:  
 4:  def slugify(text):
 5:      '''Make an ASCII slug of text'''
 6:  
 7:      # Make lower case and delete apostrophes from contractions
 8:      slug = re.sub(r"(\w)['’](\w)", r"\1\2", text.lower())
 9:  
10:      # Convert runs of non-characters to single hyphens, stripping from ends
11:      slug = re.sub(r'[\W_]+', '-', slug).strip('-')
12:  
13:      # Replace a few special characters that normalize doesn't handle
14:      specials = {'æ':'ae', 'ß':'ss', 'ø':'o'}
15:      for s, r in specials.items():
16:          slug = slug.replace(s, r)
17:  
18:      # Normalize the non-ASCII text
19:      slug = normalize('NFKD', slug).encode('ascii', 'ignore').decode()
20:  
21:      # Return the transformed string
22:      return slug
```

This will turn

```
Parabolic mirrors made simple(r)
```

into

```
parabolic-mirrors-made-simple-r
```

which is what I want. A more complicated string, including non-ASCII characters,

```
Hél_lo—yøü don’t wånt “25–30%,” do you?
```

will be converted to

```
hel-lo-you-dont-want-25-30-do-you
```

which would also work well as a slug.

Line 19, which uses the `normalize` function from the `unicodedata` module followed by `encode('ascii', 'ignore')`, is far from perfect or complete, but it converts most accented letters into reasonable ASCII. Line 19 ends with `decode` to turn what would otherwise be a `bytes` object into a string.

You’ll note that Lines 14–16 handle the conversion of a few special characters: æ, ß, and ø. I learned by running tests that those are some of the letters the `normalize`/`decode` system doesn’t convert to reasonable ASCII. Even though I couldn’t imagine myself using any of these letters—or any of the myriad of other letters that don’t get converted by `normalize`/`decode`—it bothered me that I was rewriting `slugify` yet again and still didn’t have a way of handling lots of non-ASCII characters.
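The limitation is easy to demonstrate. This little test (my addition, not from the original post) shows which letters survive the `normalize`/`encode('ascii', 'ignore')` round trip:

```python
from unicodedata import normalize

# 'é' decomposes under NFKD to 'e' plus a combining accent, so it survives
# the ASCII conversion; 'æ', 'ß', and 'ø' have no NFKD decomposition and
# are silently dropped by encode('ascii', 'ignore')
for ch in 'éæßø':
    ascii_form = normalize('NFKD', ch).encode('ascii', 'ignore').decode()
    print(f'{ch!r} -> {ascii_form!r}')
```

Letters that come back as empty strings are the ones that need an entry in the `specials` dictionary.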

I decided it was time to swallow my pride and look for a slugifying function written by someone who was willing to put in the time to do a complete job.

The answer was the aptly named `python-slugify` module by AvidCoderr, which has its own `slugify` function with many optional parameters. I learned that the defaults work for me. This code

```python
from slugify import slugify

print(slugify("Hél_lo—yøü don’t wånt “25–30%,” do you, Mr. Encyclopædia?"))
```

returns

```
hel-lo-you-dont-want-25-30-do-you-mr-encyclopaedia
```

which is just what I want.

A lot of this `slugify`’s power comes from its use of Tomaž Šolc’s `unidecode` module, which does the conversion to ASCII in a way that’s much more complete than the `normalize`/`decode` method.

So now my publishing code doesn’t have its own `slugify` function; it just imports AvidCoderr’s and calls it. Kind of anticlimactic, but it works better.

One more nice thing about the `slugify` module: when you install it—which I did via `conda install python-slugify`, because I use Anaconda to manage Python and its libraries—it comes with a command-line program also called `slugify`, which lets you test things out in the Terminal. You don’t even have to wrap the string you want to slugify in quotes:

```
slugify Hél_lo—yøü don’t wånt “25–30%,” do you, Mr. Encyclopædia?
```

returns

```
hel-lo-you-dont-want-25-30-do-you-mr-encyclopaedia
```

Note that if the string you’re converting includes characters that are special to the shell, you *will* have to wrap it in single quotes.

```
slugify '$PATH'
```

returns

```
path
```

but

```
slugify $PATH
```

returns a very long string that you probably don’t want in your URL.



"}, {"title": "Reducing the size of PNGs with Keyboard Maestro, AppleScript, and ImageOptim", "url": "https://leancrew.com/all-this/2023/08/reducing-the-size-of-pngs-with-keyboard-maestro-applescript-and-imageoptim/", "author": {"name": "Dr. Drang"}, "summary": "A Keyboard Maestro macro that runs ImageOptim on selected files.", "date_published": "2023-08-20T18:16:30+00:00", "id": "https://leancrew.com/all-this/2023/08/reducing-the-size-of-pngs-with-keyboard-maestro-applescript-and-imageoptim/", "content_html": "For a long time, I’ve been using ImageOptim to reduce the size of PNG files I use here on the blog. The SnapClip and SnapSCP macros I use for taking most of my screenshots run ImageOptim automatically, but when I need to annotate or otherwise edit a screenshot, I have to run ImageOptim manually on the final version of the image. Until recently I’ve been doing this by selecting the file and control-clicking on it to open it in ImageOptim.

\n\nThis is relatively quick, but I do have to make sure I hit ImageOptim in the long menu of apps—easy to do when I’m sitting up at a desk but less so when I’m lying on a bed or a couch. I decided to turn the operation into a Keyboard Maestro macro. I still have to start by selecting the file(s) I want to optimize, but I no longer have to aim at a menu item.

\nThe macro is called Optimize PNG, and here’s a screenshot of it:

\n\nIf you download it and import it into Keyboard Maestro as is, it will appear in the Finder group and will be active. You can run it when you have one or more PNG files selected in the Finder.

\nThe macro has one step, which is this AppleScript:

```applescript
 1:  -- Set text item delimiters for extracting the extension
 2:  set text item delimiters to "."
 3:  
 4:  -- Set the path to the ImageOptim command line executable
 5:  set io to "/Applications/ImageOptim.app/Contents/MacOS/ImageOptim"
 6:  
 7:  -- Run ImageOptim on each selected file whose extension is png or PNG
 8:  tell application "Finder"
 9:      set imageFiles to selection
10:      repeat with imageFile in imageFiles
11:          set filePath to POSIX path of (imageFile as alias)
12:          set fileExtension to last text item of filePath
13:          if fileExtension is "png" or fileExtension is "PNG" then
14:              do shell script (io & " " & quoted form of filePath)
15:          end if
16:      end repeat
17:  end tell
18:  
19:  do shell script "afplay /System/Library/Sounds/Glass.aiff"
```

\nBasically, the script loops through all the selected files and runs ImageOptim on them. There’s some logic in there that makes sure^{1} that ImageOptim is run only on PNG files, and a conversion from an AppleScript file description to a Unix-style file path. The command that gets run on every PNG file is

```
/Applications/ImageOptim.app/Contents/MacOS/ImageOptim '/path/to/image file.png'
```

The file path is quoted (Line 14) to ensure that spaces are handled correctly.
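For what it’s worth, the same filter-and-run loop could be written as a plain shell function. This is a hypothetical sketch of mine, not part of the macro; the `IMAGEOPTIM` override exists only so the function can be tested without the app installed:

```shell
optimize_pngs() {
    # Path to ImageOptim's bundled command-line executable
    io="${IMAGEOPTIM:-/Applications/ImageOptim.app/Contents/MacOS/ImageOptim}"
    for f in "$@"; do
        case "$f" in
            # Quoting "$f" handles spaces in file names
            *.png|*.PNG) "$io" "$f" ;;
        esac
    done
}
```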

\nWhen the optimizing is done, the Glass sound plays (Line 19) to let me know the files are ready.

\nThe name of the macro is “Optimize PNG,” but I use ⌃⌥⌘I as the trigger because I think of it as opening ImageOptim, even though ImageOptim never shows itself except briefly in the Dock.

1. It’s certainly not a foolproof way of “making sure.” All it does is get the file extension (Lines 2 and 12) and checks to see if it’s “png” or “PNG” (Line 13). That’s good enough for me, but if you’re the kind of person who saves files with misleading extensions (or no extension at all), you’ll have to come up with a better way of distinguishing PNG files. Also, you should rethink your life choices. ↩


"}, {"title": "Sleeping Beauty is in the eye of the beholder", "url": "https://leancrew.com/all-this/2023/08/sleeping-beauty-is-in-the-eye-of-the-beholder/", "author": {"name": "Dr. Drang"}, "summary": "Looking at the Sleeping Beauty problem without conditional probabilities.", "date_published": "2023-08-19T18:44:53+00:00", "id": "https://leancrew.com/all-this/2023/08/sleeping-beauty-is-in-the-eye-of-the-beholder/", "content_html": "A couple of days ago, Numberphile posted another Tom Crawford video in which he presents an interesting problem and explains it in an unnecessarily complicated way. This time, it’s the Sleeping Beauty problem.

\n\nHere’s the problem as posed in the Wikipedia article:

> Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
>
> - If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
> - If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
>
> In either case, she will be awakened on Wednesday without interview and the experiment ends.
>
> Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your degree of belief^{1} now for the proposition that the coin landed heads?”

I don’t understand why the problem is typically described with Sleeping Beauty being given a drug to put her to sleep. Surely it would be more appropriate for it to be a magic spell.

\nThe first thing I don’t like about Tom’s presentation is how he poses the question asked of Sleeping Beauty: What is the probability that the coin was a head?

Asking about the probability instead of the degree of belief suggests an objectivity that shouldn’t be there. *What is the probability* connotes a sort of omniscience that doesn’t belong in the question. That’s certainly one of the reasons Brady thinks at one point that the answer should be ½—a fair coin was flipped, and its probability of landing heads isn’t affected by any of the other bits of the story.

But when the question is posed in terms of *degree of belief*, and we remember that it’s Sleeping Beauty’s degree of belief each time she is awakened, we start thinking about the problem differently. This is what leads to the longish section in the middle of the video in which Tom goes through various assumptions and conditional probabilities to get to the “thirder” answer. And this is the part that I think can be made shorter and clearer.

First, let’s think about what degree of belief is. It is an expression of the odds that would be given in a fair wager. In this case, we recast the problem as Sleeping Beauty being offered a bet—heads or tails—by the experimenter each time she’s awakened. We can start by considering which way she should bet if she’s offered 1:1 odds and then move on to determining what odds would be fair to both her and the experimenter.

\nBecause it’s a fair coin, half the time it will land on heads and there will be one wager. The other half of the time it will land on tails and there will be two wagers. If Sleeping Beauty bets on tails, she will, on average, lose one bet half the time and win two bets half the time. If we say the bet is $10, her expected return from betting on tails is

\[\frac{1}{2} (-\$10) + \frac{1}{2} (2 \times \$10) = \$5\]

The experimenter would have to be an idiot to make this bet with even odds. The fair way is for the person who bets on tails to put up $20 and the person who bets on heads to put up $10. That way the expected return for the tails-bettor is

\[\frac{1}{2} (-\$20) + \frac{1}{2} (2 \times \$10) = \$0\]

and the expected return for the heads-bettor is the same:

\[\frac{1}{2} (\$20) + \frac{1}{2} (2 \times -\$10) = \$0\]

The 2:1 odds make the bet fair.

\nBecause 2:1 odds is the same as “two out of three,” Sleeping Beauty’s degree of belief in tails is ⅔. Conversely, her degree of belief in heads is ⅓.
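Here’s that arithmetic as a quick sanity check in Python (my addition, just restating the two expected-value equations above):

```python
p = 0.5  # probability of heads for a fair coin

# Tails-bettor stakes $20: loses it on heads, wins $10 twice on tails
tails_ev = p * (-20) + (1 - p) * (2 * 10)

# Heads-bettor stakes $10: wins the $20 stake on heads, loses $10 twice on tails
heads_ev = p * 20 + (1 - p) * (2 * -10)

print(tails_ev, heads_ev)  # 0.0 0.0
```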

\nNote that it’s the disparity in the number of wagers (or questions, if we go back to the original problem statement) that makes the degrees of belief differ from ½. If we change the problem slightly and say that there will be *one* question, regardless of the outcome of the coin toss (if it’s tails we could do another coin toss to decide whether the question is asked on Monday or Tuesday), then there will be no disparity in wagers and even odds would be fair. It’s possible that this misinterpretation of the problem—that the question is asked once per experiment rather than once per awakening—is what leads some people to think that Sleeping Beauty’s degree of belief should be ½.

Another way for the degree of belief to be ½ would be if the wager is made not in the middle of the experiment, but either before it on Sunday or after it on Wednesday. In both of these cases, 1:1 odds would be fair.

\nWe can also run simulations of the problem to give us insight into the answer. Here’s a short Python program that simulates both the one-question-per-awakening problem and the one-question-per-experiment problem:

```python
#!/usr/bin/env python3

from collections import defaultdict
from random import choice

# Set up the problem
sides = 'Heads Tails'.split()
days = 'Monday Tuesday'.split()
qdays = {'Heads': ['Monday'], 'Tails': days}

# Initialize the question matrix
q = defaultdict(int)

# Run 10,000 experiments assuming the question is asked every day
for f in range(10000):
    flip = choice(sides)
    for day in qdays[flip]:
        q[(flip, day)] += 1

# Show the results
print('Question asked every awakening')
for s in sides:
    for d in days:
        print(f'{s} and {d}: {q[(s, d)]}')

print()

# Reinitialize the question matrix
q = defaultdict(int)

# Run 10,000 experiments assuming the question is asked once per experiment
for f in range(10000):
    flip = choice(sides)
    day = choice(qdays[flip])
    q[(flip, day)] += 1

# Show the results
print('Question asked once per experiment')
for s in sides:
    for d in days:
        print(f'{s} and {d}: {q[(s, d)]}')
```

In both cases, the `q` dictionary is being used to keep track of questions. The keys of `q` are tuples of the (initial) coin toss and the day, e.g., `('Tails', 'Monday')`, and the values of `q` are the number of questions asked for each of those condition pairs. I’m using a `defaultdict` for `q` to avoid having to initialize it, and the `choice` function from the `random` module to simulate the coin flips.

Because the program uses random numbers and doesn’t specify a seed, it will give slightly different answers every time it’s run. Here’s the answer from one run,

```
Question asked every awakening
Heads and Monday: 4969
Heads and Tuesday: 0
Tails and Monday: 5031
Tails and Tuesday: 5031

Question asked once per experiment
Heads and Monday: 4905
Heads and Tuesday: 0
Tails and Monday: 2572
Tails and Tuesday: 2523
```

\nwhich fits well with our previous answers.

\nSimulations like this can give you confidence in the solutions you’ve come up with by other means. If you haven’t come up with a solution by other means, a simulation can lead you to the correct line of reasoning. Of course, your simulation code has to match the setup of the problem, which is often the tricky bit.

\nAs I was going through this problem, I couldn’t help but think about the Sleeping Beauty episode of *Fractured Fairy Tales*.

The depiction of Walt Disney as a con man is probably not as wildly obvious now as it was in the early 60s, but even if you don’t know that Daws Butler is recycling his Hokey Wolf/Sgt. Bilko voice or that Disneyland used to have lettered tickets for different attractions, you still get the point.

1. The article actually uses *credence* instead of *degree of belief*, but I think the latter is easier to understand, especially for a character from the Middle Ages. ↩


"}, {"title": "Continued fractions in Python", "url": "https://leancrew.com/all-this/2023/08/continued-fractions-in-python/", "author": {"name": "Dr. Drang"}, "summary": "A simple function for continued fractions and their convergents.", "date_published": "2023-08-16T15:40:09+00:00", "id": "https://leancrew.com/all-this/2023/08/continued-fractions-in-python/", "content_html": "My last post ends with “one last thing” about continued fractions. That turned out to be a lie. After playing around a bit more, I decided I should have some functions that compute continued fractions in Python, so I looked around for continued fraction libraries. I found some, but none of them seemed like *the* library. So I decided to write my own.

Let’s review some notation and properties. A continued fraction is one in which the denominator contains a fraction, and the denominator of that fraction contains a fraction, and so on.

\[x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \ldots}}}\]

This is considered the standard form for continued fractions, where the numerators are all ones. You can write out a continued fraction with other numbers as the numerators, but it can always be reduced to this form.

\nIf \\(x\\) is a rational number, then the continued fraction has a finite number of terms and will end with a \\(1/a_n\\) term. If \\(n=4\\), for example, the fraction will look like this:

\[x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{a_4}}}}\]

If \(x\) is irrational, the continued fraction has an infinite number of terms, although terms may repeat. Famously, the golden ratio goes on forever and all the terms are one:

\[\phi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ldots}}}\]

A less explicit but far more compact way to display a continued fraction is to just show the \(a\) terms as a bracketed list:

\[x = [a_0; a_1, a_2, a_3, \ldots ]\]

It’s common to use a semicolon to separate the \(a_0\) term from the others. Mathematica doesn’t do that because it’s more convenient to just use a list. As we saw in the last post, the first five terms of the continued fraction for \(\pi\) are

```
In[1]:= ContinuedFraction[Pi, 5]

Out[1]= {3, 7, 15, 1, 292}
```

\nwhere Mathematica uses braces to surround its lists. We’ll use this same idea in Python, where the lists are bracketed.

A *segment* of a continued fraction, \(s_k\), is a finite continued fraction consisting of the first \(k+1\) terms of \(x\):

\[s_k = [a_0; a_1, a_2, \ldots, a_k]\]

A *remainder*, \(r_k\), is all the terms starting with the \(k^{th}\) and continuing on, whether the continued fraction is finite or infinite:

\[r_k = [a_k; a_{k+1}, a_{k+2}, \ldots]\]

So any continued fraction can be broken into a segment, \\(s_{k-1}\\), and a remainder, \\(r_k\\).

\nA *convergent* is the rational number corresponding to a segment. Convergents are what we use to get rational approximations of numbers. In the last post, we did this

```
In[2]:= Convergents[ContinuedFraction[Pi, 5]]

            22  333  355  103993
Out[2]= {3, --, ---, ---, ------}
             7  106  113   33102
```

\nto see why \\(22/7\\) and \\(355/113\\) are good rational approximations of \\(\\pi\\).

\nAn interesting property of convergents is that those from even-indexed segments—\\(s_0\\), \\(s_2\\), and so on—bound \\(x\\) from below, and those from odd-indexed segments—\\(s_1\\), \\(s_3\\), and so on—bound \\(x\\) from above. (If \\(x\\) is rational, then there is a final convergent which is equal to \\(x\\), regardless of whether it’s even or odd.) The even convergents form an increasing sequence; the odd convergents form a decreasing sequence.
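A quick numerical check of this alternating-bounds property (my addition, not from the original post), using the five convergents of \(\pi\) from above:

```python
from fractions import Fraction
from math import pi

# Convergents of pi from the Mathematica output above
convergents = [Fraction(3, 1), Fraction(22, 7), Fraction(333, 106),
               Fraction(355, 113), Fraction(103993, 33102)]

# Even-indexed convergents should sit below pi, odd-indexed ones above
for k, c in enumerate(convergents):
    side = 'below' if float(c) < pi else 'above'
    print(k, c, side)
```

The printout alternates below/above, exactly as the property says.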

OK, if you want more you can go to the Wikipedia page or get a copy of Khinchin’s book^{1}. Let’s move on to the code.

The function I wrote, `continued`, returns a tuple of

- the continued fraction of the argument as a list of integers; and
- the convergents of the argument as a list of Fractions.

Fractions are a Python type supplied by the *fractions* library.

`continued` is the only function in `cfractions.py`, a file I’ve saved in my `site-packages` directory. This makes it easy to import when I’m working in Jupyter:

```
In [1]: import math

In [2]: from cfractions import continued

In [3]: continued(math.pi)
Out[3]:
([3, 7, 15, 1, 292],
 [Fraction(3, 1),
  Fraction(22, 7),
  Fraction(333, 106),
  Fraction(355, 113),
  Fraction(103993, 33102)])
```

Here’s the code:

```python
 1:  from fractions import Fraction
 2:  from math import isclose
 3:  
 4:  def continued(x, terms=20, rel_tol=1e-9, abs_tol=0.0):
 5:      'Return the continued fraction and convergents of the argument.'
 6:      # Initialize, using Khinchin's notation
 7:      a = []       # continued fraction terms
 8:      p = [0, 1]   # convergent numerator terms (-2 and -1 indices)
 9:      q = [1, 0]   # convergent denominator terms (-2 and -1 indices)
10:      s = []       # convergent terms
11:      remainder = x
12:  
13:      # Collect the continued fraction and convergent terms
14:      for i in range(terms):
15:          # Compute the next terms
16:          whole, frac = divmod(remainder, 1)
17:          an = int(whole)
18:          pn = an*p[-1] + p[-2]
19:          qn = an*q[-1] + q[-2]
20:          sn = Fraction(pn, qn)
21:  
22:          # Add terms to lists
23:          a.append(an)
24:          p.append(pn)
25:          q.append(qn)
26:          s.append(Fraction(sn))
27:  
28:          # Convergence check
29:          if isclose(x, float(sn), rel_tol=rel_tol, abs_tol=abs_tol):
30:              break
31:  
32:          # Get ready for next iteration
33:          remainder = 1/frac
34:  
35:      # Return the tuple of the continued fraction and the convergents
36:      return (a, s)
```

\nThe terms of the continued fraction are calculated using a form of Euclid’s algorithm for finding the greatest common divisor (GCD) of two numbers. The numerators and denominators of the convergents are calculated using the recurrence relations,

\[p_k = a_k p_{k-1} + p_{k-2}\]

\[q_k = a_k q_{k-1} + q_{k-2}\]

You may be wondering why I’m calculating both the continued fraction and the convergents in the same function instead of doing them separately as Mathematica does. Two reasons:

- First, I typically want both the continued fraction and its convergents, so there’s no point in forcing me to make two function calls in the usual case.
- Second, I want to be able to set specific convergence criteria, and to do that I need to know how close I am to the input argument as new terms of the continued fraction are calculated. That means I need to calculate the convergents along with the continued fraction terms.
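As a small illustration of the recurrence relations (my addition): feeding in the all-ones continued fraction of the golden ratio produces ratios of consecutive Fibonacci numbers:

```python
from fractions import Fraction

p, q = [0, 1], [1, 0]   # seed values for the recurrences
convergents = []
for _ in range(10):
    a = 1                      # every term of phi's continued fraction is 1
    p.append(a * p[-1] + p[-2])
    q.append(a * q[-1] + q[-2])
    convergents.append(Fraction(p[-1], q[-1]))

print([str(c) for c in convergents])
```

The last value, 89/55 ≈ 1.61818, is already within 2×10⁻⁴ of \(\phi\).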

The three optional parameters to the function, `terms`, `rel_tol`, and `abs_tol`, set the convergence criteria. `terms` is an upper bound on the number of continued fraction terms that will be calculated, no matter what the other tolerance values are. `rel_tol` and `abs_tol` are relative and absolute tolerance values that can stop the process before the `terms` limit is reached. Their names and default values are taken from the `isclose` function of the `math` library, which is used on Line 29. For example, we could set an absolute tolerance on our rational estimate of \(\pi\) this way:

```
In [4]: continued(math.pi, rel_tol=0, abs_tol=1e-12)
Out[4]:
([3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3],
 [Fraction(3, 1),
  Fraction(22, 7),
  Fraction(333, 106),
  Fraction(355, 113),
  Fraction(103993, 33102),
  Fraction(104348, 33215),
  Fraction(208341, 66317),
  Fraction(312689, 99532),
  Fraction(833719, 265381),
  Fraction(1146408, 364913),
  Fraction(4272943, 1360120)])
```

We’ve hit our tolerance because

\[\pi - \frac{4272943}{1360120} \approx 4 \times 10^{-13}\]

I like the idea of having control over the convergence criteria. Mathematica’s second argument to `ContinuedFraction` gives the equivalent of my `terms` parameter, but its precision control is, as far as I can tell, entirely internal—there’s no way for the user to set a tolerance.
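For reference, the stopping test on Line 29 relies on `math.isclose`, which considers two numbers close when `abs(a-b) <= max(rel_tol*max(abs(a), abs(b)), abs_tol)`. A quick illustration of the absolute-tolerance mode used above:

```python
from math import isclose, pi

# With rel_tol=0, only the absolute tolerance matters.
# The error in 355/113 is about 2.7e-7, so it passes the
# looser test and fails the tighter one.
print(isclose(pi, 355/113, rel_tol=0, abs_tol=1e-6))   # True
print(isclose(pi, 355/113, rel_tol=0, abs_tol=1e-8))   # False
```

Setting `rel_tol=0`, as in the session above, is how you get purely absolute-tolerance stopping.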

On the other hand, a disadvantage of my function is that its precision is limited to that of Python floats, whereas Mathematica will give you as much precision as you ask for. I can’t, for example, ask for an `abs_tol` of \(1 \times 10^{-20}\) and expect to get a correct answer:

```
In [5]: continued(math.pi, rel_tol=0, abs_tol=1e-20)
Out[5]:
([3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 3],
 [Fraction(3, 1),
  Fraction(22, 7),
  Fraction(333, 106),
  Fraction(355, 113),
  Fraction(103993, 33102),
  Fraction(104348, 33215),
  Fraction(208341, 66317),
  Fraction(312689, 99532),
  Fraction(833719, 265381),
  Fraction(1146408, 364913),
  Fraction(4272943, 1360120),
  Fraction(5419351, 1725033),
  Fraction(80143857, 25510582),
  Fraction(245850922, 78256779)])
```

Mathematica will happily use as many digits as needed and do so correctly. It tells us that Python screwed up in the 14th term:

```
In[3]:= ContinuedFraction[Pi, 15]

Out[3]= {3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1}
```
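If you’re wondering where the wrong terms come from: every Python float is exactly a rational number, so past a certain point the function is expanding the float `math.pi`, not π itself. You can see the float’s exact value directly:

```python
from fractions import Fraction
from math import pi

# Every float is exactly a ratio of integers; math.pi is this one,
# with a denominator of 2**48
print(Fraction(pi))   # 884279719003555/281474976710656
```

The continued fraction of that rational agrees with π’s for the first dozen or so terms and then goes its own way.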

I’m not especially concerned about this reduced precision, as I seldom want my continued fractions to go out that far. And when I do, I have Mathematica to fall back on.

Finally, a small advantage of doing continued fractions in Python instead of Mathematica is that Python uses zero-based indexing for lists, which is consistent with the standard notation given above. Mathematica uses one-based indexing, which usually works out nicely when dealing with vectors and matrices but not in this case.


**Update 16 Aug 2023 10:53 PM**

Shortly after this post was published, Thomas J. Fan got in touch with me on Mastodon and told me that the SymPy library has continued fraction functions in the number theory sublibrary. Of course! I felt silly for not looking there.

He also included this code snippet, which I ran in Jupyter:

```
In [1]: from itertools import islice

In [2]: from sympy.core import pi

In [3]: from sympy.ntheory.continued_fraction import continued_fraction_iterator

In [4]: list(islice(continued_fraction_iterator(pi), 15))
Out[4]: [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1]
```

So the function is not only prewritten, it’s more accurate than mine. Of course, the final line to get a list of terms is kind of convoluted, but it could easily be wrapped in a more compact function. I may give it a go.
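A wrapper along those lines might look something like this (a sketch of my own, with a made-up name, not something SymPy provides):

```python
from itertools import islice
from sympy.core import pi
from sympy.ntheory.continued_fraction import continued_fraction_iterator

def cf_terms(x, n=15):
    'Return the first n continued fraction terms of x as a list.'
    return list(islice(continued_fraction_iterator(x), n))

cf_terms(pi, 5)   # [3, 7, 15, 1, 292]
```

Because `continued_fraction_iterator` terminates on its own for rational arguments, the `islice` handles both finite and infinite expansions.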

Thanks, Thomas!


**Update 18 Aug 2023 9:49 AM**

I played around with SymPy’s continued fraction functions yesterday and have decided to stick with my function. As discussed above, the SymPy functions are more accurate than mine, but to get that accuracy you have to be working with the SymPy definitions of numbers like \(\pi\) and functions like `sqrt`. Since I’m usually working with the `math` package’s definitions, which are numeric rather than symbolic, I wouldn’t normally get the value out of the SymPy functions. Still, it’s good to know that they’re there.

- It’s a thin Dover paperback, so it’s pretty cheap. ↩

