As I’ve said before, Connections is an obvious rip-off of the Connecting Wall segment of the BBC game show Only Connect. The main differences, apart from the Connecting Wall being much, much harder, are
These rule differences mean game play is different. In the early stages of the Connecting Wall, rapid guesses are a common way to eliminate red herrings. If there are five clues that could fit into a category, you can eliminate the red herring in no more than five quick guesses,^{1} and that’s a good strategy for figuring out groups. It’s after the first two groups are set that teams tend to take a more methodical approach to conserve wrong guesses.
In Connections, at least the way I think it should be played, the goal is not simply to get all four groups (I’ve never failed to do that) but to get them with no mistakes. Texting a four-row solution to your group of fellow players is the ideal. Just guessing from the start—as some people I know do—is an almost certain way to get less than a perfect score.
So unless I’m in a hurry or have given up, I don’t start submitting guesses until I’m sure of at least three, and ideally all four, of the categories. This strategy means I almost always get a better score than my hasty wife and kids, and the additional need to keep track of all the categories in my head is another weapon in the fight against cognitive decline.^{2}
By the way, if you’re outside the UK and would like to see Only Connect, the wheelsongenius YouTube account uploads episodes shortly after they air. The show is currently in Series 19, and wheelsongenius has playlists for several of the older series.
Combinatorics can work against you. If there are six clues that could fit in a category, it could take as many as 15 guesses to get through all the combinations. Keeping track of which two you’ve kept out of your previous guesses is basically impossible, even for the kind of quizzing champions that appear on Only Connect. ↩
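That 15, by the way, is just a binomial coefficient—the number of ways to choose which four of the six candidate clues to submit. A one-line check (mine, not part of the original footnote):

```python
from math import comb

# Ways to choose 4 clues from 6 candidates: the worst-case guess count
print(comb(6, 4))  # → 15
```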
I’m skeptical of the claim that doing puzzles and other brainwork can ward off the effects of aging, but it can’t hurt. ↩
[If there are equations in this post, you will see them rendered properly in the original article].
I wanted a script to help me print out blank monthly calendars. The program I’ve always used for this is pcal, which is pretty easy to use. For example,

pcal -e -S 10 2023 3

will create three monthly calendars starting with this coming October. The -e option tells pcal to make empty calendars,^{1} and the -S tells it not to include mini-calendars for the preceding and succeeding months.^{2} The result looks like this:
The thing about pcal is that the p stands for PostScript, a great file format but one that’s been superseded^{3} by PDF. So to get pcal’s output into a more modern format, I pipe its output to ps2pdf:
pcal -e -S 10 2023 3 | ps2pdf - -
The first hyphen tells ps2pdf to get the PostScript from standard input and the second hyphen tells it to write the resulting PDF to standard output. Of course, I really don’t want the PDF code spewing out into my Terminal, so I use Apple’s very handy open command to pipe it into Preview:
pcal -e -S 10 2023 3 | ps2pdf - - | open -f -a Preview
The -f option tells open to take what’s being piped in through standard input, and the -a Preview tells it to open that content in the Preview application.
This isn’t the most complicated command pipeline in the world, but I have trouble remembering both the -S option and the order of the month, year, and count arguments. So I decided to whip up a quick little shell script to replace my faulty memory.
You should know first that my main use of this command is to print a few upcoming months for my wife. She’s always preferred paper calendars but decided last December that 2023 would be different, so I didn’t get a 2023 calendar for her for Christmas. Partway through the year, she changed her mind. There’s a lot less selection for calendars in spring, and it would kill her to waste money on a full year when she’d only use eight months, so she asked me to print her a few months at a time.
My first thought was to make a script that takes just two arguments: the starting month and the number of months—I could have the script figure out the year. That thinking led to this simple script, which I called bcal:
bash:
1: #!/usr/bin/env bash
2:
3: y=$(date +%Y)
4: pcal -e -S $1 $y $2 | ps2pdf - - | open -f -a Preview
This worked fine, but you’ve probably already seen the problem. What happens at the end of the year, when it’s December and she wants calendars for the first few months of the following year?
I could use date to get the current month, date +%m, and if it’s 12, add one to $y. But what if I wanted to print out the upcoming January calendar in November? Instead of trying to have the program guess what I wanted, it seemed better for me to tell it what I wanted. That meant adding an option to bcal to let me tell it I wanted next year instead of this year.
At this point, I was tempted to give up on bash and move to Python. I know how to handle options, dates, and external calls in Python, so the switch would have been fairly easy. But I had an itch to learn how to do options in bash. Couldn’t be too hard, could it?
It wasn’t. The key command is getopts, and it’s easy to find examples of its use. And once I had getopts working, I expanded the script to add a help/usage message and one bit of error handling. Here’s the final version of bcal:
bash:
1: #!/usr/bin/env bash
2:
3: # Make PDF file with blank calendar starting on month of first argument
4: # and continuing for second argument months
5:
6: usage="Usage: bcal [-n] m c
7: Arguments:
8: m starting month number
9: c count of months to print
10: Option:
11: -n use next year instead of this year"
12:
13: # Current year
14: y=$(date +%Y)
15:
16: # If user asks for next year (-n), add one to the year
17: while getopts "nh" opt; do
18: case ${opt} in
19: n) y=$((y + 1));;
20: h) echo "$usage"; exit 0;;
21: ?) echo "$usage"; exit 1;;
22: esac
23: done
24:
25: # Skip over any options to the required arguments
26: shift $(($OPTIND - 1))
27:
28: # Exit with usage message if there aren't two arguments
29: if (($# < 2)); then
30: echo "Needs two arguments"
31: echo "$usage"
32: exit 1
33: fi
34:
35: # Make the calendar, convert to PDF, and open in Preview
36: pcal -e -S $1 $y $2 | ps2pdf - - | open -f -a Preview
Lines 17–23 handle the options. I decided on -n as the option for “next year” and you can see in the case statement that giving that option adds one to the current year. Any other options lead to the usage message and a halt to the script.
Line 26 uses shift to skip over the options to the required arguments. $OPTIND is the option index, which gets increased by one with each option processed by getopts, so this command makes $1 point to the month and $2 point to the count, just as if there were no options.
The error handling in Lines 29–33 is limited to just making sure there are two required arguments. If the arguments are letters or negative numbers, the script will continue through this section and fail in a clumsy way. I’m not especially worried about that because this is a script for me, and I’m unlikely to invoke it as bcal hello world.
Anyway, now I can get the next three months with
bcal 10 3
and the first two months of next year with
bcal -n 1 2
When Preview opens, it shows me a temporary file.
Usually I just print it out and the temporary file is deleted when I quit Preview. This is the nice thing about piping into open: the script doesn’t create any files that I have to clean up later. But I can save the file if I think there’s a need to.
I should mention that pcal can be installed through Homebrew, and ps2pdf is typically installed as part of the Ghostscript suite, which is also in Homebrew.
Now that I kind of know how to use getopts, I’ll probably extend my shell scripts before bailing out to Perl or Python. I’m not sure that’s a good thing.
By default, pcal looks in your home directory for a file named .calendar and parses it to print entries on the appropriate days. Back when I was a Linux user, this was how I kept track of my calendar. Whenever I added a new entry, I’d print out an updated calendar on the back of a sheet I pulled out of the recycling bin. It worked pretty well in those pre-smartphone days. ↩
English has more spelling anomalies than there are stars in the sky, but right now the one that’s bothering me the most is that succeeding has a doubled E and preceding doesn’t. ↩
No doubled E! ↩
Because I don’t have a 3D drawing app, I did it in Mathematica. And because I’m new to Mathematica, I fumbled around a bit before figuring out what to do. I decided to write up what I learned so I could refer to it later, and I decided to post it here in case it’s of any value to anyone else.
The key function when creating 3D images (that aren’t plots) is Graphics3D. As you can see from the linked documentation, it can take an enormous number of arguments and options. The main argument is a list of the objects to be drawn, which in the drawing above consisted of the boxy representation of an iPhone and three arrows representing the x, y, and z axes (I added the axis labels “by hand” in Acorn).
One of the first things I learned was to create the objects separately instead of trying to build them within the call to Graphics3D. It’s certainly possible to make this image entirely within Graphics3D, but the function call becomes really long and confusing if you do it that way. I started by defining variables with the dimensions of the phone (in millimeters):
b = 71.5
h = 147.5
t = 7.85
In case you’re wondering, b is commonly used in my field for the width of objects—it’s short for breadth. We avoid w because we like to use it for weight.
The boxy iPhone is defined using the Cuboid function:
phone = Cuboid[{-b/2, -h/2, -t/2}, {b/2, h/2, t/2}]
The two arguments are its opposite corners.
In theory, I could use Mathematica’s own knowledge of its coordinate system to draw the axes, but it defaults to drawing axes along the edges of a box that encloses the object, and I didn’t find any handy examples of overriding that default. It was easier to define the axes using the Arrow function:
xaxis = Arrow[{{0, 0, 0}, {b/2 + 25, 0, 0}}]
yaxis = Arrow[{{0, 0, 0}, {0, h/2 + 25, 0}}]
zaxis = Arrow[{{0, 0, 0}, {0, 0, t/2 + 25}}]
The argument to Arrow is a list of two points: the “from” point and the “to” point. As you can see, each arrow starts at the origin (which is the center of the phone) and extends in the appropriate direction 25 mm past the edge of the phone. Why 25 mm? It looked about right when I tried it.
With the objects defined, I called Graphics3D to draw them:
Graphics3D[{Gray, phone, Black, Thick, xaxis, yaxis, zaxis},
Boxed -> False, ImageSize -> Large]
(I’ve split the command into two lines here to make it easier to read, and I’ll do the same from now on.)
As you can see, the list of objects that makes up the first argument is interspersed with directives on how those objects are to be drawn. The first directive, Gray, applies that color to phone. Then Black overrides Gray and is applied to the three axes that follow. I added the Thick directive before the axes when I saw that they looked too spindly by default.
The Boxed->False option stops Mathematica from its default of including a wireframe bounding box in the image. ImageSize->Large does what you think—it makes the image bigger than it otherwise would be.
Here’s what Mathematica displays:
Mathematica obviously thinks the z direction should be pointing up. This makes sense, but it isn’t what I wanted. The notebook interface allows you to “grab” the image and rotate it into any orientation, so that’s what I did, putting it into the position you see at the top of the post. Then I right-clicked on the image and selected from the contextual menu. I opened the resulting image file in Acorn, added the axis labels, and uploaded the result to my web server.

After publishing the post, I returned to Mathematica to see if I could get it to clean a few things up. First, I wasn’t happy with the brownish color that appeared on certain edges, depending on the orientation. That was cleared up with the Lighting->Neutral option. Then I wanted programmatic control over the orientation, which I got via ViewPoint->{-50, 30, 75}, which sets the location of the virtual camera, and ViewVertical->{.1, 1, 0}, which rotates the camera about the axis of its lens until the given vector is pointing up in the image.
Finally, I wanted to add the axis labels in Mathematica instead of relying on another program. This meant adding Text objects to the argument list, one for each axis. The final call to Graphics3D looked like this:
Graphics3D[{GrayLevel[.5], phone,
Black, Thick, xaxis, yaxis, zaxis,
FontSize -> 16,
Text["x", {b/2 + 25, -7, 0}],
Text["y", {-7, h/2 + 25, 0}],
Text["z", {-5, -5, t/2 + 25}]},
Boxed -> False, ImageSize -> Large,
ViewPoint -> {-50, 30, 75}, ViewVertical -> {.1, 1, 0},
Lighting -> "Neutral"]
Each Text object includes both the text and the point at which it is to be displayed. The Text items are preceded by a FontSize directive to make them big enough to see clearly. The Black directive earlier in the list was still in effect, so the text color was black.
Here’s the result:
As you can see, I’ve made the image more upright, and the neutral lighting has gotten rid of the weird brownish and bluish casts of the original. You may also note that I changed the original Gray directive to GrayLevel[.5]. This made no difference in the final output, but the GrayLevel argument did let me play around with different shades of gray before deciding that the 50% provided by Gray was just fine.
I still have a long way to go with Mathematica, but I’m making progress.
I’ve been using MathJax (and its predecessor, jsMath) for many years, and it works quite well here on the blog itself, but because it formats the equations via JavaScript, the equations aren’t formatted in the RSS feed. The RSS feed just shows the LaTeX code for each equation—not bad for short equations, but increasingly hard to read as the equations get longer. If you’re an RSS subscriber, you’ve noticed that the following disclaimer appears at the bottom of each article in the feed:
If there are equations in this post, you will see them rendered properly in the original article.
where “the original article” is a link to the blog, where MathJax can do its magic.
So I’m thinking about ways to get the equations to look right in RSS readers. One obvious way is to render them as images, upload them, and insert <img> tags at the appropriate spots,^{1} but this seems crude and very Web 1.0. Although I suppose I could render the equations as SVGs, which would allow users to zoom in without seeing jaggies.
MathML is the “right” way to do equations and is supported by all the browsers I can think of, so the math should look right for everyone who visits the blog directly.^{2} The question is whether it’ll be rendered properly in RSS readers. My guess is that it will be, since I believe that RSS readers use the same rendering engines used by browsers. But the only way to know for sure is to write a post with MathML and see how it looks. So here goes:
The general formula for the mass moment of inertia about the x-axis, \(I_{xx}\), is

\[I_{xx} = \int_V \rho\, (y^2 + z^2)\, dV\]

This can be specialized for certain geometries. For example, the moment of inertia of a thin rod about an axis through the rod’s center and perpendicular to it is

\[I_{xx} = \frac{1}{12} m L^2\]

Finally, for Dan Moren, the parallel axis theorem is

\[I_{xx}^P = I_{xx}^C + m d^2\]

where \(I_{xx}^C\) is the moment of inertia about an axis through the centroid of the body and \(I_{xx}^P\) is the moment of inertia about a parallel axis a distance \(d\) from the centroid.
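Just to check that the thin-rod specialization is consistent with the general integral, here’s a quick midpoint-rule integration in Python (my addition; the mass and length values are arbitrary):

```python
# Integrate rho * y^2 along a thin rod of mass m and length L centered at the
# origin; this should reproduce I_xx = (1/12) m L^2
m, L = 0.187, 147.5          # arbitrary mass and length
n = 100_000                  # number of integration slices
rho = m / L                  # mass per unit length
dy = L / n
I_xx = sum(rho * (-L/2 + (i + 0.5) * dy)**2 * dy for i in range(n))
print(I_xx, m * L**2 / 12)   # the two values agree to many digits
```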
After I publish this post, I’ll check my RSS feed in NetNewsWire and update the post with a note on how the equations looked.
Update 14 Sep 2023 10:54 AM
As I hoped, NetNewsWire shows the equations rendered properly (apart from some baseline misalignment for the inline math) in my RSS feed. I’m interested in hearing how other feedreaders perform.
The titanium alloy used in the phone was revealed by Isabel Yang about 57 minutes into the presentation. She called it Grade 5 titanium, which is an ASTM designation. It’s also known as Ti-6Al-4V, because its major alloying elements are aluminum at 6% and vanadium at 4%. Allison Sheridan talked about its properties earlier this month, and I’ve been assuming that it would be the alloy Apple would choose ever since I heard they were switching to titanium for the band.
I guessed it would be Ti-6Al-4V because it’s the garden-variety alloy for titanium. A great material, but not exotic in any way. Apart from many aerospace applications, it’s also used in medical implants, so you know that skin contact won’t be a problem.
Shortly after the introduction of the alloy, Yang talked about how the titanium band is attached to the rest of the phone’s structure, which is aluminum. According to Apple’s newsroom:
Using an industry-first thermo-mechanical process, the titanium bands encase a new substructure made from 100 percent recycled aluminum, bonding these two metals with incredible strength through solid-state diffusion.
In other words, the titanium and aluminum are welded together. Not the kind of welding you’re used to, to be sure, but still welding—solid-state welding with no melting of either material. The thermo part of the “thermo-mechanical process” is heating up the materials, and the mechanical part is smushing them together. In essence, this is the oldest form of welding, the kind the village smithy did under the spreading chestnut tree with a forge and a hammer.
I’m sure the process control needed to do solid-state welding with such thin parts is well beyond what other companies can achieve, and I can understand why Apple didn’t want to describe it using a term that conjures up images of sweaty guys in tilt-down helmets making sparks in a dusty manufacturing plant. But it’s still welding.
Finally, we come to Jason Snell’s surprise at how light the 15 Pro seemed when he played with it in the hands-on area. He mentioned this not only in his Macworld article, but also in the post-keynote episode of Upgrade. You wouldn’t expect a change from 206 g for the 14 Pro to 187 g for the 15 Pro would be that noticeable, but Greg Joswiak mentioned it in the keynote and Jason confirmed it. How can that be?
One answer is that people are just more sensitive than we give them credit for being. A 9–10% drop in weight may seem like a small amount to our brains but a large amount to our hands. But because it allowed me to do some simple calculations, I decided to look into another possibility.
Your ability to manipulate a phone is based primarily on its mass, but also on its moment of inertia. And since the reduction in mass when switching from stainless steel to titanium is occurring almost entirely at the perimeter of the phone, the moment of inertia should be reduced more than if the mass were reduced uniformly.
Let’s assume the two phones are the same size^{1}, 147.5 mm high by 71.5 mm wide (the 7.85 mm thickness can be ignored). We’ll set the origin at the geometric center of the phone and the x, y, and z axes will be associated with what would normally be called pitch, roll, and yaw. We’ll be doing enough approximating that there’s no point in trying to account for the phone’s rounded corners.
If the 187 g mass of the 15 Pro were distributed uniformly, its moment of inertia about the x-axis would be
\[I_{xx}^{(15)} = \frac{1}{12}(187 \;\mathrm{g})(147.5 \;\mathrm{mm})^2 = 339,035\; \mathrm{g \cdot mm^2}\]

If we assume the 14 Pro’s additional 19 g of mass is distributed uniformly around the perimeter, we can say that the long sides have
\[\frac{147.5 \;\mathrm{mm}}{2(147.5 \;\mathrm{mm} + 71.5 \;\mathrm{mm})} (19 \;\mathrm{g}) = 6.4 \;\mathrm{g}\]

of extra mass and the short sides have
\[\frac{71.5 \;\mathrm{mm}}{2(147.5 \;\mathrm{mm} + 71.5 \;\mathrm{mm})} (19 \;\mathrm{g}) = 3.1 \;\mathrm{g}\]

of extra mass. The moment of inertia of these four lines of additional mass about the x-axis is

\[I_{xx}^{(lines)} = 2 \left[ \frac{1}{12}(6.4 \;\mathrm{g})(147.5 \;\mathrm{mm})^2 + (3.1 \;\mathrm{g})\left(\frac{147.5 \;\mathrm{mm}}{2}\right)^2 \right]\]

\[I_{xx}^{(lines)} = 56,929 \;\mathrm{g \cdot mm^2}\]

You’ll note the use of the parallel axis theorem in the second term inside the brackets. I’m not calculating the moments of inertia of the top and bottom lines about their own axes because that’s too small to worry about.
Therefore, the moment of inertia of the 14 Pro is
\[I_{xx}^{(14)} = I_{xx}^{(15)} + I_{xx}^{(lines)} = 395,964 \;\mathrm{g \cdot mm^2}\]

and the reduction in the moment of inertia about the x-axis is
\[\frac{I_{xx}^{(14)} - I_{xx}^{(15)}}{I_{xx}^{(14)}} = \frac{56,929}{395,964} = 0.144\]

or about 14%. This reduction, which is more than the mass reduction, would make the iPhone 15 Pro easier to turn, and that may add to the impression that it’s significantly lighter than the 14 Pro.
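These numbers are easy to check with a few lines of Python. This redoes the calculation above with the same dimensions and masses; any tiny discrepancies with the figures in the text come from rounding the side masses to 6.4 and 3.1 g there:

```python
h, b = 147.5, 71.5     # phone height and width, mm
m15, extra = 187, 19   # 15 Pro mass and the 14 Pro's extra mass, g

I15 = m15 * h**2 / 12                    # uniform plate about the x-axis
long_side = h / (2 * (h + b)) * extra    # extra mass on each long side
short_side = b / (2 * (h + b)) * extra   # extra mass on each short side
I_lines = 2 * (long_side * h**2 / 12 + short_side * (h / 2)**2)
I14 = I15 + I_lines

print(round(I15))               # → 339035
print(round(I_lines / I14, 3))  # → 0.144, the fractional reduction
```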
These calculations were fun, but the initial assumption, that the 15 Pro’s mass is uniformly distributed, is unquestionably wrong. How wrong depends on how non-uniform the mass distribution is, and if I knew that I wouldn’t have had to make the assumption in the first place. My guess is that the assumption is good enough for this kind of back-of-the-envelope calculation.
But even if the numbers are further off than I think, the concept is correct. Reducing the mass at the perimeter, which the change from stainless steel to titanium has done, has definitely reduced the moment of inertia more than a uniform reduction in mass would have. And that will make the 15 Pro easier to manipulate and will contribute—at least somewhat—to the impression of lightness.
You can, of course, do the same sort of calculation for the moments of inertia about the roll and yaw axes. This is left as an exercise for the reader.
Yes, I know the 15 Pro is slightly smaller, but I want to follow out the consequences of changing only the mass out at the perimeter. ↩
Out of the M² pairs of two numbers coming from a set [of] M numbers, M of these pairs are tied, and in half of the rest the first number is higher than the second. So the number of possible scores, with each score bounded by M, is
M + (M² − M)/2 = M(M + 1)/2.
If M = 73 [the most points scored by a team in NFL history], there are 2,701 possible scores.
[Note that there are M possible scores even though 1 is impossible because 0 is possible.]
This is sound logic, but it isn’t how I would solve the problem. My first thought was to arrange the scores in an M×M matrix, with the columns representing the score of the winning (or tying) team and the rows representing the losing (or tying) team. Putting a checkmark at every possible score position and leaving the other positions blank (because the L team can’t score more than the W team), we get an upper triangular matrix:
This visual approach came to me because I’ve spent a lot of time dealing with upper (and lower) triangular matrices, and I don’t have to think much to come up with the formula for the number of nonzero terms:
\[\frac{M (M + 1)}{2}\]

You may recognize this as Gauss’s smartass formula for summing the first M natural numbers.
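A brute-force count of the checkmarks in that upper triangular matrix confirms the formula. Here I label the M = 73 achievable score values 0 through M − 1, since only their count matters:

```python
M = 73  # number of achievable single-team score values

# Count every (winner, loser) position with loser <= winner:
# the upper triangle of the M-by-M matrix, including the diagonal
pairs = sum(1 for w in range(M) for l in range(M) if l <= w)

print(pairs)                       # → 2701
print(pairs == M * (M + 1) // 2)   # → True
```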
By the way, Cook was inspired to look into this problem by his Texans losing to the Ravens 25–9, a score that, improbably enough, had never happened before.
OK, there is a way for a team to score one point under the one-point safety rule, but we’re going to follow Cook’s argument and ignore that rule. ↩
[a] good-faith estimate of the time reasonably anticipated to present the State’s case during a joint trial of all 19 co-defendants, and alternatively any divisions thereof, including the number of witnesses likely to be called and the number and size of exhibits likely to be introduced.
Emphasis added because that’s the point of this post.
Taken at face value, McAfee is asking Willis to make these estimates for a single trial, 19 separate trials, and every possibility in between. Since this is an impossible task because of the monstrous number of trial combinations, we don’t take him at face value. But what if we did? How many different ways could this case be split into separate trials?
Obviously, there’s just one way to have a single trial of all the defendants and just one way to have 19 trials, each with an individual defendant. Let’s consider the next arrangement on the complication scale: 18 trials. This would mean one trial with 2 defendants and 17 trials with individual defendants. The key to working out this figure is determine the number of ways we can pair 2 defendants from the 19. For that we need the binomial coefficient:
\[\binom{19}{2} = \frac{19!}{2! \, 17!} = 171\]

The next most complicated arrangement is two trials. For this, we need to consider the nine ways to split up the defendants and the number of combinations associated with each of those splits.
| Split of defendants | Formula | Count |
|---|---|---|
| 1 and 18 | \(\dbinom{19}{1}\) | 19 |
| 2 and 17 | \(\dbinom{19}{2}\) | 171 |
| 3 and 16 | \(\dbinom{19}{3}\) | 969 |
| 4 and 15 | \(\dbinom{19}{4}\) | 3,876 |
| 5 and 14 | \(\dbinom{19}{5}\) | 11,628 |
| 6 and 13 | \(\dbinom{19}{6}\) | 27,132 |
| 7 and 12 | \(\dbinom{19}{7}\) | 50,388 |
| 8 and 11 | \(\dbinom{19}{8}\) | 75,582 |
| 9 and 10 | \(\dbinom{19}{9}\) | 92,378 |
| Total | | 262,143 |
Two of those numbers should be familiar.
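The table entries—and the fact that the total is one less than \(2^{18}\)—are easy to verify with Python’s math.comb:

```python
from math import comb

# Ways to pick the smaller trial group for each split of 19 defendants
counts = [comb(19, k) for k in range(1, 10)]
print(counts)       # → [19, 171, 969, 3876, 11628, 27132, 50388, 75582, 92378]
print(sum(counts))  # → 262143, which is 2**18 - 1
```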
At this point, I think it’s time to give up on the binomial coefficient. There may be a way to use it to work out the number of ways to have three trials, four trials, and so on up to 17 trials, but I don’t want try it. More powerful tools are available, and we should take advantage of them.
The Stirling numbers of the second kind are what we need. As the MathWorld article says, they are
[t]he number of ways of partitioning a set of n elements into m nonempty sets…
The key words here are partitioning and nonempty. When we partition a set into subsets, the subsets do not intersect with each other and their union is the original set. Translated to our problem, that means each defendant is in one and only one trial. And the subsets are nonempty because we can’t have a trial with no defendant.
The Stirling numbers of the second kind are in the OEIS, but the list on that page doesn’t go up high enough. The tables in Abramowitz & Stegun do, but there’s no way I can enter the numbers for \(n = 19\) without making several typos. So let’s fire up Mathematica and use its StirlingS2 function. Entering
Table[{n, StirlingS2[19, n]}, {n, 1, 19}]
yields
{{1, 1},
{2, 262143},
{3, 193448101},
{4, 11259666950},
{5, 147589284710},
{6, 693081601779},
{7, 1492924634839},
{8, 1709751003480},
{9, 1144614626805},
{10, 477297033785},
{11, 129413217791},
{12, 23466951300},
{13, 2892439160},
{14, 243577530},
{15, 13916778},
{16, 527136},
{17, 12597},
{18, 171},
{19, 1}}
where the first number in each line is the number of trials and the second is the number of ways to arrange the defendants in that many trials. We see that the values for 1, 2, 18, and 19 trials match what we came up with earlier, and now we have all the others, too. If your eyes are good, you can compare the numbers in the middle to the A&S table.
To get the total, we run
Total[Table[StirlingS2[19, n], {n, 1, 19}]]
to get 5,832,742,205,057, or over 5.8 trillion possibilities. I suggest we call this the McAfee number.
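If you don’t have Mathematica, the same numbers fall out of the standard recurrence \(S(n, k) = k\,S(n-1, k) + S(n-1, k-1)\). A short Python sketch:

```python
def stirling2_row(n):
    """Return [S(n,0), S(n,1), ..., S(n,n)], the Stirling numbers of the
    second kind, built row by row from S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    row = [1]                                # row 0: S(0,0) = 1
    for m in range(1, n + 1):
        new = [0] * (m + 1)
        for k in range(1, m + 1):
            above = row[k] if k < len(row) else 0
            new[k] = k * above + row[k - 1]
        row = new
    return row

row19 = stirling2_row(19)
print(row19[2], row19[18])   # → 262143 171, matching the earlier counts
print(sum(row19))            # → 5832742205057
```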
Update 7 Sep 2023 1:53 PM
Reader Rick Kaye, who has probably forgotten more combinatorics than I’ll ever know, emailed me to point out that the number of ways to partition a set into nonempty subsets is the Bell Number. In Mathematica, it’s calculated through the BellB function, so
BellB[19]
returns 5,832,742,205,057, the same value I got by summing the Stirling numbers of the second kind. You can check this via
BellB[19] == Total[Table[StirlingS2[19, n], {n, 1, 19}]]
which returns True. Also, the Bell Numbers are sequence A000110 in the OEIS, where you can look up the value directly. Thanks, Rick!
Cook says “using a simple language can teach you that you don’t need features you thought you needed,” and he uses awk as the paradigm of this principle. He uses awk in a limited way to match the limits of the language:
It has been years since I’ve written an awk program that is more than one line. If something would require more than one line of awk, I probably wouldn’t use awk. I’m not morally opposed to writing longer awk programs, but awk’s sweet spot is very short programs typed at the command line.
The only part of this that doesn’t apply to me is that I don’t think I’ve ever written an awk program longer than a single line. I try to use awk when its superpower—the automatic splitting of lines into fields—fits what I need to do.
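That field-splitting superpower—awk’s automatic $1, $2, and so on—is roughly what str.split() gives you in Python. For example, the equivalent of awk '{print $2}' over a few lines (the sample input is mine):

```python
# awk splits each input line on runs of whitespace; str.split() does the same
lines = ["12 alpha  x", "34 beta   y", "56 gamma  z"]

# line.split()[1] corresponds to awk's $2
second = [line.split()[1] for line in lines]
print(second)  # → ['alpha', 'beta', 'gamma']
```

Of course, the awk one-liner is shorter, which is the whole point of reaching for awk.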
But it’s in the next section of Cook’s post that we part ways. He argues that awk’s limited regular expression support^{1} is an advantage:
At first I wished awk were more expressive in its regular expression implementation. But awk’s minimal regex syntax is consistent with the aesthetic of the rest of the language. Awk has managed to maintain its elegant simplicity by resisting calls to add minor conveniences that would complicate the language. The maintainers are right not to add the regex features I miss.
This is a reasonable argument for people who’ve never used regexes with a larger syntax, but I don’t know anyone who fits that description. Certainly not Cook and certainly not me. When Perl became the language of the web in the 90s, it put its regex flavor in front of the world, and the world responded by adopting it wherever it could. Pretty much the only programming tools that didn’t were those that existed before Perl: most prominently grep, sed, and awk. So if you want to use regular expressions with any of these tools, you have to ask yourself whether the simplicity of the language is worth accepting the straightjacket of a limited regex syntax.
As much as I like awk, whenever I see my problem needing more than the most elementary of regular expressions, I abandon it for Perl and I don’t look back. Perl-compatible (or very nearly Perl-compatible) regular expressions are in all the other tools I use frequently—trying to remember the awk differences adds complexity to my use of it.
After reading Cook’s post, I thought Wait a minute. Isn’t this the guy who recommended using tcgrep so you could stick with Perl regex syntax? Yes, it is. I think his argument in that earlier post applies just as well to awk as it does to grep.
The word slug was apparently taken from the newspaper business and is defined this way:
A slug is a few words that describe a post or a page. Slugs are usually a URL friendly version of the post title.
The URLs to individual posts here look like this:
https://leancrew.com/all-this/2023/08/slugify-slight-return/
which is the domain, a subdirectory, the year and month, and then the slug, which is based on the title. It’s supposed to be lower case, with all the punctuation stripped and all word separators turned into hyphens. Some people prefer underscores, but I like dashes.
I’ve had a slugify function in my blog publishing system for ages. In a long-ago post, I wrote about this early version of it:
python:
 1:  def slugify(u):
 2:      "Convert Unicode string into blog slug."
 3:      u = re.sub(u'[–—/:;,.]', '-', u)    # replace separating punctuation
 4:      a = unidecode(u).lower()            # best ASCII substitutions, lowercased
 5:      a = re.sub(r'[^a-z0-9 -]', '', a)   # delete any other characters
 6:      a = a.replace(' ', '-')             # spaces to hyphens
 7:      a = re.sub(r'-+', '-', a)           # condense repeated hyphens
 8:      return a
This was written in Python 2. It had been updated to Python 3 and improved in the intervening years, but it was obviously still not bulletproof. Here’s the version I came up with this morning, including the necessary imports:
python:
 1:  import re
 2:  from unicodedata import normalize
 3:  
 4:  def slugify(text):
 5:      '''Make an ASCII slug of text'''
 6:  
 7:      # Make lower case and delete apostrophes from contractions
 8:      slug = re.sub(r"(\w)['’](\w)", r"\1\2", text.lower())
 9:  
10:      # Convert runs of non-characters to single hyphens, stripping from ends
11:      slug = re.sub(r'[\W_]+', '-', slug).strip('-')
12:  
13:      # Replace a few special characters that normalize doesn't handle
14:      specials = {'æ':'ae', 'ß':'ss', 'ø':'o'}
15:      for s, r in specials.items():
16:          slug = slug.replace(s, r)
17:  
18:      # Normalize the non-ASCII text
19:      slug = normalize('NFKD', slug).encode('ascii', 'ignore').decode()
20:  
21:      # Return the transformed string
22:      return slug
This will turn
Parabolic mirrors made simple(r)
into
parabolic-mirrors-made-simple-r
which is what I want. A more complicated string, including non-ASCII characters,
Hél_lo—yøü don’t wånt “25–30%,” do you?
will be converted to
hel-lo-you-dont-want-25-30-do-you
which would also work well as a slug.
Line 19, which uses the normalize function from the unicodedata module followed by encode('ascii', 'ignore'), is far from perfect or complete, but it converts most accented letters into reasonable ASCII. Line 19 ends with decode to turn what would otherwise be a bytes object into a string.
You’ll note that Lines 14–16 handle the conversion of a few special characters: æ, ß, and ø. I learned by running tests that those are some of the letters the normalize/decode system doesn’t convert to reasonable ASCII. Even though I couldn’t imagine myself using any of these letters—or any of the myriad of other letters that don’t get converted by normalize/decode—it bothered me that I was rewriting slugify yet again and still didn’t have a way of handling lots of non-ASCII characters.
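You can see the limitation directly in the interpreter. Accented letters with an NFKD decomposition survive the round trip as their base letters, but letters like æ, ß, and ø, which have no decomposition, are silently dropped:

```python
from unicodedata import normalize

# NFKD splits decomposable letters into a base letter plus combining marks,
# which encode('ascii', 'ignore') then strips away
print(normalize('NFKD', 'é').encode('ascii', 'ignore').decode())    # e
print(normalize('NFKD', 'ü').encode('ascii', 'ignore').decode())    # u

# But æ, ß, and ø have no decomposition, so they vanish entirely
print(normalize('NFKD', 'æßø').encode('ascii', 'ignore').decode())  # empty string
```

That vanishing act is why the specials dictionary on Line 14 is needed at all.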
I decided it was time to swallow my pride and look for a slugifying function written by someone who was willing to put in the time to do a complete job.
The answer was the aptly named python-slugify module by AvidCoderr, which has its own slugify function with many optional parameters. I learned that the defaults work for me. This code
python:
 1:  from slugify import slugify
 2:  
 3:  print(slugify("Hél_lo—yøü don’t wånt “25–30%,” do you, Mr. Encyclopædia?"))
returns
hel-lo-you-dont-want-25-30-do-you-mr-encyclopaedia
which is just what I want.
A lot of this slugify’s power comes from its use of Tomaž Šolc’s unidecode module, which does the conversion to ASCII in a way that’s much more complete than the normalize/decode method.
So now my publishing code doesn’t have its own slugify function; it just imports AvidCoderr’s and calls it. Kind of anticlimactic, but it works better.
One more nice thing about the slugify module: when you install it—which I did via conda install python-slugify because I use Anaconda to manage Python and its libraries—it comes with a command-line program also called slugify, which lets you test things out in the Terminal. You don’t even have to wrap the string you want to slugify in quotes:
slugify Hél_lo—yøü don’t wånt “25–30%,” do you, Mr. Encyclopædia?
returns
hel-lo-you-dont-want-25-30-do-you-mr-encyclopaedia
Note that if the string you’re converting includes characters that are special to the shell, you will have to wrap it in single quotes.
slugify '$PATH'
returns
path
but
slugify $PATH
returns a very long string that you probably don’t want in your URL.
This is relatively quick, but I do have to make sure I hit ImageOptim in the long menu of apps—easy to do when I’m sitting up at a desk but less so when I’m lying on a bed or a couch. I decided to turn the operation into a Keyboard Maestro macro. I still have to start by selecting the file(s) I want to optimize, but I no longer have to aim at a menu item.
The macro is called Optimize PNG, and here’s a screenshot of it:
If you download it and import it into Keyboard Maestro as is, it will appear in the Finder group and will be active. You can run it when you have one or more PNG files selected in the Finder.
The macro has one step, which is this AppleScript:
applescript:
 1:  -- Set text item delimiters for extracting the extension
 2:  set text item delimiters to "."
 3:  
 4:  -- Set the path to the ImageOptim command line executable
 5:  set io to "/Applications/ImageOptim.app/Contents/MacOS/ImageOptim"
 6:  
 7:  -- Run ImageOptim on each selected file whose extension is png or PNG
 8:  tell application "Finder"
 9:      set imageFiles to selection
10:      repeat with imageFile in imageFiles
11:          set filePath to POSIX path of (imageFile as alias)
12:          set fileExtension to last text item of filePath
13:          if fileExtension is "png" or fileExtension is "PNG" then
14:              do shell script (io & " " & quoted form of filePath)
15:          end if
16:      end repeat
17:  end tell
18:  
19:  do shell script "afplay /System/Library/Sounds/Glass.aiff"
Basically, the script loops through all the selected files and runs ImageOptim on them. There’s some logic in there that makes sure^{1} that ImageOptim is run only on PNG files, and a conversion from an AppleScript file description to a Unix-style file path. The command that gets run on every PNG file is
/Applications/ImageOptim.app/Contents/MacOS/ImageOptim '/path/to/image file.png'
The file path is quoted (Line 14) to ensure that spaces are handled correctly.
When the optimizing is done, the Glass sound plays (Line 19) to let me know the files are ready.
The name of the macro is “Optimize PNG,” but I use ⌃⌥⌘I as the trigger because I think of it as opening ImageOptim, even though ImageOptim never shows itself except briefly in the Dock.
It’s certainly not a foolproof way of “making sure.” All it does is get the file extension (Lines 2 and 12) and check to see if it’s “png” or “PNG” (Line 13). That’s good enough for me, but if you’re the kind of person who saves files with misleading extensions (or no extension at all) you’ll have to come up with a better way of distinguishing PNG files. Also, you should rethink your life choices. ↩
Here’s the problem as posed in the Wikipedia article:
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
- If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
- If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your degree of belief^{1} now for the proposition that the coin landed heads?”
I don’t understand why the problem is typically described with Sleeping Beauty being given a drug to put her to sleep. Surely it would be more appropriate for it to be a magic spell.
The first thing I don’t like about Tom’s presentation is how he poses the question asked of Sleeping Beauty: What is the probability that the coin was a head?
Asking about the probability instead of the degree of belief suggests an objectivity that shouldn’t be there. What is the probability connotes a sort of omniscience that doesn’t belong in the question. That’s certainly one of the reasons Brady thinks at one point that the answer should be ½—a fair coin was flipped, and its probability of landing heads isn’t affected by any of the other bits of the story.
But when the question is posed in terms of degree of belief, and we remember that it’s Sleeping Beauty’s degree of belief each time she is awakened, we start thinking about the problem differently. This is what leads to the longish section in the middle of the video in which Tom goes through various assumptions and conditional probabilities to get to the “thirder” answer. And this is the part that I think can be made shorter and clearer.
First, let’s think about what degree of belief is. It is an expression of the odds that would be given in a fair wager. In this case, we recast the problem as Sleeping Beauty being offered a bet—heads or tails—by the experimenter each time she’s awakened. We can start by considering which way she should bet if she’s offered 1:1 odds and then move on to determining what odds would be fair to both her and the experimenter.
Because it’s a fair coin, half the time it will land on heads and there will be one wager. The other half of the time it will land on tails and there will be two wagers. If Sleeping Beauty bets on tails, she will, on average, lose one bet half the time and win two bets half the time. If we say the bet is $10, her expected return from betting on tails is
\[\frac{1}{2} (-\$10) + \frac{1}{2} (2 \times \$10) = \$5\]

The experimenter would have to be an idiot to make this bet with even odds. The fair way is for the person who bets on tails to put up $20 and the person who bets on heads to put up $10. That way the expected return for the tails-bettor is

\[\frac{1}{2} (-\$20) + \frac{1}{2} (2 \times \$10) = \$0\]

and the expected return for the heads-bettor is the same:

\[\frac{1}{2} (\$20) + \frac{1}{2} (2 \times -\$10) = \$0\]

The 2:1 odds make the bet fair.
Because 2:1 odds is the same as “two out of three,” Sleeping Beauty’s degree of belief in tails is ⅔. Conversely, her degree of belief in heads is ⅓.
Note that it’s the disparity in the number of wagers (or questions, if we go back to the original problem statement) that makes the degrees of belief differ from ½. If we change the problem slightly and say that there will be one question, regardless of the outcome of the coin toss (if it’s tails we could do another coin toss to decide whether the question is asked on Monday or Tuesday), then there will be no disparity in wagers and even odds would be fair. It’s possible that this misinterpretation of the problem—that the question is asked once per experiment rather than once per awakening—is what leads some people to think that Sleeping Beauty’s degree of belief should be ½.
Another way for the degree of belief to be ½ would be if the wager is made not in the middle of the experiment, but either before it on Sunday or after it on Wednesday. In both of these cases, 1:1 odds would be fair.
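As a quick sanity check on the 2:1 odds derived above, here’s a small simulation (my own sketch, not from the original post) of Sleeping Beauty staking $20 on tails against the experimenter’s $10 on heads at every awakening:

```python
from random import choice

# Simulate many experiments. Beauty bets tails at 2:1 odds every awakening:
#   Heads: one awakening, she loses her $20 stake.
#   Tails: two awakenings, and she wins the $10 heads stake each time.
n = 100_000
total = 0
for _ in range(n):
    if choice(['Heads', 'Tails']) == 'Heads':
        total -= 20
    else:
        total += 2 * 10

# Average return per experiment hovers near $0, confirming the odds are fair
print(total / n)
```

Changing her stake to $10 (even odds) makes the average return drift up toward $5 per experiment, matching the expected-value calculation above.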
We can also run simulations of the problem to give us insight into the answer. Here’s a short Python program that simulates both the one-question-per-awakening problem and the one-question-per-experiment problem:
python:
 1:  #!/usr/bin/env python3
 2:  
 3:  from collections import defaultdict
 4:  from random import choice
 5:  
 6:  # Set up the problem
 7:  sides = 'Heads Tails'.split()
 8:  days = 'Monday Tuesday'.split()
 9:  qdays = {'Heads': ['Monday'], 'Tails': days}
10:  
11:  # Initialize the question matrix
12:  q = defaultdict(int)
13:  
14:  # Run 10,000 experiments assuming the question is asked every day
15:  for f in range(10000):
16:      flip = choice(sides)
17:      for day in qdays[flip]:
18:          q[(flip, day)] += 1
19:  
20:  # Show the results
21:  print('Question asked every awakening')
22:  for s in sides:
23:      for d in days:
24:          print(f'{s} and {d}: {q[(s, d)]}')
25:  
26:  print()
27:  
28:  # Reinitialize the question matrix
29:  q = defaultdict(int)
30:  
31:  # Run 10,000 experiments assuming the question is asked once per experiment
32:  for f in range(10000):
33:      flip = choice(sides)
34:      day = choice(qdays[flip])
35:      q[(flip, day)] += 1
36:  
37:  # Show the results
38:  print('Question asked once per experiment')
39:  for s in sides:
40:      for d in days:
41:          print(f'{s} and {d}: {q[(s, d)]}')
In both cases, the q dictionary is being used to keep track of questions. The keys of q are tuples of the (initial) coin toss and the day, e.g., ('Tails', 'Monday'), and the values of q are the number of questions asked for each of those condition pairs. I’m using a defaultdict for q to avoid having to initialize it, and the choice function from the random module to simulate the coin flips.
Because the program uses random numbers and doesn’t specify a seed, it will give slightly different answers every time it’s run. Here’s the answer from one run,
Question asked every awakening
Heads and Monday: 4969
Heads and Tuesday: 0
Tails and Monday: 5031
Tails and Tuesday: 5031
Question asked once per experiment
Heads and Monday: 4905
Heads and Tuesday: 0
Tails and Monday: 2572
Tails and Tuesday: 2523
which fits well with our previous answers.
Simulations like this can give you confidence in the solutions you’ve come up with by other means. If you haven’t come up with a solution by other means, a simulation can lead you to the correct line of reasoning. Of course, your simulation code has to match the setup of the problem, which is often the tricky bit.
As I was going through this problem, I couldn’t help but think about the Sleeping Beauty episode of Fractured Fairy Tales.
The depiction of Walt Disney as a con man is probably not as wildly obvious now as it was in the early 60s, but even if you don’t know that Daws Butler is recycling his Hokey Wolf/Sgt. Bilko voice or that Disneyland used to have lettered tickets for different attractions, you still get the point.
The article actually uses credence instead of degree of belief, but I think the latter is easier to understand, especially for a character from the Middle Ages. ↩
Let’s review some notation and properties. A continued fraction is one in which the denominator contains a fraction, and the denominator of that fraction contains a fraction, and so on.
\[x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \ldots}}}\]

This is considered the standard form for continued fractions, where the numerators are all ones. You can write out a continued fraction with other numbers as the numerators, but it can always be reduced to this form.
If \(x\) is a rational number, then the continued fraction has a finite number of terms and will end with a \(1/a_n\) term. If \(n=4\), for example, the fraction will look like this:
\[x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{a_4}}}}\]

If \(x\) is irrational, the continued fraction has an infinite number of terms, although terms may repeat. Famously, the golden ratio goes on forever and all the terms are one:
\[\phi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ldots}}}\]

A less explicit but far more compact way to display a continued fraction is to just show the \(a\) terms as a bracketed list:
\[x = [a_0; a_1, a_2, a_3, \ldots ]\]

It’s common to use a semicolon to separate the \(a_0\) term from the others. Mathematica doesn’t do that because it’s more convenient to just use a list. As we saw in the last post, the first five terms of the continued fraction for \(\pi\) are
In[1]:= ContinuedFraction[Pi, 5]
Out[1]= {3, 7, 15, 1, 292}
where Mathematica uses braces to surround its lists. We’ll use this same idea in Python, where the lists are bracketed.
A segment of a continued fraction, \(s_k\), is a finite continued fraction consisting of the first \(k+1\) terms of \(x\):
\[s_k = [a_0; a_1, a_2, \ldots, a_k]\]

A remainder, \(r_k\), is all the terms starting with the \(k^{th}\) and continuing on, whether the continued fraction is finite or infinite:
\[r_k = [a_k; a_{k+1}, a_{k+2}, \ldots ]\]

So any continued fraction can be broken into a segment, \(s_{k-1}\), and a remainder, \(r_k\).
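To make the bracketed notation concrete, a finite term list can be folded back into a rational number by working from the innermost term outward. Here’s a small helper (my own sketch, not part of the function developed below):

```python
from fractions import Fraction

def cf_value(terms):
    '''Evaluate a finite continued fraction [a0; a1, ..., an] as a Fraction.'''
    # Start with the innermost term and repeatedly apply a + 1/(rest)
    val = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

print(cf_value([3, 7, 15, 1]))  # 355/113, a famous approximation of pi
```

Evaluating successively longer prefixes of a continued fraction with a function like this is exactly what produces the convergents discussed next.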
A convergent is the rational number corresponding to a segment. Convergents are what we use to get rational approximations of numbers. In the last post, we did this
In[2]:= Convergents[ContinuedFraction[Pi, 5]]
             22  333  355  103993
Out[2]= {3,  --, ---, ---, ------}
             7   106  113  33102
to see why \(22/7\) and \(355/113\) are good rational approximations of \(\pi\).
An interesting property of convergents is that those from even-indexed segments—\(s_0\), \(s_2\), and so on—bound \(x\) from below, and those from odd-indexed segments—\(s_1\), \(s_3\), and so on—bound \(x\) from above. (If \(x\) is rational, then there is a final convergent which is equal to \(x\), regardless of whether it’s even or odd.) The even convergents form an increasing sequence; the odd convergents form a decreasing sequence.
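That alternating pattern is easy to check numerically with the convergents of \(\pi\) shown above:

```python
import math
from fractions import Fraction

# Convergents of pi from the Mathematica output above
convs = [Fraction(3, 1), Fraction(22, 7), Fraction(333, 106),
         Fraction(355, 113), Fraction(103993, 33102)]

# Even-indexed convergents fall below pi, odd-indexed ones above it
for k, c in enumerate(convs):
    side = 'below' if float(c) < math.pi else 'above'
    print(f's_{k} = {c} is {side} pi')
```

The printed sides alternate below/above/below/above/below, with each convergent closer to \(\pi\) than the one before it.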
OK, if you want more you can go to the Wikipedia page or get a copy of Khinchin’s book^{1}. Let’s move on to the code.
The function I wrote, continued, returns a tuple of two lists: the continued fraction terms and the corresponding convergents, given as Fractions. Fractions are a Python type supplied by the fractions library.
continued is the only function in cfractions.py, a file I’ve saved in my site-packages directory. This makes it easy to import when I’m working in Jupyter:
In [1]: import math
In [2]: from cfractions import continued
In [3]: continued(math.pi)
Out[3]:
([3, 7, 15, 1, 292],
[Fraction(3, 1),
Fraction(22, 7),
Fraction(333, 106),
Fraction(355, 113),
Fraction(103993, 33102)])
Here’s the code:
python:
 1:  from fractions import Fraction
 2:  from math import isclose
 3:  
 4:  def continued(x, terms=20, rel_tol=1e-9, abs_tol=0.0):
 5:      'Return the continued fraction and convergents of the argument.'
 6:      # Initialize, using Khinchin's notation
 7:      a = []          # continued fraction terms
 8:      p = [0, 1]      # convergent numerator terms (-2 and -1 indices)
 9:      q = [1, 0]      # convergent denominator terms (-2 and -1 indices)
10:      s = []          # convergent terms
11:      remainder = x
12:  
13:      # Collect the continued fraction and convergent terms
14:      for i in range(terms):
15:          # Compute the next terms
16:          whole, frac = divmod(remainder, 1)
17:          an = int(whole)
18:          pn = an*p[-1] + p[-2]
19:          qn = an*q[-1] + q[-2]
20:          sn = Fraction(pn, qn)
21:  
22:          # Add terms to lists
23:          a.append(an)
24:          p.append(pn)
25:          q.append(qn)
26:          s.append(Fraction(sn))
27:  
28:          # Convergence check
29:          if isclose(x, float(sn), rel_tol=rel_tol, abs_tol=abs_tol):
30:              break
31:  
32:          # Get ready for next iteration
33:          remainder = 1/frac
34:  
35:      # Return the tuple of the continued fraction and the convergents
36:      return (a, s)
The terms of the continued fraction are calculated using a form of Euclid’s algorithm for finding the greatest common divisor (GCD) of two numbers. The numerators and denominators of the convergents are calculated using the recurrence relations

\[p_k = a_k p_{k-1} + p_{k-2}\] \[q_k = a_k q_{k-1} + q_{k-2}\]

You may be wondering why I’m calculating both the continued fraction and the convergents in the same function instead of doing them separately as Mathematica does. Two reasons:
The three optional parameters to the function, terms, rel_tol, and abs_tol, set the convergence criteria. terms is an upper bound on the number of continued fraction terms that will be calculated, no matter what the other tolerance values are. rel_tol and abs_tol are relative and absolute tolerance values that can stop the process before the terms limit is reached. Their names and default values are taken from the isclose function of the math library, which is used on Line 29. For example, we could set an absolute tolerance on our rational estimate of \(\pi\) this way:
In [4]: continued(math.pi, rel_tol=0, abs_tol=1e-12)
Out[4]:
([3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3],
[Fraction(3, 1),
Fraction(22, 7),
Fraction(333, 106),
Fraction(355, 113),
Fraction(103993, 33102),
Fraction(104348, 33215),
Fraction(208341, 66317),
Fraction(312689, 99532),
Fraction(833719, 265381),
Fraction(1146408, 364913),
Fraction(4272943, 1360120)])
We’ve hit our tolerance because
\[\pi - \frac{4272943}{1360120} = 4 \times 10^{-13}\]

I like the idea of having control over the convergence criteria. Mathematica’s second argument to ContinuedFraction gives the equivalent of my terms parameter, but its precision control is, as far as I can tell, entirely internal—there’s no way for the user to set a tolerance.
On the other hand, a disadvantage of my function is that its precision is limited to that of Python floats, whereas Mathematica will give you as much precision as you ask for. I can’t, for example, ask for an abs_tol of \(1 \times 10^{-20}\) and expect to get a correct answer:
In [5]: continued(math.pi, rel_tol=0, abs_tol=1e-20)
Out[5]:
([3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 3],
[Fraction(3, 1),
Fraction(22, 7),
Fraction(333, 106),
Fraction(355, 113),
Fraction(103993, 33102),
Fraction(104348, 33215),
Fraction(208341, 66317),
Fraction(312689, 99532),
Fraction(833719, 265381),
Fraction(1146408, 364913),
Fraction(4272943, 1360120),
Fraction(5419351, 1725033),
Fraction(80143857, 25510582),
Fraction(245850922, 78256779)])
Mathematica will happily use as many digits as needed and do so correctly. It tells us that Python screwed up in the 14th term:
In[3]:= ContinuedFraction[Pi, 15]
Out[3]= {3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1}
I’m not especially concerned about this reduced precision, as I seldom want my continued fractions to go out that far. And when I do, I have Mathematica to fall back on.
Finally, a small advantage of doing continued fractions in Python instead of Mathematica is that Python uses zero-based indexing for lists, which is consistent with the standard notation given above. Mathematica uses one-based indexing, which usually works out nicely when dealing with vectors and matrices but not in this case.
Update 16 Aug 2023 10:53 PM
Shortly after this post was published Thomas J. Fan got in touch with me on Mastodon and told me that the SymPy library has continued fraction functions in the number theory sublibrary. Of course! I felt silly for not looking there.
He also included this code snippet, which I ran in Jupyter:
In [1]: from itertools import islice
In [2]: from sympy.core import pi
In [3]: from sympy.ntheory.continued_fraction import continued_fraction_iterator
In [4]: list(islice(continued_fraction_iterator(pi), 15))
Out[4]: [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1]
So the function is not only prewritten, it’s more accurate than mine. Of course, the final line to get a list of terms is kind of convoluted, but it could easily be wrapped in a more compact function. I may give it a go.
Thanks, Thomas!
Update 18 Aug 2023 9:49 AM
I played around with SymPy’s continued fraction functions yesterday and have decided to stick with my function. As discussed above, the SymPy functions are more accurate than mine, but to get that accuracy you have to be working with the SymPy definitions of numbers like \(\pi\) and functions like sqrt. Since I’m usually working with the math package’s definitions, which are numeric rather than symbolic, I wouldn’t normally get the value out of the SymPy functions. Still, it’s good to know that they’re there.
It’s a thin Dover paperback, so it’s pretty cheap. ↩