A bad response
September 23, 2025 at 9:06 PM by Dr. Drang
This morning, as I was scrolling through Apple News, I came upon this article from the Wall Street Journal about how NFL teams are punting less than they used to. It included this terrible graph:
Who would make their horizontal axis look like that, with the labels jammed together? I was reading the article on my phone, but the image above came from the Mac version of Apple News—the two are the same.
I had a feeling the WSJ wasn’t at fault here and went to look at the same article on its website. Viewing the page on my Mac in a normal-width Safari window, the graph was much better:
There are some nits I’d pick with this, but overall it’s a good graph and shows what the writer wants to get across.
The website graph isn’t a static image; it’s built with JavaScript and has a sort of simple interactivity. If you run your pointer across the graph, a little marker will appear on the line and a popup will show the year and average punts per game for that year. Here’s a screenshot I took while the pointer was aligned with 1985:
I said above that I took the screenshots with my Safari window at what I’d consider a normal width. When I narrowed the window to less than about 1300 pixels, the graph suddenly changed its look and matched what I’d seen in Apple News (it also lost its interactivity). So the ugly graph is due to the WSJ’s responsive design. I switched to my phone to look at the web page there, and sure enough, I saw the ugly version of the graph again.
As you’ve probably guessed, the graph on the web page looks fine on my iPad when Safari takes up the full screen—it’s wide enough to avoid the switch to ugly mode.
So I was wrong. The awful horizontal axis is the WSJ’s fault; it’s just that they’re only torturing people who read the site on their phones. Or through Apple News.
I feel compelled to mention that a later graph in the article looks fine at narrow widths and in Apple News. It’s this graph showing the changing history of play calling on fourth down:
Because it’s looking at only 35 years of data instead of 85, narrowing the chart doesn’t jam the horizontal tick labels together. This should have shown the chartmakers at WSJ how to fix the other graph: fewer tick labels. I suspect that putting labels every ten years instead of every five (and making the unlabeled minor ticks five years apart) would’ve done the trick.
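The fix costs almost nothing in any charting library. Here’s a minimal sketch of that tick scheme in matplotlib, with labeled major ticks every ten years and unlabeled minor ticks every five; the WSJ builds its charts in JavaScript, so this is only an illustration of the idea, not their code:

python:
1:  import matplotlib.pyplot as plt
2:  from matplotlib.ticker import MultipleLocator
3:
4:  fig, ax = plt.subplots()
5:  # ...plot the punts-per-game series against the years here...
6:  ax.xaxis.set_major_locator(MultipleLocator(10))  # labeled ticks every 10 years
7:  ax.xaxis.set_minor_locator(MultipleLocator(5))   # unlabeled minor ticks between
8:  plt.show()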
Framed iPhone screenshots with Python
September 23, 2025 at 1:33 PM by Dr. Drang
After getting my new iPhone 17 Pro last Friday, I decided I should update my system for framing iPhone screenshots. This should have meant just downloading new templates from the Apple Design Resources page and changing a filename in the Retrobatch workflow, but I decided to effectively start from scratch.
This wasn’t just for the fun of doing it. My previous system had two problems:
- It typically would frame just one screenshot when I invoked the Keyboard Maestro macro with several screenshots selected.
- It usually left Retrobatch open even though the last step in the KM macro was to quit Retrobatch.
I suspected both of these problems could be solved—as many Keyboard Maestro problems can—by adding a Pause action here or there. But I always feel kind of dirty doing that; it’s very much a trial-and-error process that leaves me with no useful knowledge I can apply later. The necessary pauses depend on the applications being automated and the speed of the machine the macro is running on. Also, a safe pause (or set of pauses) slows down the macro, and I thought my framing macro was already on the slow side.
My substitute for Retrobatch was Python and the Python Imaging Library (PIL), which is now available through the Pillow project. The script I wrote, called iphone-frame, can be run from the command line on the Mac like this:
iphone-frame IMG_4907.PNG
where IMG_4907.PNG is the name of an iPhone screenshot dragged out of the Photos app into the current working directory. This will turn the raw screenshot on the left into the framed screenshot on the right:
I have the deep blue phone, so that’s the template I use to frame my screenshots. It’s in this download from the Design Resources page.
Because I often don’t need the full resolution in these framed screenshots, iphone-frame has an option to cut the resolution in half:
iphone-frame -h IMG_4907.PNG
Full resolution is 1350×2760 pixels (for portrait), and half resolution is 675×1380.
I don’t do landscape screenshots very often, but iphone-frame can handle them with no change in how it’s called.
iphone-frame IMG_4908.PNG
turns the raw screenshot on the top into the framed screenshot below it.
iphone-frame can be run with more than one argument. If I’ve dragged out several screenshots, I can frame them all by running something like
iphone-frame IMG*
In a call like this, I don’t have to distinguish between the portrait and landscape screenshots—iphone-frame works that out on its own.
One other thing: if iphone-frame is called with no arguments, it reads the file names from standard input, one file per line. This gets used by a Keyboard Maestro macro that calls iphone-frame, which we’ll get to later.
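From the command line, the same behavior can be exercised with a pipe. This hypothetical pipeline (the glob is made up) frames every matching file, just as if the names had been given as arguments:

ls IMG_49*.PNG | iphone-frame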
Here’s the source code:
python:
1: #!/usr/bin/env python3
2:
3: from PIL import Image, ImageDraw
4: import os
5: import sys
6: import getopt
7:
8: # Parse the command line
9: half = False
10: opts, args = getopt.getopt(sys.argv[1:], 'h')
11: for o, v in opts:
12:     if o == '-h':
13:         half = True
14:
15: # If there are no arguments, get the screenshot file paths from stdin
16: if not args:
17:     args = sys.stdin.read().splitlines()
18:
19: # Open the iPhone portrait mode frame
20: pframe = Image.open(f'{os.environ["HOME"]}/Library/Mobile Documents/com~apple~CloudDocs/personal/iphone-overlays/iPhone 17 Pro - Deep Blue - Portrait.png')
21:
22: # Apply the appropriate frame to each screenshot
23: for a in args:
24:     # Open the original screenshot
25:     shot = Image.open(a)
26:
27:     # Rotate the frame if the screenshot was in landscape
28:     if shot.size[0] > shot.size[1]:
29:         frame = pframe.rotate(90, expand=True)
30:     else:
31:         frame = pframe
32:
33:     # Offsets used to center the screenshot within the frame
34:     hoff = (frame.size[0] - shot.size[0])//2
35:     voff = (frame.size[1] - shot.size[1])//2
36:
37:     # Round the screenshot corners so they fit under the frame
38:     # Use a 1-bit rounded corner mask with corner radius of 100
39:     mask = Image.new('1', shot.size, 0)
40:     draw = ImageDraw.Draw(mask)
41:     draw.rounded_rectangle((0, 0, shot.size[0], shot.size[1]), radius=100, fill=1)
42:     shot.putalpha(mask)
43:
44:     # Extend the screenshot to be the size of the frame
45:     # Start with an opaque white image the size of the frame
46:     screen = Image.new('RGBA', frame.size, (255, 255, 255, 255))
47:     # Paste the screenshot into it, centered
48:     screen.paste(shot, (hoff, voff))
49:
50:     # Put the frame and screenshot together
51:     screen.alpha_composite(frame)
52:
53:     # Make it half size if that option was given
54:     if half:
55:         screen = screen.resize((screen.size[0]//2, screen.size[1]//2))
56:
57:     # Save the framed image, overwriting the original file
58:     screen.save(a)
I hope there are enough comments to explain what’s going on, but I do want to mention a few things:
- While the getopt module is said to be “superseded” in the Python docs, it hasn’t been removed and almost certainly won’t be. Because this script has only one option, I thought getopt was the most straightforward way to deal with it.
- Although Apple supplies a landscape template file, I didn’t see any reason to use it. It’s easy enough to just rotate the portrait template. I have my copy of the portrait template stored a few directories down in iCloud Drive, which is why the open command in Line 20 is so long.
- As you can see from the screenshots above, the corners of the raw screenshot have to be rounded off before putting the frame over it. Otherwise the corners would peek out beyond the frame. I use a radius of 100 pixels to do the rounding. This doesn’t try to match the inside radius of the frame; it just keeps the screenshot corners hidden under the frame.
- The PIL modules used in the script are Image and ImageDraw. The corners of the screenshot are rounded by using a mask image and the putalpha function. The mask is a black rectangle with a white rounded rectangle drawn within it using (unsurprisingly) the rounded_rectangle function. When putalpha is called, the parts of the screenshot that correspond to the black parts of the mask are removed, and the parts that correspond to the white parts are kept. The frame is then put over the screenshot using the alpha_composite function.
- There’s really no error handling in this script. I expect to use it with images that I know are screenshots from my iPhone Pro, so I don’t feel the need to, for example, check their sizes before altering them. And if I do happen to screw up, the consequences aren’t dire—the original screenshots are still in the Photos app.
- Because I’m at the mercy of Apple and how it supplies its templates, I can’t say this script is future-proof, but I did my best. No matter the size of future phone screens, I suspect Apple will make templates that are meant to be aligned with the center of the screenshot. The only magic number in the script is the mask radius of 100, and that should continue to work unless Apple makes the iPhone corners either much more rounded or much more squared off.
OK, iphone-frame is easy to use if I already have a Terminal window open and the working directory set to where I’ve dragged the screenshots. This is a pretty common situation for me, but it’s also common that I don’t have a Terminal window open, and that’s when a quick GUI solution is best. I get that through this Keyboard Maestro macro, which you can download.
To use this in the Finder, I select the screenshot images I want to frame and press ⌃⌥⌘F. A window appears, asking if I want the framed screenshots to be original size or halved.
In very short order, the images change from raw screenshots to framed screenshots. They might also rearrange themselves if I have my Finder window sorted by Date Modified.
The only comment I have on the Keyboard Maestro macro is that the %FinderSelections% token returns a list of the full path names to each selected file, one file per line. That means it’s in the right form to send to iphone-frame as standard input. I don’t think I can send a token directly to a shell script, which is why I set the variable AllImageFiles to the contents of %FinderSelections%.
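Given that, the shell script side of the macro can be a one-liner. Here’s a sketch of what the Execute Shell Script action amounts to, assuming Keyboard Maestro’s standard convention of passing variables to scripts as KMVAR_ environment variables and that iphone-frame is on the action’s PATH:

echo "$KMVAR_AllImageFiles" | iphone-frame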
It’s only been a few days, but this new system seems to be working well. You’ll probably be seeing framed screenshots from me in the near future as I start to complain about fit and finish (or lack of same) in iOS 26.
Text fragment linking
September 14, 2025 at 9:42 PM by Dr. Drang
Alex Chan published a post today that struck me immediately as something I should s̸t̸e̸a̸l̸ adapt for my own use. It’s a bookmarklet that creates a URL linking to the selected text within a web page. The selected text is called a text fragment, and a link to it will typically cause your browser to scroll to the text in question and highlight it. Here’s an example on the MDN page about text fragments.
Chan’s bookmarklet uses this JavaScript:
javascript:
1: const selectedText = window.getSelection().toString().trim();
2:
3: if (!selectedText) {
4:     alert("You need to select some text!");
5:     return;
6: }
7:
8: const url = new URL(window.location);
9: url.hash = `:~:text=${encodeURIComponent(selectedText)}`;
10:
11: alert(url.toString());
It gets the selected text in Line 1, combines it with the URL of the page and the necessary directives in Lines 8 and 9, and displays it in an alert window in Line 11. If no text is selected, the bookmarklet puts up an error message via Lines 3–6.
This is great, but using it as-is while writing a post would force me to select the fragment URL in the alert window, copy it, and then switch to BBEdit (where I do all my writing) and call a script named Markdown reference link, which creates a Markdown reference link from the URL on the clipboard.1 I wanted an automation that didn’t require that many steps.

So I made this Keyboard Maestro macro:
As you can see, this macro is meant to be run while BBEdit is the active application and can be called with the ⌃⌥⌘F hotkey. It executes a slightly edited version of Chan’s JavaScript:
javascript:
1: const selectedText = window.getSelection().toString().trim();
2:
3: const url = new URL(window.location);
4: url.hash = `:~:text=${encodeURIComponent(selectedText)}`;
5:
6: return url.toString();
The differences are that there’s no error handling (that’s done elsewhere in the macro) and it puts the text fragment URL into a Keyboard Maestro variable named InstanceFragmentURL instead of displaying an alert.
If InstanceFragmentURL ends with text=, we know that there was no selected text when the macro was invoked. That’s an error, so the Basso sound is played to tell me I made a mistake, and the macro is canceled. Otherwise, the text fragment URL is put on the clipboard and the script is run by choosing it from BBEdit’s script menu.
You should know that this macro, like Chan’s bookmarklet, creates only the simplest kind of text fragment URL, and it links to the first instance of the text on the page. That might not be the instance you want to link to. For example, if I select this instance of the word “bookmarklet” on Chan’s page,
and call the macro, it will make this link, which goes to the word “bookmarklet” in the post’s title.
The MDN page on text fragments explains a set of directives that can be added to the URL to adjust which instance is linked. The prefix- and -suffix directives should be sufficient to uniquely define the fragment. If I need to do so, I’ll add them manually. Like this. I doubt there’s a good way to automate the addition of these directives, so I’m not going to waste my time trying.
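For reference, a URL with those directives looks something like this (a made-up example, not a real link into Chan’s page):

https://example.com/post#:~:text=uses%20this-,bookmarklet,-to%20create

The piece before the first comma (ending in a hyphen) is the required prefix, the middle piece is the text to highlight, and the piece after the last comma (starting with a hyphen) is the required suffix.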
I always know I’m going to learn something when Chan’s blog appears in my RSS feed. Today I was able to use what I learned right away.
Update 15 Sep 2025 2:01 PM
A couple of things I’ve learned since posting:
First, Jeff Johnson asked on Mastodon why I wasn’t using the Copy Link with Highlight item from Safari’s context menu.
The short answer is that I didn’t know it was there. This is what happens when you’ve been using software so long you think you know it all and don’t pay attention to minor updates.
I’m not certain how helpful this will be. Using it will add a step to my fragment-linking workflow, but maybe the extra step is small enough not to worry about. However that falls out, it’s good to know that menu item is there. Chrome has a similar context menu item; Firefox, as far as I can tell, does not.
Also on Mastodon, Juande Santander-Vela linked to an interesting post by Sangye Ince-Johannsen that talks about the value of text fragment linking and ties it to the wider problem of link rot. There’s no question but that fragment links are more likely to go bad than page links—even minor page edits can ruin a text fragment link. Ince-Johannsen’s solution is a little extreme for me, but it’s worth considering if you really need your links to survive. Me, I’ve kind of resigned myself to a certain degree of impermanence to the web. While I don’t like it when links here go bad, ANIAT is just a blog, and there’s a limit to what I’m willing to do to ensure link stability.
1. I thought I’d written a post about Markdown reference link, but apparently not. I guess I’ll have to do that now. ↩
3D Mathematica graphics for the triangle problem
September 13, 2025 at 5:48 PM by Dr. Drang
As I often do, I thought it worthwhile to put up a quick post showing how I made the graphics in my previous post. The images were made in Mathematica, mainly through the Graphics3D command. The exception was the plot of 10,000 random points; that was done through ListPointPlot3D.
Here’s the notebook.
As typically happens, the graphics that you see in the notebook don’t match what was in the post, at least with regard to shading. That’s because the embedded notebook doesn’t evaluate the cells; it shows a kind of pre-evaluation skeleton view of the images that doesn’t account for lighting. It’s basically what I see when I first open my local copy of the notebook. I’ve reported this bug to Wolfram, and they seem to agree that it should be fixed, but my sense is that it’s not a high priority.
After evaluating the notebook, I right-click on the graphics cells to save the images to disk. Then I crop them, if necessary, and save them to the server.