# A big target

While wasting time on YouTube the other night, I came across this Computerphile video (a spinoff of the Numberphile series) on Fitts’s Law and its application to graphical user interfaces. It’s appalling in the number of things it gets wrong, especially with regard to the Mac.

As a warmup, let’s start with the things that are only half wrong.

“You don’t need to make any movement whatsoever. So that is a target that’s really, really easy to get to.”

It is true that contextual menus pop up right where your cursor is, but that doesn’t necessarily mean there’s zero cursor movement involved. The context of a contextual menu—the thing you are interested in operating on with an item from the menu—is the thing under the cursor, e.g., a file icon or a selected text string. While it may be true that the thing of interest is already under the cursor (like selected text right after you’ve made the selection), it often isn’t. And when it isn’t (the usual case when the thing of interest is a file), you have to move the cursor to the target first, and in those common cases the contextual menu is no easier to use than any other operation that requires moving the cursor to a target.

Corners and the X button (5:20)

“If you put a target in the corners of the screen, what you have essentially done there is create a target that is infinitely wide.”

Again, it is absolutely true that the edges of the screen are infinitely wide in a Fitts sense. And the corners are infinitely wide (actually semi-infinite, because they have a definite beginning but no definite end) in two directions, which makes them easier to hit than any of the other edge locations. But the example used for this principle, the X button in Windows, is only in the top right corner of the screen if you’ve expanded the window to full screen. Otherwise, it’s just a normal target.
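For reference, Fitts’s Law in the Shannon formulation commonly used in HCI research predicts the time to acquire a target of width $W$ at distance $D$ as

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where $a$ and $b$ are empirically fitted constants. Because the cursor stops at a screen edge, a target there has an effectively unbounded width along the approach direction, which drives the logarithmic term, and with it the acquisition time, toward its minimum.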

I’ve noticed that less sophisticated Windows users, and users who typically work in just one app at a time, do tend to keep their windows fully expanded. For these users, the X really is infinite in two directions. But even for these users, how valuable is this? Should the easiest action to accomplish in an app be to quit it? Maybe for some apps, but not in general.1

At this point, Dr. Wiseman goes off the rails, saying things about the Mac user interface that are just plain wrong. The errors come on so rapidly and are so intertwined that it’s hard to separate them.

The Mac close box (6:00)

“Then I think Apple brought it [the X button] down and made it into a circle, so they made the target from being infinitely massive to a tiny little circle… which is kind of silly of them.”

Now you see what set me off, don’t you?

• The clear implication is that Apple took Microsoft’s wonderful X button and ruined it by making it smaller. Surely a researcher in Human-Computer Interaction knows that the Mac came before Windows, so why suggest the opposite?
• Maybe she’s not implying Windows came first. Maybe she means Apple ruined the infinitely massive X button Xerox used in the Alto or Star. Nope.

(Image from Prosthetic Knowledge.)

• As anyone familiar with Apple GUI history knows, the current circular close button evolved from its own earlier square close box. It wasn’t shrunken down from a significantly larger interface widget.
• Even if Apple were to expand its close button to fill the corner of the window, it still couldn’t be put in the corner of the screen because the top of a Mac screen is taken up by the menu bar. The menus, therefore, which are accessed repeatedly during normal use of an app, are infinitely large in one dimension. This is not a coincidence. Apple deliberately designed the user interface this way to take advantage of Fitts’s Law for common actions. How do I know this? Because back in the 80s when I started using a Mac, you couldn’t read an article on its user interface that didn’t mention Fitts’s Law and the menu bar. Read anything by Bruce Tognazzini or Jef Raskin.
• Quitting (or closing a window—we’ll get to that in a bit) is a destructive action. Apple didn’t think it was a good idea to make destructive actions the easiest ones. Dr. Wiseman may disagree, but it’s wrong to suggest Apple was being thoughtless or silly.
• The close button on a Mac is not the same as the X button on Windows. While there are some exceptions (single-window apps), the close button typically closes the window without quitting the app.
• The close button on a Mac is, of course, not in the upper right corner of a window; it’s in the upper left. This is more an indication of laziness (or ignorance) on the part of the video editor than a user interface issue, but I couldn’t resist mentioning it.

One of the things Mac users pointed out when Windows came out (apart from all the copying) was that Microsoft’s decision to attach its menus to the app windows meant that its users couldn’t take advantage of Fitts’s Law when accessing menu commands. I’ve often wondered whether the enormous toolbars so common to Windows apps nowadays are an attempt to make up for that.

And Apple does allow its users to take advantage of the infinite Fitts size of corners for quick actions, but they are limited to system-wide actions (since the screen corners aren’t associated with any particular app) that aren’t destructive.

You can fling the pointer into a corner to perform any of these actions—no need to click—but nothing bad happens if the pointer wanders into a corner by accident.

The original Mac/Lisa interface designers spent a lot of time thinking about Fitts’s Law and other interaction matters. That we’re still using most of what they came up with three and a half decades later is strong evidence that they knew what they were doing.2

1. It is, however, definitely important to be able to quit, as newbie vi users can attest (:q!).

2. Overall, I’d say they did a better job than the iPhone/iOS designers, much of whose work is being redone to make the iPhone and iPad ready for their second decades.

# A few weeks with Streaks

Last month, John Voorhees wrote a positive review of a new update to the Streaks habit-tracking app, and I decided to give it a go. In general, I like the app and will continue to use it, but there are some maddening aspects to its design and behavior.

First, the good stuff. The main Streaks view is very simple and easy to parse at a glance.

I have here two habits I’m trying to develop: reading fiction1 and writing more for the blog. Streaks can show two “pages” of habit buttons and each page can hold six habits. Because you don’t have to fill the first page to put habits on the second, you can use the pages to categorize your habits. I’m going to start using the second page to make some corrections to my diet.

My favorite part of Streaks is that it gives you lots of options for defining a habit. They can be duration-based, as my reading and writing habits are; counted, e.g., how many glasses of water I drank today; or negative, e.g., I did not eat chips today. And the timing of habits is realistically flexible. You can make them

• daily;
• every other day (or every x days);
• set for specific days of the week; or
• a given number of days per week or per month, without setting specific days.

I especially love this aspect of Streaks, because rigid scheduling—as you might do on a calendar—can be particularly hard to maintain when family and work obligations intrude. My 30-minute writing habit, for example, is set for four days per week.

I’m not trying to become a professional writer, so there’s no need to establish a daily habit. And travel for work often gets in the way of writing on certain days of the week. But I also don’t want to allow myself to let days and days go by between writing. The x days per week setting is the perfect way to handle this kind of habit.

On the other hand, I find the visual design of some parts of Streaks confusing. When I was first trying it out, I couldn’t figure out how to edit a habit. It seemed clear that I should start by tapping on the gear in the lower left corner of the main screen, but none of the menu items that appeared when I did that gave me the ability to edit a task.

Because my eye was drawn to the menu at the bottom, I didn’t notice that the badges on the icons above had changed to ellipses. It was only after going through every option in the menu at the bottom that I saw the ellipses and realized what they were for.

Navigating through the various screens is also often an exercise in searching for where to tap. Here are the sharing and statistics screens for one of my habits. You dismiss one of them by tapping in the upper left corner and the other by tapping in the lower right corner.

A little more consistency would be nice.

More annoying, though, is Streaks’s seeming inability to communicate with itself across devices and even between functions on a single device. As a result, it often prompts me to get going on one of my habits while I am in the middle of doing it.

For example, my reading habit is set to remind me at 7:15 PM. If I start the Reading timer at 7:10 on my phone or watch and then begin reading on my iPad, I usually get interrupted five minutes later with a Streaks notification on my iPad telling me to start reading. I don’t think it’s fair of Streaks to remind me to do something I’m already doing.

Maybe this problem of inter-device communication comes from some limitation imposed by Apple and isn’t the fault of Streaks. But two nights ago I started the timer for a reading session on my watch shortly before 7:15. A couple of minutes later, the alarm on my watch went off telling me to start reading. This seems more like a problem with Streaks, but I suppose it, too, could be due to an Apple bug.

Overall, I like Streaks and will keep using it. The layout inconsistencies are mostly troublesome when you first start using the app—although they shouldn’t be there, you soon learn to work around them. The spurious notifications, though, are a more serious problem. I hate being nagged to do something I’m already doing.

1. My outside-of-work reading has become way too biased towards news and opinion pieces; this is an attempt at balance.

# Making screenshot frames

With the latest update to Federico Viticci’s screenshot framing shortcut, I needed to change my system for framing and uploading screenshots. Changing the Frame & Upload shortcut itself was trivial—I just needed to change the argument to the Run Shortcut action to the new framing shortcut—but I also needed to change some of Federico’s code to show frames for my devices.

I started with the iOS-only version of Federico’s shortcut. It includes frames for

• iPhone X and X🅂 in portrait and landscape
• iPhone X🅂 Max in portrait and landscape
• iPhones 6, 6🅂, 7, and 8 (and their Plus variants) in portrait and landscape
• 12.9″ iPad Pro in portrait and landscape
• 44 mm Watch Series 4

(I don’t care about the Mac frames. I seldom need full-screen screenshots on the Mac, and I already have a system for taking, framing, and uploading Mac window screenshots. Most important, it’s not as if the Mac is incapable of doing this sort of processing—it would be easy to use the Python Imaging Library or ImageMagick to write a script that frames Mac screenshots. Moving Mac screenshots over to iOS for processing would add an unnecessary step.)

Federico’s shortcut includes frames for devices I don’t have (or don’t intend to use again) and is missing frames for devices I do. So I did some surgery to cut out the parts covering the Max and 6/6🅂/7/8 phones. Then I altered the watch section to handle the 42 mm Series 3 that I own, which involved some image manipulation.

First, I needed an easily editable image of the Series 3 from which to make a frame. I went to the same source Federico did for his images: Apple’s Marketing Resources page for developers. There, in the Product Images section, are Photoshop files for (most of) Apple’s current product line. These files are specifically made so developers can easily composite in screenshots to make images showing what the devices look like when running their apps. I downloaded the files for the watch to my Mac.

In the set of Watch images is a file named

AW-S3-42mm-SilverAluminum-BlackBand-@1x.psd


Despite its name, it’s actually an image of a Space Gray Series 3. Here’s what it looks like when opened in Pixelmator (which understands the PSD file format).

As you can see, there are two layers, one for the watch image itself and one for the screenshot. There are also guides to help developers align their replacement screenshots. The watch image, which is behind the screenshot layer, is opaque in the center. To make an image that works with Federico’s shortcut, we need to delete that central area and make it transparent.

First, we make sure the Screenshot layer is active. With the color selection (wand) tool, we click outside the gray rectangle to select the surrounding transparent area and then choose Edit ▸ Invert Selection (or ⇧⌘I) to select just the gray rectangle and the text inside it.

Now hide the Screenshot layer and make the Hardware layer active. You’ll still see the crawling ants of the selection rectangle. Press the Delete key to punch out that rectangle and make it transparent in the Hardware layer. Then export the image as a PNG. I called it Series-3.png.

Federico’s shortcut doesn’t use the frame images themselves; it uses Base64 encodings of the images. Base64 is a long-established way of turning binary data into ASCII text for transmission across systems (like email) that were developed for text only. Federico uses it so he can store the equivalent of binary data in his shortcut. The Shortcuts app makes this easy because it includes an action for encoding and decoding Base64.

At the Terminal, I created the Base64 encoded text with the command

base64 -b 76 Series-3.png > Series-3.txt


The -b 76 switch splits the encoded text into lines 76 characters long, which makes the file a bit easier to navigate in a text editor. I moved the encoded text file to iCloud Drive, switched to my iPad, opened the file, and copied the encoded text into the appropriate spot in the watch section of my edited version of Federico’s shortcut.
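If you’d rather do the encoding in Python than at the Terminal, the same result takes only a few lines. This is a sketch, not part of Federico’s shortcut; the helper name is mine:

```python
import base64
import textwrap


def encode_file_text(data, width=76):
    """Base64-encode bytes and wrap the result into fixed-width lines,
    mimicking the post's `base64 -b 76` command."""
    encoded = base64.b64encode(data).decode('ascii')
    return '\n'.join(textwrap.wrap(encoded, width))


# Hypothetical usage with the filenames from the post:
# with open('Series-3.png', 'rb') as f:
#     text = encode_file_text(f.read())
# with open('Series-3.txt', 'w') as f:
#     f.write(text + '\n')
```
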

The gibberish in the Text action is what I pasted in. As you can see, I also changed the width test from the 368 pixels of the Series 4 to the 312 pixels of the Series 3. The last pieces of information needed by Federico’s shortcut are the x and y offsets of the screenshot. I got them from the coordinates of the left and upper guidelines and added them to the Dictionary action in the watch section of the shortcut.

This same basic idea can be used to make frames from any of Apple’s product images and incorporate them into Federico’s shortcut. So if you have, for example, a 10.5″ iPad Pro, you can make a frame for it and substitute it in for the 12.9″ frame.

Now I have a system that’s nearly what I want. The only things missing are portrait and landscape frames for my iPad, which is the oft-forgotten 9.7″ Pro. Unfortunately, Apple doesn’t have Photoshop files for any 9.7″ iPad on its website. This seems a little odd, as they are still selling the regular 9.7″ iPad, but I guess they want developers to show their apps running on the larger devices.1

In any event, there are no frames I can use with the 1536×2048 (and 2048×1536) screenshots I take with my iPad. If you’re a developer and happen to have the older PSDs for a 9.7″ Retina iPad lying around, I’d appreciate your sending them to me.

Otherwise, I’ll just have to buy a new iPad Pro when they come out and use the new images Apple will make available. What a shame that would be.

1. Or it’s an oversight, like the incorrect naming of the Series 3 image.

# Tags and copies

In yesterday’s post, I talked about how I’ve been using file tags to organize my work photographs according to both date and subject.1 This works pretty well, but sometimes a set of Smart Folders that collects photos according to their tags isn’t the right solution. In those cases, I have a couple of scripts that allow me to replicate my set of Smart Folders into a set of real folders with copies of the photos organized by subject.

The problem with tagging and Smart Folders is that they are too Mac-centric. If I need to share project photos with a colleague or a client—who are almost always Windows users—I can’t just copy a file structure like this onto a USB stick and give it to them:

The JPEGs are fine, as are the dated directories, but the Smart Folders are just gibberish on a Windows machine—a set of XML files with .savedSearch extensions. If I need to have the photos broadly available in folders organized by subject, I need to make real folders and put copies of the photos in them.

Fortunately, this is pretty easy to do if I’ve already done the tagging and created the Smart Folders for each tag. I have a command-line script called tags2dirs which, when run from the parent directory (e.g., the test directory in the example above), creates a set of traditional folders that parallel the Smart Folders. After running tags2dirs, I get this:

The tags2dirs script is this short bit of Python:

python:
1:  #!/usr/bin/python
2:
3:  import os
4:  import glob
5:
6:  tagDirs = glob.glob('*.savedSearch')
7:  newDirs = [ x.split('.')[0] for x in tagDirs ]
8:  for d in newDirs: os.mkdir(d)


Line 6 looks through the current directory and collects the names of all the Smart Folders into the tagDirs list. Line 7 goes through that list, strips the .savedSearch extension off of each name and saves the result to the newDirs list. Line 8 then makes new directories with the names in newDirs. Simple.
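For what it’s worth, the same idea translates naturally to Python 3’s pathlib. This is my own sketch, not the script from the post; the function name is hypothetical:

```python
from pathlib import Path


def make_tag_dirs(parent='.'):
    """Python 3/pathlib take on tags2dirs: for every Smart Folder in
    `parent`, create a real directory with the same base name."""
    for search in Path(parent).glob('*.savedSearch'):
        # .stem strips only the .savedSearch suffix, so a tag name
        # containing a period survives intact (unlike split('.')[0]).
        Path(parent, search.stem).mkdir(exist_ok=True)
```

The `exist_ok=True` flag means rerunning the script after adding new Smart Folders won’t choke on the directories that already exist.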

Now comes the harder part: copying each photograph in the dated folders into the corresponding folders named for the tags. As you might expect, I have a script for doing this, too. It’s called cp2tags, and when invoked like this from the test directory,

cp2tags */*.jpg


it will copy every JPEG file one directory level down into all of the appropriate directories that were created by tags2dirs. For example, the left track folder will look like this:

This uses a lot of disk space—if a photo has six tags, it will be copied six times—but both of my Macs have 3 TB Fusion Drives, so I can afford to be profligate, at least occasionally.

Here’s the source code for cp2tags:

python:
1:  #!/usr/bin/python
2:
3:  import os.path
4:  import subprocess as sb
5:  import sys
6:  import shutil
7:
8:  tagCmd = '/usr/local/bin/tag'
9:  for f in sys.argv[1:]:
10:    tagString = sb.check_output([tagCmd, '--no-name', '--list', f]).strip()
11:    if tagString:
12:      tags = tagString.split(',')
13:      for t in tags:
14:        shutil.copy(f, t)


It goes through the arguments, which are expected to be file names, one at a time. For each file, it runs James Berry’s tag command, mentioned in yesterday’s post, to determine the tags applied to it. The output of tag is a string of comma-separated tags, which is split apart on Line 12 to create a Python list of the file’s tags. Line 14 then copies the file to all the directories named after those tags.
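If you want to run something like cp2tags under Python 3, the main wrinkle is that check_output returns bytes unless you ask for text. Here’s a sketch under that assumption; the parse_tags helper and function names are mine, not part of the original script:

```python
import shutil
import subprocess

TAG_CMD = '/usr/local/bin/tag'  # same path as in the post


def parse_tags(tag_output):
    """Split tag's comma-separated output into a clean list of tag names."""
    return [t.strip() for t in tag_output.strip().split(',') if t.strip()]


def copy_to_tag_dirs(files):
    """Python 3 take on cp2tags: copy each file into the directory
    named after each of its Finder tags."""
    for f in files:
        # text=True makes check_output return a str instead of bytes.
        out = subprocess.check_output(
            [TAG_CMD, '--no-name', '--list', f], text=True)
        for t in parse_tags(out):
            shutil.copy(f, t)


# Hypothetical usage, mirroring the post's invocation:
#   import glob
#   copy_to_tag_dirs(glob.glob('*/*.jpg'))
```
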

For most projects, I don’t need tags2dirs or cp2tags, because I don’t need to send others my photos. But it’s nice to have the scripts ready.

One last thing: Tags created on the Mac can be used to filter files in the iOS Files app but only if the files are saved in iCloud Drive. File tags are synced to Dropbox, but the Files app doesn’t seem to know it. And I haven’t seen anything in the Dropbox app to suggest that it can filter by tags.

I’ve added tags to the sidebar through the clumsy tap-and-hold technique, but I haven’t worked out a quick, automated way to get a tag list into the Files sidebar.

It’s too bad the Files app knows no more about Smart Folders than Windows does.

1. For me, the subject of a photo is usually a piece of machinery, a structural component, or a fragment of a broken part. Many of my photos include more than one subject, which makes them a natural for tagging.