Recursive jot

Today I had to break up a giant PDF full of scanned drawings for a building, a job similar to that described in this post. So I opened the post and started following the steps. It worked well, but this time I added a twist: using jot to create a set of jot commands.

After using pdftk to burst the PDF into individual files and using sips to convert them into JPEGs, I had a set of over 300 JPEG files with names like pg_0001.jpg. I wanted to rename them to roughly match the drawing numbers given in the title blocks.

These were all architectural drawings (as opposed to structural, mechanical, electrical, etc. drawings), so they all had drawing numbers like A1.01, A5.14, and A11.3, where the number before the period indicates a section. There are sections of elevation drawings, sections of plan drawings, sections of details, and so on. Sixteen sections in all.

Following the steps of my earlier post, I wanted to create a file called names.txt that would contain 300+ lines, one for each of the files, giving the name I wanted to change it to. The shell script to create that file was this:

jot -w 'A0-%02d.jpg' 1 > names.txt
jot -w 'A1-%02d.jpg' 4 >> names.txt
jot -w 'A2-%02d.jpg' 12 >> names.txt
jot -w 'A3-%02d.jpg' 18 >> names.txt
jot -w 'A4-%02d.jpg' 41 >> names.txt
jot -w 'A5-%02d.jpg' 87 >> names.txt
jot -w 'A6-%02d.jpg' 15 >> names.txt
jot -w 'A7-%02d.jpg' 18 >> names.txt
jot -w 'A8-%02d.jpg' 4 >> names.txt
jot -w 'A9-%02d.jpg' 18 >> names.txt
jot -w 'A10-%02d.jpg' 17 >> names.txt
jot -w 'A11-%02d.jpg' 3 >> names.txt
jot -w 'A12-%02d.jpg' 22 >> names.txt
jot -w 'A13-%02d.jpg' 3 >> names.txt
jot -w 'A14-%02d.jpg' 3 >> names.txt
jot -w 'A15-%02d.jpg' 56 >> names.txt
jot -w 'A16-%02d.jpg' 4 >> names.txt

(The A0 section consisted of just the index of all the other drawings. It didn’t have a drawing number, so I decided to call it A0.01.) The numbers just before the redirection operators represent the number of drawings in each section.
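The renaming step itself comes from the earlier post: pair the nth pg_NNNN.jpg with the nth line of names.txt. Here’s a minimal sketch of that pairing—my loop, not the original post’s script—with a couple of stand-in files so you can see it work:

```shell
# Demo setup: two burst pages and a two-line names file
# (stand-ins for the real 300+ files).
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
touch pg_0001.jpg pg_0002.jpg
printf 'A0-01.jpg\nA1-01.jpg\n' > names.txt

# The renaming loop: the i-th page gets the i-th name.
i=1
while IFS= read -r new; do
  old=$(printf 'pg_%04d.jpg' "$i")
  mv "$old" "$new"
  i=$((i + 1))
done < names.txt
```

With 300+ files, the only real work is getting names.txt right, which is what all the jot commands above are for.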

I didn’t want to write this script. Even though it’s only 17 lines long, and each line could be entered by repeatedly pasting and editing, the repetition would’ve driven me crazy. And I would have made mistakes. Editing the number of drawings for each section would be easy, because I could double-click on the default number and then type the correct value. But changing the section number would require either precision clicking and dragging or a lot of cursor movements that I was likely to screw up.

I thought it would be fun to use jot itself to generate the lines. It wasn’t hard. This command,

jot -w "jot -w 'A%d-%%02d.jpg' 000 >> names.txt" 17 0 >

created the script. I used 000 as the default number of drawings for each section because it made for a big double-click target when I edited the script to put in the correct numbers.1
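By the way, jot is a BSD tool, so it’s on every Mac but missing from most Linux boxes. If you want to follow along without it, the same script-generating one-liner can be approximated with seq and printf; either way, the result is seventeen lines of the form shown above, with 000 placeholders:

```shell
# Generate the 17 jot lines (sections A0 through A16).
# The %%02d in the format string survives as a literal %02d,
# just as it does in the jot version.
for i in $(seq 0 16); do
  printf "jot -w 'A%d-%%02d.jpg' 000 >> names.txt\n" "$i"
done
```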

Yes, I could have used Keyboard Maestro to automate the creation of these lines, and I could have even had it prompt me for the number of drawings in each section. But how could I resist the opportunity to use a command to create a bunch of copies of itself?2

  1. I also changed the first append redirection (>>) to a create redirection (>). That wasn’t necessary, since names.txt didn’t exist yet, but it made for a cleaner script. ↩︎

  2. Yes, I know this isn’t really recursion, but I liked the title. If you tweet me to explain recursion, I will block you. ↩︎

A little table cleanup

One of the things I like about blogging is that blog posts can meander a bit. I don’t have to stay focused on a single topic, as I do when writing a report. Also, I don’t feel the need to know everything I’m going to write about before starting a post. I often leave my text editor mid-paragraph and spend an hour or more checking sources or testing code snippets. That’s not very efficient, but I’m not doing this for money, so efficiency doesn’t matter.

On the other hand, this relaxed approach—especially when combined with writing late at night—sometimes leads me to omit things. Thursday night’s post about tables was a good example of this.

When I started that post, I intended to include another reason why complicated tables are messing up my report-writing workflow: they break the connection between the Markdown source and the rendered PDF. Here’s what happens:

Let’s say I write a report in Markdown with a table that needs some tweaking in LaTeX. I start by including the table in the (Multi)Markdown source, but because of some formatting need, I know I’ll eventually have to dig into the generated LaTeX code for the table and do some editing. Ideally, I’d write all the text of the report first, generate the LaTeX, edit the table portion of the LaTeX, and then make the PDF.

But the ideal never happens. After editing the table and making the PDF of the report, I notice that something in the text of the report needs to be changed. Sometimes a lot needs to be changed. I now have to choose between editing the Markdown source, which means I have to first save the LaTeX source of my carefully crafted table and then paste it back into place every time I regenerate the report LaTeX from Markdown; or abandoning the Markdown source and doing all the subsequent editing in LaTeX.

For a while, I was doing the former because I prefer writing and editing in Markdown—that’s why I’m writing in Markdown in the first place. But the back and forth with the table code was annoying, and continually editing two text files often led to writing new material in the wrong file. So I’ve switched to the latter approach. Once I’ve had to do any editing of the generated LaTeX, I throw away the original Markdown and work exclusively in LaTeX from that point on.

This is not the way I want to write, and this is another reason I’ve been casting about for a new way to include tables in my reports.

As another followup on Thursday night’s post, I’ve been playing around with making tables in OmniGraffle. Here’s an example:

Bolt table

Not especially complex, but it has both column and row spans and some extra spacing to indicate grouping. Here’s what the layout of the individual items looks like in OmniGraffle:

OmniGraffle table items

This started out as two OmniGraffle tables, one for coarse threads and one for fine, that I filled with the pitch and diameter information (more on that in a bit). I then added the headers and rules and split up the tables—using the Arrange‣Ungroup command—and adjusted the spacing to give a little extra separation between

  1. the pitch column and the three diameter columns; and
  2. the coarse and fine blocks.

Finally, I added the units note just below the bottom rule.

Overall, it was pretty easy to make in OmniGraffle, especially considering I’ve just learned how its tables work. The most time-consuming part would normally be entering the data, but I got around that by writing this Keyboard Maestro macro that takes a set of tab-separated values (TSV) on the clipboard and enters it into the cells of the OmniGraffle table:

Type in Table macro

There are, as you might expect, a few tricks to this.

First, you can’t just copy a TSV table from a Numbers spreadsheet and paste it into an OmniGraffle table. If you try that, it’ll paste a graphic of the cells you selected in Numbers.

Second, the way you move from cell to cell while editing a table in OmniGraffle is to hit the Tab key. This is probably what you expect for moving across a row; what’s unexpected is that when you’re at the end of a row, Tab also moves you to the first cell of the next row.

Third, when you have an OmniGraffle table selected, hitting the Return key selects the text in the top left cell of the table.

The Keyboard Maestro macro uses these last two tricks to get around the problem posed by the first trick. It assumes I’ve copied a block of cells from a spreadsheet and have selected the table in OmniGraffle. At this point, the clipboard holds at least three versions of that block:

  1. As a block of cells that can be pasted elsewhere in a spreadsheet.
  2. As a graphic (see the first trick above).
  3. As a TSV set of text, with tabs between the cells in each row and a newline character between each row.

The macro manipulates the TSV version of the clipboard, first changing all the newlines to tabs (that accommodates the second trick above) and then trimming the trailing whitespace. Don’t ask me why the trimming is necessary. As far as I’m concerned, there shouldn’t be any trailing whitespace on the clipboard, but when I first wrote this macro—without the trimming step—there was always one last tab.
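In shell terms, the clipboard manipulation amounts to this (the macro itself does it with Keyboard Maestro’s search-and-replace actions, not a script):

```shell
# Convert a TSV block into one long tab-separated line:
# newlines become tabs, then trailing whitespace is trimmed.
prepare() {
  tr '\n' '\t' | sed 's/[[:space:]]*$//'
}

# A two-row, two-column sample block.
printf 'a\tb\nc\td\n' | prepare
```

The trim at the end is what removes that mysterious last tab, which comes from the newline that ends the final row of the TSV.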

With the clipboard text now prepared, the macro simulates a tap on the Return key to select the text in the top left cell of the OmniGraffle table (third trick) and then types out all the text on the clipboard. The regular text gets entered in the cells, and the tabs move the selection to the next cell. The key here is the “Insert text by typing” setting in the last action. If that setting were “Insert text by pasting,” all the data from the clipboard would go into the first cell of the OmniGraffle table. It’s the simulation of typing that gets the selection to move from cell to cell.

The upshot is that if I have a table of data in a spreadsheet or some TSV output from a program, I can use this macro to populate an OmniGraffle table without much fuss.

I’m still not sure this is the way I want to handle tables in my reports, but it’s looking promising so far.

My table problem

Several years ago, I wrote a series of posts (1, 2, 3, 3.5, 4, and 5) explaining the long evolution of the software tools I use for writing. It started with the Illinois Central Editor (ICE) and RNF on a Cyber 175 mainframe, took a detour through a series of word processors on the Mac (MacWrite to Word to WriteNow to Word to Claris Works), then returned to marked-up plain text with SGML and troff, then LaTeX, then Markdown. I’ve been using this Markdown writing workflow (which still runs through LaTeX to generate the PDF output) for ten or eleven years now, but I’m beginning to think I need to make a change.

The problem is the creation of tables. Because I use MultiMarkdown, and because I’ve written a couple of scripts for handling MultiMarkdown tables, you might think I have a pretty efficient way to add tables to my reports. That’s what I thought, anyway. But over the past few months I’ve been including more tables in my reports, and the layout of those tables has often been more complicated than a simple rectangular grid of cells. Row spans, column spans, multiple headers, multiline cells, and oddball alignments have become commonplace, and while MultiMarkdown can handle some of these complexities, too often I’ve had to jump into the generated LaTeX to get the formatting I want.

Have you ever written a table in LaTeX? It’s awful. Even if most of the table is already written, I find it very hard to keep track of where I am while editing. There is just so much “noise” in a LaTeX table—ampersands separating cells, double backslashes separating rows, braces if in-cell formatting is needed—it’s hard to focus on the task at hand. And honestly, the default “look” of a LaTeX table is just embarrassing. Here’s an example from Kopka and Daly’s Guide to LaTeX:

Kopka and Daly table

I avoid most of the ugliness in the output by using the booktabs package, but there’s no way around ugliness in the input. For example, this writeup on the use of booktabs presents this nicely laid-out table,

Booktabs table

but it requires this code to generate,


& \multicolumn{3}{c}{$w = 8$} & \phantom{abc}& \multicolumn{3}{c}{$w = 16$} &
  \phantom{abc} & \multicolumn{3}{c}{$w = 32$}\\ \cmidrule{2-4}
\cmidrule{6-8} \cmidrule{10-12}
  & $t=0$ & $t=1$ & $t=2$ && $t=0$ & $t=1$ & $t=2$ && $t=0$ & $t=1$ & $t=2$\\ \midrule
$c$ & 0.0790 & 0.1692 & 0.2945 && 0.3670 & 0.7187 & 3.1815 && -1.0032 & -1.7104 & -21.7969\\
$c$ & -0.8651& 50.0476& 5.9384&& -9.0714& 297.0923& 46.2143&& 4.3590& 34.5809& 76.9167\\
$c$ & 124.2756& -50.9612& -14.2721&& 128.2265& -630.5455& -381.0930&& -121.0518& -137.1210& -220.2500\\
$dir=0$\\
$c$ & 0.0357& 1.2473& 0.2119&& 0.3593& -0.2755& 2.1764&& -1.2998& -3.8202& -1.2784\\
$c$ & -17.9048& -37.1111& 8.8591&& -30.7381& -9.5952& -3.0000&& -11.1631& -5.7108& -15.6728\\
$c$ & 105.5518& 232.1160& -94.7351&& 100.2497& 141.2778& -259.7326&& 52.5745& 10.1098& -140.2130\\

I’m coming to the conclusion that tables, despite being made of text, should be treated as graphic elements, just like charts, figures, and photographs. Just as I would never use one of the many LaTeX drawing packages to make a scatterplot, I shouldn’t be using \begin{tabular}… \end{tabular} to make tables.

But having come to this conclusion, how do I act on it? What’s a good software tool for quickly generating good looking, well-formatted tables? As I recall, Adobe used to have an application called Tables, but I don’t know how good it was. And even if it was good, it’s long since gone. Among software still available, I see three basic types:

Spreadsheets I can merge cells to get row and column spans, and there’s no trouble with multiple header lines. Cell borders of varying thickness can be added, but formatting within a cell isn’t as flexible as I’d like it to be.

Word processors Pages has inherited the table formatting features of Numbers but hasn’t added anything to them, so in-cell formatting is just as limited. MS Word has excellent in-cell formatting, but I find its user interface confusing, and I hate that I can’t see the changes I’m making as I adjust the settings in the Table Properties window. I’m sure I could learn to work around its idiosyncrasies and maybe even set up a few styles that would save me time and frustration, but I’m just not enthused with training myself to use it.

Drawing programs These, of course, can do anything. The question is whether they can be made efficient at generating tables. OmniGraffle has a way to quickly generate a rectangular table of elements which can then be edited to fit the data. Rules can be placed and adjusted with more precision than in spreadsheets or word processors, and in-cell formatting, while not quite as flexible as Word’s, has the features I need. I have a lot of experience with OmniGraffle that I can take advantage of to format a table quickly.

The biggest obstacle is getting data into an OmniGraffle table. Much of the data I put in tables comes from programs or data acquisition equipment. Either way, it’s a set of tab-separated values that I’d like to place in the table by cutting and pasting. But OmniGraffle doesn’t let you paste into a table all at once; you can only paste into one cell at a time. I’ve been trying out a script to get around this, but I’m not happy with it yet.

You may think this is a dumb idea, and I should concentrate on getting better at making LaTeX tables. You may be right. But I’ve been making LaTeX tables for 15 years, and I’m still no good at it. I think it’s time for a change. I just want to make sure it’s the right change.

The Keyboard Maestro scripting environment

Friend of the blog Jason Verly was up late a couple of nights ago, trying to get either SnapClip or SnapSCP working. The embedded Python script was crapping out very early, throwing an error when it tried to run the

import Pashua

line. Jason knew damned well he’d installed both the Pashua app and the Pashua Python module, but the script kept failing. Even more frustrating was that he could run a script from the Terminal with the import Pashua line and it would work just fine.

After a bit of sleep—which often helps—Jason realized that the problem arose because he had two Pythons installed on his computer, the stock one from Apple and another installed via Homebrew. His usual shell environment, which ran under Terminal, was set up to run the Homebrew Python in /usr/local/bin by default, and when he installed the Pashua module, it was put in /usr/local/lib/python2.7/site-packages, which is where the Homebrew Python can find it but not where the stock Python can find it. And Keyboard Maestro was running the stock Python when the macro was invoked.

The explanation can be found in the Keyboard Maestro documentation on scripting:

Shell scripts can execute any installed scripting language, such as perl, python, ruby or whatever. Be aware that because shell scripts are executed in a non-interactive shell, typically none of the shell configuration files (like .login or .profile) will be executed, which may change the way the shell script behaves.

Here’s an example of how different the behavior can be. If I run

echo $PATH

from Terminal, I get this output:


which I’ve reformatted from one single line into a series of lines. PATH is the environment variable that determines the directories your shell searches for executable files and the order in which they’re searched. So when I type python at the command line in Terminal, the Python that gets executed is the one in /Users/drdrang/anaconda/bin because that’s the first directory in the list that contains an executable named python.

The PATH is set through commands in any number of dotfiles that get run whenever I open a Terminal window. These include ~/.profile, ~/.bash_profile, ~/.bashrc, and ~/.bash_login. And those are only my personal configuration files. There are also system configuration files in /etc. If you want to see how this mess works, I suggest you take a look at this post.

Now let’s set up a simple macro that runs the same echo $PATH command, but from within Keyboard Maestro.

PATH in Keyboard Maestro

The output from this is quite sparse:


This is why Jason’s macros weren’t doing what he expected. They were running the stock Python because /usr/local/bin isn’t in the PATH under Keyboard Maestro.
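You can get a feel for this from Terminal by launching a command with an emptied environment. It’s not exactly what Keyboard Maestro does, but it shows how spare things are when none of your dotfiles have run:

```shell
# env -i runs a command with an empty environment. Running
# env itself under env -i shows there's nothing in it:
env -i env

# A shell started this way never reads .profile or
# .bash_profile, so PATH is whatever bare default the shell
# chooses, not the one your dotfiles build up:
env -i /bin/sh -c 'echo "$PATH"'
```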

The obvious solution to this, assuming you don’t want to run the stock Python, is to explicitly set the full path to the Python you want to run in the shebang line of the script, e.g.,


I’ve always been happy to just run the stock Python inside Keyboard Maestro. This means any nonstandard libraries have to be installed via /usr/bin/pip or


so they go into /Library/Python/2.7/site-packages where the stock Python can find them.

But what if I want Keyboard Maestro to run the same Anaconda Python that I typically run from the Terminal? In theory, I could start my embedded scripts with


Unfortunately, I like to have a common set of Keyboard Maestro macros shared between my iMac at work and my MacBook Air. And because of an historical fluke, I use different user names on the two machines. So while the shebang line above would work on my MacBook Air, it would have to be


on the iMac. But because they’re synced, the macros have to be the same on both machines.

The solution is to use the env command in the shebang line. From the man page:

The env utility executes another utility after modifying the environment as specified on the command line.

Modifying the Keyboard Maestro environment is exactly what we want to do. We can use env’s -S and -P options to set the PATH for the execution of python this way:

#!/usr/bin/env -S -P${HOME}/anaconda/bin python

The -S option allows the HOME environment variable to be interpreted, and the -P option sets the PATH. The PATH will therefore be either




depending on which computer I’m at.
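The -P option is BSD-specific, but -S is also in GNU env (coreutils 8.30 and later), so its splitting and variable expansion are easy to try at the command line:

```shell
# -S splits one string into separate arguments, which is why
# it's useful in shebang lines (the kernel passes everything
# after the interpreter as a single argument). This runs as:
#   env MSG=hi printenv MSG
env -S 'MSG=hi printenv MSG'    # prints "hi"

# ${VARNAME} references in the -S string are expanded by env
# itself, which is how ${HOME} works in the shebang above:
env -S 'echo ${HOME}'
```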

Thanks to Jason for pointing out this tricky business and for coming up with the simpler solution. He is, I believe, too smart to have different user names on different computers.

Update 03/2/2017 9:52 PM
A few things you should read if you’re interested in Keyboard Maestro and environments in general:

  • Jason’s written a more thorough description of the trouble he ran into and how he fixed it.
  • You can set the PATH Keyboard Maestro uses to anything you like by going into KM’s Preferences and doing a little editing. Thanks to Peter Lewis (who’s kind of an expert on KM) for the tip.
  • For a more complete discussion of the crazy, nutso system of setting the environment on a Mac, see this post by Rob Wells.