I started responding to Philip Guo’s “Helping my students overcome command-line bullshittery,” which passed through my news feeds today, and my response quickly outgrew the appropriate size for social media, so it’s going up here.
I understand and often share his frustration, but only selectively agree with his conclusion, and would like to clarify the distinction, because I think it is a valuable one to understand.
It is often annoyingly difficult to leverage existing tools, especially the various development toolchains whose install process involves blood sacrifice or perfect replication of the (naturally, undocumented) platform they were developed on, but I object to dismissing all such difficulty as bullshit.
First, in many ways, learning “command-line bullshittery” is learning how to use a computer. Not the surface knowledge of parroting sequences of actions to “use” a computer, but actually knowing what you’re doing. What he is describing as CLI bullshittery is mostly precise specification, composition, automation, and generally learning about tools and conventions, and it can make you better. It doesn’t even have to be presented as a CLI; the conceptual equivalent would work in other interface paradigms, it just tends to be even harder elsewhere. These are really fundamental things for people planning to do research in a computing-related field to understand, and if they don’t, it’s probably time for your program to consider a “UNIX Tools” or “Research Software” sort of offering, because they’re missing something important.
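Composition is the clearest payoff of that skill set: chaining small standard tools does jobs that would each be a bespoke feature in a point-and-click interface. A throwaway example of my own (not from Guo’s post):

```shell
# Count the distinct words in some input by composing four standard tools:
# split words onto lines, sort them, collapse duplicates, count what's left.
printf 'the cat sat on the mat\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq \
  | wc -l
# -> 5 distinct words (the, cat, sat, on, mat)
```

None of these tools knows anything about the others; the pipeline is the “precise specification” part, invented on the spot for a one-off question.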
To substantiate my “makes you better” claim by analogy: writing up a project is also “incidental complexity,” both in the writing itself and in the formatting for publication, but if you learn to do it well, you learn powerful general tools and skills in the process. The overhead shrinks with practice, and the act of writing (and making visualizations, and so on) provides value to the research process. Think about learning LaTeX – it is a pretty large upfront cost, but once you’ve paid it, you are a text formatting and organization god. There are a variety of powerful data-visualization tools for graphs and such, and learning one gives you enormous new power over datasets. Generally, learning to write well is learning to compose and communicate your thoughts, which improves your thinking. Complaining about “CLI bullshittery” is like complaining about the “math bullshittery” in computing – you can parrot it for a while to get the computer to do things it already knows how to do, but if you’re doing anything really novel, you must be able to speak that language too.
Another objection is that “Incidental complexity” is only defined relative to your task; if your use of the tool is not novel, we can and do already try to optimize it out pretty effectively (see below), and if it is novel, the complexity is probably intrinsic.
“…have them take notes by copying and pasting commands into text files…” means you are doing a bad job of leveraging the tools; if you can do that, you have better options. You can help your students understand the process, ideally in generality, so they can leverage the skill to figure out other tools, learn to think like a computer (in the “durr, I’m a computer and I only do exactly what I’m told” sense), and learn to design their own software in ways that are pleasant to deploy and don’t unintentionally violate norms. Or, you can [help your students] make a script, thanks to the expressive power of a CLI, and with a script the student can 1. experience the complexity only as much as is necessary, 2. self-document where the complexity is unavoidable and what choices were made, 3. repeat the process and transfer the knowledge until the assumptions change, and 4. have already started passing on the favor to people who want to use their work. On that last note, packaging your own artifacts for easy distribution is also incidental complexity, with yet another set of tools, and it comes with the same “anticipate the expected default case and optimize for it” trade-offs discussed below.
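The script version of those pasted notes can be only a few commented lines of sh. A hypothetical sketch (the names and paths here are invented, not from any real toolchain):

```shell
#!/bin/sh
# setup-tools.sh -- hypothetical stand-in for a pasted-commands notes file.
set -eu  # fail on the first broken assumption instead of silently continuing

# One documented, overridable choice instead of a magic hardcoded path.
PREFIX="${PREFIX:-${TMPDIR:-/tmp}/demo-tools}"

# Unavoidable complexity, self-documented: the tools must end up on PATH.
mkdir -p "$PREFIX/bin"
PATH="$PREFIX/bin:$PATH"
export PATH

# Safe to re-run: mkdir -p is idempotent, so repetition costs nothing.
echo "tools installed under $PREFIX"
```

A file like this records exactly what was done and why, and re-running it replays the whole process when a machine changes hands.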
Why is it so arcane? Because it really is a whole fucking language, as in the old “CLI is expressive like a language, GUI is expressive like pre-lingual point-and-grunt” quip. Trying to obscure the computer is dangerous (not wrong, just dangerous in the Media Studies sense), because the person who picks the simplified interface has also picked what you can [readily] do with the system.
We can and do make things easier, but it’s always a compromise. Designs that provide affordances for what to do are good, up to the point where they get in the way of doing the task if you already have the knowledge, or want to do something unconventional (Don Norman wrote about that earlier this year, but failed to acknowledge the restriction). Designs that hide information until you need it are good, but also impede discovering that information when you do need it. Think in terms of sane defaults hidden away in /usr and “advanced settings” panes, or the hierarchy of package manager packages, platform-specific automated build systems with flags and options, and “it’s in version control – checkout, edit, configure, and make” – you are on a platform with a dependency-aware package manager, right? Otherwise, you’ve chosen not to use the preexisting solution to most such problems.
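The “sane defaults, reachable knobs” compromise has a conventional shape in configure-style scripts: the expected case costs nothing, and you pay complexity only when you override it. An entirely invented sketch of the pattern (not any real project’s configure script):

```shell
#!/bin/sh
# configure.sh -- illustrative sketch of the "sane defaults, overridable
# flags" convention; all names and defaults here are made up.
set -e

# Defaults cover the expected case; most users never touch them.
PREFIX=/usr/local
DEBUG=no

# Advanced users override via flags, paying complexity only when needed.
for arg in "$@"; do
  case "$arg" in
    --prefix=*)     PREFIX="${arg#--prefix=}" ;;
    --enable-debug) DEBUG=yes ;;
  esac
done

echo "configured: prefix=$PREFIX debug=$DEBUG"
```

Run with no arguments it just works; run with `--prefix=$HOME/.local` it gets out of your way. The cost is that the flags themselves are the “advanced settings pane” you have to discover.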
I’ve run into problems with this many times. For a specific case, the research group I’m attached to does quite a bit of work with CHDK for research and classes, and CHDK’s toolchain (a special case of the GNU ARM toolchain, as a cross-compiler environment) can be a challenge.
All the ways of coping with that have their own problems. For basic tasks that start out exactly like something someone has already done, you can get prebuilt stock images for a particular camera off the project’s build server and work from there, but that falls apart as soon as you need custom changes. We usually have machines set up with all the tools for people to use, but that goes away as soon as they move on, and requires either comfort with remote sessions or physically coming in to do work. I fixed and updated the script in the project wiki for building the toolchain a year or so back, which should be helpful well beyond our scope, and I also keep an “unpack this, then tweak and run the included comment-filled script to build an image” tarball of the toolchain, built on my own machine, around to shortcut things even further when the assumptions hold. Some people have tried to use CHDK-Shell to avoid the complexity, but it is 1. also just a tarball and a script, just written in a different language than sh, and 2. inevitably broken in some way that they are unable to fix, because while it provides great affordances (really, it is, IMO, an unusually good design for a build tool frontend) for discovering and understanding build options, it is itself almost entirely opaque and quite difficult to adapt.
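The whole “unpack, then tweak and run the comment-filled script” pattern fits in a few lines; here is a self-contained demo of its mechanics, where everything (the file names, the toy build.sh, the target triple) is invented for illustration and is not CHDK’s actual layout:

```shell
#!/bin/sh
# Demo of the "unpack, then tweak and run the comment-filled script" pattern.
set -e
work="${TMPDIR:-/tmp}/tarball-demo"
rm -rf "$work"
mkdir -p "$work/toolchain"
cd "$work"

# Stand-in for the distributed tarball: one self-documenting build script.
cat > toolchain/build.sh <<'EOF'
#!/bin/sh
# build.sh -- adjust the variables below for your machine, then run.
TARGET=arm-none-eabi   # cross-compilation target triple
echo "would build a toolchain for $TARGET"
EOF
chmod +x toolchain/build.sh
tar czf toolchain.tgz toolchain
rm -r toolchain

# What a recipient does: unpack, read and tweak the script, run it.
tar xzf toolchain.tgz
./toolchain/build.sh
```

The script doubles as the documentation: the recipient reads it before running it, and every assumption that might need changing is a commented variable rather than an opaque setting.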
We’ve also talked about setting up a CGI build server or the like, but again, that closes up behind them, and it doesn’t teach them anything about the process. Or passing out VMs with a Linux system preconfigured with all the tools for the class, but the students who can’t handle installing the tools aren’t likely to do much better working with VMs, especially if no GUI is included to keep the appliance small; it’s the same skill gap.
I have some shocking advantages over my peers simply because I invested the time upfront in understanding the tools in various domains. I’m pretty sure I’ve gotten more value from that lesson than from most of my formal schooling.