I’ve been running into more and more sites that attempt to override browser features for no apparent reason. We know you can do all kinds of fancy things with CSS and ECMAScript, but that doesn’t mean you should. To pick out two examples I’ve hit in the last few minutes:
The Verge: Uses some sort of dynamic scrolling mechanism, so my scrollbars (and hence indication of length and position) disappear. There is no reason to do that, and it removes features you would otherwise get for free from the browser.
Gmail: For some reason, searches are done with a dynamic page, so the browser’s back button doesn’t take you back to where you were before the search, and even worse, hitting back from a message in the search results doesn’t take you back to the search results. They even replicated the back button in the interface bar, as if this is obviously how it should work. I leave a persistent Gmail tab up, and probably 1/3 of its reloads are because of this misfeature.
As my adviser is fond of reminding us, you could build a car with a tiller and throttle as easily as a wheel and pedals, and in the early days people did, but we (as a society) picked some acceptable standard interface elements to ease adoption and transitioning between vehicles. Until recently, browsers were one of the few places in computing like that: it didn’t matter what (GUI) platform you were on: the scroll bar moved you around in the page content, and the forward and back buttons moved you between pages you visited in chronological order. Now, the net is full of pages that break that paradigm, and I can’t find any compelling reason to do so beyond “Because we can.” Please stop.
I played with the Windows 8 Developer Preview in VirtualBox for a while this evening. Those who spend time around computers will recall that every other Microsoft OS is a loser. The betas for XP and 7 were clear upgrades when they started circulating: they were fast and stable and added desirable features. Windows Me and Vista hit the market like animal carcasses and stunk up the place for a while: they were slow, and fragile, and changed things for the worse. Windows 8 goes beyond that. This shit is the next Microsoft Bob.
The quirks and performance instability can be excused as a developer preview running in a virtual machine. The fact that every UI change from 7 is for the worse cannot.
The big feature is the Metro interface. Metro is trying to graft a mediocre appliance UI (I thought “cell phone”; a lab mate compared it to their DVD player) onto the desktop, in place of a sane launcher or window manager. The login screen is a “Swipe up to unlock” affair, with no indication that that’s how it works. Finding programs is like sorting through a desk full of business cards. The task model is more akin to Android’s, where programs suspend to quietly consume resources in the background until swapped out, instead of quitting cleanly. All Metro apps run fullscreen, one instance per application, and none of the reference apps have any mechanism for tabs or fields. Task switching is performed by hovering near the left edge of the screen and clicking to cycle through active programs (Alt+Tab switches through all active Metro apps, all Desktop apps, and the desktop itself). There is no indication of what is running, so “active” is more than a little unclear. I still haven’t found a mechanism to shut down without first logging out.
You can partially drop to a conventional desktop mode, which is much like Windows 7, but a little bit worse in every way. The start menu is GONE – clicking where it used to be just drops you back to the Metro mess. Task management is confusing because some programs are programs, and some programs are entities in Metro. The “hover near the left edge of the screen” switching behavior persists on the desktop. Menus have been replaced by ribbons – which are, I shit you not, 115px high in the file manager. To put that another way, 209px of the default file manager’s 597px height (over a third) are taken up by static decorations – I’m reminded of those pictures of browsers where the user never turned down a toolbar, but here it’s the default style.
Looking for new UI metaphors is commendable, and it’s especially nice to see something tried other than the “Hide ALL the UI elements!” hyper-minimalism (see the new Google bar) that is the current trend, but this may actually be worse. Users deserve better than the fleet of terrible, regressive, change-for-change’s-sake UIs that have been foisted on the personal electronics world of late.
At least we’ll be making mean jokes about this one for years to come.
While I was on my hardware-fiddling spree, I came across the Spiffchorder project pile tucked into the keyboard drawer of my desk. Last time I played with it I had written off the perfboard-assembled one, which had been reworked so many times it looked like a solder ball, and left a working one on a breadboard. This meant it was taking up surface- and breadboard-space, and that would not do. So, I sat down, laid out a less-insane board, and soldered it up in one pass.
The design isn’t well suited to the individual-pad perfboard I had around (lots of n>2 component nodes), so I tried a fabrication strategy I hadn’t used before to help simplify: I almost completely populated the perfboard, ran a piece of tape over the components, flipped it, and soldered, rather than re-adding components as I went. It actually worked pretty nicely. The result is a little bigger than the last layout I used, but this one worked on the first try – or at least the first try where I had a programmed µC plugged into the socket…
In a related matter, one of the two chips I thought I had burnt with the appropriate firmware doesn’t seem to be working, and because there is a bug with the -g flag in the current version of gcc-avr, I can’t burn another from the boxes I have set up for working with AVRs (the VUSB stack needs the -g flag).
The actual chorder I made still sucks almost to the point of being unusable, largely owing to a mistake with the particular tactile buttons I got when I ordered the parts. Eventually something will have to be done about that, but the chorder is on a header, and the project is now in an electronically working state, not taking up prototyping supplies, and can be shoved in a box when idle.
I always enjoy reading things by the Raskins, both Jef and his son Aza. The latest article making the rounds is The Mac Inventor’s Gift Before Dying: An Immortal Design Lesson for His Son, which is a charming story about the mindset that makes them both so interesting. There is an ever-present bit of pretentiousness and excess verbosity to both of their writing, but between their overt self-awareness of the behavior, and my own writing having many of the same properties, it rarely bothers me.
The really interesting thing in stories about Jef, and in his own writing, is hearing what the prime mover behind the direction of computer interfaces for the last thirty years thinks went wrong, especially with regard to things he was directly responsible for – it helps remind me that the currently dominant interface paradigms are the result of a long evolutionary process with lots of missteps, not some sort of manifest destiny.
I was doing the first (actually, second, the first was an article from perennial human factors design blowhard Donald Norman, just like I was joking it would be) reading for my PSY562 class, and was kind of disturbed by the degree to which the book seems to treat human technology interaction as a totally pragmatic enterprise that essentially reduces to ergonomics. This stance may make sense with simple mechanical systems, but the human computer interaction I am most familiar with has always seemed more meaningfully posed as an information theory problem than a simple issue of lubricating a system.
Looking at the big HCI pioneers, we get people like Ivan Sutherland, whose most famous work, Sketchpad, was done as his Ph.D. project under Claude “The father of information theory” Shannon, and Douglas Engelbart (of hypertext and the mouse), who thought of HCI as a matter of Intelligence Amplification, which is more “transhumanism” than “building better tools”.
This may just be an artifact of the bad nomenclature in the field; some people, particularly in Europe, tend to use “ergonomics” as a name for the whole field of human technology interaction (or human-centered design, or human factors, or any of half a dozen names with slightly different implications…). The inconsistent nomenclature is to be expected in a field that draws from so many more established fields; psychologists, engineers, and designers all tend to use different, incompatible vocabulary with different, incompatible shades of meaning, but that doesn’t really make the situation less bothersome. I’m partial to phrases like “Human Technology Interaction,” because they imply affordances on both sides of the line. Terms like “Human Centered Design” always strike me as implying a system of presenting shallow models to make things “easier” for users, models which don’t actually take into account the real mechanisms of the underlying system. This kind of design tends to be grossly inefficient for the technology, and to break down as soon as something unexpected happens. It should make for fun discussion in class.
Related Note: While looking at related material, I FINALLY put together that “Intelligence as an emergent property of (reducible) distributed systems” Danny Hillis and “Chief architect/co-founder of Thinking Machines” Danny Hillis are the same person, who was also a student of Claude Shannon. How the fuck did I never put that together?
My usage patterns of late have led me to the conclusion that something critical has been omitted from modifier keys. Modifier keys are keys that alter the meaning of other keypresses: things like Shift, Alt, Ctrl, Windows, Apple, Option, Command, Function… (I think that covers most common modern keyboards; there have been others). The omission is that no environment I’m aware of reserves a key for the system; applications are always able to intercept the key-presses and do inconsistent things with them (I’m looking at YOU, old-fashioned text editors). A key reserved for the system (or, actually, the window manager in most stacks) would be useful in a variety of ways, all derived from implementing truly uniform behaviors system-wide. I’ll call this magical key “Sys” (for System Key; it’s a surprisingly little-used phrase).
Modifier keys have always been contentious (and well-storied) things; see old-school cokebottle jokes and this story about the early Macintosh (near the bottom; that isn’t the one about the symbol, but it may as well be here too). As such, I’ll provide two motivating examples for the addition (or forceful re-purposing) of a modifier:
* I’ve lately found myself hitting Ctrl+T and starting to type a query, expecting a fresh Firefox tab preloaded to a Google search box. Unfortunately, this doesn’t work so well when something other than Firefox has focus. I would like to be able to set Sys+T to “Bring the most recent Firefox window to the foreground (or launch it if there isn’t one), and pass it the command to open a new tab.” This should be easily possible with normal NetWM (or even ICCCM) capable window managers, as far as I understand the specs. There just isn’t a good interface for it (AFAIK).
* Switching between Ctrl+C/Ctrl+V and Ctrl+Shift+C/Ctrl+Shift+V for copy/paste when switching between applications that have the luxury of following modern conventions and a terminal emulator is distracting and error prone. “Break” and “Background” are useful and have precedent, so I don’t begrudge the behavior, but the “correct” solution would be to have Sys+C and Sys+V manipulate the clipboard (which is (usually, mostly) managed above the application level anyway) in a context-insensitive way.
I know there are ways to approximate this behavior; many media players allow you to set global shortcuts to control them (which may conflict and are usually flaky); most environments have conventions which are theoretically consistent across applications (which are often disobeyed, particularly by still-useful applications written before the standard was established). This isn’t what I’m talking about. The “system only” nature of the key shouldn’t be optional. The window manager should trap anything between press and release of the sys key, including the presses themselves (press and release are separate signals for most keys on every keyboard design I know of), and handle the event, leaving applications completely unaware.
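The trap-and-swallow behavior can be sketched as a little state machine. To be clear, this is purely illustrative – there is no real window-manager API here, and the binding names and commands are made up – but it shows the key property: between Sys press and Sys release, nothing is forwarded to applications, and the window manager dispatches the bindings itself.

```python
# Illustrative sketch of a Sys-key event filter, as a window manager might
# implement it. All names and bindings here are hypothetical.

SYS_BINDINGS = {
    "T": "raise-or-launch firefox; new-tab",
    "C": "clipboard-copy",
    "V": "clipboard-paste",
}

class SysKeyFilter:
    def __init__(self):
        self.sys_held = False
        self.handled = []   # commands the window manager executed itself
        self.passed = []    # events forwarded to the focused application

    def on_event(self, kind, key):
        # kind is "press" or "release"; the two arrive as separate signals,
        # as they do on every common keyboard design.
        if key == "Sys":
            self.sys_held = (kind == "press")
            return  # the Sys key itself is never forwarded
        if self.sys_held:
            if kind == "press" and key in SYS_BINDINGS:
                self.handled.append(SYS_BINDINGS[key])
            return  # swallow everything between Sys press and release
        self.passed.append((kind, key))


f = SysKeyFilter()
for ev in [("press", "A"), ("release", "A"),
           ("press", "Sys"), ("press", "T"), ("release", "T"), ("release", "Sys"),
           ("press", "B")]:
    f.on_event(*ev)

print(f.handled)  # the WM ran the Sys+T binding itself
print(f.passed)   # applications only ever saw the A and B events
```

The point of the sketch is that the filter sits above the applications: they never see the Sys chord at all, so no badly behaved text editor can steal or reinterpret it.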
I’ll probably try to adapt XFCE (which seems to have pretty good facilities for this in place already) to as much of my desired behavior as is easily possible, using the windows key as my Sys, when I next have time for a little project like this (ha…). It may even be possible simply by abusing the keyboard preferences, which would be another victory for good old flexible Linux.