I was doing the first (actually, second; the first was an article from perennial human factors design blowhard Donald Norman, just like I was joking it would be) reading for my PSY562 class, and was kind of disturbed by the degree to which the book seems to treat human-technology interaction as a totally pragmatic enterprise that essentially reduces to ergonomics. That stance may make sense for simple mechanical systems, but the human-computer interaction I am most familiar with has always seemed more meaningfully posed as an information theory problem than as a simple matter of lubricating a system.
Looking at the big HCI pioneers, we get people like Ivan Sutherland, whose most famous work, Sketchpad, was done as his PhD project under Claude "The father of information theory" Shannon, and Douglas Engelbart (of hypertext and the mouse), who thought of HCI as a matter of Intelligence Amplification, which is more "transhumanism" than "building better tools".
This may just be an artifact of the bad nomenclature in the field; some people, particularly in Europe, tend to use "ergonomics" as a name for the whole field of human-technology interaction (or human-centered design, or human factors, or any of half a dozen names with slightly different implications…). The inconsistent nomenclature is to be expected in a field that draws from so many other, more established fields: psychologists, engineers, and designers all tend to use different, incompatible vocabulary with different, incompatible shades of meaning, but that doesn't really make the situation less bothersome. I'm partial to phrases like "Human Technology Interaction," because they imply affordances on both sides of the line. Terms like "Human Centered Design" always strike me as implying a system of presenting shallow models to make things "easier" for users, without actually taking into account the real mechanisms of the underlying system. That kind of design tends to be grossly inefficient for the technology, and to break down as soon as something unexpected happens. It should make for fun discussion in class.
Related note: While looking at related material, I FINALLY put together that the "Intelligence as an emergent property of (reducible) distributed systems" Danny Hillis and the "Chief architect/co-founder of Thinking Machines" Danny Hillis are the same person, who was also a student of Claude Shannon. How the fuck did I never put that together?