Category Archives: Computers

28C3 The Science of Insecurity

This may be the best talk out of 28C3 this year. I was actually more pumped about Cory Doctorow’s “The Coming War on General Computation” 28C3 talk from the previous day, which I shared enthusiastically on G+, but there is more to talk about in this one. It is mostly couched in language/computational theory, but the thesis is that one shouldn’t design protocols in which it is possible to construct a message that causes the recipient to perform arbitrary computation in the process of decoding it. Which is awesome, and their argument for it is convincing. Furthermore, things with the message “Everyone needs to start thinking like language geeks and compiler writers” are bound to appeal to me. That said, I have a couple problems with the talk.

The first problem is purely aesthetic, and mostly unimportant. In terms of presentation, it wasn’t that great a talk. The slides were bland and repetitive, and the speaker kept falling into distracting mannerisms. The swearing fits right in at CCC, but the coughed interjections did not, and the flavoring particles were excessive. I’ve been guilty of most of the above, most of the times I’ve given talks, but the more I teach and speak, the more sensitized I become to presentation, and the internet has spoiled me on talk quality, with things like fail0verflow’s Console Hacking 2010 at 27C3 last year, or any talk Lawrence Lessig has ever given. On a better note, the Occupy + rage comics visual conceit used throughout is pretty fun.

With that out of the way, on to the technically interesting stuff:

I think they introduce some fundamental problems in demanding context-insensitive protocols. I’m likely misunderstanding, but from working with simple serial protocols, I’m wary of anything that smells like control characters.
Two conceptual problems: indefinite message length, and unwanted control characters. Both arise from the same discussion of automata their thesis is rooted in. The first problem is simple to explain: it is easy to produce unbounded input – a message with no stop character will eventually break shit. In practical implementations, message lengths would necessarily be bounded, and part of the problem would go away, but it would still be extremely vulnerable to flooding. They used S-expressions as an example of a reasonable solution – which makes me think “while true; do echo ‘(‘; done”, and now you’re DoSed. This could probably be worked around, but it harms the elegance.
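Off the top of my head, the workaround looks like bounding both message length and nesting depth up front, which kills the trivial flood but is exactly the kind of inelegant hack the formal-language argument is trying to get away from. A toy sketch (mine, not the speakers’), with completely arbitrary caps:

/* Toy pre-checker: bound both the length and the nesting depth of an
 * S-expression-shaped message before doing any real parsing.  The caps
 * below are arbitrary; the point is only that they exist. */
#include <stdio.h>

#define MAX_DEPTH 64      /* arbitrary cap on nesting */
#define MAX_LEN   65536   /* arbitrary cap on message length */

/* Returns 1 if s looks like a bounded, balanced S-expression, 0 otherwise. */
static int sexp_ok(const char *s)
{
    int depth = 0;
    size_t len = 0;

    for (; *s; s++, len++) {
        if (len >= MAX_LEN)
            return 0;                 /* too long: refuse to keep reading */
        if (*s == '(') {
            if (++depth > MAX_DEPTH)
                return 0;             /* too deep: probable flood or abuse */
        } else if (*s == ')') {
            if (--depth < 0)
                return 0;             /* unbalanced close */
        }
    }
    return depth == 0;                /* must end with everything closed */
}

int main(void)
{
    printf("%d\n", sexp_ok("(a (b c) d)"));   /* 1 */
    printf("%d\n", sexp_ok("((((((("));        /* 0: unbalanced */
    return 0;
}
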
As for the second, I don’t see a similar way out. They correctly note that escaping is not a solution, and refer to the delightful field of SQL injection as proof by example. But then they neglect to suggest a different solution, because as far as I am aware, there isn’t one. Given arbitrary data to be transferred, there ARE no delimiters which cannot appear in the data. It’s one of those time-honored intractable problems in CS. The question asked late in the video about badly formed CSV files was poking at the same idea, and they did a great job explaining why field lengths are unsafe, but I’m still unconvinced that in-band start/stop characters don’t have a similarly bad fundamental flaw. This will require further reading.
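To make the delimiter problem concrete, here is a toy illustration (again mine, not from the talk): frame records with a newline, then feed the receiver a payload that legitimately contains one. Without escaping, with all its attendant problems, or out-of-band length information, the receiver cannot tell data from framing:

/* Toy illustration of the in-band delimiter problem: records are framed
 * with '\n', and one record's data happens to contain '\n'. */
#include <stdio.h>
#include <string.h>

static void print_records(const char *stream)
{
    int n = 1;
    const char *p = stream, *nl;
    while ((nl = strchr(p, '\n')) != NULL) {
        printf("record %d: %.*s\n", n++, (int)(nl - p), p);
        p = nl + 1;
    }
}

int main(void)
{
    /* Two "records", but the first one's data contains the delimiter. */
    const char *stream = "name=evil\ninjected\nname=benign\n";
    print_records(stream);   /* prints three records, not two */
    return 0;
}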

My other technical problem: the speakers kept using YACC/Bison as examples of good programming tools in a talk mostly about problems with “leaky” specifications and implementations of things which are fundamentally recognizers. YACC and its ilk are among the worst offenders in this regard. The biggest problem with YACC and its imitators is that they require a separate lexer specification, and all kinds of bad things happen when the two specifications inevitably don’t quite match. Also, the generated LALR parser breaks when you embed actions, so all the new safety from generating a monolithic parser from a proper language specification goes away. There are better recognizer tools, in terms of ease (and precision) of specification and quality of the generated parser. Personally, I drank the ANTLR Kool-Aid for that – a single specification for the recognizer, no problem with embedding actions (LL(*) instead of LALR), AND it spits out parsers in far more languages than any YACC or Bison version I’ve seen.

As an aside, I had independently found and read through the speaker’s old livejournal/blog and some of their research work, without assembling that they were the same interesting person (last paragraph) until now. I also hadn’t associated the identity with her late husband, who was also an interesting person. The computing community is small and close, and it is equal parts amazing and discomfiting.

Now it’s almost 6:30AM localtime, and I haven’t slept because I got interested in something in the middle of the night. What is wrong with me?
EDIT: I noticed that I originally titled this “28C3 Keynote.” It wasn’t. It was the middle of the night. Fixed now.


Touchpad

I picked up one of the $150 refurbished 32GB Touchpads in the last firesale on Sunday. It seems like HP has done their very best to get as many Touchpads into the hands of hackers as possible, so whether or not it is well supported by HP, the community will do something fun with it. Besides, a $150 ARM development platform that will boot Android, various Linux chroots, AND let me play with WebOS was too appealing to pass up.
Continue reading


Windows 8 DP

I played with the Windows 8 Developer Preview in VirtualBox for a while this evening. Those who spend time around computers will recall that every other Microsoft OS is a loser. The betas for XP and 7 were clear upgrades when they started circulating: they were fast and stable and added desirable features. Windows Me and Vista hit the market like animal carcasses and stunk up the place for a while: they were slow and fragile and changed things for the worse. Windows 8 goes beyond that. This shit is the next Microsoft Bob.
The quirks and performance instability can be excused as a developer preview running in a virtual machine. The fact that every UI change from 7 is for the worse cannot.
The Windows8 DP Launcher Screen
The big feature is the Metro interface. Metro tries to graft a mediocre appliance UI (I thought “cell phone”; a lab mate compared it to their DVD player) onto the desktop, in place of a sane launcher or window manager. The login screen is a “swipe up to unlock” affair, with no indication that that’s how it works. Finding programs is like sorting through a desk full of business cards. The task model is more akin to Android’s, where programs suspend and quietly consume resources in the background until swapped out, instead of quitting cleanly. All Metro apps run fullscreen, one instance per application, and none of the reference apps have any mechanism for tabs or fields. Task switching is performed by hovering near the left edge of the screen and clicking to cycle through active programs (Alt+Tab switches through all active Metro apps, all desktop apps, and the desktop itself). There is no indication of what is running, so “active” is more than a little unclear. I still haven’t found a mechanism to shut down without first logging out.
The Explorer Ribbon UI element in Windows8 DP
You can partially drop to a conventional desktop mode, which is much like Windows 7, but a little bit worse in every way. The Start menu is GONE – clicking where it used to be just drops you back into the Metro mess. Task management is confusing because some programs are programs, and some programs are entities in Metro. The “hover near the left edge of the screen” switching behavior persists on the desktop. Menus have been replaced by ribbons – which are, I shit you not, 115px high in the file manager. To put that another way, 209px of the default file manager’s 597px height are taken up by static decorations – I’m reminded of those screenshots of browsers where the user never turned down a toolbar, but here it’s the default style.
Looking for new UI metaphors is commendable, and it’s especially nice to see something other than the current “Hide ALL the UI elements!” hyper-minimalist trend (see the new Google bar) being tried, but this may actually be worse. Users deserve better than the fleet of terrible, regressive, change-for-change’s-sake UIs that have been foisted on the personal electronics world of late.
At least we’ll be making mean jokes about this one for years to come.


Bookish Dreaming

I only remember a dream every year or so, but I realized on the way back from getting breakfast with friends this morning that a book I thought I’d been reading existed entirely in a dream. It was a long dream, with the usual dream-space fucked-up-ness in the setting and buildings (no university space is that large, that nice… or has three large, highly styled cafe/lounge spaces in the same complex) and interactions with various old acquaintances, but there was one section I didn’t realize was a dream because it was so normal:
< dream content >
I picked up a book (roughly A4 sized, an inch or so thick, nicely cloth-bound in black with gold embossing) about code generation for a particular class of exotic hybrid-SIMD machines (I remember details, which are realistic, but not specific enough to pick out which machine) by David Padua (a respected figure in parallel computing, whom I’ve met at conferences) and a coauthor I couldn’t remember when I woke up. I got the book from a well-stocked engineering library, and discussed it with various engineering types I know, including my current adviser.
< /dream content >
Until we were headed back from breakfast and I realized the setting was “improbable,” I was sure it had happened. When I got home I had to check whether it was something I might have seen referenced – the content and authors were probably based on “Optimizing data permutations for SIMD devices,” which I read a year or two ago, but it isn’t an exact match. The description I remember also matches a section of the Encyclopedia of Parallel Computing (four volumes, $1500), a book I’ve never actually seen (and now want access to). I also want the dream book, because it would be all kinds of useful for my MS project.
Aren’t brains interesting…


SC’11 Lessons

I learned some really interesting things at SC this year, and now that I’ve had a day to process, I want to share. Many of these observations come from first or second hand conversations, or justifiable interpretations of press releases, so I don’t promise they are correct, but they are plausible, explanatory, and interesting. I apologize for the 1,000 word wall of text, but there is a lot of good stuff.

  • This is the big one: I’m pretty sure I understand the current long term architecture plan being pursued by Intel, AMD, and Nvidia. This plan signals the end of the current style of monolithic symmetric processor cores.
    They are all apparently pursuing designs with a small N of large integer units, coupled to M >> N SIMD engines.

    • Nvidia’s “Project Denver” is a successor/big sibling to the Tegra design, and appears to be the beginning of a line with 2-8 (probably 64-bit) ARM cores tightly integrated with a big honking GPU-like SIMD structure for FP. The stale press release about this stuff is kind of nauseating to read, but it looks like they’re betting the farm on that design.
    • Intel’s HPC efforts are going to be based on a lot of MIC (Many Integrated Cores, successor to the Larrabee stuff) parts coupled with a few big cores like the current Xeons. The MIC chips are basically large numbers of super-Atoms: tiny, simple, dumb integer units attached to big SWAR (SIMD Within a Register) units focused on SSE/AVX performance (see the toy SWAR sketch after this list). This is less speculative than most of these observations; they made a pretty good press push (this, for example) on the idea.
      The ring interconnects and higher per-“thread” hardware complexity are probably not a good idea in the long run (IMHO), but having an integer unit for every big SWAR engine will be a major advantage in terms of programming environment and code generation. I suspect the more cautious approach is because Intel doesn’t want/can’t afford another Itanic, where the tools couldn’t generate good code for the programming model on their intended high-end part.
    • AMD’s two current products are stepping stones to a design similar to Nvidia’s – Bulldozer is a design with some ridiculously powerful x86-64 integer units decoupled from a smaller number of shared FPUs. The APU (I haven’t heard the “Fusion” name in a while) designs are CPUs tightly coupled to GPU structures. The successor parts will be a hybrid of the two – a few big, Bulldozer-style integer units with a large number of wide, next-gen GPU SIMD structures coupled to them.

    I think this is generally a good design direction, particularly with current directions in computing in mind, but it is going to make the compiler/concurrent programming world exciting for a while.

  • AMD appears to be gearing up to abandon a fifth generation of GPGPU products. CTM, CAL, Brook+, and OpenCL on 4000-series cards have all been deprecated while still shipping, and indications are that OpenCL (and general driver) support for the current architecture (4-wide VLIW SIMDs, like in the 5- and 6-series) has been relegated to second-class-citizen status while they work on a next-generation architecture. The rumor is that the next-gen parts will be 4 independent banks of SIMD engines instead of 4-wide VLIW SIMD engines, which should be both nicer to program and generate code for, and more similar to Nvidia.
  • Nvidia is going to open source their CUDA environment. One of the primary objections to CUDA in a lot of circles is reluctance to use a proprietary single-vendor programming environment (people who have been in super/scientific computing for any length of time have all been burnt by that in the past), and the Integer+SIMD model is going to require that not be an issue. This is assembled from information from several places, including PGI, Nvidia, and various scientific compute facilities, much of it second-hand or further, but it would make sense.
  • I still don’t exactly know what went down at Infiscale, but the impression that the Perceus community was abandoned by the company, the developers fled, and it was a bad scene seems to be correct. No one I know that was there seems to be talking, but they’re all on their way to other interesting things, especially Greg Kurtzer’s Warewulf3 project at LBL.
  • The dedicated high performance compute nodes in Amazon’s EC2 cloud are actually connected as a few large partitionable clusters, users just can’t (nominally, don’t need to) see and instrument the topology like they could with a normal cluster. This is from interpreting press releases, because the people manning Amazon’s booth really didn’t want to chat (and, in fact, were kind of dicks when we tried). This explains how they’ve been getting performance out of a loosely coupled cloud — which is to say they aren’t, they just have a huge cluster attached to their cloud that shares the interface.
  • The current hard drive production problems have given SSDs the opportunity they need to become first-class citizens. Talking to OEMs, the wholesale cost per capacity on HDDs has almost tripled, and the supply lines aren’t all that stable, so everyone is scrambling to make things work with mostly SSDs. I saw a lot of interesting new form factors for SSDs, and several flavors of flash- or battery-backed “nonvolatile” DRAM floating about as well, so the nature of storing data sets is changing.
  • I saw motherboards with 32 DIMM slots (mostly AMD Interlagos based) on the floor. I saw 32GB DIMMs on the floor. I saw some shared-memory systems with multiple Terabytes of RAM in them. The standard for high memory machines has roughly quadrupled in the last year or two.
  • The number of women (not booth babes, real technical people, especially younger ones) and educators on the show floor this year was way higher than in the past. This is very good for the field.
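
Since “SWAR” gets thrown around a couple of times above, here is a trivial SWAR (SIMD Within A Register) sketch, not tied to any vendor’s ISA, just to show the general idea of operating on several packed lanes with ordinary integer instructions:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Add four packed 8-bit lanes inside one 32-bit word at once.
 * The high bit of each lane is handled separately so a carry
 * can never spill into the neighboring lane. */
static uint32_t add_bytes_swar(uint32_t a, uint32_t b)
{
    uint32_t low = (a & 0x7f7f7f7fu) + (b & 0x7f7f7f7fu);  /* low 7 bits per lane */
    return low ^ ((a ^ b) & 0x80808080u);                   /* patch up each lane's top bit */
}

int main(void)
{
    /* lanes: 0x01+0x10, 0x02+0x20, 0x03+0x30, 0x04+0x40 */
    printf("%08" PRIx32 "\n", add_bytes_swar(0x01020304u, 0x10203040u));  /* 11223344 */
    return 0;
}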

I think that covers most of the really good stuff coming off the floor this year, although I am still processing and may come up with some other insights when I’ve had more sleep and discussion.
Also, Pictures! WOO! (Still sorting and uploading the last batch at time of posting).


Stop SOPA

Internet censorship and overbearing laws to preserve the entertainment industry’s business model are always bad, but SOPA is the worst yet. It isn’t even entirely technically implementable, but if passed it would genuinely break the internet, and ruin lives in the process. I have the support logo blackbar up for the day; after that, check http://americancensorship.org/ for details.


Headed out to SC11 in Seattle, WA for the week. Technical interest, travel complaints, booth hacks, advertising mockery, and schwag to follow.


Source Scroller

I’ve been working on a side project to make a source display widget for the aggregate.org SC’11 exhibit, and it is surprisingly problematic. The goals for the program are as follows (a bare-bones sketch of just the display loop appears after the list):

  • Take a directory of source files as input
  • Automatically perform appropriate syntax highlighting on the source files.
  • Display (on a dark background – intended use is a floating projection display) all the source files consecutively.
  • Scroll automatically through the output in a pleasing, readable way.
  • Be capable of repeating this action unattended indefinitely.
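
As a bare-bones skeleton, here is roughly the shape of the display loop, ignoring the two parts that are actually hard (syntax highlighting and scrolling in a pleasing way); all the names here are my own, hypothetical, and not from the real project:

/* Bare-bones skeleton of the display loop only: walk a directory,
 * dump each regular file a line at a time with a delay, and repeat
 * forever.  Highlighting and smooth scrolling are not addressed. */
#include <dirent.h>
#include <stdio.h>
#include <unistd.h>

static void show_file(const char *path)
{
    char line[1024];
    FILE *f = fopen(path, "r");
    if (!f)
        return;
    while (fgets(line, sizeof line, f)) {
        fputs(line, stdout);
        fflush(stdout);
        usleep(100000);            /* crude stand-in for "pleasing" scrolling */
    }
    fclose(f);
}

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : ".";
    for (;;) {                      /* repeat unattended, indefinitely */
        DIR *d = opendir(dir);
        struct dirent *e;
        if (!d)
            return 1;
        while ((e = readdir(d)) != NULL) {
            char path[4096];
            if (e->d_name[0] == '.')           /* skip dotfiles and . / .. */
                continue;
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            show_file(path);
        }
        closedir(d);
    }
}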

Continue reading


A Day in the Life of the KAOS Lab Thought Process

In which a simple “Do you know an easy way to convert an integer into a string of (character) digits?” turned into an expedition into ancient UNIX codebases. The process went something like this:

Labmate: Do you know an easy way to convert an integer into a string of digits?
< both of us type it in to google>
Looks like sprintf() is best; there must be a simple, efficient algorithm.
How does printf() do it?
Let’s look in libc!
< grep through the various toolchain sources on my system >
Well, here’s the uClibc implementation… which is a terrifying mess of ambiguously named function calls and preprocessor directives.
I wish I had some old UNIX sources to look at, that would be simple.
< a bit of googling later>
Holy crap, this is amazing! Full root images for a bunch of early UNIXes, many of them from dmr himself!
< download v5, grep /usr/source judiciously to find /usr/source/s4/printf.s>
Well crap, it’s in PDP-11 assembly, maybe a later version.
< download v7, grep /usr/src judiciously to find /usr/src/libc/stdio/doprnt.s>
Damn, still in PDP-11 assembly, but this is a fancier algorithm.
Hmm… the most understandable UNIX I’ve ever looked at was old MINIX
< spin up BasiliskII, in ‘030 mode, use this workaround to make MacMinix run>
< more grepping for justice>
Eventually, we found the printk() from MacMinix 1.5, in all its awful K&R/ANSI transitional C glory:

...
#define NO_FLOAT

#ifdef NO_FLOAT
#define MAXDIG 11 /*32 bits in radix 8*/
#else
#define MAXDIG 128 /* this must be enough */
#endif

_PROTOTYPE(void putc, (int ch)); /*user-supplied, should be putk */

PRIVATE _PROTOTYPE( char *_itoa, (char *p, unsigned num, int radix));
#ifndef NO_LONGD
PRIVATE _PROTOTYPE( char *_itoa, (char *p, unsigned long num, int radix));
#endif

PRIVATE char *_itoa(p, num, radix)
register char *p;
register unsigned num;
register radix;
{
        register i;
        register char *q;
        q = p + MAXDIG;
        do {
                i = (int) (num % radix);
                i += '0';
                if (i > '9') i += 'A' - '0' - 10;
                *--q = i;
        } while (num = num / radix);
        i = p + MAXDIG - q;
        do
                *p++ = *q++;
        while (--i);
        return(p);
}
...

Which is, of course, a digit at a time, in one of the most straightforward ways imaginable. MINIX’s kernel is designed for systems so resource constrained that it has a separate prints() that can only handle char and char* just to save overhead, so I can’t imagine it would use a sub-optimal technique.
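For comparison, here is the same digit-at-a-time idea in plain modern C; this is my own sketch of the technique, not anything lifted from the sources above:

/* The same digit-at-a-time idea in modern C: build the digits in a
 * small buffer from least-significant to most-significant, then copy
 * them out in the right order.  Sketch only; real printf()s do more. */
#include <stdio.h>

static char *utoa_radix(char *out, unsigned num, unsigned radix)
{
    char buf[33];                       /* enough for 32 bits in radix 2 */
    char *q = buf + sizeof buf;

    do {
        unsigned d = num % radix;
        *--q = (char)(d < 10 ? '0' + d : 'A' + d - 10);
    } while ((num /= radix) != 0);

    while (q < buf + sizeof buf)        /* copy digits forward */
        *out++ = *q++;
    *out = '\0';
    return out;                          /* points at the terminating NUL */
}

int main(void)
{
    char s[34];
    utoa_radix(s, 3735928559u, 16);
    printf("%s\n", s);                   /* DEADBEEF */
    return 0;
}
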
This kind of thing really makes me wish I had learned OSes the old way, via the death-march through the UNIX sources (or at least the old Tanenbaum book with MINIX); things are too complicated and opaque now. There always seems to me to have been a golden age from around 1970 into the early 1990s when the modern computing abstractions were in place, but the complexity of production hardware and software hadn’t yet grown out of control.
On a related note, as a tool writer, looking at the earliest versions of cc is AMAZING. Most decent programmers should be able to work through the cc from v5 UNIX (574 lines of C for cc proper, 6,454 more lines of C and 1,606 of PDP-11 assembly in called parts) in a couple of hours, and fairly fully understand how it all works. Sadly, (pre-3) MINIX came with a (binary-only) cc made with ACK, which is fancy and portable and way, way, way harder to understand. dmr’s simple genius was just that.


RIP Dennis Ritchie

It’s been a bad week for computing pioneers. Steve Jobs died on October 5th, and, more quietly, Dennis Ritchie on October 9th. We’re hearing all about Steve Jobs in the news, because he was a salesman and a showboat. We’re not hearing about Dennis Ritchie because he did his very best to avoid attention while he did interesting things.
He developed the C programming language, in which virtually all low-level programming has been done for the last twenty-some years, and from which most widely used modern languages inherit their syntactic structure. He was a major player in the development of UNIX, an operating system which has become so universal that both the vast majority of smartphones and the vast majority of supercomputers run one of its derivatives or descendants.
His contributions are so fundamental that they shape the nomenclature and notation we use to discuss computing, and in essence created the world I live in.

I have a copy of K&R in the home directory of all my computers, and had always hoped to meet him. His wisdom and knowledge will be sorely missed, but he long ago discovered the secret to immortality: he didn’t just make things, he made things that make things, and as such he will live on through the tools he designed – tools so elegant that we’ll all, mostly unknowingly, be using them every day on every computer for as long as computers remain recognizable.
exit 0;
