
28C3 The Science of Insecurity

This may be the best talk out of 28C3 this year. I was actually more pumped about Cory Doctorow’s “The Coming War on General Computation” 28C3 talk from the previous day, which I shared enthusiastically on G+, but there is more to talk about in this one. It is mostly couched as language/computational theory, but the thesis is that one shouldn’t design protocols in which it is possible to construct a message that causes the recipient to perform arbitrary computation in the process of decoding it. Which is awesome, and their argument for it is convincing. Furthermore, talks with the message “Everyone needs to start thinking like language geeks and compiler writers” are bound to appeal to me. That said, I have a couple problems with the talk.

The first problem is purely aesthetic, and mostly unimportant. In terms of presentation, it wasn’t that great a talk. The slides were bland and repetitive, and the speaker kept using problematic mannerisms. The swearing and such is right in place, but the coughed interjections were not, and the flavoring particles were excessive. I’ve been guilty of most of the above most of the times I’ve given talks, but the more I teach and speak, the more sensitized I become to presentation, and the internet has made me spoiled on talk quality, with things like fail0verflow’s Console Hacking 2010 talk at 27C3 last year, or any talk Lawrence Lessig has ever given. On a better note, the Occupy + rage comics visual conceit used throughout is pretty fun.

With that out of the way, on to the technically interesting stuff:

I think they introduce some fundamental problems in demanding context-insensitive protocols. I’m likely misunderstanding, but from working with simple serial protocols, I’m wary of anything that smells like control characters.
Two conceptual problems: indefinite message length, and unwanted control characters. Both arise from the same automata-theory discussion their thesis is rooted in. The first problem is simple to explain: it is easy to produce unbounded input – a message with no stop character will eventually break shit. In practical implementations, message lengths would necessarily be bounded, and part of the problem would go away, but the receiver would still be extremely vulnerable to flooding. They used S-expressions as an example of a reasonable solution – which makes me think “while true; do echo '('; done”, now you’re DoSed. This could probably be worked around, but it harms the elegance.
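To make the flooding concern concrete, here is a minimal sketch (names and limits mine, not from the talk) of a streaming S-expression reader. The only defenses I can see are exactly the kind of arbitrary hard caps on nesting depth and total input size that harm the elegance:

```python
import io

MAX_DEPTH = 64      # bound nesting so "((((((..." can't grow the stack forever
MAX_LENGTH = 4096   # bound total input so a never-terminating message can't flood us

def read_sexpr(stream):
    """Read one parenthesized S-expression of atoms, enforcing hard limits."""
    consumed = 0
    depth = 0
    token = ""
    stack = []
    while True:
        ch = stream.read(1)
        consumed += 1
        if consumed > MAX_LENGTH:
            raise ValueError("message too long")
        if ch == "":
            raise ValueError("unterminated expression")
        if ch == "(":
            depth += 1
            if depth > MAX_DEPTH:
                raise ValueError("nesting too deep")
            stack.append([])
        elif ch == ")":
            if not stack:
                raise ValueError("unbalanced ')'")
            if token:
                stack[-1].append(token)
                token = ""
            done = stack.pop()
            if not stack:
                return done          # outermost list closed: one complete message
            stack[-1].append(done)
        elif ch.isspace():
            if token:
                stack[-1].append(token)
                token = ""
        else:
            if not stack:
                raise ValueError("atom outside parentheses")
            token += ch
```

A well-formed message parses normally, while the `echo '('` flood from above trips the depth limit after 64 characters instead of consuming memory indefinitely.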
As for the second, I don’t see a similar way out. They correctly note that escaping is not a solution, and refer to the delightful field of SQL injection as proof by example. Then they neglect to suggest a different solution, because as far as I am aware, there isn’t one: given arbitrary data to be transferred, there ARE no delimiters which cannot appear in the data. It’s one of those time-honored intractable problems in CS. The question asked late in the video about badly formed CSV files was poking at the same idea, and they did a great job explaining why field lengths are unsafe, but I’m still unconvinced that in-band start/stop characters don’t have a similarly bad fundamental flaw. This will require further reading.
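For what it’s worth, the usual dodge is to take framing out of band entirely with a length prefix, netstring-style – though, as the talk’s CSV discussion points out, the length field is itself attacker-supplied input that has to be bounded and validated before use. A sketch of that trade-off:

```python
def encode_netstring(payload: bytes) -> bytes:
    """Frame arbitrary bytes as a netstring, e.g. b'5:hello,'.
    No delimiter needs to be absent from the payload."""
    return str(len(payload)).encode() + b":" + payload + b","

def decode_netstring(data: bytes, max_len: int = 4096):
    """Decode one frame, returning (payload, remainder).
    The length prefix is itself untrusted: it is bounded before use."""
    head, sep, rest = data.partition(b":")
    if not sep or not head.isdigit() or len(head) > 10:
        raise ValueError("bad length prefix")
    n = int(head)
    if n > max_len:
        raise ValueError("frame too large")   # the length field is input too
    if len(rest) < n + 1 or rest[n:n + 1] != b",":
        raise ValueError("truncated or unterminated frame")
    return rest[:n], rest[n + 1:]
```

Delimiters and lengths both live in the message; the difference is that a bad length can be rejected in constant time before any payload is touched.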

My other technical problem: The speakers kept using YACC/Bison as examples of good programming tools in a talk mostly about problems with “leaky” specifications and implementations of things which are fundamentally recognizers. YACC and its ilk are among the worst offenders in this regard. The biggest problem with YACC and imitators is that they require a separate lexer specification, and all kinds of bad things happen when the specifications inevitably don’t quite match. Also, the generated LALR parser breaks when you embed actions, so all your new safety from generating a monolithic parser from a proper language specification goes away. There are better recognizer tools, in terms of ease (and precision) of specification and quality of the generated parser. Personally, I drank the ANTLR Kool-Aid for that – single specification for the recognizer, no problem with embedding actions (LL(*) instead of LALR), AND it spits out parsers in far more languages than any YACC or Bison version I’ve seen.
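To illustrate what I mean by a single specification: a hand-rolled sketch (not ANTLR output, just the LL idea) in which token-level and phrase-level structure live in one recursive-descent recognizer, so there is no separate lexer specification to drift out of sync with the grammar:

```python
def accepts(s: str) -> bool:
    """LL recognizer for:  expr := term (('+'|'-') term)*
                           term := factor (('*'|'/') factor)*
                           factor := digit+ | '(' expr ')'
    Character handling and grammar are one specification."""
    pos = 0

    def peek() -> str:
        return s[pos] if pos < len(s) else ""

    def factor():
        nonlocal pos
        if peek() == "(":
            pos += 1
            expr()
            if peek() != ")":
                raise ValueError("expected ')'")
            pos += 1
        elif peek().isdigit():
            while peek().isdigit():   # "lexing" happens right here, in-grammar
                pos += 1
        else:
            raise ValueError("expected number or '('")

    def term():
        nonlocal pos
        factor()
        while peek() in ("*", "/"):
            pos += 1
            factor()

    def expr():
        nonlocal pos
        term()
        while peek() in ("+", "-"):
            pos += 1
            term()

    try:
        expr()
        return pos == len(s)   # reject trailing garbage
    except ValueError:
        return False
```

Because the recognizer either consumes the whole input against the grammar or rejects it, there is no gap between “what the lexer tokenizes” and “what the parser accepts.”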

As an aside, I had independently found and read through the speaker’s old livejournal/blog and some of their research work, without putting together that they were the same interesting person (last paragraph) until now. I also hadn’t associated the identity with her late husband, who was also an interesting person. The computing community is small and close, and it is equal parts amazing and discomfiting.

Now it’s almost 6:30AM localtime, and I haven’t slept because I got interested in something in the middle of the night. What is wrong with me?
EDIT: I noticed that I originally titled this “28C3 Keynote.” It wasn’t. It was the middle of the night. Fixed now.


PSN Outage Reading

I don’t have any stake in the PSN outage issue, not owning any Sony products more complicated than headphones (the last console I bought was an original Xbox, used, to ’chip and run XBMC on), but it has made interesting reading on the interwebs. There are the official releases, which until today were basically “The system is down.” There is also all kinds of amusing speculation, because when you take video games away from geeks, they suddenly have all kinds of time for that sort of thing. A fairly credible and highly publicized bit of speculation comes from this thread at reddit, where someone from PSX-Scene places the root of the problem on custom firmware that allowed consoles onto the developer network, which subsequently allowed users to purchase paid content with bogus credit card information. The specific details aren’t that interesting to me – the interesting thing is that almost all the speculation has something in common: that Sony was, at least in part, relying on a client-side security model*. If true, this is seriously fucking stupid, even by Sony standards. Ignoring security concerns, when writing software there is a standard adage: “Never trust the user.” Usually, the user can’t be trusted because the user is a fucking idiot. Occasionally, the user can’t be trusted because the user is malicious (where, in this case, “malicious” is defined as “wants to run their own code on hardware they own”).
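In code, “never trust the user” reduces to something very simple: the server re-derives every security-relevant value instead of accepting the client’s claim. A toy sketch (storefront item names and prices invented for illustration, nothing to do with PSN’s actual protocol):

```python
# Hypothetical storefront: the catalog, and therefore the price, is server-side truth.
CATALOG = {"dlc-map-pack": 999}  # price in cents

def charge(balance_cents: int, item_id: str, client_claimed_price: int) -> int:
    """Process a purchase and return the new balance. The client may send a
    price (as a client-trusting protocol would), but we ignore it entirely."""
    try:
        real_price = CATALOG[item_id]   # never the client's number
    except KeyError:
        raise ValueError("no such item")
    if balance_cents < real_price:
        raise ValueError("insufficient funds")
    return balance_cents - real_price
```

A modified console claiming the map pack costs one cent still gets charged the real price, because the claimed price never enters the computation.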

Back in December there was the excellent Fail0verflow talk at 27C3 where they eviscerated the security model on the PS3, and pretty much demonstrated that Sony screwed the pooch on that front (watch the talk if you haven’t; it is by far the best security presentation I’ve ever seen). Even before this, the PS3 was fairly deeply compromised by a variety of other techniques, and the PSP has been compromised (and re-compromised) almost since it shipped, so they didn’t just have a reasonable assumption that clients couldn’t be trusted, they knew it for certain.

There was also the rootkit scandal with the copy protection on some Sony BMG audio CDs. All together, this sets up precedent for an almost unlimited degree of poor design in Sony security systems.

Now, Sony is saying that a huge quantity of personal information on every user may have been compromised, and there is a spate of complaints about bogus charges on cards used with PSN services floating about on the ‘net (complaints of unknown correlation and reliability). This leads to the really interesting questions: Was all this information stored in plaintext? – it sure sounds like it was, if it was extracted on such a scale. If both the Sony release and the speculation about access being gained through compromised consoles are true, why was this information accessible from clients? And finally, how did a system with all the above properties come to be designed? I’m seriously hoping this gets analyzed in public, because it will make an amazing instructional case study, and something of worth might as well be salvaged from this clusterfuck.
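On the plaintext question: the boring, well-known mitigation for at least the password portion of that data is to store only a salted, slow hash, so that even a bulk database dump does not yield working credentials. A sketch using Python’s standard library (the PBKDF2 parameters here are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow, to blunt offline cracking of a dump

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these go in the database, never the password."""
    salt = os.urandom(16)                  # per-user salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The same attacker who walks off with the whole table then gets per-account cracking work instead of a list of ready-to-use logins; it does nothing for names, addresses, and card numbers, which need separate handling.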

* There are a couple non client-side attack theories too. The boring “Organized criminals did it” option, and the theory that Anonymous (big A) is doing their gleeful mayhem thing, like they threatened. These aren’t any more or less credible, they just aren’t as interesting.


Package Manager Security

(The following is long, rather technical, and somewhat esoteric. Sorry, it’s what I do.)
I try to keep reasonably abreast of developments in Arch Linux, since it has been my favorite distribution for about seven years now, and the OS on my primary-use computer for five of them. Someone (almost entirely a single very loud someone, as it turns out) has been making noise about package signing in pacman, the package manager used by and written for Arch, and said noise propagated up to an article on LWN, so I took some time out tonight to read up on the matter.

The short version is that the description of events on pacman developer Dan McGee’s blog seems to be essentially correct, and the “Arrogant and dismissive” accusations were the result of someone new showing up and making long-winded demands on the mailing list in regard to a topic which has been under (occasionally contentious) discussion for years. The Arch community can certainly be a little blunt, but it has never struck me as unfriendly or inappropriately autocratic (there is quite a bit of the “Those people actually doing things get to decide how they are done” mentality: as far as I am concerned this is exactly right for community projects).

The two primary things I learned in reading are that package manager security is indeed a hard problem, and that most of the possible attacks would be extremely difficult to carry out, regardless of package signing. It’s the typical least-of-your-worries security situation: if production machines anywhere that matters are having their DNS (& etc.) spoofed on the required scale, there is a much bigger problem than someone trying to slip compromised packages into systems during updates. I’ve also discovered that, generally, people don’t seem to care: for example, as best I can make out, Gentoo has had discussions on package/repository signing since 2002, and support since 2004… and it isn’t generally used today. The Arch Wiki has a nifty article about how various distributions handle package security, written in the context of designing a system for Arch – it is somewhat incomplete, but it is the only comparison of existing systems I found. Note that the page was started and largely populated in July of 2009.

One thing I don’t quite understand is why there isn’t a movement toward, at least optionally, performing updates over secured connections: simply using SSL (which has its own problems) for mirror-to-mirror and user-to-mirror communication would (aside from making the CPU load involved in running a mirror much higher and considerably slowing update downloads…) convey many of the benefits of signed packages/repositories with less hassle. More importantly, it would close many of the holes in package management systems which do support signing, for those individuals and organizations with sufficiently critical systems and/or paranoid administrators to be willing to swallow the overhead.
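There is also a cheaper middle ground: fetch only the small repository metadata over the secured connection, and pull the bulk packages from any untrusted mirror, verifying each download against a digest carried in that trusted metadata. A sketch of the verification step (the digest scheme here is assumed for illustration, not pacman’s actual database format):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_download(package_bytes: bytes, trusted_digest: str) -> None:
    """Reject a package whose contents don't match the digest obtained over
    the secured channel. A compromised mirror can still serve the bytes,
    but it can't alter them without detection."""
    if sha256_hex(package_bytes) != trusted_digest:
        raise ValueError("package digest mismatch; refusing to install")
```

This keeps the expensive SSL traffic down to a few kilobytes of metadata per sync while still denying mirrors the ability to tamper with packages, which is most of what signing buys in the common case.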

With all that in mind, I find myself agreeing with the pacman developer’s ambivalence on the issue – a security scheme for pacman is not so much a “critical feature” as a “nice to have”, largely for future proofing. Likewise, a broken scheme, or one so obtrusive it goes unused, is probably worse than none at all. The obtrusiveness issue is honestly the most important to me – one of my favorite things about pacman is that the makepkg process is incredibly easy. I can often go from a source tarball or VCS checkout to an easily handled package as fast as I can (safely) build and install by hand. Contrast this with, say, Debian, where packaging and installing even simple software is often a painful multi-hour affair even with things like debhelper, and simple packages tend to (in my experience) do unhelpful things like fail to uninstall cleanly. I want making my own packages, and building or modifying packages with scripts written by others, to remain easy and transparent much more than I want to be protected from improbable attacks.

Forcing the issue was probably not the right thing (it looks like security features will appear over the next few pacman release cycles as a result of the noise, mostly handled by existing developers). The security scheme should have been done slowly, carefully, and correctly, by someone who is actually interested in the matter – the last point both so that it really is done right, and because Arch and pacman are community-maintained projects, where everything should be done by someone who cares or, as Linus himself puts it, is in it just for fun.
