Article note: Oh no.
Though "If you cram enough bullshit into the initramfs it can do anything" is an interesting side effect of the modern boot process that bears exploration. I'm not actually sure how much of the "immutable" type distros are working by basically just never pivoting their initramfs to a real root and instead mounting a bunch of overlays, but it sure seems like an easy way to do the thing.
On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus.
Article note: Bunch of things in there I didn't know.
In particular, I didn't realize how much Linux was a motivation for Larry McVoy to leave Sun and make BitKeeper as a better successor to their old internal CVS tools, and how short the BitKeeper era for kernel dev was (mostly only 2002-2005, with a few early adopters).
It still wigs me out that git's speed, plus inertia from the Linux and Ruby folks, was enough to make it the clear winner of the DVCS explosion, even though it arrived in the middle of the pack and is _brutally_ ugly, such that there is a whole industry of add-on porcelain trying to hide how nasty it is.
Article note: They trained it on shitty florid academic writing, so it vomits out shitty florid academic writing.
That style is already basically a caricature of itself, propagated by mimicry, so the same "This is probably horseshit" indicators that worked for human authors work for LLM spew.
Thus far, even AI companies have had trouble coming up with tools that can reliably detect when a piece of writing was generated using a large language model. Now, a group of researchers has established a novel method for estimating LLM usage across a large set of scientific writing by measuring which "excess words" started showing up much more frequently during the LLM era (i.e., 2023 and 2024). The results "suggest that at least 10% of 2024 abstracts were processed with LLMs," according to the researchers.
In a pre-print paper posted earlier this month, four researchers from Germany's University of Tübingen and Northwestern University said they were inspired by studies that measured the impact of the COVID-19 pandemic by looking at excess deaths compared to the recent past. By taking a similar look at "excess word usage" after LLM writing tools became widely available in late 2022, the researchers found that "the appearance of LLMs led to an abrupt increase in the frequency of certain style words" that was "unprecedented in both quality and quantity."
Delving in
To measure these vocabulary changes, the researchers analyzed 14 million paper abstracts published on PubMed between 2010 and 2024, tracking the relative frequency of each word as it appeared across each year. They then compared the expected frequency of those words (based on the pre-2023 trendline) to the actual frequency of those words in abstracts from 2023 and 2024, when LLMs were in widespread use.
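As a toy illustration of that comparison (the word, its frequencies, and the 2024 observation below are all invented, not the paper's data), the sketch fits a least-squares line to a word's 2010-2022 relative frequency, extrapolates the trend to 2024, and reports how far the observed value overshoots it:

```c
/* Toy sketch of the "excess word" estimate: fit a linear trend to a
 * word's pre-2023 yearly frequencies, extrapolate to 2024, and call
 * anything above the trendline "excess" usage. Numbers are made up. */
#include <stdio.h>

int main(void) {
    /* Invented yearly frequencies (occurrences per 10,000 abstracts)
     * for one style word, 2010 through 2022. */
    double year[] = {2010, 2011, 2012, 2013, 2014, 2015, 2016,
                     2017, 2018, 2019, 2020, 2021, 2022};
    double freq[] = {2.0, 2.1, 2.1, 2.2, 2.3, 2.3, 2.4,
                     2.5, 2.5, 2.6, 2.7, 2.7, 2.8};
    int n = sizeof(year) / sizeof(year[0]);

    /* Ordinary least squares: freq ~= a + b * year. */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += year[i];
        sy  += freq[i];
        sxx += year[i] * year[i];
        sxy += year[i] * freq[i];
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double a = (sy - b * sx) / n;

    double expected = a + b * 2024.0;  /* pre-LLM trend, extrapolated */
    double observed = 11.5;            /* invented post-LLM observation */

    printf("expected 2024 frequency: %.2f per 10k abstracts\n", expected);
    printf("observed 2024 frequency: %.2f per 10k abstracts\n", observed);
    printf("excess ratio: %.1fx the trendline\n", observed / expected);
    return 0;
}
```

The actual analysis presumably runs this shape of comparison across the whole vocabulary and aggregates, but per word it's the same idea: trendline expectation versus observed frequency.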
Article note: Heh, the patches that added the built-in block-on-repeated-attempt features to the logging path were also quietly fixing a (very complicated to trigger) RCE involving signal handlers and logging, because the timeout signal handler ends up in a few glibc functions that aren't async-signal-safe.
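The hazard looks roughly like this — a generic sketch of the pattern, not the actual patched code (handler names and the log message are invented):

```c
/* Sketch of the async-signal-safety hazard in a timeout/logging path,
 * plus the usual fix. Not any real project's code. */
#include <signal.h>
#include <string.h>
#include <syslog.h>
#include <unistd.h>

static volatile sig_atomic_t timed_out = 0;

/* HAZARDOUS pattern: syslog() is not on POSIX's async-signal-safe
 * list (it can take locks and allocate inside glibc), so calling it
 * from a handler that may interrupt another glibc call is undefined
 * behavior -- and, with enough effort, exploitable. */
static void grace_timeout_unsafe(int sig) {
    (void)sig;
    syslog(LOG_WARNING, "grace time exceeded");
}

/* SAFE pattern: the handler only stores to a sig_atomic_t flag;
 * the real work happens later in normal context. */
static void grace_timeout_safe(int sig) {
    (void)sig;
    timed_out = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = grace_timeout_safe; /* swap in grace_timeout_unsafe
                                           to see the bad pattern */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);   /* arm a one-second "grace timeout" */
    pause();    /* pretend to wait on a client; the signal wakes us */

    if (timed_out)
        syslog(LOG_WARNING, "grace time exceeded"); /* deferred, safe */
    return 0;
}
```

The fix in cases like this is exactly that flag-and-defer move: keep the handler down to a sig_atomic_t store and do the logging once you're back in normal execution.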