Article note: It's a good description of the problem of software-as-value-extraction, which has become even more true in the year since it was written.
As noted, it's "software prioritizing the vendor" more than the developer, but there's an interesting aside about software prioritizing the developer (as in the implementing programmer): the old Wirth's-law argument about lazy but high-overhead dev tools, or the situation with the self-justifying whims of UX "designers."
I'm not sure the suggestions at the bottom are realistic; as the top HN comment notes, it's a collective action problem, and right now the only way out is open source.
Article note: Putting lots of sensitive user data in internet-connected silos is never a good idea.
For passwords, use KeePass or something where you have a proper locally-encrypted DB, and sync that through a normal file-syncing tool (Seafile, Syncthing, Dropbox...whatever).
LastPass, one of the leading password managers, said that hackers obtained a wealth of personal information belonging to its customers as well as encrypted and cryptographically hashed passwords and other data stored in customer vaults.
The revelation, posted on Thursday, represents a dramatic update to a breach LastPass disclosed in August. At the time, the company said that a threat actor gained unauthorized access through a single compromised developer account to portions of the password manager's development environment and "took portions of source code and some proprietary LastPass technical information." The company said at the time that customers’ master passwords, encrypted passwords, personal information, and other data stored in customer accounts weren't affected.
Sensitive data, both encrypted and not, copied
In Thursday’s update, the company said hackers accessed personal information and related metadata, including company names, end-user names, billing addresses, email addresses, telephone numbers, and IP addresses customers used to access LastPass services. The hackers also copied a backup of customer vault data that included unencrypted data such as website URLs and encrypted data fields such as website usernames and passwords, secure notes, and form-filled data.
Article note: Looked for the fun old platform, stayed for the "Everything is subtle when you get close enough."
Kernel char being presumptively unsigned on all platforms (-funsigned-char) will be ...fun... for the large amount of code primarily developed on x86, where char is signed in the ABI.
Article note: I went and listened to the RISC V BoF at SC22 a month or so ago, mostly because I wanted to hear what their plans for making the proliferation of extensions manageable for toolchains and libraries looked like...
The presented plan included quality choices like "vendor toolchains forked from upstream that support their secret-sauce features, but suck at everything else compared to mainline, and are only maintained until the vendor runs out of VC money," "You can always use the nearest base profile and ignore the special hardware you theoretically paid this vendor a fortune for," and special attention to "Most of your workloads are torch or something anyway, just use the vendor's binaries."
I was not impressed.
Article note: I was talking about the weirdness of bool the other day and hit on some of the "char is a little special because of machines that don't have 8-bit byte addressing, and ABIs that disagree about signedness" ... but I wasn't thinking about how char oddness would interact with putc and getc.
It is a little bit non-obvious what the "right" thing is: if you're on, say, a DSP that only has 16-bit addressing, you don't really want "abc" written out as "a\0b\0c\0" even though that's what you sent bitwise, but you also don't want to silently discard data or have something read back differently than it was written. It's not a problem most places because POSIX says CHAR_BIT=8, but it'll still fuck you up in the edge cases.
...and then there's unicode, let's not think about that.
Article note: Even Google can't pull this bullshit off by fiat without serious repercussions.
For several years now, Google has wanted to kill Chrome's current extension system in favor of a more limited one, creating more restrictions on filtering extensions that block ads and/or work to preserve the user's privacy. The new extension system, called "Manifest V3" technically hit the stable channel in January 2021, but Chrome still supports the older, more powerful system, Manifest V2. The first steps toward winding down Manifest V2 were supposed to start January 2023, but as 9to5Google first spotted, Google now says it delayed the mandatory switch to Manifest V3 and won't even have a new timeline for a V2 shutdown ready until March.
The old timeline started in January 2023, when beta versions of Chrome would start running "experiments" that disable Manifest V2. This would move to the stable version in June, with the Chrome Web Store banning Manifest V2 extensions in January 2024. The new timeline is that there is no timeline, and every step is now listed as "postponed" or "under review."
In a post about the delay, Chrome Extensions Developer Advocate Simeon Vincent says, "We’ve heard your feedback on common challenges posed by the migration, specifically the service worker’s inability to use DOM capabilities and the current hard limit on extension service worker lifetimes. We’re mitigating the former with the Offscreen Documents API (added in Chrome 109) and are actively pursuing a solution to the latter." After adding that every step of the timeline is on hold, Vincent said, "Expect to hear more about the updated phase-out plan and schedule by March of 2023."