Article note: We've had well-regarded, well-researched books saying exactly this for over a decade.
If you exclude (young) people from physical spaces, from unstructured play, from less-supervised interaction, they'll figure out how to meet those needs as best they can with what they have available.
The fact that a number of companies are exploiting the situation in manipulative ways is a related issue, but not the _same_ issue.
Article note: Title is summary, but the rest does a thorough job.
It's such an obvious and reiterated "Your proposed solution introduces more problems than it solves" that it's getting hard to imagine anything other than ill-intent on the part of people pushing these policies.
Article note: AI Horseshit: wasting energy at every scale.
Almost three weeks ago, Mozilla released Firefox 141, which, among other features like memory optimizations for Linux and a built-in unit converter, brought controversial AI-enhanced tab groups.
Powered by a local AI model, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
Now, several users have taken to the Firefox subreddit to complain about high CPU usage when using the feature and to express their disappointment in Mozilla for adding AI to the browser.
Is anybody even asking for “AI” features in Firefox? Of the six people still left using Firefox, does even one of them want a chatbot in Firefox? Is any Firefox user the type of user to use some nebulous “AI” tool to organize their open tabs? Seeing these kinds of frivolities in Chrome or Edge or whatever makes sense, but in Firefox?
At least they’re easy to disable through about:config – just set both browser.ml.chat.enabled and browser.tabs.groups.smart.enabled to false. I mean, I guess I can understand Mozilla trying to ride the hype bubble, but at least make this nonsense opt-in, instead of asking users to dig around in obtuse config flags.
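For anyone who wants those settings to persist rather than flipping them by hand in about:config, the same two pref names quoted above can go in a `user.js` file in the Firefox profile directory. A minimal sketch, assuming the pref names from the article are current for your Firefox version:

```js
// user.js — place in your Firefox profile directory; applied at startup.
// Pref names are the two quoted in the article above.
user_pref("browser.ml.chat.enabled", false);           // AI chatbot sidebar off
user_pref("browser.tabs.groups.smart.enabled", false); // AI tab-group suggestions off
```

Note that prefs set via `user.js` are re-applied on every startup, so they will override any later changes made through about:config.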
Article note: Oh boy, another move to obscure manipulative behavior while simultaneously trying to rent-seek third parties for access to user content.
Reddit says that it has caught AI companies scraping its data from the Internet Archive’s Wayback Machine, so it’s going to start blocking the Internet Archive from indexing the vast majority of Reddit. The Wayback Machine will no longer be able to crawl post detail pages, comments, or profiles; instead, it will only be able to index the Reddit.com homepage, which effectively means Internet Archive will only be able to archive insights into which news headlines and posts were most popular on a given day.
“Internet Archive provides a service to the open web, but we’ve been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine,” spokesperson Tim Rathschmidt tells The Verge.
The Internet Archive’s mission is to keep a digital archive of websites on the internet and “other cultural artifacts,” and the Wayback Machine is a tool you can use to look at pages as they appeared on certain dates, but Reddit believes not all of its content should be archived that way. “Until they’re able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content) we’re limiting some of their access to Reddit data to protect redditors,” Rathschmidt says.
The limits will start “ramping up” today, and Reddit says it reached out to the Internet Archive “in advance” to “inform them of the limits before they go into effect,” according to Rathschmidt. He says Reddit has also “raised concerns” about the ability of people to scrape content from the Internet Archive in the past.
Reddit has a recent history of cutting off access to scraper tools as AI companies have begun to use (and abuse) them en masse, but it's willing to provide that data if companies pay. Reddit struck a deal with Google covering both Google Search and AI training data early last year, and a few months later it started blocking major search engines from crawling its data unless they pay. It also said its infamous API changes from 2023, which forced some third-party apps to shut down and led to protests, were made because those APIs were being abused to train AI models.
Reddit also struck an AI deal with OpenAI, but it sued Anthropic in June, claiming Anthropic was still scraping from Reddit even after Anthropic said it wasn’t scraping anymore.
“We have a longstanding relationship with Reddit and continue to have ongoing discussions about this matter,” Mark Graham, director of the Wayback Machine, says in a statement to The Verge.
Update, August 11th: Added statement from the Wayback Machine.
Article note: The "Github is just Microsoft MITMing and data-mining as much external development activity" theory is now confirmed.
Expect maximum AI hard sell intrusion until it becomes the next dead "standard" dev host, on the corpse pile with Sourceforge.
Article note: This is an _absurd_ situation.
Having that kind of persistent telematics should be so legally risky that no company would dare roll it out at scale.
I bought another computer. This one has a tragic origin story, an active pen with (like everything about it) shockingly good Linux support, and – bonus – has finally given me the impetus to switch from VirtualBox to libvirt for my VMs for obstinate software.
Trilith, my Dell Latitude 5340 2-in-1, pictured, as is tradition, with the current KDE default desktop at time of purchase.
Article note: That demo disk really was one of the niftiest artifacts ever shipped.
A modern looking GUI and a network stack and everything on a floppy, which pretty much "Just worked."
QNX itself is both historically interesting and widely used in embedded systems where most people probably don't realize that's what they're interacting with. It's fast, it's UNIX-like enough to be easy to develop for, it offers good RT features while maintaining UNIX-like semantics where it can, it's customizable, and apparently the license terms are reasonable. It's certainly getting squeezed from above by Linux's steadily-less-annoying RT features, and from below by Zephyr lately, but it's still a reasonable choice.
On Thursday, the Trump administration issued an executive order asserting political control over grant funding, including all federally supported research. The order requires that any announcement of funding opportunities be reviewed by the head of the agency or someone they designate, which means a political appointee will have the ultimate say over what areas of science the US funds. Individual grants will also require clearance from a political appointee and "must, where applicable, demonstrably advance the President’s policy priorities."
The order also instructs agencies to formalize the ability to cancel previously awarded grants at any time if they're considered to "no longer advance agency priorities." Until a system is in place to enforce the new rules, agencies are forbidden from starting new funding programs.
In short, the new rules would mean that all federal science research would need to be approved by a political appointee who may have no expertise in the relevant areas, and the research can be canceled at any time if the political winds change. It would mark the end of a system that has enabled US scientific leadership for roughly 70 years.