Author Archives: pappp

Anbernic RG351(p) and Powkiddy RGB10 Max2 Button Membranes are Drop-In Compatible

I’ve had an Anbernic RG351P for roughly 2 years now, and it’s an absolutely delightful object.

For those unfamiliar: the RG351 is an example of a class of little gaming-emulation handhelds that started back in the mid-to-late 2000s with things like the Dingoo A330. They are, essentially, a tiny ARM (+ usually Linux) machine the size and shape of a handheld gaming device, set up with a built-in controller specifically to run games in emulation. The stock firmware on the RG351 is an ancient EmulationStation/RetroArch/Linux stack, but there are better alternatives – IMO, throwing in a decent SD card loaded with AmberElec is the first thing to do when you get one. It will play essentially everything from the dawn of gaming through the PlayStation and some (but not all) of the Nintendo 64 library, and has limited/marginal support for PSP and DS. It is …straightforward but not the sort of thing I’ll link… to obtain the full ROMsets for these platforms; they are frankly not that large. I paid about $90 for mine; I think they’ve gone up a bit since, but there is a whole range of similar options at different price points, build qualities, and levels of platform support.

The build quality, however, isn’t perfect. It’s small-brand China-export hardware; you know you have to be a little careful with it just from handling it (I keep mine in a fitted case when throwing it in a bag). I’ve been through a screen replacement (I got red lines in my original after about a year) and re-gluing the back rubber pads (the original glue melted), and now, after two years, I’ve worn through the membrane behind the “A” button, which is what this post is actually about.

I opened it up, found the worn-through button membrane, looked around online, couldn’t find any in stock, contacted Anbernic through their AliExpress storefront (none available), asked the subreddit (no leads), and couldn’t come up with an exact replacement membrane.

HOWEVER, on inspection, the membranes from the similar Powkiddy RGB10 appeared extremely similar, and those are readily available (as a ~$12 pack of all the membranes and button caps to refit an RGB10, which includes two of the 4x membranes). I ordered that set via AliExpress, and when it showed up ~16 days later, I was able to confirm that the membranes are slightly different, but drop-in compatible.

As you can see from the photos, the Powkiddy membranes have a bit more flat area, and the bottoms of the mounting holes are filled in rather than fully punched through, but the dimensions are exactly right. The height and force of the domes are almost identical to the originals, and at effectively $6/membrane it’s a very reasonable repair.

Posted in DIY, Electronics, Entertainment, General, Objects | Leave a comment

Setting time on fire and the temptation of The Button

Source: Hacker News

Article note: Grammatically pleasing AI spew is exploiting the same mental shortcut as the well-established "Anything looks credible if it's typeset in LaTeX" or "Printing resumes on nice paper for a subconscious boost" effects. We use secondary/contextual indicators, which are historically proxies for "How much time, expense, and/or expertise was involved in preparing this document," to judge documents. It's not even that different from printing eliminating the quality of the scribe's work as a tell. Which is interesting.
Posted in News | Leave a comment

Git is simply too hard

Source: Hacker News

Article note: I've been known to refer to using git for most projects as "Hammering a nail by launching and deorbiting a space station on to it." It's a great piece of tech internally, but it's a fucking awful interface that maps poorly to how it works internally and even more poorly to any human model of files or software or work.
Posted in News | Leave a comment

Microsoft will end support for Cortana on Windows later this year

Source: The Verge - All Posts

Article note: People spent _years_ trying to reliably get rid of the previous unwanted Clippy/BonziBuddy but even more invasive type thing being forced on them, so now they're swapping it out for ...an even more invasive thing, now with AI hype fairy dust.
[Image: The Microsoft logo on an orange background. Illustration by Alex Castro / The Verge]

Microsoft is ending support for Cortana in Windows. In a support page spotted by XDA Developers and Windows Central, the company says it will “no longer support Cortana in Windows as a standalone app” starting later this year.

Cortana’s discontinuation on Windows doesn’t come as much of a surprise. During its Build conference in May, Microsoft announced its new Windows Copilot tool, which will live in your taskbar and use AI to help you do everything — and more — that Cortana does once it’s widely released. That includes summarizing content, rewriting text, asking questions, adjusting your computer’s settings, and more.

Microsoft first brought Cortana to Windows 10 in 2015, allowing you to set reminders, open applications, and ask...

Continue reading…

Posted in News | Leave a comment

Open-Source LLMs

Source: Schneier on Security

Article note: This is the thing I've been most interested in about the whole AI bullshit boom. IF a couple major incumbents get technical or regulatory capture, it will be a pure wealth concentration weapon for the elite against everyone else. If small open source instances turn out to capture most of the utility, it will avoid the oligarchical problem, but be effectively unregulatable and unleash a swarm of small antisocial applications (mostly fraud and disinformation). Either is going to be utter chaos from a copyright/data monetization/plagiarism standpoint, but I'm such a copyright minimalist anyway, as long as it reduces rather than increases wealth concentration, fuck 'em.

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of AI research has dramatically changed.

This development hasn’t made the same splash as other corporate announcements, but its effects will be much greater. It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented. But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations.

The power shift comes from simplification. The LLMs built by OpenAI and Google rely on massive data sets, measured in the tens of billions of bytes, computed on by tens of thousands of powerful specialized processors producing models with billions of parameters. The received wisdom is that bigger data, bigger processing, and larger parameter sets were all needed to make a better model. Producing such a model requires the resources of a corporation with the money and computing power of a Google or Microsoft or Meta.

But building on public models like Meta’s LLaMA, the open-source community has innovated in ways that allow results nearly as good as the huge models—but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop. Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.

More importantly, these smaller and faster LLMs are much more accessible and easier to experiment with. Rather than needing tens of thousands of machines and millions of dollars to train a new model, an existing model can now be customized on a mid-priced laptop in a few hours. This fosters rapid innovation.

It also takes control away from large companies like Google and OpenAI. By providing access to the underlying code and encouraging collaboration, open-source initiatives empower a diverse range of developers, researchers, and organizations to shape the technology. This diversification of control helps prevent undue influence, and ensures that the development and deployment of AI technologies align with a broader set of values and priorities. Much of the modern internet was built on open-source technologies from the LAMP (Linux, Apache, MySQL, and PHP/Perl/Python) stack—a suite of applications often used in web development. This enabled sophisticated websites to be easily constructed, all with open-source tools that were built by enthusiasts, not companies looking for profit. Facebook itself was originally built using open-source PHP.

But being open-source also means that there is no one to hold responsible for misuse of the technology. When vulnerabilities are discovered in obscure bits of open-source technology critical to the functioning of the internet, often there is no entity responsible for fixing the bug. Open-source communities span countries and cultures, making it difficult to ensure that any country’s laws will be respected by the community. And having the technology open-sourced means that those who wish to use it for unintended, illegal, or nefarious purposes have the same access to the technology as anyone else.

This, in turn, has significant implications for those who are looking to regulate this new and powerful technology. Now that the open-source community is remixing LLMs, it’s no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries. The only governance mechanism available to governments now is to regulate usage (and only for those who pay attention to the law), or to offer incentives to those (including startups, individuals, and small companies) who are now the drivers of innovation in the arena. Incentives for these communities could take the form of rewards for the production of particular uses of the technology, or hackathons to develop particularly useful applications. Sticks are hard to use—instead, we need appealing carrots.

It is important to remember that the open-source community is not always motivated by profit. The members of this community are often driven by curiosity, the desire to experiment, or the simple joys of building. While there are companies that profit from supporting software produced by open-source projects like Linux, Python, or the Apache web server, those communities are not profit driven.

And there are many open-source models to choose from. Alpaca, Cerebras-GPT, Dolly, HuggingChat, and StableLM have all been released in the past few months. Most of them are built on top of LLaMA, but some have other pedigrees. More are on their way.

The large tech monopolies that have been developing and fielding LLMs—Google, Microsoft, and Meta—are not ready for this. A few weeks ago, a Google employee leaked a memo in which an engineer tried to explain to his superiors what an open-source LLM means for their own proprietary tech. The memo concluded that the open-source community has lapped the major corporations and has an overwhelming lead on them.

This isn’t the first time companies have ignored the power of the open-source community. Sun never understood Linux. Netscape never understood the Apache web server. Open source isn’t very good at original innovations, but once an innovation is seen and picked up, the community can be a pretty overwhelming thing. The large companies may respond by trying to retrench and pulling their models back from the open-source community.

But it’s too late. We have entered an era of LLM democratization. By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit motivated, open-source initiatives are moving us into a more dynamic and inclusive AI landscape. This doesn’t mean that some of these models won’t be biased, or wrong, or used to generate disinformation or abuse. But it does mean that controlling this technology is going to take an entirely different approach than regulating the large players.

This essay was written with Jim Waldo, and previously appeared on Slate.com.

EDITED TO ADD (6/4): Slashdot thread.

Posted in News | Leave a comment

Dumb Linux trick: Suspicious of damaged system files on an RHEL-like (rocky, alma, centos, probably fedora, whatever) system? Once you get it unfucked enough to use the package manager (dealing with filesystem problems, hand-re-installing the rpms required to make dnf … Continue reading
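The excerpt above is truncated, but the usual shape of that recovery trick can be sketched; a minimal sketch assuming an RPM-based system, with the file and package names here (`/usr/bin/ls`, `coreutils`) used purely as examples:

```shell
# Verify every installed package's files against the rpm database.
# Flags in the output mark what differs: S=size, M=mode, 5=checksum, T=mtime.
rpm -Va

# Find which package owns a suspect file...
rpm -qf /usr/bin/ls

# ...and pull a fresh copy of that package from the repos.
dnf reinstall -y coreutils
```

On a badly damaged system you may first need to hand-install downloaded packages with `rpm -Uvh --force` just to get `dnf` itself working again, which is presumably the "hand-re-installing the rpms" step the post refers to.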

Posted on by pappp | Leave a comment

Had a call with Reddit to discuss pricing. Bad news for third-party apps

Source: Hacker News

Article note: Wow, reddit really is trying to suicide. $12,000/50 million requests and no access to marked NSFW content through the API basically kills 3rd party clients, their first-party client is such a shitshow of bad UX, privacy invasion, and advertising I don't think many core users will stick with them if that's the option (and the structure of reddit means if the mods and high-activity users leave and/or are replaced by corporate scambots, they're done). We've seen it happen to Digg and Tumblr and Twitter. It's a shame, there's a ton of relatively good content (particularly in the sense that it isn't _entirely_ overrun with marketing spam like many quarters) locked up in reddit, I wonder where the great nerd migration will go next. I'd be kind of pleased if it were a cluster of major Lemmy instances or something similarly more open.
Posted in News | Leave a comment

The timing of computer search warrants when it takes years to guess the password

Source: Hacker News

Article note: The argument that making a copy of someone's machine and trying to hack it forever [Is | Is Not] a violation of a time limit on a search warrant is interesting. I'm a little curious about some particulars of this case (eg. they worked on a copy, but iPhones are supposed to have a hardware key to prevent exactly that?) There are many suspicions about certain agencies doing drag collection and storage for later decryption, and there are lots of procedural ways that already exist to be allowed to keep hackin' on copied data, so perhaps this is really just a narrow authority spat between a judge and an agency?
Posted in News | Leave a comment

IRIX community proposes to reverse-engineer the last 32 bit IRIX kernel

Source: OSNews

Article note: That is an odd project - while the userland was interesting, Irix's kernel was (AFAIK) not particularly special, at least not until later. It has always been Unix-brand-Unix, with occasionally some BSD-derived extensions before they were folded in. The 5.3 release they're talking about is directly SVR4 derived, and it didn't pick up the interesting in-house scalable MP extensions until 6.4, so the kernel they're working on is essentially a MIPS port of SVR4.

The IRIX Network, the primary community for SGI and IRIX enthusiasts, has announced a fundraising effort to reverse-engineer the last 32 bit version of the IRIX kernel.

IRIX-32, so named for its basis on kernel and APIs of the last 32-bit compatible IRIX (5.3) is a proposed reverse engineering project to be conducted by a team of developers in the US and the EU.

Purpose: We will reverse engineer the version 5.3 kernel with future goal of producing a fully open source reference implementation. This is the first major step and the delivery will be documentation and reference material to enable effective emulation and driver development for IRIX.

This is huge. If they can do this, they will save the operating system from an inevitable demise. I’m of course 100% behind this, and the total cost of 8500 dollars – 6500 from the fundraiser, 1000 as a donation from the IRIX Network itself, and 1000 from a few companies still using IRIX – is definitely realistic in the sense that they should be able to meet their goal. It’s not a lot of money, and it’s not meant as fair compensation for the work delivered – the teams of developers involved know this and aren’t asking for such either.

The thread so far is a great read. They haven’t selected a fundraising platform yet, but I am definitely throwing money their way once they do.

Posted in News | Leave a comment

Raspberry Pi/PlatformIO conflict blocks support for Pico-Arduino toolchain

Source: Hacker News

Article note: Apparently PlatformIO tried to shake down the Pi foundation for ongoing funding in order to accept community-contributed RP2040 support? I was playing with various STM32 toolchains for the last few weeks (CubeIDE, the two Arduino cores, libopencm3+gnu parts), and it was one I was going to consider, but... that's distasteful.
Posted in News | Leave a comment