Category Archives: News

Shared items and notes from my feeds and browsing. Subscribe as feed.

The Capacitor Plague of the Early 2000s

Source: Hack a Day

Article note: Asianometry's take is better than most pop-media tellings: clearly not a single event, but rather an era where we just weren't very good at building long-lived electrolytic capacitors, especially low-ESR ones, punctuated by a few especially bad incidents. Exacerbated by (IMO over-emphasized here) heat and (IMO under-emphasized here) miniaturization.

Somewhere between 1999 and 2007 a plague swept through the world, devastating lives and businesses. Identified by a scourge of electrolytic capacitors violently exploding or spewing their liquid electrolyte guts all over the PCB, it led to a lot of finger-pointing and accusations of stolen electrolyte formulas. A recent video by [Asianometry] summarizes this story.

Blown electrolytic capacitors. (Credit: Jens Both, Wikimedia)

The bad electrolyte in the faulty capacitors lacked a suitable depolarizer, so more gas was produced inside the can, building up pressure until the capacitor failed, rather benignly if the scored top worked as a vent, or violently if it did not.

Other critical elements in the electrolyte are passivators, which protect the aluminium against the electrolyte's effects. Although the plague is often blamed on a single employee stealing an (incomplete) Rubycon electrolyte formula, the video questions this narrative, as the problem was far too widespread.

More likely the plague coincided with the introduction of low-ESR electrolytic capacitors, along with computers becoming increasingly power-hungry and thus stressing the capacitors in a much warmer environment than in the early 1990s. Combine this with the presence of counterfeit capacitors in the market, and the truth of what caused the Capacitor Plague probably involves a bit from each column, a narrative that seems to be the general consensus.

Posted in News | Leave a comment

US appeals court rules AI generated art cannot be copyrighted

Source: Hacker News

Article note: Years ago I was musing about the copyright implications of deterministic algorithms (e.g. "a program that permutes all 32x32x8bpp images, making all possible favicons", sketched below) as a fun experiment. It's been interesting (albeit much dumber than expected) to watch the AI bros, content racket, and U.S. legal system play dumb games with copyright on stochastic tools' inputs and outputs.
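The musing translates directly into code. A minimal sketch, with the names and pixel ordering being my own choices: treat each 32x32, 8-bit-per-pixel image as a 1024-byte integer, and counting through the integers deterministically enumerates every possible favicon.

```python
# Sketch of the thought experiment: a 32x32 image at 8 bits per pixel
# is a 1024-byte value, so the integers 0 .. 2**8192 - 1 enumerate
# every possible such image. (Actually finishing the loop is hopeless:
# that is on the order of 10**2466 images.)
WIDTH, HEIGHT = 32, 32
NUM_PIXELS = WIDTH * HEIGHT
TOTAL_IMAGES = 2 ** (NUM_PIXELS * 8)

def image_from_index(n):
    """Deterministically decode integer n into a HEIGHT x WIDTH grid of 8-bit pixels."""
    raw = n.to_bytes(NUM_PIXELS, "little")
    return [list(raw[row * WIDTH:(row + 1) * WIDTH]) for row in range(HEIGHT)]

# Index 0 is the all-black image; TOTAL_IMAGES - 1 is the all-white one.
favicon = image_from_index(123456789)
```

The point of the experiment being that every such image, including every "creative" favicon, sits at some index of a fully deterministic sequence.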
Comments
Posted in News | Leave a comment

Eight years later, new but familiar-looking PebbleOS watches appear

Source: Ars Technica

Article note: Neat, I've contemplated a hacker-y solution (Gadgetbridge, an open source or hacked watch, etc.) a couple times, but never quite been convinced. These look competent and user-centered in a way none of the modern designs seem to be.

Certain watches can stay just as they are and people will keep buying them. The Casio F-91W, the most-sold watch in the world, keeps the time on a readable display and offers a single daily alarm slot (unless you board-swap it). The Timex Weekender may last as long as mechanical watches exist.

What about the Pebble? Is there still room on people's wrists for the most exciting Kickstarter-backed tech of 2012–2016?

Eric Migicovsky, founder of the firm that was perhaps a bit too early to the smartwatch market, has made good on his pledge to find out and has made new Pebble watches available for preorder. The Core 2 Duo, "almost exactly a Pebble 2" with modernized chips, 30 days of battery life, and a black-and-white e-paper screen, is $150 at preorder and is scheduled to ship in July. The Core Time 2, Migicovsky's "dream watch," is bigger, has a color screen, and is metal, going for $225 right now. Its release is slated for December.

Read full article

Comments

Posted in News | Leave a comment

Checking In On the ISA Wars and Its Impact on CPU Architectures

Source: Hack a Day

Article note: This is a nice set of links/explanation that mostly matches my take on why RISC-V isn't really working out, explained in that critical post-ISA lens that a lot of pieces miss despite it being the status quo for close to 30 years. Bunch of questionable design decisions in the ISA, PLUS way too much divergence to make the toolchain situation tractable... which is the only way in which ISAs are still relevant.

An Instruction Set Architecture (ISA) defines the software interface through which, for example, a central processing unit (CPU) is controlled. Early computer systems didn't define a standard ISA as such, but over time the compatibility and portability benefits of having one became obvious. Of course, the best part about standards is that there are so many of them, and thus every CPU manufacturer came up with their own.

Throughout the 1980s and 1990s, the number of mainstream ISAs dropped sharply as the computer industry coalesced around a few major ones for each type of application. Intel's x86 won out on desktops and smaller servers, ARM proclaimed victory in low-power and portable devices, and for Big Iron you always had IBM's Power ISA. Since we last covered the ISA Wars in 2019, quite a lot has changed, including Apple shifting its desktop systems from x86 to ARM with Apple Silicon, and MIPS finally experiencing an afterlife in the form of LoongArch.

Meanwhile, six years after the aforementioned ISA Wars article in which the newcomer RISC-V was covered, that ISA seems not to have made the splash some had expected. This raises questions about what we can expect from RISC-V and other ISAs in the future, as well as how relevant different ISAs still are to aspects like CPU performance and microarchitecture.

RISC Everywhere

Unlike in the past, when CPU microarchitectures were still very much in flux, these days they all seem to coalesce around a similar set of features: out-of-order execution, prefetching, superscalar parallelism, speculative execution, branch prediction, and multi-core designs. Most of the performance these days is gained from addressing specific bottlenecks and optimizing for specific usage scenarios, which has resulted in things like simultaneous multithreading (SMT) and various pipelining and instruction decoder designs.

CPUs today are almost all what in the olden days would have been called RISC (reduced instruction set computer) architectures, with a relatively small number of heavily optimized instructions. Using approaches like register renaming, CPUs can handle many simultaneous threads of execution, all of which is completely invisible to the software side that talks to the ISA. To the software there is just the one register file, and unless something breaks the illusion, like when speculative execution has a bad day, each thread of execution is only aware of its own context and nothing else.

So if CPU microarchitectures have pretty much merged at this point, what difference does the ISA make?

Instruction Set Nitpicking

Within the world of ISA flame wars, the battle lines have currently mostly coalesced around topics like the pros and cons of delay slots and of compressed instructions, and setting status flags versus checking results in a branch. It is incredibly hard to compare ISAs in an apples-to-apples fashion, as the underlying microarchitecture of a commercially available ARMv8-based CPU will differ from that of a similar x86_64-, RV64I-, or RV64IMAC-based CPU. Here the highly modular nature of RISC-V adds significant complications as well.

If we look at where RISC-V is being used commercially today, it is primarily in simple embedded controllers, where this modularity is an advantage and compatibility with the zillion other possible RISC-V extension combinations is of no concern. Here, using RISC-V has an obvious advantage over an in-house proprietary ISA, due to the savings from outsourcing it to an open standard project. This is, however, also one of the major weaknesses of this ISA, as the lack of a fixed ISA along the pattern of ARMv8 and x86_64 makes tasks like supporting it in the Linux kernel much more complicated than they should be.

This has led Google to pull initial RISC-V support from Android due to the ballooning support complexity. Since every RISC-V-based CPU is only required to support the base integer instruction set, with so much left optional, from integer multiplication (M) and atomics (A) to bit manipulation (B) and beyond, all software targeting RISC-V has to explicitly test that the required instructions and functionality are present, or use a fallback, as the sketch below illustrates.
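What that fragmentation means in practice is runtime feature detection in every program or library that cares. A crude sketch, assuming Linux's /proc/cpuinfo "isa" line and deliberately ignoring most of the real ISA-string grammar (version numbers, the "G" shorthand, and so on); real code would use the hwprobe syscall or a complete parser:

```python
# Sketch: crude runtime detection of RISC-V extensions on Linux by
# parsing the "isa" field from /proc/cpuinfo (e.g. "rv64imafdc_zicsr").
# Simplified on purpose: it only illustrates why every RISC-V program
# needs some such check or a fallback path.

def parse_isa_string(isa):
    """Split an ISA string like 'rv64imafdc_zicsr' into its extensions."""
    base, _, multi = isa.lower().partition("_")
    exts = set(base[4:])  # single-letter extensions after 'rv32'/'rv64'
    if multi:
        exts |= set(multi.split("_"))  # multi-letter (Z*) extensions
    return exts

def has_extensions(required, cpuinfo_path="/proc/cpuinfo"):
    """Check whether every extension in `required` is reported present."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("isa"):
                exts = parse_isa_string(line.split(":", 1)[1].strip())
                return all(r in exts for r in required)
    return False

# e.g. pick a multiply routine: hardware if M is present, else a fallback.
use_hw_mul = has_extensions({"m"})
```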

Tempers are also running hot when it comes to RISC-V's lack of integer overflow traps and carry instructions. As for whether compressed instructions are a good idea: the ARMv8 camp sees no need for them, the RISC-V camp happily defends them, and meanwhile x86_64 still happily uses double the number of instruction lengths courtesy of its CISC legacy, which would make x86_64 twice as bad or twice as good as RISC-V, depending on who you ask.

Meanwhile, an engineer with strong experience on the ARM side of things wrote a lengthy dissertation a while back on the pros and cons of these three ISAs. Their conclusion is that RISC-V is 'minimalist to a fault', with overlapping instructions and no condition codes or flags, requiring compare-and-branch instructions instead. This latter point cascades into a number of compromises, which is one of the major reasons why many see RISC-V as problematic.
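The carry complaint is easiest to see with multi-word arithmetic. A minimal sketch, assuming 64-bit words and with all names mine, of what a compiler for a flagless ISA must emit: the carry-out of each word is recomputed with an unsigned comparison (RISC-V would use an SLTU instruction here) instead of falling out of the adder into a carry flag.

```python
# Sketch: multi-word addition without a carry flag, as a compiler for a
# flagless ISA like RISC-V must emit it.
MASK64 = (1 << 64) - 1  # simulate 64-bit register wraparound

def add_multiword(a_words, b_words):
    """Add two equal-length little-endian lists of 64-bit words."""
    result, carry = [], 0
    for a, b in zip(a_words, b_words):
        s = (a + b + carry) & MASK64
        # No carry flag: unsigned overflow happened iff the truncated
        # sum wrapped below an operand (this is the SLTU comparison).
        carry = 1 if (s < a or (s == a and carry)) else 0
        result.append(s)
    return result, carry

# 2**64 - 1 plus 1 carries into the second word: ([0, 1], final carry 0).
print(add_multiword([MASK64, 0], [1, 0]))
```

On a flags-based ISA like x86_64 or ARMv8 the same loop is one add-with-carry instruction per word, with no explicit comparisons.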

In summary, in the absence of clear advantages for RISC-V in fields where other ISAs are already established, its strong points seem to be mostly where its extreme modularity and lack of licensing requirements are seen as convincing arguments, which should not keep anyone from enjoying a good flame war now and then.

The China Angle

The Loongson 3A6000 (LS3A6000) CPU. (Credit: Geekerwan, Wikimedia)

Although everywhere other than China has pretty much coalesced around the three ISAs already described, there are always exceptions. Unlike Russia's ill-fated very long instruction word (VLIW) Elbrus architecture, China's CPU-related efforts have borne significantly more fruit. Starting with the Loongson CPUs, China's home-grown microprocessor architecture scene began to take real shape.

Originally these were MIPS-compatible CPUs, but starting with the 3A5000 in 2021, Chinese CPUs began to use the new LoongArch ISA. Described as being a 'bit like MIPS or RISC-V' in the Linux kernel documentation, it features three variants, ranging from a reduced 32-bit version (LA32R) and a standard 32-bit version (LA32S) to a 64-bit version (LA64). The current LS3A6000 CPU has four cores with SMT support. In reviews these chips are shown to be rapidly catching up to modern x86_64 CPUs, including when it comes to overclocking.

Of course, with this being China-only hardware, few Western reviewers have subjected the LS3A6000, or its upcoming successor the LS3A7000, to an independent test.

In addition to LoongArch, other Chinese companies are using RISC-V for their own microprocessors, such as SpacemiT, an AI-focused company whose products also include more generic processors, among them the K1 octa-core CPU that saw use in the MuseBook laptop. As with all commercial RISC-V-based cores out today, these are no speed monsters, and even the SiFive Premier P550 SoC gets soundly beaten by the already rather long-in-the-tooth ARM-based SoC in a Raspberry Pi 4.

Perhaps the most successful use of RISC-V in China is in the cores of Espressif's popular ESP32-C range of MCUs, although here, too, they are the lower-end designs relative to the Xtensa LX6 and LX7 cores that power Espressif's higher-end MCUs.

Considering all this, it wouldn't be surprising if China's ISA scene outside of embedded ends up featuring mostly LoongArch, a lot of ARM, some x86_64, and a sprinkling of RISC-V to round it all out.

It’s All About The IP

The distinction between ISA and microarchitecture can be seen clearly by contrasting Apple Silicon with other ARMv8-based CPUs. Although these all support a version of the same ARMv8 ISA, the magic sauce is in the intellectual property (IP) blocks that are integrated into the chip. These range from memory controllers, PCIe SerDes blocks, and integrated graphics (iGPU) to encryption and security features. Unless you are an Apple or an Intel with your own GPU solution, you will be licensing the iGPU block along with other IP blocks from IP vendors.

These IP blocks offer the benefit of off-the-shelf functionality with known performance characteristics, but they are also where much of the cost of a microprocessor design ends up going. Developing such functionality from scratch can pay for itself if you reuse the same blocks over and over, as Apple and Qualcomm do. For a start-up hardware company this is one of the biggest investments, which is why such companies tend to license a fully manufacturable design from Arm.

The actual cost of the ISA in terms of licensing is effectively a rounding error, while the benefit of being able to leverage existing software and tooling is the main driver. This is why a new ISA like LoongArch may very well pose a real challenge to established ISAs in the long run, because it is being given a chance to develop in a very large market with guaranteed demand.

Spoiled For Choice

Meanwhile, the Power ISA is also freely available for anyone to use without licensing costs; the only major requirement is compliance with the Power ISA specification. The OpenPOWER Foundation is now also part of the Linux Foundation, with a range of IBM Power cores open sourced. These include the A2O core, based on the A2I core which powered the Xbox 360 and the PlayStation 3's Cell processor, as well as the Microwatt reference design based on the much newer Power ISA 3.0.

Whatever your fancy, and regardless of whether you're tinkering on a hobby or a commercial project, it would seem that there is plenty of diversity in the ISA space to go around. Although it's only human to pick a favorite and root for it, there's something to be said for each ISA. Whether it's a better teaching tool, more suitable for highly customized embedded designs, or simply runs decades' worth of software without fuss, they all have their place.

Posted in News | Leave a comment

Why SNES hardware is running faster than expected—and why it’s a problem

Source: Ars Technica

Article note: Neat. I always half-joke that we could easily teach an entire upper-division class about oscillators/frequency generators/clock manipulation (resonant circuits, ring oscillators, ceramic oscillators, crystals and their drive modes, dividers, PLLs, multi-phase clocks, a little bit of the beat/heterodyne stuff that radio people care about, etc.); this one goes in my example bank.

Ideally, you'd expect any Super NES console, if properly maintained, to operate identically to any other Super NES unit ever made. Given the same base ROM file and the same set of precisely timed inputs, all those consoles should give the same gameplay output across individual hardware and across time.

The TASBot community relies on this kind of solid-state predictability when creating tool-assisted speedruns that can be executed with robotic precision on actual console hardware. But on the SNES in particular, the team has largely struggled to get emulated speedruns to sync up with demonstrated results on real consoles.

After significant research and testing on dozens of actual SNES units, the TASBot team now thinks that a cheap ceramic resonator used in the system's Audio Processing Unit (APU) is to blame for much of this inconsistency. While Nintendo's own documentation says the APU clock should run at a consistent 24.576 MHz (and the associated Digital Signal Processor sample rate at a flat 32,000 Hz), in practice that rate can vary just a bit based on heat, system age, and minor physical variations that develop in different console units over time.
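The scale of the problem is easy to put numbers on. A small sketch; the nominal rates are from the article, while the ppm deviations below are illustrative assumptions, not measurements of any real console:

```python
# Sketch: accumulated desync from a slightly-off ceramic resonator.
NOMINAL_APU_HZ = 24_576_000   # nominal APU clock (24.576 MHz)
NOMINAL_DSP_HZ = 32_000       # nominal DSP sample rate

def actual_dsp_rate(ppm_error):
    """DSP sample rate when the resonator is off by ppm_error parts per million."""
    return NOMINAL_DSP_HZ * (1 + ppm_error / 1e6)

def drift_seconds(runtime_s, ppm_error):
    """Seconds of timing drift accumulated over runtime_s at that error."""
    return runtime_s * ppm_error / 1e6

# A hypothetical 1000 ppm (0.1%) fast resonator over a 30-minute run:
print(actual_dsp_rate(1000))          # 32032.0 Hz instead of 32000 Hz
print(drift_seconds(30 * 60, 1000))   # 1.8 seconds of accumulated drift
```

Even a mere 10 ppm deviation accumulates about 18 ms over that half hour, more than one 60 Hz frame, which is plenty to desync a frame-precise input stream.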

Read full article

Comments

Posted in News | Leave a comment

Everything you say to your Echo will be sent to Amazon starting on March 28

Source: Ars Technica

Article note: That's pretty fuckin' dystopian.
Comments
Posted in News | Leave a comment

New Intel CEO Lip-Bu Tan will pick up where Pat Gelsinger left off

Source: Ars Technica

Article note: This is a genuinely interesting choice. He supposedly previously left Intel's board because of disagreements with the management culture (which is, by all reports, real heavy around the middle), and was on record as being pretty pro-foundry and anti-unit-divestment. He successfully led Cadence (EDA tools) through a pretty major turnaround, so he has lots of experience on the design side of the semiconductor industry. He's also been involved with SMIC (China's heavily state-supported semiconductor entity).

After a little over three months, Intel has a new CEO to replace ousted former CEO Pat Gelsinger. Intel's board announced that Lip-Bu Tan will begin as Intel CEO on March 18, taking over from interim co-CEOs David Zinsner and Michelle Johnston Holthaus.

Gelsinger was booted from the CEO position by Intel's board on December 2 after several quarters of losses, rounds of layoffs, and canceled or spun-off side projects. Gelsinger sought to turn Intel into a foundry company that also manufactured chips for fabless third-party chip design companies, putting it into competition with Taiwan Semiconductor Manufacturing Company (TSMC), Samsung, and others, a plan that Intel said it was still committed to when it let Gelsinger go.

Intel said that Zinsner would stay on as executive vice president and CFO, and Johnston Holthaus would remain CEO of the Intel Products Group, which is mainly responsible for Intel's consumer products. These were the positions both executives held before serving as interim co-CEOs.

Read full article

Comments

Posted in News | Leave a comment

DOJ: Google must sell Chrome, Android could be next

Source: Ars Technica

Article note: They're gonna spin a collusion engine as a quasi-independent entity. Basically the members of that Linux Foundation "Supporters of Chromium-Based Browsers" initiative will form an LLC or something to hold the trademarks, which will be de-facto controlled by Google and Microsoft as the largest employers of active developers and sources of cashflow. They'll let Opera and Brave and such act like they have seats at the table to provide plausible deniability, while being such a center of gravity in the ecosystem that the largest incumbents can be even bigger bullies, except where the infighting gets in the way of collusion. Browsers themselves don't make any income; there is only secondary money in providing other parties access to user data/behavioral influence, so actually independent entities holding up the ridiculous complexity we've stuffed into browsers is not really a serious proposition (as we've been seeing with Mozilla recently).
Comments
Posted in News | Leave a comment

The ESP32 Bluetooth Backdoor That Wasn’t

Source: Hack a Day

Article note: I didn't post anything when that hype was passing through because I was pretty sure it was "The documented API allowing an attached host to control the device." Sure enough.

Recently there was a panicked scrambling after the announcement by [Tarlogic] of a 'backdoor' in Espressif's popular ESP32 MCUs; specifically, a backdoor on the Bluetooth side that would give an attacker a lot of control over the system. As [Xeno Kovah] explains, much about these claims is exaggerated, and calling it a 'backdoor' goes far beyond the scope of what was actually discovered.

To summarize the original findings: the researchers found a number of vendor-specific commands (VSCs) in the (publicly available) ESP32 ROM that can be sent via the host-controller interface (HCI) between the software and the Bluetooth PHY. They found that these VSCs could do things like write and read the firmware in the PHY, as well as send low-level packets.
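For context on what "sending a VSC over HCI" amounts to: per the Bluetooth core spec, a vendor-specific command is just an ordinary HCI command packet whose Opcode Group Field is the vendor-reserved 0x3F. A minimal sketch of building one for a UART (H4) transport; the OCF value and payload below are made-up placeholders, not actual ESP32 commands:

```python
import struct

HCI_COMMAND_PKT = 0x01  # H4 packet type byte for HCI commands
OGF_VENDOR = 0x3F       # Opcode Group Field reserved for vendor-specific commands

def build_vsc(ocf, params=b""):
    """Build a raw HCI vendor-specific command packet.
    ocf is the vendor-chosen Opcode Command Field (0..0x3FF)."""
    opcode = (OGF_VENDOR << 10) | ocf           # 16-bit opcode = OGF:OCF
    # Packet layout: type (u8), opcode (u16 LE), parameter length (u8), params.
    return struct.pack("<BHB", HCI_COMMAND_PKT, opcode, len(params)) + params

# Hypothetical example: OCF 0x0001 with a 2-byte payload.
pkt = build_vsc(0x0001, b"\x00\x01")  # -> b'\x01\x01\xfc\x02\x00\x01'
```

Nothing about this framing is hidden or exotic; it is the same mechanism every controller vendor uses for its own SDK commands.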

The thing about VSCs is, of course, that they are a standard feature of Bluetooth controllers, with each manufacturer implementing a range of them for use with their own software SDK. These VSCs allow for things like updating firmware, reporting temperatures, and debugging, and are generally documented (except for Broadcom's).

Effectively, [Xeno] makes the point that VSCs are a standard feature in Bluetooth controllers which, like most features, can also be abused. [Tarlogic] has since updated their article to distance themselves from the 'backdoor' term, preferring to call these VSCs a 'hidden feature' instead. That said, if these VSCs in ESP32 chips are a security risk, then as [Xeno] duly notes, millions of BT controllers from Texas Instruments, Broadcom, and others with similar VSCs would be a security risk as well.

Posted in News | Leave a comment

PowerPC Windows NT made to run on GameCube and Wii

Source: OSNews

Article note: This is fuckin goofy, and I love it.

Remember about half a year ago, when the PowerPC versions of Windows NT were made to run on certain models of PowerPC Macs? The same developer responsible for that work, Rairii, has taken all of this to the next level, and it's now possible to run the PowerPC version of Windows NT on the GameCube, Wii, Wii U, and a few related development boards.

NT 3.51 RTM and higher. NT 3.51 betas (build 944 and below) will need kernel patches to run due to processor detection bugs. NT 3.5 will never be compatible, as it only supports PowerPC 601. (The additional suspend/hibernation features in NT 3.51 PMZ could be made compatible in theory but in practise would require all of the additional drivers for that to be reimplemented.)

↫ Windows NT for GameCube/Wii GitHub page

As you may have expected, there are some issues, such as instability and random reboots, USB hotplugging not working, and other, smaller problems, but none of that takes away from just how awesome and impressive this really is. There's framebuffer support for the Flipper GPU and full support for the controller ports, along with a ton of compatible controllers and related input devices, including (as yet untested) support for the N64 mouse and keyboard.

The GameCube and Wii (U) are PowerPC computers, after all, running IBM processors, so it shouldn’t be surprising that running Windows NT on them is possible. Still, it’s an impressive feat of engineering to get this to work at all, let alone in as complete a state as it appears to be.

Posted in News | Leave a comment