Daily Archives: 2025-04-18

Microsoft’s “1‑bit” AI model runs on a CPU only, while matching larger systems

Source: Ars Technica

Article note: This is one of the few lines of AI research I'm excited about, and I've been excited since that 2023 paper. Most of the even vaguely neuromorphic stuff should work approximately as well on essentially 1-2 bits per signal (basically just positive, negative, and maybe 0) as it does with floats, and that should be _markedly_ cheaper compute-wise, making it likely to actually be worthwhile without the hype and burn-barrels full of investor money. As my graduate advisor's academic offspring, I also find the idea of variable bit-width/bit-serial/packed architectures generally intriguing, and this continuing to work out would favor that design family.

When it comes to actually storing the numerical weights that power a large language model's underlying neural network, most modern AI models rely on the precision of 16- or 32-bit floating point numbers. But that level of precision can come at the cost of large memory footprints (in the hundreds of gigabytes for the largest models) and significant processing resources needed for the complex matrix multiplication used when responding to prompts.
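To put rough numbers on that (my back-of-envelope arithmetic, not figures from the article): a hypothetical 70-billion-parameter model stored as 16-bit floats needs about 140 GB for the weights alone, while a ternary encoding at the information-theoretic log2(3) ≈ 1.58 bits per weight would need roughly a tenth of that.

```python
# Back-of-envelope weight-storage math; the 70B parameter count is an
# illustrative assumption, not a model from the article.
def footprint_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

n = 70e9
print(f"fp16:    {footprint_gb(n, 16):.1f} GB")    # ~140 GB
print(f"ternary: {footprint_gb(n, 1.58):.1f} GB")  # ~13.8 GB at log2(3) bits/weight
```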

Now, researchers at Microsoft's General Artificial Intelligence group have released a new neural network model that works with just three distinct weight values: -1, 0, or 1. Building on top of previous work Microsoft Research published in 2023, the new model's "ternary" architecture reduces overall complexity and offers "substantial advantages in computational efficiency," the researchers write, allowing it to run effectively on a simple desktop CPU. And despite the massive reduction in weight precision, the researchers claim that the model "can achieve performance comparable to leading open-weight, full-precision models of similar size across a wide range of tasks."
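For the curious: the "absmean" quantization recipe from the BitNet b1.58 line of work maps each full-precision weight to the nearest of {-1, 0, +1} after scaling by the tensor's mean absolute value. A minimal numpy sketch of that idea (my reconstruction from the papers, not Microsoft's released code):

```python
import numpy as np

def ternarize(w: np.ndarray, eps: float = 1e-6):
    """Quantize a float weight matrix to {-1, 0, +1} with a per-tensor scale.

    Follows the 'absmean' recipe from the BitNet b1.58 paper: scale by the
    mean absolute value, round, and clip. Returns the ternary matrix plus
    the scale needed to approximately reconstruct original magnitudes.
    """
    scale = np.mean(np.abs(w)) + eps
    w_ternary = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_ternary, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
wt, s = ternarize(w)
print(wt)                          # entries are only -1, 0, or 1
print(np.abs(w - wt * s).mean())   # rough per-weight reconstruction error
```

Only the single scale factor stays in floating point, so the per-weight storage and multiplies go away while the overall magnitudes are preserved.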

Watching your weights

The idea of simplifying model weights isn't a completely new one in AI research. For years, researchers have been experimenting with quantization techniques that squeeze their neural network weights into smaller memory envelopes. In recent years, the most extreme quantization efforts have focused on so-called "BitNets" that represent each weight in a single bit (representing +1 or -1).
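The efficiency argument is easiest to see in the matrix multiply itself: with weights restricted to +1/-1 (or -1/0/+1 in the ternary case), every "multiplication" collapses into an addition, a subtraction, or a skip. A toy sketch of a ternary matrix-vector product with no weight multiplies (illustrative only, not how any production kernel is written):

```python
import numpy as np

def matvec_ternary(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Matrix-vector product for weights restricted to {-1, 0, +1}.

    No multiplications by weight values are needed: each output element is
    the sum of inputs where the weight is +1 minus the sum where it is -1.
    Zero weights are skipped entirely, which is the ternary format's edge
    over a pure 1-bit (+1/-1) encoding.
    """
    pos = np.where(w == 1, x, 0.0).sum(axis=1)
    neg = np.where(w == -1, x, 0.0).sum(axis=1)
    return pos - neg

rng = np.random.default_rng(0)
w = np.clip(np.round(rng.normal(size=(3, 5))), -1, 1).astype(np.int8)
x = rng.normal(size=5)
assert np.allclose(matvec_ternary(w, x), w @ x)  # matches an ordinary matmul
```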


Posted in News | Leave a comment

PDCurses – for environments that don’t fit the termcap/terminfo model

Source: Hacker News

Posted in News | Leave a comment

Anti-Spying Phone Pouches Offered To EU Lawmakers For Trip To Hungary

Source: Slashdot

An anonymous reader shares a report: Members of the European Parliament were offered special pouches to protect digital devices from espionage and tampering for a visit to Hungary this week, a sign of rising spying fears within Europe. Five lawmakers from the Parliament's civil liberties committee traveled to Hungary on Monday for a three-day visit to inspect the EU member country's progress on democracy, the rule of law and fundamental rights. One lawmaker on the trip confirmed to POLITICO that the Parliament officials joining the delegation were offered Faraday bags -- special metal-lined pouches that block electromagnetic signals -- by the Parliament's services and were also advised to be cautious about using public Wi-Fi networks or charging facilities.


Posted in News | Leave a comment