Source: OSNews
Article note: Every academic CompE type says some variation of "I want a bunch of simple, predictable cores!" The problem (IMO) is that so few people can effectively program in an environment with general-purpose concurrency, and memory management in dynamic environments is so impossibly hard (always, but especially in the face of concurrency), that big unpredictable pipelined out-of-order basically-a-JIT-to-its-internal-instruction-set cores with constrained SIMD engines bolted on keep winning in practice.
The GPU in your computer is about 10 to 100 times more powerful than the CPU, depending on workload. For real-time graphics rendering and machine learning, you are enjoying that power, and doing those workloads on a CPU is not viable. Why aren’t we exploiting that power for other workloads? What prevents a GPU from being a more general purpose computer?
↫ Raph Levien
Fascinating thoughts on parallel computation, including some mentions of earlier projects like Intel’s Larrabee or the Connection Machine with 64k processors from the ’80s, as well as a defense of the PlayStation 3’s Cell architecture.
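To make the note’s distinction concrete, here is a minimal sketch (in Rust; the post names no language, and every identifier below is purely illustrative). The first loop is the constrained data-parallel shape that compilers auto-vectorize for SIMD and that maps naturally onto GPUs: independent elements, no shared mutable state. The second part is general-purpose concurrency over shared state, where synchronization and ownership are exactly the hard part the note alludes to.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Constrained data parallelism: each output element depends only on
    // its input element. This shape vectorizes trivially and is what
    // GPU/SIMD engines are built for.
    let data: Vec<f32> = (0..1024).map(|i| i as f32).collect();
    let scaled: Vec<f32> = data.iter().map(|x| x * 2.0 + 1.0).collect();

    // General-purpose concurrency: threads updating shared mutable state.
    // A lock is needed for correctness, and the contended critical section
    // serializes the very parallelism we were after.
    let total = Arc::new(Mutex::new(0.0f32));
    let mut handles = Vec::new();
    for chunk in scaled.chunks(256).map(|c| c.to_vec()) {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            let partial: f32 = chunk.iter().sum();
            *total.lock().unwrap() += partial; // contended shared state
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("total = {}", *total.lock().unwrap());
}
```

The contrast is the point: the first form stays easy as core counts grow, while the second demands the coordination and memory-management discipline that, per the note, most programmers never get to wield effectively.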