VISION
THE WINNERS OF THE AI ERA WON'T BE THE ONES WITH THE MOST GPUS.
THEY'LL BE THE ONES WHO NEED THE FEWEST.
Everyone is racing to build bigger data centers, secure more power, stockpile more chips. We think that's the wrong race. The real breakthrough will come from making AI radically more efficient—and that requires rethinking the hardware from first principles.
THE PROBLEM
Training a single frontier AI model now costs over $100 million in electricity alone. Data centers are projected to consume 10% of global electricity by 2030. Countries are restarting nuclear plants just to power AI. This isn't sustainable—and it's getting worse as models grow.
Here's the dirty secret of modern computing: 90% of the energy in a GPU is spent moving data between memory and processors—not doing actual computation. This is the "von Neumann bottleneck," a fundamental flaw in computer architecture dating back to 1945. We've been papering over it with better fabrication, but physics is catching up.
Digital computers were designed for precise, deterministic calculations—spreadsheets, databases, simulations. But neural networks are inherently probabilistic. They don't need 32-bit precision. They need massive parallelism and efficient matrix operations. We're using a scalpel when we need a sledgehammer.
THE INSIGHT
20 watts. That's what the human brain uses to do everything—vision, language, planning, creativity. 86 billion neurons, 100 trillion synapses. Running on the power of a dim lightbulb.
1 megawatt. That's roughly what a modern AI training cluster uses to run models that still can't match human flexibility. That's 50,000x more power for arguably less capable intelligence.
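The gap above is back-of-the-envelope arithmetic. A quick check, assuming ~20 W for the brain (the "dim lightbulb") and ~1 MW for a large AI training cluster; both are round illustrative figures, not measurements:

```python
# Rough power estimates (watts). Both are order-of-magnitude assumptions,
# not measured values.
brain_watts = 20              # human brain: roughly a dim lightbulb
cluster_watts = 1_000_000     # large AI training cluster: ~1 megawatt

ratio = cluster_watts / brain_watts
print(f"{ratio:,.0f}x more power")  # prints "50,000x more power"
```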
Biology proves that intelligent computation can be radically more efficient. The question is: can we build silicon that works the same way?
OUR APPROACH
Analog in-memory computing eliminates the von Neumann bottleneck entirely. Instead of shuttling data between memory and processors, computation happens directly in the memory array using the physics of the devices themselves. Ohm's law does the multiplications (current = conductance × voltage), and Kirchhoff's current law does the additions, summing currents on shared wires. No data movement. No wasted energy.
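The principle can be sketched in a toy simulation of an idealized crossbar array: weights are stored as device conductances, inputs arrive as row voltages, and each column wire reads out one dot product "by physics." This is a simplified illustration that ignores real-device effects (noise, nonlinearity, limited precision); the array sizes and value ranges are arbitrary assumptions.

```python
import numpy as np

# Idealized analog crossbar: a 4x3 array of programmable conductances G
# (siemens) with input voltages V (volts) applied to the 4 rows.
# Each cross-point device passes a current I = G * V (Ohm's law), and
# currents flowing into a shared column wire sum automatically
# (Kirchhoff's current law) — so each column outputs one dot product,
# and the array as a whole computes a matrix-vector product in one step.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances: illustrative range
V = rng.uniform(0.0, 0.2, size=4)          # read voltages: illustrative range

I = V @ G   # column output currents: the matrix-vector product

# A digital processor reaches the same result with explicit multiply-adds,
# each of which costs energy to fetch operands from memory:
I_digital = np.array(
    [sum(V[i] * G[i, j] for i in range(4)) for j in range(3)]
)

assert np.allclose(I, I_digital)
```

The point of the comparison: the analog array produces all the multiply-accumulates concurrently, in place, while the digital loop must move every weight out of memory before it can use it.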
This isn't science fiction. IBM, Stanford, and dozens of research labs have demonstrated working analog AI accelerators. The physics works. The challenge now is engineering: building a system that's reliable, manufacturable, and programmable enough for real applications.
The big semiconductor companies are watching this space but moving slowly—they have billions invested in digital architectures. The AI labs are focused on algorithms, not hardware. There's a window for a focused startup to move fast, assemble the right team, and build the company that commercializes analog AI compute.
WHY EUROPE
Silicon Valley is optimizing software on commodity hardware. The deep physics and materials science expertise needed for analog hardware lives in Europe—at ETH Zurich, IMEC, Fraunhofer, and a constellation of world-class research institutions.
Europe also offers strategic advantages: neutral jurisdiction, strong talent pipeline, access to European fab capacity, and the ability to serve global markets (including those restricted from US chip exports) without regulatory friction.
We're assembling a founding team of engineers and researchers who believe analog computing is the future of AI. People who can reason from first principles, challenge assumptions, and build something genuinely new.
founders@logorythms.com