TECHNOLOGY
We're not inventing new physics—we're commercializing proven research. Here's the technology stack we're building toward.
COMPUTE WHERE THE DATA LIVES.
In traditional computers, data lives in memory. To do math, you move data to the processor, compute, then move results back. This data movement, not the arithmetic itself, consumes the large majority of the energy in many AI workloads.
Analog in-memory computing stores neural network weights as physical properties of memory devices (conductance). When you apply an input voltage, Ohm's law (I = V × G) does the multiplication instantly, in place. Millions of operations run in parallel, with no data movement.
HOW IT WORKS
STORE WEIGHTS AS CONDUCTANCE
Neural network weights are programmed into memory devices (memristors, ReRAM, PCM) as conductance values. High conductance = high weight. Low conductance = low weight. The weight is the device.
APPLY INPUT AS VOLTAGE
Input activations are converted to voltages and applied to rows of the memory array. No data copying, no memory bus traffic—just electrical signals.
PHYSICS DOES THE MATH
Ohm's law: I = V × G. Each device multiplies its input voltage by its conductance, producing a current. Currents from all devices in a column sum automatically (Kirchhoff's current law). One column = one multiply-accumulate operation.
READ OUTPUT AS CURRENT
The summed currents at each column are converted back to digital values. A single crossbar array can perform a full matrix-vector multiplication in one shot: millions of operations in nanoseconds.
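The four steps above can be sketched as a toy numerical simulation. The array size, conductance range, and idealized 8-bit readout are illustrative, not a description of our architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: store weights as conductance (siemens). A 4x3 crossbar:
# 4 input rows, 3 output columns; each cell is one memory device.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Step 2: apply inputs as row voltages (volts).
V = np.array([0.2, 0.5, 0.1, 0.3])

# Step 3: physics does the math. Each device passes current
# I = V * G (Ohm's law); currents in a column sum (Kirchhoff's
# current law). In silicon this happens in parallel, in place;
# here a matrix-vector product models it.
I_columns = V @ G  # one current per output column

# Step 4: read output as current and digitize with an idealized
# 8-bit converter (real ADCs are a major design challenge).
digital_out = np.round(I_columns / I_columns.max() * 255).astype(int)

print(I_columns)    # analog column currents (amps)
print(digital_out)  # quantized digital readout
```

Note that the entire matrix-vector product is the single line `V @ G`: in hardware there is no loop, just currents summing on wires.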
WHY THIS IS HARD
If analog computing is so great, why isn't everyone doing it? Because building reliable, manufacturable analog systems is genuinely difficult. Here are the challenges we're tackling:
Analog devices aren't perfectly identical. Two memristors programmed to the same conductance will have slightly different values. We need circuit techniques and algorithms that tolerate this variation.
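A minimal Monte Carlo sketch of that variation, assuming a simple Gaussian programming error. The 5% spread and the device-averaging mitigation shown here are illustrative, one technique among several:

```python
import numpy as np

rng = np.random.default_rng(1)

target_G = 5e-5          # conductance we try to program (siemens)
sigma = 0.05 * target_G  # assume ~5% device-to-device spread

# Program 10,000 "identical" devices to the same target:
# each one lands at a slightly different conductance.
devices = target_G + rng.normal(0.0, sigma, size=10_000)

# One mitigation (illustrative): represent each weight with N
# devices and average their currents; spread shrinks as 1/sqrt(N).
N = 16
banks = devices.reshape(-1, N).mean(axis=1)

single_err = devices.std() / target_G  # ~5% per device
bank_err = banks.std() / target_G      # roughly 5% / sqrt(16)
print(single_err, bank_err)
```

Averaging trades area for precision; the real design space also includes write-verify programming loops and variation-aware training.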
Analog signals are inherently noisy. The key insight: neural networks are remarkably noise-tolerant. They were designed for noisy biological hardware. We can use this to our advantage.
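A sketch of that tolerance: perturb every weight of a toy layer by a few percent, as an analog array might, and the output error stays on the same order as the per-weight noise rather than compounding. The layer size and 3% noise level are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy fully connected layer: 64 inputs -> 10 outputs.
W = rng.normal(0, 1, size=(64, 10))
x = rng.normal(0, 1, size=64)
ideal = x @ W

# Perturb every weight independently by ~3%.
noise_pct = 0.03
noisy_W = W * (1 + rng.normal(0, noise_pct, size=W.shape))
noisy = x @ noisy_W

# Relative output error stays modest: independent per-weight
# errors partially cancel when summed down a column.
rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
print(rel_err)
```

For classification, what matters is whether the largest output stays largest, which is why networks can often absorb analog noise that would be fatal to exact arithmetic.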
How do you write software for hardware that works on physics, not logic gates? We need compilers that map neural networks to crossbar arrays, training methods that account for analog characteristics, and calibration systems that adapt to device drift.
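One standard piece of that mapping problem, shown as a sketch: conductances are non-negative, so signed weights are commonly split across a differential pair of devices (G+ and G-), with the output taken as the difference of the two column currents. The conductance window and helper names here are assumptions for illustration:

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed programmable conductance window

def map_weights(W):
    """Map signed weights onto a differential pair of non-negative
    conductance arrays (G+, G-) inside the device window."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + np.clip(W, 0, None) * scale   # positive parts
    G_neg = G_MIN + np.clip(-W, 0, None) * scale  # negative parts
    return G_pos, G_neg, scale

def crossbar_mvm(x, G_pos, G_neg, scale):
    """Differential read: the column-current difference cancels the
    G_MIN baseline and recovers the signed result."""
    return (x @ G_pos - x @ G_neg) / scale

rng = np.random.default_rng(3)
W = rng.normal(0, 1, size=(8, 4))
x = rng.normal(0, 1, size=8)

G_pos, G_neg, scale = map_weights(W)
print(np.allclose(crossbar_mvm(x, G_pos, G_neg, scale), x @ W))
```

This toy mapping is exact; a real compiler must additionally handle quantization to discrete conductance levels, array tiling for large matrices, and recalibration as devices drift.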
PROVEN IN RESEARCH
This isn't theoretical—analog in-memory computing has been demonstrated by leading research groups worldwide:
Phase-change memory for analog AI. Published in Nature, demonstrating a mixed-signal DNN accelerator with record efficiency.
Memristor crossbar arrays. Demonstrated image classification with >90% accuracy using in-memory computing.
Neuromorphic computing chips. Demonstrated 1000x efficiency gains on sparse workloads vs. conventional GPUs.
First commercial analog AI chip. Proved the concept can be productized, though focused on edge inference.
WHERE WE FIT
The research proves the physics. The missing piece is a dedicated team focused on productization—solving the engineering challenges of reliability, manufacturability, and programmability that separate lab demos from commercial products.
We're building that team. Our initial focus is assembling researchers and engineers from the leading analog computing labs, defining our architecture, and building the simulation and design tools needed for first silicon.
Have experience in analog circuits, memristive devices, or ML systems?
founders@logorythms.com