TECHNICAL_APPROACH

TECHNOLOGY

We're not inventing new physics—we're commercializing proven research. Here's the technology stack we're building toward.

THE_CORE_IDEA

COMPUTE WHERE THE DATA LIVES.

In traditional computers, data lives in memory. To do math, you move data to the processor, compute, then move results back. This movement consumes 90% of the energy.

Analog in-memory computing stores neural network weights as physical properties of memory devices (conductance). When you apply an input voltage, Ohm's law in its conductance form (I = V × G) does the multiplication instantly, in place. Millions of operations in parallel, with no data movement.
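To make the idea concrete, here is a back-of-the-envelope sketch in Python. The numbers are arbitrary placeholders, not measurements from any real device.

# One analog multiplication via Ohm's law: I = V * G.
voltage_in = 0.3        # input activation, encoded as a voltage (volts)
conductance = 2e-3      # weight, stored as a conductance (siemens, here 2 mS)

current_out = voltage_in * conductance  # the device produces this current physically
print(current_out)      # ~0.0006 A (0.6 mA): the product, with no processor involved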

HOW IT WORKS

STEP_01

STORE WEIGHTS AS CONDUCTANCE

Neural network weights are programmed into memory devices (memristors, ReRAM, PCM) as conductance values. High conductance = high weight. Low conductance = low weight. The weight is the device.
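In software terms, programming a layer looks roughly like the sketch below. The conductance window (G_MIN, G_MAX) and the differential-pair scheme for signed weights are placeholders borrowed from the research literature, not a description of our final design.

import numpy as np

# Hypothetical programming window for one device (siemens); real values are device-specific.
G_MIN, G_MAX = 1e-6, 100e-6

def weights_to_conductances(weights):
    """Map signed weights onto a differential pair of devices (G_plus, G_minus).

    The effective weight is proportional to (G_plus - G_minus), a common way to
    represent negative weights with devices whose conductance is always positive.
    """
    w_max = float(np.max(np.abs(weights)))
    if w_max == 0.0:
        w_max = 1.0
    scale = (G_MAX - G_MIN) / w_max                     # weight units -> siemens
    g_plus = G_MIN + scale * np.clip(weights, 0, None)
    g_minus = G_MIN + scale * np.clip(-weights, 0, None)
    return g_plus, g_minus

g_plus, g_minus = weights_to_conductances(np.array([[0.8, -0.3], [-1.2, 0.5]]))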

STEP_02

APPLY INPUT AS VOLTAGE

Input activations are converted to voltages and applied to rows of the memory array. No data copying, no memory bus traffic—just electrical signals.
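Digitally, this step amounts to scaling and quantizing activations into the drive range of a digital-to-analog converter. A minimal sketch, assuming an 8-bit DAC and a ±0.2 V drive range (both placeholder values):

import numpy as np

V_MAX = 0.2       # assumed maximum drive voltage (volts)
DAC_BITS = 8      # assumed DAC resolution

def activations_to_voltages(x):
    """Scale activations into [-V_MAX, V_MAX] and quantize them to DAC levels."""
    x_max = float(np.max(np.abs(x)))
    if x_max == 0.0:
        return np.zeros_like(x, dtype=float)
    levels = 2 ** (DAC_BITS - 1) - 1          # signed DAC codes
    codes = np.round(x / x_max * levels)      # digital code per activation
    return codes / levels * V_MAX             # voltage applied to each row

row_voltages = activations_to_voltages(np.array([0.1, -0.7, 0.4]))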

STEP_03

PHYSICS DOES THE MATH

Ohm's law: I = V × G. Each device multiplies its input voltage by its conductance, producing a current. The currents from all devices in a column sum automatically on the shared column wire (Kirchhoff's current law). One column = one multiply-accumulate operation.
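Written out as arithmetic, one column looks like this toy NumPy sketch (ideal devices, no noise):

import numpy as np

voltages = np.array([0.10, -0.05, 0.20])      # inputs on three rows (volts)
column_g = np.array([50e-6, 20e-6, 80e-6])    # conductances of one column (siemens)

# Each device: I = V * G. The shared column wire sums the currents.
device_currents = voltages * column_g
column_current = device_currents.sum()

# Which is exactly one multiply-accumulate:
assert np.isclose(column_current, np.dot(voltages, column_g))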

STEP_04

READ OUTPUT AS CURRENT

The summed current at each column is converted back to a digital value by an analog-to-digital converter. A single crossbar array can perform a full matrix-vector multiplication in one shot: millions of operations in nanoseconds.
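Putting the four steps together, an ideal crossbar computes a whole matrix-vector product at once. The emulation below uses ideal devices and a crude ADC model with an assumed 8-bit resolution; it is a sketch of the dataflow, not of our silicon.

import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 100e-6, size=(128, 64))   # conductance matrix: 128 rows x 64 columns
V = rng.uniform(-0.2, 0.2, size=128)            # row voltages (the input vector)

# The crossbar produces all 64 column currents at once: one MAC per column.
I = V @ G

def adc(currents, bits=8):
    """Crude ADC model: quantize column currents to signed digital codes."""
    full_scale = float(np.max(np.abs(currents)))
    if full_scale == 0.0:
        full_scale = 1.0
    levels = 2 ** (bits - 1) - 1
    return np.round(currents / full_scale * levels).astype(int)

digital_out = adc(I)                             # back in the digital domain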

WHY THIS IS HARD

If analog computing is so great, why isn't everyone doing it? Because building reliable, manufacturable analog systems is genuinely difficult. Here are the challenges we're tackling:

CHALLENGE_01: DEVICE VARIATION

Analog devices aren't perfectly identical. Two memristors programmed to the same target conductance will end up with slightly different values. We need circuit techniques and algorithms that tolerate this variation.
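To give a feel for the problem, here is a toy simulation of programming variation. The few-percent spread is an assumption for illustration; the real statistics depend on the device technology.

import numpy as np

rng = np.random.default_rng(1)
G_target = rng.uniform(1e-6, 100e-6, size=(128, 64))   # conductances we intended to program
V = rng.uniform(-0.2, 0.2, size=128)

# Assume each device lands within a few percent of its target (illustrative only).
G_actual = G_target * rng.normal(loc=1.0, scale=0.03, size=G_target.shape)

ideal = V @ G_target
actual = V @ G_actual
relative_error = np.abs(actual - ideal) / (np.abs(ideal) + 1e-12)
print(f"median column error: {np.median(relative_error):.2%}")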

CHALLENGE_02: NOISE

Analog signals are inherently noisy. The key insight: neural networks are remarkably noise-tolerant. Their inspiration, the brain, computes reliably on noisy biological hardware, and artificial networks can be trained to inherit that tolerance. We can use this to our advantage.
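One standard way to exploit that tolerance, and roughly what we mean by using noise to our advantage, is to inject analog-like noise into the forward pass during training so the network learns weights that survive it. A minimal sketch of such a forward pass (not a full training loop, and the noise levels are placeholders):

import numpy as np

rng = np.random.default_rng(2)

def noisy_matmul(x, W, weight_noise=0.03, output_noise=0.01):
    """Emulate an analog layer: perturb the weights and outputs with Gaussian noise.

    Training with this forward pass (while keeping clean weights for the update)
    tends to produce networks that tolerate analog imperfections at inference time.
    """
    W_noisy = W * (1.0 + rng.normal(0.0, weight_noise, W.shape))
    y = x @ W_noisy
    return y + rng.normal(0.0, output_noise * float(np.std(y)), y.shape)

x = rng.normal(size=(4, 128))
W = rng.normal(size=(128, 64)) * 0.1
y = noisy_matmul(x, W)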

CHALLENGE_03: PROGRAMMING

How do you write software for hardware that works on physics, not logic gates? We need compilers that map neural networks to crossbar arrays, training methods that account for analog characteristics, and calibration systems that adapt to device drift.
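As one concrete example of the compiler problem, a layer that is larger than a physical crossbar has to be split into tiles, with the partial results accumulated digitally. The tile size below is a placeholder, not our architecture:

import numpy as np

TILE = 256   # assumed physical crossbar size (rows x columns)

def tiled_matvec(x, W):
    """Matrix-vector product computed as a grid of TILE x TILE crossbar calls."""
    n_in, n_out = W.shape
    y = np.zeros(n_out)
    for r in range(0, n_in, TILE):
        for c in range(0, n_out, TILE):
            tile = W[r:r + TILE, c:c + TILE]        # weights mapped to one crossbar
            y[c:c + TILE] += x[r:r + TILE] @ tile   # one analog MVM + digital accumulate
    return y

rng = np.random.default_rng(3)
W = rng.normal(size=(1000, 500))
x = rng.normal(size=1000)
assert np.allclose(tiled_matvec(x, W), x @ W)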

PROVEN IN RESEARCH

This isn't theoretical—analog in-memory computing has been demonstrated by leading research groups worldwide:

IBM RESEARCH (ZURICH)

Phase-change memory for analog AI. Published in Nature; demonstrated a mixed-signal DNN accelerator achieving record efficiency.

STANFORD / UMASS

Memristor crossbar arrays. Demonstrated image classification with >90% accuracy using in-memory computing.

INTEL (LOIHI)

Neuromorphic computing chips. Demonstrated 1000x efficiency gains on sparse workloads vs. conventional GPUs.

MYTHIC (STARTUP)

First commercial analog AI chip. Proved the concept can be productized, though focused on edge inference.

WHERE WE FIT

The research proves the physics. The missing piece is a dedicated team focused on productization—solving the engineering challenges of reliability, manufacturability, and programmability that separate lab demos from commercial products.

We're building that team. Our initial focus is assembling researchers and engineers from the leading analog computing labs, defining our architecture, and building the simulation and design tools needed for first silicon.

Have experience in analog circuits, memristive devices, or ML systems?

founders@logorythms.com