Synoptic AI executed the first full transformer inference on gate-based quantum processors — three vendors, two qubit technologies. Classically trained. Quantum executed. Patent pending.
Full autoregressive inference — token by token — with every matrix multiplication performed on the QPU. Pretrained weights compressed into quantum-native form via Phase Space and executed as quantum circuits on gate-based processors. Output verified against classical baseline.
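A minimal sketch of what that execution loop might look like, assuming a generic backend interface. The Phase Space compression and circuit construction are proprietary and not shown here; every name in this sketch (QuantumBackend, generate, the toy weights) is an illustrative assumption, not Synoptic AI's code.

```python
# Minimal sketch of the autoregressive loop described above. The actual
# Phase Space compression and circuit construction are proprietary; the
# QuantumBackend interface and all names here are illustrative assumptions.
import numpy as np

class QuantumBackend:
    """Stand-in for a QPU client: executes a matrix-vector product
    as a quantum circuit and returns the decoded result."""
    def matmul(self, weights: np.ndarray, x: np.ndarray) -> np.ndarray:
        # On real hardware this would compile `weights @ x` into a
        # gate-based circuit, run it, and decode the measurements.
        # Here we fall back to the classical product for illustration.
        return weights @ x

def generate(prompt_ids, embed, w_out, backend, n_tokens=10):
    """Greedy token-by-token decoding; every matmul goes through `backend`."""
    ids = list(prompt_ids)
    for _ in range(n_tokens):
        x = embed[ids[-1]]                    # last-token embedding
        logits = backend.matmul(w_out, x)     # matmul executed on the QPU
        ids.append(int(np.argmax(logits)))    # greedy next-token choice
    return ids

# Toy usage with random weights (vocabulary of 16, embedding dim 8).
rng = np.random.default_rng(0)
embed = rng.normal(size=(16, 8))
w_out = rng.normal(size=(16, 8))
print(generate([3], embed, w_out, QuantumBackend()))
```

A full transformer would route its attention and MLP matrix products through the same interface; the token-by-token loop itself is unchanged.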
Every result below has been validated on gate-based quantum processors. No simulations. No theoretical proposals. Real quantum hardware, real outputs.
First classically trained transformer to perform full autoregressive inference on gate-based quantum processors. 144 quantum circuits across three hardware platforms, two qubit technologies. Output verified against classical baseline.
Classically trained neural network performing inference on a gate-based quantum processor. 10 inputs, 10 correct outputs. Probabilities match the classical baseline to 3–4 decimal places.
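A minimal sketch of the kind of agreement check that statement implies; the arrays and tolerance below are illustrative, not measured data.

```python
# Hedged sketch of the verification implied above: compare output
# probabilities from quantum execution against the classical baseline.
# The numbers here are illustrative stand-ins, not Synoptic AI's data.
import numpy as np

def probs_match(p_quantum: np.ndarray, p_classical: np.ndarray,
                decimals: int = 3) -> bool:
    """True if the two distributions agree to `decimals` decimal places."""
    return bool(np.all(np.abs(p_quantum - p_classical) < 10 ** -decimals))

p_classical = np.array([0.7312, 0.1544, 0.1144])
p_quantum   = np.array([0.7310, 0.1546, 0.1144])  # e.g. estimated from shots
print(probs_match(p_quantum, p_classical))        # True: agree to 3 decimals
```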
High-fidelity quantum error correction implementation: practical error correction on today's NISQ hardware as a step toward fault-tolerant quantum architectures.
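For readers new to the idea, a textbook sketch of error suppression with a 3-qubit repetition code and majority-vote decoding; this is a generic illustration of the principle, not the implementation described above, and the error rate is an assumed parameter.

```python
# Textbook 3-qubit bit-flip repetition code as a generic illustration of
# error suppression; this is not Synoptic AI's implementation.
import random

random.seed(0)
p = 0.05  # assumed physical bit-flip probability per qubit

def run_trials(n_trials=100_000):
    raw_errors = corrected_errors = 0
    for _ in range(n_trials):
        flips = [random.random() < p for _ in range(3)]
        raw_errors += flips[0]               # unencoded single qubit fails
        corrected_errors += sum(flips) >= 2  # majority vote fails on 2+ flips
    return raw_errors / n_trials, corrected_errors / n_trials

raw, corrected = run_trials()
# Logical error rate ~ 3p^2 - 2p^3, well below the physical rate p.
print(f"physical ~{raw:.4f}, logical after correction ~{corrected:.4f}")
```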
Quantum simulation of SYK-model black hole microstates: 236 of 256 microstates recovered, with 92% Bekenstein-Hawking entropy accuracy. Extends published research from Brookhaven National Laboratory.
Proprietary approach achieving 55% improvement over standard QAOA on benchmark optimization problems.
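For context on the baseline, here is a minimal depth-1 QAOA for MaxCut via exact statevector simulation, the "standard QAOA" such comparisons typically measure against. The graph, depth, and grid search are illustrative assumptions; Synoptic AI's proprietary approach is not shown.

```python
# Minimal depth-1 QAOA baseline for MaxCut (exact statevector simulation).
# Illustrative only: small assumed graph, coarse angle grid search.
import numpy as np
from itertools import product

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small MaxCut instance

# Diagonal of the cost Hamiltonian: C(z) = number of edges cut by bitstring z.
z = np.arange(2 ** n)
bits = (z[:, None] >> np.arange(n)) & 1            # bit i of each basis state
cost = sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def apply_rx(state, beta, qubit):
    """Apply Rx(2*beta) = exp(-i*beta*X) to one qubit of the statevector."""
    c, s = np.cos(beta), -1j * np.sin(beta)
    psi = state.reshape(2 ** (n - qubit - 1), 2, 2 ** qubit)
    return np.stack([c * psi[:, 0] + s * psi[:, 1],
                     s * psi[:, 0] + c * psi[:, 1]], axis=1).reshape(-1)

def qaoa_expectation(gamma, beta):
    """Depth-1 QAOA: |+>^n state, cost phase, mixer; return expected cut."""
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
    state = state * np.exp(-1j * gamma * cost)     # cost layer (diagonal)
    for q in range(n):                             # mixer layer
        state = apply_rx(state, beta, q)
    return float(np.real(np.sum(np.abs(state) ** 2 * cost)))

# Coarse grid search over the two variational angles.
grid = np.linspace(0, np.pi, 40)
best = max((qaoa_expectation(g, b), g, b) for g, b in product(grid, grid))
print(f"best <C> = {best[0]:.3f} of max cut {cost.max():.0f}")
```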
Full-scale condensed-matter simulation on a Kagome lattice. Physics-validated quantum many-body simulation at utility scale.
Validation of genuine quantum behavior at scale: 1,000 shots yielded 1,000 unique bitstrings, confirming genuine quantum computation rather than classical emulation.
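A minimal sketch of that uniqueness check, with simulated stand-in shot data in place of real QPU output; the qubit count is an assumption for illustration.

```python
# Minimal sketch of the uniqueness check described above. The shot data
# here is a simulated stand-in; on hardware it comes from the QPU job.
from collections import Counter
import random

random.seed(1)
n_qubits, n_shots = 40, 1000  # assumed register width, shot count from page
shots = ["".join(random.choice("01") for _ in range(n_qubits))
         for _ in range(n_shots)]  # placeholder for measured bitstrings

counts = Counter(shots)
print(f"{len(counts)} unique bitstrings out of {n_shots} shots")
# All-unique outcomes over a 2**40 space indicate sampling from a broad
# quantum superposition; a narrow deterministic process would repeat.
```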
Our patent-pending compression method achieves significant parameter reduction across multiple model scales. The same compressed representation runs on classical hardware today and quantum hardware tomorrow.
Patent pending. Validated across three model architectures at scales from 124M to 6.7B parameters. The compressed representation is quantum-native — it executes directly on gate-based quantum processors without translation or approximation.
An 8–12× reduction in parameters means proportionally less memory and compute at inference time. Applicable to any transformer-based model architecture. Proven across three model scales.
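A back-of-the-envelope sketch of what that reduction means in practice, assuming fp16 weights (2 bytes per parameter; the precision is an assumption, not stated here) at the two published scales.

```python
# Back-of-the-envelope memory arithmetic for an 8-12x parameter reduction.
# Model scales are from this page; fp16 storage is an assumption.
BYTES_PER_PARAM = 2  # fp16 assumption

for params in (124e6, 6.7e9):          # 124M and 6.7B parameter scales
    base_gb = params * BYTES_PER_PARAM / 1e9
    for factor in (8, 12):
        print(f"{params / 1e9:.3g}B params: {base_gb:.2f} GB -> "
              f"{base_gb / factor:.2f} GB at {factor}x reduction")
```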
The compressed representation executes directly on gate-based quantum processors without translation or approximation. Validated on quantum hardware with 100% inference accuracy.
Proprietary compression techniques achieving significant parameter reduction while retaining intelligence. Scaled and validated on billion-parameter architectures.
Compact models that solve complex mathematical problems orders of magnitude faster than frontier systems, with competitive accuracy at a fraction of the parameter count.
Validated frameworks for monitoring and controlling emergence in neural networks, enabling new approaches to assessing AI system behavior and capabilities.
We develop systems that achieve superior performance through fundamental breakthroughs in how information is processed and represented — not by throwing more compute at the problem.
Two decades of research in intelligence, creativity, cognition, and psychology inform an approach that bridges theoretical insight and practical implementation. Multiple patents pending.
The results speak for themselves. Every claim on this site has been verified on real quantum hardware or against published benchmarks.
Open to partnerships, licensing, collaboration, and consulting arrangements.
michael.hoskins@synopticai.io