You Might Not Have to Wait for the "Perfect" Quantum Computer
How partial fault tolerance is reshaping assumptions about the quantum timeline
In boardrooms and strategy discussions, one assumption about quantum computing has become deeply entrenched:
"Quantum computers won't be useful until we have a fully fault-tolerant machine with a million qubits. That's at least a decade away."
That assumption has led to a clear strategic stance: quantum computing is important, but not yet actionable.
A growing body of research is making this position harder to defend. For specific classes of problems, useful quantum computation may arrive earlier than the "million-qubit machine" framing suggests — potentially within current strategic planning horizons. If that turns out to be the case, then "waiting for full FTQC" is no longer a neutral default. It becomes a choice with consequences.
Why this matters for planning. Three things are shifting simultaneously:
- The resource estimates for useful quantum computation on certain problems are moving from "millions of qubits" toward "hundreds of thousands": still beyond today's hardware, but a different order of magnitude.
- The problem classes where this may apply, such as materials design, molecular simulation, and catalysis, are exactly where even incremental advantages carry real economic weight.
- The window for preparation (talent, partnerships, problem selection) opens well before the hardware does.
None of this means quantum is suddenly imminent. It does mean the "nothing to see here yet" stance is now one interpretation among several, rather than the safe default it used to be.
One of the more concrete contributions to this shift is "Partially Fault-Tolerant Quantum Computation for Megaquop Applications" by Chung et al. (March 2026), a collaboration between QunaSys, 1QBit, Qolab, and HPE Quantum. The paper is part of a broader line of work that examines — carefully — how far useful quantum computation can be pushed without full-scale FTQC.
Rethinking the gap between NISQ and FTQC
The quantum computing story has traditionally been told in two acts.
Act one is the noisy intermediate-scale quantum (NISQ) era, where we are now. Today's devices have a few hundred qubits, but they are noisy and error-prone, too unreliable for most problems of practical interest. Act two is full fault-tolerant quantum computing (FTQC) — a future stage where quantum error correction eliminates errors almost entirely, enabling transformative applications in chemistry, materials science, and optimization. The catch: conventional estimates put the hardware requirement at upwards of a million qubits.
The problem is the gulf between these two stages. NISQ machines cannot do much of practical value, and FTQC demands an enormous leap in hardware. For business leaders, the takeaway has been simple: nothing to see here until the big machine arrives.
Partial fault tolerance challenges that conclusion by changing the trade-off. Full FTQC protects every operation with quantum error correction, including the most error-prone operations — the non-Clifford gates used to make computation universal. Protecting these gates fully is what drives much of the million-qubit overhead. Partial fault tolerance takes a different route: rather than protecting these error-prone operations with full QEC, it accepts that substantial residual errors will remain, uses post-selection at the architecture level to reduce some of them, and relies on error mitigation to clean up the rest. Error mitigation itself is not free — it is the main source of overhead in this approach — but the overall physical qubit count can be smaller than protecting every operation with full QEC. The aim is not perfect fidelity everywhere, but sufficient accuracy for specific classes of problems.
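One simplified way to see the trade-off is the standard surface-code scaling relation, which links the per-operation error target to the code distance and hence to the physical-qubit count. The sketch below is a generic illustration under assumed parameters (physical error rate, threshold, prefactor), not the hardware model used by Chung et al.; it only shows the shape of the relationship: the tighter the error target each operation must meet, the larger the code distance, and the more physical qubits every logical qubit consumes. Partial fault tolerance relaxes that target for the rotation gates and pays with error-mitigation overhead instead.

```python
# Generic surface-code scaling sketch (assumed parameters; not the noise model
# or numbers from Chung et al.). Standard approximation for the logical error
# rate of a distance-d surface code:
#     p_logical(d) ~= A * (p / p_th) ** ((d + 1) / 2)
# with roughly 2 * d**2 physical qubits per logical qubit.

p = 1e-3     # assumed physical error rate
p_th = 1e-2  # assumed error-correction threshold
A = 0.1      # assumed prefactor

def distance_for(target_p_logical: float) -> int:
    """Smallest odd code distance whose logical error rate meets the target."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target_p_logical:
        d += 2  # surface-code distances are odd
    return d

# Tighter per-operation targets (needed when every gate of a long circuit is
# fully protected) require larger distances, and the qubit cost grows quickly.
for target in (1e-6, 1e-9, 1e-12):
    d = distance_for(target)
    print(f"target logical error {target:.0e}: distance {d}, "
          f"~{2 * d * d} physical qubits per logical qubit")
```

Under these illustrative numbers, tightening the per-operation target from 10⁻⁶ to 10⁻¹² more than quintuples the physical qubits per logical qubit; full FTQC additionally pays for distilling the magic states that implement non-Clifford gates, which is where much of the remaining overhead sits.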
A concrete embodiment of this idea is the STAR (Space-Time efficient Analog Rotation) architecture, proposed by Fujitsu Laboratories and Prof. Keisuke Fujii's group at The University of Osaka.
If this approach holds up under scrutiny, the implication is straightforward: the point at which quantum computing becomes strategically relevant may shift forward — and with it, the window for preparation.
What Chung et al. actually shows
The Chung et al. paper takes the STAR architecture and evaluates it under realistic hardware assumptions for superconducting processors. It is worth being precise about what the paper does and does not claim.
The authors frame their work explicitly as demonstrating both the strengths and the limitations of partial FTQC. The contribution is not an argument that partial FTQC is easy or close at hand. It is the first hardware-grounded resource estimate that replaces the simplified, single-parameter noise models used in earlier studies — a quantitative floor under a discussion that had previously relied on optimistic approximations.
Read in that light, several findings stand out:
- Qubit requirements. Simulation of the 2D Fermi-Hubbard model — a workhorse problem in materials science — is estimated to be feasible with hundreds of thousands of physical qubits, rather than the millions required under full FTQC for the same problem (Mohseni et al., 2024, provide a hardware-grounded full-FTQC baseline, estimating on the order of one to several million physical qubits depending on system size and algorithm). This is a meaningful reduction, but still well beyond today's hardware.
- Runtime. Under the paper's assumptions, computations on modest system sizes complete in minutes rather than the thousands of years projected for earlier versions of the architecture.
- Regime of applicability. The approach is most effective for circuits with roughly 10⁵ to 10⁶ small-angle rotation gates, a "Goldilocks zone" (a back-of-envelope sketch of why such a window exists follows this list). Outside this regime, the advantages over full FTQC shrink.
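A rough way to see where that window comes from is the sampling cost of error mitigation. The sketch below is illustrative rather than taken from the paper: it assumes a residual error per analog rotation of about 10⁻⁶ and uses the common rule of thumb that quasi-probability-style mitigation multiplies the number of shots by a factor growing roughly exponentially in the total accumulated error. Under those assumptions the overhead stays modest up to around a million rotations and then grows explosively, which is the shape of the window described above; below the window, circuits tend to be small enough that the advantage over other approaches shrinks.

```python
import math

# Illustrative sketch of why error-mitigated analog rotations favour a bounded
# range of rotation-gate counts. All values are assumptions for illustration,
# not numbers from Chung et al.
#
# Assumed rule of thumb: quasi-probability-style error mitigation multiplies
# the required number of shots by roughly exp(4 * eps * n) when n gates each
# carry residual error eps.

eps_rotation = 1e-6  # assumed residual error per analog rotation

def mitigation_shot_overhead(n_rotations: float, eps: float = eps_rotation) -> float:
    """Approximate multiplicative increase in the number of circuit shots."""
    return math.exp(4.0 * eps * n_rotations)

# The overhead is mild up to ~10^6 rotations and then blows up, bounding the
# regime where this approach remains cheaper than full error correction.
for n in (1e4, 1e5, 1e6, 3e6, 1e7):
    print(f"{n:.0e} rotations -> ~{mitigation_shot_overhead(n):.3g}x more shots")
```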
The authors are careful: these favourable numbers depend on specific problem structures (notably 2D Fermi-Hubbard), and even under their assumptions, the required resources remain substantial. The paper is best read as replacing one large uncertainty with a smaller, better-characterized one — not as a declaration that the finish line is in sight.
Taken on its own, the result does not argue that partial fault tolerance replaces FTQC. It does suggest that the boundary between "not useful" and "useful" may lie somewhat earlier than previously assumed, at least for certain workloads.
Converging evidence from multiple directions
What gives this line of work weight is that multiple independent groups, using different methods and assumptions, are arriving at resource estimates in a broadly similar range for specific problem domains — tens to hundreds of thousands of physical qubits, rather than millions. These estimates remain theoretical, and they differ in how optimistic their underlying assumptions are, but they no longer sit as outliers.
- Toshio, Akahoshi, Fujii et al. (Physical Review X, 2025) estimate that practical quantum advantage for the 8×8 Hubbard model may be achievable with on the order of 68,000 physical qubits.
- Akahoshi et al. (PRX Quantum, 2025) show that improved compilation methods can reduce runtime by roughly an order of magnitude, with comparable qubit requirements under realistic error rates.
- Google Quantum AI (Nature, 2025) demonstrated that quantum error correction improves exponentially with system size — experimental support for a key assumption underlying partial FTQC resource estimates.
- Fujitsu and The University of Osaka (2026) report that STAR-based approaches can target industrially relevant molecular systems, including catalysts and iron-sulfur clusters, with qubit requirements reduced by up to 80× compared to conventional FTQC estimates.
Taken together, the picture that emerges is not that full FTQC has been bypassed. It is that the question has become more granular: rather than asking when the million-qubit machine arrives, the more useful question is which problem classes become accessible earlier, and under what assumptions.
What this means for quantum timelines
For business and R&D leaders, the immediate implication is not that quantum computing is suddenly imminent. It is that the previous all-or-nothing view of the timeline has become harder to justify.
The relevant question is no longer simply when a million-qubit machine appears. For specific problem domains, systems with tens to hundreds of thousands of qubits may eventually deliver results that classical machines cannot match. IBM has set a goal of demonstrating quantum advantage by the end of 2026. Fujitsu's roadmap targets practical quantum computation via the STAR architecture by 2030. Neither of these is guaranteed, but both bring the discussion into current planning horizons.
This also makes the first candidate use cases more concrete. The applications best suited to early quantum advantage are clustered in materials design, molecular simulation for drug discovery, and condensed matter physics — areas where even incremental improvements can carry significant economic value, and where the problem structures line up with what partial FTQC is good at.
At the same time, the option to wait becomes more clearly a strategic choice. A 2025 scenario analysis by Deloitte captures the asymmetry well: in scenarios where scalable quantum computing arrives sooner than expected, organizations that have already invested in quantum readiness gain a meaningful advantage, while those that waited face a steep uphill battle to secure talent and build partnerships. The analysis is directional rather than a forecast, but the asymmetry is real — modest preparation now is cheap; unpreparedness under an accelerated scenario is expensive.
Limits of the partial FTQC approach
None of this removes the fundamental challenges.
Partial fault tolerance is not a general solution. Its effectiveness depends on circuit structure, and the regime it targets is relatively narrow. The results discussed here are based on resource estimation rather than large-scale experimental demonstrations, and on-hardware performance may diverge from paper estimates. Classical methods continue to improve, raising the threshold for what constitutes a meaningful quantum advantage.
And the resource numbers themselves — even under the more favourable partial-FTQC estimates — still require hundreds of thousands of physical qubits. This is a reduction from millions, but it is not near-term hardware.
These caveats matter. But they do not undo the broader point: the structure of the problem has changed.
A less binary path to useful quantum computation
The familiar binary — NISQ today, FTQC tomorrow — has been a useful simplification. It is now becoming less accurate.
Between these two regimes, an intermediate space is emerging in which partial fault tolerance may enable useful computation for specific problems. This is not a replacement for full FTQC, and it is not a guarantee of early quantum advantage. But it is a shift in where the boundary of practical relevance may lie.
It is reasonable to remain cautious. Caution, however, is not the same as neutrality. The assumption that quantum computing is irrelevant until full FTQC arrives is no longer a safe default. It is one interpretation among several — and increasingly, one that deserves to be examined rather than taken for granted.
References
- Chung, M.-Z. et al. "Partially Fault-Tolerant Quantum Computation for Megaquop Applications." arXiv:2603.13093 (2026)
- Toshio, R. et al. "Practical Quantum Advantage on Partially Fault-Tolerant Quantum Computer." Phys. Rev. X 15, 021057 (2025)
- Akahoshi, Y. et al. "Compilation of Trotter-Based Time Evolution for Partially Fault-Tolerant Quantum Computing Architecture." PRX Quantum 6, 040319 (2025)
- Akahoshi, Y. et al. "Partially Fault-tolerant Quantum Computing Architecture with Error-corrected Clifford Gates and Space-time Efficient Analog Rotations." PRX Quantum 5, 010337 (2024)
- Google Quantum AI. "Quantum Error Correction Below the Surface Code Threshold." Nature 638, 920–926 (2025)
- Preskill, J. "Beyond NISQ: The Megaquop Machine." ACM Trans. Quantum Comput. 6, 3, Article 18 (2025)
- Fujitsu & The University of Osaka. "STAR Architecture ver. 3 for Chemical Material Energy Calculations." Press Release (March 25, 2026)
- Mohseni, M. et al. "How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits." arXiv:2411.10406 (2024)
Contact
To learn more about the potential applications of quantum computing and the details of partially fault-tolerant quantum computation, please get in touch with our team.