Fault-Tolerant Quantum Computing
- Seth Dalton
- May 7
- 24 min read
Introduction
Fault-tolerant quantum computing represents the holy grail for the industry – a regime where quantum computers can compute reliably despite hardware errors. Today’s quantum processors suffer error rates on the order of 0.1–1% per gate, far too high to run the millions or billions of operations required for practical applications. Quantum error correction (QEC) is the path forward: by encoding a logical qubit into many physical qubits with clever redundancy, errors can be detected and corrected on the fly. This brief provides an overview of the state of the art in QEC techniques (surface codes, bosonic “cat” qubits, topological qubits, etc.), the scalable architectures being pursued to achieve fault tolerance, and the roadmaps of major industry players and startups. We also assess realistic timelines for achieving practical fault-tolerant quantum systems, to inform enterprise CTOs on when truly reliable quantum computing might become a reality.
Quantum Error Correction Techniques: State of the Art
Quantum error correction is essential to bridge the gap between noisy physical qubits and reliable computation. Several QEC codes and approaches have matured in recent years:
Surface Codes:
The most developed error-correction scheme in quantum computing is the surface code, a type of topological code that arranges qubits on a 2D lattice. A logical qubit is encoded in a grid of physical qubits (for example, a $d\times d$ patch encodes one logical qubit, where $d$ is the code distance). Surface codes can tolerate errors up to a certain threshold (~1% error per gate) – notably higher than earlier QEC codes – making them feasible for today’s devices. Tech giants like Google and IBM have focused on surface codes due to this high error threshold and the fact that only nearest-neighbor interactions are required (convenient for chip layouts). Recent milestones have demonstrated that surface codes work as expected: in 2023, Google Quantum AI showed for the first time that increasing the code size reduces the logical error rate. Using a 72-qubit superconducting processor, they realized a distance-5 surface code (49 physical qubits per logical qubit) that outperformed a smaller distance-3 code, achieving a lower logical error probability than the best uncorrected qubits. This was a “critical step towards scalable quantum error correction,” indicating that with sufficient scale, logical qubits can indeed become more reliable than physical ones. Surface code experiments on superconducting platforms have now entered the regime below the error-correction threshold, meaning that enlarging the code yields a net reduction in logical errors. The challenge ahead is the sheer overhead – surface codes still demand hundreds or thousands of physical qubits for one high-quality logical qubit (ultimately millions for full systems) – and engineering fast classical decoders to correct errors in real time.
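To make the scaling concrete, here is a minimal back-of-envelope sketch of the commonly used heuristic $p_L \approx A\,(p/p_{\text{th}})^{(d+1)/2}$, together with the $2d^2-1$ physical qubits a rotated surface-code patch consumes per logical qubit. The prefactor $A$, physical error rate $p$, and threshold $p_{\text{th}}$ below are illustrative assumptions, not measured device values:

```python
# Back-of-envelope surface-code scaling, using the common heuristic
# p_L ~ A * (p / p_th)^((d+1)/2) and the 2*d^2 - 1 physical qubits of
# a rotated surface-code patch. A, p, and p_th are illustrative
# assumptions, not measured device values.

def logical_error_rate(p: float, d: int,
                       p_th: float = 0.01, A: float = 0.1) -> float:
    """Heuristic logical error rate per QEC cycle at code distance d."""
    return A * (p / p_th) ** ((d + 1) / 2)

def physical_qubits(d: int) -> int:
    """Physical qubits per logical qubit for a rotated surface code."""
    return 2 * d * d - 1

p = 1e-3  # assumed 0.1% physical error rate per operation
for d in (3, 5, 7, 11, 25):
    print(f"d={d:2d}: {physical_qubits(d):4d} physical qubits, "
          f"p_L ~ {logical_error_rate(p, d):.1e} per cycle")
```

At an assumed 0.1% physical error rate, each step from distance $d$ to $d+2$ suppresses the logical error rate by roughly an order of magnitude – exactly the behavior the distance-3 versus distance-5 comparison was designed to test.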
Bosonic “Cat” Qubits:
An alternative QEC approach is to encode information not in many discrete qubits, but in different states of a quantum harmonic oscillator (e.g. a microwave cavity). These so-called bosonic codes or “cat qubits” store a qubit in superpositions of coherent states (Schrödinger cat states) which can naturally resist certain errors. For example, a cat qubit can be biased to predominantly experience either phase flips or bit flips, and a small stabilizing circuit can passively suppress one error type. The remaining error channel is then corrected by a simpler code on top. In 2023–2024, researchers demonstrated concatenated cat codes on superconducting circuits as a promising hardware-efficient path. In one breakthrough, a logical qubit was encoded in a bosonic mode with an outer repetition code of distance $d=5$, achieving a logical error rate of ~1.65% per cycle – comparable to the smaller $d=3$ code – despite using more qubits. This showed that the error suppression from the bosonic encoding can carry through at larger distances without introducing excessive new errors. In essence, the bosonic cat qubit itself handles much of the error correction, so the additional overhead for the outer code is reduced. Such schemes are considered “hardware-efficient” QEC, potentially offering exponential error suppression with far fewer physical qubits than, say, a surface code on transmons. Startups like Alice & Bob and Quantum Circuits Inc. are pursuing cat qubits, aiming to create logical qubits with only a handful of physical modes by leveraging the oscillator’s physics to their advantage.
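Because a well-biased cat qubit leaves essentially one dominant error channel, the outer code can be as simple as a repetition code decoded by majority vote. The Monte Carlo sketch below (with an illustrative flip probability, not hardware data) shows how that residual channel is suppressed as the repetition distance grows:

```python
import random

# Monte Carlo sketch: a distance-d repetition code with majority-vote
# decoding, correcting the dominant error channel of a noise-biased
# qubit (e.g. phase flips on a cat qubit). Rates are illustrative.

def logical_flip_rate(p_flip: float, d: int, trials: int = 200_000) -> float:
    """Fraction of trials in which majority-vote decoding fails."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_flip for _ in range(d))
        if flips > d // 2:  # a majority of the d qubits flipped
            failures += 1
    return failures / trials

p = 0.05  # assumed per-qubit, per-cycle flip probability
for d in (1, 3, 5, 7):
    print(f"d={d}: logical flip rate ~ {logical_flip_rate(p, d):.1e}")
```

With these assumed numbers, distance 3 already cuts a 5% flip rate below 1%, and distance 5 to roughly 0.1% – while the hardware-suppressed channel consumes no extra qubits at all, which is the overhead saving the cat-qubit approach banks on.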
Topological Qubits (Majorana-based):
A radically different approach is being championed by Microsoft and academic partners – using exotic topological states of matter to create qubits that are inherently immune to certain noise. Often discussed in the context of Majorana zero modes, these topological qubits store quantum information non-locally, so that local perturbations won’t easily cause errors. In theory, a well-crafted topological qubit could have orders of magnitude lower error rates from the start, reducing the burden on error correction codes. Microsoft’s approach involves a so-called “topological core”: they have been developing a new material (a “topological superconductor”) in which pairs of Majorana quasiparticles encode qubit states. After years of research, Microsoft announced in 2023–2025 that they achieved their first hardware-protected topological qubit, showing the ability to create and control Majorana modes in a small device. The appeal of this approach is the potential for built-in error protection, which simplifies the QEC needed. Microsoft reports that their custom error-correcting codes for topological qubits could have about a 10× lower overhead than conventional schemes. In other words, a fault-tolerant quantum computer might be built with an order of magnitude fewer physical qubits if each qubit is topologically robust. This could be a game-changer for scalability – however, it’s important to note that, as of today, topological qubits are still in the experimental stage (Microsoft’s first qubit exists, but a full logical qubit encoded in many topological qubits is yet to be demonstrated). If successful, topological qubits could enable fault tolerance with far fewer resources than surface or bosonic codes. Microsoft’s recent milestones in this area (creating a device dubbed “Majorana 1”) highlight that progress is accelerating, but industry experts are watching closely to see if the approach can deliver on its promise.
Other Codes and Techniques:
Beyond the headline approaches above, many other QEC codes are under study, including color codes, Bacon–Shor codes, and newer LDPC codes optimized for specific hardware. In 2024, IBM and collaborators introduced a new code (the “gross code”) – a type of quantum LDPC code – that is 10× more efficient than prior methods in terms of qubit overhead. This was published as a landmark result, indicating that fault-tolerant algorithms with billions of operations may be achievable without needing millions of qubits. The theoretical landscape of QEC is rich, but the overarching trend is toward codes that reduce overhead and align with hardware realities. Error mitigation techniques (software methods to reduce error impact without full correction) are also playing a role in near-term systems, but truly fault-tolerant quantum computing will ultimately require the kind of robust QEC described above.
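To see where a ~10× saving can come from, the arithmetic below compares the reported [[144, 12, 12]] parameters of the gross code (one 288-qubit block – 144 data plus 144 check qubits – encoding 12 logical qubits at distance 12) against rotated surface-code patches at the same distance. Treat this as a rough, hedged estimate rather than a full resource analysis:

```python
# Rough overhead comparison at distance 12, using the reported
# [[144, 12, 12]] gross-code parameters versus rotated surface-code
# patches (2*d^2 - 1 physical qubits each). Illustrative only.

d = 12
surface_per_logical = 2 * d * d - 1        # ~287 physical qubits each
surface_total = 12 * surface_per_logical   # 12 separate patches

gross_total = 288                          # 12 logical qubits, one block

print(f"surface code: {surface_total} physical qubits for 12 logical")
print(f"gross code:   {gross_total} physical qubits for 12 logical")
print(f"overhead ratio ~ {surface_total / gross_total:.0f}x")
```

The ratio lands near 12×, consistent with the order-of-magnitude efficiency gain reported.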
Scalable Quantum Hardware Architectures
Achieving fault tolerance isn’t just about codes – it also demands hardware that can scale up to thousands or millions of physical qubits and execute error-correction cycles fast enough. Different quantum hardware platforms have distinct advantages for scaling and QEC:
Superconducting Qubits (Planar Architectures):
Superconducting transmon qubits (used by Google, IBM, Rigetti, and others) have become a workhorse for quantum computing. They are built with modern lithography, which means it’s plausible to integrate a large number on a chip. Google’s and IBM’s chips, for example, feature 2D lattices of qubits – an ideal layout for the local connectivity needs of surface codes. The challenge is that each qubit and its control wiring consume space and cryogenic resources. Scaling to millions of qubits will require modular architectures and advanced packaging. IBM has been a leader in this regard: its Quantum System One is an integrated system that encapsulates a superconducting chip with specialized cryogenics and control electronics, making deployment more stable. IBM’s roadmap includes moving to a “quantum supercomputing” approach with multiple chips networked together in a single system. They introduced a 127-qubit chip in 2021 (“Eagle”), a 433-qubit chip in 2022 (“Osprey”), and a 1,121-qubit chip in 2023 (“Condor”), with plans to scale beyond 4,000 qubits by interconnecting chips in 2025. This modular chiplet strategy (coupling multiple smaller processors via high-speed links) is critical to breaking the size barrier. Additionally, superconducting platforms benefit from very fast gate speeds (nanoseconds), which helps run error-correction cycles quickly before qubits lose coherence. On the flip side, today’s transmons have limited coherence times (~0.1–0.2 ms) and gate fidelities around 99–99.9%, so many physical qubits per logical qubit are still required. Companies are investing in 3D integration (to bring control wiring into additional layers or chips), better materials to improve coherence, and novel couplers to connect qubit modules. Overall, superconducting qubits coupled with surface-code QEC is currently the most advanced path toward fault tolerance, with multiple groups having demonstrated small logical qubits. The task ahead is engineering: going from $10^2$ to $10^6$ qubits through modular and scalable infrastructure.
Trapped-Ion Qubits (Modular Ion Traps):
Trapped ions (offered by IonQ, Quantinuum (Honeywell), and academic labs) are another leading platform, known for exceptional fidelity. Single- and two-qubit gate fidelities in ion traps can exceed 99.9%, and the qubits (ions) can remain coherent for much longer (seconds to minutes). This high quality means error rates are inherently lower, easing some QEC overhead. Moreover, in an ion chain, every ion can interact with every other via electromagnetic modes – providing an “all-to-all” connectivity that is very powerful for implementing error correction circuits and logical qubit operations. Quantinuum’s H1 trap, for example, can perform entangling gates across any pair of its 20 qubits – connectivity that has enabled experiments entangling many logical qubits and real-time error correction across a register. In 2022, Quantinuum demonstrated the first fault-tolerant logical qubit operations, in which a logical entangling gate outperformed the physical gate fidelity – a key validation of their approach. The main hurdle for trapped ions is scaling the number of qubits. A single ion trap module might hold 50–100 ions at most (beyond that, controlling and spacing them becomes difficult). The path to scale is through a modular QCCD architecture – essentially, multiple ion traps interconnected by photonic links or by shuttling ions between trap zones. IonQ and Quantinuum are both pursuing this. IonQ’s strategy involves multiple ion cores connected optically: they have outlined a roadmap using photonic interconnects to link many ion trap chips, ultimately targeting thousands of physical qubits. In fact, IonQ has already built a prototype of a rack-mounted ion trap system and plans for a modular system by 2025. The trade-off with ion qubits is slower gate speed (microseconds to milliseconds) compared to superconducting, meaning error-correction cycles take longer. However, the much longer coherence times partly compensate for this, as the rough estimate below illustrates. From a cryogenics perspective, ions are controlled with laser beams in vacuum chambers at room temperature, so scaling doesn’t face the same cooling complexity – but it does require precise laser systems and complex trap electrodes. In summary, ion trap architectures are inherently high-fidelity and well-suited for early fault-tolerance demonstrations (given their low error rates), and the focus now is on engineering modular ion trap networks for scale. Quantinuum’s newest H2 model and IonQ’s forthcoming systems are expected to push toward ~100–200 physical ion qubits with error correction in mind.
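The speed-versus-coherence trade-off can be made concrete with a rough cycle-budget estimate: how many QEC rounds fit within one coherence time on each platform. The numbers below are order-of-magnitude assumptions drawn from the text, not vendor specifications:

```python
# Rough QEC cycle budget: how many error-correction rounds fit in one
# coherence time? All numbers are order-of-magnitude assumptions based
# on the text (not vendor specs).

platforms = {
    # name: (coherence time [s], assumed QEC cycle time [s])
    "superconducting": (150e-6, 1e-6),  # ~0.15 ms coherence, ~1 us cycle
    "trapped ion":     (10.0, 10e-3),   # ~10 s coherence, ~10 ms cycle
}

for name, (t_coherence, t_cycle) in platforms.items():
    budget = t_coherence / t_cycle
    print(f"{name:15s}: ~{budget:,.0f} QEC rounds per coherence time")
```

On these assumed figures, gates thousands of times slower are more than offset by seconds-long coherence, which is why both platforms can sustain repeated correction rounds despite a vast difference in raw gate speed.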
Photonic Qubits (Cluster-State Quantum Computers):
PsiQuantum and a few others are betting on photonics – using particles of light as qubits. Photonic qubits are appealing because they do not decohere easily (a photon can travel long distances without losing its quantum state) and because existing semiconductor fabrication can produce photonic circuits at scale. PsiQuantum’s approach is to create a fault-tolerant architecture from the start using a photonic cluster-state strategy. Instead of individual gate operations, they generate large entangled webs of photons and perform measurements to enact logic (“fusion” measurements). The ultimate goal is a million-qubit photonic quantum computer that is fully error-corrected, which PsiQuantum famously predicted it could build “within five years” (a bold claim made a couple of years ago). They are working with GlobalFoundries to manufacture photonic chips, indicating a path to mass production. Photonics is naturally suited to scaling because many photon sources and beam splitters can be integrated on a chip, and photons themselves don’t interact unless coaxed via measurement-induced interactions. Also, photonic systems operate at room temperature and can leverage fiber optics for connectivity, making it conceivable to network modules easily. The big challenge, however, is that probabilistic photon sources and detectors have losses – so creating large entangled states with near-perfect fidelity is extremely hard. PsiQuantum’s architecture involves significant overhead in generating redundancy (e.g., using fusion-based QEC, where many photons are consumed to stabilize logical entanglement). They have been relatively quiet about intermediate milestones, but their focus is on achieving the first logical qubit with photonics and then scaling by adding more identical photonic modules. If successful, photonic quantum computers could be exceptionally scalable, leveraging the telecom and silicon photonics industries. Startups Xanadu and Quandela are also exploring photonic approaches (Xanadu with squeezed-light continuous variables, Quandela with single photons), with Quandela aiming for its first logical qubits by 2025 and ~50 logical qubits by 2028 on its roadmap. Photonics remains a high-risk, high-reward path: it has the longest way to go in terms of demonstrating basic quantum error correction, but it could leapfrog in scalability if the key photonic integration challenges are solved.
Emerging and Hybrid Approaches:
Other architectures are being pursued with an eye on fault tolerance as well. Neutral-atom qubits (Rydberg atom arrays from companies like Pasqal and QuEra) offer flexible connectivity and have 100–300-atom systems operational. Pasqal’s roadmap explicitly targets 10,000 atom-qubits by 2026 and achieving fault tolerance (128+ logical qubits) by 2028 via error correction on their arrays. Superconducting firms are also exploring specialized qubits like fluxonium, which have longer coherence, potentially reducing error rates. We also see hybrid approaches: for instance, quantum communication links to connect distant quantum modules (being developed by academic consortia and startups like QphoX for microwave-to-optical transduction). These will be key if we are to build distributed quantum computers that act as one large fault-tolerant machine. In the NISQ era (current noisy intermediate-scale quantum devices), techniques like dynamic error suppression, error mitigation protocols, and software-level encoding (e.g. surface code simulators or application-specific error correction) are being employed to maximize what can be done before full fault tolerance. But the clear consensus is that scaling hardware to support error correction is the only way to unlock the true potential of quantum computing.
Major Players: Efforts & Roadmaps
The push for fault tolerance is a central objective for all the major quantum computing organizations. Here we review the strategies of a few key players:
Google Quantum AI (Alphabet):
Google’s team has a focused roadmap toward a large-scale error-corrected quantum computer. After achieving the “beyond classical” milestone in 2019 with a 54-qubit device (Sycamore) that demonstrated quantum supremacy on a synthetic task, Google pivoted squarely to error correction. Their plan involves a sequence of milestones: Milestone 1 was “quantum advantage” (achieved in 2019 with 53 qubits) and Milestone 2 was a logical qubit prototype, achieved in 2023. At Milestone 2, as discussed, they showed a logical qubit (surface code) with performance nearing the break-even point. The next milestones on Google’s roadmap are even more ambitious – Milestone 3 is a “long-lived logical qubit” (expected around 2025+) that can retain quantum information for long durations with active correction, Milestone 4 involves a tileable module with a logical two-qubit gate (so logical qubits can interact), Milestone 5 is an engineering scale-up of many such modules, and Milestone 6 is a full error-corrected quantum computer of a million-plus physical qubits capable of running useful algorithms. Google is betting on superconducting qubits and the surface code as their technological backbone. They have been steadily improving their Sycamore processors (e.g., a newer 72-qubit “Weber” chip was used for the QEC demo). Google’s public communications emphasize the need for ~$10^{6}$ physical qubits and error rates around $10^{-6}$ or better (from current ~$10^{-3}$) for solving industry-scale problems. In terms of hardware architecture, Google is likely exploring integrated 3D packaging and perhaps even optical interconnects in the future to link quantum tiles, although details are sparse. What’s clear is that Google’s R&D is fully aligned with the stepwise build-up of a fault-tolerant machine. They have not announced a target date for the ultimate system publicly, but internal goals (as suggested by their milestones) imply a functioning multi-logical-qubit prototype within a few years and a large-scale machine perhaps by the early 2030s if progress continues. Google’s approach is often termed “full-stack” – they co-design the algorithms, hardware, and error correction together. For enterprise CTOs, Google’s steady advances mean that within this decade we might see cloud-accessible logical qubits on Google’s quantum cloud, with error rates low enough to attempt deeper algorithms than ever before.
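Those targets can be sanity-checked by inverting the heuristic scaling law from the surface-code sketch earlier: given a physical error rate, what code distance (and per-logical-qubit count) reaches a ~$10^{-6}$ logical error rate? As before, the prefactor and threshold are illustrative assumptions:

```python
# Invert p_L ~ A * (p / p_th)^((d+1)/2) to find the smallest odd code
# distance meeting a target logical error rate. A and p_th are assumed.

def required_distance(p: float, target: float,
                      p_th: float = 0.01, A: float = 0.1) -> int:
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

for p in (1e-3, 5e-4, 1e-4):  # assumed physical error rates
    d = required_distance(p, target=1e-6)
    print(f"p={p:.0e}: distance {d}, ~{2*d*d - 1} physical qubits per logical")
```

Per-logical-qubit counts stay in the tens to hundreds under these assumptions; the million-qubit totals arise once thousands of logical qubits, plus the ancillary overhead of logical gate operations, run simultaneously.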

IBM Quantum:
IBM has been at the forefront of quantum hardware for years, and their strategy to reach fault tolerance is multifaceted. On the hardware side, IBM has aggressively grown qubit counts on superconducting chips (27-qubit Falcon in 2019, 127-qubit Eagle in 2021, 433-qubit Osprey in 2022), and unveiled the 1,121-qubit Condor processor in late 2023. However, IBM is quick to point out that more qubits alone are not enough – they must be high quality and integrated into a full system. IBM’s Quantum System One installations (in Germany, Japan, the U.S., and elsewhere) show their emphasis on turning chips into reliable systems with integrated control, cryogenics, and cloud access. Looking forward, IBM’s updated roadmap envisions a quantum-centric supercomputer by 2025, achieved by coupling multiple chips together (likely using the chip-to-chip coupling technology they’ve been developing). By 2025, IBM plans to demonstrate error correction on a small code and integrate quantum processors with classical HPC resources for a quantum advantage in specific tasks. IBM also leads in quantum software: their open-source Qiskit framework now includes dynamic circuits (allowing mid-circuit measurements and feedback), which are essential for implementing QEC on their hardware. In 2024, IBM researchers introduced a new efficient QEC code (the “gross code”), as mentioned, which showed that error correction might be achievable with far fewer qubits than previously thought. This aligns with IBM’s focus on quantum error mitigation and LDPC codes in the near term to reach a quantum advantage before full fault tolerance. Looking at IBM’s long-term vision: by 2026–2027 they aim to run circuits with over 10,000 quantum gates with error mitigation, and by 2030+ they project having thousands of logical qubits capable of executing $10^9$ (one billion) gate operations, which truly marks the fault-tolerant regime. In fact, IBM explicitly states a goal of “thousands of logical qubits by 2033” in their quantum roadmap, effectively targeting that timeframe for a fault-tolerant quantum computer that can tackle classically intractable problems. To reach this, IBM is exploring not just scaling up a single device but also quantum communication links between cryogenic systems (quantum local area networks) and advanced packaging (3D chip stacking, novel couplers). IBM’s Quantum Safe initiative, meanwhile, refers to cryptography – making sure data stays secure in the quantum era – but it underscores IBM’s thought leadership in preparing for the fault-tolerant future. For enterprises, IBM’s strategy suggests that intermediate milestones (like error-mitigated quantum advantage on specific problems by 2025) are within sight, and truly fault-tolerant hardware may arrive in the early 2030s, potentially via cloud offerings on IBM Quantum.
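To illustrate why dynamic circuits matter for QEC, here is a minimal Qiskit sketch of the core primitive: a mid-circuit syndrome measurement followed by classically conditioned feedback. It is a toy two-qubit parity check meant to show the control flow, not IBM’s production error-correction stack:

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

# Toy bit-flip parity check using dynamic-circuit features:
# measure a syndrome mid-circuit, then branch on the outcome.

data = QuantumRegister(2, "data")
ancilla = QuantumRegister(1, "ancilla")
syndrome = ClassicalRegister(1, "syndrome")
qc = QuantumCircuit(data, ancilla, syndrome)

# Map the Z(x)Z parity of the data qubits onto the ancilla.
qc.cx(data[0], ancilla[0])
qc.cx(data[1], ancilla[0])
qc.measure(ancilla[0], syndrome[0])

# Classical feedback: apply a correction only if the parity was odd.
with qc.if_test((syndrome, 1)):
    qc.x(data[1])
```

Real QEC repeats this measure-decode-correct loop every cycle, which is why the low-latency classical feedback that dynamic circuits expose is a prerequisite for running codes on hardware.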
Microsoft (Azure Quantum):
Unlike Google and IBM, Microsoft chose a more unorthodox route by pursuing topological qubits from the outset. After years of fundamental research, 2023–2025 has seen Microsoft make noteworthy claims: in early 2023 they reported evidence of Majorana quasiparticles in a topological device, and in February 2025 Microsoft announced “Majorana 1,” the world’s first quantum processing unit with a Topological Core. This Topological Core uses Majorana zero modes in a hybrid semiconductor–superconductor structure (indium arsenide with aluminum, which Microsoft calls a “topoconductor”) to encode qubits that are inherently protected from certain errors. Each such qubit (sometimes called a tetron or qubit module) consists of four Majorana modes, and Microsoft has shown they can manipulate and measure them (performing joint Pauli measurements to read out states). Microsoft’s roadmap is distinct: they are not building NISQ devices with tens of qubits; instead, they target a rapid jump to a small fault-tolerant system. In 2025, Microsoft revealed a plan to scale from one topological qubit to an array of 8 qubits on a single chip (they’ve “already placed eight topological qubits on a chip designed to house one million” qubits). Because each topological qubit is more error-resistant, Microsoft claims that their approach will reduce the overhead for error correction by ~10× compared to traditional superconducting implementations. They foresee being able to build a million-qubit quantum supercomputer by stitching together many topological qubit chips – with each physical qubit being high-fidelity, the number of physical qubits per logical qubit could be dramatically lower. Notably, Microsoft is one of two companies selected for DARPA’s US2QC program, which challenges teams to build a utility-scale quantum computer on a five-year horizon. Microsoft states it is on track to build the world’s first fault-tolerant prototype as part of this program “in years, not decades.” This could mean a prototype machine with perhaps ~100 logical qubits in the late 2020s if all goes well. In parallel, Azure Quantum is integrating classical supercomputing and quantum (providing a cloud platform where enterprises can experiment with quantum algorithms today on hardware from IonQ, Quantinuum, etc., and eventually on Microsoft’s own topological machines when they come online). For CTOs, Microsoft’s approach is high-risk but high-reward – if topological qubits succeed, Microsoft might leap ahead in the fault-tolerant race, delivering a scalable quantum datacenter solution faster than others. However, skepticism remains in parts of the community until Microsoft’s claims are reproduced and their qubits cross error-correction thresholds. 2025–2026 will be critical years to watch as Microsoft demonstrates multi-qubit operations in its topological system and, hopefully, validates that these exotic qubits can indeed be the foundation of a scalable, enterprise-grade quantum computer.
Innovators & Startups to Watch
In addition to the big three, numerous startup companies are driving innovation in error correction and scalable hardware. Here we highlight a few notable ones and their contributions:
PsiQuantum:
Founded in 2016, PsiQuantum is singularly focused on building a fault-tolerant quantum computer using photonic qubits. The company has raised enormous funding (over $1 billion) to realize its vision of a million-qubit photonic machine. PsiQuantum’s architecture relies on creating a massive cluster state of photons and performing adaptive measurements to carry out error-corrected quantum logic. Their partnership with GlobalFoundries to manufacture photonic chips is a unique strength – it means they can leverage advanced semiconductor fabs to produce photonic circuitry. PsiQuantum has publicly stated bold goals, such as aiming to have a million physical qubits within five years (from the time of the statement) and to achieve true fault tolerance shortly thereafter. While they have not yet announced a demonstration of a logical qubit, they have published research on fault-tolerant logical gate implementations using photonic fusion techniques. PsiQuantum’s timeline appears to target the late 2020s for delivering a fault-tolerant system to a customer. They are an example of a startup that bypassed the NISQ era entirely – no small-scale product, just an all-or-nothing leap to full error correction. If PsiQuantum succeeds, they could provide one of the earliest fully fault-tolerant quantum computers, which would be a monumental breakthrough. For now, their progress is mostly behind closed doors, but their collaboration with foundries and recent reports of steady advances keep them squarely on the radar.
IonQ:
IonQ is a leader in trapped-ion computing and was the first pure-play quantum computing company to go public. For error correction, IonQ benefits from some of the highest gate fidelities in the industry (recently >99.9% two-qubit fidelity using Yb/Ba ion species). The company’s roadmap emphasizes a metric called “algorithmic qubits,” which factors in error rates and connectivity to measure the usable qubits for algorithms. IonQ’s aggressive goal is to reach 1,024 algorithmic qubits by 2028 – effectively meaning they hope to have ~1,024 error-corrected logical qubits with sufficiently low error rates to run deep circuits by that time. To get there, IonQ is pursuing a multi-core, modular ion trap architecture. In 2024, they announced an accelerated roadmap featuring parallel, multi-core ion trap processors connected by photonic links. By 2025, IonQ expects to demonstrate logical qubits with five-nines fidelity (99.999% at the logical gate level), which would open the door to error-corrected algorithms. IonQ’s current systems (Harmony, Aria) have up to ~29 usable physical qubits and have been used to run small algorithms; the next-gen systems will incorporate better crosstalk management and shuttling to scale up qubit count. They also plan to introduce rack-mountable quantum systems for data centers in the coming years. IonQ’s strength is a combination of high-quality qubits and a clear modular strategy. They are also part of US government initiatives (e.g., NSF Quantum Leap) focusing on error correction. If IonQ hits its milestones, by the late 2020s they could have one of the first sizable fault-tolerant quantum computers, delivered via the cloud or even on-premise for select customers.
Quantinuum:
Formed by the merger of Honeywell Quantum Solutions and Cambridge Quantum, Quantinuum operates some of the highest-performance quantum hardware today (the H1 and H2 series trapped-ion systems). Quantinuum has made headlines for QEC achievements: as noted, their H1-1 machine demonstrated in 2022 the first logical qubit outperforming a physical qubit, and in 2023 they reported entangling two logical qubits with higher fidelity than entangling two raw physical qubits, a key proof of concept for fault tolerance. In late 2022, they also announced having entangled 50 logical qubits with 98% fidelity in a test (exploiting the all-to-all connectivity to entangle many qubits in parallel). Quantinuum’s hardware roadmap is to scale up the number of qubits per ion trap and then use optical interconnects (they have demonstrated ion–photon linking technology) to join multiple traps. The second-generation H2 system has a larger qubit count (32 ions at launch) and improved gate speeds. Quantinuum also heavily invests in quantum software and applications, which means they are preparing error-corrected algorithms (e.g., quantum chemistry simulations) to run as soon as the hardware is ready. Their timeline hasn’t been publicly stated in detail, but given their achievements, they may attempt a fully protected logical qubit (with multiple error layers) in the next year or two, and a prototype fault-tolerant module by the later 2020s. They are also working jointly with partners (including GE and JPMorgan) on quantum algorithms that assume fault-tolerant operations, effectively getting the software ready for the era of logical qubits. Quantinuum is a key startup (though backed by a large corporation, Honeywell) to watch, as they uniquely combine cutting-edge hardware with a full-stack software environment.
Rigetti Computing:
Rigetti is a pioneering startup in superconducting qubits, known for its work on multi-chip quantum processors. Rigetti’s recent roadmap updates indicate a shift toward modular scaling and improved fidelities. They plan to introduce a new four-chip module in 2025, which will combine four 9-qubit dies into a 36-qubit processor, targeting 99.5% two-qubit fidelities. By the end of 2025, Rigetti aims for a >100-qubit system (likely a larger mosaic of chips) with similarly high fidelities. Achieving ~99.5% fidelity would be a big improvement for Rigetti (their prior prototypes were in the ~95–97% range), and it’s necessary for effective error correction. Notably, Rigetti has demonstrated real-time error correction on their current hardware in collaboration with the software startup Riverlane. This involved integrating a low-latency feedback loop to correct errors as they occur on a superconducting qubit – a valuable capability for future larger codes. Rigetti is also participating in government programs (e.g., DARPA’s programs and the UK’s quantum initiatives) to push toward fault tolerance. While Rigetti has faced challenges in recent years with qubit coherence and financial losses, they remain one of the few with an end-to-end quantum cloud service, and their emphasis on modular “chiplets” echoes the approach of larger players. If Rigetti’s technology matures as planned, they could deliver mid-scale processors that support small QEC codes within a couple of years, and then scale up through tiling. For enterprises, Rigetti’s accessible cloud platform (Quantum Cloud Services) might offer a testing ground for error correction techniques on superconducting qubits in the near term, albeit at smaller scales than IBM or Google.
Others:
There are many other innovators in this space. Alice & Bob, a French startup, focuses on “cat qubits” and recently reported creating a Schrödinger cat state that suppresses bit-flip errors exponentially, aiming for a bias where phase errors are dominant and can then be corrected by simple codes – potentially needing only ~10 physical qubits per logical qubit in theory. IQM (Finland) is building superconducting quantum modules and has a roadmap to incorporate error correction by 2027–2030 with specialized on-premise machines. Pasqal (France) and QuEra (USA) are leading in neutral atoms; Pasqal’s plan for fault tolerance by 2028 using Rydberg atom arrays makes it an ambitious entrant into the fault-tolerance race. D-Wave Systems, while known for quantum annealing, has a program for a gate-model superconducting qubit system and has mentioned using error suppression and mitigation – but they are further from fault-tolerance goals than others. On the software front, startups like Q-CTRL are providing error mitigation tools (stabilizing qubits through control engineering) which will complement error correction by squeezing as much performance as possible out of each physical qubit. Academic spinoffs and consortia also contribute: for example, the ETP Quantum program in Europe and the US National Quantum Initiative centers are funding research into new materials, better fabrication, and novel error-correcting codes (like holographic codes, bosonic codes, etc.). In short, the ecosystem of startups and research outfits working on the pieces of the fault-tolerance puzzle is vast. Each brings a piece of innovation – whether it’s a better qubit, a better code, or integration techniques – and together they drive the field closer to the end goal.
To summarize the major approaches and players, the table below provides a high-level comparison:

| Approach | Key players | QEC strategy | Status / outlook |
| --- | --- | --- | --- |
| Superconducting qubits | Google, IBM, Rigetti | Surface codes on 2D lattices; LDPC (“gross”) codes | Below-threshold logical qubits demonstrated; modular scaling toward $10^5$–$10^6$ qubits |
| Trapped ions | Quantinuum, IonQ | Small codes exploiting all-to-all connectivity; modular traps with photonic links | Logical gates beating physical fidelity; ~1,024 algorithmic qubits targeted by 2028 |
| Photonics | PsiQuantum, Xanadu, Quandela | Fusion-based cluster-state QEC | First logical qubits targeted mid-decade; million-qubit designs with foundry partners |
| Bosonic cat qubits | Alice & Bob, Quantum Circuits Inc. | Noise-biased cat qubits plus outer repetition code | Concatenated cat codes demonstrated; potentially ~10 physical qubits per logical |
| Topological qubits | Microsoft | Hardware-protected Majorana qubits; custom codes (~10× lower overhead claimed) | First topological qubit (“Majorana 1”) announced; scaling unproven |
| Neutral atoms | Pasqal, QuEra | QEC on reconfigurable Rydberg arrays | 100–300 atoms today; fault tolerance targeted by ~2028 |
Timeline Outlook: When Will Fault-Tolerant Quantum Arrive?
With such intense development underway, a natural question for technology decision-makers is “When will practical fault-tolerant quantum computers be available?” While exact dates are uncertain, we can make some informed projections based on current roadmaps and progress:
Mid-2020s (2025–2026): We expect to see the first logical qubits with better-than-physical performance accessible on cloud quantum services. This is essentially the “break-even” point: a logical qubit that retains quantum information longer than any single physical qubit could. Google’s 2023 result and Quantinuum’s 2022 experiment already indicate we are at the cusp of this. By 2025, companies like IBM and IonQ aim to demonstrate logical operations with error rates on the order of $10^{-4}$ or $10^{-5}$ (99.99%+ fidelity), enabling a few consecutive logical gates before errors creep in. During this period, we might witness a fault-tolerance prototype with a handful of logical qubits interacting – for example, two or three logical qubits running a short error-corrected circuit. Microsoft’s target to have a fault-tolerant prototype (as part of DARPA’s program) around 2026 suggests that by then at least a small-scale demonstration (perhaps solving a small problem with logical qubits) could occur. These early fault-tolerant prototypes will likely still be laboratory experiments or early products – not yet outperforming classical supercomputers for useful tasks, but proving the concept.
Late 2020s (2027–2029): This could be the inflection point where quantum computers achieve “utility-scale” fault tolerance. By 2028, IonQ’s roadmap envisions over 1,000 effective logical qubits, which, if realized, would be enough to run meaningful algorithms (e.g., certain quantum chemistry simulations or optimization problems of commercial interest). IBM’s roadmap around 2028–29 likewise aims for circuits with >1 million physical qubits involved (via modular systems) and the beginnings of thousands of logical operations. We anticipate that in the late 2020s, at least one platform (be it superconducting, ion trap, or photonic) will attain the capability to run quantum algorithms with millions of operations reliably, heralding the first real quantum advantage obtained through error correction rather than error mitigation. It’s plausible that by 2028, a quantum computer with, say, 50–100 logical qubits (encoded in perhaps 5,000–50,000 physical qubits depending on the platform) will be demonstrated, as the rough estimates below illustrate. Such a machine could tackle problems beyond the reach of classical HPC in areas like chemistry, materials science, or complex optimization – this is the moment many refer to as “quantum supremacy 2.0,” but for a useful task. Enterprise and government investment programs (like DARPA’s US2QC, the EU’s Quantum Flagship, etc.) are indeed aligned with achieving a utility-scale quantum computer by around 2030, which implies late 2020s for major breakthroughs. It’s important to note that even in this timeframe, access to fault-tolerant machines will be limited (perhaps a few exist globally, likely accessed via cloud or consortiums).
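As a sanity check on those physical-qubit ranges, here is a small sketch multiplying assumed per-logical-qubit overheads (illustrative values for each encoding family, not vendor figures) by the logical qubit counts mentioned above:

```python
# Rough resource estimates for the late-2020s scenario sketched above:
# total physical qubits for a target logical count under assumed
# per-logical overheads. All overhead figures are illustrative.

logical_targets = (50, 100)
overheads = {
    "cat/topological (optimistic)": 10,    # assumes strong hardware protection
    "LDPC-style codes": 100,               # assumes ~10x saving over surface code
    "surface code (d~15)": 2 * 15**2 - 1,  # ~449 physical qubits per logical
}

for n_logical in logical_targets:
    for name, per_logical in overheads.items():
        total = n_logical * per_logical
        print(f"{n_logical:3d} logical via {name:28s}: ~{total:,} physical qubits")
```

With these assumptions the totals span roughly 500 (the optimistic hardware-protected case) to about 45,000 physical qubits, bracketing the 5,000–50,000 range quoted above for most code families.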
Early-to-mid 2030s: In the 2030–2035 period, we expect fault-tolerant quantum computing to become commercially available and begin to mature. IBM’s goal of thousands of logical qubits by 2033 suggests that by then, fully error-corrected quantum supercomputers could be operational. These would be second-generation fault-tolerant machines with enough logical-qubit capacity to address a wide range of applications – from breaking certain cryptographic codes (if ever allowed) to simulating complex molecules or large-scale optimization for logistics and AI. By the early 2030s, multiple vendors may have deployed fault-tolerant systems: Google likely with a modular superconducting system, IBM with its quantum supercomputer approach, Microsoft with a topological-based machine if their plan succeeds, and perhaps one of the startups like PsiQuantum delivering a photonic quantum data center. At this stage, we might see broad quantum cloud services where developers can run circuits with billions of gates without worrying about errors, much like how we use classical cloud computing today. Costs will still be high (tens of millions of dollars per machine), but organizations will be able to rent time on logical qubits for critical computations. By the mid-2030s, standards for quantum error correction might emerge (for instance, common formats for compiling to logical qubits, or standardized QEC firmware in devices). It’s also likely that by this time, we will have a clearer sense of which technology is “winning” for scale, or whether multiple will coexist (superconducting vs. ion vs. photonic vs. topological).
Beyond 2035: Fully fault-tolerant quantum computers well integrated into computing infrastructure could be commonplace in labs and enterprises by the later 2030s. This is the era when quantum computing becomes a utility – much like classical HPC – and attention shifts from building the machines to leveraging them for transformative applications. If error-corrected qubits become sufficiently abundant (e.g., >1 million logical qubits, as some ultimately foresee), there’s no known fundamental limit to the problems they could solve, short of those bounded by quantum algorithms themselves. By this time, we may also see fault-tolerant networks of quantum computers (allowing distributed quantum computing with error-corrected entanglement across distances) and specialized quantum accelerators integrated into classical supercomputers for tasks like cryptographic analysis, large-scale simulation, or machine learning.
It’s worth noting that these timelines assume steady progress and no fundamental roadblocks. In reality, unforeseen engineering challenges could delay some milestones. However, given the diversity of approaches being pursued, it’s a good bet that at least one will bear fruit close to the predicted schedule. For planning purposes, enterprises should view the late 2020s as the likely period when fault-tolerant quantum computing transitions from an experimental phase to a practical tool, and the early 2030s as the start of the era when quantum computers can reliably solve certain high-value problems beyond classical reach.
Conclusion: Preparing for the Fault-Tolerant Quantum Era
The march toward fault-tolerant quantum computing is well underway, marked by rapid advances in both theory and implementation. In the span of just a few years, the field has progressed from basic error-correction experiments to multiple concurrent paths for building logical qubits and scalable architectures. For enterprise CTOs, the implications are profound: once quantum computers can operate error-free at scale, they will unlock computational capabilities that can disrupt cybersecurity, drug discovery, finance, logistics, and beyond.
It is therefore crucial for organizations to stay informed and engaged with this evolving landscape. In the near term (the next 2–3 years), CTOs should track developments such as the first cloud-available logical qubits, and even experiment with error-mitigated quantum algorithms using today’s NISQ devices (many of which are accessible via services like AWS Braket, Azure Quantum, IBM Quantum). This will build internal expertise and readiness for the transition to fault-tolerant hardware.
By the later 2020s, companies should consider strategic partnerships or investments in quantum computing, if they haven’t already. This could include joining industry consortia, engaging with leading vendors on pilot projects, or exploring quantum-safe cryptography upgrades in anticipation of quantum’s impact on security. When fault-tolerant machines come online, the organizations that have hands-on experience and a pipeline of quantum use-cases will be able to leverage these expensive resources most effectively.
In summary, the state of the art in fault-tolerant quantum computing is advancing rapidly on multiple fronts – error correction codes are hitting key performance milestones, hardware is scaling up with creative new architectures, and major players along with innovative startups are driving the timeline toward practical machines within the next decade. The consensus of roadmaps points to the early 2030s as the dawn of widely usable, error-corrected quantum computing. QuantumReport will continue to monitor these developments closely. CTOs should begin strategizing now for how fault-tolerant quantum capabilities can be integrated into their technology stacks and what new opportunities (and risks) they will bring. The coming era of reliable quantum computing promises to be one of the most significant technological shifts in our lifetime – akin to the arrival of classical supercomputers or the internet – and those prepared to harness it will lead the next wave of innovation.
Sources: This executive brief is based on the latest available data from industry research, academic publications, and company roadmaps, including Nature/Science articles on quantum error correction, official blog posts and press releases from Google, IBM, and Microsoft, and reports on startup roadmaps and experimental milestones.