For years, quantum computing has been working to overcome a limitation inherent in its nature: complex calculations require many qubits, its basic unit of information, but qubits are unstable, and more qubits mean more errors. Researchers have tried to surmount this barrier with better error correction techniques, but until now these were insufficient to guarantee quantum advantage (calculations more efficient than on any other supercomputer). On Tuesday, Jay Gambetta, vice president of IBM, claimed to have found the formula: a combination of technologies and programming that allows for the development of Quantum Starling, “the world’s first large-scale fault-tolerant quantum supercomputer.”
Quantum Starling is already being built at IBM’s quantum data center in Poughkeepsie, New York, and will be operational in four years. According to the company, it will execute 20,000 times more circuits than current quantum computers and will be capable of performing 100 million operations using 200 logical qubits. A physical qubit lives in a device (an ion, for example), but it is very unstable: any interference (noise) destroys its ephemeral state. A logical qubit is virtual, constructed from several physical qubits with error correction, and it allows information to be stored and processed reliably.
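The idea of building one reliable logical unit out of several unreliable physical ones has a simple classical analogy: a repetition code with majority-vote correction. The sketch below is purely illustrative (classical bits, not qubits, and a made-up 10% noise rate), but it shows why redundancy plus correction pushes the error rate down:

```python
import random

def encode(bit):
    """Spread one 'logical' bit redundantly across three 'physical' bits."""
    return [bit, bit, bit]

def apply_noise(bits, flip_prob=0.1):
    """Flip each physical bit independently with probability flip_prob (the 'noise')."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote: the logical bit survives any single flip."""
    return 1 if sum(bits) >= 2 else 0

random.seed(0)
trials = 10_000
errors = sum(decode(apply_noise(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}")  # far below the 10% physical rate
```

With a 10% physical error rate, the logical bit fails only when two or more of its three copies flip, which happens roughly 3% of the time. Quantum error correction faces extra obstacles (qubits cannot simply be copied), but the payoff is the same in spirit.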
A new error correction system, unveiled in Nature last March, has led IBM to conclude that the limitations of other approaches, such as the widely used surface code, can be overcome. It is called LDPC, which stands for low-density parity-check. “This end-to-end quantum error correction protocol implements fault-tolerant memory based on a family of low-density parity-check codes up to an error threshold of 0.7% for the standard noise model,” the researchers assert.
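A parity check asks whether a chosen subset of bits has even parity; a failed check signals an error. The toy sketch below uses the small classical (7,4) Hamming code rather than an actual quantum LDPC code, but the mechanism (a sparse matrix of parity checks whose failure pattern, the syndrome, locates the error) is the same family of ideas:

```python
# Parity-check rows of the classical (7,4) Hamming code; LDPC codes use
# much larger, sparser matrices of the same kind.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

codeword = [0] * 7            # the all-zeros word is a valid codeword
received = codeword.copy()
received[4] ^= 1              # a single "noise" flip on bit index 4

# Syndrome: which parity checks fail (parity of the bits each row touches)
syndrome = [sum(h * r for h, r in zip(row, received)) % 2 for row in H]

# For a Hamming code the syndrome, read as a binary number, names the
# flipped bit (1-based), so it can be corrected directly.
position = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2] - 1
received[position] ^= 1       # flip it back

print("syndrome:", syndrome, "-> corrected bit", position)
assert received == codeword
```

In the quantum setting the checks are measured without reading the data qubits themselves, which is what preserves the fragile quantum state while errors are diagnosed.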
This model reduces the number of physical qubits needed to build logical ones. According to the research, “12 logical qubits can be preserved for almost one million cycles using only 288 physical qubits.” Other systems, such as the aforementioned surface code, would require almost 3,000 physical qubits to achieve the same performance.
In this way, LDPC allows “a 90% reduction in the overhead required for error correction” and opens the door to a stable system of a size that makes quantum advantage a realistic prospect. In fact, IBM believes that the future quantum computer will have a quadrillion times more memory than the largest current supercomputer.
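The 90% figure follows directly from the numbers quoted above. A quick check of the arithmetic, using the article's round figures (288 physical qubits for 12 logical qubits with LDPC, versus roughly 3,000 with a surface code):

```python
# Figures quoted in the article (the 3,000 is an approximation).
ldpc_physical = 288
surface_physical = 3000
logical = 12

ldpc_ratio = ldpc_physical / logical      # physical qubits per logical qubit
surface_ratio = surface_physical / logical
reduction = 1 - ldpc_physical / surface_physical

print(f"LDPC:    {ldpc_ratio:.0f} physical qubits per logical qubit")
print(f"Surface: {surface_ratio:.0f} physical qubits per logical qubit")
print(f"Overhead reduction: {reduction:.0%}")  # about 90%, matching IBM's claim
```

That is 24 physical qubits per logical qubit instead of roughly 250, a difference that compounds quickly at the scale of hundreds of logical qubits.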
Arvind Krishna, president and CEO of IBM, believes that Starling “charts the next frontier in quantum computing.” “Our expertise in mathematics, physics, and engineering is paving the way for a large-scale fault-tolerant quantum computer, one that will solve real-world challenges and unlock immense possibilities for businesses,” he states.
According to the company, Starling “will be able to run algorithms that could dramatically accelerate efficiency across industries, including drug development, material discovery, chemistry, logistics optimization, and financial optimization, among many other areas.”
Error correction is not the only path to achieving Starling’s objective. Hardware developments, already being tested at IBM’s headquarters, are also necessary. “Over the next four years, we are going to launch increasingly larger and interconnected quantum processors, each of which will demonstrate the specific criteria established in IBM’s research on how to scale fault tolerance. Together, these advances will combine to become Starling,” the company explains.
This roadmap includes the following milestones: IBM Quantum Loon (this year), designed to test components of the LDPC architecture, including the “couplers” that connect qubits over longer distances within the same chip; Kookaburra (2026), the first modular processor, which will combine quantum memory with logical operations; and Cockatoo (2027), which will link two Kookaburra modules, avoiding the need to build impractically large chips.
These advances lay the groundwork for Starling’s completion in 2029, which in turn will be the foundation for IBM Blue Jay in 2033. That machine, according to Matthias Steffen, a researcher on the company’s quantum team, will be capable of executing “1 billion quantum operations across 2,000 logical qubits,” ten times the power of the model announced Tuesday.
The goal is what is considered the holy grail of quantum computing, still awaiting the discovery of the Majorana particle, a hypothesized entity thought capable of maintaining coherence that has yet to be observed. The challenge involves “balancing sufficient control and coupling while preserving quantum coherence.”
The company, engaged in this goal for over a decade, draws on 80 systems already deployed and operational (one of the most advanced is being built in the Basque Country), a community of 600,000 users, and collaborations with 300 scientific, technological, and industrial organizations worldwide.
Gambetta asserts that, with the new advances, “fault-tolerant large-scale quantum computing is no longer a question of science, but an engineering challenge.” “We are confident that we can build it: we have the architecture, we have the hardware, we have the scientific advances, and now we see it as an engineering path,” the researcher emphasizes.
He concludes: “IBM Quantum’s goal is to build this equipment and work with our partners on the algorithms with high hopes that it will be the future of the quantum industry. We will demonstrate quantum advantage, and it will definitely happen in the coming years.”
In the race to achieve effective quantum advantage, IBM is not alone. The year 2025, declared by UNESCO the International Year of Quantum Science and Technology to mark the centennial of the discoveries that opened the door to the microscopic world, has been filled with announcements: Google has unveiled its Willow chip, which the multinational claims can solve in five minutes a task that would take a supercomputer quadrillions of years; Microsoft has stated that it has discovered a new state of matter to tame the elusive Majorana particle; and a team of scientists from Amazon Web Services (AWS) and Caltech has presented Ocelot in Nature, a new quantum computing processor that, according to the company, can “reduce error correction costs by up to 90%.”
All claim to have achieved milestones that herald a new era of computing. However, Fernando Brandão, co-author of the work on Ocelot, warns: “We are on a long-term quest to build a quantum computer capable of doing things that even the best supercomputers cannot do, but scaling it up is a huge challenge.”