The Quantum Readability Problem Is Solved — And What It Means for Cryptography

GRIDNET Magazine · Deep Dive
Majorana qubits were theorized to be ultra-secure because they’re nearly impossible to disturb. Now scientists can read them. The lock on the quantum safe has been picked — from the inside.

February 19, 2026 · 22 min read · Quantum Computing & Cryptography

I. The Paradox at the Heart of Quantum Memory

For nearly two decades, the Majorana fermion has occupied a peculiar throne in theoretical physics — a particle that is its own antiparticle, first proposed by the Italian physicist Ettore Majorana in 1937 before his mysterious disappearance. In the realm of quantum computing, Majorana zero modes (MZMs) promised something that sounded almost paradoxical: a qubit so well-protected by the topology of space itself that no local disturbance could corrupt it. Information encoded in Majorana qubits would be split across two spatially separated points, like tearing a secret in half and hiding the pieces in different cities. To corrupt the data, an attacker — or simply noise from the environment — would have to affect both locations at once.

This was, on paper, the holy grail of quantum error correction: a qubit that corrects itself by virtue of its own geometry. No overhead. No redundancy. No thousands of physical qubits babysitting a single logical one.

But there was a catch — an ironic, almost poetic catch. The same topological protection that made Majorana qubits incorruptible also made them unreadable. If no local probe could disturb the state, then no local probe could measure it either. The quantum safe was impenetrable, yes. But it was impenetrable to everyone — including the owner.

Until February 2026.

In a paper published in Nature on February 11, an international collaboration led by QuTech at Delft University of Technology and the Spanish National Research Council (CSIC) demonstrated something that had eluded experimentalists for years: single-shot, real-time readout of quantum information stored in Majorana zero modes. They didn’t merely detect the presence of Majorana states. They read the parity — the actual computational bit, the 0 or the 1 — in a single measurement, without destroying the topological protection that makes these qubits special in the first place.

The quantum safe can now be opened. And everything downstream — from how we build quantum computers to how we encrypt our data to whether your blockchain wallet survives the next decade — just changed.

Fig. 1 — Majorana Qubit Architecture: A minimal Kitaev chain with two semiconductor quantum dots coupled via a superconductor. Majorana zero modes (γ₁, γ₂) emerge at the chain’s edges, storing quantum information non-locally across the entire structure.

II. How They Cracked the Unreadable Qubit

To understand the breakthrough, you need to understand the architecture. The team built what’s called a minimal Kitaev chain — the simplest possible realization of a theoretical model proposed by physicist Alexei Kitaev in 2001. The construction is elegant in its modularity: two semiconductor quantum dots, connected through a superconducting bridge. Ramón Aguado, a CSIC researcher at the Madrid Institute of Materials Science and co-author of the study, describes the approach as “building with Lego blocks” — a bottom-up assembly that gives researchers precise control over every parameter.

When the system is tuned correctly, Majorana zero modes emerge at the outer edges of the chain — one on each quantum dot. These are not particles in the conventional sense. They are quasiparticles, collective excitations of the underlying electrons and superconducting condensate, existing as mathematical zero-energy states pinned to the edges of a topological phase. The quantum information — the bit — is encoded not in either mode individually, but in the joint parity of the two modes taken together. Is the combined state even or odd? That single binary distinction is the qubit.

Previous experiments could create Majorana modes. They could even detect signatures consistent with their existence (though this remained controversial for years). But reading the parity — the actual computational content — required a fundamentally different approach. Local charge measurements, the standard tool in quantum dot physics, were blind to it. As Aguado explains: “How do you read or detect a property that doesn’t reside at any specific point?”

The Quantum Capacitance Technique

The answer turned out to be a measurement technique called quantum capacitance. Rather than probing a single point in the system, the quantum capacitance method acts as what Aguado calls “a global probe sensitive to the overall state of the system.” It measures the system’s response to a tiny oscillating voltage applied across the entire chain — a response that differs measurably depending on whether the Majorana parity is even or odd.

The key insight is subtle: while the charge at any given point in the chain doesn’t change between parity states (this is the topological protection at work), the susceptibility of the system — how it responds to perturbation — does. The quantum capacitance probes this susceptibility, extracting global information without violating the local protection. It’s the difference between trying to read a book by touching individual letters versus feeling the weight of the entire volume.
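The single-shot discrimination described here can be pictured with a toy statistical model: the even and odd parity sectors produce two different dispersive responses, and each noisy sample is assigned to whichever side of a threshold it falls on. All numbers below (signal separation, noise width, threshold) are illustrative, not values from the experiment — a minimal sketch:

```python
import random

def capacitance_sample(parity, mean_even=0.0, mean_odd=1.0, noise=0.2, rng=random):
    """One noisy quantum-capacitance sample for a given parity (0 = even,
    1 = odd). In this toy model the two parity sectors give dispersive
    responses drawn from Gaussians separated by mean_odd - mean_even."""
    mean = mean_odd if parity else mean_even
    return rng.gauss(mean, noise)

def single_shot_fidelity(shots=10_000, threshold=0.5):
    """Fraction of single shots assigned to the correct parity sector."""
    rng = random.Random(42)
    correct = 0
    for _ in range(shots):
        parity = rng.randrange(2)
        guess = 1 if capacitance_sample(parity, rng=rng) > threshold else 0
        correct += (guess == parity)
    return correct / shots
```

With the illustrative parameters above, the two distributions barely overlap, so a single sample identifies the parity with better than 99% probability — which is what "single-shot" means in practice: no averaging over thousands of repetitions.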

Technical Milestone: What Was Actually Measured

  • Single-shot parity discrimination — distinguishing |0⟩ from |1⟩ in a single measurement cycle, not averaged over thousands of repetitions
  • Real-time readout — fast enough to track parity dynamics as they happen
  • Random parity jumps — detected and characterized, revealing a parity coherence time exceeding one millisecond
  • Charge-neutral transitions — confirmed via simultaneous charge sensing that parity switches occur without charge transfer, validating the topological nature of the encoding

Gorm Steffensen, a researcher at ICMM-CSIC who participated in the study, captured the result elegantly: “The experiment confirms the protection principle: while local charge measurements are blind to this information, the global probe reveals it clearly.”

The parity coherence time of more than one millisecond is particularly significant. For context, conventional superconducting qubits (transmons, the type used by Google and IBM) achieve coherence times in the range of 100–300 microseconds. One millisecond for a first demonstration of a fundamentally new qubit architecture is not just competitive — it’s a statement of intent.

Fig. 2 — Readout Mechanism Comparison: Traditional qubits require local measurement and massive error correction overhead (~1000:1 physical-to-logical ratio). Majorana qubits leverage a global quantum capacitance probe that reads non-local parity directly, potentially slashing the overhead by an order of magnitude.

III. The Scalability Revolution: Why Topology Changes the Math

To appreciate why this matters for quantum computing at scale, you need to confront an uncomfortable truth about the current state of the field: most of today’s quantum computers are engineering marvels that can’t actually do anything useful yet. The reason is error correction — or more precisely, the ruinous cost of it.

A superconducting transmon qubit, the dominant architecture in machines from IBM, Google, and others, has an error rate of roughly 10⁻³ per gate operation. That means for every thousand quantum logic gates you apply, you expect about one error. This sounds tolerable until you realize that useful quantum algorithms — the kind that could break RSA encryption, simulate molecular dynamics, or optimize global supply chains — require millions or billions of gate operations. At one error per thousand gates, the computation dissolves into noise long before it reaches an answer.

The solution, in theory, is quantum error correction (QEC): using many physical qubits to encode a single, more reliable logical qubit. The most promising QEC scheme, the surface code, requires roughly 1,000 physical qubits per logical qubit at current error rates. To run Shor’s algorithm against RSA-2048 — the attack that keeps cryptographers up at night — you’d need on the order of 4,000 logical qubits, which translates to roughly four million physical qubits. IBM’s current flagship, the 1,121-qubit Condor processor, wouldn’t even cover one logical qubit under these constraints.
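The overhead arithmetic behind these numbers follows from the standard surface-code scaling law, in which the logical error rate falls roughly as A·(p/p_th)^((d+1)/2) with code distance d, and each logical qubit consumes on the order of 2d² physical qubits (data plus ancilla). The constants below (A = 0.1, threshold 10⁻²) are conventional rule-of-thumb values, not measurements — a rough sizing sketch:

```python
def surface_code_cost(p_phys, p_logical_target, p_threshold=1e-2, A=0.1):
    """Smallest odd code distance d such that the standard scaling
    p_L ~ A * (p_phys / p_threshold) ** ((d + 1) / 2) reaches the target
    logical error rate, plus the ~2*d**2 physical qubits that distance costs."""
    d = 3
    while A * (p_phys / p_threshold) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d, 2 * d * d

# Transmon-class hardware (p ~ 1e-3) targeting a 1e-12 logical error rate:
d, qubits_per_logical = surface_code_cost(1e-3, 1e-12)
```

Running this for transmon-class error rates gives a code distance in the low twenties and roughly a thousand physical qubits per logical qubit, consistent with the figures quoted above; 4,000 logical qubits for Shor's algorithm then implies millions of physical qubits.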

The Topological Shortcut

Majorana qubits offer a fundamentally different scaling trajectory. Because the topological protection suppresses errors at the hardware level — before error correction even begins — the overhead ratio drops dramatically. Microsoft, which has been investing in topological quantum computing for over a decade, estimates that Majorana-based architectures could achieve the same logical error rates with 10 to 100 times fewer physical qubits than conventional approaches.

Consider what this means in practice. If you can build a fault-tolerant logical qubit from 10 topological qubits instead of 1,000 transmons, then a machine with 100,000 physical Majorana qubits could house 10,000 logical qubits — enough to run meaningful quantum algorithms, including cryptographic attacks. Microsoft’s roadmap, anchored by the Majorana 1 processor announced in February 2025, targets exactly this trajectory: scaling from the small proof-of-concept to arrays of tetrons (four-Majorana-mode units) that can support logical operations with built-in topological protection.
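The capacity claim is a single division, but it is worth making explicit because the overhead ratio is the whole story. The ratios below (1,000:1 for surface-coded transmons, 10:1 to 100:1 for topological qubits) are the estimates quoted in this section, not measured values:

```python
def logical_capacity(physical_qubits: int, overhead_ratio: int) -> int:
    """Logical qubits a machine can host at a given physical-to-logical ratio."""
    return physical_qubits // overhead_ratio

# The same 100,000-qubit machine under the overhead regimes discussed above:
for label, ratio in [("surface-coded transmons", 1000),
                     ("topological (pessimistic)", 100),
                     ("topological (optimistic)", 10)]:
    print(f"{label:26s} {logical_capacity(100_000, ratio):6d} logical qubits")
```

The spread is the point: the same fabrication effort yields 100 logical qubits under one architecture and 10,000 under another — the difference between a research instrument and a cryptographically relevant machine.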

The February 2026 readout result from QuTech and CSIC fits into this picture as a missing puzzle piece. Microsoft claimed you could build the qubits — though it should be noted that Nature’s own peer-review commentary cautioned that the 2025 results “did not constitute direct evidence for Majorana zero modes,” a point of ongoing scientific debate. This team showed you could read them. Together, they close the loop: create, manipulate, measure. The full stack for topological quantum computing is, for the first time, experimentally demonstrated.

Fig. 3 — Quantum Error Correction Landscape: Different qubit technologies mapped by their physical-to-logical overhead and per-gate error rates. Topological (Majorana) qubits occupy the ideal zone — low inherent errors requiring minimal correction — while superconducting and trapped-ion approaches demand orders of magnitude more physical resources.

IV. The Cryptographic Reckoning: What Happens When Quantum Computers Actually Work

Here is where the story turns dark — or, depending on your perspective, electrifying.

The entire security architecture of the modern internet rests on mathematical problems that classical computers find hard: factoring large numbers (RSA), computing discrete logarithms on elliptic curves (ECDSA, ECDH), and related structures. Your online banking, your encrypted messages, your blockchain wallets, your government’s classified communications — all of it depends on the assumption that these problems remain intractable.

Shor’s algorithm, published in 1994 by mathematician Peter Shor, proved that a sufficiently powerful quantum computer could solve these problems in polynomial time. RSA-2048 would crumble. Elliptic curve cryptography would evaporate. The mathematical bedrock of digital trust would become sand.

For thirty years, this threat has been theoretical — a distant thundercloud on the horizon. The reason is simple: Shor’s algorithm requires a fault-tolerant quantum computer with thousands of logical qubits and deep circuit execution. No such machine exists today. No such machine was even plausible, by most serious estimates, before the mid-2030s at the earliest.

The Majorana readout breakthrough changes the timeline.

The Harvest Now, Decrypt Later Attack

Even before a cryptographically relevant quantum computer exists, the threat is already active. Intelligence agencies, nation-state actors, and sophisticated criminal organizations are engaged in what security researchers call the “Harvest Now, Decrypt Later” (HNDL) attack: intercepting and storing encrypted communications today, with the intention of decrypting them once quantum computers become powerful enough.

Think about what’s being harvested right now: diplomatic cables, military communications, corporate trade secrets, financial transaction records, personal medical data, attorney-client privileged communications. Data that may need to remain confidential for 10, 20, or 50 years is being vacuumed up and stored, waiting for the day a quantum computer can strip away its encryption like tissue paper.

The Majorana readout breakthrough doesn’t make that day tomorrow. But it compresses the timeline in a way that should make every organization reassess its cryptographic posture. If topological qubits can be read reliably, and if they can be manufactured in arrays (as Microsoft’s Majorana 1 architecture suggests), then the path from physics demonstration to fault-tolerant machine shortens significantly. The most optimistic estimates now place cryptographically relevant quantum computing in the late 2020s to early 2030s — within the confidentiality window of data being encrypted today.
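The planning logic behind this warning is usually formalized as Mosca's inequality: data is already at risk if the time it must remain secret (x) plus the time a migration takes (y) exceeds the time until a cryptographically relevant quantum computer arrives (z). A minimal sketch:

```python
def mosca_at_risk(secrecy_years: float, migration_years: float,
                  years_to_crqc: float) -> bool:
    """Mosca's inequality: encrypted data is at risk when x + y > z,
    i.e. when ciphertext harvested today will still matter once a
    cryptographically relevant quantum computer (CRQC) exists."""
    return secrecy_years + migration_years > years_to_crqc

# Medical records that must stay confidential for 20 years, a 5-year
# migration effort, and a CRQC arriving in 8 years:
print(mosca_at_risk(20, 5, 8))  # prints True — already at risk
```

The uncomfortable feature of the inequality is that z is the only term the defender does not control, and the Majorana readout result just shrank everyone's estimate of it.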

Fig. 4 — Post-Quantum Cryptography Threat Timeline: The race between quantum computing milestones and cryptographic migration deadlines. The Majorana readout breakthrough (2026) accelerates the projected arrival of fault-tolerant QC, compressing the window for organizations to complete their post-quantum migration.

V. The Post-Quantum Migration: NIST, Standards, and the Clock

The good news — and it is genuinely good news — is that the cryptographic community has not been idle. The National Institute of Standards and Technology (NIST) has been running a post-quantum cryptography (PQC) standardization process since 2016, and in August 2024, it finalized three quantum-resistant standards:

NIST Post-Quantum Standards (Finalized 2024)

  • FIPS 203 (ML-KEM / CRYSTALS-Kyber) — Lattice-based key encapsulation mechanism for secure key exchange
  • FIPS 204 (ML-DSA / CRYSTALS-Dilithium) — Lattice-based digital signature algorithm
  • FIPS 205 (SLH-DSA / SPHINCS+) — Hash-based digital signature algorithm (conservative fallback)

Additionally, HQC (Hamming Quasi-Cyclic) was selected for standardization in March 2025 as an alternative code-based KEM, providing algorithm diversity.

These algorithms are designed to resist attacks from both classical and quantum computers. They're based on mathematical problems — lattice problems, hash functions, error-correcting codes — for which no efficient quantum algorithm is known. The NSA's CNSA 2.0 suite mandates that all new U.S. National Security Systems acquisitions must be post-quantum compliant by January 1, 2027, with full migration expected by 2035.
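One migration cost worth quantifying is artifact size: lattice keys and signatures are an order of magnitude larger than their elliptic-curve counterparts. The figures below are approximate byte sizes for the FIPS 203/204 parameter sets against familiar classical baselines (the listed parameter sets are not all at identical security categories):

```python
# Approximate sizes in bytes (figures approximate).
SIZES = {
    "ECDSA-P256": {"public_key": 64,   "signature": 64},
    "RSA-2048":   {"public_key": 256,  "signature": 256},
    "ML-DSA-65":  {"public_key": 1952, "signature": 3309},   # FIPS 204
    "ML-KEM-768": {"public_key": 1184, "ciphertext": 1088},  # FIPS 203
}

def blowup(scheme: str, field: str, baseline: str = "ECDSA-P256",
           baseline_field: str = "signature") -> float:
    """How many times larger a PQC artifact is than a classical baseline."""
    return SIZES[scheme][field] / SIZES[baseline][baseline_field]
```

A lattice signature is roughly fifty times the size of an ECDSA signature — the same block-space pressure flagged for blockchains in Section VI applies to TLS handshakes, certificate chains, and firmware signing.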

The bad news is that migration is slow. Cryptographic algorithms are embedded in every layer of our digital infrastructure: TLS certificates, VPN tunnels, code-signing systems, secure boot chains, database encryption, messaging protocols, hardware security modules. Replacing RSA and ECC with lattice-based alternatives across an entire organization — let alone an entire industry — is a multi-year engineering effort that most enterprises haven’t started.

A 2024 survey by the Cloud Security Alliance found that fewer than 15% of large enterprises had begun any form of post-quantum cryptographic assessment, and fewer than 3% had deployed PQC algorithms in production. The gap between the threat timeline and the migration timeline is widening, and the Majorana breakthrough just widened it further.

The Quantum Security Paradox

Ironically, quantum computers themselves face significant security challenges — even as they threaten classical cryptography. A paper by Swaroop Ghosh and Suryansh Upadhyay at Penn State, published in January 2026 in the Proceedings of the IEEE, outlined several serious security vulnerabilities in current quantum computing systems. These include:

  • Unverified third-party compilers and quantum software stacks that could enable intellectual property theft or tampering
  • Crosstalk-based information leakage — the interconnectedness that makes qubits powerful also creates side channels when multiple users share the same quantum processor
  • Reverse engineering of quantum circuits that encode proprietary algorithms and client data
  • Lack of hardware-level security measures analogous to those in classical computing (secure enclaves, trusted platform modules)

As Ghosh noted, “Classical security methods cannot be used because quantum systems behave fundamentally differently from traditional computers.” The implication is stark: quantum computers are simultaneously the most powerful attack tool and the most vulnerable computing platform humanity has ever built. They threaten our cryptography while being, themselves, deeply insecure.

This creates a peculiar strategic landscape. A nation-state could spend billions building a fault-tolerant quantum computer to break its adversary’s encryption, only to have that same adversary steal the quantum algorithms via crosstalk exploitation on a shared cloud quantum processor. The offense-defense balance in quantum computing security is, to put it mildly, unresolved.

VI. Blockchain’s Quantum Reckoning: Every Chain Is Exposed

If the Majorana readout breakthrough compresses the timeline to fault-tolerant quantum computing, then every Layer 1 blockchain is running a countdown clock it hasn’t fully acknowledged.

Bitcoin uses ECDSA (Elliptic Curve Digital Signature Algorithm) with the secp256k1 curve for transaction signing. This is the mechanism that proves you own your coins — that the person authorizing a transfer actually controls the private key associated with the sending address. Shor’s algorithm on a sufficiently powerful quantum computer could derive a Bitcoin private key from its corresponding public key in hours or minutes, rather than the billions of years it would take a classical computer.

Ethereum faces the same fundamental vulnerability, also relying on ECDSA (with the secp256k1 curve) for account authentication and transaction signing. Its smart contract ecosystem adds an additional layer of concern: any contract that stores or processes cryptographic keys, manages multi-signature wallets, or implements zero-knowledge proofs based on elliptic curve assumptions would need to be rewritten or migrated.

Bitcoin's proof-of-work hash function (SHA-256) faces a less severe but still meaningful threat from Grover's algorithm, which provides a quadratic speedup for unstructured search. Grover effectively halves the preimage security of SHA-256 from 256 bits to 128 bits — still substantial, but a degradation that would eventually need to be addressed through longer hash outputs or algorithmic changes.
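The asymmetry between the two quantum attacks can be stated in a few lines: Shor gives a full polynomial-time break of factoring and discrete-log schemes, while Grover only square-roots the work for brute-force search. A toy accounting sketch (the scheme labels are illustrative, not a complete taxonomy):

```python
def grover_effective_bits(security_bits: int) -> int:
    """Grover's quadratic speedup: searching 2**n keys costs ~2**(n/2)
    quantum iterations, halving the effective security level."""
    return security_bits // 2

def broken_by_shor(family: str) -> bool:
    """Shor's algorithm breaks factoring and discrete-log schemes outright;
    symmetric ciphers and hash functions are outside its scope."""
    return family in {"RSA", "ECDSA", "ECDH", "DSA", "DH"}

print(grover_effective_bits(256))                    # SHA-256 preimage: 128
print(broken_by_shor("ECDSA"), broken_by_shor("SHA-256"))
```

This is why the defensive prescriptions differ: signatures and key exchange must be replaced with new mathematics, while symmetric primitives mostly need larger parameters.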

The Reused Address Problem

There’s a particularly acute vulnerability that the quantum threat exposes: address reuse. When a Bitcoin user sends a transaction, their public key is revealed on the blockchain. If they’ve received additional funds to the same address, those funds are now protected only by the difficulty of deriving the private key from the public key — exactly the problem Shor’s algorithm solves. Estimates suggest that approximately 4–7 million BTC (worth hundreds of billions of dollars at current prices) sit in addresses with exposed public keys, including Satoshi Nakamoto’s estimated 1.1 million BTC.

The migration path for blockchain is more complex than for traditional IT infrastructure. You can’t simply patch a protocol used by millions of decentralized nodes across the globe. Any post-quantum upgrade requires:

  • Consensus among a decentralized community (notoriously difficult)
  • A hard fork or soft fork that all major node operators and miners/validators adopt
  • Migration of every existing wallet and smart contract
  • Backwards-compatible transition periods where both old and new signature schemes coexist
  • Larger signature and key sizes (lattice-based signatures are significantly larger than ECC signatures, impacting block space and transaction fees)

GRIDNET OS: Post-Quantum Readiness

GRIDNET OS has been tracking the post-quantum threat since its architectural inception. The system’s modular cryptographic layer is designed for algorithm agility — the ability to swap signature and key exchange algorithms without rewriting the core protocol. The migration pathway targets CRYSTALS-Dilithium (ML-DSA) for digital signatures and CRYSTALS-Kyber (ML-KEM) for key encapsulation, aligned with NIST’s finalized standards. FALCON (a compact lattice-based signature scheme) is under evaluation as an alternative where signature size constraints are critical.
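Mechanically, "algorithm agility" means that signed objects carry an algorithm identifier and the protocol core dispatches through an interface rather than naming a concrete scheme. The sketch below is a hypothetical illustration of that pattern — not GRIDNET OS's actual code or API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class SignatureScheme:
    """Pluggable signature algorithm: the core calls sign/verify through
    this interface and never hard-codes ECDSA, ML-DSA, or anything else."""
    name: str
    sign: Callable[[bytes, bytes], bytes]          # (secret_key, msg) -> sig
    verify: Callable[[bytes, bytes, bytes], bool]  # (pub, msg, sig) -> ok

REGISTRY: dict[int, SignatureScheme] = {}

def register(alg_id: int, scheme: SignatureScheme) -> None:
    """Bind an on-wire algorithm identifier to a concrete scheme."""
    REGISTRY[alg_id] = scheme

def verify_envelope(alg_id: int, pub: bytes, msg: bytes, sig: bytes) -> bool:
    """Each signed object names its algorithm, so a classical scheme and a
    lattice scheme can coexist during a transition period."""
    return REGISTRY[alg_id].verify(pub, msg, sig)
```

The design choice is that the algorithm identifier travels with the signature: old objects keep verifying under the old scheme while new objects adopt the post-quantum one, which is exactly the coexistence requirement listed above for any fork-based migration.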

This isn’t a theoretical roadmap. It’s an engineering priority. Every L1 chain that hasn’t started this work is accumulating quantum debt — a form of technical debt that compounds not with interest, but with existential risk.

VII. The Physics Beneath the Hype: What This Experiment Actually Proves

In a field prone to breathless announcements and inflated claims, it’s worth being precise about what the QuTech/CSIC result does and doesn’t demonstrate.

What It Proves

First: Majorana zero modes can be created in a controlled, bottom-up fashion using a modular Kitaev chain architecture. This is significant because earlier claimed observations of Majorana modes (particularly the 2018 results from Delft that were later retracted) were plagued by ambiguity. The bottom-up approach provides much stronger evidence that what’s being observed are genuine topological states, not mundane Andreev bound states mimicking Majorana signatures.

Second: The parity of these Majorana modes — the actual computational content — can be read in a single shot, in real time, using quantum capacitance. This is the readout problem solved. It transforms Majorana qubits from a theoretical curiosity into a measurable, operational object.

Third: The parity coherence time exceeds one millisecond. This is a first-generation result, and coherence times typically improve by orders of magnitude as fabrication and control techniques mature. For reference, superconducting qubits went from microsecond coherence in the early 2000s to hundreds of microseconds today — a factor of 100 improvement over two decades.

Fourth: The topological protection is experimentally confirmed. Simultaneous charge sensing showed that parity transitions occur without any detectable charge transfer, confirming that the information is truly non-local and immune to local charge noise. This is the hallmark signature that distinguishes topological encoding from conventional encoding.

What It Doesn’t Prove (Yet)

The experiment demonstrates readout on a minimal Kitaev chain — two quantum dots, two Majorana modes, one qubit. Scaling to arrays of topological qubits, implementing braiding operations (the native gate set for topological quantum computing), and achieving the error rates needed for fault tolerance are all separate challenges that remain ahead.

The one-millisecond coherence time, while promising, needs to improve by at least one to two orders of magnitude for practical quantum computing. And the quantum capacitance readout, while elegant, needs to be demonstrated at speeds compatible with quantum error correction cycles — typically on the order of microseconds.

This is not a quantum computer. It is a proof that the fundamental building block of a topological quantum computer works as theory predicted. That distinction matters enormously to physicists — and, as we’ve argued, it should matter enormously to cryptographers and blockchain architects as well.

VIII. The Eternal Tension: Computational Power vs. Privacy

Step back far enough, and the Majorana readout breakthrough illuminates a tension that has run through the entire history of human civilization: the relationship between the power to compute and the power to conceal.

Every great leap in computational capability has been followed, eventually, by a renegotiation of what can remain private. The Enigma machine — thought unbreakable — fell to Turing’s Bombe. DES encryption — the U.S. government standard for two decades — yielded to brute-force attacks as processors grew faster. The pattern repeats: society builds a cryptographic wall, and computation eventually finds a way through it, or around it, or under it.

Quantum computing represents the most dramatic instance of this pattern in history. It doesn’t just speed up existing attacks — it introduces entirely new classes of mathematical operations that render certain cryptographic assumptions structurally invalid. The hard problems stay hard for classical computers and become easy for quantum ones. It’s not an incremental threat; it’s a phase transition.

And yet, the same physics that enables quantum attacks also enables quantum defenses. Quantum key distribution (QKD) offers theoretically unbreakable communication channels. Post-quantum cryptographic algorithms (lattice-based, hash-based, code-based) are designed to resist quantum attacks using classical hardware. Homomorphic encryption allows computation on encrypted data without ever decrypting it. The defensive toolkit is expanding just as the offensive one is.

The question is not whether we can survive the quantum transition — we can, if we prepare. The question is whether we will prepare in time. The Majorana readout breakthrough is a shot across the bow: the timeline is shorter than you think, the physics is real, and the migration hasn’t started for most organizations.

“The future is already here — it’s just not evenly distributed.” — William Gibson

In the quantum context, the future of broken encryption is already here in physics laboratories. It just hasn’t been evenly distributed to the adversaries who want to exploit it. That distribution is now accelerating.

IX. What Comes Next: The Roadmap from Lab to Threat

The path from the February 2026 demonstration to a cryptographically relevant quantum computer is not a straight line, but the waypoints are becoming clearer:

2026–2027: Multi-qubit demonstrations. The immediate next step is scaling from one topological qubit (two Majorana modes) to small arrays. Microsoft’s tetron architecture — four-Majorana-mode units arranged in a 2D grid — is the leading candidate. Expect demonstrations of two-qubit gates via measurement-based braiding on minimal systems.

2027–2029: Error correction milestones. The first demonstrations of quantum error correction using topological qubits, showing that the inherent protection translates into practical error suppression that outperforms surface codes on conventional qubits. This is the make-or-break period for the topological approach.

2029–2032: Small-scale fault tolerance. Systems with tens to hundreds of logical qubits, capable of running meaningful quantum algorithms (quantum chemistry, optimization) that exceed the capabilities of classical simulation. This is also the window where quantum advantage becomes commercially relevant.

2032–2035: Cryptographic relevance. Systems with thousands of logical qubits and sufficient circuit depth to run Shor’s algorithm against real-world key sizes. This is the window where RSA-2048 and ECDSA become practically breakable — and where any organization that hasn’t migrated to post-quantum cryptography faces catastrophic risk.

These timelines could accelerate. The history of quantum computing is full of surprises — both setbacks and breakthroughs. The Majorana readout result itself was one such surprise, arriving ahead of many expert predictions. Prudent security planning assumes the worst case: that the timeline compresses further.

X. Conclusion: The Lock Is Picked, the Clock Is Ticking

The single-shot parity readout of a minimal Kitaev chain is one of those results that arrives quietly in a physics journal and proceeds to rearrange the strategic landscape. It doesn’t make headlines the way a new AI chatbot does. It doesn’t trend on social media. It’s buried in the technical language of quantum capacitance, Majorana zero modes, and parity coherence times.

But make no mistake: this is a pivotal moment. The readout problem was the last major experimental barrier to topological quantum computing. With it solved, the path from laboratory demonstration to engineered system becomes an engineering problem rather than a physics problem. Engineering problems get solved. They get solved on timelines. They get solved by well-funded organizations — Microsoft, Google, national laboratories — with powerful incentives.

For the quantum computing community, this is a validation of two decades of theoretical work and a green light for the next phase of hardware development. For the cryptographic community, it’s a compression of the threat timeline that demands accelerated migration to post-quantum standards. For the blockchain ecosystem — Bitcoin, Ethereum, GRIDNET OS, and every other L1 — it’s a reminder that the cryptographic foundations of decentralized trust are not eternal. They are contingent on the hardness of mathematical problems that quantum computers are specifically designed to make easy.

The quantum safe of the Majorana qubit has been opened. The question now is whether we'll use the same ingenuity to secure the safes that protect everything else.

The clock didn’t just start. It’s been running for thirty years, since Shor’s algorithm was first published. What changed this month is that we can now hear it ticking.

References & Sources

  1. van Loo, N., Zatelli, F., Steffensen, G.O. et al. “Single-shot parity readout of a minimal Kitaev chain.” Nature 650, 334–339 (2026). https://doi.org/10.1038/s41586-025-09927-7
  2. Spanish National Research Council (CSIC). “Majorana qubits decoded in quantum computing breakthrough.” ScienceDaily, 16 February 2026. https://www.sciencedaily.com/releases/2026/02/260216084525.htm
  3. Quantum Computing Report. “Single-Shot Parity Readout of a Minimal Kitaev Chain: A Breakthrough in Majorana Qubits.” 15 February 2026. https://quantumcomputingreport.com/single-shot-parity-readout-of-a-minimal-kitaev-chain-a-breakthrough-in-majorana-qubits/
  4. Microsoft Azure Quantum Blog. “Microsoft unveils Majorana 1, the world’s first quantum processor powered by topological qubits.” 19 February 2025. https://azure.microsoft.com/en-us/blog/quantum/2025/02/19/microsoft-unveils-majorana-1-the-worlds-first-quantum-processor-powered-by-topological-qubits/
  5. Ghosh, S. & Upadhyay, S. “Security Vulnerabilities in Quantum Computing Systems.” Proceedings of the IEEE (2026). https://www.sciencedaily.com/releases/2026/01/260120000330.htm
  6. National Institute of Standards and Technology (NIST). “Post-Quantum Cryptography Standardization Process.” FIPS 203, 204, 205 (2024). https://csrc.nist.gov/projects/post-quantum-cryptography/post-quantum-cryptography-standardization
  7. NIST. “Transition to Post-Quantum Cryptography Standards.” NISTIR 8547 (Draft), November 2024. https://csrc.nist.gov/pubs/ir/8547/ipd
  8. Kitaev, A. Yu. “Unpaired Majorana fermions in quantum wires.” Physics-Uspekhi 44, 131–136 (2001). https://doi.org/10.1070/1063-7869/44/10S/S29
  9. Shor, P. W. “Algorithms for quantum computation: discrete logarithms and factoring.” Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 124–134 (1994). https://doi.org/10.1109/SFCS.1994.365700
  10. QuTech — Delft University of Technology. Quantum Research Institute. https://qutech.nl/
  11. Instituto de Ciencia de Materiales de Madrid (ICMM-CSIC). https://www.icmm.csic.es/

Author

GRIDNET