So, "verifiable" here means "we ran it twice and got the same result"?
> Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result.
Normally, where I come from anyway, verifiability would refer to the ability to prove to a classical skeptic, in a strong theoretical sense, that the quantum device did what it's supposed to; cf. e.g. Mahadev (https://arxiv.org/abs/1804.01082) and Aaronson (https://arxiv.org/abs/2209.06930). And that's indeed relevant in the context of proving advantage: the earlier RCS experiments lacked that ability, so “demonstrating verifiable quantum advantage” would be quite the step forward. That doesn't appear to be what they did at all, though. Indeed, the paper appears to barely touch on verifiability at all. And, unlike the press release, it doesn't claim to achieve advantage either; only to indicate “a viable path towards” it.
It is not very clear from the text, and from what I can tell there is no "verifiability" concept in the papers they link.
I think what they are trying to do is to contrast this with previous quantum advantage experiments in the following sense.
The previous experiments involve sampling from some distribution which is believed to be classically hard. However, it is a non-trivial question whether you succeed or fail at this task: even having a perfect sampler from the same distribution won't allow you to easily verify the samples.
On the other hand, these experiments involve measuring some observable, i.e., the output is just a number, and you can compare it to the value obtained in a different way (on a different or the same computer, or even on some analog experimental system).
Note that these observables are expectation values over the samples, but in the previous experiments, since the circuits are random, all the expectation values are very close to zero and it is impossible to actually resolve them from the experiment.
Disclaimer: this is my speculation about what they mean because they didn't explain it anywhere from what I can see.
At least they claim that: «Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result. This repeatable, beyond-classical computation is the basis for scalable verification.» (emph. mine)
But apparently they haven't demonstrated the actual portability between two different quantum computers.
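To put a number on the resolution point above, here is a rough numpy sketch (my own toy figures, not anything from the paper) of why a structured observable of order 0.1 is easy to resolve from shots, while a random-circuit expectation value of order 2^-20 drowns in shot noise:

    import numpy as np

    rng = np.random.default_rng(0)
    shots = 100_000

    def estimate(mean, shots):
        # Measure a +/-1 observable whose true expectation is `mean`:
        # each shot returns +1 with probability (1 + mean) / 2.
        outcomes = rng.choice([1, -1], size=shots, p=[(1 + mean) / 2, (1 - mean) / 2])
        return outcomes.mean(), outcomes.std(ddof=1) / np.sqrt(shots)

    # Structured circuit: an O(0.1) signal is easily resolved at this shot count.
    print(estimate(0.1, shots))
    # Random circuit on ~40 qubits: a signal of order 2**-20 is buried in ~0.003 shot noise.
    print(estimate(2.0 ** -20, shots))

Resolving the second case would take on the order of 2^40 shots, which is the practical sense in which random-circuit expectation values can't be checked this way.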
A key result here is the first demonstration of quantum supremacy; from TFA
> This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers.
"surpassing even the fastest classical supercomputers (13,000x faster)"
"Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result."
"The results on our quantum computer matched those of traditional NMR, and revealed information not usually available from NMR, which is a crucial validation of our approach."
It certainly seems like this time, there finally is a real advantage?
I’ve only skimmed the paper but it seems like the “information not usually available” from NMR is the Jacobian and Hessian of the Hamiltonian of the system.
So basically you’re able to go directly from running the quantum experiment to being able to simulate the dynamics of the underlying system, because the Jacobian and Hessian are the first and second partial derivatives of the system with respect to all of its parameters in matrix form.
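For anyone who wants "first and second partial derivatives with respect to all parameters, in matrix form" made concrete, here is a generic finite-difference sketch; the function f and its parameters are placeholders of my own, not the quantity Google actually extracts:

    import numpy as np

    def f(theta):
        # Stand-in for "some property of the system as a function of its parameters",
        # e.g. an energy or an echo amplitude. Not the actual observable in the paper.
        return np.sin(theta[0]) * np.cos(theta[1]) + 0.5 * theta[0] * theta[1]

    def jacobian(f, theta, eps=1e-5):
        # First partial derivatives: one entry per parameter.
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
        return grad

    def hessian(f, theta, eps=1e-4):
        # Second partial derivatives: an n x n matrix.
        n = len(theta)
        H = np.zeros((n, n))
        for i in range(n):
            e = np.zeros_like(theta)
            e[i] = eps
            H[i] = (jacobian(f, theta + e, eps) - jacobian(f, theta - e, eps)) / (2 * eps)
        return H

    theta = np.array([0.3, 1.2])
    print(jacobian(f, theta))
    print(hessian(f, theta))  # symmetric, as expected for a smooth f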
This is quite different from their previous random circuit sampling (RCS) experiments that have made headlines a few times in the past. The key difference from an applied standpoint is that the output of RCS is a random bitstring which is different every time you run the algorithm. These bitstrings are not reproducible, and also not particularly interesting, except for the fact that only a quantum computer can generate them efficiently.
The new experiment generates the same result every time you run it (after a small amount of averaging). It also involves running a much more structured circuit (as opposed to a random circuit), so all-in-all, the result is much more 'under control.'
As a cherry on top, the output has some connection to molecular spectroscopy. It still isn't that useful at this scale, but it is much more like the kind of thing you would hope to use a quantum computer for someday (and certainly more useful than generating random bitstrings).
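A toy way to see the repeatability contrast (purely illustrative numbers, not the actual experiments): bitstring samples from one run share essentially nothing with the next run, while an averaged observable converges to the same value every time.

    import numpy as np

    rng = np.random.default_rng()

    def rcs_run(n_qubits=6, shots=5):
        # Caricature of random circuit sampling: each run is a fresh batch of bitstrings.
        return [''.join(rng.choice(['0', '1'], size=n_qubits)) for _ in range(shots)]

    def observable_run(true_value=0.42, shots=20_000):
        # Caricature of the new kind of experiment: average many +/-1 shots of one observable.
        p_plus = (1 + true_value) / 2
        return rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus]).mean()

    print(rcs_run(), rcs_run())                # two runs: different bitstrings
    print(observable_run(), observable_run())  # two runs: agree to within ~0.01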
This is not the RCS problem or indeed anything from number theory.
The announcement is about an algorithm which they are calling Quantum Echoes, where you set up the experiment, perturb one of the qubits and observe the “echoes” through the rest of the system.
They use it to replicate a classical chemistry experiment done using nuclear magnetic resonance (NMR) spectroscopy. They say they are able to reproduce the results of that conventional experiment and gather additional data which is unavailable via conventional means.
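A tiny numpy cartoon of that forward-perturb-backward "echo" idea, on a 4-qubit toy system with a random unitary standing in for the circuit (this is my own illustration of the general out-of-time-order idea, not the actual Quantum Echoes protocol or its observable):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4                       # 4 qubits -> 16-dimensional Hilbert space
    dim = 2 ** n

    # Random unitary standing in for the scrambling circuit U.
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    U, _ = np.linalg.qr(A)

    # Perturbation on one qubit only: Pauli X on the first qubit, identity elsewhere.
    X = np.array([[0, 1], [1, 0]])
    B = np.kron(X, np.eye(dim // 2))

    psi0 = np.zeros(dim)
    psi0[0] = 1.0               # |000...0>

    # Echo: evolve forward, kick one qubit, evolve backward, compare with the start.
    psi = U.conj().T @ (B @ (U @ psi0))
    echo = abs(np.vdot(psi0, psi)) ** 2
    print(echo)   # with B = identity this would be exactly 1; the perturbation's imprint
                  # spreads through the system and suppresses the echo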
My understanding is that this one is "verifiable", which means you get a reproducible result (i.e. a consistent result comes out of a computation that would take much longer to do classically).
Non-verifiable computations include things like pulling from a hard-to-compute probability distribution (i.e. random number generator) where it is faster, but the result is inherently not the same each time.
This is what happens when companies are driven by profit rather than by making the accurate scientific statements that reputations are built on and further research funding is predicated on.
Hyperbolic claims like this are for shareholders who aren't qualified to judge for themselves because they're interested in future money and not actual understanding. This is what happens when you delegate science to corporations.
The article states: “...13,000 times faster on Willow than the best classical algorithm on one of the world’s fastest supercomputers...”
I agree it's not very precise without knowing which of the world's fastest supercomputers they're talking about, but there was no need to leave out this tidbit.
Afaik we are a decade or two away from quantum supremacy. All the AI monks forget that if AI is the future, quantum supremacy is the present. And whoever controls the present decides the future.
Remember, it is not about general quantum computing; it's about implementing the quantum computation of Shor's algorithm.
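For anyone who hasn't seen the division of labor in Shor's algorithm, it's easy to show on a toy modulus (here I brute-force the period classically; the quantum part is exactly that period-finding step, which is what becomes intractable classically at real RSA sizes):

    from math import gcd

    N, a = 15, 7   # toy modulus, obviously nothing like a real RSA key

    # Find the period r of a^x mod N. This is the step a quantum computer does
    # efficiently; brute force is fine only because N is tiny.
    r = next(r for r in range(1, N) if pow(a, r, N) == 1)

    print(r)                                # 4
    print(gcd(pow(a, r // 2) - 1, N),       # 3
          gcd(pow(a, r // 2) + 1, N))       # 5  -> 3 * 5 = 15, factoring done classically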
But much like AI hype, quantum hype is also way overplayed. Yes, modern asymmetric encryption will be less secure, but even after you have quantum computers that can run Shor's algorithm, it might be a while before there are quantum computers affordable enough for it to be an actual threat (i.e. cheaper than just buying a zero-day for the target's phone or something).
But since we already have post-quantum algorithms, the end state of cheap quantum computers is just a new equilibrium where people use the new algorithms and those can't be directly cracked. It's basically the same as today, except maybe you can decrypt historical traffic, but who knows if that's worth it.
I was mostly talking about state actors buying quantum computers built just to run that particular algorithm and using them to snoop on poorer countries who cannot afford such tech. Plus, all countries will have plenty of systems that have not been quantum-proofed. The window from when such tech becomes affordable to one state actor until most state actors have access is likely to be long, especially since no one will admit to possessing it. And unlike nuclear weapons, it is much harder to prove whether or not someone has one.
The last time I heard similar news from Google, it turned out they were solving a quantum phenomenon using a quantum phenomenon. It seems to be the same pattern here. Not to say it's not progress, but it kind of feels overhyped.
Idk. I get that this is the median take across many comments, and I don't mean to be disagreeable with a crowd. But I don't know why using quantum phenomena is a sign something's off. It's a quantum computer! Still, I know something is off with this take if that didn't strike you the same way.
To me, it matters because it's a sign that it might not be particularly transferable as a method of computation.
A wind tunnel is a great tool for solving aerodynamics and fluid flow problems, more efficiently than a typical computer. But we don't call it a wind-computer, because it's not a useful tool outside of that narrow domain.
The promise of quantum computing is that it can solve useful problems outside the quantum realm - like breaking traditional encryption.
Good point. I guess that's why I find this comments section boring and not representative of the HN I've known for 16 years: there's a sort of half-remembering that it wasn't powerful enough to do something plainly and obviously useful yesterday.
Then we ignore today and launder that into a gish-gallop of free association, torturing the meaning of words to shoehorn in the idea that all the science has it wrong and, inter alia, that the quantum computer uses quantum phenomena to compute, so it might be a fake, useless computer, like a wind tunnel. Shrugs.
It's a really unpleasant thing to read, reminds me of the local art school dropout hanging on my ear about crypto at the bar at 3 am in 2013.
I get that's all people have to reach for, but personally, I'd rather not inflict my free-association on the world when I'm aware I'm half-understanding, fixated on the past when discussing something current, and I can't explain the idea I have as something concrete and understandable even when I'm using technical terms.
> Quantum computing-enhanced NMR could become a powerful tool in drug discovery, helping determine how potential medicines bind to their targets, or in materials science for characterizing the molecular structure of new materials like polymers, battery components or even the materials that comprise our quantum bits (qubits)
There is a section in the article about future real world application, but I feel like these articles about quantum "breakthroughs" are almost always deliberately packed with abstruse language. As a result I have no sense about whether these suggested real world applications are a few years away or 50+ years away. Does anyone?
The rule of thumb is that a working quantum computer that can run Grover's algorithm reduces the security of a symmetric cipher to half of its key size. That is, AES-128 should be considered to have a 64 bit key size, which is why it's not considered "quantum-safe."
Edit: An effective key space of 2^64 is not secure according to modern-day standards. It was secure in the days of DES.
AES-128 is quantum safe (more or less). 64-bit security in the classical domain isn't safe because you can parallelize across 2^20 computers trivially. Grover gives you 2^64 serial AES operations on a quantum computer (probably ~2^70 gates or so before error correction, or ~2^90 after), and those can't be parallelized efficiently. AES-128 is secure for the next century (but you might as well switch to AES-256 because why not).
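Some back-of-the-envelope arithmetic behind that claim; the rates are made-up round numbers of mine, just to show the orders of magnitude:

    # Why 2^64 classical work is dead but 2^64 serial Grover iterations is not.
    classical_keys_per_sec_per_core = 1e9      # assumed ~1 billion key trials/sec/core
    cores = 2 ** 20                            # a large but plausible classical cluster
    classical_seconds = 2 ** 64 / (classical_keys_per_sec_per_core * cores)
    print(classical_seconds / 86400)           # ~0.2 days

    grover_iterations = 2 ** 64                # ~sqrt(2^128) sequential iterations
    quantum_ops_per_sec = 1e6                  # generous assumed logical-operation rate
    grover_seconds = grover_iterations / quantum_ops_per_sec
    print(grover_seconds / (86400 * 365))      # ~585,000 years on one machine

    # Parallel Grover over k machines only cuts the time by sqrt(k):
    k = 2 ** 20
    print(grover_seconds / k ** 0.5 / (86400 * 365))   # still ~570 years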
Before the mega monopolies took over, corps used to partner with universities to conduct this kind of research. Now we have bloated salaries, rich corporations, and expensive research while having under experienced graduates. These costs will get forwarded to the consumer. The future won’t have a lot of things that we have come to expect.
> in partnership with The University of California, Berkeley, we ran the Quantum Echoes algorithm on our Willow chip...
And the author affiliations in the Nature paper include:
Princeton University; UC Berkeley; University of Massachusetts, Amherst; Caltech; Harvard; UC Santa Barbara; University of Connecticut; MIT; UC Riverside; Dartmouth College; Max Planck Institute.
This is very much in partnership with universities and they clearly state that too.
I don't think it's accurate to attribute some kind of altruism to these research universities. Have a look at some of those pay packages or the literal hedge funds that they operate. And they're mostly exempt from taxation.
Another response is to come to terms with a possibly meaningless and Sisyphean reality and to keep pushing the boulder (that you care about) up the hill anyway.
I’m glad the poster is concerned and/or disillusioned about the hype, hyperbole and deception associated with this type of research. It suggests he still cares.
I don't disagree, but these days I'm happy to see any advanced research at all.
Granted, too often I see the world through HN-colored glasses, but it seems like so many technological achievements are variations on getting people addicted to something in order to show them ads.
Did Bellcore or Xerox PARC do a lot of university partnerships? I was into other things in those days.
In the last sentence of the abstract (https://www.nature.com/articles/s41586-025-09526-6) you will find:
"These results ... indicate a viable path to practical quantum advantage."
And in the conclusions:
"Although the random circuits used in the dynamic learning demonstration remain a toy model for Hamiltonians that are of practical relevance, the scheme is readily applicable to real physical systems."
So the press release is a little over-hyped. But this is real progress nonetheless (assuming the results actually hold up).
[UPDATE] It should be noted that this is still a very long way away from cracking RSA. That requires quantum error correction, which this work doesn't address at all. This work is in a completely different regime of quantum computing, looking for practical applications that use a quantum computer to simulate a physical quantum system faster than a classical computer can. The hardware improvements that produced progress in this area might be applicable to QEC some day, but this is not direct progress towards implementing Shor's algorithm. So your crypto is still safe for the time being.
An MBA, an engineer and a quantum computing physicist check into a hotel. Middle of the night, a small fire starts up on their floor.
The MBA wakes up, sees the fire, sees a fire extinguisher in the corner of the room, empties the fire extinguisher to put out the fire, then goes back to sleep.
The engineer wakes up, sees the fire, sees the fire extinguisher, estimates the extent of the fire, determines the exact amount of foam required to put it out including a reasonable tolerance, dispenses exactly that amount to put out the fire, and then, satisfied that there is enough left in case of another fire, goes back to sleep.
The quantum computing physicist wakes up, sees the fire, observes the fire extinguisher, determines that there is a viable path to practical fire extinguishment, and goes back to sleep.
Not quite sure why all the responses here are so cynical. I mean, it's a genuinely difficult set of problems, so of course the first steps will be small. Today's computers are the result of 80 astonishing years of sustained innovation by millions of brilliant people.
Even as a Googler I can find plenty of reasons to be cynical about Google (many involving AI), but the quantum computing research lab is not one of them. It's actual scientific research, funded (I assume) mostly out of advertising dollars, and it's not building something socially problematic. So why all the grief?
Quantum computing hardware is still in its infancy.
The problem is not with these papers (or at least not with ones like this one) but with how they are reported. If quantum computing is going to succeed it needs to take the baby steps before it can take the big steps, and at the current rate the big leaps are probably decades away. There is nothing wrong with that; it's a hard problem and it's going to take time. But then the press comes in and reports that quantum computing is going to run a marathon tomorrow, which is obviously not true and confuses everyone.
No, I don't think so. By the time quantum supremacy is really achieved for a "Q-Day" that could affect them or things like them, the existing blockchains which have already been getting hardened will have gotten even harder. Quantum computing could be used to further harden them, as well, rather than compromise them.
Supposing that Q-Day brought any temporary hurdles to Bitcoin or Ethereum or related blockchains, well...due to their underlying nature resulting in justified Permanence, we would be able to simply reconstitute and redeploy them for their functionalities because they've already been sufficiently imbued with value and institutional interest as well. These are quantum-resistant hardenings.
So I do not think these tools or economic substrate layers are going anywhere. They are very valuable for the particular kinds of applications that can be built with them and also as additional productive layers to the credit and liquidity markets nationally, internationally, and also globally/universally.
So there is a lot of institutional interest, including governance interest, in using them to build better systems. Bitcoin on its own would be reduced in such justification but because of Ethereum's function as an engine which can drive utility, the two together are a formidable and quantum-resistant platform that can scale into the hundreds of trillions of dollars and in Ethereum's case...certainly beyond $1Q in time.
I'm very bullish on the underlying technology, even beyond tokenomics for any particular project. The underlying technologies are powerful protocols that facilitate the development and deployment of Non Zero Sum systems at scale. With Q-Day not expected until end of 2020s or beginning of 2030s, that is a considerable amount of time (in the tech world) to lay the ground work for further hardening and discussions around this.
I don’t see why Bitcoin wouldn’t update its software in such a case. The majority of miners just need to agree. But why wouldn’t they, if the alternative is going to zero?
How could updating the software possibly make a difference here? If the encryption is cracked, then who is to say who owns which Bitcoin? As soon as I try to transfer any coin that I own, I expose my public key, your "Quantum Computer" cracks it, and you offer a competing transaction with a higher fee to send the Bitcoin to your slush fund.
No amount of software fixes can update this. In theory once an attack becomes feasible on the horizon they could update to post-quantum encryption and offer the ability to transfer from old-style addresses to new-style addresses, but this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Fortunately this will never actually happen. It's way more likely that ECDSA is broken by mundane means (better stochastic approaches most likely) than quantum computing being a factor.
> this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Any rational economic actor would participate in a post-quantum hard fork because the alternative is losing all their money.
If this was a company with a $2 trillion market cap there'd be no question they'd move heaven-and-earth to prevent the stock from going to zero.
Y2K only cost $500 billion [1] adjusted for inflation, and that required updating essentially every computer on Earth.

[1] https://en.wikipedia.org/wiki/Year_2000_problem#Cost
> would require all holders (not miners) to actively update their wallets. Basically infeasible.
It doesn't require all holders to update their wallets. Some people would fail to do so and lose their money. That doesn't mean the rest of the network can't do anything to save themselves. Most people use hosted wallets like Coinbase these days anyway, and Coinbase would certainly be on top of things.
Also, you don't need to break ECDSA to break BTC. You could also do it by breaking mining. The block header has a 32-bit nonce at the very end. My brain is too smooth to know how realistic this actually is, but perhaps someone could use a QC to perform the final step of SHA-256 on all 2^32 possible values of the nonce at once, giving them an insurmountable advantage in mining. If only a single party has that advantage, it breaks the Nash equilibrium.
But if multiple parties have that advantage, I suppose BTC could survive until someone breaks ECDSA. All those mining ASICs would become worthless, though.
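For what it's worth, a Grover-style search doesn't try "all 2^32 values at once"; it gives a quadratic cut in the number of (reversible) hash evaluations. A rough sketch of the accounting, assuming a single matching nonce and ignoring the enormous error-correction and reversible-SHA-256 overheads:

    from math import pi, sqrt

    search_space = 2 ** 32
    classical_hashes = search_space / 2                  # expected classical trials
    grover_iterations = (pi / 4) * sqrt(search_space)    # Grover oracle calls

    print(classical_hashes)      # ~2.1e9 double-SHA-256 evaluations
    print(grover_iterations)     # ~5.1e4 reversible double-SHA-256 evaluations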
Firstly I'd want to see them hash the whole blockchain (not just the last block) with the post-quantum algo to make sure history is intact.
But as far as moving balances - it's up to the owners. It would start with anybody holding a balance high enough to make it worth the amount of money it would take to crack a single key. That cracking price will go down, and the value of BTC may go up. People can move over time as they see fit.
As you alluded to, the network can have two parallel chains where wallets can be upgraded by users asynchronously before PQC is “needed” (a long way away still), which will leave some wallets vulnerable and others safe. It’s not that herculean, as most wallets (not most BTC) are in exchanges. The whales will be sufficiently motivated to switch, and for everyone else it will happen in the background.
A nice benefit is that it solves the problem with Satoshi’s wallet (of course not a real person or owner). Satoshi’s wallet becomes the de facto quantum advantage prize. That’s a lot of scratch for a research lab.
The problem is that the owner needs to claim their wallet and migrate it to the new encryption. Just freezing the state at a specific moment doesn't help; to claim the wallet in the new system I just need the private key for the old wallet (as that's the sole way to prove ownership). In our hypothetical post-quantum scenario, anyone with a quantum computer can get the private key and migrate the wallet, becoming the de-facto new owner.
I think this is all overhyped though. It seems likely we will have plenty of warning to migrate prior to achieving big enough quantum computers to steal wallets. Per wikipedia:
> The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates.
IIRC this is speculated to be the reason ECDSA was selected for Bitcoin in the first place.
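Translating that quoted estimate into wall-clock time requires assuming a logical gate rate and an error-correction overhead, so the numbers below are entirely illustrative guesses, not anything from the cited source:

    toffolis = 126e9                 # quoted Toffoli count for a 256-bit curve
    logical_qubits = 2330            # quoted logical qubit count

    toffoli_rate = 1e4               # assumed logical Toffolis per second
    print(toffolis / toffoli_rate / 3600)   # ~3,500 hours of uninterrupted runtime

    physical_per_logical = 1000      # assumed error-correction overhead
    print(logical_qubits * physical_per_logical)   # ~2.3 million physical qubits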
The problem is all the lost BTC wallets, which are speculated to be numerous and also one of the biggest reasons for the current BTC price, and which obviously cannot be upgraded to PQ. There is currently a radical proposal to essentially make all those lost wallets worthless unless they migrate [1].

[1] https://github.com/jlopp/bips/blob/quantum_migration/bip-pos...
No, not really. PQC has already been discussed in pretty much every relevant crypto context for a couple of years, and there are multiple PQC algorithms ready to protect important data in banking etc. as well.
I don’t really understand the threat to banking. Let’s say you crack the encryption key used in my bank between a Java payment processing system and a database server. You can’t just inject transactions or something. Is the threat that internal network traffic could be read? Transactions all go to clearing houses anyway. Is it to protect browser->webapp style banking? Those all use EC by now anyway, and even if they don’t, how do you MITM this traffic?
As far as I am aware, elliptic curve is also vulnerable to quantum attacks.
The threat is generally both passive eavesdropping to decrypt later and also active MITM attacks. Both of course require the attacker to be in a position to eavesdrop.
> Let’s say you crack the encryption key used in my bank between a java payment processing system and a database server.
Well if you are sitting in the right place on the network then you can.
> how do you mitm this traffic?
Depends on the scenario. If you are a government or an ISP then it's easy. Otherwise it might be difficult. Typical real-life scenarios are when the victim is using wifi and the attacker is in the physical vicinity.
Like all things crypto, it always depends on context. What information are you trying to protect, and who are you trying to protect it from?
All that said, people are already experimenting with PQC so it might mostly be moot by the time a quantum computer comes around. On the other hand people are still using md5 so legacy will bite.
> Well if you are sitting in the right place on the network then you can.
Not really. This would be caught, if not instantly then when a batch goes for clearing or reconciliation -- and an investigation would be started immediately.
There are safeguards against this kind of thing that can't be really defeated by breaking some crypto. We have to protect against malicious employees etc also.
One cannot simply insert bank transactions like this. These are really extremely complicated flows.
Sure, if a bank gets compromised you could in theory DoS a clearing house, but I'd be completely amazed if it succeeded. Those kinds of anomalous spikes would be detected quickly. And that's not even considering that each bank probably has dedicated instances inside each clearing house.
These are fairly robust systems. You'd likely have a much bigger impact DoSing the banks.
Okay, but breaking that TLS (device->bank) would allow you to intercept the session keys and then decrypt the conversation. Alright, so now you can read that I logged in and booked a transaction to my landlord or whatever. What else can you do? The OTP/2FA code prevents you from re-using my credentials. Has it been demonstrated at all that someone who intercepts a session key is able to somehow inject into a conversation? It seems highly unlikely to me with TCP over the internet.
So we are all in a collective flap that someone can see my bank transactions? These are pretty much public knowledge to governments/central banks/clearing houses anyway -- doesn't seem like all that big a deal to me.
(I work on payment processing systems for a large bank)
> Has it been demonstrated at all that someone who intercepts a session key is able to somehow inject into a conversation? It seems highly unlikely to me with TCP over the internet.
If you can read the TLS session in general, you can capture the TLS session ticket and then use that to make a subsequent connection. This is easier, as you don't have to inject packets live or make inconvenient packets disappear.
It seems like detecting a re-use like this should be reasonably easy; it would not look like normal traffic, and we could flag it to our surveillance systems for additional checks on those transactions. In a post-quantum world, this seems like something that would be everywhere anyway (and presumably we would be using some other algo by then too).
Somehow, I'm not all that scared. Perhaps I'm naive.. :}
If quantum computers crack digital cryptography, traditional bank accounts go to zero too, because regular ol' databases also use cryptographic techniques for communication.
If all else fails, banks can generate terabytes of random one-time pad bytes, and then physically transport those on tape to other banks to set up provably secure communication channels that still go over the internet.
It would be a pain to manage but it would be safe from quantum computing.
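The math side of that scheme really is just XOR with never-reused pad bytes; all the difficulty is in shipping, synchronizing and destroying the tapes. A minimal sketch (the message and pad size are placeholders):

    import os

    def xor(data: bytes, pad: bytes) -> bytes:
        # One-time pad: XOR with pad bytes that are used once and then discarded.
        assert len(pad) >= len(data), "pad exhausted - never reuse pad bytes"
        return bytes(a ^ b for a, b in zip(data, pad))

    pad = os.urandom(4096)                    # pretend this came off the shipped tape
    msg = b"wire transfer: 1000 EUR to account 123"

    ciphertext = xor(msg, pad)                # sending bank
    recovered = xor(ciphertext, pad)          # receiving bank, identical tape, same offset
    print(recovered == msg)                   # True; secure only if the pad is never reused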
Aaronson isn't a cartoonist. It was an AI cartoon from ChatGPT that an antisemite sent Aaronson in the mail, which he then seemingly maliciously misattributed to Woit, making people assume Aaronson went batshit.
Aaronson did work at OpenAI but not on image generation, maybe you could argue the OpenAI safety team he worked on should be involved here but I'm pretty sure image generation was after his time, and even if he did work directly on image generation under NDA or something, attributing that cartoon to Aaronson would be like attributing a cartoon made in Photoshop by an antisemite to a random Photoshop programmer, unless he maliciously added antisemitic images to the training data or something.
The most charitable interpretation, which I think Aaronson has also offered, is that Aaronson believed Woit was an antisemite because of a genocidal chain of events that, in Aaronson's belief, would necessarily happen with a democratic solution; that even if Woit didn't believe that would be the consequence, or believed in democracy deontologically and thought the UN could step in under the genocide convention if any genocide began to be at risk of unfolding, Woit's intent could be dismissed; and that Woit could therefore somehow be lumped in with the antisemite who sent Aaronson the image.
Aaronson's stated belief also is that any claim that Israel has been committing a genocide in the last few years is a blood libel, because he believes the population of Gaza is increasing and it can't be a genocide unless there is a population decrease during the course of it. This view of Aaronson's would imply things like: if every male in Gaza were sterilized, and the UN stepped in and stopped it as a genocide, it would be a blood libel to call that a genocide so long as the population didn't decrease during the course of it, even if it did decrease afterwards. But maybe he would clarify that it could include decreases that happen as a delayed effect of the actions. These kinds of strong beliefs about blood libel are, I think, part of why he felt OK labeling the comic with Woit's name.
I also don't think that, if the population does go down or has been going down, he will say it was from a genocide; rather, he will say that populations can go down from war. He's only proposing that a population decrease is a necessary criterion of genocide, not a sufficient one.
Classic HN, always downvoting every comment concerning decentralized and p2p currencies. How do you like your centralized, no-privacy, mass-surveilled, worthless banking system?
I don't consider the current system to be worthless. In fact, it functions remarkably well. There is certainly room for additional substrate layers though, and Bitcoin being digital or electronic gold and Ethereum being an e-steam engine or e-computer make for a powerful combination for applications together. I agree that the crowd here has historically not understood, or wanted to understand, the underlying protocols and what is possible. A bizarre kind of hubris perhaps, or maybe just a response to how the first iterations of a web2.5 or web3.0 were...admittedly more mired in a kind of marketing hype that was not as reflective of what is possible and sustainable in the space due to there not being realistic web and engineering muscle at the forefront of the hype.
I think this current cycle is going to change that though. The kinds of projects spinning up are truly massive, innovative, and interesting. Stay tuned!
What is it about then? I'm not spreading propaganda - I'm maximally truth seeking and approaching things from a technical/economic/governance point of view and not an ideological one per se. Though ideology shapes everything, what I mean is that I'm not ideologically predisposed towards a conclusion on things. For me what matters is the core, truthy aspects of a given subject.
The big problem with quantum advantage is that quantum computing is inherently error-prone and stochastic, but then they compare to classical methods that are exact.
Let a classical computer use an error-prone stochastic method and it still blows the doors off of QC.
This is a false comparison. Stochasticity (randomness) is pervasively used in the classical algorithms that one compares to. That is nothing new and has always been part of the comparisons.
"Error prone" hardware is not "a stochastic resource". Error prone hardware does not provide any value to computation.
Notice how they say "quantum advantage", not "supremacy", and "a (big) step toward real-world applications". So actually just another step, as always. And I'm left to doubt whether the classical algorithm used for comparison was properly optimised.
I skimmed the paper so I might have missed something, but IIUC there is no classical algorithm actually run for comparison. They did not run some well-defined algorithm on benchmark instances; they estimated the cost of simulating the circuits "through tensor network contraction" - I quote here not to scare but because this is where my expertise runs out.
> This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers.
The idea: Quantum Computation of Molecular Structure Using Data from Challenging-To-Classically-Simulate Nuclear Magnetic Resonance Experiments https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
Verifying the result on another quantum computer (it hasn't been done yet): Observation of constructive interference at the edge of quantum ergodicity https://www.nature.com/articles/s41586-025-09526-6
As has happened many times in the past, quantum supremacy was claimed, and then other groups showed they could do better with optimized classical methods.
"Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result."
"The results on our quantum computer matched those of traditional NMR, and revealed information not usually available from NMR, which is a crucial validation of our approach."
It certainly seems like this time, there finally is a real advantage?
So basically you’re able to go directly from running the quantum experiment to being able to simulate the dynamics of the underlying system, because the Jacobian and Hessian are the first and second partial derivatives of the system with respect to all of its parameters in matrix form.
My impression was that every problem a quantum computer solves in practice right now is basically reducible from 'simulate a quantum computer'
Am I crazy or have I heard this same announcement from Google and others like 5 times at this point?
"These results ... indicate a viable path to practical quantum advantage."
I'll add this to my list of useful phrases.
Q: Hey AndrewStephens, you promised that task would be completed two days ago. Can you finish it today?
A: Results indicate a viable path to success.
I'm pretty reluctant to make any negative comments about these kinds of posts because it will prevent actually achieving the desired outcome.
The current situation with "AI" took off because people learned their lessons from the last round of funding cuts (the "AI winter").
That being said, any pushback against funding quantum research would be like chopping your own hands off.
This paper on verifiable advantage [1] is a lot more compelling, with Scott Aaronson and Quantinuum among other great researchers:
[1] https://eprint.iacr.org/2025/1237
https://scottaaronson.blog/?p=9098
> guywithahat [...] I'll be waiting for Scott Adams to tell me what to think about this
Scott Adams
Text adventure guy: https://en.wikipedia.org/wiki/Scott_Adams_(game_designer)
Batshit cartoonist: https://en.wikipedia.org/wiki/Scott_Adams
(also, for fun, a cartoon by Scott Aaronson and Zack Weinersmith: https://www.smbc-comics.com/comic/the-talk-3)