IBM paper and blog post responding to Google's announcement

When Google's quantum supremacy paper leaked a month ago—not through Google's error, but through NASA's—I had a hard time figuring out how to cover the news here. I had to say something; on the other hand, I wanted to avoid any detailed technical analysis of the leaked paper, because I was acutely aware that my colleagues at Google were still barred by Nature's embargo rules from publicly responding to anything I or others said. (I was also one of the reviewers for the Nature paper, which put additional obligations on me.)

I ended up with Scott's Supreme Quantum Supremacy FAQ, which tried to toe this impossible line by "answering general questions about quantum supremacy, and the consequences of its still-hypothetical achievement, in light of the leak." It wasn't an ideal solution—for one thing, because while I still regard Google's sampling experiment as a historic milestone for our whole field, there are some technical issues: aspects that subsequent experiments (hopefully coming soon) will need to improve. Alas, the ground rules of my FAQ forced me to avoid such issues, which caused some readers to conclude, mistakenly, that I didn't think there were any.

Now, though, the Google paper has come out as Nature's cover story, at the same time as there have been new technical developments—most obviously, the paper from IBM (see also their blog post) saying that they could simulate the Google experiment in 2.5 days, rather than the 10,000 years that Google had estimated.

(Yesterday I was deluged by emails asking me "whether I'd seen" IBM's paper. As a science blogger, I try to respond to stuff pretty quickly when necessary, but I don't—can't—respond in Twitter time.)

So now the gloves are off. No more embargo. Time to address the technical stuff under the hood—which is the purpose of this post.

I'm going to assume, from this point on, that you already understand the basics of sampling-based quantum supremacy experiments, and that I don't need to correct beginner-level misconceptions about what the term "quantum supremacy" does and doesn't mean (no, it doesn't mean scalability, fault-tolerance, useful applications, breaking public-key crypto, etc. etc.). If this is not the case, you could start (e.g.) with my FAQ, or with John Preskill's excellent Quanta commentary.

Without further ado:

(1) So what about that IBM thing? Are random quantum circuits easy to simulate classically?

OK, so let's carefully spell out what the IBM paper says. They argue that, by commandeering the full attention of Summit at Oak Ridge National Lab—the most powerful supercomputer that currently exists on Earth, one that fills the area of two basketball courts and (crucially) has 250 petabytes of hard disk space—one could just barely store the entire quantum state vector of Google's 53-qubit Sycamore chip on hard disk. And once one had done that, one could simulate the chip in ~2.5 days, more or less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen's "Schrödinger-Feynman algorithm" (which can get by with less memory).

The IBM group understandably hasn't actually done this yet—even though IBM built it, the world's #1 supercomputer isn't just sitting around waiting for jobs! But I see little reason to doubt that their analysis is basically right. I don't know why the Google team didn't consider how such near-astronomical hard disk space would change their calculations; presumably they wish they had.

I find this to be much, much better than IBM's initial reaction to the Google leak, which was simply to dismiss the importance of quantum supremacy as a milestone. Designing better classical simulations is precisely how IBM and others should respond to Google's announcement, and how I said a month ago that I hoped they would respond. If we set aside the pass-the-popcorn PR war (or even if we don't), this is how science progresses.

But does IBM's analysis mean that "quantum supremacy" hasn't been achieved? No, it doesn't—at least, not under any definition of "quantum supremacy" that I've ever used. Recall that the Sycamore chip took about 3 minutes to generate enough samples to pass the "linear cross-entropy benchmark," the statistical test that Google applies to the outputs of its device. Three minutes versus 2.5 days is still a quantum speedup by a factor of 1200. More relevant, I think, is to compare the number of "elementary operations." Let's generously count a FLOP (floating-point operation) as the equivalent of a quantum gate. Then by my estimate, we're comparing ~5×10^9 quantum gates against ~2×10^20 FLOPs—a quantum speedup by a factor of ~40 billion.

Even that, though, is arguably not the right comparison. As far as we know today, the Summit supercomputer would need the full 2.5 days, or 2×10^20 FLOPs, even to generate one sample from the output distribution of an ideal, randomly-generated 53-qubit quantum circuit. Google's Sycamore chip needs about 40 microseconds to generate each sample. Now admittedly, the samples are extremely noisy: indeed, they're drawn from a probability distribution that looks like 0.998U + 0.002D, where U is the uniform distribution over 53-bit strings, and D is the hard distribution we care about. But notice that if we took 500 = 1/0.002 samples from this noisy distribution, then with high probability at least one of them would've been drawn from D. Sycamore takes 0.02 seconds to generate 500 samples—which, compared to 2.5 days, would imply a quantum speedup by a factor of ~11 million, or a factor of ~4×10^14 in "number of elementary operations."
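
To make the arithmetic in the last two paragraphs explicit, here's a quick Python sketch. Every input is one of the rough figures quoted above, except the ~10^3 gates per circuit execution, which is my own ballpark for a single depth-20 run on 53 qubits; treat all the outputs as order-of-magnitude estimates.

```python
# Rough reproduction of the speedup arithmetic above; every input is an
# approximate figure quoted in the text (except gates_per_run, my own ballpark),
# so the outputs are order-of-magnitude estimates only.

summit_time = 2.5 * 86_400                  # IBM's estimated ~2.5 days, in seconds

# Comparison 1: the whole sampling run, wall clock and elementary operations.
sycamore_run = 3 * 60                       # ~3 minutes to collect the samples
print(f"wall-clock speedup: ~{summit_time / sycamore_run:.0f}x")         # ~1200x

quantum_gates = 5e9                         # ~5*10^9 quantum gates
classical_flops = 2e20                      # ~2*10^20 FLOPs
print(f"operation speedup: ~{classical_flops / quantum_gates:.0e}x")     # ~4e10, i.e. ~40 billion

# Comparison 2: the "noisy samples" argument.
u, d = 0.998, 0.002                         # outputs drawn from ~ 0.998*U + 0.002*D
k = round(1 / d)                            # 500 samples
print(f"P(>=1 sample from D): {1 - u**k:.2f}")                           # ~0.63

sycamore_500 = k * 40e-6                    # 500 samples at ~40 microseconds each
print(f"wall-clock ratio: ~{summit_time / sycamore_500:.1e}")            # ~1.1e7, i.e. ~11 million

gates_per_run = 1e3                         # my ballpark for one depth-20, 53-qubit circuit
print(f"operation ratio: ~{classical_flops / (k * gates_per_run):.0e}")  # ~4e14
```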

For me, though, the broader point is that neither party here—certainly not IBM—denies that the top-supercomputers-on-the-planet-level difficulty of classically simulating Google's 53-qubit programmable chip really is coming from the exponential character of the quantum states in that chip, and nothing else. That's what makes this back-and-forth fundamentally different from the previous one between D-Wave and the people who sought to simulate its devices classically. The skeptics, like me, didn't much care what speedup over classical benchmarks there was or wasn't today: we cared about the increase in the speedup as D-Wave upgraded its hardware, and the trouble was we never saw a convincing case that there would be one. I'm a theoretical computer scientist, and this is what I believe: that after the constant factors have come and gone, what remains are asymptotic growth rates.

In the present case, while increasing the circuit depth won't evade IBM's "store everything to hard disk" strategy, increasing the number of qubits will. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit's 250-petabyte storage capacity. At 60 qubits, you'd need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea.
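
Here's where those numbers come from, as a quick sketch. I'm assuming 8 bytes per amplitude (single-precision complex) and binary petabytes (2^50 bytes); the exact figures depend on the precision IBM's proposal assumes, so treat this as illustrative rather than exact.

```python
# Storage needed to hold a full n-qubit state vector, assuming 8 bytes per
# amplitude (single-precision complex). IBM's proposal may assume a different
# precision, so these are rough illustrative numbers.

SUMMIT_DISK_BYTES = 250 * 2**50     # ~250 petabytes of disk on Summit

def state_vector_bytes(n_qubits, bytes_per_amplitude=8):
    return 2**n_qubits * bytes_per_amplitude

for n in (53, 55, 60, 70):
    need = state_vector_bytes(n)
    print(f"{n} qubits: {need / 2**50:,.0f} PB (~{need / SUMMIT_DISK_BYTES:,.1f} Summits)")

# 53 qubits:        64 PB  (~0.3 Summits)   -- just barely fits
# 55 qubits:       256 PB  (~1.0 Summits)   -- already over 250 PB
# 60 qubits:     8,192 PB  (~32.8 Summits)  -- roughly the "33 Summits" above
# 70 qubits: 8,388,608 PB  (~33,554 Summits)
```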

From the beginning, it was clear that quantum supremacy would not be a milestone like the moon landing—something that's achieved in a moment, and is then clear to everyone for all time. It would be more like eradicating measles: it could be achieved, then temporarily unachieved, then re-achieved. For by definition, quantum supremacy is all about beating something—namely, classical computation—and the latter can, at least for a while, fight back.

As Boaz Barak put it to me, the current contest between IBM and Google is analogous to Kasparov versus Deep Blue—except with the world-historic irony that IBM is playing the role of Kasparov! In other words, Kasparov can put up a heroic struggle, during a "transitional period" that lasts a year or two, but the fundamentals of the situation are that he's toast. If Kasparov had narrowly beaten Deep Blue in 1997, rather than narrowly losing, the whole public narrative would likely have been different ("humanity triumphs over computers after all!"). Yet as Kasparov himself well knew, the very fact that the contest was close meant that, either way, human dominance was ending.

Let me leave the last word on this to friend-of-the-blog Greg Kuperberg, who graciously gave me permission to quote his comments about the IBM paper.

I'm not entirely sure how embarrassed Google should feel that they overlooked this. I'm sure that they would have been happier to anticipate it, and happier still if they had put more qubits on their chip to defeat it. However, it doesn't change their real achievement.

I respect the IBM paper, even if the press along with it seems more grouchy than necessary. I tend to believe them that the Google team did not explore all avenues when they said that their 53 qubits aren't classically simulable. But if this is the best rebuttal, then you should still consider how much Google and IBM still agree on this as a proof-of-concept of QC. This is still quantum David vs classical Goliath, in the extreme. 53 qubits is in some ways still just 53 bits, only enhanced with quantum randomness. To answer those 53 qubits, IBM would still need entire days of computer time with the world's fastest supercomputer, a 200-petaflop machine with hundreds of thousands of processing cores and trillions of high-speed transistors. If we can confirm that the Google chip actually meets spec, but [we need] this much computer power to do it, then to me that's about as convincing as a larger quantum supremacy demonstration that humanity can no longer confirm at all.

Honestly, I'm happy to give both Google and IBM credit for helping the field of QC, even if it is the result of a strange dispute.

I should mention that, even before IBM's announcement, Johnnie Gray, a postdoc at Imperial College, gave a talk (abstract here) at Caltech's Institute for Quantum Information with a proposal for a different, faster way to classically simulate quantum circuits like Google's—in this case, by doing tensor network contraction more cleverly. Unlike both IBM's proposed brute-force simulation and the Schrödinger-Feynman algorithm that Google implemented, Gray's algorithm (as far as we know now) would need to be repeated k times if you wanted k independent samples from the hard distribution. Partly because of this issue, Gray's approach doesn't currently look competitive for simulating thousands or millions of samples, but we'll need to watch it and see what happens.
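
For readers who haven't seen tensor network contraction before, here's a toy numpy illustration of the basic idea (mine, and far simpler than anything in Gray's actual method): every input qubit, gate, and output bit becomes a tensor, and a single output amplitude is one big sum over the shared indices. The two-qubit circuit below is just a Hadamard followed by a CNOT.

```python
import numpy as np

# Single-qubit tensors
zero = np.array([1.0, 0.0])              # |0> input state
one  = np.array([0.0, 1.0])              # <1| output projection (real, so same entries as |1>)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT as a rank-4 tensor with indices (out_control, out_target, in_control, in_target)
CNOT = np.zeros((2, 2, 2, 2))
for c in (0, 1):
    for t in (0, 1):
        CNOT[c, (t + c) % 2, c, t] = 1.0

# Amplitude <11| CNOT (H x I) |00>, obtained by contracting the whole network at once.
amp = np.einsum('a,b,ca,efcb,e,f->', zero, zero, H, CNOT, one, one)
print(amp)   # ~0.707, i.e. 1/sqrt(2), as expected for the Bell state
```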

(2) Direct versus indirect verification.

The discussion of IBM's proposed simulation brings us to a curious aspect of the Google paper—one that was already apparent when Nature sent me the paper for review back in August. Namely, Google took its supremacy experiments well past the point where even they themselves knew how to verify the results, by any classical computation that they knew how to perform feasibly (say, in less than 10,000 years).

So you might reasonably ask: if they couldn't even verify the results, then how did they get to claim quantum speedups from those experiments? Well, they resorted to various gambits, which basically involved estimating the fidelity on quantum circuits that looked almost the same as the hard circuits, but happened to be easier to simulate classically, and then making the (totally plausible) assumption that that fidelity would be maintained on the hard circuits. Interestingly, they also cached their outputs and put them online (as part of the supplementary material to their Nature paper), in case it became feasible to verify them in the future.

Maybe you can now see where this is going. From Google's perspective, IBM's rainstorm comes with a big silver lining. Namely, by using Summit, hopefully it will now be possible to verify Google's hardest (53-qubit and depth-20) sampling computations directly! This should provide an excellent test, since not even the Google group themselves would've known how to cheat and bias the results had they wanted to.

This whole episode has demonstrated the importance, when doing a sampling-based quantum supremacy experiment, of going deep into the regime where you can no longer classically verify the outputs, as weird as that sounds. Namely, you need to leave yourself a margin, in the likely event that the classical algorithms improve!

Having said that, I don't mind revealing at this point that the lack of direct verification of the outputs, for the largest reported speedups, was my single biggest complaint when I reviewed Google's Nature submission. It was because of my review that they added a paragraph explicitly pointing out that they did do direct verification, using something like a million cores running for something like a month, for a smaller quantum speedup (merely a million times faster than a Schrödinger-Feynman simulation running on a million cores, rather than two billion times faster).

(3) The asymptotic hardness of spoofing Google's benchmark.

OK, but if Google thought that spoofing its test would take 10,000 years, using the best known classical algorithms running on the world's top supercomputers, and it turns out instead that it could probably be done in more like 2.5 days, then how much else could've been missed? Will we find out next that Google's benchmark can be classically spoofed in mere milliseconds?

Well, no one can rule that out, but we do have some reasons to think that it's unlikely—and crucially, that even if it turned out to be true, one would just have to add 10 or 20 or 30 more qubits to make it no longer true. (We can't be more definitive than that? Aye, such are the perils of life at a technological inflection point—and of computational complexity itself.)

The key point to understand here is that we really are talking about simulating a random quantum circuit, with no particular structure whatsoever. While such problems might have a theoretically efficient classical algorithm—i.e., one that runs in time polynomial in the number of qubits—I'd personally be much less surprised if you told me there was a polynomial-time classical algorithm for factoring. In the universe where amplitudes of random quantum circuits turn out to be efficiently computable—well, you might as well just tell me that P=PSPACE and be done with it.

Crucially, if you look at IBM's approach to simulating quantum circuits classically, and Johnnie Gray's approach, and Google's approach, they could all be described as different flavors of "brute force." That is, they all use extremely clever tricks to parallelize, shave off constant factors, make the best use of available memory, etc., but none involves any deep new mathematical insight that could roust BPP and BQP and the other complexity gods from their heavenly slumber. More concretely, none of these approaches seems to have any hope of "breaching the 2^n barrier," where n is the number of qubits in the quantum circuit to be simulated (assuming that the circuit depth is reasonably large). Mostly, they're just trying to get down to that barrier.

Ah, but at the end of the day, we only believe that Google's Sycamore chip is solving a classically hard problem because of the statistical test that Google applies to its outputs: the so-called "Linear Cross-Entropy Benchmark," which I described in Q3 of my FAQ. And even if we grant that calculating the output probabilities for a random quantum circuit is almost certainly classically hard, and that sampling the output distribution of a random quantum circuit is almost certainly classically hard—still, couldn't spoofing Google's benchmark be classically easy?

This last question is where complexity theory can contribute something to the story. A couple weeks ago, UT undergraduate Sam Gunn and I adapted the hardness analysis from my and Lijie Chen's 2017 paper "Complexity-Theoretic Foundations of Quantum Supremacy Experiments," to talk directly about the classical hardness of spoofing the Linear Cross-Entropy benchmark. Our short paper about this should be on the arXiv later this week. Briefly, though, Sam and I show that if you had a sub-2^n classical algorithm to spoof the Linear Cross-Entropy benchmark, then you'd also have a sub-2^n classical algorithm that, given as input a random quantum circuit, could estimate a specific output probability (for example, that of the all-0 string) with variance at least slightly (say, Ω(2^(-3n))) better than that of the trivial estimator that just always guesses 2^(-n). Or in other words: spoofing Google's benchmark is no easier than the general problem of nontrivially estimating amplitudes in random quantum circuits. Our result helps to explain why, indeed, neither IBM nor Johnnie Gray nor anyone else has suggested any attack that's specific to Google's Linear Cross-Entropy benchmark: they all simply attack the general problem of calculating the final amplitudes.
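
In symbols, here is one way to transcribe that conclusion (my paraphrase of the sentence above, reading "variance" as mean squared error over the random choice of circuit C; see the Aaronson-Gunn paper for the precise statement). A sub-2^n spoofer yields a sub-2^n-time estimator \hat{p}(C) for the all-0 output probability p_0(C) satisfying:

```latex
\mathbb{E}_{C}\left[\left(\hat{p}(C) - p_0(C)\right)^{2}\right]
\;\le\;
\mathbb{E}_{C}\left[\left(2^{-n} - p_0(C)\right)^{2}\right] - \Omega\!\left(2^{-3n}\right),
\qquad
\text{where } p_0(C) = \left|\langle 0^n | C | 0^n \rangle\right|^{2}.
```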

(4) Why use Linear Cross-Entropy at all?

In the comments of my FAQ, some people wondered why Google chose the Linear Cross-Entropy benchmark specifically—especially since they'd used a different benchmark (multiplicative cross-entropy, which unlike the linear version actually is a cross-entropy) in their earlier papers. I asked John Martinis this question, and his answer was simply that linear cross-entropy had the lowest variance of any estimator they tried. Since I also like linear cross-entropy—it turns out, for example, to be convenient for the analysis of my certified randomness protocol—I'm 100% happy with their choice. Having said that, there are many other choices of benchmark that would've also worked fine, and with roughly the same level of theoretical justification.
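
For concreteness, here's a small sketch of what the linear cross-entropy score computes: 2^n times the average ideal probability of the bitstrings the device actually output, minus 1. For the Porter-Thomas-like output distribution of a random circuit, perfect sampling scores roughly 1 and uniform noise scores roughly 0. The toy "ideal" distribution below is made up (exponentially distributed probabilities standing in for a real circuit's output); only the formula is the point.

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark score: 2^n times the mean ideal
    probability of the observed bitstrings, minus 1."""
    return 2**n_qubits * np.mean(ideal_probs[np.asarray(samples)]) - 1

# Toy stand-in for a random circuit's output distribution (n = 10 qubits).
rng = np.random.default_rng(0)
n = 10
N = 2**n
p = rng.exponential(1.0 / N, size=N)
p /= p.sum()                                      # made-up "ideal" probabilities

ideal_samples = rng.choice(N, size=100_000, p=p)  # a perfect, noiseless sampler
noise_samples = rng.integers(N, size=100_000)     # uniformly random junk

print(linear_xeb(p, ideal_samples, n))            # ~1.0
print(linear_xeb(p, noise_samples, n))            # ~0.0
```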

(5) Controlled-Z versus iSWAP gates.

Another interesting detail from the Google paper is that, in their previous hardware, they could implement a particular 2-qubit gate called the Controlled-Z. For their quantum supremacy demonstration, on the other hand, they modified their hardware to implement a different 2-qubit gate called the iSWAP. Now, the iSWAP has no known advantages over the Controlled-Z for applications like quantum simulation, Shor's algorithm, or Grover search. Why then did Google make the switch? Simply because, with certain classical simulation methods that they'd been considering, the simulation's running time grows like 4 to the power of the number of iSWAP gates, but only like 2 to the power of the number of Controlled-Z gates! In other words, they made this engineering choice purely and entirely to make a classical simulation of their device sweat more. This seems totally fine and entirely within the rules to me. (Alas, the iSWAP versus Controlled-Z issue has no effect on a proposed simulation method like IBM's.)
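
A sketch of the underlying reason (my own illustration, not something taken from the paper): in simulation methods that cut the qubits into two halves, each 2-qubit gate acting across the cut multiplies the number of terms to track by that gate's operator Schmidt rank, which is 2 for the Controlled-Z but 4 for the iSWAP. A few lines of numpy confirm those ranks:

```python
import numpy as np

def operator_schmidt_rank(gate):
    """Schmidt rank of a two-qubit gate across the qubit-qubit cut.
    Reshape the 4x4 matrix U[(a_out b_out),(a_in b_in)] into
    M[(a_out a_in),(b_out b_in)] and count its nonzero singular values."""
    M = gate.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > 1e-12))

CZ = np.diag([1, 1, 1, -1]).astype(complex)
ISWAP = np.array([[1, 0,  0,  0],
                  [0, 0,  1j, 0],
                  [0, 1j, 0,  0],
                  [0, 0,  0,  1]], dtype=complex)

print(operator_schmidt_rank(CZ))     # 2 -> cost grows like 2^(number of cut CZ gates)
print(operator_schmidt_rank(ISWAP))  # 4 -> cost grows like 4^(number of cut iSWAP gates)
```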

(6) Gil Kalai's objections.

Over the past month, Shtetl-Optimized regular and quantum computing skeptic Gil Kalai has been posting one objection to the Google experiment after another on his blog. Unlike the IBM group and many of Google's other critics, Gil completely accepts the centrality of quantum supremacy as a goal. Indeed, he's firmly predicted for years that quantum supremacy could never be achieved for fundamental reasons—and he agrees that the Google result, if upheld, would refute his worldview. Gil also has no dispute with the exponential classical hardness of the problem that Google is solving.

Instead, Gil—if we're talking not about "steelmanning" his views, but about what he himself actually said—has taken the position that the Google experiment must've been done wrong and will need to be retracted. He's offered varying grounds for this. First he said that Google never computed the full histogram of probabilities with a smaller number of qubits (for which such an experiment is feasible), which would be an important sanity check. Except, it turns out they did do that, and it's in their 2018 Science paper. Next he said that the experiment is invalid because the qubits have to be calibrated in a way that depends on the specific circuit to be applied. Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want. In summary, unlike the objections of the IBM group, so far I've found Gil's objections to be utterly devoid of scientific interest or merit.

Update #1: Alas, I'll have limited availability today for answering comments, since we'll be grading the midterm exam for my Intro to Quantum Information Science course! I'll try to handle the backlog tomorrow (Thursday).

Update #2: Aaannd … timed to coincide with the Google paper, last night the group of Jianwei Pan and Chaoyang Lu put up a preprint on the arXiv reporting a BosonSampling experiment with 20 photons (the previous record had been 6 photons). At this stage of the quantum supremacy race, many had of course written off BosonSampling—or said that its importance was mostly historical, in that it inspired Google's random circuit sampling effort. I'm thrilled to see BosonSampling itself take such a big leap; hopefully, this will eventually lead to a demonstration that BosonSampling was (is) a viable pathway to quantum supremacy as well. And right now, with fault-tolerance still having been demonstrated in zero platforms, we need all the viable pathways we can get. What an exciting day for the field.