# Can a parallel computer simulate a quantum computer? Is BQP inside NP?

+ 11 like - 0 dislike
2407 views

If you have a classical computer with infinite memory and an infinite number of processors, on which you can fork arbitrarily many threads to solve a problem, you have what is called a "nondeterministic" machine. The name is misleading, since the machine is neither probabilistic nor quantum, but parallel; unfortunately it is standard in complexity-theory circles. I prefer to call it "parallel", which matches ordinary usage.

Anyway, can a parallel computer simulate a quantum computer? I thought the answer was yes, since you can fork as many processes as you need to simulate the different branches, but this is not a proof, because the quantum computer might recohere the branches to solve a PSPACE problem that is not parallel-solvable.
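The forking picture can be made concrete with a toy sketch. In the following Python (the function names and the factor-of-91 verifier are invented purely for illustration), a single processor simulates a nondeterministic machine by walking every branch, which is exactly where the exponential cost comes from:

```python
from itertools import product

def nondeterministic_accept(check, n_bits):
    # Simulate a nondeterministic machine on one processor by walking
    # every branch. `check` plays the role of the polynomial-time
    # verifier for one branch's guess; the machine accepts iff ANY
    # branch accepts. On a single processor this costs up to 2**n_bits
    # trials -- the exponential blow-up at the heart of P vs NP.
    return any(check(guess) for guess in product([0, 1], repeat=n_bits))

# Toy verifier: does some 7-bit guess encode a nontrivial factor of 91?
def is_factor_of_91(bits):
    d = int("".join(map(str, bits)), 2)
    return 1 < d < 91 and 91 % d == 0

print(nondeterministic_accept(is_factor_of_91, 7))  # True (7 * 13 == 91)
```

On a true nondeterministic machine each `guess` would run as its own forked branch in parallel; the open question is whether the sequential simulation above can ever be made polynomial.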

Is there a problem strictly in PSPACE, not in NP, which is in BQP? Can a quantum computer solve a problem which cannot be solved by a parallel machine?

### Jargon gloss

1. BQP: (Bounded error Quantum Polynomial-time) the class of problems solvable by a quantum computer in a number of steps polynomial in the input length.
2. NP: (Nondeterministic Polynomial-time) the class of problems solvable by a potentially infinitely parallel ("nondeterministic") machine in polynomial time.
3. P: (Polynomial-time) the class of problems solvable by a single-processor computer in polynomial time.
4. PSPACE: The class of problems which can be solved using a polynomial amount of memory, but unlimited running time.
edited Feb 3, 2015

@Yrogirg: Oh, I see--- that isn't true, because you need to initialize the processor state, to tell each of the processors what to do. You can simulate arbitrarily many processors on a single one, at a cost in memory and slowdown. If these processors don't share memory, and all the processes are branched by a fork instruction as in Unix, you get a "nondeterministic" machine, and it is an open problem whether it is even faster than a regular single-processor machine asymptotically (although it is obviously true). This is P!=NP.

@RonMaimon: that's precisely what I mean that we disagree upon. By the techniques that we currently possess, polynomial resources are not enough. That is not to say that it cannot be done with only polynomial resources, admitting that those resources can be prepared randomly, as with coin flips.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@NieldeBeaudrap: So, you think that you can factor numbers in polynomial time by flipping coins? This is why one can be certain--- you can estimate the number of coin flips you need using a heuristic argument for the hardness of factoring. To my mind, it is certain (in the scientific sense) that you can't naively factor with polynomial resources (meaning without knowing more about factoring than what you learn in Shor's algorithm), and I also believe factoring is truly nonpolynomial, by the same heuristic argument (which explains why P!=NP), so there is no polynomial quantum state.

For others: Niel is misleading and wrong about the quantum state not "having" exponential information. What he notices is that you can't encode and extract more than the classical amount of information in a quantum state, but that's not saying anything--- the representation of a quantum state at intermediate times is not by the amount of information you can get out of it through some measurement. This is the terminology difference, and it is essential: he means "what can you get out", and I mean "what do you need to say to simulate it".
@RonMaimon: This is comical. You're basically playing the role of Richard Feynman, inasmuch that he apparently couldn't be convinced that P vs NP was an open problem; only with you it's P vs BQP. I'm only saying that it hasn't been proven either way yet. That isn't to say that I believe that we can factorize with coin flips; quite the opposite, I do think that superpolynomial time is probably necessary to simulate quantum computers. But much of what is exponential in quantum states is also exponential in probability distributions, and we have no solid proofs. Does this bother you?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@RonMaimon: we're converging on an approximate agreement; though your characterization of a nondeterministic machine would be like me describing an electron as a tiny hard ball of electrical charge which spins on its axis, but which is so sensitive to magnetized measurement devices that it jitters and swings to align with or against any large magnetic field it encounters -- it's a coarse description which misses much. As for labelled processes: the problem is to define how the "processors" would exchange results in a way which agrees with tensor product structure / accounts for entanglement.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@NieldeBeaudrap: We are not converging on anything! I have not changed my mind in any way in this discussion--- you are wrong to say my characterization is false, so stop it; it is not a coarse description, it's the friggin definition! It's a fine characterization of what a "nondeterministic machine" is. NP is a trivial, obvious concept. You are wrong about entanglement--- you can simulate exact BQP in SHM-P; it is easy to do iterated matrix multiplication on this machine. You asked for something which surpasses the state of the art, and SHM-P is it.
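For readers unfamiliar with the "iterated matrix multiplication" claim above: the standard classical simulation of a quantum circuit really is just repeated matrix-vector multiplication on the state vector. Here is a minimal NumPy sketch (variable names are mine); it is exact, but the vector has 2^n entries, which is why the classical cost grows exponentially with the number of qubits n:

```python
import numpy as np

# State-vector simulation: a quantum circuit on n qubits is iterated
# matrix multiplication on a 2**n-dimensional vector. Exact, but both
# memory and time are exponential in n.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

n = 2
state = np.zeros(2**n)
state[0] = 1.0                                  # start in |00>

# Apply H to qubit 0, then H to qubit 1, as full 4x4 operators
# built with Kronecker products (this is the tensor-product structure).
for gate in (np.kron(H, I2), np.kron(I2, H)):
    state = gate @ state

print(state)  # [0.5 0.5 0.5 0.5] -- the uniform superposition
```

Entanglement needs no special handling here: it is carried automatically by the amplitudes of the full 2^n-dimensional vector, which is precisely why the exchange of results between hypothetical parallel "processors" is the hard part of any sub-exponential simulation.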

+ 7 like - 0 dislike

This has been a major open problem in quantum complexity theory for 20 years. Here's what we know:

(1) Suppose you insist on talking about decision problems only ("total" ones, which have to be defined for every input), as people traditionally do when defining complexity classes like P, NP, and BQP. Then we have proven separations between BQP and NP in the "black-box model" (i.e., the model where both the BQP machine and the NP machine get access to an oracle), as mmc alluded to. On the other hand, while it's very plausible that those would extend to oracle separations between BQP and PH (the entire polynomial hierarchy), right now, we don't even know how to prove an oracle separation between BQP and AM (a probabilistic generalization of NP slightly higher than MA). Roughly the best we can do is to separate BQP from MA.

And to reiterate, all of these separations are in the black-box model only. It remains completely unclear, even at a conjectural level, whether or not these translate into separations in the "real" world (i.e., the world without oracles). We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP. After years thinking about the problem, I still don't have a strong intuition either that BQP should be contained in NP in the "real" world or that it shouldn't be.

(Note added: If you allow "promise problems," computer scientists' term for problems whose answers can be undefined for some inputs, then I'd guess that there probably is indeed a separation between PromiseBQP and PromiseNP. But my example that I'd guess witnesses the separation is just the tautological one! I.e., "given as input a quantum circuit, does this circuit output YES with at least 90% probability or with at most 10% probability, promised that one of those is the case?")

For more, check out my paper BQP and the Polynomial Hierarchy.

(2) On the other hand, if you're willing to generalize your notion of a "computational problem" beyond just decision problems -- for example, to problems of sampling from a specified probability distribution -- then the situation becomes much clearer. First, as Niel de Beaudrap said, Alex Arkhipov and I (and independently, Bremner, Jozsa, and Shepherd) showed there are sampling problems in BQP (OK, technically, "SampBQP") that can't be in NP, or indeed anywhere in the polynomial hierarchy, without the hierarchy collapsing. Second, in my BQP vs. PH paper linked to above, I proved unconditionally that relative to a random oracle, there are sampling and search problems in BQP that aren't anywhere in PH, let alone in NP. And unlike the "weird, special" oracles needed for the separations in point (1), random oracles can be "physically instantiated" -- for example, using any old cryptographic pseudorandom function -- in which case these separations would very plausibly carry over to the "real," non-oracle world.

This post imported from StackExchange Physics at 2014-07-24 15:40 (UCT), posted by SE-user Scott Aaronson
answered Aug 21, 2012 by (795 points)
"We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP" --- I accepted mmc's answer because I thought "recursive Fourier sampling" was an example of this. Regarding oracles and the real world, NP oracles are not uncomputable, they are just slow to compute, so you can realize them in the real world.
Recursive Fourier Sampling is an oracle problem; we don't know how to realize it in the non-oracle setting. (Also, it only gives an n vs. n^(log n) separation; if you want an n vs. exp(n) oracle separation check out my BQP vs. PH paper.) And yes, most of the oracles we talk about are computable, but if they're exponentially slow, then simulating them might negate the complexity separation that was our original goal.

This post imported from StackExchange Physics at 2014-07-24 15:40 (UCT), posted by SE-user Scott Aaronson
+ 6 like - 0 dislike

There is no definitive answer due to the fact that no problem is known to be inside PSPACE and outside P. But recursive Fourier sampling is conjectured to be outside MA (the probabilistic generalization of NP) and has an efficient quantum algorithm. Check page 3 of this survey by Vazirani for more details.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user mmc
answered Aug 19, 2012 by (100 points)
+ 6 like - 0 dislike

To add to mmc's response, it is currently generally suspected that NP and BQP are incomparable: that is, that neither is contained in the other. As usual for complexity theory, the evidence is circumstantial; and the suspicion here is orders of magnitude less intense (if we pretend that strength of suspicion is measurable) than the general hypothesis that P ≠ NP.

Specifically: as Aaronson and Arkhipov showed somewhat recently, there are problems in BQP which, if they were contained in NP, would imply that the polynomial hierarchy collapses to the third level. Restricting myself to conveying the significance of this complexity-theorist jargon: any time they talk about the "polynomial hierarchy collapsing" to any level, they mean something which they would regard as (a) quite implausible, and consequently (b) disastrous to their understanding of complexity, on the level of the transition from Newtonian mechanics to quantum mechanics, i.e. a revolution of comprehension to be informally anticipated perhaps no more frequently than once every century or so. (The ultimate crisis, a total "collapse" of this "hierarchy" to the zeroth level, would be precisely the result P = NP.)

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
answered Aug 21, 2012 by (270 points)
+1: The paper you link is great, thanks. BTW: saying how implausible people find something isn't evidence without an argument: one should just make up a simple nonrigorous argument to explain why stuff is hard. It's easy for NP and factoring, but for the higher levels of the polynomial hierarchy, I never tried.
Ron, if you do try, I predict you'll be able to find a "simple nonrigorous argument" by which to convince yourself that the polynomial hierarchy should indeed be infinite! (To calibrate, I'm much more confident of that than I am that factoring is classically hard.) Just take whatever intuition you've already used to convince yourself that P!=NP, and try extending it to convince yourself that NP!=coNP. Then try convincing yourself that P^NP != NP^NP. Then conclude, by "physicist induction", that ALL these classes should be distinct! :-)

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@RonMaimon: I agree with you on the implausibility front. I prefer actual proofs, personally. Of course, the proofs (if they exist) are nevertheless expected to be difficult to find, because no-one's succeeded yet. I'm really just representing the sociopolitical import of those claims.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@ScottAaronson: The intuition I use (which conceivably might be made rigorous in some way) is based on the minimum number of waste bits during a reversible computation, and when you introduce an NP oracle, as in higher levels of the hierarchy, you have to compute the oracle value, and those waste bits are not counted in a simple way, so I can't do the heuristic immediately. It's probably simple to fix. This heuristic is better than the heuristics I see in the literature; I tried to make it a proof once, but it's hard to prove minimality of waste bits in a reversible implementation.

To convince yourself factoring is hard, consider the minimum number of bits in a reversible computation implementation of multiply. Then a full search over these bits is required for sure, and there are enough waste-bits to tell you that the problem is hard.
Ron, I don't quite understand your argument for why factoring should be hard, but it seems like it can't possibly work. For how does your argument deal with the existence of algorithms like the Number Field Sieve, which classically factors an n-bit integer using ~exp(n^{1/3}) steps, still exponential but much much faster than a "full search"? Note that, like any algorithm, the Number Field Sieve can be implemented reversibly with only a constant-factor slowdown.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@RonMaimon: can you outline how your waste-bits analysis would proceed with an efficiently solvable problem, such as 2-CNF-SAT?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@ScottAaronson: The number of waste bits is not that large for factoring, it doesn't scale linearly with the number of bits, you need to know the information loss in multiplication. I forget what the right scaling is, I did this years ago, but I remember that it comes out hard, but not fully exponentially hard. I could reproduce it in a bit, but I haven't thought about it in a while.

@NieldeBeaudrap: The idea is that the forward computation of 2-sat doesn't need many waste bits, you can implement it with a number of waste bits only scaling as the log, and you can see this from the solution of the backward problem. I honestly don't remember the details, I did it a long time ago.
This sounds either way too good to be true -- like your "method of counting waste bits" will revolutionize theoretical computer science, by giving us at least heuristic answers to all the great unsolved problems -- or else like you simply have some way to map the best known conventional algorithms into this framework. So yes, details please! (Since it's a bit off-topic, go ahead and post them somewhere else, or email me and Niel.)

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@RonMaimon: ditto each sentence of Scott's previous comment.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@Ron: Since I've learned a lot from reading your physics.SE posts, I find it sad that you'd react angrily to what, at least on my part (and I imagine on Niel's), was a genuine request for explanation about something that frankly sounds incredible to people who work in this field. (The history of CS is rife with wrong claims about which algorithms were "obviously unimprovable" on heuristic grounds!) Since you wanted questions, here are a few: does your heuristic method tell you what the true complexity of the graph isomorphism problem should be? How about matrix multiplication?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@RonMaimon: as to junk arguments, I'm just telling you how (other) people talk about things: I'm not active nor expert in the polynomial hierarchy. I'm puzzled that you should be so angry at other people's intuitions (not yours, nor incidentally mine), when you clearly place such store by your own means of generating them that you dismiss my counterpoints as misrepresentation. Is there any mode that we can communicate where I can avoid brushing you the wrong way, in those instances where we both have something to say where "knowledge" gives way to "ideation"?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@ScottAaronson: I erased my ridiculous comment, and I am truly sorry. I had a knee jerk borderline psychotic paranoid reaction. I might be full of crap on this, I don't remember the argument very well, and I never applied it to specific problems other than factoring. The idea is to look at the number of waste-bits in a reversible implementation of the only NP problem I cared about--- figure out the initial memory state of a universal computer given the instruction and the final memory state. This obviously is the granddaddy of all other NP-complete problems.

@NieldeBeaudrap: I apologize to you too, my reaction was unacceptable, and I deleted it. (continued) "Figure out a starting state from the end-state" is obviously NP-complete on an irreversible computer; it is trivial on a reversible computer, since you just run the computer in reverse. So the idea was that you keep track of the junk bits that are thrown away in the irreversible computation, and arrange to minimize these. This gives you the entropy of the problem going forward. The intuition I had was that the optimal reversing entropy is the log of the size of the search space when going backwards.

I supposed this was everyone's intuition in the field regarding why NP complete problems were hard. I can't say anything about decision problems directly, because I was thinking about taking an initial state to a final state, not to a bit. There is a major issue with making the argument in that you might need to consider many copies of the problem computed reversibly in parallel all at once, and sharing waste bits, so as to minimize the entropy production of the computation, the same way Shannon's entropy is only found on copies. It's really half-baked, hence my defensive psychotic reply.

Thanks; apology gratefully accepted! I've been there, man: when I was an undergrad, I also thought I should ignore whatever had been written about P vs. NP, and figure out the "right" way to think about complexity theory from scratch. But repeated experiences following stupid dead ends, reinventing the wheel, etc. finally drove home that the people who'd thought about these things before were not all fools. Incidentally, the funny thing about NP-complete problems is of course that, by definition, every NP-complete problem gets to be the "granddaddy" of all the other ones!

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson

@ScottAaronson: Unless you had the exact same idea that I had regarding reversible waste-bits, you haven't been there, you are making an analogy. Reinvention is important, but this is not an example. It is not in the spirit of the idea that "every NP complete problem gets to be the granddaddy", because they are only so because Cook showed they are equivalent to the problem I gave. I don't think the people studying this are fools, it's just completely clear that they have absolutely not the slightest hint of an idea about how to prove anything, not even a heuristic argument. So I made one up.

Also, the field essentially has one real idea--- relativize to an oracle--- and this is how you make money and get positions in the field. This means that most papers are buried in mind-numbing, brain-destroying jargon and impenetrable rigor, and avoid simple CS-style algorithm descriptions. Most papers I see are rewrites of one approach in a thousand slightly different ways (there are exceptions, of course). I find it hard to read this literature because of this; it's like reading logic, which also has a paucity of ideas compared to the number of papers.

First: I'm not addressing your idea because I don't understand it. Why should we expect that bounding the number of waste bits in a reversible computation (rather than some other quantity, like number of gates, or number of memory bits in an irreversible computation) would give any particular insight? Write up something more detailed, and I'll be happy to read it and form an opinion.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson

@ScottAaronson: Because you can reverse the computation starting with any waste bits by running the computer backwards, and only if you guessed the right final value of the waste-bits, you end up zeroing them all out when you are done reversing. If you try to use as few waste bits as possible, I felt you should be maximally compressing them as you compute, then they end up effectively random (otherwise you could compress them further), so the remaining waste-bit number is the minimal hard-to-guess part of the forward computation.

Second: I'd say your assessment of the state of complexity theory is fairly accurate circa 1975. Today we do have non-relativizing results, from IP=PSPACE and the PCP Theorem to Williams' NEXP vs. ACC breakthrough last year to Mulmuley's GCT program. Are you familiar with these? They're all still a hell of a long way from P!=NP, but that's the wrong metric: they pretty obviously use nontrivial new ideas to answer hard complexity questions that people couldn't answer before. Dismissing them is like dismissing string theory because it hasn't brought us closer to explaining the muon mass.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
(In a similar vein, I find HEP papers full of mind-numbing brain-destroying jargon and impenetrable handwaving! Must not be too much worth understanding there...)

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@ScottAaronson: I don't know these, thank you for pointing them out! I am happy to learn, and I hope that the ideas are good. The analogy with HEP is not really right, the physicists generally bend over backwards to eliminate unnecessary jargon, and speak as much as possible in homey metaphors. There is a political cabal in physics that will smack you if you don't say things as clearly as you possibly can, it's very nice, and I wish other fields had it.
Let me see if I understand your idea: first you fix a "universal" reversible circuit C? (Otherwise, how do you know which reversible computation to run backwards, once you've guessed the final values of the waste bits?) Then given (say) a composite integer N and its prime factors p,q, you search for a final value b of the waste bits, such that C^{-1}(p,q,b)=(N,a) for some other string a (which you can think of as the "instructions" to C)? Of course, even after you've found such an a, there's no guarantee it will work for some other composite integer N'.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
It's amazing how strongly people's perception of "jargon-ness" can just reflect their background. I'd heard about how important AdS/CFT was, so I went through the papers on it, hoping to learn the new conceptual insights about spacetime in quantum gravity ... and instead found technical constructions involving stacks of D3-branes. Which CS theory papers are you reading? Whichever they are, I can probably point you to better ones. In the proceedings of the major CS theory conferences (STOC, FOCS, CCC, ...), the first few pages of every paper are just history, motivation, high-level ideas...

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@ScottAaronson: Not exactly--- you start with a reversible computer, and input (I,p,q,0), where 0 is a string of zeroed-out bits, I is the instructions for multiply (which you don't touch), and p and q are the inputs. Then you run it forward, and you get (I,pq,J), where J is junk. You now ignore J, and you get the output of the regular machine: (I,pq). If you just append guessed bits to the answer, you reverse the computation to get (I,p',q',J'). If you guessed right, J' is 0, as you initialized the machine originally. If you make the least wasteful algorithm (perhaps on copies), I felt you need to search over J.
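The (I,p,q,0) -> (I,pq,J) picture can be illustrated with a toy Python sketch. Note the choice of keeping p itself as the junk register is my own simplification for illustration, not the minimal junk encoding the heuristic argument is actually about: running the map forward records just enough junk to reverse it, and factoring without the junk amounts to searching over junk values that unwind correctly.

```python
def reversible_mul(p, q):
    # Forward pass of a "reversible multiply": (p, q) -> (p*q, junk).
    # Keeping p as the junk register makes the map a bijection,
    # since q can be recovered as (p*q) // p.
    return p * q, p

def reverse_mul(prod, junk):
    # Run the machine backwards: a correct junk value unwinds to the
    # original inputs (and would zero the junk register); a wrong
    # guess fails the consistency check.
    assert junk > 1 and prod % junk == 0, "wrong junk guess: reversal fails"
    return junk, prod // junk

N, junk = reversible_mul(13, 7)
print(reverse_mul(N, junk))  # (13, 7)

# Factoring N without the junk = searching over junk candidates:
factors = [reverse_mul(91, j) for j in range(2, 91) if 91 % j == 0]
print(factors)  # [(7, 13), (13, 7)]
```

This is only the shape of the argument: the heuristic concerns the minimum number of junk bits over all reversible implementations, which this toy makes no attempt to compute.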

Regarding jargon and physics, the jargon-free holography insights are better found in 't Hooft's papers in Nuclear Physics B in the period 1985-1990, but they contain a few inaccuracies. The Susskind papers from '90-'96 also contain these insights. The Maldacena paper builds somewhat on previous string and supergravity papers, and is less accessible.
