PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.

  Can a parallel computer simulate a quantum computer? Is BQP inside NP?

+ 11 like - 0 dislike
5124 views

If you have a classical computer with infinite memory and infinitely many processors, and you can fork arbitrarily many threads to solve a problem, you have what is called a "nondeterministic" machine. This name is misleading, since the machine isn't probabilistic or quantum but rather parallel; unfortunately the name is standard in complexity-theory circles. I prefer to call it "parallel", which matches ordinary usage.

Anyway, can a parallel computer simulate a quantum computer? I thought the answer is yes, since you can fork out as many processes as you need to simulate the different branches, but this is not a proof, because you might recohere the branches to solve a PSPACE problem that is not parallel solvable.

Is there a problem strictly in PSPACE, not in NP, which is in BQP? Can a quantum computer solve a problem which cannot be solved by a parallel machine?

Jargon gloss

  1. BQP: (Bounded error Quantum Polynomial-time) the class of problems solvable by a quantum computer in a number of steps polynomial in the input length.
  2. NP: (Nondeterministic Polynomial-time) the class of problems solvable by a potentially infinitely parallel ("nondeterministic") machine in polynomial time.
  3. P: (Polynomial-time) the class of problems solvable by a single-processor computer in polynomial time.
  4. PSPACE: The class of problems which can be solved using a polynomial amount of memory, but unlimited running time.
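To make the bookkeeping concrete, here is a minimal pure-Python sketch (an illustration; the function and circuit are my own, not from the question) of the brute-force classical simulation of a quantum computer: an n-qubit state is a vector of 2^n complex amplitudes, so merely storing the state costs exponential classical resources, before any question of parallelism arises.

```python
def apply_hadamard(state, target):
    """Apply a Hadamard gate to qubit `target` of a state vector
    holding 2**n complex amplitudes."""
    s = 2 ** 0.5
    new = list(state)
    for i in range(len(state)):
        if not (i >> target) & 1:
            j = i | (1 << target)          # partner index with target bit set
            new[i] = (state[i] + state[j]) / s
            new[j] = (state[i] - state[j]) / s
    return new

n = 3
state = [0j] * (2 ** n)   # 2**n amplitudes: exponential classical memory
state[0] = 1 + 0j         # start in |000>
for q in range(n):
    state = apply_hadamard(state, q)

probs = [abs(a) ** 2 for a in state]   # uniform superposition: 1/2**n each
```

Recohering branches shows up here as interference between amplitudes, which is exactly what a naive one-process-per-branch parallel simulation does not capture for free.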
asked Aug 19, 2012 in Theoretical Physics by Ron Maimon (7,720 points) [ revision history ]
edited Feb 3, 2015 by Ron Maimon
I think a more interesting question is can a quantum computer that we could build simulate "infinite memory infinite processor number classical computer"?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Yrogirg
@Yrogirg: That's widely conjectured to be false--- that's the statement that BQP includes NP, and it's not taken seriously. It would require a quantum algorithm for an NP-complete problem. Of course, proving this is hopeless, since it would prove P!=NP automatically.
I thought "infinite memory infinite processor number classical computer" should be capable of certain super-Turing computations, like testing every integer number. I was wondering whether some quantum computer could do it.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Yrogirg

@Yrogirg: Oh, I see--- that isn't true, because you need to initialize the processor state, to tell each of the processors what to do. You can simulate arbitrarily many processors on a single one, at a cost of memory and slowdown. If these processors don't share memory, and all the processes are branched by a fork instruction as in unix, you get a "nondeterministic" machine, and it is an open problem whether it is even faster than a regular single-processor machine asymptotically (although it is obviously true). This is P!=NP.
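The fork-tree simulation described here can be sketched in a few lines of Python (an illustrative toy, not from the thread): each nondeterministic guess of a variable becomes an explicit branch, so simulating the "parallel" machine on one processor for a SAT instance with n variables costs 2^n branch explorations.

```python
from itertools import product

def sat_nondeterministic(clauses, n_vars):
    """Simulate a forking ("nondeterministic") machine on one processor:
    every fork point that guesses a variable becomes an explicit branch,
    so n_vars guessed bits cost 2**n_vars branch explorations.
    Clauses use DIMACS-style literals: k means x_k, -k means NOT x_k."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # some branch accepts, so the machine accepts
    return False          # no branch accepts

# (x1 OR NOT x2) AND (x2 OR x3) is satisfiable, e.g. x1=True, x2=True
result = sat_nondeterministic([[1, -2], [2, 3]], 3)
```

The machine "accepts" if any branch accepts; the exponential cost of enumerating branches on a single processor is exactly the gap at stake in P vs NP.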

Suggestion to question(v1): For the benefit of new readers, it is a good idea to spell out abbreviations.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Qmechanic
@Qmechanic: I did it, but I am not sure it is necessary, as I was careful to state everything twice, first without jargon, then with.
@Yrogirg: Ron Maimon's statement that a quantum computer probably couldn't simulate an infinite-memory, infinite-processor computer is not only correct, it is an enormous understatement. Though Ron disagrees with me on this point, it is generally accepted in the domain of quantum information theory that quantum states don't contain even exponentially more information than classical states.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@NieldeBeaudrap: I don't know what you mean by "disagree", I only disagree that to simulate a quantum computer classically you need polynomial resources, and this is also well accepted. I think this is just terminology.
@RonMaimon: that's precisely what I mean that we disagree upon. By the techniques that we currently possess, polynomial resources are not enough. That is not to say that it cannot be done with only polynomial resources, admitting that those resources can be prepared randomly as with coin flips.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@NieldeBeaudrap: So, you think that you can factor numbers in polynomial time by flipping coins? This is why one can be certain--- you can estimate the number of coin flips you need using a heuristic argument for the hardness of factoring. To my mind, it is certain (in the scientific sense) that you can't naively factor with polynomial resources (meaning without knowing more about factoring than what you learn in Shor's algorithm), and I also believe factoring is truly nonpolynomial, from the same heuristic argument (which explains why P!=NP) so there is no polynomial quantum state.

For others: Niel is misleading and wrong about the quantum state not "having" exponential information. What he notices is that you can't encode and extract more than the classical amount of information in a quantum state, but that's not saying anything--- the representation of a quantum state at intermediate times is not by the amount of information you can get out of it through some measurement. This is the terminology difference, and it is essential: he means "what can you get out", and I mean "what do you need to say to simulate it".
@RonMaimon: This is comical. You're basically playing the role of Richard Feynman, inasmuch as he apparently couldn't be convinced that P vs NP was an open problem; only with you it's P vs BQP. I'm only saying that it hasn't been proven either way yet. That isn't to say that I believe that we can factorize with coin flips; quite the opposite, I do think that superpolynomial time is probably necessary to simulate quantum computers. But much of what is exponential in quantum states is also exponential in probability distributions, and we have no solid proofs. Does this bother you?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
I know what you are saying, but this representation problem of quantum states is what people have been banging their heads on for 80 years: probability distributions have a clear computational reduction, namely Monte Carlo, and quantum systems don't have it. I don't want to leave the impression that there is a clever reduction out there, because it creates uncertainty about the 't Hooft threads that are the main inspiration for this question. It's ok to speculate given that we don't know how to prove anything, but I consider Landauer (?) reversible computation to give good heuristics.

@NieldeBeaudrap: Stop editing the post to replace "parallel" with "nondeterministic"! I put thought into when I go jargon-busting, and this is a case where it is needed. I will never say "nondeterministic machine" in my life without prefacing it with "parallel machine", since I will never participate in erecting jargon-walls to keep outsiders out. This sort of stuff keeps the field of complexity permanently stuck in the dark ages.

@RonMaimon: I edited it because it's polemical. There is nothing misleading in calling it "nondeterministic": a computation with unboundedly many processors is only one way to describe the model, and is no more physical than one making guesses nondeterministically. You are deliberately discouraging people from acquiring the tools they would require to assess the complexity literature on their own by reverting my edit, which provided explanations and links to the existing concepts. If you think complexity is in the dark ages, whatever is motivating you to care about NP, anyway?

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@NieldeBeaudrap: What do you mean "making guesses nondeterministically"? This is the type of obfuscatory garbage people in this field write. There is no way to describe a "nondeterministic machine" as anything "nondeterministic". The reason for the name is that a forking automaton has 2 outputs for a given input, and therefore has "nondeterministic" evolution. This is a stupid convention, and I'm busting it. Explaining the thing clearly does not discourage anything, it only shows up the incompetence of the people. I am explaining stuff simply that you folks incompetently make opaque.

@RonMaimon: A "nondeterministic" machine in the CS sense is one in which there is one processor, but no specification of which of the permitted transitions it may explore, not even probabilistically; it is simply not determined, hence the name. It's non-physical, but then that concept was defined by logicians who didn't put a priority on realism of physical evolution. If you prefer a different idiom, that's fine. But that doesn't make the standard terminology "non-standard". As for our competence or obfuscation: once you've managed to surpass the state of the art, do please let us know.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@NieldeBeaudrap: You are totally annoyingly wrong. The nondeterministic machine makes all the transitions at once, so if it can go from state A to "B and C" it goes from state A to state B and state C both at once. If any of the successor states halt, it is said to halt. This is why it was called "non-deterministic"; it is a stupid name, and it is called "parallel" by everyone else. There is no state of the art to surpass, nobody in this field has any real results.

@Ron: leaving aside what grounds you have to make authoritative statements about how models of computation are described, and how they were consequently named -- if the state of the art is in fact trivial, surpassing it ought also to be trivial. So it's heartening to hear you say so. Godspeed.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@NieldeBeaudrap: The "grounds" are that I understand what a nondeterministic machine is, from reading the definition. It's a parallel machine, that's what it is, there's no debate possible. Making progress requires working on it, and it's not my favorite thing to think about (although I think it's very important). I thought a little bit this week, spurred by your challenge, but I didn't get anywhere. I am not disheartened, because this is pretty much the same as all other folks in this field. My line of thought is reversible computation and waste bits, I think this is key.
@RonMaimon: indeed it shouldn't be discouraging, but it shows that maybe "incompetence of the majority of people working on this topic" is not an experimentally justified theory for stagnation in this field. As to definitions, the ones I know make no reference to computing in parallel either; they refer to the existence of computational branches without remarking on how they should be found (by lucky guesses or by brute force computation). Anything else is a semantic gloss. As such machines don't actually exist, and neither description is more useful, there's no basis for refutation.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@NieldeBeaudrap: Parallel machines do exist, calling them "imaginary" is silly. You can do a non-slowing-down fork instruction on a machine with multiple processors, people do it all the time now, and you can imagine a machine with a large number of independent processors. This is what a nondeterministic machine is, and the tripe about it being "unphysical" or "mysterious" is annoying. I didn't say the people working in this field are incompetent, I said they are obscurantists. There's a huge difference. Logicians are competent obscurantists for example.
@Ron: there do not exist any computers which can potentially double the number of processors working on a problem at any point in time. If you're satisfied with solving problems like SAT on 30 bits, then yes, a server farm of a just over a billion networked simple processors suffices, and initialising them won't take too long with a network topology in 3+1D. But it simply doesn't scale; the structure of spacetime itself works against getting the needed resources. If you make do with a fixed # of processors, you then can no longer complete in poly-time, unless e.g. P = NP.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@NieldeBeaudrap: I see what you mean--- the rate of processor allocation is too large to be physical. But it's not "unphysical" like a halting problem is. I don't like it when people call it unphysical, it's just "infinitely parallel".
@RonMaimon: would you consider infinite energy to be unphysical? If not, why not the energy required for infinite parallelism? What if we took all of that energy and put it into a single processor to make it compute infinitely quickly (as in a 'Zeno' processor), to actually obtain answers to the halting problem? To me, these questions of infinity (or exponentials) are not identical, but they are certainly equivalent in that they are resources which we could never hope to use to obtain locally an answer to a difficult computational problem. So they are all unphysical as far as I'm concerned.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
@NieldeBeaudrap: Ok, ok, we agree on this, it's just a question of "potential infinity" and how you explain things. I don't like explaining things that are simple so that they sound mysterious--- this is obscurantism--- and if a student asks you "what is a nondeterministic machine?" You can say "A machine with so many processors that UNIX's 'fork' instruction is cost free, no matter how many times you use it."

By the way, this does lead to a simple thing I don't see anywhere in the literature--- if you allow the machines to keep a label identifying the other processes, and trade their results with other machines, compare notes as they are running, I think you get a bigger class intermediate between NP and PSPACE. Let me call this hypothetical class "SHM-P" (for unix shm--- shared memory). This is the natural polynomial thing that strictly includes BQP and that isn't PSPACE (at least not obviously).
@RonMaimon: we're converging on an approximate agreement; though your characterization of a nondeterministic machine would be like me describing an electron as a tiny hard ball of electrical charge which spins on its axis, but which is so sensitive to magnetized measurement devices that it jitters and swings to align with or against any large magnetic field it encounters -- it's a coarse description which misses much. As for labelled processes: the problem is to define how the "processors" would exchange results in a way which agrees with tensor product structure / accounts for entanglement.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap

@NieldeBeaudrap: We are not converging on anything! I have not changed my mind in any way in this discussion--- you are wrong to say my characterization is false, stop it; it is not a coarse description, it's the friggin definition! It's a fine characterization of what a "nondeterministic machine" is. NP is a trivial, obvious concept. You are wrong about entanglement--- you can simulate exact BQP in SHM-P, it is easy to do iterated matrix multiplication on this machine. You asked for something which surpasses the state of the art, SHM-P is it.

3 Answers

+ 7 like - 0 dislike

This has been a major open problem in quantum complexity theory for 20 years. Here's what we know:

(1) Suppose you insist on talking about decision problems only ("total" ones, which have to be defined for every input), as people traditionally do when defining complexity classes like P, NP, and BQP. Then we have proven separations between BQP and NP in the "black-box model" (i.e., the model where both the BQP machine and the NP machine get access to an oracle), as mmc alluded to. On the other hand, while it's very plausible that those would extend to oracle separations between BQP and PH (the entire polynomial hierarchy), right now, we don't even know how to prove an oracle separation between BQP and AM (a probabilistic generalization of NP slightly higher than MA). Roughly the best we can do is to separate BQP from MA.

And to reiterate, all of these separations are in the black-box model only. It remains completely unclear, even at a conjectural level, whether or not these translate into separations in the "real" world (i.e., the world without oracles). We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP. After years thinking about the problem, I still don't have a strong intuition either that BQP should be contained in NP in the "real" world or that it shouldn't be.

(Note added: If you allow "promise problems," computer scientists' term for problems whose answers can be undefined for some inputs, then I'd guess that there probably is indeed a separation between PromiseBQP and PromiseNP. But my example that I'd guess witnesses the separation is just the tautological one! I.e., "given as input a quantum circuit, does this circuit output YES with at least 90% probability or with at most 10% probability, promised that one of those is the case?")

For more, check out my paper BQP and the Polynomial Hierarchy.

(2) On the other hand, if you're willing to generalize your notion of a "computational problem" beyond just decision problems -- for example, to problems of sampling from a specified probability distribution -- then the situation becomes much clearer. First, as Niel de Beaudrap said, Alex Arkhipov and I (and independently, Bremner, Jozsa, and Shepherd) showed there are sampling problems in BQP (OK, technically, "SampBQP") that can't be in NP, or indeed anywhere in the polynomial hierarchy, without the hierarchy collapsing. Second, in my BQP vs. PH paper linked to above, I proved unconditionally that relative to a random oracle, there are sampling and search problems in BQP that aren't anywhere in PH, let alone in NP. And unlike the "weird, special" oracles needed for the separations in point (1), random oracles can be "physically instantiated" -- for example, using any old cryptographic pseudorandom function -- in which case these separations would very plausibly carry over to the "real," non-oracle world.

This post imported from StackExchange Physics at 2014-07-24 15:40 (UCT), posted by SE-user Scott Aaronson
answered Aug 21, 2012 by ScottAaronson (795 points) [ no revision ]
"We don't have any clear examples analogous to factoring, of real decision problems in BQP that are plausibly not in NP", I accepted mmc's answer because I thought "recursive Fourier sampling" is an example of this. Regarding oracles and the real world, NP oracles are not uncomputable, they are just slow to compute, so you can realize them in real world.
Recursive Fourier Sampling is an oracle problem; we don't know how to realize it in the non-oracle setting. (Also, it only gives an n vs. n^(log n) separation; if you want an n vs. exp(n) oracle separation check out my BQP vs. PH paper.) And yes, most of the oracles we talk about are computable, but if they're exponentially slow, then simulating them might negate the complexity separation that was our original goal.

This post imported from StackExchange Physics at 2014-07-24 15:40 (UCT), posted by SE-user Scott Aaronson
+ 6 like - 0 dislike

There is no definitive answer due to the fact that no problem is known to be inside PSPACE and outside P. But recursive Fourier sampling is conjectured to be outside MA (the probabilistic generalization of NP) and has an efficient quantum algorithm. Check page 3 of this survey by Vazirani for more details.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user mmc
answered Aug 19, 2012 by mmc (100 points) [ no revision ]
+ 6 like - 0 dislike

To add to mmc's response, it is currently generally suspected that NP and BQP are incomparable: that is, that neither is contained in the other. As usual for complexity theory, the evidence is circumstantial; and the suspicion here is orders of magnitude less intense (if we pretend that strength of suspicion is measurable) than the general hypothesis that P ≠ NP.

Specifically: as Aaronson and Arkhipov showed somewhat recently, there are problems in BQP which, if they were contained in NP, would imply that the polynomial hierarchy collapses to the third level. Restricting myself to conveying the significance of this complexity-theorist jargon: any time they talk about the "polynomial hierarchy collapsing" to any level, they mean something which they would regard as (a) quite implausible, and consequently (b) disastrous to their understanding of complexity on the level of the transition from Newtonian mechanics to quantum mechanics, i.e. a revolution of comprehension to be informally anticipated perhaps no more frequently than once every century or so. (The ultimate crisis, a total "collapse" of this "hierarchy" to the zeroth level, would be precisely the result P = NP.)

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Niel de Beaudrap
answered Aug 20, 2012 by Niel de Beaudrap (270 points) [ no revision ]
Most voted comments show all comments
(In a similar vein, I find HEP papers full of mind-numbing brain-destroying jargon and impenetrable handwaving! Must not be too much worth understanding there...)

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@ScottAaronson: I don't know these, thank you for pointing them out! I am happy to learn, and I hope that the ideas are good. The analogy with HEP is not really right, the physicists generally bend over backwards to eliminate unnecessary jargon, and speak as much as possible in homey metaphors. There is a political cabal in physics that will smack you if you don't say things as clearly as you possibly can, it's very nice, and I wish other fields had it.
Let me see if I understand your idea: first you fix a "universal" reversible circuit C? (Otherwise, how do you know which reversible computation to run backwards, once you've guessed the final values of the waste bits?) Then given (say) a composite integer N and its prime factors p,q, you search for a final value b of the waste bits, such that C^{-1}(p,q,b)=(N,a) for some other string a (which you can think of as the "instructions" to C)? Of course, even after you've found such an a, there's no guarantee it will work for some other composite integer N'.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
It's amazing how strongly people's perception of "jargon-ness" can just reflect their background. I'd heard about how important AdS/CFT was, so I went through the papers on it, hoping to learn the new conceptual insights about spacetime in quantum gravity ... and instead found technical constructions involving stacks of D3-branes. Which CS theory papers are you reading? Whichever they are, I can probably point you to better ones. In the proceedings of the major CS theory conferences (STOC, FOCS, CCC, ...), the first few pages of every paper are just history, motivation, high-level ideas...

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson
@ScottAaronson: Not exactly--- you start with a reversible computer, and input (I,p,q,0) where 0 is a string of zeroed-out bits, I are the instructions for multiply (which you don't touch), and p and q are the inputs. Then you run it forward, and you get (I,pq,J) where J is junk. You now ignore J and you get the output of the regular machine: (I,pq). If you just add bits to the answer, you reverse the computation to get (I,p',q',J'). If you guessed right, J' is 0, as you initialized the machine originally. If you make the least wasteful algorithm (perhaps on copies) I felt you need to search J.

Regarding jargon and physics, the jargon-free holography insights are better found in 'tHooft's papers in Nuclear Physics B in the period 1985-1990, but they contain a few inaccuracies. The Susskind papers from '90-96 also contain these insights. The Maldacena paper builds somewhat on previous string and supergravity papers, and is less accessible.
Most recent comments show all comments
First: I'm not addressing your idea because I don't understand it. Why should we expect that bounding the number of waste bits in a reversible computation (rather than some other quantity, like number of gates, or number of memory bits in an irreversible computation) would give any particular insight? Write up something more detailed, and I'll be happy to read it and form an opinion.

This post imported from StackExchange Physics at 2014-07-24 15:39 (UCT), posted by SE-user Scott Aaronson

@ScottAaronson: Because you can reverse the computation starting with any waste bits by running the computer backwards, and only if you guessed the right final value of the waste-bits, you end up zeroing them all out when you are done reversing. If you try to use as few waste bits as possible, I felt you should be maximally compressing them as you compute, then they end up effectively random (otherwise you could compress them further), so the remaining waste-bit number is the minimal hard-to-guess part of the forward computation.
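The waste-bit argument can be illustrated with a toy reversible circuit (the gate set and wiring here are my own illustration, not from the comment): Toffoli and CNOT gates are self-inverse, so running the gate list backwards inverts the computation, and the zeroed ancillas reappear only when the waste bits carry their true final values.

```python
def toffoli(bits, a, b, c):
    # Reversible AND: flip wire c iff wires a and b are both 1.
    if bits[a] and bits[b]:
        bits[c] ^= 1

def cnot(bits, a, b):
    # Reversible XOR/copy: flip wire b iff wire a is 1.
    if bits[a]:
        bits[b] ^= 1

def run(circuit, bits, reverse=False):
    # Each gate is self-inverse, so the inverse circuit is the
    # same gate list applied in reverse order.
    for g in (reversed(circuit) if reverse else circuit):
        (toffoli if g[0] == 'TOF' else cnot)(bits, *g[1:])
    return bits

# Compute (a AND b) into wire 3, then copy it to the output wire 4;
# wire 3 is left set: a "waste bit" that was never uncomputed.
circuit = [('TOF', 0, 1, 3), ('CNOT', 3, 4)]
forward = run(circuit, [1, 1, 0, 0, 0])            # [1, 1, 0, 1, 1]
# Reversing from the true waste-bit value restores the zeroed ancillas:
restored = run(circuit, forward[:], reverse=True)  # [1, 1, 0, 0, 0]
# Reversing from a wrong waste-bit guess does not:
wrong = run(circuit, [1, 1, 0, 0, 1], reverse=True)
```

Only the correct guess for the non-output wires zeroes the ancillas on the reverse run, which is the sense in which the waste bits are the "hard-to-guess part" of the forward computation.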

User contributions licensed under CC BY-SA 3.0 with attribution required.
