Famous examples are easy to find, but the worst cases are the ones you can't document--- the cases of self-censorship, where important results are never published because the authors know from introspection that they won't pass review, so they work on something else instead. This has the effect of letting prominent journal referees steer the research direction of an entire field, and of reducing the truly creative work to a handful of people at the top.
But you can find outright failures of peer review very easily. Peter Higgs's paper was rejected on first submission (the referees thought it was wrong), and then accepted a few weeks or months later on a second, revised submission, after Higgs added one prosaic sentence at the end. The one sentence obviously was not what mattered; what likely happened was that either the referee or the editor read Brout/Englert, or else ran it by someone who knew something, like Gell-Mann or somebody, who then said ok.
By contrast, Marco Frasca's recent papers based on a nonsense non-result have gotten positive enough reviews to get published in journals (and they get cited too). A paper with completely wrong or speculative results paradoxically doesn't have such a hard time, because such a paper is never threatening to anybody--- it doesn't contradict something you thought you knew. This is especially true if it pretends to be in a different field, and is sent to experts who can't be bothered, or don't have time, to check the arguments in detail. Open peer review fixes this.
The experimental failures are the most notorious, and they get the most press, because here you have fabricated data, and this is usually easy for people who work on similar experiments to catch, but not for a randomly chosen referee. Open peer review can catch such fraud immediately, while traditional journals have let fabricated data through in the past. Open peer review can also catch bad methodology; for example, see this stackexchange answer regarding nonsense: http://physics.stackexchange.com/questions/23725/is-the-emdrive-or-relativity-drive-possible .
You don't have to go to famous examples--- my own paper with Jennifer Schwarz ( http://arxiv.org/abs/cond-mat/0301495 ) faced a split decision (one referee for publishing, the other against), and the one against was more adamant. I was forced to make the language of the proof of the main theorem much more formal and general, and less comprehensible; Jen was forced to do redundant work on figures and evidence for obvious things; it was a major annoyance, and the paper's quality was gradually diminished in the process (you can compare the first revision to the last). This was not even a major controversial paper--- it was just a precise clarification of what was going on in Fisher-Schwarz depinning models. I had just started working outside of academia, so the hostility was much greater than for worse papers with less important results coming from an academic institution. The referee against never changed his mind; the PRL editor just accepted it anyway (after a full year of back and forth) despite the referee's judgement. You can't count on such enlightened editors.
That's another effect of peer review--- the results are biased toward admitting papers by famous academic authors, and against those coming from unknown people outside of academia. This bias is terribly political, and has the effect of confining the discourse to a small handful of experts.
An extremely common situation is not wrong results, or good results suppressed, but a generally accepted wrongness and slow decay, coming from papers which misrepresent earlier results to make their current result seem more new. This is the situation in this stackexchange question: http://physics.stackexchange.com/questions/45626/fermi-surface-nesting-and-cdw-sdw-sc-orders . The paper is relatively honest and original regarding its density functional calculations, but contains misunderstandings regarding the subject of the simulation. It also wants to argue that the standard heuristics fail, which would make its calculation more important, but it is not clear that these heuristics fail, and the paper's arguments do not support the claim. The referees' job is to evaluate the DFT, not the summary of other literature in the paper. That stuff can easily slide when you have a handful of referees; it can't slide in open peer review. Plus, it's an honest mistake--- all you need to do is point it out, and it should get fixed.
Papers on string theory were rejected as a matter of routine in mainstream physics journals from 1970 on; the papers appeared in the handful of journals which still accepted string theory work, the most prominent being Nuclear Physics B.
In prehistoric times, Ernst Stueckelberg's renormalization paper was rejected in 1941--- the paper in which Stueckelberg invented modern perturbation theory.
Failing to cite unknown people is never punished, but failing to cite famous papers is. This creates a bad citation climate, where citations accrue by a rich-get-richer process. For an example of citation mistakes, there's David John Candlin's paper on Candlin variables (Grassmann variables, the fermionic path integral), which was published but hardly ever cited--- people instead cite a later textbook review of Berezin's with no reference to Candlin.
The peer review process as we know it today never worked as intended, that is, it never worked to keep the journals accurate. There are wrong papers all over the literature--- to find one, open any journal and read. What it did do was increase the power of a certain group of top people to control what was in the literature. When these dictators are good people (like, say, Witten), you get good literature. When the dictators are small-minded, you get closed fields with no progress, because people know that original work will be rejected.
There is no need to rely on dictatorship anymore--- we have an internet, and anyone with a refereeing comment should be able to make it easily and quickly, and get an answer from the author. This is ten thousand times more grueling than convincing two referees and an editor, and paradoxically also more forgiving, because even when there are hostile referees, the upvotes from silent readers will reveal the controversy (if it exists), and it is very fast to sort out which criticisms are actual mistakes you made, and which are just political nonsense. All that is required is no censorship whatsoever, and complete inclusion.