There is a method for studying these physical systems called **belief propagation (BP)**, which yields exact results on interaction graphs that are trees, and which empirically works quite well on physical systems with *loopy* interaction graphs that **locally look like trees**. Randomly generated sparse interaction graphs are of this form (this is explicitly mentioned in the paper), so you could apply belief propagation to your model to obtain good estimates of the following quantities.

- Marginal probabilities $p(\sigma_i)$.
- Joint probabilities $p(\sigma_i, \sigma_j)$, in particular, the
**correlators** you mention.
- Conditional probabilities $p(\sigma_i \mid \sigma_j)$.
- Partition functions, energies, free energies, entropies.

In addition, BP can be adapted to sample from the probability distribution $p(\sigma)$. In physics, systems like these are sometimes called 'diluted spin glasses'.
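To fix ideas, all of these quantities can be computed exactly by brute-force enumeration on a toy system (feasible only for a handful of spins, which is precisely why one wants BP on larger ones). The model, couplings and temperature below are illustrative choices of mine, not taken from the question:

```python
import itertools
import math

# Brute-force baseline on a toy 3-spin Ising chain (0 - 1 - 2).
# beta and the couplings J are illustrative assumptions, not from the question.
beta = 1.0
edges = {(0, 1): 1.0, (1, 2): 1.0}   # ferromagnetic couplings J_ij

def energy(sigma):
    # E(sigma) = -sum_{(i,j)} J_ij * sigma_i * sigma_j
    return -sum(J * sigma[i] * sigma[j] for (i, j), J in edges.items())

weights = {s: math.exp(-beta * energy(s))
           for s in itertools.product([-1, +1], repeat=3)}
Z = sum(weights.values())                                       # partition function
F = -math.log(Z) / beta                                         # free energy
p0_up = sum(w for s, w in weights.items() if s[0] == +1) / Z    # marginal
corr_02 = sum(s[0] * s[2] * w for s, w in weights.items()) / Z  # correlator
print(Z, p0_up, corr_02)
```

For this chain the numbers can be cross-checked against known closed forms: $Z = 2^N\cosh^{N-1}(\beta J)$ and $\langle\sigma_0\sigma_2\rangle = \tanh^2(\beta J)$.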

I summarise the main features of belief propagation below. Mézard and Montanari's book contains a very pedagogical exposition of the method, which is also rather up-to-date and complete; you can read the draft of the book *online* without buying it. I personally used this book to prepare a seminar and I can recommend it. The connection between belief propagation and the **ferromagnetic Ising spin chain** is explained in Part D, Section 14.1 of the online version.

## Overview

Belief propagation is an iterative "message-passing" algorithm. For each site/spin/particle (a vertex of the graph) of your system, the belief propagation method assigns incoming and outgoing "messages" to its neighbouring edges (which represent interactions) in the interaction graph. These messages can be understood as "probability weights" that contribute to the *marginal probability* that the particle on site $i$, say, is in a given local configuration $\sigma_i$, which I denote by $p(\sigma_i)$. The messages are updated through *local* computations (thus not costly) at each particle, following a set of equations known as the **belief-propagation update rules**. I will not write the update rules here, for they are lengthy.

What is important is the following: if on site $i$, with neighbours indexed by $a$, we denote the outgoing/incoming messages at time $t$ by $p_{i\rightarrow a}^{t}(\sigma_i)$ and $p_{a\rightarrow i}^{t}(\sigma_i)$, then the BP method gives a way to *estimate* the marginal probability $p(\sigma_i)$ as follows:
$$p(\sigma_i)\simeq p^{t}(\sigma_i)\propto\prod_{a\in\partial i}p_{a\rightarrow i}^{t-1}(\sigma_i)$$
where the right-hand side is normalised so that $p^{t}(\sigma_i)$ sums to one over $\sigma_i$.
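As a concrete illustration of this estimate, here is a minimal sum-product sketch on a small tree of my own choosing (a star with centre $0$; the couplings $J$, fields $h$ and temperature $\beta$ are illustrative assumptions, not from the question). On a tree the BP marginal reproduces the brute-force one:

```python
import itertools
import math

# A minimal sum-product (BP) sketch on a small tree: a star with centre 0.
# The couplings J, fields h and temperature beta are illustrative assumptions.
beta, J = 0.7, 1.0
n = 4
edges = [(0, 1), (0, 2), (0, 3)]
h = {0: 0.3, 1: 0.0, 2: -0.2, 3: 0.0}          # local fields
nbrs = {i: [] for i in range(n)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def phi(i, s):        # node potential exp(beta * h_i * sigma_i)
    return math.exp(beta * h[i] * s)

def psi(si, sj):      # edge potential exp(beta * J * sigma_i * sigma_j)
    return math.exp(beta * J * si * sj)

# m[(i, j)][s]: message from i to j, evaluated at sigma_j = s
m = {(i, j): {+1: 1.0, -1: 1.0} for i in nbrs for j in nbrs[i]}

for _ in range(10):   # a star of diameter 2 converges in a couple of sweeps
    new = {}
    for (i, j) in m:
        msg = {sj: sum(phi(i, si) * psi(si, sj)
                       * math.prod(m[(k, i)][si] for k in nbrs[i] if k != j)
                       for si in (+1, -1))
               for sj in (+1, -1)}
        norm = msg[+1] + msg[-1]
        new[(i, j)] = {s: v / norm for s, v in msg.items()}
    m = new

def bp_marginal(i):
    # p(sigma_i) ~ phi_i(sigma_i) * product of incoming messages, normalised
    b = {s: phi(i, s) * math.prod(m[(k, i)][s] for k in nbrs[i])
         for s in (+1, -1)}
    z = b[+1] + b[-1]
    return {s: v / z for s, v in b.items()}

# Exact marginal of spin 0 by brute-force enumeration, for comparison
Z, p0_up = 0.0, 0.0
for s in itertools.product([-1, +1], repeat=n):
    w = math.exp(beta * (sum(J * s[i] * s[j] for i, j in edges)
                         + sum(h[i] * s[i] for i in range(n))))
    Z += w
    if s[0] == +1:
        p0_up += w
p0_up /= Z
print(bp_marginal(0)[+1], p0_up)   # on a tree these agree
```

Note the local structure of the update: the message from $i$ to $j$ only looks at the potentials at $i$ and the messages coming into $i$ from its other neighbours.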

For **trees**, this iteration converges to a fixed point in finite time $t_{*}$, which is bounded by the diameter of the tree. Moreover, the method *provably* computes the marginal probabilities *exactly*. In fact, for trees this method and the **transfer-matrix** methods used e.g. for the classical Ising spin chain turn out to be equivalent.
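The transfer-matrix side of this equivalence is easy to check numerically for the classical Ising chain; the parameters below are illustrative assumptions of mine:

```python
import itertools
import math

# Transfer-matrix computation of Z for a free-boundary Ising chain,
# checked against brute force; beta, J and N are illustrative assumptions.
beta, J, N = 0.5, 1.0, 6

# T[a][b] = exp(beta * J * s_a * s_b) with spins (+1, -1)
spins = (+1, -1)
T = [[math.exp(beta * J * sa * sb) for sb in spins] for sa in spins]

# Z = 1^T T^(N-1) 1: apply T to the all-ones vector N-1 times, then sum
v = [1.0, 1.0]
for _ in range(N - 1):
    v = [T[0][0] * v[0] + T[0][1] * v[1],
         T[1][0] * v[0] + T[1][1] * v[1]]
Z_tm = v[0] + v[1]

# Brute-force partition function for comparison
Z_bf = sum(math.exp(beta * J * sum(s[i] * s[i + 1] for i in range(N - 1)))
           for s in itertools.product(spins, repeat=N))
print(Z_tm, Z_bf)   # both equal 2^N cosh(beta*J)^(N-1)
```

The repeated matrix application here plays exactly the role of passing messages along the chain, which is why the two methods coincide on trees.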

If your system is *not a tree*, you do not get exact results and the message-update iteration is not guaranteed to converge. However, the method has been used with wide **experimental success** on graphs that **locally look like trees** (this seems to set an upper bound on the correlation length you can have) to provide good estimates of these marginal probabilities.

Finally, the method can be adapted to estimate all the other quantities I mentioned.

## Drawbacks

The main disadvantage of this method is that it is rather complicated to find analytical error bounds, or analytical conditions that guarantee convergence; both are areas of current research. For a recent study of convergence you can check [1]; regarding errors, I found these [2].

And, of course, another disadvantage is that if you add many random interactions to your graph, at some point the interaction graph stops being tree-like and the method will behave poorly. In fact, there is a threshold where a giant connected component appears in the interaction graph [3].
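The giant component is easy to observe numerically. The sketch below is my own illustration (not from [3]): for an Erdős–Rényi graph $G(n, c/n)$, the well-known threshold sits at mean degree $c = 1$, and the largest component jumps from $O(\log n)$ to a constant fraction of $n$ as $c$ crosses it:

```python
import random
from collections import deque

# Illustration: giant component in an Erdos-Renyi graph G(n, c/n).
# Below mean degree c = 1 components are tiny; above it a giant one appears.
random.seed(0)

def largest_component(n, c):
    # Sample G(n, p) with p = c/n and return the largest component size (BFS)
    p = c / n
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = set(), 0
    for s in range(n):
        if s in seen:
            continue
        size, queue = 0, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

n = 2000
below = largest_component(n, 0.5)   # subcritical: components are O(log n)
above = largest_component(n, 2.0)   # supercritical: a giant component exists
print(below, above)
```

Once most vertices sit inside one large component full of loops, the locally-tree-like picture that makes BP accurate starts to degrade (though sparse random graphs remain locally tree-like well above the threshold, which is why BP still works there in practice).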

This post imported from StackExchange MathOverflow at 2014-06-15 16:31 (UCT), posted by SE-user Juan Bermejo Vega