SMAC Introduction to Monte Carlo

Monte Carlo algorithms

Direct sampling or the children's game

[Image: Children playing at the Monte Carlo beach]


The game takes place on the beach of Monte Carlo. Children throw pebbles at random positions inside a square drawn in the sand and count the fraction that lands inside the inscribed circle. The pebbles are samples of the uniform probability distribution in the square, and they are obtained directly. It is for this reason that the algorithm is called "direct-sampling" Monte Carlo.

from random import uniform

def direct_pi(N):
    # Throw N pebbles uniformly into the square [-1, 1] x [-1, 1]
    # and count how many land inside the unit circle.
    n_hits = 0
    for i in range(N):
        x, y = uniform(-1.0, 1.0), uniform(-1.0, 1.0)
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for attempt in range(10):
    # 4 * (fraction of hits) is an estimate of pi.
    print(attempt, 4 * direct_pi(n_trials) / n_trials)

[Image: Output of the direct-sampling program, with "hits" in red and "non-hits" in blue]

Monte Carlo is an integration algorithm. The above direct-sampling algorithm treats a probability distribution (the uniform distribution of pebbles within the square) and an observable, the "hitting variable" (one within the unit circle, zero outside):

<math>
\frac{n_\text{hits}}{N} \sim \frac{\int_{-1}^{1} dx \int_{-1}^{1} dy\, \pi(x,y)\, O(x,y)}{\int_{-1}^{1} dx \int_{-1}^{1} dy\, \pi(x,y)}
</math>

For the uniform distribution, this ratio of integrals equals the area of the unit circle divided by the area of the square, <math>\pi/4</math>, which is why the programs print <math>4 n_\text{hits}/N</math> as an estimate of <math>\pi</math>.
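As a minimal sketch (our rephrasing, not one of the course programs), the same estimate can be written explicitly as a sample average of the observable O over pebbles drawn from the uniform distribution:

from random import uniform

def observable(x, y):
    # The "hitting variable": one inside the unit circle, zero outside.
    return 1.0 if x ** 2 + y ** 2 < 1.0 else 0.0

def average_O(N):
    # Sample average of O over N pebbles drawn uniformly from the square.
    return sum(observable(uniform(-1.0, 1.0), uniform(-1.0, 1.0))
               for i in range(N)) / N

print(4.0 * average_O(100000))  # should be close to pi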

Comments

  1. Direct-sampling algorithms exist only for a handful of physically interesting models. Where they exist, they are very useful.
  2. The existence of a uniform (pseudo) random number generator is assumed. The setup of good random number generators is a mature branch of mathematics.

Markov Chain Monte Carlo: the adults' game

[Image: Adults playing on the Monte Carlo heliport]

The game takes place at the Monte Carlo heliport. The helipad is too large for direct sampling, so a Markov-chain strategy must be adopted. An adult stands at the last pebble position and throws the new pebble into a small square of side <math>2\delta</math> centered at that position. A rejection problem must be handled whenever the new pebble would land outside the helipad: the move is then rejected, and the pebble stays (and is counted again) at its old position. This solution makes the pebbles uniformly cover the large square.

from random import uniform

def markov_pi(delta, N):
    # Markov-chain sampling: start at the club house (1, 1) and
    # propose moves inside a square of side 2 * delta.
    x, y = 1.0, 1.0
    n_hits = 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        # Reject moves that would leave the helipad; the pebble then
        # stays where it is, and the old position is counted again.
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for k in range(10):
    print(4 * markov_pi(0.3, n_trials) / n_trials)

Comments

  1. In Markov-chain sampling algorithms, the initial configuration must be allowed (it must have nonzero probability), but it need not be typical.
  2. Here the adults start their promenade from the "club house", located at (x, y) = (1, 1).
  3. The algorithm is correct for all step sizes delta, but the best performance is obtained for moderate delta (see the sketch after this list).
  4. Rule of thumb: the acceptance ratio of the Markov chain should be close to 1/2.
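
A minimal sketch (our addition, not one of the course programs) that measures the acceptance ratio of the markov_pi moves as a function of the step size delta, to make the rule of thumb concrete:

from random import uniform

def acceptance_ratio(delta, N):
    # Same walk as markov_pi, but count accepted moves instead of hits.
    x, y = 1.0, 1.0
    n_accepted = 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
            n_accepted += 1
    return n_accepted / N

for delta in [0.1, 0.3, 1.0, 2.0, 4.0]:
    print(delta, acceptance_ratio(delta, 100000))

Small delta accepts almost every move but explores the helipad slowly; large delta rejects most moves. An acceptance ratio near 1/2 balances the two.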


Detailed and global balance

For simplicity, we discuss a discrete 3x3 pebble game: the pebble walks on a 3x3 chessboard without periodic boundary conditions.

[Images: PRank1.png, PRank2.png]


We design a Markov-chain algorithm so that each site is visited with the same probability:

<math>
\pi(1) = \pi(2) = \pi(3) = \pi(4) = \pi(5) = \pi(6) = \pi(7) = \pi(8) = \pi(9) = \frac{1}{9}
</math>

Here a pebble throw consists of moving from a site to each of its neighbors with probability 1/4. Suppose we are on site a = 9 at some time. We can only move to b = 8 or c = 6, or simply remain at a. This gives

<math>
p_{a \to a} + p_{a \to b} + p_{a \to c} = 1
</math>

At the same time, to get to a, we must come either from a, from b, or from c:

<math>
\pi(a) p(a \to a) + \pi(b) p(b \to a) + \pi(c) p(c \to a) = \pi(a)
</math>

Substituting <math>p(a \to a) = 1 - p(a \to b) - p(a \to c)</math> from the first equation yields the global balance condition

<math>
\pi(b) p(b \to a) + \pi(c) p(c \to a) = \pi(a) p(a \to b) + \pi(a) p(a \to c)
</math>

A more restrictive condition is the detailed-balance condition:

<math>
\pi(b) p(b \to a) = \pi(a) p(a \to b), \quad \text{etc.}
</math>

Below is a Python implementation of the 3x3 pebble game. With positions 1, 2, ..., 9, the four neighbor entries of site 1 are (2, 4, 1, 1). This ensures that the pebble moves with probability 1/4 to each of sites 2 and 4, and remains on site 1 with probability 1/2. We start the simulation from site 9.


import random, pylab

# Neighbor table of the 3x3 chessboard: each of the four entries is
# chosen with probability 1/4; entries equal to the site itself
# implement rejected moves at the board's edge.
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
all_pos = []
N_iter = 100
for run in range(10000):
    pos = 9                        # start each run at site 9
    for step in range(N_iter):
        pos = neighbor[pos][random.randint(0, 3)]
    all_pos.append(pos)
pylab.figure(1)
pylab.hist(all_pos, bins=9, range=(0.5, 9.5), density=True)
pylab.title('3x3 pebble game, starting at 9, after ' + str(N_iter) + ' steps')
pylab.savefig('histo_3x3_' + str(N_iter) + '_steps.png')
pylab.show()

Here is the output of the above Python program for N_iter = 5, 10, and 100 steps:

[Images: histograms of the 3x3 pebble game after 5, 10, and 100 steps]
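
As a consistency check (our addition, not one of the course programs), global balance can be verified directly from the neighbor table: for every site a, the total probability flowing into a under the uniform distribution must equal <math>\pi(a) = 1/9</math>:

neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
pi = {a: 1.0 / 9.0 for a in range(1, 10)}
flow_in = {a: 0.0 for a in range(1, 10)}
for b in range(1, 10):
    for a in neighbor[b]:
        # Each of the four entries is chosen with probability 1/4.
        flow_in[a] += pi[b] * 0.25
for a in range(1, 10):
    print(a, flow_in[a])  # every line should show 0.1111... = 1/9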

Inhomogeneous 3x3 pebble game (Metropolis algorithm)

For a general probability distribution <math>\pi(1), \pi(2), \dots, \pi(9)</math>, we can use the celebrated Metropolis algorithm

<math>
p(a \to b) = \min\left(1, \frac{\pi(b)}{\pi(a)}\right),
</math>

which we illustrate in a Python program for the inhomogeneous 3x3 pebble game.
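
The course's original program is not reproduced on this page; the following is a minimal sketch of how it might look, with an arbitrary illustrative choice <math>\pi(a) \propto a</math> (our assumption, not necessarily the distribution used in the lecture):

import random

neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
weight = {a: float(a) for a in range(1, 10)}  # pi(a) proportional to a (illustrative)
N_steps = 1000000
counts = {a: 0 for a in range(1, 10)}
pos = 9
for step in range(N_steps):
    new_pos = neighbor[pos][random.randint(0, 3)]
    # Metropolis acceptance: move with probability min(1, pi(new)/pi(old)).
    if random.uniform(0.0, 1.0) < weight[new_pos] / weight[pos]:
        pos = new_pos
    counts[pos] += 1
norm = sum(weight.values())
for a in range(1, 10):
    print(a, counts[a] / N_steps, weight[a] / norm)  # empirical vs exact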

[Image: Histogram obtained by Metropolis versus the exact probabilities (red dots)]


Comments

  1. Markov-chain Monte Carlo algorithms are a very general tool for integration.
  2. They give access to the relevant information only in the infinite-time limit, but one can do better.
  3. The dynamics of the Markov-chain Monte Carlo algorithm is not always physically relevant.
  4. Many Markov-chain Monte Carlo algorithms satisfy detailed balance, but the necessary condition is global balance.

References

  1. We were deeply inspired by the first chapter of W. Krauth, "Statistical Mechanics: Algorithms and Computations" (Oxford University Press, 2006), pp. 1-9 and 15-22.
  2. The original paper: N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines", Journal of Chemical Physics 21, 1087 (1953).