Gibbs sampler as a Markov process

In summary: The Gibbs sampler is a Markov process because the transition probabilities to the next state depend only on the current state, where the "state" is the full vector of variables {x_{1},...,x_{n}}, not any single component. The confusion comes from trying to show that each individual component x_{i} is conditioned only on its own previous value, which is not required by the Markov property.
  • #1
Stochasticus
I'm trying to learn more about Markov chains and came across the Gibbs sampler

x_1^{t+1} ~ p(x_1 | x_2 = x_2^t, ..., x_n = x_n^t)
x_2^{t+1} ~ p(x_2 | x_1 = x_1^{t+1}, x_3 = x_3^t, ..., x_n = x_n^t)
...
x_i^{t+1} ~ p(x_i | x_1 = x_1^{t+1}, ..., x_{i-1} = x_{i-1}^{t+1}, x_{i+1} = x_{i+1}^t, ..., x_n = x_n^t)

Supposedly this thing is a Markov chain. I just don't see it: each updated variable x_i^{t+1} appears to be conditioned on the whole set of variables, not just the prior value x_i^t. Can someone show me how this satisfies the Markov criterion?
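For concreteness, here is a minimal sketch of the scheme above in Python, for a bivariate normal with correlation rho (the target distribution and function names are my own choices for illustration, not from the thread). Note that each sweep reads only the current pair of values:

```python
import random

def gibbs_step(state, rho):
    """One full Gibbs sweep for a bivariate normal with correlation rho.

    Each coordinate is drawn from its conditional given the *current*
    values of the others; nothing older than `state` is consulted.
    """
    x1, x2 = state
    # x1 | x2 ~ N(rho * x2, 1 - rho^2)
    x1 = random.gauss(rho * x2, (1 - rho**2) ** 0.5)
    # x2 | x1 ~ N(rho * x1, 1 - rho^2), using the freshly updated x1
    x2 = random.gauss(rho * x1, (1 - rho**2) ** 0.5)
    return (x1, x2)

def gibbs_chain(n_steps, rho, seed=0):
    """Run the chain and collect every state after the initial one."""
    random.seed(seed)
    state = (0.0, 0.0)
    samples = []
    for _ in range(n_steps):
        state = gibbs_step(state, rho)
        samples.append(state)
    return samples
```

Running a long chain and computing the sample correlation of the draws should recover something close to rho, which is one quick sanity check that the sampler is targeting the intended distribution.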
 
  • #2
A process can be viewed as a Markov process if the transition probabilities to the next "state" depend only on the current "state". You can define a state in any way you wish, and it may even consist of an infinite number of variables. Until it is revealed how a state is defined, it is impossible to say whether a given mathematical or physical process is Markov.

The state in the Gibbs sampler is the current vector of values [itex] (x_1(t),x_2(t),...,x_n(t)) [/itex]. The fact that these variables at time [itex] t [/itex] were realized by a transition probability that depends on the vector at [itex] t-1 [/itex] doesn't contradict the fact that the transition probabilities to the vector at [itex] t+1 [/itex] are determined by the vector of variables at [itex] t [/itex] alone. Likewise, the fact that you might be able to write the transition probabilities at time [itex] t [/itex] as a function of the [itex] x_i() [/itex] at an earlier time, such as [itex] t = 2 [/itex], doesn't contradict the fact that the process is a Markov process. It's a Markov process as long as you can express the transition probabilities as a function of the current "state".
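This point can be checked concretely with a toy demonstration (my own illustration, reusing hypothetical bivariate-normal conditionals): a sweep is a function of nothing but the current state vector and fresh randomness, so two chains with completely different histories that sit at the same current state have identically distributed next states. Sharing the random stream makes this exact:

```python
import random

RHO = 0.5

def gibbs_sweep(state, rho, rng):
    # One full sweep; reads nothing but `state` and fresh randomness.
    x1 = rng.gauss(rho * state[1], (1 - rho**2) ** 0.5)
    x2 = rng.gauss(rho * x1, (1 - rho**2) ** 0.5)
    return (x1, x2)

# Chain A wanders for 100 sweeps from a far-away start...
rng = random.Random(0)
state_a = (5.0, -5.0)
for _ in range(100):
    state_a = gibbs_sweep(state_a, RHO, rng)

# ...while chain B is simply placed at A's current state with no history.
state_b = state_a

# With identical random streams, the next states coincide exactly:
next_a = gibbs_sweep(state_a, RHO, random.Random(42))
next_b = gibbs_sweep(state_b, RHO, random.Random(42))
assert next_a == next_b  # the kernel sees only the current state
```

The histories of the two chains differ, yet the transition kernel cannot tell them apart, which is exactly the Markov property for the full state vector.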
 
  • #3
Hey Stochasticus and welcome to the forums.

Yours is an interesting question, as I have just assumed this in my prior courses without bothering to check it myself, but I found a PDF containing a proof that might help you.

http://users.aims.ac.za/~ioana/notes-ch4-2x1.pdf

Scroll down to Proposition 4.5.
 
  • #4
That proof doesn't actually prove the Gibbs sampler is a Markov chain; it just assumes it is, which I think is justified.

A further thought: Many respectable books phrase their definition of a Markov chain to say that the transition probabilities to the next state "depend on the history of past states only through the current state". They say it that way instead of saying that the transition probabilities "don't depend on the history of past states", which is the phrase that casual writers use. The casual writers are wrong. It is actually OK for the transition probabilities to depend on the history of past states and, in fact, you can usually write the transition probabilities as functions of a history of past states.
 
  • #5
It seems I was overthinking this. I was aware that for the whole set of variables, call it big X = {x[itex]_{1}[/itex],...,x[itex]_{n}[/itex]} the sampler met the Markov criterion. I got caught up trying to prove that each subsequent value x[itex]_{i}[/itex][itex]^{t+1}[/itex] was conditioned only on x[itex]_{i}[/itex][itex]^{t}[/itex].
 

Related to Gibbs sampler as a Markov process

1. What is the Gibbs sampler?

The Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm used for generating samples from a multivariate probability distribution. It is a type of simulation method that is commonly used in Bayesian statistics.

2. How does the Gibbs sampler work?

The Gibbs sampler works by iteratively sampling from the conditional distributions of each variable in a multivariate distribution. It uses the current values of the other variables as fixed values to generate a new sample for the current variable being updated. This process is repeated multiple times to generate a set of samples that approximate the true distribution.
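The iteration described above can be sketched as a generic loop over per-coordinate conditional samplers (a minimal sketch with names of my own invention, not a standard API):

```python
import random

def gibbs(conditionals, init, n_sweeps):
    """Generic systematic-scan Gibbs sampler.

    `conditionals[i]` is a function that, given the full current state,
    returns one draw of coordinate i from p(x_i | all other coordinates).
    """
    state = list(init)
    out = []
    for _ in range(n_sweeps):
        for i, draw in enumerate(conditionals):
            # Other coordinates are held at their current values while
            # coordinate i is resampled from its conditional.
            state[i] = draw(state)
        out.append(tuple(state))
    return out

# Toy usage: two independent coordinates, so each conditional sampler
# happens to ignore the rest of the state.
random.seed(1)
conds = [
    lambda s: random.gauss(0.0, 1.0),   # x_0 | x_1 ~ N(0, 1)
    lambda s: random.uniform(0.0, 1.0), # x_1 | x_0 ~ Uniform(0, 1)
]
samples = gibbs(conds, [0.0, 0.0], 1000)
```

In a real application the conditional samplers would read the other coordinates of `s`; the independent toy conditionals here just keep the sketch self-contained.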

3. What is the advantage of using the Gibbs sampler?

The Gibbs sampler is advantageous because it can generate samples from complex, high-dimensional probability distributions. It is also relatively easy to implement, requiring no tuning of proposal distributions, and it works whenever the full conditional distributions can be sampled from, even when the joint distribution itself is intractable.

4. What are the limitations of the Gibbs sampler?

One limitation of the Gibbs sampler is that it can be computationally intensive and require a large number of iterations to generate accurate samples. It can also suffer from slow convergence when the variables are highly correlated.

5. In what fields is the Gibbs sampler commonly used?

The Gibbs sampler has applications in various fields such as statistics, machine learning, computational biology, and physics. It is commonly used for tasks such as parameter estimation, data imputation, and image reconstruction.
