Gibbs paradox: an urban legend in statistical physics

In summary, the conversation discusses the misconception surrounding the mixing of distinguishable particles and the existence of a paradox in classical statistical physics. The original poster discovered that there is no real paradox, and this is supported by various references and papers. On this view the correction factor of 1/N! in the entropy calculation is not a paradox but a logical necessity: it is imposed by combinatorial logic, not introduced merely to obtain an extensive entropy. This is different from Gibbs' definition of entropy in terms of phase-space volume. Others argue that within classical mechanics there is no logical reason to include this factor, whereas in quantum theory it is a result of indistinguishability; this correction factor persists in the classical limit and is a natural result of Boltzmann counting.
  • #36
Can you give the gist of the argument here?
 
  • #37
We're going in circles. Just once more: for distinguishable particles there IS mixing entropy!
 
  • #38
hutchphd said:
Can you give the gist of the argument here?
Suppose two dogs, with N fleas distributed among the two. What is the most probable partition of the fleas among the two dogs?
Assuming that fleas are permutable elements (meaning that exchanging two results in a different state), there are ##N!/(n_1!\,n_2!)## ways to have ##n_1## fleas on dog 1 and ##n_2=N-n_1## on dog 2. The maximum happens when ##\mathrm{d}\ln(1/n_1!)/\mathrm{d}n_1 = \mathrm{d}\ln(1/n_2!)/\mathrm{d}n_2##, which leads to ##n_1=n_2##.
In the case of classical ideal gases, besides the number of particles there are also the energy and the volume. All the other terms in the entropy come out as usual; the ##1/n!## comes from the enumeration of the permutations of particles between the two systems, just as in the dogs-and-fleas problem.
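A quick numerical check of this counting argument (a minimal Python sketch; ##N=100## is just an assumed example size):

```python
from math import comb

N = 100  # total number of fleas (assumed example size)

# W(n1) = N!/(n1! n2!) with n2 = N - n1, i.e. the binomial coefficient C(N, n1)
counts = [(n1, comb(N, n1)) for n1 in range(N + 1)]

# The most probable partition maximizes W(n1)
n1_best, W_best = max(counts, key=lambda t: t[1])
print(n1_best)  # 50, i.e. n1 = n2 = N/2
```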
 
  • #39
autoUFC said:
I recently discovered that there is no real paradox in the question of the mixing of classical distinguishable particles. I was shocked. Most books and all my professors suggest that an extensive entropy could not be defined for distinguishable particles.

I'm a bit late for this discussion and apologize if what I bring up has already been mentioned in some of the answers.

As I see it there are two paradoxes:
One is that statistical entropy for an ideal gas of distinguishable particles is non-extensive, whereas thermodynamic entropy is extensive. To resolve this one has to assume that gas molecules are actually indistinguishable and that leads to the Sackur-Tetrode expression for entropy, which is extensive.
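For reference, the Sackur-Tetrode expression for a monatomic ideal gas reads
$$S = N k_{\text{B}}\left[\ln\!\left(\frac{V}{N}\left(\frac{2\pi m k_{\text{B}} T}{h^2}\right)^{3/2}\right)+\frac{5}{2}\right],$$
which is extensive because ##V## and ##N## enter only through the ratio ##V/N##.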

The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable atoms the entropy cannot increase since macroscopically nothing changes due to mixing identical atoms.

In the textbook by Blundell and Blundell the mixing paradox is demonstrated using the (extensive) Sackur-Tetrode expression. To me that indicates that the mixing paradox doesn't automatically go away just by making entropy extensive. You have to require that atoms are indistinguishable explicitly once more to resolve the mixing paradox.
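For concreteness, here is my sketch of that standard computation (not necessarily Blundell's exact presentation): since the Sackur-Tetrode entropy depends on ##V## and ##N## only through ##V/N##, joining two equal samples of the same gas gives ##S(2N,2V)=2S(N,V)##, i.e., ##\Delta S = 0##. For two different gases, each species expands from ##V## to ##2V## at fixed ##N##, so
$$\Delta S = 2 N k_{\text{B}} \ln 2 > 0.$$
The ##\Delta S = 0## result, however, presupposes that the two samples count as the same substance, which is where indistinguishability has to be invoked once more.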

Most comments welcome.
 
  • #40
vanhees71 said:
We're going in circles. Just once more: for distinguishable particles there IS mixing entropy!
I've also wondered if there are experimental results that show that if you mix for example He with Ne the total entropy increases, whereas if you mix Ne with Ne it doesn't.
Is there anything like that?
 
  • #41
vanhees71 said:
You mean extensive! Sure, that's the point. If you assume that the particles are distinguishable (and in my opinion nothing in classical mechanics makes particles indistinguishable), then by calculating the entropy according to Boltzmann and Planck you get a non-extensive expression for the entropy of a gas consisting of (chemically) "identical" particles, which leads to Gibbs's paradox. This paradox can only be resolved by assuming the indistinguishability of particles, in the sense that you have to count any configuration which results from a specific configuration by only exchanging particles as one configuration, which leads to the inclusion of the crucial factor ##1/N!## in the canonical partition sum:
$$Z=\frac{1}{N!}\int_{\mathbb{R}^{6N}} \mathrm{d}^{6 N} \Gamma \exp[-\beta H(\Gamma)], \quad \beta=\frac{1}{k_{\text{B}} T}.$$

This factor ##1/N!## is only correct for low occupancy, I believe. One has to assume that there is hardly ever more than one particle in the same state or phase-space cell. Do you agree?
What happens if occupancy is not low?
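One way to quantify the low-occupancy condition is to compare the ##1/n!##-corrected classical count ##g^n/n!## of ##n## particles on ##g## single-particle states with the exact Bose-Einstein count ##\binom{n+g-1}{n}##. A minimal Python sketch (the parameter values are assumed for illustration):

```python
from math import comb, factorial

def corrected_boltzmann(n, g):
    # classical count of n particles on g states, with the 1/n! correction
    return g**n / factorial(n)

def bose_count(n, g):
    # exact number of ways to put n bosons on g states
    return comb(n + g - 1, n)

for n, g in [(10, 10_000), (10, 100), (10, 20)]:
    ratio = corrected_boltzmann(n, g) / bose_count(n, g)
    print(f"occupancy n/g = {n/g:.3f}: ratio = {ratio:.3f}")
# ratios: ~0.996, ~0.63, ~0.14 -- the two counts agree only at low occupancy
```

The counts agree only for ##n/g \ll 1##, i.e., when multiple occupancy of a state is rare.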
 
  • #42
Philip Koeck said:
The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable atoms the entropy cannot increase since macroscopically nothing changes due to mixing identical atoms.

If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small) then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there's an entropy increase.

The paradox occurs, whenever you have indistinguishable particles on both sides and do classical statistics with the non-extensive entropy formula. The correct formula is, of course, the Sackur-Tetrode formula, which you indeed get by dividing by ##N!##. It's the classical limit of the correct quantum counting of states in the approximation that Pauli blocking or Bose enhancement is negligible due to low occupation numbers. This correct classical limit takes nevertheless the indistinguishability into account and avoids the Gibbs paradox.
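In formulas: the quantum mean occupation numbers
$$\langle n\rangle_{\text{BE/FD}} = \frac{1}{\exp[\beta(\epsilon-\mu)] \mp 1}$$
(upper sign Bose-Einstein, lower sign Fermi-Dirac) both reduce to the Maxwell-Boltzmann form ##\langle n\rangle \approx \exp[-\beta(\epsilon-\mu)]## when ##\exp[\beta(\epsilon-\mu)] \gg 1##, which is exactly the low-occupancy regime.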
 
  • #43
vanhees71 said:
If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small) then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there's an entropy increase.

How does that apply to the mixing of distinguishable globules of butterfat in milk (or any other colloid), where by removing a barrier between two containers (mixing) you can increase entropy, but by restoring the barrier you can decrease entropy in a reversible way?
 
  • #44
vanhees71 said:
If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small) then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there's an entropy increase.

Now I'm confused about your terminology.
Are you saying that identical particles can be distinguishable?
Would you say that ortho- and para-helium are identical?
 
  • #45
If you have distinct particles there is mixing entropy, but that implies that diffusion of the distinct particles, i.e., mixing, is IRreversible, and thus you cannot restore the lower-entropy state by simply restoring the barrier; you need a lot of energy to sort the mixed particles back into the two compartments. That is, to lower the entropy you must do work, and this is precisely what the various phenomenological formulations of the 2nd law say.
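To attach a number to "you must do work": for two equal samples of ##N## distinct particles each, the mixing entropy is ##\Delta S = 2Nk_{\text{B}}\ln 2##, so re-sorting the particles isothermally at temperature ##T## costs at least
$$W_{\text{min}} = T\,\Delta S = 2 N k_{\text{B}} T \ln 2.$$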
 
  • #46
Philip Koeck said:
As I see it there are two paradoxes:
One is that statistical entropy for an ideal gas of distinguishable particles is non-extensive,
That is false.

Please note that I use the term permutable to differentiate from the term distinguishable. It may be impossible to distinguish between two identical particles, but in the classical model of identical particles one should still count as distinct two states in which all particles have the same positions and velocities except that two of them have exchanged positions and velocities. That is, when counting the number of states of classical permutable particles, one needs to account for the fact that permutations lead to different states.
Not relevant right now, but I would like to say that identical quantum particles are impermutable.

Let us now tackle the problem of equilibrium in the exchange of particles.
Assume a system composed of two chambers, 1 and 2, which can exchange particles while the total number is constant: ##n_1+n_2=N##.
Given that ##n_1## permutable particles are in chamber 1, the entropy of the whole system is given by the logarithm of the number of accessible states of the whole system,
$$\Omega(n_1)=\Omega_1(n_1)\,\Omega_2(n_2)\,\frac{N!}{n_1!\,n_2!},$$
where ##\Omega(n_1)## is the enumeration for the whole system given that ##n_1## particles are in chamber 1, ##\Omega_1(n_1)## is the enumeration for chamber 1, and ##\Omega_2(n_2)## that for chamber 2.
The last factor, ##N!/(n_1!\,n_2!)##, is the number of ways to choose which of the permutable particles are in chamber 1, and it is the key. For equilibrium in temperature and pressure this factor is not needed, and one rightly concludes that
$$\Omega(n_1)=\Omega_1(n_1)\,\Omega_2(n_2),$$
since in the cases of thermal and mechanical equilibrium there is no exchange of particles, and it is determined which particles are in which chamber.
When considering exchange of classical permutable particles, however, that is no longer the case, and the number of possible permutations needs to be included when counting the states of the whole system. I hope it is now clear that, when considering exchange of particles between two subsystems, entropy should logically be defined as ##S=k\ln(\Omega(n)/n!)##. Extensivity follows.
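That extensivity is easy to check numerically in the simplest case ##\Omega(n)\propto V^n## (a minimal Python sketch; the momentum-integral prefactor is dropped, which only shifts ##S## by a constant per particle):

```python
from math import lgamma, log

def S_over_k(n, V, corrected):
    # S/k = ln(V**n) without the 1/n!, S/k = ln(V**n / n!) with it
    return n * log(V) - (lgamma(n + 1) if corrected else 0.0)

n, V = 10**6, 1e7  # assumed illustrative values
for corrected in (False, True):
    excess = S_over_k(2 * n, 2 * V, corrected) - 2 * S_over_k(n, V, corrected)
    print(f"corrected={corrected}: S(2n,2V) - 2 S(n,V) = {excess:,.1f}")
# without 1/n!: excess = 2n ln 2 ~ 1.4e6   (not extensive)
# with 1/n!:    excess ~ (1/2) ln(pi n) ~ 7.5  (extensive up to Stirling terms)
```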

Philip Koeck said:
To resolve this one has to assume that gas molecules are actually indistinguishable and that leads to the Sackur-Tetrode expression for entropy, which is extensive.
It is true that assuming molecules are impermutable leads to an extensive entropy, impermutable meaning that exchanging two of them leads to the same state. Usually, in quantum theory, the terms identical or indistinguishable are used. I am using impermutable to emphasize that permutable particles may be identical.
My point is that permutable particles also have an extensive entropy; in the case of the classical ideal gas it is precisely the Sackur-Tetrode expression. You can include the ##N!## term either by assuming that the particles are impermutable or by including it to account for the permutations of classical particles between the two subsystems; both lead to extensivity.
Philip Koeck said:
The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable atoms the entropy cannot increase since macroscopically nothing changes due to mixing identical atoms.
There is no paradox. Mixing identical permutable (classical) particles does not increase entropy. The inclusion of ##N!## is necessary due to the correct counting of accessible states. Extensivity follows.

Philip Koeck said:
In the textbook by Blundell and Blundell the mixing paradox is demonstrated using the (extensive) Sackur-Tetrode expression. To me that indicates that the mixing paradox doesn't automatically go away just by making entropy extensive. You have to require that atoms are indistinguishable explicitly once more to resolve the mixing paradox.

Most comments welcome.

Blundell, like most books, is very bad on this point. In his paper, Van Kampen includes a short list of doubtful quotes from textbooks and then writes:

"Actually the problem of the N! was completely cleared up by Ehrenfest and Trkal
(Ann. Physik 1921), but their argument is ignored in most of the literature. It may
therefore be of some use to take up the matter again starting from scratch, as a service to future textbook writers. "

It seems to me that his efforts were for naught, as textbooks of the 21st century are still misleading.
 
  • #47
autoUFC said:
..., entropy should logically be defined as ##S=k\ln(\Omega(n)/n!)##. Extensivity follows.

Interesting. I used something like that in a text I put on ResearchGate. Not sure whether it makes sense, though.
Indeed, I get the Sackur-Tetrode expression even for distinguishable particles like that.
Here's the link: https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas
 
  • #48
Philip Koeck said:
Now I'm confused about your terminology.
Are you saying that identical particles can be distinguishable?
Would you say that ortho- and para-helium are identical?
I don't like to use the word "identical" in this context. I just tried to use the language obviously implied by this argument by you: "mix two gases of identical, but distinguishable atoms".

I thought you meant two gases consisting of the same atoms, which are however distinguishable. This can only be the case if you have the same atoms (in my example He) in different states (in my example ortho- and para-He). These atoms are indeed distinguishable, because they are in different states (distinguished by the total spin of the two electrons being either 0 or 1).

Now, transitions between the two states of He are indeed strongly suppressed due to the different symmetries of the spatial parts of the wave functions (the total wave function of the two electrons must of course always be antisymmetric, because electrons are fermions). That's why, when He was discovered in the spectral decomposition of sunlight, it was first believed that there were in fact two different new elements, while in fact these were only the two different states of helium (spin singlet = para-helium, spin triplet = ortho-helium).

So for the "Gibbs-paradox experiment" you have to treat the two states of He as distinguishable, and thus you'd expect mixing entropy, i.e., an increase in entropy when letting the two previously separated gases diffuse into each other.
 
  • #49
Philip Koeck said:
Interesting. I used something like that in a text I put on ResearchGate. Not sure whether it makes sense, though.
Indeed, I get the Sackur-Tetrode expression even for distinguishable particles like that.
Here's the link: https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas
But there you get of course the Sackur-Tetrode formula only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)##. So you get a different distribution function but the same entropy for distinguishable as for indistinguishable particles, but just because you put in this ##1/N!## again by hand. If you had put it into ##W## at the very beginning, there'd be no difference whatsoever in the treatment of distinguishable and indistinguishable particles. This seems a bit paradoxical to me.
 
  • #50
vanhees71 said:
But there you get of course the Sackur-Tetrode formula only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)##. So you get a different distribution function but the same entropy for distinguishable as for indistinguishable particles, but just because you put in this ##1/N!## again by hand. If you had put it into ##W## at the very beginning, there'd be no difference whatsoever in the treatment of distinguishable and indistinguishable particles. This seems a bit paradoxical to me.
My thinking is this: W is simply the number of different ways to arrive at a certain distribution of particles among the available energy levels.
In equilibrium the system will be close to the distribution with the highest W, which is also that with the highest entropy.
The exact relationship between S and W is not quite clear, however.
Swendsen, for example, states that there is an undefined additive function of N (which I chose to be -k ln(N!)).
I assume that in the expression S = k ln(omega) the quantity omega stands for a probability (actually a quantity proportional to a probability) rather than a number. I believe Boltzmann might have reasoned like that too, since he used W, as in Wahrscheinlichkeit (I think Swendsen wrote something like that too).
That's why I introduce the correction 1/N! for distinguishable particles.
I'm not saying this is the right way to think about it. I just tried to make sense of things for myself.

W is given by pure combinatorics so I can't really redefine W as you suggest.
The only place where I have some freedom in this derivation is where I connect my W to entropy, and, yes, I do that differently for distinguishable and indistinguishable particles.
 
  • #51
Please disabuse me of any foolishness in the following. It seems to me the question of distinguishability is a real problem for classical mechanics which is obviated by quantum theory.
If I buy a dozen new golf balls they might all be identical to my eye when new, but when I examine them after a year (assuming no errant shots) that will no longer be so... they will be somewhat different. Molecules in the "classical" universe, each having a unique history, surely could not be expected to be identical after millennia. It then seems to require an ill-defined artifice to arbitrarily define them as such. These stipulations then need to give way to macroscopic rules at some scale between molecules and golf balls. Just throw in an N!
Whatever one does will not be pretty. Quantum mechanics ties this into a rather neat bundle. It seems to me the least objectionable solution (indeed this is the best of all possible worlds).
 
  • #52
I think this problem was really solved by Landauer. Consider for example mixing two samples of hemoglobin.
The point is that molecules like hemoglobin are indistinguishable on a macroscopic level, but distinguishable on a microscopic level, because two molecules almost certainly have a different isotopic composition (C-13 and deuterium), so that with very high probability there are no two identical hemoglobin molecules within one human being. Say you are mixing two samples of hemoglobin from two different persons. Will you measure any increase in entropy? That depends. Of course you could determine the isotopic pattern of each hemoglobin molecule of each person before mixing. Then you would certainly find an increase of entropy (= loss of information) upon mixing. But you don't have this information if you only have macroscopic information on the composition of the hemoglobin (which would be identical for the two persons).
Hence the point is the following: the information contained in the labels distinguishing different individuals would contribute to the entropy. Usually you simply don't have this information, whence it also does not contribute to the entropy. Hence the mixing entropy will be zero, because you can't lose information which you don't have.
 
  • #53
DrDu said:
Will you measure any increase in entropy?
What in fact would be measured?
If you do this by somehow "counting" each molecule type then your argument seems a tautology to me. You obviously cannot lose information you never had.
I really don't understand this stuff.
 
  • #54
vanhees71 said:
... only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)##...

It is claimed that that was the original definition of entropy given by Boltzmann.
 
  • #55
Philip Koeck said:
My thinking is this: W is simply the number of different ways to arrive at a certain distribution of particles among the available energy levels.
In equilibrium the system will be close to the distribution with the highest W, which is also that with the highest entropy.
The exact relationship between S and W is not quite clear, however.
Swendsen, for example, states that there is an undefined additive function of N (which I chose to be -k ln(N!)).
I assume that in the expression S = k ln(omega) the quantity omega stands for a probability (actually a quantity proportional to a probability) rather than a number. I believe Boltzmann might have reasoned like that too, since he used W, as in Wahrscheinlichkeit (I think Swendsen wrote something like that too).
That's why I introduce the correction 1/N! for distinguishable particles.
I'm not saying this is the right way to think about it. I just tried to make sense of things for myself.

W is given by pure combinatorics so I can't really redefine W as you suggest.
The only place where I have some freedom in this derivation is where I connect my W to entropy, and, yes, I do that differently for distinguishable and indistinguishable particles.
Let me first say that I like your manuscript very much, because it gives a very clear derivation of the distributions in terms of counting microstates (Planck's "complexions") for the different statistics (Boltzmann, Bose, Fermi). Of course, the correct Boltzmann statistics is the one dividing by ##N!## to take into account the indistinguishability of identical particles, where by identical I mean particles with all intrinsic properties (mass, spin, charges) the same.

That's of course not justified within a strict classical theory, because according to classical mechanics you can precisely follow each individual particle's trajectory in phase space, and thus each particle is individually distinguished from any other identical particle simply by labelling its initial point in phase space.

Of course, it's also right that in classical statistical physics there's the fundamental problem of the "natural" measure of phase-space volumes, because there is none within classical physics. What's clear in classical mechanics is that the available phase-space volume is a measure for the a priori equal probabilities of microstates, because of the Liouville theorem. Boltzmann introduced an arbitrary phase-space volume, and thus the entropy is anyway only defined up to an arbitrary constant, which is chosen ##N##-dependent by some authors (e.g., in the very good book by Becker, Theory of Heat) and then used to adjust the entropy to make it extensive, leading to the Sackur-Tetrode formula.

I think it's a bit of an unnecessary discussion though, because today we know that the only adequate theory of matter is quantum theory, including the indistinguishability of identical particles, leading (by topological arguments) to Bose-Einstein or Fermi-Dirac Fock-space realizations of many-body states (in ##\geq 3## spatial dimensions; in 2 dimensions you can have "anyons", and indeed some quasiparticles have been found in condensed-matter physics of 2D structures like graphene that behave as such).

As you very nicely demonstrate, the classical limit for BE as well as FD statistics leads to the MB statistics under consideration of the indistinguishability of identical particles, leading to the correct Sackur-Tetrode entropy for ideal gases.

From a didactical point of view, I find the true historical approach to teaching the necessity of quantum theory is the approach via thermodynamics. The "few clouds" in the otherwise "complete" picture of physics in the middle of the 19th century all lay within thermodynamics (e.g., the specific heat of solids at low temperature, the black-body radiation spectrum, the theoretical foundation of Nernst's 3rd law), and all were resolved by quantum theory. It's not by chance that quantum theory was discovered by Planck when deriving the black-body spectrum by statistical means, and it was Einstein's real argument for the introduction of "light quanta" and "wave-particle duality", which were important steps towards the development of the full modern quantum theory.
 
  • #56
andresB said:
It is claimed that that was the original definition of entropy given by Boltzmann.
As far as I know, Boltzmann introduced the factor consistently in both the probabilities (or numbers of microstates) and the entropy, i.e., he put it in both places, and from the information-theoretical point of view that should be so, because entropy is the measure of missing information for a given probability distribution, i.e., entropy should be a unique functional of the probability distribution.
 
  • #57
Philip Koeck said:
I've also wondered if there are experimental results that show that if you mix for example He with Ne the total entropy increases, whereas if you mix Ne with Ne it doesn't.
Is there anything like that?

In his book "Theory of Heat", Richard Becker points to an idea for calculating the entropy increase in the irreversible mixing of different gases by performing an imagined reversible process using semipermeable walls. Have a look at the chapters "A gas consisting of several components" and "The increase of entropy for irreversible mixing".
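The standard result of such a semipermeable-wall construction (my summary, not Becker's exact wording): for two different ideal gases at common temperature and pressure,
$$\Delta S = N_1 k_{\text{B}}\ln\frac{V_1+V_2}{V_1} + N_2 k_{\text{B}}\ln\frac{V_1+V_2}{V_2},$$
i.e., ##\Delta S = 2Nk_{\text{B}}\ln 2## for equal amounts; for the same gas on both sides, removing the wall changes no macroscopic state variable and ##\Delta S = 0##.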
 
  • #58
I must admit, though, that looking for such experiments on Google Scholar I couldn't find any. I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and measure the entropy change.
 
  • #59
vanhees71 said:
I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and measure the entropy change.

Being in the 21st century, I think it is possible to perform this kind of experiment.
How did scientists in the 19th century imagine identical but distinguishable particles?
I, at least, am thinking of billiard balls meticulously numbered with India ink.
We already note two points here:
a) Due to the labelling, even the classical billiard balls won't be exactly identical.
b) The amount of information which can be stored on a billiard ball is finite. Therefore extensivity, i.e., labelling an arbitrarily large number of balls, is not strictly possible.

How would we do this in the 21st century? Labelling of larger molecules is possible either by isotopic substitution or by using DNA tags. For the latter case, highly advanced methods exist which would allow one to set up a thermodynamic cycle in any lab.

First attach n primers to each of two carriers and synthesise random single-stranded DNA of length N from, e.g., A and C only.
In the second step, synthesize the complementary strands.
If the number of DNA molecules per carrier satisfies ##n \ll 2^N##, the probability of finding two identical sequences on the same or on different carriers is practically 0. Hence all DNA molecules are distinguishable.
Note that this is clearly not extensive. But with, e.g., N=1000, the amount of DNA necessary to find two identical sequences would probably spontaneously collapse into a star.

Now, if you put each carrier with attached double-stranded DNA into a vial with buffer and heat it up, the complementary strands will separate from the carrier. The amount of heat supplied as a function of temperature can easily be recorded with standard calorimeters.
On a macroscopic level, the DNA from different carriers will be indistinguishable and appear as a homogeneous chemical substance.

Remove the carriers and mix the DNA solutions.
Now put the two carriers into the solution and lower the temperature. Heat will be released at a somewhat lower temperature than before, because the concentration of the individual DNA strands is lower.
At the end of the process, the DNA is bound to its original carrier again.
Hence we have completed a thermodynamic cycle consisting of an irreversible mixing step and a reversible separation. The entropy can easily be calculated from the calorimetric measurements.

So we see that
a) Mixing of distinguishable particles really leads to an increase of entropy.
b) This is not a problem, because distinguishability of microscopic particles cannot be made arbitrarily extensive.
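The "practically 0" claim above can be made quantitative with a birthday-problem estimate (a minimal Python sketch; the mole-sized ##n## and ##N=1000## are the assumed values from the post):

```python
from math import expm1

def p_any_duplicate(n, N):
    # birthday-problem estimate: P ~ 1 - exp(-n(n-1)/2 / 2^N) for n random
    # sequences of length N over a 2-letter alphabet
    return -expm1(-n * (n - 1) / 2.0 / 2.0**N)

print(p_any_duplicate(6.0e23, 1000))  # ~ 1.7e-254 even for a mole of strands
```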
 
  • #60
vanhees71 said:
Of course, the correct Boltzmann statistics is the one dividing by ##N!## to take into account the indistinguishability of identical particles, where by identical I mean particles with all intrinsic properties (mass, spin, charges) the same.
That's of course not justified within a strict classical theory, because according to classical mechanics you can precisely follow each individual particle's trajectory in phase space, and thus each particle is individually distinguished from any other identical particle simply by labelling its initial point in phase space.

I am not sure what you are saying is not justified within a strict classical theory.

Is the idea that classical particles may be indistinguishable (or impermutable as I prefer)?
If so, I agree: indistinguishable particles (in the quantum sense) are not consistent with classical mechanics.

If you intend to say that the inclusion of the ##1/N!## is not justified in classical mechanics, then you are wrong. This term is demanded by the definition of entropy as ##S=k\ln W##, with ##W## being the number of accessible states for a system with two partitions that can exchange identical classical particles.
 
  • #61
vanhees71 said:
From a didactical point of view I find the true historical approach to teach the necessity of quantum theory is the approach via thermodynamics. What was considered a "few clouds" in the otherwise "complete" picture of physics mid of the 19th century, which all were within thermodynamics (e.g., the specific heat of solids at low temperature, the black-body radiation spectrum, the theoretical foundation of Nernst's 3rd Law) and all have been resolved by quantum theory. It's not by chance that quantum theory was discovered by Planck when deriving the black-body spectrum by statistical means, and it was Einstein's real argument for the introduction of "light quanta" and "wave-particle duality", which were important steps towards the development of the full modern quantum theory.
One could consider hypothetical quantum particles that are permutable. What I mean is particles that need to be in a discrete quantum state with quantized energy, but for which permutations lead to a different state of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.

Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that, considering photons as impermutable (normal indistinguishable quantum particles), he derives Planck's law.
I never saw this derivation by Einstein, but I believe that, to derive Wien's law, he assumes that there is just one way for n permutable photons to occupy the same quantum state. However, since these are permutable photons, one could as well consider that there are n! ways for n permutable photons to be in the same state. In this case one finds that permutable photons would obey Planck's law, just as normal photons do. In similar fashion, one could get these hypothetical permutable quantum particles to follow Bose-Einstein, or Fermi-Dirac (by imposing the Pauli exclusion principle).
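For reference, the two spectral energy densities at stake (standard textbook forms, not Einstein's original notation):
$$u_\nu^{\text{Wien}} = \frac{8\pi h\nu^3}{c^3}\,e^{-h\nu/k_{\text{B}}T}, \qquad u_\nu^{\text{Planck}} = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/k_{\text{B}}T}-1},$$
which differ exactly in whether the quanta are counted as statistically independent (Boltzmann factor) or with Bose counting.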

I am not saying that identical quantum particles are permutable. I am just saying that the fact that they obey FD and BE statistics does not prove that they are impermutable.
 
  • #62
autoUFC said:
Einstein, in one of the papers
Please.
 
  • #63
hutchphd said:
Please.
?
 
  • #64
autoUFC said:
Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that, considering photons as impermutable (normal indistinguishable quantum particles), he derives Planck's law.

That is the quote. I guess "mutually statistically independent entities" are what I call identical permutable particles. I see no other way to get to Wien's law.

"An aspect of Bose’s theory of radiation and of my analogous theory of the ideal gases which has been criticized by Mr. Ehrenfest and other colleagues is that in these theories the quanta or molecules are not treated as mutually statistically independent entities; this matter not being pointed out explicitly in our treatments. This is absolutely correct. If the quanta are treated as mutually statistically independent in their localization, one arrives at Wien’s displacement law; if
one treats the gas molecules in an analogous manner, one arrives at the classical equation of
state of the ideal gases, even when proceeding in all other respects exactly as Bose and I have done."

From: A. Einstein, "Quantum theory of the monoatomic ideal gas. Second treatise" (1925).
 
  • #65
autoUFC said:
One could consider hypothetical quantum particles that are permutable. What I mean is particles that need to be in a discrete quantum state with quantized energy, but for which permutations lead to a different state of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.
 
  • #66
Stephen Tashi said:
The link to the Jaynes paper given by the Wikipedia article is: http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf and, at the current time, the link works.

I read this paper long ago and just re-read it. I think it is spot on, and I am also convinced that my DNA example is as close as possible to the example Jaynes gives with two gases which are only distinguished by their solubility in "Whifnium 1" and "Whifnium 2", yet to be discovered. The two DNA samples will be similar in all macroscopic respects, e.g., their solubility, average molar weight, etc. On a macroscopic level they differ in their affinity to their respective carriers, which take the place of Whifnium. Whether or not we have the carriers available will change the definition of the macrostate and our ability to exploit their difference in a thermodynamic cycle.
 
  • #67
DrDu said:
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.

I did not know about it. I read the Wikipedia article and one of its references, by Cattani and Bassalo: https://arxiv.org/abs/0903.4773. In that article they obtain Maxwell-Boltzmann as a special case of parastatistics when ##E-\mu \gg kT## (##E## energy, ##\mu## chemical potential, ##k## Boltzmann's constant, ##T## temperature).
I see this as disingenuous. In this limit both Fermi-Dirac and Bose-Einstein reduce to Maxwell-Boltzmann, and we do not say that MB is a special case of FD or BE.

Therefore, I do not agree that Maxwell-Boltzmann is a special case of parastatistics, since parastatistics never agrees with Maxwell-Boltzmann in the low-temperature regime.
 
  • #68
No, parastatistics depends on an integer parameter p: if p = 1, the usual Bose or Fermi statistics arises; if p = infinity, Maxwell-Boltzmann arises, independently of temperature.
 
  • #69
DrDu said:
No, parastatistics depends on an integer parameter p: if p = 1, the usual Bose or Fermi statistics arises; if p = infinity, Maxwell-Boltzmann arises, independently of temperature.

I read that in Wikipedia. However, this claim is not justified there. In the work of Cattani and Bassalo they recover MB in the high-temperature limit of Gentile statistics. Are you aware of any work that demonstrates that MB is the ##p\to\infty## limit of parastatistics?
 
  • #70
R. Haag, Local Quantum Physics, contains this statement. I suppose you can also find it in the article by Green cited by Cattani and Bassalo. I would not trust too much a preprint which is not even consistently written in one language.
 
