hutchphd said:
Can you give the gist of the argument here?

Suppose two dogs. There are N fleas among the two. What is the most probable partition of the fleas among the two dogs?
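A minimal sketch of the counting behind the flea example (assuming N labelled, i.e., distinguishable, fleas): the macrostate "n fleas on dog A" has binomial multiplicity ##\binom{N}{n}##, which peaks at the even split.

```python
from math import comb

N = 20  # total number of fleas (illustrative choice)

# number of ways to choose which labelled fleas sit on dog A
multiplicity = {n: comb(N, n) for n in range(N + 1)}

most_probable = max(multiplicity, key=multiplicity.get)
print(most_probable, multiplicity[most_probable])  # 10 184756: the even split wins
```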
autoUFC said:
I recently discovered that there is no real paradox in the question of the mixing of classical distinguishable particles. I was shocked. Most books and all my professors suggest that an extensible entropy could not be defined for distinguishable particles.

Philip Koeck said:
I've also wondered if there are experimental results that show that if you mix, for example, He with Ne the total entropy increases, whereas if you mix Ne with Ne it doesn't.

vanhees71 said:
We discuss in circles. Just once more: for distinguishable particles there IS mixing entropy!
vanhees71 said:
You mean extensive! Sure, that's the point. If you assume that the particles are distinguishable (and in my opinion nothing in classical mechanics makes particles indistinguishable), then by calculating the entropy according to Boltzmann and Planck you get a non-extensive expression for the entropy of a gas consisting of (chemically) "identical" particles, which leads to Gibbs's paradox. This paradox can only be resolved by assuming the indistinguishability of particles, in the sense that you have to count any configuration which results from a specific configuration by only exchanging particles as one configuration. This leads to the inclusion of the crucial factor ##1/N!## in the canonical partition sum:
$$Z=\frac{1}{N!}\int_{\mathbb{R}^{6N}} \mathrm{d}^{6 N} \Gamma \exp[-\beta H(\Gamma)], \quad \beta=\frac{1}{k_{\text{B}} T}.$$
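As a quick numerical illustration of what the ##1/N!## does (a sketch, not from the thread; the helium mass and temperature are arbitrary illustrative choices): with the factor, the entropy derived from ##Z## is the Sackur-Tetrode expression and doubles exactly when ##N## and ##V## are doubled; without it, it does not.

```python
import math

# illustrative constants (SI): Boltzmann, Planck, He-4 mass, room temperature
k, h = 1.380649e-23, 6.62607015e-34
m, T = 6.6464731e-27, 300.0

def thermal_wavelength():
    return h / math.sqrt(2 * math.pi * m * k * T)

def S_with_factor(N, V):
    """Sackur-Tetrode entropy, from Z = Z_1^N / N! (Stirling applied)."""
    lam = thermal_wavelength()
    return N * k * (math.log(V / (N * lam**3)) + 2.5)

def S_without_factor(N, V):
    """Entropy from Z = Z_1^N, i.e. without the 1/N!."""
    lam = thermal_wavelength()
    return N * k * (math.log(V / lam**3) + 1.5)

N, V = 6.022e23, 0.0224  # one mole in 22.4 litres
print(S_with_factor(2 * N, 2 * V) / S_with_factor(N, V))        # 2.0: extensive
print(S_without_factor(2 * N, 2 * V) / S_without_factor(N, V))  # > 2: not extensive
```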
Philip Koeck said:
The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable, atoms the entropy cannot increase, since macroscopically nothing changes due to mixing identical atoms.
vanhees71 said:
If the identical particles are distinguishable there is no paradox. If you do the Gibbs-paradox setup with ortho- and para-helium (at sufficiently low temperature, so that the transition probability between the two states is small), then you should get a mixing entropy, because when taking out the dividing wall the atoms in the two states diffuse irreversibly into each other, and thus there is an entropy increase.
Philip Koeck said:
As I see it there are two paradoxes:
One is that statistical entropy for an ideal gas of distinguishable particles is non-extensive.

False.
Philip Koeck said:
To resolve this one has to assume that gas-molecules are actually indistinguishable, and that leads to the Sackur-Tetrode expression for entropy, which is extensive.

It is true that assuming molecules are impermutable leads to an extensive entropy. By "impermutable" I mean that exchanging two of them leads to the same state. Usually, in quantum theory, the terms identical or indistinguishable are used. I am using impermutable to emphasize that permutable particles may be identical.
Philip Koeck said:
The second paradox is the mixing paradox, which states that if you mix two different gases the entropy increases, but if you mix two gases of identical, but distinguishable, atoms the entropy cannot increase, since macroscopically nothing changes due to mixing identical atoms.

There is no paradox. Mixing identical permutable (classical) particles does not increase entropy. The inclusion of the ##1/N!## is necessary due to the correct counting of accessible states. Extensivity follows.
Philip Koeck said:
In the textbook by Blundell and Blundell the mixing paradox is demonstrated using the (extensive) Sackur-Tetrode expression. To me that indicates that the mixing paradox doesn't automatically go away just by making entropy extensive. You have to require that atoms are indistinguishable explicitly once more to resolve the mixing paradox.
Comments most welcome.
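A concrete version of the Blundell-style demonstration mentioned above, using the Sackur-Tetrode expression (a sketch; the He/Ne choice, temperature and volumes are illustrative assumptions):

```python
import math

k, h = 1.380649e-23, 6.62607015e-34
T = 300.0

def S(N, V, m):
    """Sackur-Tetrode entropy of N ideal-gas atoms of mass m in volume V."""
    lam = h / math.sqrt(2 * math.pi * m * k * T)
    return N * k * (math.log(V / (N * lam**3)) + 2.5)

N, V = 6.022e23, 0.0224
m_He, m_Ne = 6.65e-27, 3.35e-26

# He | Ne: after removing the wall, each gas expands from V into 2V
dS_mix = S(N, 2 * V, m_He) + S(N, 2 * V, m_Ne) - S(N, V, m_He) - S(N, V, m_Ne)
# Ne | Ne: the final state is simply 2N neon atoms in 2V
dS_same = S(2 * N, 2 * V, m_Ne) - 2 * S(N, V, m_Ne)

print(dS_mix / (2 * N * k))  # ln 2 ~ 0.693: mixing entropy
print(dS_same)               # 0 (up to rounding): no mixing entropy
```

Because the extensive formula already carries the ##1/N!##, the same-gas case comes out at exactly zero, while the different-gas case gives the standard ##\Delta S = 2Nk\ln 2##.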
autoUFC said:
..., entropy should logically be defined as ##S=k\ln(\Omega(n)/n!)##. Extensivity follows.
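For reference, the claimed extensivity can be checked with Stirling's approximation, using the standard ideal-gas phase-space count for ##\Omega## (a sketch of a textbook calculation, not text from the thread):
$$\frac{\Omega(E,V,N)}{N!}=\frac{V^N}{N!\,h^{3N}}\,\frac{(2\pi m E)^{3N/2}}{\Gamma\!\left(\tfrac{3N}{2}+1\right)},\qquad \ln n!\approx n\ln n-n,$$
$$S=k\ln\frac{\Omega}{N!}\approx Nk\left[\ln\frac{V}{N}+\frac{3}{2}\ln\frac{E}{N}+\text{const}\right],$$
which manifestly satisfies ##S(2E,2V,2N)=2\,S(E,V,N)##. Without the ##1/N!## one gets ##\ln V## instead of ##\ln(V/N)##, and the leftover ##Nk\ln N## spoils extensivity.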
Philip Koeck said:
Now I'm confused about your terminology.
Are you saying that identical particles can be distinguishable?
Would you say that ortho- and para-helium are identical?

I don't like to use the word "identical" in this context. I just tried to use the language obviously implied by this argument by you: "mix two gases of identical, but distinguishable atoms".
Philip Koeck said:
Interesting. I used something like that in a text I put on ResearchGate. Not sure whether it makes sense, though. Indeed, I get the Sackur-Tetrode expression even for distinguishable particles like that.
Here's the link: https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas

But there you get of course the Sackur-Tetrode formula only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)##. So you get a different distribution function but the same entropy for distinguishable as for indistinguishable particles, but only because you put in this ##1/N!## again by hand. If you had put it into ##W## at the very beginning, there'd be no difference whatsoever in the treatment of distinguishable and indistinguishable particles. This seems a bit paradoxical to me.
vanhees71 said:
... only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)## ...

My thinking is this: W is simply the number of different ways to arrive at a certain distribution of particles among the available energy levels.
DrDu said:
Will you measure any increase in entropy?

What in fact would be measured?
Philip Koeck said:
My thinking is this: W is simply the number of different ways to arrive at a certain distribution of particles among the available energy levels.
In equilibrium the system will be close to the distribution with the highest W, which is also that with the highest entropy.
The exact relationship between S and W is not quite clear, however.
Swendsen, for example, states that there is an undefined additive function of N (which I chose to be ##-k\ln(N!)##).
I assume that in the expression ##S = k\ln\Omega## the quantity ##\Omega## stands for a probability (actually a quantity proportional to a probability) rather than a number. I believe Boltzmann might have reasoned like that too, since he used W, as in Wahrscheinlichkeit (I think Swendsen wrote something like that too).
That's why I introduce the correction ##1/N!## for distinguishable particles.
I'm not saying this is the right way to think about it. I just tried to make sense of things for myself.
W is given by pure combinatorics so I can't really redefine W as you suggest.
The only place where I have some freedom in this derivation is where I connect my W to entropy, and, yes, I do that differently for distinguishable and indistinguishable particles.

Let me first say that I like your manuscript very much, because it gives a very clear derivation of the distributions in terms of counting microstates (Planck's "complexions") for the different statistics (Boltzmann, Bose, Fermi). Of course, the correct Boltzmann statistics is the one dividing by ##N!## to take into account the indistinguishability of identical particles, where by identical I mean particles with all intrinsic properties (mass, spin, charges) the same.
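To make the counting above concrete, here is a small sketch (the occupation numbers and degeneracies are made-up illustrative values): with ##W = N!\prod_i g_i^{n_i}/n_i!##, the quantity ##\ln(W/N!)## doubles when all ##n_i## and all ##g_i## (the latter scaling with volume) are doubled, while ##\ln W## grows faster than that.

```python
from math import lgamma, log

def ln_W(ns, gs):
    """ln W for labelled particles: W = N! * prod(g_i^n_i / n_i!)."""
    N = sum(ns)
    return lgamma(N + 1) + sum(n * log(g) - lgamma(n + 1) for n, g in zip(ns, gs))

def ln_W_corrected(ns, gs):
    """ln(W/N!): the same count with the permutation factor divided out."""
    return ln_W(ns, gs) - lgamma(sum(ns) + 1)

ns = [300, 200, 100]        # occupation numbers of three levels
gs = [1000, 1000, 1000]     # degeneracies (proportional to the volume)
ns2 = [2 * n for n in ns]   # double the particle number...
gs2 = [2 * g for g in gs]   # ...and the volume

print(ln_W(ns2, gs2) / ln_W(ns, gs))                      # > 2: ln W not extensive
print(ln_W_corrected(ns2, gs2) / ln_W_corrected(ns, gs))  # ~ 2: ln(W/N!) extensive
```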
vanhees71 said:
... only because you don't write ##S=\ln W## but just put ##S=\ln(W/N!)## ...

andresB said:
It is claimed that that was the original definition of entropy given by Boltzmann.

As far as I know, Boltzmann introduced the factor consistently in both the probabilities (or numbers of microstates) and the entropy, i.e., he put it in both places; and from the information-theoretical point of view that should be so, because entropy is the measure of missing information for a given probability (distribution), i.e., entropy should be a unique functional of the probability distribution.
Philip Koeck said:
I've also wondered if there are experimental results that show that if you mix, for example, He with Ne the total entropy increases, whereas if you mix Ne with Ne it doesn't.
Is there anything like that?
vanhees71 said:
I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and to measure the entropy change.
vanhees71 said:
Of course, the correct Boltzmann statistics is the one dividing by ##N!## to take into account the indistinguishability of identical particles, where by identical I mean particles with all intrinsic properties (mass, spin, charges) the same.

That's of course not justified within a strictly classical theory, because according to classical mechanics you can precisely follow each individual particle's trajectory in phase space, and thus each particle is individually distinguished from any other identical particle simply by labelling its initial point in phase space.
vanhees71 said:
From a didactical point of view I find the true historical approach to teaching the necessity of quantum theory is the approach via thermodynamics. The "few clouds" in the otherwise "complete" picture of physics in the middle of the 19th century all lay within thermodynamics (e.g., the specific heat of solids at low temperature, the black-body radiation spectrum, the theoretical foundation of Nernst's 3rd Law), and all of them were resolved by quantum theory. It's not by chance that quantum theory was discovered by Planck when deriving the black-body spectrum by statistical means, and it was Einstein's real argument for the introduction of "light quanta" and "wave-particle duality", which were important steps towards the development of the full modern quantum theory.

One could consider hypothetical quantum particles that are permutable. What I mean is particles that need to be in a discrete quantum state with quantized energy, but for which permutations lead to different states of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.
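A toy illustration of the distinction (my own example in the spirit of Reif's Section 9.4, not code from the book): two particles distributed over two single-particle states, counted three ways.

```python
from itertools import product, combinations_with_replacement, combinations

states = ['a', 'b']  # two single-particle quantum states

# permutable ("Maxwell-Boltzmann") particles: labelled, so swaps count
mb = list(product(states, repeat=2))                 # (a,a) (a,b) (b,a) (b,b)
# Bose-Einstein: only occupation numbers matter
be = list(combinations_with_replacement(states, 2))  # (a,a) (a,b) (b,b)
# Fermi-Dirac: occupations matter and double occupancy is forbidden
fd = list(combinations(states, 2))                   # (a,b)

print(len(mb), len(be), len(fd))  # 4 3 1
```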
autoUFC said:
Einstein, in one of the papers

Please.

hutchphd said:
Please.

?

autoUFC said:
Einstein, in one of the papers introducing BE statistics, writes that he has shown that permutable photons would obey Wien's law. He notes that considering photons as impermutable (normal indistinguishable quantum particles) he derives Planck's law.
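For reference, the standard result behind this remark (my summary, not Einstein's notation): Maxwell-Boltzmann counting for the photons gives the Wien form of the mean occupation per mode, while Bose-Einstein counting gives the Planck form:
$$\langle n\rangle_{\text{MB}}=e^{-h\nu/k_{\text{B}}T}\;\Rightarrow\; u(\nu)\propto\nu^3 e^{-h\nu/k_{\text{B}}T}\quad(\text{Wien}),$$
$$\langle n\rangle_{\text{BE}}=\frac{1}{e^{h\nu/k_{\text{B}}T}-1}\;\Rightarrow\; u(\nu)\propto\frac{\nu^3}{e^{h\nu/k_{\text{B}}T}-1}\quad(\text{Planck}).$$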
autoUFC said:
One could consider hypothetical quantum particles that are permutable. What I mean is particles that need to be in a discrete quantum state with quantized energy, but for which permutations lead to different states of the multi-particle system. Reif, section 9.4, refers to the statistics of permutable quantum particles as Maxwell-Boltzmann statistics.

Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM, and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.
Stephen Tashi said:
The link to the Jaynes paper given by the Wikipedia article is: http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf and, at the current time, the link works.
DrDu said:
Do you know about para-Fermi and para-Bose statistics? Both can be formulated consistently in QM, and Fermi, Bose and Maxwell-Boltzmann statistics are special cases.
DrDu said:
No, parastatistics depends on an integer parameter p: if ##p = 1##, the usual Bose or Fermi statistics arises; if ##p \to \infty##, Maxwell-Boltzmann statistics arises, independent of temperature.