NON-EQUILIBRIUM WORLDS


Jean Pierre PETIT – Former research director – CNRS FR.

12 January 2013


When the man in the street thinks about the equilibrium of a system, he usually pictures a ball resting at the bottom of a well, or something of the kind.


The basics of thermodynamic equilibrium theory contain something more subtle: dynamic equilibrium. The simplest example is the air we breathe. Its molecules are shaken in every direction, with a mean thermal speed of about 400 m/s. At a tremendous rate, these molecules collide and interact, and each collision changes their velocities. The physicist nevertheless describes this as a statistically stationary state (the term used is “detailed balancing”). Imagine a goblin who, at any time and at any point of the room, can measure molecular velocities along a given direction, within a slight angular uncertainty. At every time increment, our goblin counts the molecules whose velocity component (an algebraic value) lies between V and V + ΔV. He plots these counts on a graph and sees a nice Gaussian curve take shape, its spread set by the thermal speed of about 400 m/s: the faster or the slower the molecules, the smaller their population.
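Here is a minimal numerical sketch of the goblin's census, assuming nitrogen at room temperature (the molar mass, temperature and sample size are illustrative choices, not values taken from the text):

    # Sample molecular velocities for N2 at 300 K and run the goblin's census
    # along one direction. All input values are illustrative assumptions.
    import numpy as np

    k_B = 1.380649e-23            # Boltzmann constant, J/K
    T = 300.0                     # temperature, K
    m = 28e-3 / 6.022e23          # mass of one N2 molecule, kg

    sigma = np.sqrt(k_B * T / m)  # 1-D thermal speed, ~300 m/s
    rng = np.random.default_rng(0)
    v = rng.normal(0.0, sigma, size=(1_000_000, 3))    # vx, vy, vz for 1e6 molecules

    counts, edges = np.histogram(v[:, 0], bins=100)    # census along one direction
    print(f"1-D velocity dispersion: {v[:, 0].std():.0f} m/s")
    print(f"mean speed |v|: {np.linalg.norm(v, axis=1).mean():.0f} m/s")   # ~470 m/s

Each velocity component, taken separately, follows the same Gaussian law; the speed modulus follows the Maxwell distribution, peaked somewhat above the 1-D dispersion.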


He repeats this work, aiming his measuring device along every direction of space, and, surprise, surprise, gets the same result. Molecular agitation in the room is isotropic. What is more, nothing can disturb this dynamic equilibrium as long as the temperature remains constant, because the temperature of a gas is precisely a measure of the mean kinetic energy of this thermal agitation. The physicist describes such a gas as being in thermodynamic equilibrium.

This state is multifaceted. Air molecules do not have spherical symmetry: diatomic molecules, oxygen or nitrogen, are peanut shaped, and those of carbon dioxide or water vapor have other shapes. All these objects, when rotating, can store energy like tiny flywheels. These molecules can also vibrate. The equipartition of energy says that energy must be distributed equally among all these various “modes”. During a collision, some kinetic energy can be converted into vibrational or rotational energy of a molecule, and the reverse is equally true. All of this is statistics, and our goblin can count how many molecules are in such and such a state, have such a kinetic energy, are in such a vibrational state. For the air we breathe, this census again leads to a stationary picture. The medium is then said to be in thermodynamic equilibrium, in other words relaxed.

Now imagine a wizard with the power to stop these molecules, to freeze their rotational or vibrational motion, to modify them at will, creating a new statistical law, deforming that beautiful Gaussian curve, even creating some anisotropy where, for example, the thermal speed in one direction is twice that in the transverse directions. He then lets the system evolve through further collisions. How many of these would be needed for the system to return to thermodynamic equilibrium? Answer: very few. The mean free time of a molecule between two collisions gives an idea of the relaxation time of a gas, of its return time towards thermodynamic equilibrium.
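As an order-of-magnitude check of that “very few collisions” claim, here is a rough estimate of the mean free path and of the mean free time in room air, under the usual hard-sphere assumptions (the molecular diameter and number density are textbook round numbers, assumed rather than taken from the text):

    # Rough mean-free-path / collision-time estimate for air at ~1 atm and 300 K.
    # Hard-sphere model; all inputs are order-of-magnitude assumptions.
    import math

    d = 3.7e-10          # effective molecular diameter, m
    n = 2.5e25           # number density, m^-3
    v_mean = 470.0       # mean molecular speed, m/s

    mfp = 1.0 / (math.sqrt(2.0) * math.pi * d**2 * n)   # mean free path
    tau = mfp / v_mean                                  # mean free time between collisions

    print(f"mean free path ~ {mfp:.1e} m")   # ~7e-8 m
    print(f"mean free time ~ {tau:.1e} s")   # ~1e-10 s

With of the order of ten billion collisions per second per molecule, a disturbed gas at ordinary density relaxes back to equilibrium almost instantaneously.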

Do there exist non-equilibrium media, where the statistics of molecular speeds depart markedly from this comfortable isotropy and from the beauty of these nice Gaussian curves?

Oh yes! It is even the majority case in the universe. A galaxy, that “island universe” made of several hundred billion stars of broadly comparable masses, can be seen as a gaseous medium in which the “molecules” are... stars. In this precise case one discovers a disconcerting world, where the mean free time of a star, before any encounter with a neighboring star, is ten thousand times the age of the universe. What do we mean by an encounter? Must it be a collision in which the two stars physically smash into each other? Not even! In the branch of theoretical physics called the kinetic theory of gases, we speak of a collision whenever a star’s trajectory is noticeably deflected as it passes a neighboring star.
However, calculation shows that these events are extremely rare, and our system of hundreds of billions of stars can be regarded, in practice, as collision-free.
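A back-of-the-envelope sketch of that “ten thousand times the age of the universe” figure, using the standard two-body relaxation estimate from stellar dynamics (the solar-neighborhood inputs below are rough assumed values):

    # Two-body relaxation time for stars near the Sun, using the usual estimate
    # t_relax ~ 0.34 * sigma^3 / (G^2 * m^2 * n * lnL). Inputs are rough assumptions.
    import math

    G = 6.674e-11               # gravitational constant, SI
    sigma = 30e3                # stellar velocity dispersion, m/s
    m = 2e30                    # typical stellar mass, kg (~1 solar mass)
    n = 0.1 / (3.086e16)**3     # ~0.1 star per cubic parsec, in m^-3
    lnL = 20.0                  # Coulomb logarithm, order of magnitude

    t_relax_s = 0.34 * sigma**3 / (G**2 * m**2 * n * lnL)
    t_relax_yr = t_relax_s / 3.15e7
    age_universe_yr = 1.38e10
    print(f"relaxation time ~ {t_relax_yr:.1e} yr, "
          f"i.e. ~{t_relax_yr / age_universe_yr:.0f} times the age of the universe")

With these numbers the result comes out around 10^14 years, indeed of the order of ten thousand ages of the universe.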

For billions of years, the trajectory of our Sun has been regular, quasi-circular. If the Sun were self-conscious it would, in the absence of any change of pace due to encounters, not even know it had neighbors. It only senses a “smooth” gravitational field. It moves along its path as if in a basin, never feeling any bump created by the other stars. The corollary follows at once: put our goblin, now an astronomer, in the vicinity of the Sun in our Galaxy and ask him to build the statistics of the speeds of neighboring stars along every direction. An obvious fact emerges: the medium, dynamically speaking, is strongly anisotropic. There exists a direction in which the stars’ agitation speeds (what astronomers call residual velocities, measured relative to the mean rotation of the galaxy, nearly circular and about 230 km/s near the Sun) are practically twice as large as in the transverse directions. In the air we breathe, the speed distribution was a spheroid; here it becomes an ellipsoid of velocities. So far, so good? How does this affect our vision, our understanding of the world? It changes everything, because we are far from being able to handle theories of such drastically non-equilibrium systems.
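A tiny sketch of what this astronomer-goblin would measure, assuming (purely for illustration) a Gaussian velocity ellipsoid whose radial dispersion is twice the transverse ones:

    # Residual stellar velocities drawn from an anisotropic Gaussian (velocity
    # ellipsoid). The dispersions are illustrative, with a 2:1 radial-to-transverse ratio.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma = np.array([40e3, 20e3, 20e3])           # m/s: radial, tangential, vertical
    v = rng.normal(0.0, sigma, size=(100_000, 3))  # residual velocities of 1e5 stars

    print("measured dispersions (km/s):", np.round(v.std(axis=0) / 1e3, 1))

Unlike in room air, the census now gives a different answer depending on the direction in which the instrument is pointed.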

Leaving aside the paradoxical situation in which galaxies stand because of that damned effect of dark matter (missing mass), discovered in 1933 by Fritz Zwicky, a Swiss astronomer working in the United States, we have in any case never produced a satisfactory model of a self-gravitating set of point masses (orbiting in their own gravitational field). Our physics always stays close to a state of thermodynamic equilibrium. Of course, any deviation here or there represents a departure from equilibrium, for example a temperature gap between two gaseous regions, which leads to heat transfer, a transfer of the kinetic energy of thermal agitation. Even in this case, if we put our goblin back to work, he would conclude that the medium, dynamically speaking, is “almost isotropic”. This is the case of our atmosphere, even when it is crossed by the fiercest windstorms.

Well then, is it impossible to encounter, to “put one’s finger on”, situations where a gaseous medium, a fluid, is frankly out of equilibrium? Such situations are found when crossing shock waves. These are very limited regions, since the thickness of a shock wave is of the order of a small number of mean free paths.

When it is crossed by a shock wave, a gas switches state very abruptly, and a state close to thermodynamic equilibrium in the “shocked” gas is only recovered after a few mean free times.
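To give the flavor of how abrupt this switch is, here is a sketch of the classical Rankine-Hugoniot jump conditions for a normal shock in a perfect gas (the Mach number and the specific-heat ratio are illustrative assumptions, chosen for a strong shock in a monatomic gas):

    # Rankine-Hugoniot jump conditions across a normal shock in a perfect gas.
    # M1 and gamma are illustrative values, not measurements from the article.
    def normal_shock(M1, gamma):
        rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)   # density jump
        p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)           # pressure jump
        T_ratio = p_ratio / rho_ratio                                 # temperature jump
        return rho_ratio, p_ratio, T_ratio

    rho2, p2, T2 = normal_shock(M1=8.0, gamma=5.0 / 3.0)
    print(f"density x{rho2:.2f}, pressure x{p2:.1f}, temperature x{T2:.1f}")

The whole jump is accomplished within a few mean free paths of the gas; on either side of that razor-thin zone, the gas is treated as if it were in equilibrium.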

Let me report an observation made forty years ago in the laboratory where I worked, now dismantled, the Institut de Mécanique des Fluides de Marseille. We had there a kind of gas gun called a “shock tube”. The outline is as follows: using an explosive, we launched a shock wave propagating at several thousand meters per second into a rare gas, initially at a pressure of a few millimeters of mercury. The passage of the shock wave recompressed the gas, increasing its density.

We could follow the increase in density easily and precisely by interferometry. At the time we also measured the heat flow at the surface of Plexiglas mock-ups. As the experiments lasted only fractions of a millisecond, our measuring devices had to have a fast response time. They were metallic films, a few microns thick, vacuum-deposited on the wall, acting as thermistors. We evaluated the heat flow by recording the resistance of these wall sensors as they heated up.
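One common way of turning such a thin-film temperature record into a heat flux, under the usual assumption that the substrate behaves as a semi-infinite solid in one-dimensional conduction, is the discrete Cook-Felderman formula; here is a minimal sketch (the Plexiglas-like material constants and the synthetic temperature trace are illustrative, not the laboratory's actual data):

    # Surface heat flux from a surface-temperature history, assuming 1-D conduction
    # into a semi-infinite substrate (discrete Cook-Felderman formula).
    # Material constants and the temperature trace are illustrative assumptions.
    import numpy as np

    rho, c, k = 1190.0, 1470.0, 0.19    # Plexiglas-like density, heat capacity, conductivity (SI)
    beta = np.sqrt(rho * c * k)         # thermal effusivity

    t = np.linspace(1e-7, 2e-4, 400)    # time axis, s
    T = 5.0 * np.sqrt(t / t[-1])        # synthetic surface-temperature rise, K

    def heat_flux(t, T, beta):
        q = np.zeros_like(T)
        for n in range(1, len(t)):
            dT = np.diff(T[: n + 1])                                   # T_i - T_{i-1}
            denom = np.sqrt(t[n] - t[1 : n + 1]) + np.sqrt(t[n] - t[: n])
            q[n] = 2.0 * beta / np.sqrt(np.pi) * np.sum(dT / denom)
        return q

    q = heat_flux(t, T, beta)
    print(f"heat flux at the end of the record ~ {q[-1]:.2e} W/m^2")

The resistance of the film gives the temperature; a relation of this kind then converts that temperature history into the heat flow the wall is receiving.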

One day we placed a sensor flush with the tube wall. We then observed that the heat flow reached the sensor only after a certain delay following the passage of the shock wave, which was marked by an abrupt density jump. We made sure that the thermal lag of the sensor was small enough for this delay not to come from the instrument itself. We had in fact put our finger on a return phenomenon towards quasi thermodynamic equilibrium, downstream of the shock wave.

The shock can be compared to a hammer blow. Not only is the density brutally increased; we also observe a temperature jump, meaning an increase in the thermal speed of the molecules. But behind this wave, isotropy is only recovered after several mean free times. Immediately behind the density front, the increase in thermal agitation first appears as motion perpendicular to the wave front, that is, along the direction of propagation.

When our sensor collects heat, it does so through the impact of gas molecules on its surface. Yet immediately behind the density front, over some distance, the thermal agitation was developing parallel to the wall. The gas was indeed “heated”, but momentarily unable to transfer this heat to the wall. Collision after collision, the “ellipsoid of velocities” turned back into a “spheroid of velocities”, and the sensor finally registered the heat flow it was receiving. I believe I remember that, with the experimental setup we had, this heat flow appeared about one centimeter behind the density front.

So shock waves are regions of tiny thickness in which the gaseous medium is strongly out of equilibrium.
How do we handle this? By treating these regions as surfaces of zero thickness, across which the flow quantities simply jump. And this has worked for almost a century.
I am old enough to have witnessed almost the whole history of computing, from the beginning. When I was a student at the École Nationale Supérieure de l’Aéronautique, there was no computer on the premises. Computers were installed inside sanctuaries called “computing centers”, which we could not enter. We calculated with slide rules, objects of curiosity for today’s generation. In the senior classes we all had our book of logarithm tables, and every examination included a tedious numerical calculation test using these items, which are by now displayed in museums.

When I left Sup Aéro, hand-powered mechanical calculators (FACIT) were just arriving. To multiply numbers you turned the crank one way; to divide, you turned it the other way.

The professors and department heads had electrical machines, which broke the silence of the offices of the Institut de Mécanique des Fluides with their gear noise; that was 1964. Computers held the place of honor, distant gods only glimpsed through a window in those computing centers. These computers, which had the power of today’s pocket calculator, were served by priests in white coats. You could only communicate with them through a thick deck of punched cards, noisily read by a mechanical “card reader”. We bought “computing time” by the second, so costly was it. To today’s young people this is a vision from the Neolithic.

The micro-computer invasion changed all this. Moreover, with the eruptive growth of computing power, the Net is now full of pictures of vast rooms filled with mysterious black cabinets, handling jaw-dropping quantities of data.
Megaflops, gigaflops, petaflops galore! Back in the seventies you could easily read the entire contents of an Apple II’s RAM, which fitted, written out, into a small booklet.

We live in a Promethean world. Can we say these modern tools increase our mastery of physics? An anecdote comes to mind. In France I was a pioneer of micro-computing, having run one of the first centers (based on the Apple II) dedicated to this technology. At the time I was also a sculpture professor at the École des Beaux-Arts of Aix-en-Provence, and one day I presented a system with a flatbed plotter that drew masterly perspective drawings on demand. An old professor, raising his eyebrows, then said: “Don’t tell me the computer will replace the artist?”

Paraphrasing this, we could imagine some fellow who, after visiting a huge data center, exclaimed: “Don’t tell me the computer will replace the brain?”

In spite of the unstoppable escalation of computing power and of massively multi-processor machines, we are far from that. However, in certain areas these systems have sent our logarithm tables and slide rules, among other things, to the scrap heap. Who still computes integrals with pen and paper? Who still juggles with differential calculus, apart from pure mathematicians?

Nowadays we believe in the “computer that does everything”. We build algorithms, we supply data, we let them run until we receive results. When it comes to designing a building or a fine piece of engineering, this works very well. The theory of fluids is also a success story.

We can place a surface element of any shape perpendicular to some gaseous flow and compute the swirling flow pattern past it, whatever its aspect. Does it fit the experiment? Not always. Qualitatively we master the phenomenon; for example we can compute a reliable aerodynamic drag figure resulting from this swirling of the gas. Likewise, we compute the combustion efficiency inside a cylinder, or the convection currents in an enclosure. Predictive meteorology is progressing fast, within a time frame of a few days, except for very localized “micro events”, which are not yet manageable. Is that the case in every domain?

There are systems that refuse to be kept on a leash by that lion tamer of modern times, the computer. These are the “non-equilibrium” plasmas, title holders in every category. They also drift away from fluid theory, in spite of a family resemblance with it, because they are subject to action at a distance, through an electromagnetic field whose effect can only be evaluated by taking into account all the charged particles constituting the system.

“No matter”, you say: it is enough to treat the plasma as an N-body system. Easier said than done! We spoke earlier about galaxies as examples of collision-free worlds. Tokamaks are another kind (ITER is a giant tokamak). The gas they contain is extremely rarefied. Before start-up, the filling pressure inside the 840 cubic meters of ITER is less than a fraction of a millimeter of mercury. Why so low a pressure? Because this gas is to be heated to more than 100 million degrees. Now, the pressure is given by p = n k T, where k is the Boltzmann constant, T the absolute temperature and n the number of particles per cubic meter. Plasma confinement is ensured solely by the magnetic pressure, which increases as the square of the magnetic field.
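A quick numerical illustration of why the density has to stay so low (the density and temperature below are merely illustrative, ITER-like orders of magnitude, not project figures):

    # p = n*k*T for an ITER-like plasma; n and T are illustrative orders of magnitude.
    k_B = 1.380649e-23    # Boltzmann constant, J/K
    n = 1e20              # particles per cubic meter (assumed)
    T = 1.5e8             # ~150 million kelvin (assumed)

    p = n * k_B * T
    print(f"kinetic pressure ~ {p:.2e} Pa ~ {p / 1.013e5:.1f} atm")

Even at more than a hundred million degrees, so dilute a gas exerts a pressure of only a few atmospheres, which must indeed remain well below the confining magnetic pressure.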

With a field of 5.2 tesla, the magnetic pressure is about 200 atmospheres. For the plasma to remain confined, its pressure must stay far below this value. Because superconducting coils are used, the magnetic field cannot be increased indefinitely, so the plasma density inside the reactor chamber stays limited to very low values. We are thus left with a totally collision-free body, escaping any reliable macroscopic description. Can we treat it as an N-body problem? Don’t even dream of it, now or in the future. Nor is it possible to compute locally, as we can in the mechanics of neutral fluids: every region is coupled to every other through the electromagnetic field. Take for example the problem of energy transfer from the plasma core to the walls. Besides a mechanism resembling conduction, and besides what belongs to turbulence, there is a third modality, called “anomalous transport”, which involves... waves.

In short, a tokamak is an absolute nightmare for a theorist.

The plasma itself, with its uncontrollable behavior, is not the only problem. There is everything else, among which the unavoidable ablation of particles from the walls. Those who fly gliders know that the basic parameter of these machines is the lift-to-drag ratio: it expresses the number of meters flown per meter of height lost (the glide ratio). At a given speed, the sailplane wing produces a certain lift force. At the same speed we also get a drag force, which has two components. The first is induced drag: a loss of energy due to the vortices at the wingtips.

You cannot avoid it short of an infinite wingspan... It is to reduce it that gliders have such large wingspans, frequently more than 20 meters, associated with aspect ratios (the ratio of the wingspan to the mean wing chord) larger than 20. The second source of drag is viscous drag. It is reduced by seeking the smoothest possible wing surface: a fine polish delays the onset of turbulence in the immediate vicinity of the wing surface. This turbulence is a basic fluid instability, and the excellence of the surface finish can only delay its appearance. Conversely, it can be triggered by a perturbation. Look at a line of smoke rising in a calm atmosphere: it is an upward stream of hot gas, made visible by the particles it carries. This thread of smoke, smooth at first, becomes intensely turbulent after a few tens of centimeters of rise, however quiet the ambient air. By introducing an obstacle, such as a needle, into this rising flow, we can trigger an irreversible turbulence. A minute roughness on the polished surface of a sailplane wing does the same, triggering turbulent phenomena that locally increase the air friction, and thus the total drag, by an easy factor of a hundred.

In modern sailplanes we manage to keep the airflow laminar (non-turbulent, in parallel layers) over 60% of the chord. If by chance a mosquito crashes on the leading edge, this minute asperity will start turbulence over a wedge of roughly 30 degrees downstream of it. For this reason contest sailplanes, whose glide ratio exceeds 50, carry a leading-edge cleaning device, activated automatically at intervals, which can be compared to a linear windshield wiper: a sort of brush travels along the leading edge, back and forth, and returns to rest in a hidden housing. Considerable effort has gone into increasing the overall glide ratio of airliners, in order to reduce their fuel consumption. Back in the sixties the Caravelle, which once glided from Orly to Dijon, had a glide ratio of 12. Nowadays even the monstrous Airbus A380 has a glide ratio of more than 20.

That is, with no propulsive force, engines at idle, starting from an altitude of 10,000 meters they can glide over 200 kilometers.
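The arithmetic behind that figure, as a one-line check (glide ratio and altitude as quoted above, still air assumed):

    # Still-air glide range = glide ratio x altitude lost.
    glide_ratio = 20        # modern airliner, as quoted above
    altitude_m = 10_000     # starting altitude, m
    print(f"glide range ~ {glide_ratio * altitude_m / 1000:.0f} km")   # ~200 km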

Back to plasmas and tokamaks: in these machines, a micro-volume of turbulence can be triggered by minute particles torn from the walls, and it will invade the reaction chamber. As far as turbulence is concerned, the range is extremely wide, spreading from this micro-turbulence up to electrodynamic convulsions of the plasma involving its entire volume.

In conclusion, engineers do not really control the machine except through approximate, empirical “engineering laws” of weak reliability about the running system. In this domain where non-equilibrium is king and where measurements are extremely difficult, the computer is of little help. Experiment is the only guide. Moreover, each extrapolation in size leads to the discovery of new, unforeseen phenomena, such as the vertical plasma movement (VDE, Vertical Displacement Event), which appeared when jumping in size from the TFR at Fontenay-aux-Roses to JET at Culham.

The recent fiasco of the NIF (National Ignition Facility, based at Livermore, California) is a good example of a resounding failure in a large and costly facility, despite the help of the most powerful computers in the world. It is the conclusion of the NIC (National Ignition Campaign) after two years of trials, from 2010 to 2012. The system, comprising 192 lasers, delivers 500 terawatts (more than a thousand times the power of the US electrical grid) in a handful of nanoseconds onto a spherical target 2 mm in diameter, filled with a deuterium-tritium mixture and itself placed at the center of a cylindrical box 2 cm long and 1 cm in diameter, called a hohlraum (“cavity” in German).
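A quick sanity check on those power figures (the pulse duration is the article's “handful of nanoseconds”, taken here as roughly 4 ns for illustration):

    # Energy on target = power x pulse duration.
    power_W = 500e12        # 500 terawatts
    pulse_s = 4e-9          # assumed pulse duration, "a handful of nanoseconds"
    print(f"energy on target ~ {power_W * pulse_s / 1e6:.1f} MJ")

The enormous instantaneous power thus corresponds to an energy of only a couple of megajoules, delivered in an extremely short time.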

The plan is the following: the disc-shaped beams of half of the lasers burst in through an opening at one end of the hohlraum, the other half through the hole at the opposite end. These ultraviolet beams strike the inner walls of the cavity, made of gold, which re-emit X radiation. The laser beams, precisely focused, create three rings of spots on the inner wall. The re-emitted X radiation then hits the spherical target: this is called indirect drive. The system was devised essentially to mimic the fusion stage of a hydrogen bomb, in which the X radiation (generated in that case by a fission device) strikes the wall of a shell called the ablator, containing the fusion explosive (lithium deuteride). In the NIF this fuel is replaced by a deuterium-tritium mixture, in which fusion starts at a lower temperature, of the order of 100 million degrees. The envelope (the ablator, a thin spherical shell) sublimates and explodes both outwards and inwards. This inward compression is used to create a “hot spot” at the center of the target, in the hope of triggering ignition in an inertial confinement scheme.

All this had been calculated under the direction of John Lindl. In 2007 a paper devoted to this scientist, on the occasion of his Maxwell Prize, described in fine detail what was supposed to happen. The theorists were so sure of themselves that Lindl did not hesitate to claim that ignition would merely be the starting point of a vast series of experiments. The same went for the test manager, who had even fixed a deadline for operational success, October 2012, which was supposed to crown thirty years of theoretical and technological effort.

The result has been an immense fiasco, pinned down by a report of 19 July 2012 from the D.O.E. (US Department of Energy), written under the supervision of David H. Crandall.

What must be retained from this review, as far as the subject of this article is concerned, is that in spite of the excellence of the work, in terms of both technology and measurement, nothing that emerged from the experiment bore any relation to the data and predictions computed with the help of the most powerful computers in the world.

To the point where some observers wondered whether these simulations were of any value at all for planning further experiments.

The NIF crisis is plain to see. It is impossible to increase the number of lasers (neodymium-doped glass) for cost reasons. Impossible, too, to increase their individual power: when they are drenched with energy above a certain level they are prone to damage, whatever the homogeneity and quality of the glass.

To achieve ignition and inertial confinement fusion, the implosion speed must reach at least 370 km/s.
Not only is this speed not reached but, far more seriously, when the shell constituting the ablator turns into plasma and pushes its D-T content, “the piston mixes with the fuel”, owing to a well-known instability, the Rayleigh-Taylor instability. To reduce its effects the ablator must be made thicker; but that increases its inertia, and the implosion-speed threshold is again not reached.
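For reference, the classical linear growth rate of the Rayleigh-Taylor instability at a sharp interface is sqrt(A k g), with A the Atwood number; here is a minimal sketch with assumed, illustrative numbers (not NIF design values):

    # Classical Rayleigh-Taylor linear growth rate: gamma = sqrt(A * k * g), where
    # A = (rho_heavy - rho_light) / (rho_heavy + rho_light) is the Atwood number,
    # k the perturbation wavenumber and g the effective acceleration.
    # All numbers below are illustrative assumptions, not NIF design values.
    import math

    rho_heavy, rho_light = 1000.0, 100.0    # kg/m^3, assumed shell / fuel densities
    wavelength = 20e-6                      # perturbation wavelength, m (assumed)
    g = 1e14                                # effective acceleration, m/s^2 (assumed)

    A = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    k = 2 * math.pi / wavelength
    gamma = math.sqrt(A * k * g)
    print(f"e-folding time ~ {1 / gamma:.1e} s")

With accelerations of this order, small perturbations of the shell are amplified many times over during the few nanoseconds of the implosion, which is why the “piston” ends up mixing with the fuel.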

The computer simulations gave wrong results in every domain. As the D.O.E. report states, the modeling of the interaction between the lasers and the walls (the conversion of the laser light into X rays on the gold walls) is not satisfactory, in spite of decades of study devoted to this subject and hundreds of theses and papers. The same goes for the interaction of the laser beams, through a mechanism known as stimulated Raman scattering, with the gold plasma produced by the sublimation of the gold wall inside the chamber. The interaction of the X radiation with the ablator is not correctly simulated either. Finally, the calculation codes (LASNEX) completely underestimated the weight of the Rayleigh-Taylor instability, the deformation of the contact surface between the ablator and the deuterium-tritium fuel, which ends up looking like intestinal villi.

These mishaps show the limits of the confidence we can place in superb computerized simulation results once the machines attack frankly out-of-equilibrium, largely non-linear problems, in which a whole set of poorly modeled mechanisms plays a part in the game.

Dr. Jean Pierre Petit