Over the past four centuries high-pressure research has amassed a distinguished record of accomplishment, ranging from (1) measurement of the pressure that Earth’s atmosphere exerts on Earth’s surface, which proved the atmosphere has mass, a major question in 1640, to (2) state-of-the-art experiments and theory on the virtually universal nature of atomic matter at extremely high pressures, densities and temperatures. Today it is difficult to accept that whether Earth’s atmosphere has mass was ever a serious scientific question. However, in the seventeenth century no less a scientist than Galileo thought Earth’s atmosphere to be weightless, which motivated the invention of the barometer and the first measurement of the “weight” of the atmosphere by a scientifically curious student named Torricelli, who did not believe the opinion of his professor, Galileo.
An experimental science begins when a property can be measured with sufficient accuracy to draw a significant conclusion about a scientific question. The key requisite is an experimental measurement and its estimated accuracy. Modern high-pressure research developed into its present form with dynamic and static compression from 1643 to 1961. The criteria for the choice of these dates are the first quantitative measurement of pressure in 1643 and the general acceptance by the static high-pressure community in ~1961 that dynamic high-pressure research is a science, based on experimental detection, under both static and dynamic compression, of the α−ε transition in Fe at a pressure of 13 GPa.
Up until ~1850 only static pressures had been investigated. In the last half of the nineteenth century the basic idea of a shock wave was developed theoretically by W. J. M. Rankine of the University of Glasgow (Reference Rankine1870). Rankine’s theoretical work on shock compression began with completion of the derivation of the ideal-gas EOS, PV = RT = (2/3)E, where R is the gas constant and the (2/3)E form holds for a monatomic gas. In 1848 Lord Kelvin, also of the University of Glasgow, devised a temperature scale with an absolute zero, which is independent of the properties of any particular substance and is based on Carnot’s theory of heat. By international agreement, absolute 0 K is taken as -273.15° C. Rankine has been called a founding contributor to thermodynamics, along with William Thomson (Lord Kelvin) and Rudolf Clausius.
Dynamic high-pressure experiments began in earnest in the 1940s as a result of World War II. Not until 1961 was it generally accepted by the static high-pressure community that dynamic compression is a science – that is, that experimental results of dynamic compression were being interpreted correctly. The main concern was the apparently brief timescale on which shock experiments occur (< μsec) compared to “human” timescales of ~sec.
In 1956 researchers at Los Alamos published the first shock-induced phase transition in Fe, now known as the Fe α−ε transition at 13 GPa (Bancroft et al., Reference Bancroft, Peterson and Minshall1956). Shock pressures were generated with high explosives. When P. W. Bridgman, a Nobel Laureate, heard this result, he measured the electrical resistance of Fe versus static pressure. When he found no indication of a transition near what he thought was 13 GPa in his experiments, Bridgman said it was unlikely that a crystal could change phase in a ~μsec (Bridgman, Reference Bridgman1956). To test Bridgman’s assertion, H. G. Drickhamer and A. S. Balchan (Reference Drickhamer and Balchan1961) and J. C. Jamieson and A. W. Lawson (Reference Jamieson and Lawson1962) each spent ~five years building large-volume-press apparatus to measure electrical resistance and X-ray diffraction, respectively, at static pressures up to ~15 GPa, looking for a resistance anomaly and a change in lattice diffraction. The fact that Fe actually undergoes the α−ε transition in a time as brief as a ns under dynamic compression was eventually observed in 2005 (Kalantar et al., Reference Kalantar, Belak, Collins, Colvin and Davies2005).
4.1 Evangelista Torricelli: 1643
Torricelli made the first quantitative measurement of pressure in 1643. To do so, he invented the Hg barometer to see if Earth’s atmosphere has weight – that is, if the atmosphere exerts pressure vertically downward on Earth’s surface. Torricelli was a student of Galileo, and Galileo claimed the atmosphere does not have weight. Torricelli thought it did, based on some then-puzzling engineering observations for which he had a proposed explanation, and set out to test his idea. Torricelli eventually invented the Hg barometer, measured the value of atmospheric pressure and reportedly found that atmospheric pressure varies with elevation by running up and down hills in central Italy. In so doing, Torricelli experienced the thrill of discovery and founded high-pressure science.
The existence of forces exerted by air had been well known to sailors and homeowners for millennia. However, Galileo and most of the few other scientists of those days thought that air exerts forces only in directions parallel to the Earth’s surface and thus is massless – ethereal one might say. This was not an unreasonable assumption, given that the human body is unable to detect atmospheric pressure. Effects of wind were then essentially the only observations that indicated Earth has an atmosphere. At that time there was no existing theory applicable to the question – only opinion. The question of whether or not air has mass, and thus weight, needed an experiment to settle the issue.
The basis of Torricelli’s opinion that air has weight was the fact that engineers could use a water siphon to lift materials to a limiting height of ~34 feet and no more. Torricelli thought that atmospheric pressure might explain the height limit of a water siphon. He knew that the density of liquid Hg is about 14 times greater than that of water and so reasoned that, if total weight/area of a liquid column is all that matters, then the atmosphere should support a column of Hg only ~30 inches high. So Torricelli built a glass barometer: a long glass tube, sealed on one end and open on the other, was evacuated and inserted open-end down into a pool of liquid Hg left open to the atmosphere, with its axis perpendicular to the Hg surface. Liquid Hg rose in the evacuated tube to a height of ~30 inches, verifying his prediction. After observing that his intuition was correct, Torricelli is reported to have said enthusiastically, “We live submerged at the bottom of an ocean of air.” He is also said to have walked up and down hills and observed ~mm changes in the height of his Hg column caused by changes in atmospheric pressure with elevation. Torricelli had experienced the thrill of discovery.
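Torricelli’s reasoning can be checked numerically. The sketch below uses modern values for the densities, gravitational acceleration and sea-level atmospheric pressure (assumptions not given in the text): a liquid column of density ρ and height h exerts pressure ρgh, so the atmosphere supports h = P_atm/(ρg).

```python
# Illustrative check of Torricelli's siphon/barometer reasoning,
# using modern reference values (assumed, not from the historical record).
RHO_WATER = 1000.0    # kg/m^3
RHO_HG = 13_595.0     # kg/m^3, liquid mercury (~14x water, as Torricelli knew)
G = 9.81              # m/s^2
P_ATM = 101_325.0     # Pa, one standard atmosphere

# The atmosphere supports a column of height h = P_atm / (rho * g).
h_water = P_ATM / (RHO_WATER * G)   # ~10.3 m, i.e. ~34 ft siphon limit
h_hg = P_ATM / (RHO_HG * G)         # ~0.76 m, i.e. ~30 in of mercury

print(f"water column: {h_water:.1f} m ({h_water / 0.3048:.0f} ft)")
print(f"mercury column: {h_hg * 1000:.0f} mm ({h_hg / 0.0254:.1f} in)")
```

The ~34 ft siphon limit and the ~30 inch Hg column follow from the same atmospheric pressure, differing only by the density ratio.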
Torricelli’s barometer is so sensitive, accurate, simple and useful that it is still used today to measure barometric pressure. Named in his memory, the torr is the pressure required to raise the level of a Hg column by 1 mm. The torr is commonly used to quantify pressures from ~10⁻¹² torr up to ~10³ torr.
Torricelli went into a region that was a scientific frontier of his day. He went where no experimental scientist had gone previously, developed a new diagnostic with great sensitivity and resolution to look for something he thought might be there and, with his new instrument, resolved a major scientific controversy. Nature does not always behave in the way one might expect, not even for a great scientist like Galileo. Experiments, not opinion, are required to prove a scientific hypothesis or theoretical prediction. This idea is known as the scientific method.
Galileo himself experienced the thrill of discovery several times. In 1610, by peering into his crude ~10x magnification telescope, he observed faint images of the Galilean satellites, the four largest moons of Jupiter. One can only imagine the thrill Galileo must have felt in realizing that he was the first person on Earth to learn that other planets have moons. Materials in the deep interiors of the Galilean satellites, in Jupiter, in Saturn and in other planets within and beyond our solar system are themselves under very high gravitational pressures and are studied today at planetary pressures and temperatures obtained in laboratories. With his questioning mind and discoveries, Galileo played an important, if indirect, role in the establishment of high-pressure research and of science in general.
4.2 Blaise Pascal: Experimental Verification
Later in the 1640s Blaise Pascal also constructed a barometer. Pascal measured barometric pressure as a function of altitude and verified Torricelli’s observations of barometric pressure. Pascal also got involved in a controversy of those days as to whether or not a vacuum can exist, as for example, above the Hg column in his barometer. Today, the MKS unit of pressure is the Pascal (Pa). MKS units are based on length in meters, mass in kilograms and time in seconds.
4.3 Ideal-Gas Equation of State: 1660 to 1848
Development of the ideal-gas EOS was the next important step in the development of dynamic high-pressure research. An EOS is a relation between thermodynamic variables, such as pressure, density and temperature. Rankine used the ideal-gas EOS explicitly for the first derivation of the equations of conservation of momentum, mass and internal energy across the front of a shock wave. The ideal-gas EOS is used today to demonstrate basic ideas of dynamic compression, such as limiting shock compression and how many weak shocks are required for a multiple-shock quasi-isentrope to approach an isentrope from the first-shock state (Section 2.1.7).
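Limiting shock compression is one of those basic ideas: for an ideal gas with adiabatic index γ, the Rankine-Hugoniot equations give a finite compression limit V₀/V → (γ + 1)/(γ − 1) as shock pressure grows without bound. A minimal sketch (the formula is the standard strong-shock result, not derived in the text above):

```python
# Strong-shock limiting compression of an ideal gas:
# V0/V -> (gamma + 1) / (gamma - 1) as shock pressure -> infinity,
# so a single shock can compress an ideal gas only by a finite factor.
def limiting_compression(gamma: float) -> float:
    return (gamma + 1.0) / (gamma - 1.0)

print(f"monatomic gas (gamma = 5/3): {limiting_compression(5.0 / 3.0):.1f}-fold")
print(f"diatomic gas  (gamma = 7/5): {limiting_compression(7.0 / 5.0):.1f}-fold")
```

This finite limit is why multiple weak shocks, rather than one strong shock, are used to approach quasi-isentropic compression.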
The EOS of an ideal gas is PV = RT. It took almost 200 years after P was first measured to derive this simple form of the EOS for several reasons, not the least of which was a general belief that the expected simplicity of nature required that P, V and T must each be measured relative to their respective absolute zeroes. An ideal gas is one that satisfies several assumptions: (1) particles are point masses with zero volume per particle; (2) interactions between particles are negligible; (3) pressure is caused only by elastic collisions of particles with walls containing the gas; and (4) temperature is a measure of the kinetic energy of particles in the gas. P = 0 in a vacuum because there are no particles to impart momentum to walls. V = 0 in a vacuum because there are no particles at all.
But back in the seventeenth to nineteenth centuries, the meaning of T = 0 was unknown. It was realized at that time that absolute zero T might exist but experiments had yet to be performed at very low temperatures to give clues as to the nature of matter there. Temperatures near absolute zero were not actually achieved until 1898 and 1908 (20 K in liquid H2 and 4.2 K in liquid He) by James Dewar and Kamerlingh Onnes, respectively (Mendelssohn, Reference Mendelssohn1966). Today we know there is an absolute zero of temperature and that Lord Kelvin deduced absolute zero temperature in degrees C in the middle of the nineteenth century (Thomson, Reference Thomson1848). However, in the seventeenth century, whether absolute zero temperature existed had yet to be determined and an absolute temperature scale had yet to be developed.
Based on history of that period, one can speculate on what happened after Torricelli invented a technique to measure gas pressure. After 1643 it was possible to determine experimentally the relation between absolute P and V of a gas at a fixed temperature over a limited range. In those days the height of a Hg column in a barometer and linear dimensions of a sealed box containing air could probably be measured to a ~mm or better with a “ruler”. Pressure changes of an atmosphere or so could probably be effected with then available pumps and seals. Thus, the relation between pressure and volume could be measured with sufficient accuracy to demonstrate a quantitative relation between pressure and volume of air, an ideal gas, at room temperature. In 1662, nineteen years after the invention of the barometer, Boyle reported the well-known law named after him, that pressure and volume of an ideal gas are inversely proportional to one another at fixed temperature. Boyle’s law applies to all ideal gases at fixed temperature, not simply to air.
One might then ask the question, why did it take so long to go from the invention of the barometer in Italy to the discovery of Boyle’s law in England? Given the slow rate of diffusion of scientific information from Italy to England and elsewhere in those days, the small number of scientists, the time required for Boyle to think about an interesting experiment to do with the new pressure-measuring instrument, the time it took Robert Hooke, Boyle’s assistant, to build Boyle’s apparatus, and the political turmoil in England (the civil war between Charles I and Parliament that began in 1642, the execution of Charles I in 1649 and the restoration of Charles II to the monarchy in 1660), nineteen years is a relatively short time for Boyle to come up with his law.
The next advance in the science of ideal gases, the temperature dependence of P(V), took much more than two decades to discover. Charles’ law, in its present form, states that the volume of an ideal gas is directly proportional to its absolute temperature at fixed pressure. In 1787 Jacques Charles found experimentally that air, hydrogen, oxygen, nitrogen and carbon dioxide all expand by essentially the same amount over a given temperature interval of about 80 K. Charles recorded his observations in his notebook but never published his results. In 1802 Joseph Gay-Lussac obtained similar experimental results, which indicated a linear relationship between volume and temperature, and he credited the discovery to the previous unpublished work of Charles.
In 1834 Emile Clapeyron combined Boyle’s and Charles’ laws into what today is called the Ideal Gas law. Temperature was in degrees C. Thus, to express temperature relative to absolute zero, it was essential to add a constant temperature, the magnitude of absolute zero, which in 1834 was thought from experiments to be −267° C. In 1848 William Thomson, Lord Kelvin, of the University of Glasgow, derived an absolute temperature scale on which the value of absolute 0 K on the centigrade scale was derived from the laws of thermodynamics to be 0 K = −270° C (Thomson, Reference Thomson1848), near the value derived experimentally in those days using the absolute temperature scale of the “air” thermometer and near the now internationally accepted value of 0 K = −273.15° C. Thus, in 1848 the derivation of the Ideal Gas law was completed, 200 years after the first measurement of pressure by Torricelli.
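The completed law, PV = nRT with T on Kelvin’s absolute scale, can be illustrated with the familiar molar volume at 0° C and one atmosphere (the numerical constants below are modern reference values, an assumption beyond the text):

```python
# Ideal Gas law PV = nRT: molar volume at 0 deg C and 1 atm.
R = 8.314462618   # J/(mol K), modern value of the gas constant
T = 273.15        # K, 0 deg C expressed on the absolute (Kelvin) scale
P = 101_325.0     # Pa, one standard atmosphere
n = 1.0           # mol

V = n * R * T / P             # m^3
print(f"molar volume: {V * 1000:.2f} L")   # the familiar ~22.4 L/mol
```

Note that the formula only works because T is measured from absolute zero, the point settled by Thomson in 1848.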
4.4 Theoretical Concept of a Shock Wave: 1848 to 1910
The starting point of the conceptual development of shock flows was the Navier-Stokes equation, named after C.-L. Navier and G. G. Stokes (Liu, Reference Liu1986). Claude-Louis Navier was a French engineer and physicist whose primary interest was mechanics. He was Inspector General of the Corps of Bridges and Roads. Navier was admitted into the French Academy of Science in 1824. In 1830 he became professor at the École Nationale des Ponts et Chaussées. In 1831 he succeeded A. L. Cauchy as professor of Calculus and Mechanics at the École Polytechnique. George Gabriel Stokes was a mathematician and physicist at Cambridge University, who made important contributions to fluid dynamics, optics and mathematical physics. Stokes was a Fellow, Secretary, and President of the Royal Society of London. Major contributors to the development of the idea of a shock wave were J. Challis, G. G. Stokes, S. Earnshaw, B. Riemann, W. J. M. Rankine, P.-H. Hugoniot and J. W. Strutt (Lord Rayleigh) (Courant and Friedrichs, Reference Courant and Friedrichs1948). The Ideal Gas law was completed by the time the concept of a shock wave began to be developed, and it played an important role in the development of shock hydrodynamics.
Challis tried to solve a differential equation for flow of an isothermal gas in terms of a simple wave but found that it was not always possible to find a solution with a unique wave velocity (Challis, Reference Challis1848). Stokes proposed that a discontinuity in wave velocity occurs when the rate of change of wave velocity with run distance becomes infinite (Stokes, Reference Stokes1848). He also argued that discontinuities in wave velocity cannot occur in real systems because such a discontinuity would be smoothed by viscous forces. Earnshaw developed a simple solution to a wave equation for matter in which pressure is a function only of density, P = P(ρ). For a compression wave with dP/dρ > 0, sound velocity increases with P. Thus, regions of higher pressure travel faster than regions of lower pressure, so the front of a compression wave steepens with run distance until it becomes a discontinuity called a shock wave (Earnshaw, Reference Earnshaw1860). Riemann expanded on previous solutions of the flow problem, incorrectly assuming that the transition across a shock front is both adiabatic and reversible (Riemann, Reference Riemann1860).
W. J. M. Rankine was a Scottish physicist, engineer, and Fellow of the Royal Society of London. In his early years he was one of the major contributors to the new field of thermodynamics, along with Clausius and Lord Kelvin. From 1855 to 1872 Rankine was a professor of civil engineering and mechanics at the University of Glasgow. In 1859 Rankine proposed the absolute Rankine scale of temperature, in which temperature is expressed in degrees R, rather than in K, relative to absolute zero. A degree R is identical to a degree F. The R scale is used in engineering applications. Rankine did substantial research in thermodynamics of gases.
Rankine and, later, Hugoniot developed conservation equations for momentum, mass and energy across a shock front, in which compression is adiabatic with respect to heat transfer from outside the front while thermal energy may be exchanged by particles within the shock front (Rankine, Reference Rankine1870). That is to say, thermal equilibrium can be achieved under adiabatic compression within a shock front. Rankine derived his conservation equations specifically using the EOS of an ideal gas.
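In standard modern notation (u_s the shock velocity, u_p the particle velocity behind the front, ρ the density, V = 1/ρ the specific volume, E the specific internal energy, subscript 0 the initial state; the symbols are assumed here, not defined in the text above), the conservation equations take the form:

```latex
% Rankine-Hugoniot jump conditions across a shock front
\begin{align}
  \rho_0 u_s &= \rho\,(u_s - u_p) && \text{(mass)} \\
  P - P_0 &= \rho_0\, u_s\, u_p && \text{(momentum)} \\
  E - E_0 &= \tfrac{1}{2}\,(P + P_0)\,(V_0 - V) && \text{(energy)}
\end{align}
```

The energy equation is the one that forces an entropy increase across the front, the point made by Hugoniot and Rayleigh below.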
P.-H. Hugoniot was a member of the French marine artillery service in which he was professor of mechanics and ballistics, then Assistant Director of the Central Laboratory of Marine Artillery and finally Captain of Marine Artillery. His research received an award of the Paris Academy of Sciences in 1884 (Cheret, Reference Cheret1992). Hugoniot, like Rankine, derived conservation equations of momentum, mass and energy across a shock front. Those equations are now called the Rankine-Hugoniot (RH) equations.
Hugoniot showed that conservation of energy implies an entropy change across a shock front. Hugoniot (Reference Hugoniot1887, Reference Hugoniot1889) and Strutt (Lord Rayleigh) (Reference Strutt and Rayleigh1910) both pointed out that a shock front cannot be both adiabatic and reversible, as Riemann had claimed, because of conservation of energy. Shock compression is adiabatic but, because entropy increases across the front, irreversible. Rayleigh also pointed out that because entropy must increase across a shock front, isentropic release of shock pressure by a rarefaction shock (a sharp discontinuous drop in pressure) cannot occur in an ideal gas, which is a single-phase material. Rarefaction shocks are observed to occur at phase transitions (Grady, Reference Grady1998).
Lord Rayleigh had a broad interest in hydrodynamic instabilities in various types of fluid flows. One such flow was the instability developed in layers of fluids of variable density (Strutt, Reference Strutt and Rayleigh1883). In particular, he considered instability growth of an interface between two fluids of different densities in which the light fluid pushes on the heavy fluid, which results in turbulent mixing of the two materials along that interface. Shock-induced mixing at the interface between a fuel capsule and an enclosed D-T fuel ball is an important limiting issue today in ICF.
Mathematical development of the concept of a shock wave was essentially completed in 1910 (Strutt, Reference Strutt and Rayleigh1910). That seminal theoretical work was stored on library shelves in Europe for several decades awaiting experimental verification, which was not possible in those early days. A major motivation was needed to fund experimental development of the investigation of shock propagation.
4.5 In the Beginning: Early 1940s
Shock flows are supersonic and very fast. Fast cameras, fast electronic diagnostics, short-pulse X-radiography, fast triggers and facilities to generate high shock pressures with explosive shock drivers are required for generating and diagnosing shock experiments in condensed matter. In 1900 those fast diagnostics did not exist and their development was well beyond technological capabilities of those days. A major motivation was needed for governments to provide the substantial funding, facilities and technical staff required for such a shock driver and diagnostic development program.
Whereas the Ideal Gas law was derived virtually entirely from experimental data, the idea of a shock wave was developed in three distinct phases. In the first phase during the last half of the nineteenth century, the concept of a shock wave was developed in Europe from the theory of hydrodynamics and the interrelation between supersonic fluid dynamics (nonlinear mechanical flows) and thermodynamics (Courant and Friedrichs, Reference Courant and Friedrichs1948). That is, dynamic pressures, densities, internal energies and entropies are achieved by supersonic nonlinear flows, and the nature of such a flow depends on the response of the particular material that is flowing via its EOS (Bethe, Reference Bethe1942).
The second major phase of the development of shock-wave research began near the beginning of World War II. At that time, development of a picture of the equilibration process in the front of a shock wave was initiated (Bethe and Teller, Reference Bethe and Teller1940). This work enabled Bethe and Teller to become part of the Manhattan Project, whose purpose was to build the first atomic bomb. Bethe and Teller had immigrated to the United States from Germany in the 1930s and very much wanted to be part of America’s war effort. However, in 1940 they were still classified as enemy aliens (non-citizens), who were forbidden by Congress to work on the Manhattan Project. After they developed their 1940 calculation on the nature of thermal equilibration in the front of a shock wave, their unpublished calculation was declared classified material, and Bethe and Teller were permitted to join the Manhattan Project at Los Alamos as group leaders.
World War II generated substantial governmental funding for experimental facilities to test theoretical predictions about shock-wave propagation. A major emphasis of that period was the development of fast experimental techniques to diagnose shock flows, particularly measurements of shock-compression P-V data using the R-H conservation equations.
The third phase of the development of shock-wave research, from the early 1950s to 1961, emphasized understanding the relation between experimental results obtained with shock and with static compression. This issue was brought on by the fact that both shock and static research were going into regimes of higher pressures than investigated previously. During this period, pressure standards were a major issue for static-pressure research. In contrast PH, VH and EH are obtained experimentally in dynamic-compression experiments by the R-H conservation equations across the front of a shock wave.
4.6 Experimental Development of Supersonic Hydrodynamics: 1940s to 1956
With the arrival of World War II, interest in shock compression increased rapidly. While Rankine and Hugoniot had both said energy is exchanged between particles within a shock front and, thus, thermal equilibration is possible, the fact that thermal equilibrium is reached in a shock front had yet to be demonstrated. In gases, liquids and most solids, the thickness of a shock front is the width of the region in which material comes into thermal equilibrium. Bethe and Teller showed that values of thermodynamic parameters behind a shock front are uniquely determined by their values ahead of the shock front, independent of the path taken toward thermal equilibrium in the extremely non-equilibrium conditions within the width of a shock front (Bethe and Teller, Reference Bethe and Teller1940). Hans Bethe went on to win the Nobel Prize in Physics in 1967 for his research in nuclear physics. That 1940 research was never published in a scientific journal because it was written during wartime. However, Bethe considered that paper to be in the top 10% of his “Selected Works” list. Today that paper is a key work in the study of solids far from equilibrium (Mermin and Ashcroft, Reference Mermin and Ashcroft2006).
The motivation to develop shock wave experimental facilities was provided by the onset of World War II. Shock compression experiments in the United States began in the Manhattan Project, whose goal was the development of nuclear weapons. The Manhattan Project was built on the future site of Los Alamos National Laboratory. A similar project was undertaken in the Soviet Union.
In 1944 electrical detectors, called “pins”, were in use at Los Alamos to measure shock wave velocities in condensed matter. Electrical pins measure arrival times at discrete points in space at the front of a travelling shock wave. By suitably averaging arrival times at known positions, shock velocity us can be determined. Shock velocity us is then used to calculate mass velocity up behind the shock front with shock-impedance matching. Knowledge of us and up enables the calculation of shock pressure, specific volume, and specific internal energy, PH, VH, and EH, respectively, via the R-H equations.
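The chain from measured velocities to the Hugoniot state can be sketched directly from the Rankine-Hugoniot equations. The sketch below assumes P₀ ≈ 0 (a good approximation for an initial state at one atmosphere) and a known initial density ρ₀; the numerical u_s and u_p values are illustrative assumptions chosen to be of the right order for Fe near the α−ε transition, not measured data from the text:

```python
# Hugoniot state (P_H, V_H, E_H - E_0) from measured shock velocity u_s and
# particle velocity u_p via the Rankine-Hugoniot equations, neglecting P_0.
def hugoniot_state(rho0: float, us: float, up: float):
    """rho0 in kg/m^3; us, up in m/s. Returns (P in Pa, V in m^3/kg, E-E0 in J/kg)."""
    v0 = 1.0 / rho0
    p = rho0 * us * up           # momentum conservation: P = rho0 * u_s * u_p
    v = v0 * (1.0 - up / us)     # mass conservation: V = V0 (1 - u_p/u_s)
    e = 0.5 * p * (v0 - v)       # energy conservation with P0 ~ 0
    return p, v, e

# Illustrative (assumed) inputs for Fe, initial density ~7874 kg/m^3
p, v, e = hugoniot_state(rho0=7_874.0, us=5_000.0, up=330.0)
print(f"P_H = {p / 1e9:.1f} GPa")
```

With these assumed velocities the computed pressure lands near the 13 GPa of the α−ε transition, showing how two timing-derived velocities fix the full thermodynamic end state.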
By 1950 the optical flash-gap method (McQueen et al., Reference McQueen, Marsh, Taylor, Fritz, Carter and Kinslow1970) was developed to measure shock arrival times for the same purpose as electrical pins. In the flash-gap method, a thin layer of Ar gas is placed between two thin solid shims, one of which is transparent. When a strong shock wave transits the Ar gas, the gas is shock-compressed sufficiently to produce hot plasma and a sharp, brief, optical flash. A rotating-mirror streak camera views an assembly and records shock arrival times at the various flash gaps arrayed in space.
In the mid-1950s a large number of experimental results obtained in the late 1940s and early 1950s started to appear in the scientific literature. Since then American and Russian defense laboratories have published an enormous amount of experimental shock results. Subsequent to World War II, France, Germany, Japan, China and several universities have also established shock-compression experimental research programs. With these experimental facilities, the theoretical development of shock-wave physics from ~1850 to 1942 was demonstrated to be correct by experiments performed in the last half of the twentieth century. That theoretical development, achieved essentially without benefit of experimental data, was a major theoretical accomplishment.
In 1950 G. I. Taylor published his theory on instabilities of interfaces between different materials under steady acceleration (Taylor, Reference Taylor1950). His work was complementary to that of Lord Rayleigh (Strutt, Reference Strutt and Rayleigh1883). Today such interfacial instabilities are known as Rayleigh-Taylor instabilities and are investigated widely in a large number of fluid-mechanics problems.
4.7 P. W. Bridgman’s Contributions to Dynamic Compression: 1956 to 1961
P. W. Bridgman founded modern static high-pressure research (Nellis, Reference Nellis2010). Static high-pressure experimental research began in European universities in the last part of the nineteenth century. Bridgman conducted extensive static high-pressure experiments for more than fifty years at Harvard University. His research is well documented in seven volumes of his collected works (Bridgman, Reference Bridgman1964).
In the last five years of his career, Bridgman made significant contributions to the scientific foundations of dynamic compression as well. His activities included statements about shock compression of Fe that motivated static researchers to build systems to perform X-ray diffraction experiments and to measure electrical resistances of Fe at pressures up to ~15 GPa. The goals of those static-pressure experiments were (1) to demonstrate that the α−ε transition in Fe does occur under static compression as well as shock compression at 13 GPa and (2) to determine the likely crystal structure of ε-Fe. Bridgman also made a prediction that motivated Soviet researchers at Arzamas-16 to perform the first experiments at ultrahigh shock pressures in proximity to underground nuclear explosions.
Bridgman was Professor of Physics at Harvard. In 1946 he won the Nobel Prize in Physics for his static high-pressure experiments. His graduate students include John Van Vleck (graduated 1923), Francis Birch (1932) and Gerald Holton (1948). All three were professors at Harvard. Van Vleck was awarded the 1977 Nobel Prize in Physics for his research on magnetism in solids, a prize he shared with Philip Anderson and Sir Neville Mott. Birch was Professor of Geophysics and derived the first equation of state for materials at 100 GPa pressures in the deep Earth. Holton is Professor of Physics and the History of Science, Emeritus. Simultaneous with his high-pressure research, Bridgman had a very productive career in the philosophy of science and published several textbooks on thermodynamics and the philosophy of science (Bridgman, Reference Bridgman1935).
Bridgman’s significant involvement with shock waves began in 1956 when Bancroft et al. (Reference Bancroft, Peterson and Minshall1956) reported a transition in Fe to a then unknown phase at a shock pressure of 13 GPa. Shock pressures in those experiments at Los Alamos were driven by chemical explosives, and experimental lifetimes were ~μsec. Because the calculated shock temperature of Fe at 13 GPa is only 40° C, Bridgman tried to reproduce the shock-induced transition in Fe with electrical-resistance measurements in his static-pressure system but was not able to do so. Bridgman’s reaction to the achievement of Bancroft et al. was simply to state that it was unlikely for a phase transition to occur in a μsec and that whatever it was that caused that observation was not a phase transition (Bridgman, Reference Bridgman1956).
Static-pressure researchers noted what Bridgman said and embarked on projects to measure electrical resistance (Drickhamer and Balchan, Reference Drickhamer and Balchan1961) and X-ray diffraction (xrd) spectra of Fe (Jamieson and Lawson, Reference Jamieson and Lawson1962) at static pressures up to ~15 GPa. Their goal was to determine whether or not the Fe transition occurs at 13 GPa and, if it does, to determine the crystal structure of the high-pressure ε-phase. Their goal was not unlike Torricelli’s with respect to whether the atmosphere has mass, namely, to test Galileo’s opinion on the matter.
The capability to measure xrd patterns at 15 GPa did not exist in the early 1950s. Thus, Jamieson and Lawson designed and built a system to make those xrd measurements. After five years, by comparing their diffraction results with the previous shock results of Bancroft et al., they discovered that Fe does undergo a phase transition at 13 GPa. Drickhamer and Balchan (Reference Drickhamer and Balchan1961) also observed a phase transition in Fe at 13 GPa.
To get a bit ahead of this story, once Bridgman realized in 1961 that chemical explosives produced reliable shock-compression data, he said that nuclear explosives would be even better and predicted that the highest shock pressures would eventually be achieved with nuclear explosives. L. V. Altshuler read Bridgman’s 1961 prediction and in the late 1960s measured shock pressures of several TPa in proximity to underground nuclear explosions (Altshuler et al., Reference Altshuler1968). Bridgman never learned of Altshuler’s accomplishment, having passed away in 1961.
P. W. Bridgman entered Harvard University in 1900, where he studied physics through to his PhD in 1908. His first major paper was on the measurement of high pressure (Bridgman, Reference Bridgman1909). Between 1905 and 1961, Bridgman developed experimental techniques and measured physical properties of many materials at static pressures that ranged up toward 20 GPa. During his career he published more than 200 papers.
During World War II, Bridgman performed static high-pressure experiments as part of the Manhattan Project. He discovered the α−β transition in Pu and estimated, very approximately, that it occurs at ~0.1 GPa (Bridgman, Reference Bridgman1959). Bridgman made those Pu measurements at the Watertown Arsenal of the U.S. Army in Watertown, Massachusetts, not far from Harvard University. Today the Watertown Arsenal is but a distant memory, having been replaced by a shopping center years ago.
After World War II, Los Alamos performed shock-compression equation-of-state (Hugoniot) experiments on many materials, generating shock pressures up to several tens of GPa. Shock waves were planar and generated with chemical explosives. The detectors were either electrical pins, which measured arrival times of a shock-wave front at various spatial points in a sample (Bancroft et al., Reference Bancroft, Peterson and Minshall1956), or flash gaps observed as a function of time with a rotating-mirror streak camera (McQueen et al., Reference McQueen, Marsh, Taylor, Fritz, Carter and Kinslow1970). Experimental lifetimes were ~μsec and detector resolution was ~0.01 μsec. By analyzing measured arrival times at various spatial locations in a sample, the temporal structure of a shock wave and velocities of individual components of a multiple-shock wave could be derived.
In 1956, Bancroft and colleagues measured the Hugoniot of Fe up to ~20 GPa using ~65 electrical pins on each of several shots. Shock velocity us was measured and particle velocity up was derived from measured us and ρ0, the initial Fe sample density. Shock pressure and density were calculated from us, up and ρ0 using the R-H equations (Chapter 2). Those experiments were self-calibrating in that absolute pressure and density are given by the R-H equations.
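The self-calibrating character of those experiments follows directly from the Rankine-Hugoniot jump conditions for conservation of mass and momentum across a single shock front. A minimal sketch of that calculation follows; the numerical values in the usage example are illustrative assumptions chosen only to land near the 13 GPa regime discussed here, not the measured Fe Hugoniot point:

```python
# Rankine-Hugoniot jump conditions for a single steady shock:
#   mass:     rho = rho0 * us / (us - up)
#   momentum: P   = rho0 * us * up        (pressure above ambient)
def hugoniot_state(rho0, us, up):
    """Return (pressure in Pa, density in kg/m^3) behind a shock.

    rho0 : initial sample density (kg/m^3)
    us   : measured shock velocity (m/s)
    up   : particle velocity (m/s)
    """
    pressure = rho0 * us * up          # absolute pressure from conservation laws
    density = rho0 * us / (us - up)    # compressed density
    return pressure, density

# Illustrative (assumed) values: ambient Fe density with example velocities.
P, rho = hugoniot_state(7870.0, 5000.0, 330.0)
# P comes out near 13 GPa (1.3e10 Pa)
```

Because pressure and density follow from measured velocities and the initial density alone, no external pressure standard is needed, which is precisely what "self-calibrating" means here.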
A large number of pins were used because two and possibly three shock waves were expected in Fe, one caused by elastic strength, one possibly caused by a phase transition and one caused by plastic compression above the elastic limit at a stress called the Hugoniot elastic limit (HEL). Those three waves were expected to have different shock velocities, which means those three waves would be well separated in time in a sufficiently thick sample. Three distinct shock waves were observed, one of which was attributed to a phase transition at 13 GPa and 40° C from α-bcc Fe at ambient to a then unknown high-pressure phase. Because of the modest rise in temperature, in principle the transition reported by Bancroft et al. (Reference Bancroft, Peterson and Minshall1956) could be verified by measurements under static high pressures at room temperature.
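The expectation that waves of different velocities separate in a sufficiently thick sample can be made concrete: the arrival-time gap at depth d between a fast and a slow wave is d/us_slow − d/us_fast. A brief sketch, where the velocities and thickness are illustrative assumptions rather than the values of the Bancroft et al. experiments:

```python
def arrival_separation(thickness, us_fast, us_slow):
    """Time gap between arrivals of two shock waves at a given sample depth.

    thickness : sample thickness (m)
    us_fast   : velocity of the leading (faster) wave (m/s)
    us_slow   : velocity of the trailing (slower) wave (m/s)
    """
    return thickness / us_slow - thickness / us_fast

# Illustrative (assumed) numbers: a 25 mm sample with waves at 5900 and 4600 m/s.
dt = arrival_separation(25e-3, 5900.0, 4600.0)
# dt is about 1.2 microseconds, well above the ~0.01 microsecond pin resolution
```

The gap grows linearly with sample thickness, which is why thicker samples resolve the multi-wave structure more cleanly within the ~μsec experimental lifetime.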
Bridgman tried to verify the existence of the Fe phase transition by measuring electrical resistance of Fe up to what he calculated to be 17 GPa. He found no indication of a phase transition in electrical resistance measurements at any pressure. While acknowledging the resistance method was not definitive, he said the observation of a third shock wave probably needed to be explained by something other than a phase transition. However, it is important to realize that in the 1950s absolute static-pressure scales were quite uncertain (Decker et al., Reference Decker, Bassett, Merrill, Hall and Barnett1972; Graham, Reference Graham, Schmidt, Shaner, Samara and Ross1994). Thus, there is a very reasonable possibility that Bridgman’s stated pressure of 17 GPa was substantially overestimated as suggested by Ruoff (Cornell University, private communication, 2010).
In the early 1960s two different static-pressure experiments on Fe were reported, which demonstrated the α−ε transition in Fe at 13 GPa at room temperature does in fact occur. In 1961 Balchan and Drickamer measured the electrical resistance of Fe and detected a phase transition at 13 GPa (Drickamer and Balchan, Reference Drickhamer and Balchan1961), the same pressure reported by Bancroft et al. In 1962 Jamieson and Lawson reported development of the first high-pressure cell in which xrd patterns were measured in solid specimens under quasi-hydrostatic high pressures. Near their highest pressure, ~15 GPa, they observed that α-Fe transforms to a new phase, which they identified as hcp, now known as ε−hcp Fe (Jamieson and Lawson, Reference Jamieson and Lawson1962).
Jamieson and Lawson based their conclusion about ε−Fe on the observation of a single strong xrd line that was atypical of the bcc α phase of Fe, together with the constraint that their high-pressure phase should have the density of the high-pressure phase measured by Bancroft et al. under shock compression. A second xrd line of the high-pressure phase could not be measured. The phase was therefore assumed to be bcc, fcc or hcp, and a hypothetical second Fe xrd line for each candidate structure was indexed such that the combination of the single measured xrd line and that hypothetical line would give the measured shock volume. Only the hcp indexing was consistent with the shock density, and thus ε-Fe was determined to be hcp.
The xrd measurements were a tour de force. The combination of the shock and static data resolved the question of α−ε transition in Fe at 13 GPa. Consistency of the static and shock experiments implied that the Fe α−ε transition had occurred under shock compression on a sub-μsec timescale. This agreement led to acceptance by the static-pressure community of the results of the shock experiment of Bancroft et al., which had not generally been accepted prior to the experiment of Jamieson and Lawson (Graham, Reference Graham, Schmidt, Shaner, Samara and Ross1994).
Bridgman’s questioning of the interpretation of the shock-wave data of Bancroft et al. had set off a scientific controversy that motivated construction of a high-pressure system coupled to an X-ray diffraction system. It took five years to build the experimental apparatus and make the xrd measurements. The xrd data of Jamieson and Lawson and the resistance measurements of Drickamer and Balchan resolved the question of whether a phase transition had occurred in Fe. Shock and static researchers have worked together on issues of pressure calibration ever since.
Once Bridgman realized that chemical explosives could be used reliably to achieve high-shock pressures, he shortly thereafter realized that nuclear explosives could be used to achieve even higher pressures. On the last page of the seventh and last volume of his collected works Bridgman wrote: “The very highest pressures will doubtless continue to be reached by some sort of shock-wave technique. … Perhaps some fortunate experimenters may ultimately be able to command the use of atomic explosives in studying this field” (Bridgman, Reference Bridgman, Paul and Warschauer1963; Bridgman, Reference Bridgman1964, Vol. 7: no. 199–4625/4637).
4.8 Altshuler: The 1960s
Bridgman’s suggestion about nuclear explosives was eventually read and pursued by L. V. Altshuler and his group at Arzamas-16. In his memoirs Altshuler (Reference Altshuler2001) writes: “In 1962 Bridgman … suggested that with some luck, experimenters might even employ atomic blasts in high-pressure research. Such lucky experimenters were my team’s members … who in 1968 were the first to carry out measurements in the near zone of an underground nuclear explosion” (Altshuler et al., Reference Altshuler1968). Today, shock experiments at ultrahigh pressures are done with giant pulsed lasers and with the giant pulsed-power Z machine, which drives metal impactors to ultrahigh velocities and impact-shock pressures.
4.9 A New Beginning
In the same paper Bridgman states: “It is conceivable that a way will be found of superimposing shock-wave pressures with static pressures.” This kind of experiment is done today by pre-compressing a sample statically in a diamond anvil cell (DAC) and then shock compressing it to high pressure with a giant pulsed laser. Water, for example, has been pre-compressed to ~1 GPa in a DAC and then shock-compressed with a laser to 250 GPa (Lee et al., Reference Lee, Benedetti, Jeanloz, Celliers and Eggert2006). A variation of the above-mentioned pre-compression experiment is a reverberating shock-wave experiment in which the first shock effectively pre-compresses a liquid to a relatively low shock pressure and the remaining eight reverberating shocks then essentially compress the fluid isentropically from that first-shock state. Virtually the same states are achieved in water up to 100 GPa using both methods (Chau et al., Reference Chau, Mitchell, Minich and Nellis2001; Lee et al., Reference Lee, Benedetti, Jeanloz, Celliers and Eggert2006).
To complete the Fe story, in the early 1970s Barker and Hollenbach (Reference Barker and Hollenbach1974) developed the VISAR, a fast optical velocimeter with which they measured with ns time resolution the continuous temporal profile of a three-wave shock in Fe. Their measurements confirmed the three-wave structure in Fe, which had been observed by Bancroft et al. While Barker and Hollenbach confirmed a phase transition does occur, their technique is not able to determine the crystal structure of the high-pressure phase.
Still to be demonstrated experimentally was the structure of the high-pressure phase and the fact that the α−ε phase transition actually occurs on a sub-μsec timescale. In 2005, a laser beam was split into two beams, one to generate an intense 1-ns X-ray source and one to generate a shock in a small, oriented single crystal of Fe. Those results showed that an Fe phase transition occurs in ~1 ns under shock compression at 13 GPa and the in situ structure of the high-pressure phase is ε-hcp (Kalantar et al., Reference Kalantar, Belak, Collins, Colvin and Davies2005).
Bridgman was truly remarkable in terms of his contributions to static high-pressure research over his fifty-six-year career, the students he produced, his important contributions to establishing the scientific foundations of shock-compression research and his predictions about shock-compression research that have been demonstrated to be correct decades after his death.
High-pressure research remains a field that is primarily a curiosity-driven quest for discovery. Discovery-driven research is about ideas – an unexpected experimental observation that requires theoretical interpretation or a theoretical prediction that requires experimental verification. This iteration between experiment and theory is the foundation of science. It is so basic, in fact, that this process has been given a name: the scientific method. This principle is the essential element of all scientific and technological research and development.
Technological applications follow scientific discovery. Since scientific discoveries are generally unexpected, so too are their associated technological applications. Perhaps the greatest example of this process is the experimental discovery of the nuclear atom. In 1911 Ernest Rutherford demonstrated that atoms are composed of tiny nuclei containing virtually all the atomic mass, surrounded by clouds of virtually massless electrons that occupy virtually all the atomic volume. When asked whether his discovery had any practical applications, Rutherford reportedly replied that there were none he could think of. The nuclear atom, discovered as a result of pure intellectual curiosity, is the basis of the majority of high-technology developments, which today are the bases of the world’s largest economies.