As any reader of MRS Bulletin is aware, the space spanned by the materials characterization topic is vast (References 1–3). Researchers likely first think of familiar tools in today’s advanced laboratories and at large user facilities. Certainly, those tools are vital for discovery science as pursued in university, government, and corporate laboratories. However, to make the connection to engineering, which spans applied research, development, design, and manufacturing of devices, followed by their utilization, maintenance, and ultimate disposal, we must broaden our view of the role played by materials characterization methods.
We must also disabuse ourselves of the fiction that the route from the science to the product follows a neat sequential innovation chain. In practice, much of the fundamental understanding garnered from materials characterization lies fallow in the reservoir of published literature and property databases until some often unanticipated development project finds a bit of it quite useful. At the other extreme, the scientific understanding of material properties and behavior based on advanced measurements in the laboratory might lag many years behind the commercialization of a product built through empirical trial and error (see the sidebar on Aluminum alloy grain refinement).
In addition, the characterization of materials does not belong to any one or a few aspects of the innovation process. Characterization does not merely help launch the next engineering advance of a material from the laboratory and then stay behind awaiting the next specimen to analyze. Neither is it only an early precursor or an after-the-fact elucidator; rather, it permeates the entire materials engineering and development enterprise from end to end. A particularly cogent pictorial attempt to categorize the materials science and engineering (MS&E) field is the so-called materials science tetrahedron (Reference 10). An amended version that highlights characterization’s central role is shown in Figure 1.
A polyhedron with more vertices would be needed to capture the complete journey of an advanced material to the marketplace. Nontechnical economic factors, such as cost and customer demand, control the final steps toward the marketplace. Those same practical considerations constrain the use of characterization tools to the minimum needed to guarantee quality and consistency without regard to underlying discovery science.
Electron microscopy and x-ray analysis are perhaps the two most frequently used modern tools. They each have many variants for addressing many materials types and properties, and we devote later sections of this article to each of these tools. In practice, however, it is not the tool that determines what materials problem to solve. Instead, the material at hand and its unknowns dictate what tools to use. We have included a brief example of that relationship for the case of ultrananocrystalline diamond films. First, however, we step through some familiar concepts of the how, where, and why of the general materials characterization enterprise itself.
Common contexts and considerations
All measurement techniques have one thing in common: they involve first a probe of a sample (usually artificially applied and controlled) and then observation of a response. Photons, electrons, positrons, neutrons, atoms and ions, magnetic fields, electric currents, heat, pressure, chemical attack, and mechanical stresses are a few typical probes. Observations can take the form of real- or reciprocal-space images of reflected or transmitted radiation as modified by the sample, recordings of macroscopic constitutive properties such as elastic or plastic strain, microstructural or lattice-structure changes, deflection of a stylus, expulsion of magnetic field lines, or desorption or erosion of material constituents. Whatever the specific experiment might be, spatial resolution will be a concern when only a small well-defined region of a specimen is interrogated. Similarly, attention to temporal resolution is necessary when measured properties are not static but evolve, for example, in the case of chemical reactions, mechanical failures, and phase transitions. Finally, control of a sample’s environment is critical. Any factors that might affect the result of a measurement will either be kept as stable as possible or be systematically altered and controlled as independent variables.
When a material property is measured, it is often as a function of an independent parameter. Elapsed time is one such variable, varying from ultrafast pump–probe tests in the femtosecond range to years of monitored aging in weathering and corrosion tests. Temperature, pressure, magnetic field, and solution pH are other parameters that can be systematically controlled. These variables might be serving a dual purpose, that is, simultaneously acting as the variable against which a response is measured and the probe that causes the response.
Where values of the independent variable are not accessible in the laboratory, extrapolation based on available physical models comes into play. For example, to understand shock-wave physics in condensed matter that is relevant to inertial-confinement fusion, astrophysics, and materials such as metallic hydrogen, the results of gas-gun experiments that measure the Hugoniot shock pressure versus volume curve up to hundreds of gigapascals and thousands of kelvin must be extrapolated to more extreme values where the phenomena of interest actually occur (Reference 11). Studies of corrosion and radiation effects on nuclear-waste-encapsulating materials, such as Synroc and products of other vitrification processes, attempt to predict future behavior out to 10⁵ years or more (References 12,13). In this case, rather than extrapolation of a measurement made against time in the laboratory, measured rates of relevant processes such as corrosion, diffusion, devitrification, and void formation against the many independent variables that affect them—temperature, humidity, ambient atmosphere, and acidity—are fed into models that also must predict how such environmental variables in a repository will evolve and affect the material’s performance.
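The extrapolation step can be sketched with a minimal Arrhenius model: fit a thermally activated rate to measurements made at elevated laboratory temperatures, then evaluate the fitted law at the lower service temperature. The rates and temperatures below are hypothetical placeholders; real repository models couple many more variables than temperature alone.

```python
import math

def arrhenius_extrapolate(t1_k, rate1, t2_k, rate2, t_target_k):
    """Fit k = A * exp(-Ea / (kB * T)) to two measured rates and
    evaluate the fitted law at a target temperature (illustrative only)."""
    kb = 8.617333262e-5  # Boltzmann constant, eV/K
    # Activation energy from the two laboratory measurements
    ea = kb * math.log(rate1 / rate2) / (1.0 / t2_k - 1.0 / t1_k)
    # Pre-exponential factor from the first measurement
    a = rate1 * math.exp(ea / (kb * t1_k))
    return a * math.exp(-ea / (kb * t_target_k))

# Hypothetical corrosion rates measured at 573 K and 473 K,
# extrapolated down to a 363 K service temperature
rate_service = arrhenius_extrapolate(573.0, 1.0e-3, 473.0, 1.0e-5, 363.0)
```

The fitted rate falls steeply with temperature, which is exactly why small errors in the activation energy translate into large uncertainties at repository timescales.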
When considering the tools required to measure a specific property of interest, it is clear that the apparatus needed to apply and control one or more independent variables must be considered as well.
Because a measurement tool must probe a sample, a legitimate concern is whether that probe not only generates the desired response but also modifies the sample in a way that interferes with the measurement, possibly skewing the results or rendering the sample unusable for further tests. Obviously problematic are effects such as charge accumulation on an insulating sample in an electron microscope or sample heating during analysis under intense x-ray or particle beam bombardment. On the positive side of the ledger, one might also take advantage of probe-induced modifications to track those changes as part of the overall characterization goal.
Inseparable from materials modification as a byproduct of characterization is the use of a characterization tool for materials processing per se. In a sense, a dual-use paradigm is at work here. For example, mechanical tests involving bending, indenting, heating, and so on have their analogues in various metallurgical processing protocols such as cold-working and annealing. Similarly, finely focused electron beams for imaging and diffraction in electron microscopy have their analogue in electron-beam welding, albeit at quite different scales of spatial resolution and intensity. Likewise, whereas ion beams can probe the structure and composition of a sample, they also can implant electrically active impurities into semiconductors for use in devices. Whereas neutrons have special abilities to probe phonons and magnetic ordering in solids and can reveal composition through activation analysis, the public is more aware of the medical isotopes they provide for tests and therapies in nuclear medicine. One example presaged over 25 years ago was the use of a scanning tunneling microscope to write the IBM logo in xenon atoms on a nickel crystal (Reference 14; Figure 2). The imaging tools with the most extreme spatial resolution, such as the atomic force microscope, which is used extensively for characterization today, can actually be used to “write” molecules onto a surface in a nanoscale manufacturing regime (Reference 15).
Another way to look at dual use in the context of characterization tools is found in how some techniques cross disciplinary boundaries. Take, for example, nuclear magnetic resonance (NMR) spectroscopy, a nuclear physics technique first applied to a molecular beam of LiCl by Rabi and co-workers in 1938 to measure nuclear moments (Reference 16). Soon after, in 1946, NMR spectroscopy was applied to water (Reference 17) and to wax (Reference 18). Today, solid-state NMR spectroscopy uses the coupling of nuclear moments to the internal fields of a solid to study its chemistry, anisotropy, magnetism, and time-dependent phenomena such as diffusion. One could not have foretold in 1938 that the same nuclear resonance observed in lithium would today be central to NMR diagnostics applied in situ to study lithium-ion batteries (Reference 19). Indeed, by a slightly different name—magnetic resonance imaging—nuclear resonance now takes pictures of the internal structure not only of solids but also of us.
It is not surprising that a given tool finds multiple applications. The point to be made here is that materials research is unique: its multidisciplinary nature mandates the adoption of the tools of all of its component disciplines.
Direct versus indirect measurements
If we are interested in how the electrical resistance of a material varies with temperature, we can attach thermocouples to the sample (or focus an infrared camera on it), pass a current through it, attach a voltmeter, and read the meter as we vary the temperature. This is a fairly direct measurement if one allows for the use of Ohm’s law. It is slightly less direct if we want the bulk resistivity, because then we also need to know or measure the effective cross-sectional area and length of our sample.
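As a minimal sketch of those two steps (all numbers hypothetical), Ohm’s law gives the resistance, and the sample geometry converts it to a bulk resistivity:

```python
def resistance(voltage_v, current_a):
    """Ohm's law: R = V / I."""
    return voltage_v / current_a

def resistivity(voltage_v, current_a, area_m2, length_m):
    """Bulk resistivity from resistance and geometry: rho = R * A / L,
    where A is the cross-sectional area and L the distance between
    the voltage contacts."""
    return resistance(voltage_v, current_a) * area_m2 / length_m

# Hypothetical numbers: a 1 mV drop at 1 A through a 1 mm x 1 mm bar
# with contacts 10 mm apart gives rho = 1e-3 * 1e-6 / 1e-2 = 1e-7 ohm-m
rho = resistivity(1.0e-3, 1.0, 1.0e-6, 1.0e-2)
```

Repeating the measurement at each temperature set point then yields the resistance-versus-temperature curve described above.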
Yet, what if we want to use this result to infer impurity or defect concentration? We can either compare our resistivity measurement to empirical data on samples of known purity or rely on a theory that connects our directly measured data to sample purity based on assumptions about the character of the scattering of carriers by defects.
Such indirect access to the ultimate desired quantity is most often the case. Deriving electronic band structures from photoemission spectroscopy; identifying microstructural phases from a Laue x-ray diffraction pattern; or, more generally, extracting property information from data sets using methods ranging from simple least-squares regression to more sophisticated statistical algorithms (Reference 20) all involve indirect methods. Models, theories, and computational algorithms—not to mention the tables of data collected over many years—must therefore all be considered a part of the characterization tool set at our disposal.
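A toy version of such an indirect extraction, assuming for illustration a simple linear Matthiessen-type law ρ(T) = ρ₀ + αT (a deliberate simplification of the real phonon contribution), fits synthetic resistivity data and reads the defect-sensitive residual resistivity ρ₀ off the intercept:

```python
def linear_least_squares(xs, ys):
    """Ordinary least-squares fit of y = a + b*x (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx            # slope alpha
    a = mean_y - b * mean_x  # intercept rho_0
    return a, b

# Synthetic, noise-free resistivity-vs-temperature data (arbitrary units):
# the intercept rho_0 is the indirect measure of defect/impurity content.
temps = [10.0, 20.0, 30.0, 40.0]
rhos = [1.5 + 0.02 * t for t in temps]
rho_0, alpha = linear_least_squares(temps, rhos)
```

The regression itself is trivial; the indirectness lies entirely in the model that links ρ₀ to impurity concentration, which must come from theory or calibration data.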
Quality monitoring and control
The tools that discover material properties can also serve to monitor and control a material’s production. The optical photons and high-energy electrons of spectroscopy and diffraction are also tools for monitoring film growth while simultaneously extracting information on electronic properties and growth mechanisms (Reference 21). At the infrared end of the spectrum, in addition to simply monitoring temperature, infrared thermography offers a way to nondestructively inspect weld quality (Reference 22). A nearly limitless supply of such examples can easily be found.
Several techniques from nuclear and atomic physics have materials-characterization applications. Techniques such as ion scattering, x-ray spectroscopies, and nuclear magnetic resonance spectroscopy and imaging are more familiar, but even the lesser-known Mössbauer-effect spectroscopy has industrial uses. As an excerpt from a 1974 publication (Reference 23) confirms, “The Mössbauer-effect scattering method measures the relative amounts of the austenite and ferrite phases … The method is nondestructive … This investigation establishes … information for the calibration of instruments suitable for industrial use.” It might be particularly serendipitous in this example that the best-known Mössbauer-effect isotope, ⁵⁷Fe, happens also to be the main ingredient in steel, but more generally, many methods are better suited for some applications than others, and suitability and cost determine which ones are employed in connection with real-world manufacturing.
In a manufacturing environment, very far afield from the basic science laboratory, techniques familiar to researchers are found monitoring everything from thickness uniformity and surface finish to circuit integrity. The range of sophistication in profilometry, for example, extends from drawing a diamond stylus across a surface and reading an analog deflection plot to a “shop-floor profilometer system [that] makes it easy to perform surface profiling of precision parts in a production environment,” a quote taken from a corporate website offering white-light optical profilometers backed by data analysis and control software (Reference 24; see Figure 3).
The stage of a sample along the innovation chain determines the needs and goals of materials testing. In the basic research laboratory and even at the device-development stage, a given sample is normally characterized only once. Whether a simple test or a complex multipart experiment, the data are gathered and analyzed, and unless the results are somehow suspicious or the goal is to demonstrate reproducibility, the same test is not repeated on the same sample in the same way (see the sidebars on Quasicrystals and the Gunn effect). Parameters of the sample or of the test protocols would normally be changed before a subsequent test is performed. In production, by contrast, an unchanging quality-control test performs either continuous monitoring of a material’s properties or random sampling with associated statistical considerations (Reference 33).
Characterization tools are often brought to bear on questions of provenance or in failure analysis. For example, each site around the planet where rare-earth-element ores are mined has a unique distribution of elements and mineral types (Reference 34)—information that, in principle, would determine the provenance of a shipment of ore. Even a highly processed alloy’s history can be deduced from its minor impurities (Reference 35). The microstructure, morphology, and chemical analysis of an exposed fracture surface in a failed part can distinguish between ductile and brittle fracture and identify corrosion products that, in turn, might reveal a failure mechanism (Reference 36). Accident and crime investigations often rely, in part, on analysis of materials found at the scenes of the events (References 37,38). As another example, the microstructure of a metal sheet reveals details of the manufacturing process, potentially making it possible to identify forgeries of ancient artifacts such as astrolabes (Reference 39).
A single method is rarely enough
Whereas monitoring the value of one parameter relevant to a quality-control task requires application of a single tool, complete characterization of a material relies on an entire battery of tools, each yielding only a partial picture of electrical, electronic, elastic, electrochemical, magnetic, structural, thermal, tribological, and many more properties. In addition to adding to the information known about a specimen, data from multiple methods can be confirmatory (Reference 40). Multiple tests can also raise questions about the validity of conclusions based on limited prior tests.
When dealing with a new material that exhibits new and interesting behavior, one generally wants not only to determine several of the basic properties but also to see how those properties change when the elemental composition is varied, the sample environment is changed, or the synthesis protocol is adjusted. The high-critical-temperature (high-Tc) copper oxide superconductors (see the “Measurement mania” subsection) provide a good example. Once the solid-state synthesis routes that yielded the correct crystal structure of the compounds were clear, varying oxygen stoichiometry proved to be an important additional test of behavior. Early observations of Tc versus pressure (References 41,42), a variable that might seem irrelevant to a superconductor’s eventual use, showed how behavior changes with interatomic distances, a microscopic parameter that could then be optimized through changes in composition.
Finally, when a new characterization tool becomes available, using it to look at well-studied “classical” materials that are thought to be fully understood can always provide the occasional surprise (see the sidebar on Subjecting a classic material to modern analyses).
Small science at big facilities
Much characterization can be performed with tools that fit, in both size and cost, in an individual-investigator laboratory. When cost is a consideration, arrangements for sharing access are effective; a state-of-the-art transmission electron microscope and a molecular beam epitaxy sample preparation system are examples. However, today’s large and powerful probes utilizing UV and x-ray photons, neutrons, magnetic fields, and petascale or greater computational power are now found at national user facilities. A fair fraction of the users of such facilities come from industrial development laboratories, confirming the penetration of these largest characterization machines into the domain of the materials engineer (see the sidebar on The role of commercial services).
A virtuous cycle
In general, the sophistication and power of the entire ensemble of materials characterization instruments have increased continuously from the initial invention of each instrument. Many of these advances were aided, directly or indirectly, by new materials components. To the extent that this is true, the improved components must have enjoyed a good deal of characterization during their development phase.
The anecdotal evidence is clear. Most obvious is how materials developments, such as the discovery of giant magnetoresistance (References 46,47), have propelled computational power (Moore’s Law and circuit density) (see the “Photoelectrons bring layers to light” subsection) and data storage capacity (optical and magnetic components) to new heights. In addition to their huge societal impact, these developments have contributed to the enhancement of successive generations of measurement tools.
X-ray sources are another example, not merely moving from the laboratory generator to the synchrotron, but seeing a succession of generational transitions from the first-generation through to the most recent fourth-generation machines. Advances in the materials used in vacuum systems and in advanced magnet designs, for example, helped make more powerful light sources possible.
Keeping pace with the advance of the x-ray sources has been the development of new detectors with improved time, energy, and spatial resolution that can handle higher data-acquisition rates (Reference 48). Thus, today’s “big facility” comprises more than the next larger, more powerful machine. Rather, it embodies a nexus of the more powerful probe, state-of-the-art sensors, and the capability to manage the flood of data produced by that combination. This last requirement, currently referred to as a “big data” challenge, arises in finance, medicine, the Internet, and many other areas. In our context, it entails not only high-rate data acquisition and massive data-set transfer and storage capacity, but also, at the leading edge of addressing this challenge, real-time visualization and on-the-fly data analysis that retains only the essential data for further processing.
One particularly elegant example of this virtuous cycle involves a class of materials that was subjected to decades of extensive characterization, returning as the basis for exquisitely sensitive radiation detectors. Low-temperature superconductors are the active component in transition-edge sensors (TESs), which serve as bolometers or calorimeters for radiation detection. As an x-ray-detecting microcalorimeter in a scanning electron microscope, a TES achieves 2 eV energy resolution at 1.5 keV, far better than a conventional semiconductor detector for x-ray microanalysis (Reference 49). TESs are also in use today at the South Pole Telescope measuring cosmic microwave background radiation (Reference 50). Although we cannot claim that the search for dark matter is a materials characterization story, the TES story is even more prophetic when one realizes that the advent of the superconducting quantum interference device made impedance matching and low-noise data readout from the TES a practical reality (Reference 51). There are other examples of this interplay between technology and discovery. One is the silicon drift detector for energy-dispersive x-ray spectrometry (EDS), which has an energy resolution comparable to that of lithium-drifted silicon x-ray detectors at higher count rates, but requires no liquid-nitrogen cooling. Another is the electron-multiplying charge-coupled device, which is a substantial improvement over conventional charge-coupled device technology and has made Raman spectroscopy faster and more practical, especially for chemical spectral imaging and Raman mapping (Reference 44).
The evolution of materials characterization as a distinct field not only implies following materials innovation through to the factory floor, but also continual self-improvement of measurement tools as the problems needing solutions become harder to solve—a real-life validation of the saying mater artium necessitas (necessity is the mother of invention). It should be noted that instrumental improvements within the panoply of characterization tools not only accompany advanced materials developments from the laboratory to the marketplace, but also result in the commercialization of the tools themselves. A modern example of that transition is the scanning tunneling microscope, invented at IBM Zurich in 1981 (Reference 52), which, along with its many scanning probe variants (Reference 53), is widely available as an off-the-shelf product today. It is hard to think of a commercially available characterization tool that did not evolve from a rudimentary version patched together in a research laboratory.
Odds and ends
Some tools are more generic in their application (e.g., x-ray diffraction and microscopy), whereas others, such as deep-level transient spectroscopy, target a specific material type, in this case, semiconductors with electrically active defects. Although the type of material under examination could have been the most salient way to organize a discussion on characterization, we are loath to draw a sharp distinction between appropriate and inappropriate tools for a particular class of material.
First, a material’s “type” is not always easily defined, nor is it always a constant: consider semimetals and insulating oxides that, under some conditions, are good conductors. Whether a material is above or below a ductile–brittle or magnetic phase transition can also determine the choice of the most useful measurement tool.
Material or not?
It is also wise to avoid too narrow a definition of what one considers to be a material in the first place. Given that the “solid state” morphed into “condensed matter” to include liquids and that the tools of the physical sciences have proved useful for biological substances, chemical catalysts, and electrochemical cells, it is clear that the utility and purview of materials characterization respects no artificial distinctions among disciplines or among a sample’s possible alternative classifications. One might ask how many molecules must aggregate before a molecular cluster is deemed to be a material, with all the mechanical and electromagnetic properties that entails. Does a single atom enter and leave its classification as a material when it adsorbs and desorbs from a surface? We can leave these distinctions to the philosophers. But as nano- and subnano-sized materials systems become the focus of much research and development, the prescience of Feynman’s 1959 lecture “There’s Plenty of Room at the Bottom” (Reference 54) leads us to avoid imposing any lower limit of size on what we call a material.
There is a positive type of opportunism: When a new material or material behavior is found, there is a rush by experts in any given measurement method to apply their own tool to the new discovery. The advent of the entirely unanticipated high-temperature copper oxide superconductors, first with Tc ≈ 35 K in Switzerland in 1986 (Reference 55) and then with Tc ≈ 90 K in Texas in 1987 (Reference 56), led to a spate of measurements—some, in retrospect, very useful; others, not so much. As of 2011, there had been nearly 200,000 publications in that one field, with new materials still being discovered; yet a full theoretical understanding of the physics underlying the high-Tc phenomenon in copper oxide superconductors is still wanting (Reference 57). Perhaps the long wait for basic understanding should have been expected, given the interval from the discovery of the superconductivity phenomenon itself in 1911 (Reference 58) to its eventual explanation in 1957 (Reference 59).
A positive byproduct of the rush to measure resistivity was the realization that measuring zero resistance is not a trivial exercise, and for a supposed new superconductor, looking for a confirming magnetic field effect became necessary. More generally, responding to the discovery of new material behavior with a relatively short-lived “overcharacterization” effort reflects a healthy competition among researchers in the materials science and engineering field, is liable to quickly uncover many new details about new phenomena and how best to measure them, and serves to accelerate the evolution of materials toward potential practical applications.
Among the many modern characterization methods, two of the most mature and general workhorses of the field are electron microscopy and x-ray analysis, as described next.
Modern scanning electron microscopy (SEM) and transmission electron microscopy (TEM) play essential roles in the characterization of material structures and properties (References 60–62). SEM utilizes electrons of a preset energy, usually from tens of electronvolts to 30 keV. The beam is focused to angstrom-scale diameter and rastered across a specimen to generate secondary signals.
The signals arising from the beam–specimen interaction in SEM include secondary electrons (SEs), backscattered electrons (BSEs), characteristic x-rays, Auger electrons, cathodoluminescence (CL), and electron-beam-induced current (EBIC). Each type or combination of signals can provide imaging or mapping contrast at its corresponding resolution. Based on the signal nature and location, SEs and BSEs can reflect the surface topography and elemental distribution, respectively; EDS provides compositional analysis; Auger electron spectroscopy accesses the top few atomic layers; and CL and EBIC are often used for defect imaging/mapping and phase-structure detection. BSEs can also be used for electron backscatter diffraction to reveal surface crystal textures and stress–strain distributions.
TEM and its scanning variation (STEM) use monochromatic electron beams at energies of 60–300 keV to penetrate a specimen foil for imaging, diffraction, and analysis. TEM specimens must be prepared so that the electron beam can penetrate the area to be analyzed. Well-controlled methods such as chemical etching and ion milling have been developed to produce appropriately thinned areas of the samples.
Whereas the signal generation and detection in SEM for imaging or mapping can generally be described by particle theory, TEM imaging and diffraction are better understood by high-energy particle–wave theory. Here, the incident electron has an extremely short de Broglie wavelength λ given by λ = h/p (where h is Planck’s constant and p is the electron momentum) (Reference 63). For example, an electron with 200 keV of kinetic energy corresponds to λ = 2.74 pm in the nonrelativistic approximation.
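The quoted value can be checked directly. A short sketch using CODATA constants: the nonrelativistic formula λ = h/√(2mE) gives 2.74 pm at 200 keV, while the relativistic correction, which matters at TEM energies, shortens this to about 2.51 pm.

```python
import math

H = 6.62607015e-34          # Planck constant, J*s
M_E = 9.1093837015e-31      # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8            # speed of light, m/s

def wavelength_nonrel(kev):
    """Nonrelativistic de Broglie wavelength, lambda = h / sqrt(2*m*E)."""
    energy_j = kev * 1.0e3 * E_CHARGE
    return H / math.sqrt(2.0 * M_E * energy_j)

def wavelength_rel(kev):
    """Relativistically corrected wavelength,
    lambda = h / sqrt(2*m*E * (1 + E / (2*m*c^2)))."""
    energy_j = kev * 1.0e3 * E_CHARGE
    correction = 1.0 + energy_j / (2.0 * M_E * C**2)
    return H / math.sqrt(2.0 * M_E * energy_j * correction)

# 200-keV electrons, expressed in picometers
lam_nr = wavelength_nonrel(200.0) * 1e12   # ~2.74 pm
lam_r = wavelength_rel(200.0) * 1e12       # ~2.51 pm
```

Either value is far below interatomic spacings, which is why resolution in practice is limited by lens aberrations rather than by the wavelength itself.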
Just as in light scattering or other high-energy particle–wave scattering, the scattered electrons can exit the specimen as constructively diffracted or diffusely scattered waves that offer a variety of measurement modes, including bright-field imaging, dark-field imaging, and electron diffraction. Further, through manipulation of the beams and lenses, various diffraction techniques are available, including selected-area electron diffraction, convergent-beam electron diffraction, and nano- or microdiffraction.
The image contrast in TEM originates from wave scattering and interference that yield mass and thickness contrast, diffraction contrast, atomic-number (Z) contrast, and phase contrast. One of these contrast mechanisms might dominate in imaging depending on the operation chosen to reveal specific characteristics in the specimen. For example, if one uses an annular electron detector that selects a diffracted beam at a high scattering angle, Z contrast, which emphasizes high-atomic-number constituents, might dominate the dark-field image.
Just as in SEM, elemental analysis is available in TEM through addition of peripheral equipment with EDS capability or an electron spectrometer for electron energy-loss spectroscopy (EELS). An EELS spectrum is sensitive not only to elemental composition but also to chemical bonding (e.g., a silicon–oxygen bond can be distinguished from a silicon–silicon bond) and to collective excitations such as phonons or plasmons.
Some improvements in characterization techniques derive less from long-term incremental changes than from true paradigm shifts. The electron microscope (transmission and scanning transmission) is a case in point. What were thought to be insurmountable theoretical limits to instrument resolution have been overcome through a combination of sophisticated multipole magnetic lens and mirror designs, aided by electron optical computer simulations and improved physical stability. The advent of spherical and chromatic aberration correctors in the electron optics of the microscope columns has provided resolutions below 1 Å and opened a vastly smaller realm for study, just in time to support the nanotechnology revolution (References 64–66).
In addition to aberration correction, the past decade has witnessed many technological breakthroughs at the frontier of electron microscopy, such as
• integration of multiple techniques (e.g., focused-ion-beam scanning electron microscopy [FIB/SEM]);
• ultrahigh-resolution microscopy and atomic-resolution spectroscopy;
• in situ and dynamic microscopy, including cryo-microscopy, for the observation and characterization of material structures and properties under a specific environment or exposure to designed stimuli;
• tomography and three-dimensional reconstruction; and
• ultraprecise atomistic and nanoscale fabrication.
Together with the advancement of peripheral technologies, such as ultrafast and ultrahigh-resolution charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) cameras, full computerization, and software integration, these new developments in electron microscopy offer unprecedented opportunities for the scientific and technological exploration of previously inaccessible territory.
As an example, modern FIB/SEM has been successfully integrated by the semiconductor industry into production lines to facilitate chip fabrication. The diagram in Figure 4 illustrates typical linkages involving FIB/SEM in a production line. Here, the automated physical characterization can include electrical measurement of critical testing points, whereas the structural characterization usually starts with wafer inspection utilizing laser scattering tools. If electrical testing or defect inspection identifies faulty conditions on the wafer, FIB/SEM can then provide further structural characterization based on the defect registration on a wafer map to reveal defect root causes. Figure 5 presents an FIB/SEM image showing the microstructure of a copper deposit, in which distinctive twinned substructures can be observed, together with sporadic voids in the film.
The versatility of electron microscope techniques in visualizing lattice imperfections, such as dislocations, stacking faults, and voids, and observing their elastic and plastic behaviors as functions of time, temperature, and stress has contributed enormously to the understanding, and therefore the design, of modern engineering materials. The near-century-long transformation of empirical metallurgical alchemy into an atomic-level, cause-and-effect understanding tells a beautiful story of the characterization-driven evolution of materials.
Lighting the way: From Röntgen to MAX IV
To re-emphasize the convoluted relationship between materials engineering and scientific discovery and between photons and electrons, recall that the impetus for Wilhelm Röntgen’s systematic study of discharge-tube emissions leading to the discovery of x-rays was provided by numerous reports of sealed photographic plates darkening in the vicinity of an apparatus used to study cathode rays. The silver halide-containing gelatin-coated glass photographic plates Röntgen used to create the first radiographic images were a mature commercial technology in 1895,67 and the ability of a casual observer to quickly comprehend the first radiographic images ignited imaginations. Within months of Röntgen’s 1895 paper, practical applications were explored in archeology, botany, medicine, manufacturing, quality assurance, public safety, forensics, art, and more. Reference Glasser68 Thus, the widespread availability of photographic plates, as well as the pre-existence of a niche market for the apparatus for studying cathode rays, enabled both Röntgen’s discovery and the subsequent rapid worldwide deployment of radiography.
The early successful application of radiography to reveal defects inside welded metals and castings for ordnance shaped modern industrial quality-assurance practice, with x-ray and γ-ray radiography still an important part of the growing collection of standardized nondestructive testing methods. The inherent value in nondestructively peering inside opaque objects has kept radiography at the forefront of materials characterization techniques, and with the evolution of x-ray sources—rotating anodes, synchrotrons, free-electron lasers—radiography has come to encompass the ultrasmall (nanometer), ultrafast (femtosecond), element-specific (fluorescence microprobe), and three-dimensional (tomography).
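The physical basis of radiographic defect detection can be sketched with the Beer–Lambert attenuation law, I = I0 exp(−μt). The attenuation coefficient used here is an assumed, illustrative number (roughly the right magnitude for steel at hard-x-ray energies), not a tabulated value.

```python
import math

# Minimal sketch of radiographic contrast via the Beer-Lambert law.
# MU_STEEL is an assumed, illustrative linear attenuation coefficient.
MU_STEEL = 2.3  # cm^-1 (illustrative)

def transmission(thickness_cm, mu=MU_STEEL):
    """Fraction of incident x-ray intensity transmitted through a plate."""
    return math.exp(-mu * thickness_cm)

# A 1-mm internal void shortens the beam path through metal by 0.1 cm,
# so that region of a 2-cm casting transmits more intensity and shows
# up brighter on the radiograph -- this is how weld and casting defects
# are revealed nondestructively.
sound = transmission(2.0)
voided = transmission(2.0 - 0.1)
contrast = voided / sound  # > 1 means the void appears brighter
```

Even this crude model shows a detectable brightness difference of roughly 25% for a 1-mm void, which is why radiography became a standard quality-assurance tool for castings and welds.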
Synchrotron radiation light sources provide x-rays with up to nine orders of magnitude higher intensity (brilliance) than conventional x-ray sources, in a continuous spectrum covering infrared through γ-rays (2.9 GeV). Reference Riken69 The high intensity makes it possible to collect analyzable data from photon–matter interactions with very small cross sections. This has led to a smorgasbord of characterization techniques, Reference Als-Nielsen and McMorrow70,Reference Willmott71 each with inherent sensitivities that make it appealing for particular samples or problems. Although most of these are more demanding than the turnkey operation found in medical radiography, the proliferation of light sources (over 50 worldwide, including storage-ring and free-electron laser sources),Footnote † as well as their growing industrial use, speaks to the ability of synchrotron-based techniques to provide unique insights into material properties.
Laboratory-based x-ray fluorescence, diffraction, and absorption spectroscopy, supported by high-rate data acquisition, easily satisfy the needs of the majority of researchers. The value proposition for high-brightness sources—synchrotrons and free-electron lasers—is the ability to measure ultrasmall, highly dilute, and inhomogeneous samples, at time resolutions down to the tens of femtoseconds, Reference Zhang, Unal, Gati, Han, Liu, Zatsepin, James, Wang, Nelson, Weierstall, Sawaya, Xu, Messerschmidt, Williams, Boutet, Yefanov, White, Wang, Ishchenko, Tirupula, Desnoyer, Coe, Conrad, Fromme, Stevens, Katritch, Karnik and Cherezov72 at energy resolutions (∆E/E) on the order of 10^–8, Reference Shu, Stoupin, Khachatryan, Goetze, Roberts, Mundboth, Collins and Shvyd’ko73,Reference Budai, Hong, Manley, Specht, Li, Tischler, Abernathy, Said, Leu, Boatner, McQueeney and Delaire74 and in sample environments that mimic real-world conditions. In extreme cases, such as crystal structure determination during shock compression Reference Gupta, Turneaure, Perkins, Zimmerman, Arganbright, Shen and Chow75,Reference Eakins and Chapman76 or imaging of dendrite formation in metal-alloy melts, high-brightness sources provide invaluable experimental data to inform computational models.
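The quoted resolving power is easier to appreciate as an absolute bandwidth. The photon energy below is an assumed, illustrative value chosen only to make the arithmetic concrete.

```python
# Quick arithmetic behind dE/E ~ 1e-8: at an assumed photon energy of
# 23.7 keV, the beam's energy bandwidth is only ~0.24 meV -- comparable
# to phonon energies, which is what makes meV-scale inelastic x-ray
# spectroscopy possible.
photon_energy_eV = 23.7e3   # illustrative hard-x-ray energy
resolving_power = 1e-8      # dE/E quoted in the text
bandwidth_meV = photon_energy_eV * resolving_power * 1e3
```

A bandwidth of a fraction of a millielectronvolt on a beam of tens of kiloelectronvolts is what such monochromators deliver.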
Of particular note over the past decade is the proliferation of x-ray imaging techniques that exploit the spatial coherence of the beam, such as coherent diffraction imaging (CDI) and x-ray photon correlation spectroscopy. CDI has been used to obtain three-dimensional images of nanometer-scale objects embedded in complex environments, such as individual grains, including lattice strain, in macroscopic samples of polycrystalline materials. Reference Ulvestad, Singer, Cho, Clark, Harder, Maser, Meng and Shpyrko77 The possibility for new science with increased temporal and spatial x-ray beam coherence is one of the primary drivers for the next generation of synchrotron light sources, which replace the bending magnets with a series of shorter magnets—a multibend achromat Reference Einfeld, Plesko and Schaperc78 (MBA)—to significantly decrease the horizontal divergence and increase the brilliance. The newly completed MAX IV facility, hosted by Lund University (Lund, Sweden), the first sub-nanometer-radian MBA-lattice synchrotron light source, is scheduled to begin accepting users in the summer of 2016. 79
Metal alloy solidification modeling
Dendritic microstructures are ubiquitous in metal alloys. Their emergence during solidification provides the first opportunity to influence the structural, chemical, and defect evolution that dictates the mechanical performance of cast parts. From a theoretical standpoint, dendritic growth is a long-standing example of complex pattern formation that involves structural and chemical changes over multiple length and time scales. Characterization of metal-alloy solidification dynamics using synchrotron x-ray Reference Clarke, Tourret, Imhoff, Gibbs, Fezzaa, Cooley, Lee, Deriy, Patterson, Papin, Clarke, Field and Smith80 and proton Reference Clarke, Imhoff, Gibbs, Cooley, Morris, Merrill, Hollander, Mariam, Ott, Barker, Tucker, Lee, Fezzaa, Deriy, Patterson, Clarke, Montalvo, Field, Thoma, Smith and Teter81 imaging techniques over multiple length scales has advanced the development of computational models for the optimization of casting parameters. Reference Tourret, Karma, Clarke, Gibbs and Imhoff82,Reference Tourret, Clarke, Imhoff, Gibbs, Gibbs and Karma83
At the scale of a dendritic array, Figure 6 shows successful comparisons between synchrotron x-ray imaging (top) and multiscale simulations with a newly developed dendritic needle network model (bottom) during directional solidification of aluminum-based alloys, namely, Al–12 at.% Cu Reference Tourret, Karma, Clarke, Gibbs and Imhoff82 and Al–9.8 wt% Si, Reference Tourret, Clarke, Imhoff, Gibbs, Gibbs and Karma83 at a controlled temperature gradient G and solid–liquid interfacial growth velocity V. The model allows for predictions of microstructural characteristics, such as primary dendritic spacing important to mechanical properties, at the scale of entire dendritic arrays, which is not possible with simulation techniques such as phase-field modeling. Reference Boettinger, Warren, Beckermann and Karma84 The multiscale integration of in situ characterization and modeling will result in the prediction and control of metal-alloy solidification and will enable the development of advanced manufacturing processes.
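The link between processing conditions (G, V) and primary dendritic spacing can be sketched with a classical scaling law of the Kurz–Fisher type, in which spacing varies roughly as G to the −1/2 power and V to the −1/4 power. This simple relation is a textbook approximation used here for illustration; it is not the dendritic needle network model discussed above, and the prefactor is arbitrary.

```python
# Hedged sketch: a classical Kurz-Fisher-type scaling for primary
# dendrite arm spacing, lambda_1 ~ K * G**-0.5 * V**-0.25, where G is
# the thermal gradient and V the growth velocity. K is an arbitrary
# illustrative constant, not a fitted material parameter.
def primary_spacing(G, V, K=1.0):
    """Relative primary dendritic spacing for gradient G and velocity V."""
    return K * G ** -0.5 * V ** -0.25

# Doubling the thermal gradient at fixed velocity refines the array
# spacing by a factor of ~1/sqrt(2) under this scaling.
refinement = primary_spacing(2.0, 1.0) / primary_spacing(1.0, 1.0)
```

Such scaling arguments motivate why the controlled G and V of directional-solidification experiments are the natural inputs for validating array-scale models.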
Photoelectrons bring layers to light
Synchrotron-based hard x-ray photoemission spectroscopy is an exciting development for the characterization of multilayered structures. Until recently, x-ray generators primarily employed aluminum (Kα, 1486.6 eV) or magnesium (Kα, 1253.6 eV) anodes as sources. Anodes and filament assemblies are compact, and the equipment built around them easily fits in standard laboratory spaces.
Inelastic scattering of electrons excited by these relatively low-energy photons limits the probe depths of techniques based on these sources to about 3 nm and requires the removal of layers of material using a damaging ion-beam sputtering process to access subsurface layers. By providing higher photon energies than are available in the laboratory and high intensity over a continuous spectrum, synchrotrons offer access to deeper layers, increasing accessible depths by an order of magnitude ( Figure 7 ), along with the ability to vary the x-ray energy.
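The depth gain from harder x-rays follows from how the photoelectron inelastic mean free path (IMFP) grows with kinetic energy. The power-law exponent below is an assumed round number (realistic values come from TPP-2M-type calculations), so this is a trend sketch, not a quantitative prediction.

```python
# Hedged sketch of why harder x-rays probe deeper in photoemission:
# above ~1 keV the IMFP grows roughly as E**p, with p ~ 0.75 assumed
# here. The information depth is commonly taken as ~3x the IMFP.
def relative_probe_depth(kinetic_energy_eV, reference_energy_eV=1000.0, p=0.75):
    """Probe depth relative to that at the reference kinetic energy."""
    return (kinetic_energy_eV / reference_energy_eV) ** p

# Going from ~1.5 keV photoelectrons (lab Al K-alpha excitation) to
# ~5 keV (synchrotron hard-x-ray photoemission) gives a severalfold
# depth gain from the IMFP scaling alone.
gain = relative_probe_depth(5000.0) / relative_probe_depth(1500.0)
```

The scaling alone gives a few-fold increase; the tunable, higher photon energies and far greater intensity of synchrotron beams are what extend accessible depths further still.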
The familiar MOS material stack ( Figure 8 ) is composed of layers often deposited using chemical vapor deposition, atomic layer deposition, or physical vapor deposition on a semiconductor substrate. Device miniaturization to achieve increases in circuit density (as anticipated by Moore’s Law) resulted in SiO2 gate oxides in these MOS structures that were too thin to maintain low leakage currents. Various high-dielectric-constant (high-κ) substitutes such as HfO2 have been developed, and in each case, the layer structures required study.
With nanoscale devices, abrupt morphological changes play an increasingly important role. In multilayer stacks, an obvious area of interest is the interfaces between unlike materials, where chemistry, defect propagation, and chemical contaminants are less predictable and harder to control. Because x-ray photoelectron spectroscopy (XPS) is sensitive to both chemical and electrical environments, it is an important characterization tool for understanding these interfacial phenomena.
Silicon substrate 1s core-level spectra for a multilayer stack with and without a metal cap layer are shown in Figure 9 . A 23-nm layer of Al2O3 covers the silicon, topped by a metal cap of 3 nm. Standard XPS could not detect the substrate silicon signal through the 26-nm overlayer. However, at the US National Institute of Standards and Technology beamline X24A at the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL), detectable photoelectrons were generated using photon beam energies of 3.5–5 keV. Main and satellite peaks revealed a shift in binding energy upon the addition of the 3-nm cap, with band bending near the silicon interface with the overlayer oxide being the likely cause. Reference Church, Weiland and Opila85
Synchrotron facilities continue to push the limits of temporal and spatial resolution. BNL’s Hard X-ray Nanoprobe at its newly constructed NSLS-II is expected to be able to examine IBM’s newest circuit design with 10-nm features. Reference Kramer86
Ultrananocrystalline diamond: An example
Carbon is a wonderfully versatile material. (See the article in this issue by Gogotsi for a discussion of carbon nanomaterials.) Forms including buckyballs (or fullerenes), nanotubes, and graphene receive the most attention these days, but the old standbys of graphite, diamond, and various amorphous allotropes (including soot) are still interesting as well. Diamond displays several desirable chemical and physical properties, but its artificial synthesis in the laboratory or factory, which requires very high pressures and temperatures, was not achieved until 1955. Reference Howard87
Beginning in the 1990s, chemical vapor deposition methods were developed to deposit diamond-like films on substrates using carefully adjusted pressures of hydrogen and hydrocarbon gases. Now, improved processes produce pure diamond films with nano-sized grains for several commercial applications. These films, which retain diamond’s hardness, stiffness, chemical inertness, and tribological and electrical properties, are finding applications as low-friction wear-resistant seals and bearings, sensor substrates, and components in microelectromechanical systems (MEMS). Reference Sumant, Auciello, Carpick, Srinivasan and Butler88–Reference Buja, Sumant, Kokorian and van Spengen90 MEMS development alone relies on cross-sectional SEM and electrical, thermal, and mechanical measurements. Integration of these films with CMOS devices and doping of the diamond with electrically active impurities such as boron have extended both the applications and the characterization needs of this new materials technology. The current availability of 100% diamond tips for atomic force microscopy is another example of the virtuous cycle.
Development of ultrananocrystalline diamond films requires and still relies on several characterization techniques. Reference Carlisle and West91 A chronology of the primary methods can be summarized as follows:
• In 1995, near-edge x-ray absorption fine structure confirmed that nanostructured diamond thin films were phase-pure diamond.
• In 1998–2003, high-resolution TEM showed the critical role of hydrogen in the growth process, grain structure, grain-boundary morphology, impact of nitrogen additions, and low-temperature growth.
• From 2005 to the present, Raman spectroscopy and linear profilometry have provided rapid bulk and surface characterization tools needed to mature the manufacturing process for volume production of real products.
• From 2014 to the present, optical metrology has provided rapid characterization of several critical surface features that impact tribological applications (pump seals, bearings, etc.).
The measurement and development of successful components with demanding surface interactions require combinations of micro-Raman spectroscopy, confocal microscopy, improved high-resolution composition determination, and the measurement and analysis of short-range “roughness” metrics that need to be controlled in a production-inspection environment.
In this article, we have mentioned several characterization tools, some briefly and some at greater length. Those selected are indicative of the range of measurement methods, but are by no means exhaustive; many valuable ones have been omitted. The various modes of electron microscopy and x-ray analysis dominate the leading-edge fundamental studies from which the most penetrating insights are gleaned. It is also clear that, without access to the broadest array of measurement options, from the most modern and sophisticated to the mature and routine, the advanced materials that surround our everyday lives would be far less advanced. The tools themselves will surely continue to improve by continual increments and by the occasional, but inevitable, game-changing innovation.
Quasicrystals and the Gunn effect (see the sidebars on Quasicrystals and the Gunn effect) epitomize how many serendipitous discoveries occur. Observation of x-ray diffraction itself in the laboratory of Max von Laue came as a surprise, Reference Eckert92 as did the extraordinarily narrow nuclear γ-ray resonance absorption line in iridium-191 when recoilless resonant absorption (Mössbauer effect) was first seen. Reference Mössbauer93
Early in the series of attempts to develop the Nobel-worthy blue light-emitting diode, researchers observed an unanticipated enhanced electroluminescence from GaN(Zn) samples that were irradiated by SEM electrons. Reference Amano, Akasaki, Kozawa, Hiramatsu, Sawaki, Ikeda and Ishii94 That led to an understanding of the passivating effect of hydrogen Reference Nakamura, Iwasa, Senoh and Mukai95,Reference Nakamura, Mukai, Senoh and Iwasa96 on otherwise electrically active dopants and to better p-type-doped materials. Reference Amano, Kito, Hiramatsu and Akasaki97
Thus, a quite noticeable aspect of the role of characterization tools in the evolution of materials is the unexpected extra insights and information that our instruments can find in a willing specimen.
Some materials questions must await invention of more sensitive and sophisticated tools before they can be answered. In industrial metallurgy, grain refinement, or inoculation, has become a commonly used process for strengthening alloys by refining their as-cast grain structure. The most routinely used grain-refining material for aluminum alloys is Al–5Ti–1B, which contains the TiAl3 intermetallic and TiB2 particles in an aluminum matrix. Reference Cibula4 Although Al–Ti–B mixtures have been used for more than 60 years and studied intensively, the precise mechanism involved in the inoculation has always attracted a great deal of controversy.
It was initially proposed in the 1950s Reference Cibula4 that the TiB2 particles could be responsible for promoting heterogeneous nucleation. However, subsequent electron-probe microanalysis studies showed that the borides were forced out to the grain boundaries, suggesting a high interfacial energy with aluminum and only an indirect role in grain refinement. In the presence of excess titanium, on the other hand, precipitation of a thin layer of TiAl3 occurred on the boride. Reference Mohanty and Gruzleski5 These observations led to numerous conjectures, hypotheses, and theories on the subject. Reference Fan, Wang, Zhang, Qin, Zhou, Thompson, Pennycooke and Hashimoto6
The understanding of complex melt phenomena was significantly improved by the use of the metallic glass technique Reference McKay, Cizek, Schumacher and O’Reilly7 in tandem with high-resolution transmission electron microscopy (TEM), Reference Schumacher, Greer, Worth, Evans, Kearns, Fisher and Green8 especially for studies of the early stages of nucleation and growth. With this approach, Schumacher and Greer Reference Schumacher, Greer and Hale9 showed the presence of a highly coherent surface layer on a TiB2 particle embedded in an aluminum-based glassy matrix that had lattice spacing consistent with TiAl3. Consequently, it was proposed that this layer makes TiB2 a potent nucleant while saving the TiAl3 from dissolution.
A problem with these observations was that theoretical analysis indicated that a TiAl3 phase should be thermodynamically unstable on the surface of boride particles when there is only a dilute titanium concentration in the melt (typically 0.1 wt% of Al–5Ti–1B alloy). So, could this phase be TiAl3?
This question led to a full armory of electron microscopic characterization techniques being applied at Brunel University, Reference Mohanty and Gruzleski5 including high-resolution TEM, high-resolution scanning TEM, and atomic-resolution electron energy-loss spectroscopy mapping, in particular.Footnote * The existence of a titanium-rich monolayer on the (0001) TiB2 surface was confirmed, as shown in the figure. The nucleation potency of the TiB2 particles is thus significantly increased by the formation of a titanium-rich monolayer.
Effective grain refinement by the Al–5Ti–1B grain refiner was therefore conclusively established. It could now be directly attributed to the enhanced potency of TiB2 particles with the titanium-rich layer and sufficient free titanium solute in the melt after grain-refiner addition to achieve a columnar-to-equiaxed transition, where all grain axes have approximately the same length.
When, in 1982, an electron microscope presented eventual Nobel laureate Daniel Shechtman with a crystallographically impossible result, there was every reason to suspect some kind of malfunction. Yet, after some years of controversy, the microscope and the investigator were vindicated, and Shechtmanite, now known as the quasicrystal, a structure with local icosahedral order but no translational symmetry, was accepted as a new state of matter. Reference Shechtman, Blech, Gratias and Cahn25, Footnote * Unlike the Gunn diode (see the sidebar on the Gunn effect), these quasiperiodic materials have not found their major commercial niche (yet), but the moral of the story is clear. With due respect for possible systematic error, instrumental flaws, and researcher bias, the most unexpected messages from our measurement tools deserve a fair hearing.
The Gunn diode is like no other. Although a two-terminal device, it is made from only lightly n-doped semiconductor material and does not rectify alternating current. Rather, when a high field is applied, its resistivity is reduced, and it displays negative differential resistance that enables it to amplify high frequencies or, when biased with a DC voltage, to oscillate and become unstable. For reasons briefly mentioned later, this device is also known as a “transferred electron device.”
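The transferred-electron mechanism behind this behavior can be caricatured with a toy two-valley drift-velocity model: above a threshold field, electrons shift into a low-mobility satellite valley, so velocity falls as field rises. All parameters below are illustrative round numbers of roughly the right magnitude for GaAs, not measured constants, and the valley-occupancy function is an assumed smooth interpolation.

```python
# Hedged sketch of transferred-electron ("Gunn") behavior: a toy
# two-valley model in which electrons move to a low-mobility valley
# above a threshold field, producing a region of negative differential
# mobility. Parameters are illustrative, not GaAs material constants.
def drift_velocity(E, mu_low=8000.0, v_sat=1.0e7, E_th=3.2e3):
    """Toy v(E) in cm/s for field E in V/cm: linear rise, then decline."""
    frac_upper = (E / E_th) ** 4 / (1 + (E / E_th) ** 4)  # upper-valley share
    return mu_low * E * (1 - frac_upper) + v_sat * frac_upper

# Velocity peaks near the threshold field and then falls: the hallmark
# negative-slope region that lets a DC-biased sample oscillate.
v_below = drift_velocity(1.0e3)
v_peak = drift_velocity(3.2e3)
v_above = drift_velocity(1.0e4)
```

In this sketch, velocity at high field sits well below its peak value, which is the negative differential resistance that makes the two-terminal device an oscillator.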
In 1962, J.B. Gunn, a British physicist working at IBM, was studying high-speed hot-carrier effects in germanium (Ge), particularly noise characteristics. He decided to compare these observations with those he might obtain from semiconductor compounds. Because it was already being produced in sufficiently good quality at his own laboratory, he decided to focus his attention on GaAs. After all, as he wrote later, “n-type GaAs … is really only Ge with a misplaced proton.” Reference Gunn27
Other groups had already predicted, from their knowledge of bandgap theory, that GaAs should exhibit negative differential resistance due to the transfer of electrons in the conduction band. Reference Ridley and Watkins28,Reference Hilsum29 Gunn later admitted that, at the time, he “thought their ideas farfetched and didn’t believe the predictions.” Reference Cahn, Shechtman and Gratias26 Unfortunately, these groups were unable to apply fields greater than 1.5 kV/cm, and their GaAs samples were too heavily doped and of such poor quality that they would, in any case, have masked deviations from Ohm’s law.
Gunn had developed extraordinary expertise in high-speed pulse measurements. He availed himself of a state-of-the-art mercury-relay pulse generator that provided 10-ns pulses, rather than the microsecond pulses he had previously used, and acquired a prototype of a “new-fangled” Tektronix traveling-wave oscilloscope to enable him to see such short pulses. To detect any small changes in resistance, he monitored the pulse reflected down a 50-Ω transmission-line termination that gave him the equivalent of a slightly unbalanced pulse bridge, which was superior to a conventional current–voltage measurement.
With this exploratory, yet advanced electrical characterization setup, a small increase in resistance was detected with rising voltage, but the reflected signal simultaneously became very noisy, exhibiting an amplitude of several amperes, whereas at higher voltages, the resistance decreased significantly. In his laboratory notebook for February 19, 1962, Gunn recorded against the results for 741 V and 861 V, the word “noisy.” Although he has stated that it worried him at the time, he published later that it seemed “the most important single word I ever wrote down.” Reference Gunn27
Gunn suspected at first that this was due to a faulty contact, but observed the same behavior with different GaAs samples and with reversed polarity; the use of a 47-Ω resistor also ruled out defects in the experimental apparatus. He further deduced that the effect had to be generated in a nonlinear portion of the circuit (i.e., the sample) and not in any of the linear components. Finally, and importantly, he found that, during the current–voltage measurements, some portions of the current fell below the low voltage levels—equivalent to a time-dependent negative conductance. The figure gives an example of what Gunn observed when applying 16-V, 10-ns pulses to a thin sliver of GaAs. Reference Gunn30,Reference Gunn31
This is an exemplary case of a scientist refusing to dismiss an apparently noisy, inexplicable measurement, through dogged belief in the analytical equipment employed and the experimental technique being followed. When called by his management to explain what he was doing, he replied, “Any phenomenon capable of turning off 10 Amps in half a nanosecond had to be good for something.” Reference Gunn27 It was received with lukewarm enthusiasm. Twenty-five years later, about a dozen companies were manufacturing some six million “Gunn” diode oscillators per year, used mostly in high-frequency electronics for microwaves, communications, many types of sensors, and radar. Reference Voelcker32
Austenitic stainless steel (16–28% chromium and up to 35% nickel) is used where toughness, strength, and resistance to rust and shocks are required. Chromium imparts resistance to heat and corrosion, and nickel improves elasticity. Austenitic stainless steel is widely used in automotive and aerospace components under conditions of high temperature and severe stress. Failure mechanisms have been extensively examined with a view to prolonging operational lifetimes.
Diesel-engine fuel-injector nozzles have been of specific recent interest. They are usually tested under high-pressure cyclic loading using costly engine test rigs. Fatigue is the main cause of failure. Therefore, analyses to reveal the underlying fatigue mechanism are needed.
As a test case, 18Cr8Ni specimens’ compositions were confirmed using spark emission spectrometry. After being machined, turned, and polished, samples were treated by low-pressure carburizing (LPC), standard gas carburizing, or gas carbonitriding. Vickers indentation methods showed core microhardness to be nearly independent of the type of treatment, and surface hardness values were also all quite similar. Rotating bending tests evaluated fatigue strength, and scanning electron microscopy with electron probe microanalysis and x-ray diffraction revealed defect structures and inclusions associated with fatigue cracks through elemental mapping. Reference Kabir and Bulpett43
Fatigue strength was found to improve significantly for samples finely machined after rather than before heat treatment. This sequence removes chromium oxide formed during gas carburization and promotes grain-boundary etching in the LPC-treated material. In separate examinations of failed engine-stressed nozzles, the expected chromium oxide surface layer was accompanied by penetration of the oxide along MnS inclusions inside the components. These foreign incursions increased the localized stress, which ultimately led to fatigue failure.
Therefore, the mean stress of diesel-engine components is reduced by ever-finer polishing down to 30-µm roughness after appropriate heat treatment, and LPC is preferred because it minimizes—if not eliminates—oxide incursions.
The largest technology-based corporations still maintain their own internal research and development (R&D) facilities (e.g., Alcatel-Lucent’s Bell Labs, IBM, ABB, Siemens, Kyocera, Samsung). However, many companies have eliminated such laboratories, and the rest have noticeably shifted focus away from curiosity-driven to applied-research areas with direct relevance to product development. With the most specialized and sophisticated instruments confined to university and government laboratories, where do small-, medium-, or even large-sized companies go when beyond-routine diagnoses and product-improvement options require understanding of a materials issue? Arrangements to use academic and government resources are, of course, common and often involve collaborative relationships. For highly proprietary studies, however, commercial characterization services are the place to go. That these services primarily support the highly applied end of the R&D chain is borne out by the anecdotal experience of two such providers.Footnote *
Exponential Business and Technologies Co. (Ebatco) Reference Yang44 tells us that they assist customers in making connections to real-world applications, from macro- to micro- down to nanoscale, by providing nanoscale analytical and laboratory testing services. They support customers in R&D of novel materials, new products, and process optimization; root-cause determination of failed parts; system and part-performance verifications; and industrial and regulatory compliance tests. Clients come from the United States, Canada, Europe, and Asia. The great majority of them are industrial and commercial institutions: roughly 95% industrial/commercial, 4% academic, and 1% government. The approximate breakdown across fundamental science, applied research, device and process development, and manufacturing quality control is 5%, 10%, 55%, and 30%, respectively.
“Our client base is dominated by private industrial customers,” according to the Evans Analytical Group (EAG). “However, universities, national labs, and government labs are all substantial … users of our services.” Reference Mowat45 EAG’s questions from its customers are at the interface where fundamental research touches applied research. The larger percentage of EAG’s work is support for applied R&D or product development after fundamental concepts have been confirmed, and the task is to build a prototype or first-generation product. “We help throughout that process. Once a product is being manufactured, we may stay involved to qualify the materials supply chain, quality processes, or help with the failure analysis of products that fail during testing or in the marketplace.”
The authors appreciate discussions with and advice from M. Green (NIST); D. Yang (Ebatco); I. Mowat (EAG); A.P. Malozemoff (American Superconductor); A. Clarke (LANL); Y. Yacoby (HUJI); J.R. Church and Chaoying Ni (University of Delaware); J. Carlisle and C. West (Advanced Diamond Technologies); R. Bulpett, Z. Fan, and Y. Wang (Brunel University); and Roger A. Giblin (deceased) (University College London).