Ion implantation has been the dominant doping technique for silicon integrated circuits (ICs) and most other semiconductors for the past 45 years. It is expected to retain this position of dominance for the foreseeable future. In this process, dopant ions are accelerated to energies of 0.1–1000 keV and smashed into a crystalline semiconductor substrate, creating a cascade of damage that may displace hundreds or thousands of lattice atoms for each implanted ion. In this chapter, we will seek to understand how such an energetic and violent technique has become the dominant and preferred method of doping semiconductor wafers in manufacturing. At first glance, it seems that the technique would not be of much use in the precise art of fabricating integrated circuits. Indeed, although the original patent for ion implantation was issued to William Shockley in 1954, it was not until the late 1970s that ion implantation was used in manufacturing.
In this study, the effects of antagonistic muscle actuation on the propulsion of a bilaminar-structure fish fin ray were investigated using a two-dimensional computational flow–structure interaction (FSI) model. The structure and material properties of the model were based on the realistic biological data of the sunfish fin. The effect of muscle actuation was modelled using root displacement offset between the two hemitrichs. Parametric FSI simulations were conducted by assuming a sinusoidal function of the offset over a cycle and varying the amplitude and phase difference between the actuations and pitching/plunging motions. The results show that the phase of muscle actuation is a critical factor affecting its effects. Three performance regions can be identified with different phase ranges, including a thrust-favour region, an efficiency-favour region and a thrust-efficiency-unfavour region. In each region, the relationships among the root actuations, fin-ray kinematics, vortex dynamics and resulting performance are studied and discussed. Furthermore, a strong positive correlation between the trailing–leading amplitude ratio and thrust coefficient as well as a negative relationship between the efficiency and angle of attack at the centre of mass of the fin ray are observed.
We investigate the effects of fluid elasticity on the flow forces and the wake structure when a rigid cylinder is placed in a viscoelastic flow and is forced to oscillate sinusoidally in the transverse direction. We consider a two-dimensional, uniform, incompressible flow of viscoelastic fluid at $Re=100$, and use the FENE-P model to represent the viscoelastic fluid. We study how the flow forces and the wake patterns change as the amplitude of oscillations, $A^*$, the frequency of oscillations (inversely proportional to a reduced velocity, $U^*$), the Weissenberg number, $Wi$, the square of maximum polymer extensibility, $L^2$, and the viscosity ratio, $\beta$, change individually. We calculate the lift coefficient in phase with cylinder velocity to determine the range of different system parameters where self-excited oscillations might occur if the cylinder is allowed to oscillate freely. We also study the effect of fluid elasticity on the added mass coefficient as these parameters change. The maximum elastic stress of the fluid occurs in between the vortices that are observed in the wake. We observe a new mode of shedding in the wake of the cylinder: in addition to the primary vortices that are also observed in the Newtonian flows, secondary vortices that are caused entirely by the viscoelasticity of the fluid are observed in between the primary vortices. We also show that, for a constant $Wi$, the strength of the polymeric stresses increases with increasing reduced velocity or with decreasing amplitude of oscillations.
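The lift coefficient in phase with the cylinder velocity, used above to delimit where self-excited oscillations might occur, is commonly extracted by Fourier projection of the lift signal onto the velocity of the imposed motion. The sketch below illustrates that standard projection and is not the authors' code; the signal and function names are illustrative.

```python
import numpy as np

def lift_in_phase_with_velocity(t, cl, freq):
    """Fourier-project the lift signal onto the cylinder velocity.

    For imposed motion y(t) = A*sin(2*pi*f*t), the velocity is
    proportional to cos(2*pi*f*t), so the lift component in phase
    with velocity is the cosine Fourier coefficient at frequency f.
    Assumes t spans an integer number of periods on a uniform grid.
    """
    w = 2.0 * np.pi * freq
    T = t[-1] - t[0]
    integrand = cl * np.cos(w * t)
    # Left Riemann sum over whole periods is spectrally accurate here.
    return (2.0 / T) * np.sum(integrand[:-1]) * (t[1] - t[0])

# Synthetic check: 0.3*cos(wt) is in phase with velocity, 1.2*sin(wt) is not.
f = 0.2
t = np.linspace(0.0, 5.0 / f, 5001)          # exactly five periods
cl = 0.3 * np.cos(2*np.pi*f*t) + 1.2 * np.sin(2*np.pi*f*t)
print(lift_in_phase_with_velocity(t, cl, f))  # ≈ 0.3
```

A positive value of this coefficient means the fluid does net work on the cylinder over a cycle, which is the usual criterion for the onset of free vibration.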
If workers from one of today’s multi-billion-dollar integrated circuit (IC) manufacturing plants were suddenly transported to a 1960s semiconductor plant, they would likely be amazed that chips could be successfully manufactured in such a place. Such factories were “dirty” by today’s standards, and wafer cleaning procedures were poorly understood. Of course, chips were manufacturable even in those days, but they were very small and contained very few components by today’s standards. Since defects on a chip tend to reduce yields (fraction of good chips on a wafer) exponentially as chip size increases, small chips can be manufactured with a yield greater than zero even in quite dirty environments. However, all of the progress that has been made in the past six decades in shrinking device sizes and designing very complex chips would have been for naught if similar advances had not been made in manufacturing capability, especially in defect density.
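The exponential dependence of yield on chip size mentioned above is captured by the simplest defect-limited yield model, the Poisson model, in which a chip survives only if it receives zero fatal defects: $Y = e^{-D_0 A}$, where $D_0$ is the defect density and $A$ the chip area. The numbers below are illustrative, not taken from the text.

```python
import math

def poisson_yield(defect_density_per_cm2, chip_area_cm2):
    """Poisson defect-limited yield: defects land at random with mean
    density D0, so the probability a chip of area A collects zero
    fatal defects (and therefore works) is exp(-D0 * A)."""
    return math.exp(-defect_density_per_cm2 * chip_area_cm2)

# Illustrative comparison: tiny 1960s-era chips tolerate a "dirty" line,
# while a large modern chip at the same defect density would yield ~0.
for d0, area in [(10.0, 0.02), (10.0, 1.0), (0.1, 1.0)]:
    print(f"D0={d0:5.1f}/cm^2, A={area:4.2f} cm^2 -> Y={poisson_yield(d0, area):.1%}")
```

This is why shrinking defect densities had to accompany growing chip areas: holding $D_0$ fixed while $A$ grows drives yield toward zero exponentially.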
Insights gained from modal analysis are invoked for predictive large-eddy simulation (LES) wall modelling. Specifically, we augment the law of the wall (LoW) by an additional mode based on a one-dimensional proper orthogonal decomposition (POD) applied to a two-dimensional turbulent channel. The constructed wall model contains two modes, i.e. the LoW-based mode and the POD-based mode, and the model matches with the LES at two, instead of one, off-wall locations. To show that the proposed model captures non-equilibrium effects, we perform a priori and a posteriori tests in the context of both equilibrium and non-equilibrium flows. The a priori tests show that the proposed wall model captures extreme wall-shear stress events better than the equilibrium wall model. The model also captures non-equilibrium effects due to adverse pressure gradients. The a posteriori tests show that the wall model captures the rapid decrease and the initial decrease of the streamwise wall-shear stress in channels subjected to suddenly imposed adverse and transverse pressure gradients, respectively, both of which are missed by currently available wall models. These results show promise in applying modal analysis for turbulence wall modelling. In particular, the results show that employing multiple modes helps in the modelling of non-equilibrium flows.
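For reference, the law of the wall that the model augments is, in standard notation (the usual constants are $\kappa \approx 0.41$ and $B \approx 5.2$; the POD-based mode itself is specific to this work and not reproduced here):

```latex
u^+ = \frac{1}{\kappa}\,\ln y^+ + B,
\qquad
u^+ \equiv \frac{u}{u_\tau},
\quad
y^+ \equiv \frac{y\,u_\tau}{\nu},
```

where $u_\tau$ is the friction velocity and $\nu$ the kinematic viscosity. An equilibrium wall model matches this single profile to the LES at one off-wall location; the two-mode model described above adds a degree of freedom and therefore requires a second matching location.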
Trip-resolved large-eddy simulations of the DARPA SUBOFF are performed to investigate the development of turbulent boundary layers (TBLs) in model-scale studies. The primary consideration of the study is the extent to which the details of tripping affect statistics in large-eddy simulations of complex geometries, which are presently limited to moderate Reynolds number TBLs. Two trip wire configurations are considered, along with a simple numerical trip (wall-normal blowing), which serves as an exemplar of artificial computational tripping methods often used in practice. When the trip wire height exceeds the laminar boundary layer thickness, shedding from the trip wire initiates transition, and the near field is characterized by an elevation of the wall-normal Reynolds stress and a modification of the turbulence anisotropy and mean momentum balance. This trip wire also induces a large jump in the boundary layer thickness, which affects the way in which the TBL responds to the pressure gradients and streamwise curvature of the hull. The trip-induced turbulence decays along the edge of the TBL as a wake component that sits on top of the underlying TBL structure, which dictates the evolution of the momentum and displacement thicknesses. In contrast, for a trip wire height shorter than the laminar boundary layer thickness, transition is initiated at the reattachment point of the trip-induced recirculation bubble, and the artificial trip reasonably replicates the resolved trip wire behaviour relatively shortly downstream of the trip location. For each case, the inner layer collapses rapidly in terms of the mean profile, Reynolds stresses and mean momentum balance, which is followed by the collapse of the Reynolds stresses in coordinates normalized by the local momentum thickness, and finally against the 99 % thickness. 
By this point, the lasting impact of the trip is the offset in boundary layer thickness due to the trip itself, which becomes a diminishing fraction of the total boundary layer thickness as the TBL grows. The importance of tripping the model appendages is also highlighted due to their lower Reynolds numbers and susceptibility to laminar separations.
We present detailed characterization of laser-driven fusion and neutron production ($\sim {10}^5$/second) using 8 mJ, 40 fs laser pulses on a thin (<1 μm) D${}_2$O liquid sheet employing a measurement suite. At relativistic intensity ($\sim 5\times {10}^{18}$ W/cm${}^2$) and high repetition rate (1 kHz), the system produces deuterium–deuterium (D-D) fusion, allowing for consistent neutron generation. Evidence of D-D fusion neutron production is verified by a measurement suite with three independent detection systems: an EJ-309 organic scintillator with pulse-shape discrimination, a ${}^3\mathrm{He}$ proportional counter and a set of 36 bubble detectors. Time-of-flight analysis of the scintillator data shows the energy of the produced neutrons to be consistent with 2.45 MeV. Particle-in-cell simulations using the WarpX code support significant neutron production from D-D fusion events in the laser–target interaction region. This high-repetition-rate laser-driven neutron source could provide a low-cost, on-demand test bed for radiation hardening and imaging applications.
In this paper, transient granular flows are examined both numerically and experimentally. Simulations are performed using the continuous three-dimensional (3-D) granular model introduced in Daviet & Bertails-Descoubes (ACM Trans. Graph., vol. 35, no. 4, 2016b, p. 102), which represents the granular medium as an inelastic and dilatable continuum subject to the Drucker–Prager yield criterion in the dense regime. One notable feature of this numerical model is that it resolves such a non-smooth rheology without any regularisation. We show that this non-smooth model, which relies on a constant friction coefficient, is able to reproduce with high fidelity various experimental granular collapses over inclined erodible beds, provided the friction coefficient is set to the avalanche angle – and not to the stop angle, as generally done. In order to better characterise the range of validity of the fully plastic rheology in the context of transient frictional flows, we further revisit scaling laws relating the shape of the final collapse deposit to the initial column aspect ratio, and accurately recover established power-law dependences up to aspect ratios of the order of 10. The influence of sidewall friction is then examined through experimental and simulated collapses with varying channel widths. The analysis offers a comprehensive framework for estimating the effective flow thickness in relation to the channel width, thereby challenging previously held assumptions regarding its estimation in the literature. Finally, we discuss the possibility of extending the constant-coefficient model with a hysteretic model in order to refine the predictions of the early-stage dynamics of the collapse. This illustrates the potential effects of such phenomenology on transient flows, paving the way to more elaborate analyses.
We study a fifty-year-old problem of fast acoustic streaming, that is, the generation of moderate or large hydrodynamic Reynolds number ($\textit {Re}$) acoustic streaming (or steady flow) by the convection of momentum in an acoustic wave (or another periodic flow), while the latter is simultaneously altered by the former. The intrinsic disparity of length and time scales makes a brute-force solution of the full Navier–Stokes and continuity equations a formidable problem. Circumventing this difficulty, we split the problem into a time-averaged system of equations for the steady flow component and a dynamic system of equations for its quasi-periodic flow counterpart. The latter system of equations is obtained by subtracting the time-averaged Navier–Stokes equation from its original dynamic form, and is rendered a nonlinear wave equation using the continuity equation and an adiabatic connection between density and pressure. The resulting equations are compatible with the theory by Eckart for small $\textit {Re}$ flow, and capture large-$\textit {Re}$ effects. Scaling analysis and a case study show that acoustic streaming is weak and does not contribute to the acoustic wave close to the wave source, relevant to many microfluidic systems. At small $\textit {Re}$, the streaming magnitude is proportional to an inverse Strouhal number, a small quantity in experiments. Moderate and large $\textit {Re}$ render the streaming magnitude comparable to the pre-attenuating periodic flow (or particle velocity of the wave) at approximately a wave attenuation length away from the wave source or further; the wave is altered by the streaming that it generates, and the streaming dominates the flow far from the wave source.
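In symbols, the splitting described above is the standard decomposition of the velocity into a steady (time-averaged) part and a quasi-periodic remainder (the notation here is generic, not necessarily the paper's):

```latex
\boldsymbol{u} = \bar{\boldsymbol{u}} + \boldsymbol{u}',
\qquad
\bar{\boldsymbol{u}}(\boldsymbol{x}) \equiv \frac{1}{T}\int_{t}^{t+T}\boldsymbol{u}(\boldsymbol{x},s)\,\mathrm{d}s,
```

where $T$ is the oscillation period. Time-averaging the Navier–Stokes and continuity equations yields the system for the steady streaming $\bar{\boldsymbol{u}}$, while subtracting that averaged system from the full dynamic equations yields the system for the quasi-periodic component $\boldsymbol{u}'$, which the paper recasts as a nonlinear wave equation.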
Radar absorption structures made of active frequency selective surfaces (AFSS) have enormous potential in the aviation, naval, and other industries. In this research paper, a systematic review (SR) of the AFSS field is carried out to bring out the uncertainties, obstacles, challenges, classifications, applications, and design issues that arise in the development of sub-6 GHz architectures. To bias the AFSS component as the signal requires, a dedicated biasing element (a PIN diode) is needed, with ON and OFF states and a transmission zone whose bandwidth is determined by the supplied bias voltage. By applying ON and OFF bias voltages to a PIN diode embedded in an FSS structure, the surface can behave as a complicated hybrid impedance structure. Higher manufacturing costs of AFSS components, greater design complexity, large power consumption, and reactive impedance losses are common limitations faced when designing and implementing an AFSS. Many of the anticipated problems are addressed through AFSS design, and current and creative implementations and processing parameters are examined progressively. This suggests that new AFSSs will become an alternative to regular FSSs in the future. This paper is based on Kitchenham’s three-phase review procedure and supplements it with results, views, and recommendations from other leading experts in the field.
When a saturated brine layer is cooled from above, both a convective temperature front and a front of sedimenting salt crystals can form. We employ direct numerical simulations to investigate the evolution and interaction of these two density fronts. Depending on the ratio of the temperature front velocity and the crystal settling velocity, which is governed by a dimensionless parameter in the form of a Rayleigh number, we find that either two separate fronts exist for all times, two initially separate fronts combine into a single front after some time or a single front exists at all times. We furthermore propose approximate scaling laws for the propagation of the thermal and crystal fronts in each regime and compare them with the simulation data, with generally good agreement.
The velocity interferometer system for any reflector (VISAR) coupled with a streaked optical pyrometer (SOP) system is used as a diagnostic tool in inertial confinement fusion (ICF) experiments involving equations of state and shock timing. To validate the process of adiabatically compressing the fuel shell through precise tuning of shocks in experimental campaigns for the double-cone ignition (DCI) scheme of ICF, a compact line-imaging VISAR with an SOP system is designed and implemented at the Shenguang-II upgrade laser facility. The temporal and spatial resolutions of the system are better than 30 ps and 7 μm, respectively. An illumination lens is used to match the illumination spot size to the target size. A polarization beam splitter and λ/4 waveplate are used to increase the transmission efficiency of our system. The VISAR and SOP work at 660 and 450 nm, respectively, to differentiate the signals from the scattered light of the drive lasers. The VISAR can measure the shock velocity. At the same time, the SOP system can give the shock timing and relative strength. This system has been used in different DCI campaigns, where the generation and propagation processes of multiple shocks are carefully diagnosed.
We use the Dyson–Wyld diagrammatic technique to analyse the infinite series for the correlation functions of the velocity in hydrodynamic turbulence. We demonstrate the fundamental role played by the triple correlator of the velocity in determining the entire statistics of the hydrodynamic turbulence. All higher-order correlation functions are expressed through the triple correlator. This is shown through the suggested triangular re-summation of the infinite diagrammatic series for multi-point correlation functions. The triangular re-summation is the next logical step after the Dyson–Wyld line re-summation for the Green's function and the double correlator. In particular, it allows us to explain why the inverse cascade of the two-dimensional hydrodynamic turbulence is close to Gaussian. Since the triple correlator dictates the flux of energy $\varepsilon$ through the scales, we support the Kolmogorov-1941 idea that $\varepsilon$ is one of the main characteristics of hydrodynamic turbulence.
The dynamics of turbulent flows is chaotic and difficult to predict. This makes the design of accurate reduced-order models challenging. The overarching objective of this paper is to propose a nonlinear decomposition of the turbulent state to predict the flow based on a reduced-order representation of the dynamics. We divide the turbulent flow into a spatial problem and a temporal problem. First, we compute the latent space, which is the manifold onto which the turbulent dynamics live. The latent space is found by a series of nonlinear filtering operations, which are performed by a convolutional autoencoder (CAE). The CAE provides the decomposition in space. Second, we predict the time evolution of the turbulent state in the latent space, which is performed by an echo state network (ESN). The ESN provides the evolution in time. Third, by combining the CAE and the ESN, we obtain an autonomous dynamical system: the CAE-ESN. This is the reduced-order model of the turbulent flow. We test the CAE-ESN on the two-dimensional Kolmogorov flow and the three-dimensional minimal flow unit. We show that the CAE-ESN: (i) finds a latent-space representation of the turbulent flow that has ${\lesssim }1\,\%$ of the degrees of freedom than the physical space; (ii) time-accurately and statistically predicts the flow at different Reynolds numbers; and (iii) takes ${\lesssim }1\,\%$ computational time to predict the flow with respect to solving the governing equations. This work opens possibilities for nonlinear decomposition and reduced-order modelling of turbulent flows from data.
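The temporal half of the CAE-ESN pipeline can be illustrated with a minimal echo state network: a fixed random reservoir driven by the latent signal, with only a linear readout trained by ridge regression, then run closed-loop for autonomous prediction. This is a generic ESN sketch, not the paper's architecture; the CAE is replaced by a simple stand-in latent signal, and all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a latent-space trajectory (a CAE would produce this from
# flow snapshots; here a plain sine, purely for illustration).
dt = 0.02 * np.pi
u = np.sin(dt * np.arange(4000))

# Echo state network: fixed random reservoir, trained linear readout.
N, rho = 200, 0.9
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))      # set spectral radius

# Teacher-forced pass: X[i] is the reservoir state after seeing u[0..i].
X = np.zeros((len(u) - 1, N))
x = np.zeros(N)
for i in range(len(u) - 1):
    x = np.tanh(W_in * u[i] + W @ x)
    X[i] = x

# Ridge-regression readout mapping state x_i to the next sample u[i+1].
washout, train = 200, 3000
A, b = X[washout:train], u[washout + 1:train + 1]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)

# Closed-loop (autonomous) prediction: feed predictions back as input.
x, y, pred = X[train - 1].copy(), u[train], []
for _ in range(200):
    x = np.tanh(W_in * y + W @ x)
    y = x @ W_out
    pred.append(y)

err = np.max(np.abs(np.array(pred) - u[train + 1:train + 201]))
print(f"max closed-loop error over 200 steps: {err:.3f}")
```

In the full CAE-ESN, the input and output of this loop would be the CAE's latent coordinates rather than a scalar, and the decoder would map predicted latent states back to flow fields.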
We develop a theoretical model to study (dense) two-dimensional gravity current flow in a laterally extensive porous medium experiencing leakage through a discrete fissure situated along this boundary at some finite distance from the injection point. Our model, which derives from the depth-averaged mass and buoyancy equations in conjunction with Darcy's law, considers dispersive mixing between the gravity current and the surrounding ambient by allowing two different gravity current phases. We thus define a bulk phase consisting of fluid whose density is close to that of the source fluid and a dispersed phase consisting of fluid whose density is close to that of the ambient. We characterize the degree of dispersion by estimating, as a function of time, the buoyancy of the dispersed phase and the separation distance between the bulk nose and the dispersed nose. On this basis, it can be shown that the amount of dispersion depends on the flow conditions upstream of the fissure, the fissure permeability and the vertical and horizontal extents of the fissure. We also show that dispersion is larger when the gravity current propagates along an inclined barrier rather than along a horizontal barrier. Model predictions are fitted against numerical simulations. The simulations in question are performed using COMSOL and consider different inclination angles and fissure and upstream flow conditions. Our study is motivated by processes related to underground $\mathrm {H}_2$ storage, for example the irrecoverable loss of $\mathrm {H}_2$ when it is injected into the cushion gas saturating an otherwise depleted natural gas reservoir.
As sketched in the metamodel of Chapter 2, literary experiences result from the dynamic interaction between author, (con-)text and reader. Neurocomputational Poetics focusses on text and reader because these can best be characterized by quantifiable features. In the previous chapter on text analysis, I presented methods for computing text features that, according to the NCPM, can bias a reader's mind more towards the upper or the lower route of processing. This bias can be induced globally by the choice of a novel instead of a poetry collection, for example, or locally when re-reading a section of a text to reflect upon its form or content. In this chapter, I deal with both the reader and the act of reading. The ‘reading motivation and mode’ box of the mesomodel in Chapter 2 brings a number of reader-related factors into play that also influence this bias towards one of the two routes. Among those are stable personality variables called ‘traits’ or more transient, local aspects like spontaneous mood management called ‘states’ in personality psychology. Here, I discuss methods for analyzing these in Neurocomputational Poetics studies.
Reader Analysis
Most empirical studies of literature and reading psychology focus on the ‘average reader’: a purely statistical creature typically represented by mean values averaged across the data from some rather small (N∼20) and generally non-representative sample. Indeed, the large majority of empirical studies on reading so far have used undergraduate psychology students. When examining the processing of non-literary, short expository texts – so-called textoids – typical for these studies, the distortion in the data produced by this overselective sampling method may not be as detrimental as when studying the reception of verbal art. But even if the error introduced by this sampling method were negligible, reading ultimately remains a solitary, subjective and private act. It goes without saying that readers have different cultural and social backgrounds, education, habits, skills, personalities and preferences. And all these produce variables that contribute to the reading act and can be more or less well assessed. Luckily, psychology also offers methods to study readers’ reading skills, personalities or interests, and these provide useful data when trying to predict the outcome of a reading act via models like the NCPM. Indeed, empirical studies have shown that reading can change both personality states and traits, and these, in turn, can change the way texts are read and appreciated.
In the preceding chapters, I have laid the ground for actual applications in Neurocomputational Poetics: we have the model and a set of methods for text, reader and reading act analyses. The next two chapters discuss concrete examples of how we can apply this toolbox.
My aim in the present chapter about simple applications is to
• make people who love literature aware of methods for computational poetics and their utility in furthering our understanding of how the pleasures of reading are constructed in the brain in response to a myriad of simple features that, in concert, produce a complex symphony;
• show people who have no skills or interest in programming languages how to apply simple tools and ready-to-go apps that produce fascinating analyses of complex texts, offering not only new insights about verbal art but also testable predictions for scientific studies.
Euphony and Eusemy: Sound and Meaning Beauty
In poetry speech sounds spontaneously and immediately display their proper semantic function.
–––Jakobson and Waugh (1979, p. 225)
According to Tenet 2 in Chapter 2, poetic effects start at the micropoetic level of single words, and already young children are able to both perceive and produce them. There are a number of simple methods to compute the potential of single words to create such effects based on two fundamental aspects: sound and meaning. No doubt, words can have a more or less pleasing sound such as in ‘pee’ vs. ‘piss’ and they can have more or less ugly meanings such as in ‘murder’ vs. ‘beauty’. But how do these two potential sources of micropoetic effects interact at the lexical level? And what role do they play when acting in the context of a line or stanza?
From the very beginning of poetry on Sumerian tablets in the twenty-fourth century BC, poets knew that sound and meaning need not be independent – as posited in de Saussure's famous first principle of general linguistics – but could very well influence each other. A book summarizing the results of the annual elections of the most beautiful German word is full of examples of words in which ‘euphony ∼ eusemy’, that is, words that are beautiful in both sound and meaning. On the other hand, there are words that mean something beautiful, a colourful butterfly for instance, but do not sound pleasant. The German word for butterfly, ‘Schmetterling’, is a notorious example.