This study presents a comprehensive analysis of the extreme positive and negative events of wall shear stress and heat flux fluctuations in compressible turbulent boundary layers (TBLs) obtained by direct numerical simulation. To examine compressibility effects, we focus on the extreme events in two representative cases, i.e. a supersonic TBL at Mach number $M=2$ and a hypersonic TBL at $M=8$, by scrutinizing the coherent structures and their correlated dynamics through conditional analysis. As characterized by the spatial distribution of wall shear stress and heat flux, the extreme events are found to be closely related to the structural organization of wall streaks and, in the hypersonic TBL, to the occurrence of alternating positive and negative structures (APNSs). These two types of coherent structures are strikingly different: the wall streaks and APNSs are shown to be associated with solenoidal and dilatational fluid motions, respectively. A quantitative analysis using a volumetric conditional average is performed to identify and extract the coherent structures that directly account for the extreme events. It is found that in the supersonic TBL the essential ingredients of the conditional field are hairpin-like vortices, whose combinations can induce wall streaks, whereas in the hypersonic TBL the essential ingredients are hairpin-like vortices together with near-wall APNSs. To quantify the momentum and energy transport mechanisms underlying the extreme events, we propose a novel decomposition method for extreme skin friction and heat flux, based on the integral identities of the conditionally averaged governing equations. Using this decomposition, the dominant transport mechanisms of the hairpin-like vortices and APNSs are revealed.
Specifically, the momentum and energy transport undertaken by the hairpin-like vortices arises from multiple comparable mechanisms, whereas that by the APNSs is convection dominated. Consequently, the dominant transport mechanisms of the extreme events in the supersonic and hypersonic TBLs are fundamentally different.
Helicopter component load estimation can be achieved through a variety of machine learning techniques and algorithms. A range of ensemble integration techniques was investigated to leverage multiple machine learning models for estimating main rotor yoke loads from flight state and control system parameters. The techniques included simple averaging, weighted averaging and forward selection. Model performance was evaluated using four metrics: root mean squared error, correlation coefficient, and the interquartile ranges of these two metrics. In every comparison, the ensembles outperformed the best individual model, with the forward-selection ensembles achieving the best performance. The resulting output is more robust, more highly correlated and achieves lower error values than the top individual models. While individual model outputs can vary significantly, confidence in their results can be greatly increased through the use of a diverse set of models and ensemble techniques.
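The forward-selection idea mentioned above can be illustrated with a short, self-contained sketch. This is illustrative only: the function names and the greedy, with-replacement variant shown here are assumptions, not the authors' implementation.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between targets and predictions."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def forward_selection_ensemble(preds, y_val, max_members=10):
    """Greedy forward selection: repeatedly add the model (with replacement)
    whose inclusion most reduces validation RMSE of the averaged ensemble.
    `preds` is a list of per-model prediction arrays on the validation set."""
    selected = []
    best_err = np.inf
    for _ in range(max_members):
        best_idx = None
        for i in range(len(preds)):
            trial = selected + [i]
            ensemble = np.mean([preds[j] for j in trial], axis=0)
            err = rmse(y_val, ensemble)
            if err < best_err:
                best_err, best_idx = err, i
        if best_idx is None:  # no addition improves the ensemble; stop
            break
        selected.append(best_idx)
    return selected, best_err
```

Averaging the selected members (with repeats allowed) is what makes this greedy procedure robust: a strong model can be weighted implicitly by being chosen several times.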
This paper investigates the multilayer Rayleigh–Taylor instability (RTI) using statistically stationary experiments conducted in a gas tunnel. Employing diagnostics such as particle image velocimetry (PIV) and planar laser induced fluorescence (PLIF), we make simultaneous velocity–density measurements to study how dynamics and mixing are linked in this variable density flow. Experiments are conducted in a newly built, blow-down three-layer gas tunnel facility. Mixing between three gas streams is studied, where the top and bottom streams consist of air, and the middle stream is an air–helium mixture. Shear is minimized between these streams by matching their inlet velocities. The four experimental conditions investigated here consist of two different density ratios (Atwood numbers 0.3 and 0.6), each investigated at two instability development times (or equivalently, two streamwise locations), and all experiments use the same middle-stream thickness of 3 cm. The growth of the middle layer is measured using laser-based planar Mie scattering visualization. The mixing width is found to grow linearly with time at late times. Various quantitative measures of molecular mixing indicate a very high degree of molecular mixing at late times in the multilayer RTI flow. The vertical turbulent mass flux $a_y$ is calculated. In addition to mostly negative values of $a_y$, typical of buoyancy-dominated flows due to negative correlation between velocity and density fluctuations, positive regions are also observed in profiles of $a_y$ due to entrainment and erosion at the lower edge of the mixing region. Global energy budgets are calculated for the multilayer RTI flow at late times, and it is found that the majority of the potential energy released has been dissipated by viscous effects, with a large mixing efficiency ($\sim$60 %) observed.
Musculoskeletal disorders have the highest prevalence among work-related health problems. Due to the aging population, the prevalence of shoulder pain among workers in physically demanding occupations is increasing, raising costs to society and underlining the need for preventive technologies. Wearable support structures are designed to reduce the physical workload during physically demanding tasks. Here, we evaluate the physiological benefit of the DeltaSuit, a novel passive shoulder exoskeleton, using an assessment framework that conforms to the approach proposed in the literature.
In this study, 32 healthy volunteers performed isometric, quasi-isometric, and dynamic tasks that represent typical overhead work to evaluate the DeltaSuit performance. Muscle activity of the arm, neck, shoulder, and back muscles, as well as cardiac cost, perceived exertion, and task-related discomfort during task execution with and without the exoskeleton were compared.
When working with the DeltaSuit, muscle activity was reduced by up to 56% (p < 0.001) in the Trapezius Descendens and up to 64% (p < 0.001) in the Deltoideus Medius. Furthermore, we observed no additional loading on the abdominal and back muscles. The use of the exoskeleton resulted in statistically significant reductions in cardiac cost (15%, p < 0.05), perceived exertion (21.5%, p < 0.001), and task-related discomfort in the shoulder (57%, p < 0.001).
These results suggest that passive exoskeletons, such as the DeltaSuit, have the potential to meaningfully support users performing tasks in overhead postures, and offer a valuable means of relieving biomechanical strain on critical body parts for workers at high risk of musculoskeletal disorders.
Non-stationarity is the rule in the atmospheric boundary layer (ABL). Under such conditions, the flow may experience departures from equilibrium with the underlying surface stress, misalignment of shear stresses and strain rates, and three-dimensionality in turbulence statistics. Existing ABL flow theories are primarily established for statistically stationary flow conditions and cannot predict such behaviours. Motivated by this knowledge gap, this study analyses the impact of time-varying pressure gradients on mean flow and turbulence over urban-like surfaces. A series of large-eddy simulations of pulsatile flow over cuboid arrays is performed, systematically varying the oscillation amplitude $\alpha$ and forcing frequency $\omega$. The analysis focuses on both long-time-averaged and phase-dependent flow dynamics. Inspection of long-time-averaged velocity profiles reveals that the aerodynamic roughness length $z_0$ increases with $\alpha$ and $\omega$, whereas the displacement height $d$ appears to be insensitive to these parameters. In terms of oscillatory flow statistics, it is found that $\alpha$ primarily controls the oscillation amplitude of the streamwise velocity and Reynolds stresses, but has a negligible impact on their wall-normal structure. On the other hand, $\omega$ determines the size of the region affected by the unsteady forcing, which identifies the so-called Stokes layer thickness $\delta _s$. Within the Stokes layer, phase-averaged resolved Reynolds stress profiles feature substantial variations during the pulsatile cycle, and the turbulence is out of equilibrium with the mean flow. Two phenomenological models are proposed that capture the influence of flow unsteadiness on $z_0$ and $\delta _s$, respectively.
Laser-directed energy deposition (L-DED) is a key enabling technology for the repair of high-value aerospace components, as damaged regions can be removed and replaced with additively deposited material. While L-DED repair improves strength and fatigue performance compared to conventional subtractive techniques, mechanical performance can be limited by process-related defects. To assess the role of oxygen in defect formation, local and chamber-based shielding methods were applied in the repair of 300M high-strength steel. Oxidation between layers in locally shielded specimens is confirmed to cause large gas pores, which have deleterious effects on fatigue life. Such pores are eliminated in chamber-shielded specimens, resulting in an increased ductility of ∼15%, compared to ∼11% with local shielding. Unmelted powder defects, however, are unaffected by oxygen content and are found in both chamber- and locally shielded samples, with negative consequences for fatigue in both cases.
There have been consistent calls for more research on managing teams and embedding processes in data science innovations. Widely used frameworks (e.g., the cross-industry standard process for data mining) provide a standardized approach to data science but are limited in features such as role clarity, skills, and cross-team collaboration that are essential for developing organizational capabilities in data science. In this study, we introduce a data workflow method (DWM) as a new approach to break organizational silos and create a multi-disciplinary team to develop, implement and embed data science. Unlike current data science process workflows, the DWM is managed at the system level, shaping the business operating model for continuous improvement, rather than as a function of a particular project, a single business unit, or isolated individuals. To further operationalize the DWM approach, we investigated an embedded data workflow at a mining operation that has been using geological data in a machine-learning model to stabilize daily mill production for the past two years. Based on the findings of this study, we propose that the DWM approach derives its capability from three aspects: (a) a systemic data workflow; (b) multi-disciplinary networks of collaboration and responsibility; and (c) clearly identified data roles and the associated skills and expertise. This study suggests a whole-of-organization approach and pathway to developing data science capability.
Modeling complex dynamical systems with only partial knowledge of their physical mechanisms is a crucial problem across all scientific and engineering disciplines. Purely data-driven approaches, which rely only on an artificial neural network and data, often fail to simulate the evolution of the system dynamics accurately over sufficiently long times and in a physically consistent manner. Therefore, we propose a hybrid approach that combines a neural network model with an incomplete partial differential equation (PDE) solver providing known, but incomplete, physical information. In this study, we demonstrate that the results obtained from the incomplete PDEs can be efficiently corrected at every time step by the proposed hybrid neural network/PDE solver model, so that the effect of the unknown physics present in the system is correctly accounted for. For validation purposes, the simulations of the hybrid model are successfully compared against results from the complete set of PDEs describing the full physics of the considered system. We demonstrate the validity of the proposed approach on a reactive flow, an archetypal multi-physics system that combines fluid mechanics and chemistry, the latter being the physics considered unknown. Experiments are conducted on planar and Bunsen-type flames at various operating conditions. The hybrid neural network/PDE approach correctly models the flame evolution of the cases under study over significantly long time windows, yields improved generalization and allows for larger simulation time steps.
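The per-time-step correction idea can be sketched as follows. This is a minimal illustration, not the paper's model: the diffusion-only "incomplete" solver, the zero-correction placeholder standing in for the trained network, and all names are assumptions.

```python
import numpy as np

def incomplete_pde_step(u, dt, nu=0.1):
    """Known physics only: explicit diffusion step on a periodic 1-D grid
    (unit grid spacing), standing in for the incomplete PDE solver."""
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    return u + dt * nu * lap

def nn_correction(u):
    """Stand-in for the trained network that models the missing physics
    (e.g. chemistry); here a zero-correction placeholder."""
    return np.zeros_like(u)

def hybrid_step(u, dt):
    """One hybrid step: advance with the incomplete solver, then apply
    the learned correction to account for the unknown physics."""
    u_partial = incomplete_pde_step(u, dt)
    return u_partial + nn_correction(u_partial)

# Evolve a smooth initial profile for 100 hybrid steps.
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
for _ in range(100):
    u = hybrid_step(u, dt=0.2)
```

In the actual method, `nn_correction` would be a trained network evaluated inside the time loop, so the correction compounds consistently with the resolved physics at every step.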
A solid body in a viscous fluid undergoing oscillatory motion naturally produces a steady secondary flow due to convective inertia. This phenomenon is embodied in the streaming flow generated by a sphere in an unbounded fluid executing rectilinear oscillations. We review the considerable literature on this canonical problem and summarise exact and asymptotic formulas in the small-amplitude limit. These analytical formulas are used to explore the characteristic flow structure of this problem and clarify previously unreported features. A single, toroidal-shaped vortex exists in each hemisphere regardless of the oscillation frequency, which can drive a counter-flow away from the sphere. The vortex centre moves monotonically away from the sphere with decreasing oscillation frequency, and engulfs the entire flow domain for $\beta \equiv \omega R^2/\nu < 16.317$, where $\omega$ is the angular oscillation frequency, $R$ the sphere radius, and $\nu$ the fluid kinematic viscosity. This seemingly abrupt change in flow structure at the critical frequency $\beta _{critical} =16.317$, and its quantification, appear not to have been reported previously. We perform a direct numerical simulation of the Navier–Stokes equations to (1) confirm the existence of this critical frequency at finite amplitude, and (2) examine its variation with amplitude. This reveals a universal relationship between the critical frequency and oscillation amplitude, clarifying previous reports on the structure of this streaming flow. The critical frequency is shown to be identical for the streaming flow and the cycle-averaged particle paths, establishing that the critical frequency is accessible directly using standard measurements.
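The dimensionless frequency and the reported threshold can be evaluated directly. In this sketch, only the formula $\beta = \omega R^2/\nu$ and the value 16.317 come from the text; the numerical inputs (a 1 mm sphere in water oscillating at 10 rad/s) are illustrative assumptions.

```python
def stokes_beta(omega, R, nu):
    """Dimensionless frequency beta = omega * R^2 / nu for an
    oscillating sphere of radius R in a fluid of kinematic viscosity nu."""
    return omega * R**2 / nu

# Threshold below which the vortex engulfs the entire flow domain
# (value reported in the text).
BETA_CRITICAL = 16.317

# Illustrative values: 1 mm sphere in water (nu ~ 1e-6 m^2/s), 10 rad/s.
beta = stokes_beta(omega=10.0, R=1e-3, nu=1e-6)
engulfed = beta < BETA_CRITICAL
```

For these example values, $\beta = 10$, which lies below the critical value, so the toroidal vortex would occupy the whole domain.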
This chapter presents an overview of VR systems, from hardware (Section 2.1) to software (Section 2.2) to human perception (Section 2.3), including an introduction to the Virtual World Generator (VWG), which maintains the geometry and physics of the virtual world. The purpose is to quickly provide a sweeping perspective so that the detailed subjects in the remaining chapters will be understood within the larger context.
The primary task of electrostatics is to find the electric field of a given stationary charge distribution. In principle, this task is accomplished by Coulomb’s law, in the form of Eq. 2.8.
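For reference, the field integral this passage refers to, Coulomb's law for a continuous charge distribution, can be written in standard notation (the symbols below follow the usual convention and are not reproduced from the excerpt) as
$$\mathbf{E}(\mathbf{r}) \;=\; \frac{1}{4\pi\epsilon_0}\int \frac{\rho(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|^{2}}\,\widehat{\left(\mathbf{r}-\mathbf{r}'\right)}\,\mathrm{d}\tau',$$
where $\rho(\mathbf{r}')$ is the charge density, $\widehat{(\mathbf{r}-\mathbf{r}')}$ is the unit vector from the source point to the field point, and the integral runs over the source distribution.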
The fundamental problem electrodynamics hopes to solve is this (Fig. 2.1): We have some electric charges (call them source charges); what force do they exert on another charge (call it the test charge)? The positions of the source charges are given (as functions of time); the trajectory of the test particle is to be calculated.
Remember the basic problem of classical electrodynamics: we have a collection of charges (the “source” charges), and we want to calculate the force they exert on some other charge (the “test” charge – Fig. 2.1). According to the principle of superposition, it is sufficient to find the force due to a single source charge – the total is then the vector sum of all the individual forces.
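The superposition step described here can be written compactly (the notation below is generic, not taken from the excerpt): if the individual source charges exert forces $\mathbf{F}_1, \mathbf{F}_2, \mathbf{F}_3, \ldots$ on the test charge, the total force is simply
$$\mathbf{F} \;=\; \mathbf{F}_1 + \mathbf{F}_2 + \mathbf{F}_3 + \cdots \;=\; \sum_i \mathbf{F}_i,$$
so the many-source problem reduces to solving the single-source problem and summing.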