This study proposes a geometric solution to the norm differential game design problem in target-attacker-defender (TAD) engagements, addressing key limitations of conventional zero-effort-miss approaches. By leveraging the geometric analogy between guidance-law-generated trajectories and Dubins paths, we reformulate the derivation of zero-effort-miss-based guidance laws as a Nash equilibrium optimisation problem, with optimal strategies determined through reachable-set analysis of the Dubins path frontier. The resulting model is a non-convex optimisation problem, which prevents the derivation of traditional state-feedback control laws. To overcome this limitation and enable real-time implementation, we develop a custom backpropagation neural network, enhanced with a relaxation-factor method for output filtering, a Holt linear trend model for outlier compensation, and a saturation function for oscillation suppression. Extensive simulations demonstrate that the proposed framework significantly outperforms baseline methods. These results validate the effectiveness and robustness of our approach for high-performance TAD applications.
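As a rough illustration of the outlier-compensation step, the sketch below implements the standard Holt linear trend (double exponential smoothing) recursion in C. The smoothing gains, the sample data and the rule of substituting the trend forecast for a flagged outlier are assumptions for illustration, not details taken from the paper.

```c
#include <stdio.h>

/* Holt's linear trend model: maintains a smoothed level and trend.
 * A minimal sketch; alpha/beta values and the outlier rule are assumed. */
typedef struct {
    double level;   /* smoothed level l_t   */
    double trend;   /* smoothed trend b_t   */
    double alpha;   /* level smoothing gain */
    double beta;    /* trend smoothing gain */
} holt_t;

/* One-step-ahead forecast from the current state. */
static double holt_forecast(const holt_t *h) {
    return h->level + h->trend;
}

/* Update the state with a new observation x and return the forecast
 * that was in force before the update (useful for outlier checks). */
static double holt_update(holt_t *h, double x) {
    double prev_forecast = holt_forecast(h);
    double new_level = h->alpha * x + (1.0 - h->alpha) * prev_forecast;
    h->trend = h->beta * (new_level - h->level) + (1.0 - h->beta) * h->trend;
    h->level = new_level;
    return prev_forecast;
}

int main(void) {
    holt_t h = { .level = 0.0, .trend = 0.0, .alpha = 0.5, .beta = 0.3 };
    double samples[] = { 0.0, 0.1, 0.2, 5.0 /* outlier */, 0.4, 0.5 };
    for (int i = 0; i < 6; i++) {
        double forecast = holt_update(&h, samples[i]);
        /* If the raw network output deviates too far from the trend
         * forecast, the forecast could be used in its place. */
        printf("x=%.2f  forecast=%.2f\n", samples[i], forecast);
    }
    return 0;
}
```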
This chapter reviews techniques to address the processor–memory speed gap. We start with concepts behind modern memory hierarchies: the principle of locality of accesses, coherence in the memory hierarchy, and cache and memory inclusion. We then review the architecture of main memory systems, including the architecture of DRAM devices and DRAM systems. This is followed by concepts of cache hierarchies, including cache mapping and access, replacement and write policies, and the classification of cache misses. We cover techniques needed to cope with processors exploiting high degrees of instruction-level parallelism, including lockup-free caches, cache prefetching, and preloading. The chapter reviews data compression in the memory hierarchy to allow for higher memory capacity and effective bandwidth. Finally, the chapter covers hardware support for virtual memory, page tables and translation lookaside buffers, and virtual address caches.
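As a small illustration of the principle of locality on which the hierarchy relies, the C sketch below contrasts two traversals of the same matrix; the matrix size is an arbitrary assumption and the snippet is not taken from the chapter.

```c
#include <stddef.h>

#define N 1024

/* Row-major traversal: consecutive accesses touch adjacent addresses,
 * so most of them hit in the cache (spatial locality). */
double sum_row_major(const double a[N][N]) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal of the same data: successive accesses are
 * N*sizeof(double) bytes apart, so each one may miss in the cache. */
double sum_col_major(const double a[N][N]) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}
```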
In biology, cells undergo deformations under the action of flow caused by the fluid surrounding them. These flows lead to shape changes and instabilities that have been explored in detail for single component vesicles. However, cell membranes are often multicomponent in nature, made up of multiple phospholipids and cholesterol mixtures that give rise to interesting thermodynamics and fluid mechanics. Our work analyses shear flow around a multicomponent vesicle using a small-deformation theory based on vector and scalar spherical harmonics. We set up the problem by laying out the governing momentum equations and the traction balance arising from the phase separation and bending. These equations are solved along with a Cahn–Hilliard equation that governs the coarsening dynamics of the phospholipid–cholesterol mixture. We provide a detailed analysis of the vesicle dynamics (e.g. tumbling, breathing, tank-treading and swinging/phase-treading) in two regimes – when flow is faster than coarsening dynamics (Péclet number ${\textit{Pe}} \gg 1$) and when the two time scales are comparable ($\textit{Pe} \sim O(1)$) – and provide a discussion on when these behaviours occur. The analysis aims to provide experimentalists with insights into the phase separation dynamics and their effect on the deformation dynamics of a vesicle.
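For reference, an advected Cahn–Hilliard equation of the kind referred to above is commonly written, in a standard nondimensionalisation that may differ in detail from the one used in this work, as

$$ \frac{\partial \phi}{\partial t} + \boldsymbol{u}\cdot\nabla_s \phi \;=\; \frac{1}{\textit{Pe}}\,\nabla_s^{2}\mu, \qquad \mu \;=\; \phi^{3} - \phi - \epsilon^{2}\nabla_s^{2}\phi, $$

where $\phi$ is the local composition of the phospholipid–cholesterol mixture, $\mu$ the chemical potential, $\nabla_s$ the surface gradient on the membrane and $\epsilon$ a small interfacial-width parameter; the symbols here are a generic convention rather than the paper's notation.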
This chapter is devoted to the design principles of multiprocessor systems, focusing on two architectural styles: shared-memory and message-passing. Both styles use multiple processors to achieve a linear speedup of computational power with the number of processors, but they differ in the method of data exchange. Processors in shared-memory multiprocessors share the same address space and can exchange data through shared-memory locations using regular load and store instructions. This chapter reviews the programming-model abstractions for shared-memory and message-passing multiprocessors, then the semantics of message-passing primitives, the protocols needed, and architectural support to accelerate message processing. It covers support for the shared-memory model abstraction by reviewing the concept of cache coherence, the design space of snoopy cache-coherence protocols, the classification of communication events, and translation-lookaside-buffer consistency strategies. Scalable models of shared memory are treated, with an emphasis on the design of cache-coherence solutions that can be applied at large scale, as well as software techniques for handling page mappings to exploit locality.
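To make the contrast concrete, here is a minimal C/pthreads sketch of the shared-memory style, in which data is exchanged through a shared location with ordinary loads and stores guarded by a lock; in the message-passing style the same exchange would instead go through explicit send/receive primitives on a channel between the processes. All names and the busy-wait protocol are illustrative assumptions, not the chapter's examples.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared-memory style: both threads address the same locations and
 * exchange data with plain loads and stores (here guarded by a mutex). */
static int shared_value;
static int ready;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    shared_value = 42;   /* ordinary store to a shared location */
    ready = 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    int v = -1, done = 0;
    while (!done) {
        pthread_mutex_lock(&lock);
        if (ready) { v = shared_value; done = 1; }  /* ordinary load */
        pthread_mutex_unlock(&lock);
    }
    printf("received %d\n", v);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    /* A message-passing version would replace the shared variables with
     * explicit send()/receive() calls between the two threads. */
    return 0;
}
```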
In this investigation, the effect of Ekman pumping on a quasi-geostrophic (QG) system is explored via the vertical buoyancy flux. The vertical buoyancy flux is the quantity in QG flows that is responsible for the adiabatic transfer between kinetic energy (KE) and available potential energy (APE), as well as the slow-time evolution of the mean buoyancy. Ekman pumping (or suction) is a phenomenon that arises through conservation of mass at no-slip boundaries of rotating fluid systems. Three-dimensional QG numerical simulations are run with and without Ekman pumping at the bottom boundary, as well as with and without a realistic stratification profile. Through theory and numerical experiment, it is shown that Ekman pumping drives a conversion of energy from APE to KE at small scales, and from KE to APE at large scales, even in the absence of a mean isopycnal slope. It is also shown that Ekman pumping affects the mean buoyancy by slightly weakening the stratification near the bottom boundary.
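For context, the classical linear Ekman-pumping relation at a flat, no-slip bottom boundary, which is not necessarily the exact form used in this study, is

$$ w_E \;=\; \tfrac{1}{2}\,\delta_E\,\zeta, \qquad \delta_E \;=\; \sqrt{\frac{2\nu}{f}}, $$

where $w_E$ is the vertical velocity at the top of the Ekman layer, $\zeta$ the interior relative vorticity, $\nu$ the (eddy) viscosity and $f$ the Coriolis parameter; it is this boundary-induced vertical velocity that feeds back on the QG vertical buoyancy flux.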
For the past 30 years we have lived through the information revolution, powered by the explosive growth of semiconductor integration and the internet. The exponential performance improvement of semiconductor devices was predicted by Moore’s law as early as the 1960s. Moore’s law predicts that the computing power of microprocessors will double every 18–24 months at constant cost, so that their cost-effectiveness (the ratio between performance and cost) will grow at an exponential rate. It has been observed that the computing power of entire systems also grows at the same pace. This law has endured the test of time and remains valid today, but it will be tested repeatedly, both now and in the future, as many people see strong evidence that the "end of the ride" is near, mostly because the miniaturization of CMOS technology is rapidly reaching its limit. This chapter reviews technology trends underpinning the evolution of computer systems. It also introduces metrics for comparing the performance of computer systems and fundamental laws that drive the field, such as Amdahl’s law.
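Amdahl's law, mentioned at the end of the chapter, bounds the overall speedup obtainable by accelerating only part of a workload; in its usual form (the notation here is a common convention, not necessarily the chapter's),

$$ \text{Speedup} \;=\; \frac{1}{(1-f) + f/s}, $$

where $f$ is the fraction of execution time that benefits from the enhancement and $s$ is the speedup of that fraction. Even as $s \to \infty$ the overall speedup is capped at $1/(1-f)$; for example, $f = 0.9$ limits the speedup to at most 10×.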
This chapter is dedicated to the correct and reliable communication of values in shared-memory multiprocessors. Correctness properties of the memory system of shared-memory multiprocessors include coherence, the memory consistency model, and the reliable execution of synchronization primitives. Since chip multiprocessors (CMPs) are designed as shared-memory multi-core systems, this chapter targets correctness issues not only in symmetric multiprocessors (SMPs) and large-scale cache-coherent distributed shared-memory systems, but also in CMPs with core multi-threading. The chapter reviews the hardware components of a shared-memory architecture and why memory correctness properties are so hard to enforce in modern shared-memory multiprocessor systems. We then treat the various levels of coherence and the difference between plain memory coherence and store atomicity. We introduce memory models, starting with sequential consistency, the most fundamental memory model, and how sequential consistency is enforced through store synchronization. Finally, we review thread synchronization and ISA-level synchronization primitives, relaxed memory models motivated by hardware efficiency, and relaxed memory models that rely on synchronization.
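As a concrete illustration of why memory models matter, the classic "store buffering" litmus test is sketched below in C11; under sequential consistency at least one thread must observe the other's store, whereas with relaxed orderings both loads may return 0. The program is a minimal sketch for illustration, not an example from the chapter.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Store-buffering litmus test: under sequential consistency the outcome
 * r1 == 0 && r2 == 0 is impossible; with relaxed ordering it is allowed. */
static atomic_int x, y;
static int r1, r2;

static void *thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

static void *thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    atomic_init(&x, 0);
    atomic_init(&y, 0);
    pthread_create(&a, NULL, thread0, NULL);
    pthread_create(&b, NULL, thread1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("r1=%d r2=%d\n", r1, r2);  /* r1=0, r2=0 is a non-SC outcome */
    return 0;
}
```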
The chapter also covers compiler-centric approaches to building computers, known as VLIW (very long instruction word) computers. Apart from reviewing the design principles of VLIW pipelines, we also review compiler techniques to uncover instruction-level parallelism, including loop unrolling, software pipelining, and trace scheduling. Finally, this chapter covers vector machines.
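As a small example of one of the compiler techniques mentioned, loop unrolling, here is a dot-product loop unrolled by a factor of four by hand; the factor and the code are illustrative assumptions rather than the chapter's example. Unrolling exposes independent operations that a VLIW scheduler can pack into wide instructions.

```c
/* Dot product unrolled by 4: the four partial sums are independent,
 * so a compiler can schedule them in parallel (e.g. in one VLIW word). */
double dot_unrolled(const double *a, const double *b, int n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i = 0;
    for (; i + 3 < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)          /* remainder iterations */
        s0 += a[i] * b[i];
    return s0 + s1 + s2 + s3;
}
```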
The instruction set is the interface between the hardware and the software and must be followed meticulously when designing a computer. This chapter starts by introducing the instruction set of a computer. A basic instruction set is used throughout the book; it is broadly inspired by MIPS, a rather simple instruction set that is representative of many others, such as ARM and RISC-V. We then review how a representative instruction set can be supported with the concept of static pipelining. We start by reviewing a simple five-stage pipeline and all the issues involved in avoiding hazards. This simple pipeline is gradually augmented to allow for higher instruction execution rates, including out-of-order instruction completion, superpipelining, and superscalar designs.
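To make the hazard discussion concrete, the C fragment below creates the classic load-use dependence: in a five-stage pipeline, the instruction that consumes a just-loaded value must stall for a cycle (or rely on forwarding from the memory stage) before it can execute. The fragment is an illustration only, not taken from the book.

```c
/* In the compiled code, the load of p[i] is immediately followed by an
 * instruction that uses the loaded value, producing a load-use hazard:
 * a five-stage pipeline must stall or forward before the addition. */
int accumulate(const int *p, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        int v = p[i];   /* load  */
        sum += v;       /* use of the just-loaded value */
    }
    return sum;
}
```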
Given the widening gaps between processor speed, main memory (DRAM) speed, and secondary memory (disk) speed, it has become increasingly difficult in recent years to feed data and instructions to the processor at the speed it requires while providing the ever-expanding memory space expected by modern applications.