Research on preferences has increased significantly in recent years, as the topic not only involves many subproblems to be investigated, such as elicitation, representation, and reasoning, but has also been the target of different research areas, for example, artificial intelligence and databases. In particular, much work has focused on qualitative preferences, because these are closer to the way people express their preferences than quantitative preferences are. Against this background, a large number of approaches have been proposed across heterogeneous areas, so that each approach is usually compared only with those of the same area. In response, we present in this paper a survey of approaches to qualitative multi-attribute preference reasoning that covers different research areas. We introduce selected approaches that propose different techniques and algorithms, each taking as input qualitative multi-attribute preference statements that follow a particular structure specified by the approach. We analyse each approach in a systematic way and discuss their commonalities and limitations.
In this paper, the pick-and-place trajectory planning of a planar 3-RRR parallel manipulator is studied in the presence of joint clearance, which is one of the main sources of error in positioning accuracy. Joint clearance can be modeled as a massless virtual link, with its direction determined from dynamic analysis. A 3–4–5 interpolating polynomial is used to plan the trajectories for the manipulator in the vertical and horizontal planes in the presence of clearances. We compare the trajectories with those in the ideal case, i.e., without clearances at the joints, and demonstrate that one can easily compensate for the errors in the trajectories by appropriate changes of the inputs. A similar method works for compensating the errors due to joint clearances in the trajectory planning of any parallel manipulator at any running speed and payload.
Consider n servers having different exponential service distributions. All servers are initially busy, and there are m customers waiting in queue in an ordered line. A server becoming idle is offered to the first in line; if rejected, it is then offered to the second, and so on. The objective of each person in line is to minimize their expected time until service completion. We give a simple approach for finding the optimal policy and show that this policy also minimizes the expected sum of the times the customers spend in the system.
Legged locomotion systems have been effective in numerous robotic missions, and such locomotion is especially useful for providing better mobility over irregular landscapes. However, locomotion capabilities of robots are often constrained by a limited range of gaits and the associated energy efficiency. This paper presents the design of a novel reconfigurable Klann mechanism capable of producing a variety of useful gait cycles. Such an approach opens up new research avenues, opportunities and applications. The position analysis problem that arises when dealing with reconfigurable Klann mechanisms was solved here using a bilateration method, which is a distance-based formulation. By changing the linkage configurations, our aim was to generate a set of useful gaits for a legged robotic platform. In this study, five gait patterns of interest were identified, analysed and discussed that validate the feasibility of our approach and considerably extend the capabilities of the original design.
A novel wireless sensor network system compatible with microwave power transmission (MPT) using a gallium nitride (GaN) power amplifier has been fabricated and tested. The sensor node operates on electrical power supplied by the MPT system. Time- and frequency-division operations are proposed to achieve this compatibility. Under frequency-division operation, a received signal strength indicator of −85 dBm and a packet error rate below 1% were measured with the DC power made available by a rectifier with 160 mW output power. Over a 120-min measurement, a sustainable power balance between radio-frequency–DC conversion and power consumption was achieved with the sensor node in stable operation.
This paper presents the design procedure of two ultra-high-frequency radio frequency identification reader antennas used to search for tagged items. They consist of microstrip arrays with alternating orthogonal dipoles, fed in series by a pair of microstrip lines. The dipoles are designed to provide the required bandwidth. The inter-element distance is adjusted to the center frequency, where the elements provide in-phase excitation and create two orthogonal electric-field components that give beams with direction diversity. Simulated results show that the return loss bandwidth (RL > 13 dB) of the first antenna design covers the required frequency band of the European Telecommunications Standards Institute (ETSI) standard (865–868 MHz). In addition, simulated and measured results of the second antenna design indicate that its return loss bandwidth covers the frequency bands of both the ETSI and Federal Communications Commission (FCC) standards (865–928 MHz). Regarding the coverage volume in the vicinity of the antenna, it was deduced that both antennas can read tagged items in a semi-cylindrical volume extending to a radius of more than 50 cm. Finally, a case study of reading tagged books in front of a library cabinet with six shelves is presented.
This paper proposes a combined harvesting system to improve the efficiency and flexibility of autonomous wireless network nodes supplied by means of wireless power transfer. In particular, a mixed system for electromagnetic (EM) and thermal energy harvesting (EH), conceived for passive nodes of wireless sensor networks and radio frequency (RF) identification tags, is described. The proposed system aims at increasing the effectiveness and efficiency of the EH system by integrating an antenna and a rectifier with a thermo-electric generator (TEG) able to perform thermal EH. The energy provided by the thermal harvester is exploited twice: to increase the rectifier efficiency by providing a voltage that improves the bias condition of the rectifying diode, and to provide additional DC energy, harvested for free. Ultimately, a significant efficiency improvement, especially at low incident RF power, has been observed. The design methodology and the EM performance of a quarter-wavelength patch antenna integrated with the TEG are summarized. Then, a test campaign to evaluate the thermal EH performance was carried out. Afterward, a rectifier with variable bias voltage, operating at the same frequency as the antenna, was designed to exploit the harvested thermal energy to bias the diode. A measurement campaign was then carried out to test the resulting efficiency increase and to validate the proposed solution.
Magneto-inductive waves are a form of propagation which only exists in certain types of magnetic metamaterials formed from inductively coupled resonant circuits. We present an investigation of their potential as contactless power transfer devices capable of carrying power along a surface between suitably prepared terminals while simultaneously offering a broadband data channel. Input impedances and their matching conditions are explored with a view to offering a simple power system design. A device with 75% peak and 40% minimum efficiency is demonstrated and designs with potential for better than 70% mean and 90% peak are reported. The product of planar magnetic coupling and metamaterial cell Q factor is determined to be a key optimization parameter for high efficiency.
We propose and examine a microwave power transfer system for electric vehicles (EVs). In this system, electricity is transmitted from a transmitting antenna above an EV to a receiving antenna on the roof of the EV. We used a rectenna to convert the received microwave power to direct-current power. The conversion efficiency of a rectenna array is affected by the distribution of input power levels, so a flat-topped beam pattern must be formed to increase the conversion efficiency. We conducted an experiment to form a flat-topped beam pattern using a phased-array antenna. In this experiment, the output power of each antenna element was uniform and could not be controlled independently; hence, we controlled only the output phase of each antenna element to form the flat-topped beam pattern. The distance between the transmitting antenna and the receiving area was 6.45 m, and the receiving area corresponds to a space in which the azimuth and elevation are in the range of −5° to 5°.
The main innovation of this paper is determining the dynamic load carrying capacity (DLCC) of a flexible joint manipulator (FJM) using a closed-form nonlinear optimal control approach. The proposed method is compared with two closed-loop nonlinear methods that are usually applied to robotic systems. As a new idea, the DLCC of the manipulator is considered as a criterion for comparing how well controllers perform point-to-point missions for FJMs. The proposed method is compared with feedback linearization (FL) and robust sliding mode control (SMC) to show the better performance of the proposed nonlinear optimal control approach. An optimal controller is designed by solving a nonlinear partial differential equation, the Hamilton–Jacobi–Bellman (HJB) equation. This equation is too complicated to solve exactly for complex dynamics, so it is solved numerically using an iterative approximation combined with the Galerkin method. In the FL method, the angular position, velocity, acceleration, and jerk of the links are considered as new states to linearize the dynamic equations. In the case of SMC, the dynamic equations of the manipulator are transformed to the standard form, and then the Slotine method is used to design the sliding mode controller. Two simulations are performed for a planar and a spatial Puma manipulator, and the performances of the controllers are compared. Finally, an experimental test is performed on a 6R manipulator, and a stereo vision method is used to determine the position and orientation of the end-effector.
Tissue P systems are a class of bio-inspired computing models motivated by biochemical interactions between cells in a tissue-like arrangement. Tissue P systems with cell division offer a theoretical device to generate an exponentially growing structure in order to solve computationally hard problems efficiently, under the assumption that there exists a global clock to mark the time for the system and that the execution of each rule is completed in exactly one time unit. In reality, the execution time of different biochemical reactions in cells depends on many uncertain factors. In this work, with this biological inspiration, we remove the restriction on the execution time of each rule and investigate the computational efficiency of tissue P systems with cell division. Specifically, we solve the subset sum problem with tissue P systems with cell division in a time-free manner, in the sense that the correctness of the solution does not depend on the execution time of the involved rules.
A hyper-redundant manipulator is built by mounting serial and/or parallel mechanisms on top of each other as modules. In discrete actuation, the actuation amounts take only a limited number of fixed values. It is not feasible to solve the kinematic analysis problems of discretely actuated hyper-redundant manipulators (DAHMs) using the common methods developed for continuously actuated manipulators. In this paper, a new method is proposed to solve the trajectory tracking problem in a static prescribed obstacle field; to date, this problem has not been considered in the literature. The removing first collision (RFC) method, originally proposed for solving inverse kinematic problems in obstacle fields, was modified and used to solve the motion planning problem. For verification, the numerical results of the proposed method were compared with those of a genetic algorithm (GA). Furthermore, a novel DAHM designed and implemented by the authors is introduced.
Many recent writers in the philosophy of mathematics have put great weight on the relative categoricity of the traditional axiomatizations of our foundational theories of arithmetic and set theory (Parsons, 1990; Parsons, 2008, sec. 49; McGee, 1997; Lavine, 1999; Väänänen & Wang, 2014). Another great enterprise in contemporary philosophy of mathematics has been Wright’s and Hale’s project of founding mathematics on abstraction principles (Hale & Wright, 2001; Cook, 2007). In Walsh (2012), it was noted that one traditional abstraction principle, namely Hume’s Principle, had a certain relative categoricity property, which here we term natural relative categoricity. In this paper, we show that most other abstraction principles are not naturally relatively categorical, so that there is in fact a large amount of incompatibility between these two recent trends in contemporary philosophy of mathematics. To better understand the precise demands of relative categoricity in the context of abstraction principles, we compare and contrast these constraints to (i) stability-like acceptability criteria on abstraction principles (cf. Cook, 2012), (ii) the Tarski-Sher logicality requirements on abstraction principles studied by Antonelli (2010b) and Fine (2002), and (iii) supervaluational ideas coming out of the work of Hodes (1984, 1990, 1991).
In Chapter 2, we learnt how signal processing-based techniques can be employed to accelerate MRI scans. These techniques were developed after the advent of compressed sensing. Since these techniques only require changes in the sampling and reconstruction modules of the software, such methods are called software-based acceleration techniques. But prior to the development of such signal processing-based techniques, physics-based acceleration techniques were popular. Physics-based acceleration techniques change the hardware of the scanner in order to facilitate faster scans. Multi-coil parallel MRI is a classic example of physics-based acceleration.
In single-channel MRI, there is a single receiver coil with uniform sensitivity over the full field of view (FoV). In multichannel MRI, there are several receiver coils placed at different locations in the scanner; consequently, they do not have a uniform FoV (see, for example, Fig. 3.1). Each coil can distinctly “see” only a small portion of the full FoV.
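The idea that each coil "sees" only part of the FoV can be sketched numerically: each coil's image is the true image weighted by that coil's spatially varying sensitivity. Below is a toy Python illustration; the exponential sensitivity maps and corner coil positions are invented for illustration and are not from this book.

```python
import numpy as np

N = 64
image = np.ones((N, N))               # toy object filling the whole FoV

# Hypothetical smooth sensitivity maps: each of four coils is most
# sensitive near one corner of the FoV and falls off with distance.
y, x = np.mgrid[0:N, 0:N]
corners = [(0, 0), (0, N - 1), (N - 1, 0), (N - 1, N - 1)]
coil_images = []
for cy, cx in corners:
    dist = np.hypot(y - cy, x - cx)
    sensitivity = np.exp(-dist / N)   # decays away from the coil
    coil_images.append(sensitivity * image)

# Each coil image is bright only near "its" corner of the FoV.
print(coil_images[0][0, 0] > coil_images[0][N - 1, N - 1])  # True
```

In real scanners the sensitivity profiles are smooth but unknown, which is exactly what makes multi-coil reconstruction nontrivial.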
In multichannel MRI, each receiver coil partially samples the K-space; since the K-space is only partially sampled, the scan time is reduced. The image is reconstructed from the K-space samples collected by all the coils. The theoretical possibility of combining multi-coil Fourier frequency samples to reconstruct a single MR image follows from Papoulis’ generalized sampling theorem [1]. The ratio of the total number of possible K-space samples (the size of the image) to the number of partial samples collected per coil is called the acceleration factor. In theory, the maximum acceleration factor equals the number of coils, but in practice it is always less. For example, with 8 receiver coils the maximum possible acceleration factor is 8, i.e., each coil samples 12.5% of the total K-space; in practice, each coil may sample 25% of the K-space, so the acceleration factor is 4 instead of 8.
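The acceleration-factor arithmetic above can be sketched in a few lines of Python; the numbers (8 coils, 12.5% and 25% sampling) come from the text, while the helper name `acceleration_factor` is our own.

```python
# Acceleration factor = (total K-space samples) / (samples collected per coil).

def acceleration_factor(total_kspace_samples, samples_per_coil):
    """Ratio of the full K-space size to the partial samples each coil collects."""
    return total_kspace_samples / samples_per_coil

full_kspace = 256 * 256          # e.g. a 256 x 256 image

# Theoretical best with 8 coils: each coil samples 1/8 = 12.5% of K-space.
theoretical = acceleration_factor(full_kspace, full_kspace // 8)

# In practice each coil may sample 25% of K-space, giving a factor of 4.
practical = acceleration_factor(full_kspace, full_kspace // 4)

print(theoretical, practical)  # 8.0 4.0
```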
To accelerate the scans, the K-space is partially sampled. There are two possibilities: one can interpolate the unsampled K-space locations to approximate the full K-space and then reconstruct the image via the inverse FFT, or one can directly recover the image from the partially sampled K-space scans.
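A minimal sketch of the first option, assuming NumPy: the unsampled K-space locations are simply filled with zeros (the crudest "interpolation") and the image is reconstructed via the inverse FFT. The square phantom and the random sampling mask are illustrative choices, not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0             # toy square "phantom"

kspace = np.fft.fft2(image)           # fully sampled K-space (for simulation)

mask = rng.random(kspace.shape) < 0.5 # keep roughly 50% of K-space samples
partial_kspace = kspace * mask        # unsampled locations are zero-filled

recon = np.abs(np.fft.ifft2(partial_kspace))  # zero-filled reconstruction
print(recon.shape)                    # (64, 64)
```

The zero-filled reconstruction exhibits aliasing/undersampling artifacts; the more sophisticated interpolation and direct-recovery methods discussed later aim to remove them.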
This book is about modern approaches to magnetic resonance imaging (MRI) reconstruction. In the last decade, MRI has benefitted immensely from advances in applied mathematics and signal processing. Leveraging these techniques, MRI scans are now being performed two to four times faster than before. In this book, we learn how these techniques have been used in the recent past to accelerate MRI scans.
During my PhD, I worked on a few different areas of MRI reconstruction: static MRI, dynamic MRI, parallel MRI (static and dynamic), and quantitative MRI. After I relocated to India, Manish Chaudhury, commissioning editor at Cambridge University Press, inspired me to write a book, and I was eager to write about signal processing techniques in MRI. It took me about one and a half years to complete this volume.
When I started working on MRI reconstruction, I felt that there was a gap between the practitioners and the theoreticians. On one side were researchers in signal processing and applied maths who were interested in theoretical proofs and algorithms; on the other were the MRI physicists and engineers who had plenty of interesting problems waiting to be solved. Since then, many researchers have worked very hard to narrow this gap. The concerted effort of so many researchers is finally bearing fruit: at the past few ISMRM meetings, MRI scanner manufacturers showed interest in adopting these advanced signal processing techniques for image reconstruction.
In this book, I have made every effort to incorporate interesting studies on MRI reconstruction, but I may have unintentionally missed a few. Thus, this book does not claim to be an encyclopaedic review of signal processing techniques in MRI reconstruction.
The target audience of the book is signal processing engineers who want to learn about MRI problems and MRI physicists who want to know how signal processing is benefitting MRI. The book can also be perused by doctors who have a background in mathematics. I do not presume an advanced background in mathematics, but the reader is expected to have some undergraduate training in linear algebra, probability, and convex optimization; otherwise, the book may not be easy to follow.
Magnetic Resonance Imaging (MRI) is a versatile medical imaging modality that can produce high-quality images and is safe when operated within approved limits. X-ray Computed Tomography (CT) produces good-quality images but is harmful owing to the ionizing radiation. Ultrasound, on the other hand, is safe, but the images are of very poor quality. The main challenge that MRI faces today is its comparatively long data-acquisition time. This poses problems on several fronts. For the patient, the scan is uncomfortable because he/she has to spend a long time in a claustrophobic environment (inside the bore of the scanner); thus, an attending technician is always required to look after the patient's comfort. To make matters worse, the scanner is very noisy owing to the periodic switching on and off of the gradient coils.
However, patient discomfort is not the only issue. Since the data acquisition time is long, any patient movement inside the scanner results in unwanted motion artifacts in the final image. Some of these movements, such as breathing, are inadvertent, yet even these small movements may hamper image quality.
It is not surprising that reducing the data acquisition time in MRI has been the biggest challenge of the past two decades, and the work is far from complete. Broadly speaking, there are two approaches to reducing data acquisition/scan times: the hardware-based approach and the software-based approach. Initial attempts to reduce the scan time were hardware-based methods, where the design of the MRI scanner had to be changed in order to acquire scans faster. The multichannel parallel MRI technique is the best-known example of this exercise. Unfortunately, research on and implementation of hardware-based acceleration techniques were expensive and time-consuming; this is to be expected, as new scanners had to be designed, built, and tested. Moreover, the side effects of such hardware-based acceleration techniques were not trivial. For example, multichannel parallel MRI has been around for about 20 years; reconstructing images from such scanners requires knowledge of the sensitivity profiles of the individual coils, and this sensitivity information is never fully available. Therefore, reconstructing images from parallel multichannel MRI scans remains an active area of research even today.