Chapter 9 considers the role of design rights for DMFs. In it, I argue that DMFs should receive design protection only if the object they will print would receive such protection. Current practice in many jurisdictions is to the contrary: they protect any qualifying image that appears on a computer screen. I argue that this approach impermissibly protects mere artistic images, which should be protected, if at all, by copyright law. I offer a framework for a teleological approach to design rights in digital images and focus the approach on DMFs specifically. In addition, I describe how the EU Design Directive includes many important safeguards for free speech, experimentation, and private use. I recommend that the United States include similar protections in its design patent laws.
Some languages have very few NLP resources, while many of them are closely related to better-resourced languages. This paper explores how the similarity between the languages can be utilised by porting resources from better- to lesser-resourced languages. The paper introduces a way of building a representation shared across related languages by combining cross-lingual embedding methods with a lexical similarity measure which is based on the weighted Levenshtein distance. One of the outcomes of the experiments is a Panslavonic embedding space for nine Balto-Slavonic languages. The paper demonstrates that the resulting embedding space helps in such applications as morphological prediction, named-entity recognition and genre classification.
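The weighted Levenshtein measure the paper builds on can be sketched as a standard dynamic-programming edit distance whose substitution cost varies per character pair. The sketch below is illustrative only: the function name, the pair weights, and the example words are assumptions, not the paper's actual implementation.

```python
def weighted_levenshtein(a, b, sub_cost=None, indel_cost=1.0):
    """Edit distance in which substitution cost can vary per character pair.

    sub_cost: optional dict mapping (char, char) pairs to costs; unlisted
    pairs cost 1.0. For related languages, regular sound correspondences
    can be given cheap substitutions so cognates end up close together.
    """
    sub_cost = sub_cost or {}

    def s(x, y):
        if x == y:
            return 0.0
        return sub_cost.get((x, y), sub_cost.get((y, x), 1.0))

    m, n = len(a), len(b)
    # d[i][j] = minimal cost of turning a[:i] into b[:j]
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + indel_cost,                  # deletion
                d[i][j - 1] + indel_cost,                  # insertion
                d[i - 1][j - 1] + s(a[i - 1], b[j - 1]),   # substitution
            )
    return d[m][n]
```

For example, Polish "mleko" and Croatian "mlijeko" ("milk") differ by two insertions, giving distance 2.0 under unit costs; a cheap (o, a) substitution weight would pull pairs like "kot"/"kat" much closer than unrelated words.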
A theory of magnitudes involves criteria for their equivalence, comparison and addition. In this article we examine these aspects from an abstract viewpoint, by focusing on the so-called De Zolt’s postulate in the theory of equivalence of plane polygons (“If a polygon is divided into polygonal parts in any given way, then the union of all but one of these parts is not equivalent to the given polygon”). We formulate an abstract version of this postulate and derive it from some selected principles for magnitudes. We also formulate and derive an abstract version of Euclid’s Common Notion 5 (“The whole is greater than the part”), and analyze its logical relation to the former proposition. These results prove to be relevant for the clarification of some key conceptual aspects of Hilbert’s proof of De Zolt’s postulate, in his classical Foundations of Geometry (1899). Furthermore, our abstract treatment of this central proposition provides interesting insights for the development of a well-behaved theory of compatible magnitudes.
This article builds on Humberstone’s idea of defining models of propositional modal logic where total possible worlds are replaced by partial possibilities. We follow a suggestion of Humberstone by introducing possibility models for quantified modal logic. We show that a simple quantified modal logic is sound and complete for our semantics. Although Holliday showed that for many propositional modal logics it is possible to give a completeness proof using a canonical model construction in which every possibility consists of finitely many formulas, we show that this is impossible in the first-order case. However, one can still construct a canonical model in which every possibility consists of a computable set of formulas, and is thus still determined by a finite amount of information.
This paper presents a comparative study of three types of slim coil structures used as a three-dimensional (3-D) receiver in a wireless power transfer system with a planar transmitter coil. The mutual coupling values between the receiver structures and the transmitter coil, and their variations, are compared under different distances and angular orientations with respect to the transmitter coil. The merit of each structure lies in the consistency of its mutual coupling under different orientations over a range of distances from the transmitter coil. The practical results show that slim 3-D receiver coil structures can be compatible with a planar transmitter coil while maintaining reasonably high mutual coupling.
An internal wireless system (IWS) for satellites was proposed in a previous study to reduce satellite weight. It uses wireless communication modules to communicate between the satellite's subsystems. We propose a complete IWS that employs microwave wireless power transmission technology, together with a design for GHz-band, high-efficiency charge-pump rectifiers with a class-F filter, called class-F charge-pump rectifiers. We theoretically compare the diode losses in a charge-pump and a single-shunt rectifier, and experimentally verify the results. We further consider using the class-F charge-pump rectifiers in a rectenna array. To determine how the optimum direct current (DC) load changes when class-F charge-pump circuits are connected as a rectenna array, we measured the conversion efficiencies of a 2 × 2 rectenna array connected in series and in parallel. The experimental results indicate that the optimum load of the rectifier increases to four times the single-rectifier DC load when connected in series, and falls to one-quarter of it when connected in parallel.
Radiofrequency surface coils used as receivers in magnetic resonance imaging (MRI) rely on cables for communication and power from the MRI system. Complex surface coil arrays are being designed to improve acquisition speed and signal-to-noise ratio. This, in turn, makes the cables bulky and expensive, and the currents induced on the cables by the time-varying magnetic fields of the MRI system may harm patients. Though wireless power transfer (WPT) can eliminate cables and make surface coils safer, MRI poses a challenging electromagnetic environment for WPT antennas, because antennas made of long conductors interact with the static and dynamic fields of the MRI system. This paper analyses the electromagnetic compatibility of WPT antennas and reveals that commercially available antennas are not compatible with MRI systems, presenting a safety risk for patients. Even when that risk is minimized, the antennas couple with surface coils, which can lead to misdiagnosis. This paper presents an approach to eliminating safety risks and minimizing coupling using a filter named the “floating filter.” A WPT antenna without a filter produces a distortion of 27%, and floating filters reduce the distortion to 2.3%. Moreover, the floating filter does not affect power transfer efficiency: a transfer efficiency of 60% is measured both with and without filters.
In this paper, possible coupling configurations of a four-plate capacitive power transfer system are studied by varying the combinations of its input and output ports. A voltage source is applied between two of the four plates, and a load is connected to the other two, forming different circuit topologies. A mathematical model based on a 4 × 4 mutual capacitance matrix is established for four identical, equidistantly placed metal plates. Based on the proposed model, four distinct circuit topologies are identified, analysed in detail, and described in a general form. The electric field distributions of the coupling configurations are simulated in ANSYS Maxwell. The theoretical modeling and analysis are then verified on a practical system, in which four aluminum plates of 300 mm × 300 mm are placed with a gap of 10 mm between adjacent plates. The experimental results show that the measured output voltage and power under the four coupling configurations agree well with the theoretical results. It is found that the voltage gain is highest when the two inner plates are connected to the source, and that this coupling configuration also has the lowest leakage electric field.
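The structure of such a 4 × 4 mutual capacitance matrix can be illustrated with a toy numerical model. Everything below is hypothetical: the capacitance values (in pF) are invented for illustration, not the paper's measurements, and the cross-coupling figure of merit is a crude stand-in for the paper's full circuit analysis. The sketch only shows the symmetry (reciprocity) of the matrix and how one source/load plate assignment can be compared against another.

```python
# Hypothetical mutual-capacitance matrix for four identical, equidistantly
# placed plates (values in pF are invented for illustration, not measured).
# C[i][j] is the mutual capacitance between plates i and j; adjacent
# plates couple more strongly than distant ones, and reciprocity requires
# the matrix to be symmetric.
C = [
    [0.0, 50.0, 10.0, 5.0],
    [50.0, 0.0, 50.0, 10.0],
    [10.0, 50.0, 0.0, 50.0],
    [5.0, 10.0, 50.0, 0.0],
]

# Reciprocity check: C[i][j] == C[j][i] for every plate pair.
assert all(C[i][j] == C[j][i] for i in range(4) for j in range(4))

def cross_coupling(src, load):
    """Crude figure of merit for a coupling configuration: the sum of
    mutual capacitances (pF) between the source plate pair and the
    load plate pair."""
    return sum(C[i][j] for i in src for j in load)

# e.g. source on the outer plates (0, 3), load on the inner plates (1, 2)
print(cross_coupling((0, 3), (1, 2)))
```

Swapping `src` and `load` leaves the figure of merit unchanged, mirroring the reciprocity of the physical system.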
Advances in technology have made mobile robots a viable solution to many global challenges. A key limitation for tetherless operation, however, is the energy density of batteries. Whilst significant research is being undertaken into new battery technologies, wireless power transfer may offer an alternative solution. Most of the available technologies are not targeted at the medium power requirements of mobile robots; they address either low powers (a few watts) or very high powers (kilowatts). This paper reviews existing wireless power transfer technologies and their applications to mobile robots. The challenges of using these technologies on mobile robots include delivering the required power, system efficiency, human safety, the transmission medium, and distance, all of which are analyzed for robots operating in hazardous environments. The limitations of current wireless power technologies in meeting these challenges are discussed, and scenarios in which current wireless power technologies can be used on mobile robots are presented.
This paper presents an effective and time-saving procedure for designing a three-coil resonant inductive wireless power transfer (WPT) link. The proposed approach aims to optimize the power transfer efficiency of the link under constraints imposed by the specific application of interest. The WPT link is described as a two-port network with equivalent lumped elements expressed analytically as functions of the geometrical parameters. This yields a closed-form expression for the efficiency that can be maximized over the geometrical parameters of the link using a general-purpose optimization algorithm. The proposed design procedure rapidly finds the desired optimal solution while minimizing computational effort. For the case of an application constraining the dimensions of the receiver, the analytical results are validated through full-wave simulations and measurements.
A novel, dual-band, voltage-multiplying (RF-DC) rectifier circuit with load-tuned stages, offering a 50 Ω input impedance and high RF-DC conversion in the 2.4 and 5.8 GHz bands for wireless energy harvesting, is presented. Its novelty lies in the use of optimal-length transmission lines on the load side of the four half-wave rectifying stages within the two-stage voltage-multiplier topology. Doing so boosts the rectifier's output voltage through an induced standing-wave peak at each diode's input, and gives the rectifier a 50 Ω input impedance in the 2.4 GHz band without an external matching network. Comparisons with other rectifiers show the proposed design achieving higher DC output and better immunity to changing output loads at similar input power levels and load conditions. The second novelty of this rectifier is a tuned secondary feed that connects the rectifier's input to its second stage to provide dual-band performance in the 5.8 GHz band. By tuning this feed so that the second-stage and first-stage reactances cancel, return-loss resonance is achieved in the 5.8 GHz band in addition to 2.4 GHz. Simulations and measurements show RF-DC sensitivities of −7.2 and −3.7 dBm for a 1.8 V DC output, and better than 10 dB return loss, in the 2.4 and 5.8 GHz bands without requiring an external matching network.
FOR STUDYING MANUSCRIPTS, scholars have critiqued both text and 2D images. Of the latter, my earlier chapters provide sufficient critique in pointing out material features that normally go uncaptured when manuscripts are digitized. However, this critique becomes stronger when a manuscript is viewed as more than its text and decoration, whether as a holistic expression or as a socially transmitted interaction. For example, Elaine Treharne points out that a physical encounter is necessary to gauge a manuscript's heft. Heft conveys crucial information about socially transmitted interactions, such as a sense of portability (for missionary travel) or grandeur (for large gatherings). Critiques of this nature recognize that manuscripts are meant to be engaged by human bodies, demanding first-hand experience for knowing.
Although scholars have identified shortcomings in 2D images, they have likewise identified shortcomings in textual remediation. In Virtually Anglo-Saxon, Martin Foys demonstrates how textual descriptions of the Bayeux Tapestry reconstruct it linearly, remaking it into a radically different form from its physical embodiment. Such a representation can encourage misinterpretation and flawed conclusions. For a Chi-Rho page or page of text organized around the interplay of decorated initials, the artistry guides the eye and shapes the experience of the content and its meaning. By altering the journey to meaning, meaning is lost and/or reshaped.
With these critiques in mind, this chapter explores virtual reality (VR) as a response to and an alternative for studying manuscripts. It recognizes that VR is likewise imperfect and will struggle to duplicate physical attributes such as the smell of parchment. However, VR represents a profound shift for studying manuscripts. It provides a shared space for a digital encounter, eliminating the barrier of a screen. To understand the significance of this shared space, I turn to neuroscientists. Rather than accept the notion of five isolated senses inherited from Aristotle, neuroscientists have identified twenty-two to thirty-three. They have also demonstrated that perceptions such as sight are constructed using information from multiple senses. A digital technique that focuses on one isolated sense, therefore, provides a limited representation of human experience of a manuscript.
Being able to compare Codex Vaticanus, Codex Sinaiticus and the newly “discovered” Khabouris Codex side by side from my home in America would have been science fiction just a few decades ago. It is such a thrill to be able to do this. It's like cheating on a cosmic scale.
Will Berry (Manuscript Enthusiast)
IF I WERE TO choose a mantra for the digital humanities, it would be “open access.” As a guiding principle, open access makes the digital transformational. It delivers digital content onto the hard drives of scholars and the public. But even when manuscripts are only viewable through high-resolution images, presenting them online radically increases their availability, as radically as the printing press once increased the availability of texts. As stated in the epigraph, the rapid pace of these changes feels like science fiction, as if “cheating on a cosmic scale.” At this writing, the Vatican Library has 14,623 manuscripts online, its goal being to provide online access to all of its 80,000 manuscripts. Yet this is only a portion of the manuscripts available. More than five hundred libraries worldwide provide online viewing of at least some of their collections. The recentness of this availability is exemplified by the St. Chad Gospels. In 2009, because neither a printed facsimile nor a full set of online images was available, I travelled to England to study the manuscript. Prior to this, I had relied on a few photographs reproduced in scholarly works and a small number of low-resolution images placed online through a British Library Turning the Pages interface. The images were inadequate. The low resolution made features of the manuscript impossible to study. Such features are critical for understanding early Christian and monastic practices in the British Isles. But they are also critical for telling the story of the artistic and scribal accomplishments that led to the Book of Kells, generally considered the flowering of Insular art. My need for access made travelling to Lichfield Cathedral a necessity, one that, if not fulfilled, would have limited my scholarship, much as lack of access has limited scholarship on manuscripts throughout the ages.
“Colour is light embodied in a diaphanous medium. Indeed, this medium possesses two different qualities, for it is either pure, without the elements of earth, or impure, mixed with the elements of earth.”
Robert Grosseteste (1168–1253)
METHODS OF RECOVERY have not always been kind to manuscripts. Concealing knowledge, damaged pages have long tantalized with traces of effaced script. To entice them to give up their secrets, reagents were applied, a practice which began at least in the seventeenth century but was more rigorously pursued in the nineteenth century. The results, however, were disastrous. These chemicals, such as gallic acid and ammonium sulphide, were theorized to revitalize inks. Instead, they did so only temporarily, before turning a page into something less than its previous self: regularly a brown slur (Web Fig. 1.1). Relying on the best knowledge of their day, such attempts remain a cautionary tale about the complex chemistry and fragility of the seemingly simple and common medieval materials of ink, pigment, and parchment.
Creating a theoretical frame for digital methods, early photography provided a means to recover damaged content through noninvasive techniques. It marked a critical turn from chemistry to physics, that is, recovery based on the properties of light. Early experiments generated new methods, such as using orange lighting to increase contrast and coloured filters to reduce “the obscuring effects of stains.” More complex methods demonstrated further possibilities. For example, in the early twentieth century, Pringsheim and Gradeviss devised a method for recovering erased text from palimpsests. This method required two film negatives: the first focused as sharply as possible on the erased script; the second focused equally on the erased script and the later writing. To make the erased script more pronounced, Pringsheim and Gradeviss made a glass positive of the second negative (equal focus) and aligned and overlaid it with the negative focused on the erased script. When examined under a light, the glass positive helped to neutralize the later writing, making traces of the erased script stand out. Such early techniques exemplify the overarching goal of recovery through noninvasive photographic methods: capitalize on properties of light to increase contrast, revealing what has been lost to the unaided eye.
FOR MONITORING CHANGES in manuscripts, scholars and conservators have long recognized the value of photography. In the 1980s, to check for damage, conservators at the British Library compared earlier photographs of the Lindisfarne Gospels to the manuscript. However, assessing cultural heritage through photographs has its origins in the earliest days of photography. For example, shortly after Louis-Jacques-Mandé Daguerre successfully demonstrated his daguerreotype in Paris (1839), the French Commission des Monuments Historiques hired five photographers to capture images of the nation's architectural heritage. Responsible for its preservation, the Commission wished to assess the condition of historic sites throughout France, many not accessible by rail. This lack of rail access echoes our lack of access to the past conditions of manuscripts: both require photography. Now, the digital opens ways of understanding these past conditions and the aging processes of manuscripts by extending what can be known from historical photographs.
Digitized photographs provide three major advantages over printed ones. First, once digitized, photographs can be overlaid and the transparency of the top image adjusted. This eliminates reliance on side-by-side comparisons and looking back and forth, making losses easier to identify. Second, while overlay helps in identifying losses, it becomes far more effective when the images align. Because the content of digitized images is malleable, such alignment is possible through a digital process called registration. Through overlay and transparency, comparing registered images makes smaller differences more apparent. It also enables identifying differences through computer-assisted means, such as subtracting one image from another. The resulting difference images reveal minuscule changes that would otherwise likely go unnoticed, such as losses measuring 0.1 mm in diameter. Third, digitizing historical photographs preserves them in a digital form, allowing them to be collected and redundantly stored, safeguarding their irreplaceable content.
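The register-then-subtract workflow described above can be sketched in a few lines of Python. This is a toy illustration, not the toolchain actually used for manuscript images: real registration handles rotation, scaling, and non-rigid warping, whereas the brute-force integer-shift search below (all function names and parameters are assumptions) only handles small translations between same-size grayscale arrays.

```python
import numpy as np

def best_shift(ref, img, max_shift=5):
    """Brute-force translational registration: find the integer (dy, dx)
    shift of `img` that minimizes the mean squared difference against
    `ref`. Assumes same-size grayscale arrays with values in [0, 1]."""
    best = (0, 0)
    best_err = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def difference_image(ref, img):
    """Register `img` to `ref`, then subtract: near-zero pixels are
    unchanged between the two photographs, while brighter pixels flag
    losses or additions worth inspecting."""
    dy, dx = best_shift(ref, img)
    aligned = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return np.abs(ref - aligned)
```

Once two captures are registered, the same alignment also supports the overlay-and-transparency comparison: blending the aligned arrays with an adjustable weight is a single weighted sum.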
In this chapter, I demonstrate the value of and provide an approach for digitizing, registering, and comparing historical photographs. I make this case, however, not simply to propose a practice worthy of becoming best practice whenever digitizing a manuscript. Because of its unprecedented benefits, I want to encourage its practice even when a major digitization of a manuscript is not at hand. Information gained from digitizing historical photographs is far too important to ignore and too valuable to risk losing. In my work with the St.