This text explores how electronic musical instruments and electronic music ensembles can relate to composition and music notation by discussing the instruments in terms of existing practice in traditional instrumentation and in relation to symbolic electroacoustic music analysis. Starting from orchestration theory, the text considers how electronic musical instruments behave and are used, supported both by the author’s own practice and by a case study with students within the framework of a live-electronic ensemble course. The case study reflected the participants’ practice as creative composers/musicians and showed how their exploratory and experimental approaches to their instruments proved important, creating challenges for notation. Traditionally, music notation captures continuous changes in simple parameters, whereas for performances with complex electronic instruments, the initial connectivity and parameter settings may be just as important to document.
To address the limitations of existing external pipeline inspection robots, including a narrow range of adaptable pipe diameters and difficulty traversing obstacles such as cross-pipelines, a novel wheel-clamping robot capable of circumferential rotation was designed. The robot’s composite drive mechanism adopts a dual-slider multi-link mechanism to enable rapid switching between its two motion modes: axial forward motion and circumferential rotation. After the robotic mechanism and component dimensions were specified, a geometric model of key points was established, determining an adaptable pipe diameter range of 74–203 mm. A force analysis of the two working states, axial forward motion and circumferential rotation, showed that the minimum driving torques required to complete these motions are 0.84 N·m and 1.23 N·m, respectively. Finally, a prototype was built and tested on pipelines. The experimental results show that the robot’s average speed when moving along the pipe axis is 0.195 m/s, and that it stably navigates obstacles through circumferential rotation, smoothly crossing T-shaped pipelines of different diameters and adapting to complex pipeline working conditions.
Étude n°1 is a solo for feedback and effects pedals by David Caulet. Originally designed to enrich the electric guitar’s timbre, effects pedals have been widely repurposed by experimental and improvising musicians. Although their use is now common, notation practices associated with these devices remain underdeveloped. This work explores the development of a graphic system dedicated to representing instrumental gestures and opens perspectives for a notation framework adapted to contemporary musical practices incorporating electronic technologies.
Building energy management (BEM) tasks require processing and learning from a variety of time-series data. Existing solutions rely on bespoke task- and data-specific models to perform these tasks, limiting their broader applicability. Inspired by the transformative success of Large Language Models (LLMs), Time-Series Foundation Models (TSFMs), trained on diverse datasets, have the potential to change this. Were TSFMs to achieve a level of generalizability across tasks and contexts akin to LLMs, they could fundamentally address the scalability challenges pervasive in BEM. To understand where they stand today, we evaluate TSFMs across four dimensions: (1) generalizability in zero-shot univariate forecasting, (2) forecasting with covariates for thermal behavior modeling, (3) zero-shot representation learning for classification tasks, and (4) robustness to performance metrics and varying operational conditions. Our results reveal that TSFMs exhibit limited generalizability, performing only marginally better than statistical models on unseen datasets and modalities for univariate forecasting. Similarly, inclusion of covariates in TSFMs does not yield performance improvements, and their performance remains inferior to conventional models that utilize covariates. While TSFMs generate effective zero-shot representations for downstream classification tasks, they may remain inferior to statistical models in forecasting when statistical models perform test-time fitting. Moreover, TSFMs’ forecasting performance is sensitive to evaluation metrics, and they struggle in more complex building environments compared to statistical models. These findings underscore the need for targeted advancements in TSFM design, particularly their handling of covariates and incorporating context and temporal dynamics into prediction mechanisms, to develop more adaptable and scalable solutions for BEM.
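The abstract's first evaluation dimension, zero-shot univariate forecasting against statistical baselines, can be illustrated with a minimal sketch. The snippet below implements a seasonal-naive baseline (repeat the last full season) and scores it with MAE; the series is a hypothetical daily-periodic "building load" signal invented for illustration, not data or a model from the paper, and the paper's actual TSFMs and baselines are not reproduced here.

```python
import math

def seasonal_naive_forecast(history, horizon, season=24):
    # Repeat the last full season of observations (e.g. 24 hourly readings)
    # across the forecast horizon -- the kind of simple statistical baseline
    # the abstract says TSFMs beat only marginally on unseen datasets.
    last_season = history[-season:]
    return [last_season[i % season] for i in range(horizon)]

def mae(actual, predicted):
    # Mean absolute error over the forecast horizon.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical daily-periodic load: two days of history, one day to forecast.
pattern = [10 + 5 * math.sin(2 * math.pi * h / 24) for h in range(24)]
history = pattern * 2
actual_next_day = [v + 0.5 for v in pattern]  # next day shifted up slightly

forecast = seasonal_naive_forecast(history, horizon=24, season=24)
print(round(mae(actual_next_day, forecast), 2))  # → 0.5
```

Because the next day is the last season shifted by a constant 0.5, the baseline's MAE is exactly that offset; a TSFM would be evaluated zero-shot against the same held-out horizon with the same metric.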
This book offers a practical introduction to digital history with a focus on working with text. It will benefit anyone who is considering carrying out research in history that has a digital or data element and will also be of interest to researchers in related fields within digital humanities, such as literary or classical studies. It offers advice on the scoping of a project, evaluation of existing digital history resources, a detailed introduction on how to work with large text resources, how to manage digital data and how to approach data visualisation. After placing digital history in its historiographical context and discussing the importance of understanding the history of the subject, this guide covers the life-cycle of a digital project from conception to digital outputs. It assumes no prior knowledge of digital techniques and shows you how much you can do without writing any code. It will give you the skills to use common formats such as plain text and XML with confidence. A key message of the book is that data preparation is a central part of most digital history projects, but that work becomes much easier and faster with a few essential tools.
This chapter provides a survey of the landscape of contemporary digital history, with coverage of the way individual research projects have built upon each other. An understanding of what is available and how it can be used is vital to choosing a viable research project, and this chapter covers technologies such as optical character recognition (OCR), handwritten archives, crowdsourcing, big data and web archives. The chapter concludes with discussion of publication broadly conceived, so not simply of the final outputs of a project.
This chapter outlines the history of digital history and of digital humanities more broadly. The historical narrative is intertwined with coverage of the technological changes that have made certain types of digital history feasible, or even popular, and with discussion of the economic drivers behind the preferential digitisation of certain types of material. The effect of the digital on the way historians approach reading, writing, collaboration, discovery (search) and citation is also discussed.
This chapter offers a guide to visualising historical data, with two case studies centred on the Post Office directory data used throughout the book. The first visualisation is a pair of stacked bar charts comparing the most common female professions with men in the same professions, and breaking those professions down by married and unmarried women. The second visualisation is a map of one London street in 1879, with discussion of the process and the thinking that led to the finished visualisation.
The Introduction provides a summary of the aims and intended audience of the book, and a justification of the choice of tools to be used: the book recommends well-tested, free tools for working with large amounts of text. The Introduction also draws attention to the importance of data cleaning – the preparation of data for use in a project. A precis of the following chapters and appendices is given.
The second of two chapters on working with text, this chapter covers structured text and, in particular, the markup language XML, with a short passage on the Text Encoding Initiative (TEI) guidelines. As with the previous chapter, the Post Office directory is used throughout as an example historical text.
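The kind of structured text this chapter covers can be sketched in miniature. Below, a hypothetical trade-directory entry is marked up in a TEI-like style and queried with Python's standard-library XML parser; the element names and the entry itself are illustrative assumptions, not the book's actual encoding of the Post Office directory.

```python
import xml.etree.ElementTree as ET

# A hypothetical directory entry in TEI-like markup (illustrative only).
entry_xml = """
<entry>
  <persName>Smith, Jane</persName>
  <occupation>milliner</occupation>
  <address><street>Fleet Street</street><number>12</number></address>
</entry>
"""

entry = ET.fromstring(entry_xml)
name = entry.findtext("persName")            # text of a direct child element
occupation = entry.findtext("occupation")
street = entry.findtext("address/street")    # simple path into nested markup
print(f"{name} ({occupation}), {street}")    # → Smith, Jane (milliner), Fleet Street
```

The point of such markup is exactly this: once the directory's implicit structure (name, occupation, address) is made explicit in XML, each field can be extracted reliably instead of being guessed from line layout.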
This chapter gives a description of the life-cycle of a digital history project, from digitisation of source material onwards, with advice on the practicalities and costs of different approaches to producing machine-readable text. There is introductory coverage of data cleaning and version control using Git, although these are covered more fully in later chapters.