We develop semantics and syntax for bicategorical type theory. Bicategorical type theory features contexts, types, terms, and directed reductions between terms. This type theory is naturally interpreted in a class of structured bicategories. We start by developing the semantics, in the form of comprehension bicategories. Examples of comprehension bicategories are plentiful; we study both specific examples and classes of examples constructed from other data. From the notion of comprehension bicategory, we extract the syntax of bicategorical type theory, that is, judgment forms and structural inference rules. We prove soundness of the rules by giving an interpretation in any comprehension bicategory. The semantic aspects of our work are fully checked in the Coq proof assistant, based on the UniMath library.
In this study, we investigate the creation and persistence of interfirm ties in a large-scale business transaction network. Business transaction relations (firms buying products or services from, or selling them to, each other) are driven by economic motives, but because trust is essential to business relationships, the social connections of owners or the geographical proximity of firms can also influence their development. However, studies of the formation of interfirm business transaction ties at a large scale are rare because of the significant data demands. The business transaction and ownership networks of Hungarian firms are constructed from two administrative datasets for 2016 and 2017. We show that direct or indirect connections in this two-layered network, including open triads in the business network, contribute to both the creation and persistence of business transaction ties. For our estimations, we utilize log-linear models and emphasize their efficiency in predicting links in such large networks. We contribute to the literature by presenting different patterns of business connections in a nationwide multilayer interfirm network.
Within the last decade, online sustainability knowledge-action platforms have proliferated. We surveyed 198 sustainability-oriented sites and conducted a review of 41 knowledge-action platforms, which we define as digital tools that advance sustainability through organized activities and knowledge dissemination. We analyzed platform structure and functionality through a systematic coding process based on key issues identified in three bodies of literature: (a) the emergence of digital platforms, (b) the localization of the sustainable development goals (SDGs), and (c) the importance of multi-level governance to sustainability action. While online collaborative tools offer an array of resources, our analysis indicates that they struggle to provide context-sensitivity and higher-level analysis of the trade-offs and synergies between sustainability actions. SDG localization adds another layer of complexity, where multi-level governance, actor, and institutional priorities may generate tensions as well as opportunities for intra- and cross-sectoral alignment. On the basis of our analysis, we advocate for the development of integrative, open-source, and dynamic global online data management tools that would enable the monitoring of progress and facilitate peer-to-peer exchange of ideas and experience among local government, community, and business stakeholders. We argue that by showcasing and exemplifying local actions, an integrative platform that leverages existing content from multiple extant platforms through effective data interoperability can provide additional functionality and significantly empower local actors to accelerate local to global actions, while also supporting complex system change.
This article proposes a control method for underactuated cartpole systems using a semi-implicit cascaded proportional-derivative (PD) controller. The proposed controller is composed of two conventional PD controllers, which stabilize the pole and cart second-order dynamics, respectively. The first PD controller is realized by transforming the pole dynamics into a virtual PD controller, with the coupling term exploited as the internal tracking target for the cart dynamics. The second PD controller then manipulates the cart dynamics to track that internal target. The solution to the internal tracking target relies on an equation set and features a semi-implicit process, which exploits the internal dynamics of the system. In addition, the design of the second PD controller relies on the parameters of the first PD controller in a cascaded manner. A stability analysis approach based on the Jacobian matrix is proposed and implemented for this fourth-order system. The proposed method is simple in design and intuitive to comprehend. Simulation results illustrate the superiority of the proposed method over a conventional double-loop PD controller in terms of convergence, with the theoretical conclusion of at least local asymptotic stability.
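The abstract does not give enough detail to reproduce the paper's semi-implicit design, but the cascaded idea itself can be sketched: an outer PD loop turns cart position error into an internal pole-angle target, and an inner PD loop tracks that target. The simplified linearized dynamics, the gains, and the explicit-Euler integration below are all illustrative assumptions, not the authors' method.

```python
# Linearized cartpole (angle theta measured from upright, effective length l):
#   theta_ddot = (g*theta - u) / l,   x_ddot = u   (u = commanded cart accel.)
# Generic cascaded PD sketch, NOT the paper's semi-implicit scheme:
#   outer loop: cart position error -> internal pole-angle target
#   inner loop: PD on the pole angle tracks that target
g, l = 9.81, 1.0
kp_t, kd_t = 20.0, 8.0     # inner (pole) gains: fast loop
kp_x, kd_x = 0.05, 0.1     # outer (cart) gains: slow loop

def control(x, xd, th, thd):
    th_ref = -(kp_x * x + kd_x * xd)  # outer PD: internal tracking target
    # inner PD: cancel gravity, impose PD error dynamics on (th - th_ref)
    return g * th + l * (kp_t * (th - th_ref) + kd_t * thd)

def simulate(th0, T=30.0, dt=1e-3):
    x = xd = 0.0
    th, thd = th0, 0.0
    for _ in range(int(T / dt)):          # explicit Euler, illustrative only
        u = control(x, xd, th, thd)
        x, xd = x + dt * xd, xd + dt * u
        th, thd = th + dt * thd, thd + dt * (g * th - u) / l
    return x, th

xf, thf = simulate(0.1)  # start with the pole tilted 0.1 rad
```

With well-separated loop bandwidths (inner natural frequency about 4.5 rad/s versus roughly 0.7 rad/s for the outer loop), both the pole angle and the cart position settle near zero.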
The main goal of this study is to investigate whether the publicly available sea state forecasts for the Aran Islands region in the Republic of Ireland can be improved. This improvement is achieved by combining local-scale sea state forecasts with Bayesian Model Averaging techniques. The question of what makes a good forecast has been around since the start of forecasting. With current state-of-the-art numerical models, computational power, and vast data availability, we consider whether it is possible to improve model forecasts using only a combination of publicly available forecasts, free open-source software, and very moderate computational power. It is shown that it is possible to improve the sea state forecast by at least $ 1\% $, and in some cases up to $ 8\% $. The reduction of error is between $ 6\% $ and $ 48\% $. With a more careful and specific selection of training parameters, it is possible to improve the forecast accuracy even more. The possibility of extending this local improvement to the whole coastal area around the island of Ireland is explored. Unfortunately, this is currently impossible due to a lack of live data buoys in the coastal waters. Nonetheless, it is shown that the proposed process is simple and can be implemented by anyone whose livelihood depends on an accurate sea state forecast. It does not require large computational power, the model forecasts are publicly available, and minimal to no training in forecasting and statistics is required to perform such improvements for one’s area of interest, provided one has access to live wave data.
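The abstract does not spell out the averaging details, but the core idea of weighting member forecasts by their past skill can be sketched. The inverse-MSE weighting below is a deliberate simplification standing in for full Bayesian Model Averaging, and the wave-height numbers are made up for illustration.

```python
# Toy forecast combination: weight each member forecast by its inverse
# mean-squared error over a training window. This is a simplified stand-in
# for Bayesian Model Averaging; all data below are hypothetical.
def inverse_mse_weights(observed, member_forecasts):
    mses = [sum((o - f) ** 2 for o, f in zip(observed, fc)) / len(observed)
            for fc in member_forecasts]
    inv = [1.0 / m for m in mses]
    total = sum(inv)
    return [w / total for w in inv]          # normalized to sum to 1

def combine(weights, member_forecasts):
    # Weighted average of the member forecasts at each time step.
    return [sum(w * f for w, f in zip(weights, step))
            for step in zip(*member_forecasts)]

# Hypothetical significant-wave-height observations and two model forecasts.
obs     = [1.2, 1.5, 1.9, 2.1, 1.8]
model_a = [1.3, 1.6, 2.0, 2.2, 1.9]   # consistently close to observations
model_b = [1.8, 1.0, 2.5, 1.5, 2.4]   # noisier member
w = inverse_mse_weights(obs, [model_a, model_b])
blend = combine(w, [model_a, model_b])
```

The better-performing member receives the larger weight, so the blended forecast stays close to the skillful member while still incorporating the other.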
Virtual reality (VR) is a powerful technology that promises to transform our lives. This balanced and interdisciplinary text blends the key components from computer graphics, perceptual psychology, human physiology, behavioral science, media studies, human-computer interaction, optical engineering, and sensing and filtering, showing how each contributes to engineering perceptual illusions. Steven LaValle draws on his unique experience as a teacher, researcher, and early founder of Oculus VR, to demonstrate how the best practices and insights from industry are built on fundamental computer science principles. Topics include media history, geometric modeling, optical systems, displays, eyes, ears, low-level perception, neuroscience of vision, graphical rendering, tracking systems, interaction mechanisms, audio, evaluating VR systems, and mitigating side effects. Students, researchers, and developers will gain a clear understanding of timeless foundations and new applications, enabling them to make innovative contributions to this growing field as scientists, engineers, business developers, and content makers.
Edited by
Jong Chul Ye, Korea Advanced Institute of Science and Technology (KAIST), Yonina C. Eldar, Weizmann Institute of Science, Israel, Michael Unser, École Polytechnique Fédérale de Lausanne
We provide a short, self-contained introduction to deep neural networks that is aimed at mathematically inclined readers. We promote the use of a vector–matrix formalism that is well suited to the compositional structure of these networks and that facilitates the derivation and description of the backpropagation algorithm. We present a detailed analysis of supervised learning for the two most common scenarios, (i) multivariate regression and (ii) classification, which rely on the minimization of least squares and cross-entropy criteria, respectively.
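The vector–matrix view of backpropagation can be sketched for the least-squares (regression) case: the residual is propagated right-to-left through transposed weight matrices and elementwise derivative masks. The one-hidden-layer architecture, tanh nonlinearity, and random data below are illustrative assumptions, and a finite-difference check verifies the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer, least-squares loss: L = 0.5 * ||W2 @ sigma(W1 @ x) - t||^2
# (biases omitted for brevity; shapes and data are illustrative).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x  = rng.normal(size=(3, 1))
t  = rng.normal(size=(2, 1))
sigma  = np.tanh
dsigma = lambda z: 1.0 - np.tanh(z) ** 2

def loss(W1, W2):
    return 0.5 * np.sum((W2 @ sigma(W1 @ x) - t) ** 2)

# Forward pass, then backward pass in vector-matrix form.
z1 = W1 @ x
h  = sigma(z1)
e  = W2 @ h - t                   # output residual
gW2 = e @ h.T                     # dL/dW2: outer product of residual and h
delta1 = (W2.T @ e) * dsigma(z1)  # residual pulled back through W2 and sigma
gW1 = delta1 @ x.T                # dL/dW1

# Central finite-difference check of one entry of dL/dW1.
eps = 1e-6
Wp, Wm = W1.copy(), W1.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
fd = (loss(Wp, W2) - loss(Wm, W2)) / (2 * eps)
```

The agreement between `fd` and `gW1[0, 0]` confirms that the compact matrix expressions implement the chain rule correctly.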
Since the groundbreaking performance improvement by AlexNet at the ImageNet challenge, deep learning has provided significant gains over classical approaches in various fields of data science, including image reconstruction. The availability of large-scale training datasets and advances in neural network research have resulted in the unprecedented success of deep learning in various applications. Nonetheless, the success of deep learning appears very mysterious. The basic building blocks of deep neural networks are convolution, pooling, and nonlinearity, which are primitive tools of mathematics. Interestingly, the cascaded connection of these primitive tools results in superior performance over traditional approaches. To understand this mystery, one can go back to the basic ideas of the classical approaches and examine their similarities to, and differences from, modern deep-neural-network methods. In this chapter, we explain the limitations of classical machine learning approaches and provide a review of the mathematical foundations needed to understand why deep neural networks have successfully overcome those limitations.
With diploid organisms, one is interested not only in discovering variants but also in discovering to which of the two haplotypes each variant belongs. One would thus like to identify the variants that are co-located on the same haplotype, a process called haplotype phasing. Assume we have managed to do haplotype phasing for several individuals. It is then of interest to do haplotype matching, that is, to locate long contiguous blocks shared by multiple individuals. The chapter covers algorithms and complexity analysis of these key haplotype analysis tasks. A close connection between classical indexes and a tailored data structure called the positional BWT index is established.
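The positional BWT mentioned above can be sketched compactly. The construction below follows Durbin's prefix/divergence-array idea: at each site the haplotypes are kept sorted by their reversed prefixes, and a divergence array records where each adjacent pair's match begins. The toy haplotype panel is made up for illustration.

```python
# Positional BWT sketch: a[i] lists haplotypes sorted by reversed prefix
# up to the current site; d[i] records the site where the match between
# the adjacent sorted haplotypes a[i-1] and a[i] begins.
def pbwt(X):
    M, N = len(X), len(X[0])
    a, d = list(range(M)), [0] * M
    for k in range(N):
        p = q = k + 1
        a0, a1, d0, d1 = [], [], [], []
        for i in range(M):
            h = a[i]
            p, q = max(p, d[i]), max(q, d[i])
            if X[h][k] == 0:          # stable partition by allele at site k
                a0.append(h); d0.append(p); p = 0
            else:
                a1.append(h); d1.append(q); q = 0
        a, d = a0 + a1, d0 + d1
    return a, d

# Toy panel: haplotypes 0 and 1 are identical, so after all sites they
# must be adjacent in sorted order with divergence 0 (match from site 0).
X = [[0, 1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1, 0],
     [1, 0, 1, 0, 0, 1],
     [0, 0, 1, 1, 0, 0]]
a, d = pbwt(X)
```

Long haplotype matches shared by multiple individuals appear as runs of adjacent entries with small divergence values, which is what makes haplotype matching a linear scan over the sorted panel.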
Inspired by the success of deep learning in computer vision tasks, deep learning approaches for various MRI problems have been extensively studied in recent years. Early deep learning studies for MRI reconstruction and enhancement were mostly based on image-domain learning. However, because the MR signal is acquired in the k-space domain, researchers have demonstrated that deep neural networks can be directly designed in k-space to utilize the physics of MR acquisition. In this chapter, the recent trend of k-space deep learning for MRI reconstruction and artifact removal is reviewed. First, scan-specific k-space learning, which is inspired by parallel MRI, is covered. Then we provide an overview of data-driven k-space learning. Subsequently, unsupervised learning for MRI reconstruction and motion artifact removal is discussed.
Ultrasound (US) imaging is susceptible to several types of artifacts. Most artifacts appear because of transducer limitations and simplified assumptions about wave propagation. Artifacts are sometimes used as a component that contains tissue information; however, they often lead to misinterpretation in clinical diagnosis. Therefore, to improve the clinical utility of ultrasound in difficult-to-image patients and settings, a number of artifact removal methods have been proposed that aim at boosting image quality. Classical optimization-based methods are severely limited by their performance and high computational requirements. Furthermore, it is difficult to choose parameters that produce high-quality output. A remedy for these issues is the deep learning approach, which offers higher performance than traditional methods at significantly reduced runtime complexity. Another major advantage is that the parameters learned during the training phase can be reused to process different input images. This has motivated the scientific community to design deep-neural-network-based approaches for US artifact removal tasks.
Analysing the content of a biological sequence can often be modeled as a segmentation problem. For example, one may wish to segment a genome into coding and non-coding regions, where only the former are translated into proteins. Statistical features of what genes usually look like can be used to derive an optimization framework. This process can be formalized through hidden Markov models, and the underlying segmentation problem can be solved using dynamic programming. This chapter introduces the key methods related to such optimization.
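The dynamic-programming step can be sketched with a toy two-state Viterbi decoder. The states, transition and emission probabilities, and input sequence below are illustrative placeholders, not a real gene model.

```python
import math

# Toy two-state HMM segmentation via Viterbi dynamic programming.
# State 'C' (coding-like) prefers symbol 'x'; state 'N' prefers 'y'.
# All probabilities are illustrative; sticky transitions favor long segments.
states = ['C', 'N']
start  = {'C': 0.5, 'N': 0.5}
trans  = {'C': {'C': 0.9, 'N': 0.1}, 'N': {'C': 0.1, 'N': 0.9}}
emit   = {'C': {'x': 0.9, 'y': 0.1}, 'N': {'x': 0.1, 'y': 0.9}}

def viterbi(seq):
    # V[s] = best log-probability of any state path ending in state s.
    V = {s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}
    back = []
    for c in seq[1:]:
        Vn, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda r: V[r] + math.log(trans[r][s]))
            ptr[s] = prev
            Vn[s] = V[prev] + math.log(trans[prev][s]) + math.log(emit[s][c])
        back.append(ptr)
        V = Vn
    # Trace back the optimal segmentation from the best final state.
    best = max(states, key=lambda s: V[s])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return ''.join(reversed(path))

segmentation = viterbi('xxxyyy')
```

Log-probabilities avoid numeric underflow on long sequences, and the backpointer table turns the table-filling recursion into an explicit segmentation.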
Classical index structures such as suffix trees are powerful, but they occupy much more space than the data they are built on. Many space-efficient alternatives exist that occupy space close to that of the input data. This chapter covers such data structures based on the Burrows–Wheeler transform (BWT). Special emphasis is given to the bidirectional BWT index, which can be used to solve basic genome analysis tasks by simulating suffix tree exploration without any sacrifice in run time. Space-efficient representations of de Bruijn graphs are also covered.
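The core BWT trick can be sketched with a toy FM-index: counting occurrences of a pattern by backward search over the transform. This one-directional sketch uses naive substring counting for rank; a real index would use compressed rank structures, and the bidirectional extension discussed in the chapter is omitted.

```python
# Toy FM-index: build the BWT of a text and count pattern occurrences by
# backward search. rank() is computed naively via bwt[:i].count(c); a
# production index would use wavelet trees or sampled rank structures.
def build_index(text):
    t = text + '$'                       # unique terminator, smallest symbol
    sa = sorted(range(len(t)), key=lambda i: t[i:])   # toy suffix array
    bwt = ''.join(t[i - 1] for i in sa)  # char preceding each sorted suffix
    counts = {}
    for c in t:
        counts[c] = counts.get(c, 0) + 1
    C, total = {}, 0                     # C[c] = # symbols smaller than c
    for c in sorted(counts):
        C[c] = total
        total += counts[c]
    return bwt, C

def count_occurrences(bwt, C, pattern):
    sp, ep = 0, len(bwt)                 # current suffix-array interval
    for c in reversed(pattern):          # extend the match one char leftwards
        if c not in C:
            return 0
        sp = C[c] + bwt[:sp].count(c)
        ep = C[c] + bwt[:ep].count(c)
        if sp >= ep:
            return 0
    return ep - sp

bwt, C = build_index('mississippi')
n_iss = count_occurrences(bwt, C, 'iss')
```

Each backward-search step narrows one suffix-array interval with two rank queries, which is why the whole query runs in time proportional to the pattern length once rank is supported efficiently.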
Soft sets were introduced as a means to study objects that are not defined in an absolute way and have found applications in numerous areas of mathematics, decision theory, and statistics. Soft topological spaces were first considered in Shabir and Naz ((2011). Computers & Mathematics with Applications 61(7), 1786–1799), and soft separation axioms for soft topological spaces were studied in El-Shafei et al. ((2018). Filomat 32(13), 4755–4771), El-Shafei and Al-Shami ((2020). Computational and Applied Mathematics 39(3), 1–17), and Al-shami ((2021). Mathematical Problems in Engineering 2021). In this paper, we introduce effective versions of soft separation axioms. Specifically, we focus our attention on computable u-soft and computable p-soft separation axioms and investigate various relations between them. We also compare the effective and classical versions of these soft separation axioms.