This paper proves normalisation theorems for intuitionist and classical negative free logic, without and with the operator $\iota$ for definite descriptions. Rules specific to free logic give rise to new kinds of maximal formulas additional to those familiar from standard intuitionist and classical logic. When $\iota$ is added it must be ensured that reduction procedures involving replacements of parameters by terms do not introduce new maximal formulas of higher degree than the ones removed. The problem is solved by a rule that permits restricting these terms in the rules for $\forall$, $\exists$ and $\iota$ to parameters or constants. A restricted subformula property for deductions in systems without $\iota$ is considered. It is improved upon by an alternative formalisation of free logic building on an idea of Jaśkowski’s. In the classical system the rules for $\iota$ require treatment known from normalisation for classical logic with $\lor$ or $\exists$. The philosophical significance of the results is also indicated.
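As an illustration of the kind of reduction at issue, consider the standard $\forall$-detour from textbook natural deduction (not the paper's free-logic rules): a maximal formula arises when $\forall$-introduction is immediately followed by $\forall$-elimination, and the reduction replaces the parameter $a$ by the term $t$ throughout the deduction $\Pi$. If $t$ may be an arbitrary complex term, such substitutions are exactly where new maximal formulas can appear; restricting $t$ to parameters or constants blocks this.

```latex
% Standard forall-detour and its reduction, shown only to illustrate
% "replacement of parameters by terms" in a reduction procedure:
\[
\dfrac{\dfrac{\Pi}{A(a)}}{\dfrac{\forall x\, A(x)}{A(t)}}
\quad\rightsquigarrow\quad
\dfrac{\Pi[t/a]}{A(t)}
\]
```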
Anselm described god as “something than which nothing greater can be thought” [1, p. 93], and Descartes viewed him as “a supreme being” [7, p. 122]. I first capture those characterizations formally in a simple language for monadic predicate logic. Next, I construct a model class inspired by Stoic and medieval doctrines of grades of being [8, 20]. Third, I prove the models sufficient for recovering, as internal mathematics, the famous ontological argument of Anselm, and show that argument to be, on this formalization, valid. Fourth, I extend the models to incorporate a modality fit for proving that any item than which necessarily no greater can be thought is also necessarily real. Lastly, with the present approach, I blunt the sharp edges of notable objections to ontological arguments by Gaunilo and by Grant. A trigger warning: every page of this writing flouts the old saw “Existence is not a predicate” and flagrantly.
Public agencies routinely collect administrative data that, when shared and integrated, can form a rich picture of the health and well-being of the communities they serve. One major challenge is that these datasets are often siloed within individual agencies or programs and using them effectively presents legal, technical, and cultural obstacles. This article describes work led by the North Carolina Department of Health and Human Services (NCDHHS) with support from university-based researchers to establish enterprise-wide data governance and a legal framework for routine data sharing, toward the goal of increased capacity for integrated data analysis, improved policy and practice, and better health outcomes for North Carolinians. We relied on participatory action research (PAR) methods and Deliberative Dialogue to engage a diverse range of stakeholders in the co-creation of a data governance process and legal framework for routine data sharing in NCDHHS. Four key actions were taken as a result of the participatory research process: NCDHHS developed a data strategy road map, created a data sharing guidebook to operationalize legal and ethical review of requests, staffed the Data Office, and implemented a legal framework. In addition to describing how these ongoing streams of work support data use across a large state health and human services agency, we provide three use cases demonstrating the impact of this work. This research presents a successful, actionable, and replicable framework for developing and implementing processes to support intradepartmental data access, integration, and use.
In the era of the Industrial Revolution 4.0 (IR 4.0), the adequacy of training models for industrial needs is being challenged. Africa is a skills hub threatened by unemployment among young people (especially graduates), by competition, and by the fragility of its industrial fabric. Through a systematic literature review, this article highlights the aspects and outcomes of the educational revolution that must accompany IR 4.0. The results show that IR 4.0 opens new careers and that inadequate training is a key barrier to the successful digital transformation of industry. University 4.0 is the conversion needed to overcome this barrier. This article explains this new academic model for generating skills, understood as the ability to perform activities effectively with strong technical, digital, and flexible management capacities. Given the low adoption of IR 4.0 and the absence of a systematic literature review on the topic, this article offers a useful platform for both the academic and industrial research communities.
Enabling private sector trust stands as a critical policy challenge for the success of the EU Data Governance Act and Data Act in promoting data sharing to address societal challenges. This paper attributes the widespread trust deficit to the unmanageable uncertainty that arises from businesses’ limited usage control to protect their interests in the face of unacceptable perceived risks. For example, a firm may hesitate to share its data with others in case it is leaked and falls into the hands of business competitors. To illustrate this impasse, competition, privacy, and reputational risks are introduced, respectively, in the context of three suboptimal approaches to data sharing: data marketplaces, data collaboratives, and data philanthropy. The paper proceeds by analyzing seven trust-enabling mechanisms comprised of technological, legal, and organizational elements to balance trust, risk, and control and assessing their capacity to operate in a fair, equitable, and transparent manner. Finally, the paper examines the regulatory context in the EU and the advantages and limitations of voluntary and mandatory data sharing, concluding that an approach that effectively balances the two should be pursued.
The momentum surrounding the use of data for the public good has grown over the past few years, resulting in several initiatives and rising interest from public bodies, intergovernmental organizations, and private organizations. The potential benefits of data collaboratives (DCs) have been demonstrated in several contexts, including health, migration, pandemics, and public transport. However, these cross-sectoral partnerships have frequently not progressed beyond the pilot level, a condition that hinders their ability to generate long-term societal benefits and scale their impact. Governance models play an important role in ensuring DCs’ stability over time, yet existing models do not address this issue. Our research investigates DCs’ governance settings to determine the design configurations of governance dimensions that enhance DCs’ long-term stability. Drawing on the literature on collaborative governance and DCs, the research identifies seven key governance dimensions for the long-term stability of DCs. Then, through the analysis of 16 heterogeneous case studies, it outlines the optimal design configuration for each dimension. The findings contribute to academic discourse by shedding light on the governance aspects that bolster the long-term stability of DCs. This research also offers practical insights and evidence-based guidelines for practitioners, aiding in the creation and maintenance of enduring DCs.
A number of data governance policies have recently been introduced or revised by the Indian Government with the stated goal of unlocking the developmental and economic potential of data. The policies seek to implement standardized frameworks for public data management and establish platforms for data exchange. However, India has a longstanding history of record-keeping and information transparency practices, which are crucial in the context of data management. These connections have not been explicitly addressed in recent policies like the Draft National Data Governance Framework, 2022. To understand if record management has a role to play in modern public data governance, we analyze the key new data governance framework and the associated Indian Urban Data Exchange platform as a case study. The study examines the exchange where public records serve as a potential source of data. It evaluates the coverage and the actors involved in the creation of this data to understand the impact of records management on government departments’ ability to publish datasets. We conclude that while India recognizes the importance of data as a public good, it needs to integrate digital records management practices more effectively into its policies to ensure accurate, up-to-date, and accessible data for public benefit.
This article proposes five ideas that the design of data governance policies for the trustworthy use of artificial intelligence (AI) in Africa should consider. The first is for African states to assess their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, which involves data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data are processed in a fair, lawful, and accountable manner; minimize the harmful effects of personal data misuse or abuse on data subjects and other victims; and promote a beneficial, trusted use of personal data. The third is for the data policy to align with supranational rights-respecting standards like the African Charter on Human and Peoples’ Rights and the AU Convention on Cyber Security and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is to prioritize the use of representative and interoperable data and to ensure a transparent procurement process for AI systems from abroad where no local options exist.
In 2022, the world experienced the deadliest year of armed conflict since the 1994 Rwandan genocide. The intensity and frequency of recent conflicts have drawn renewed attention to failures in forecasting, that is, failures to anticipate conflicts. Such failures reduce the time, motivation, and opportunities peacemakers have to intervene through mediation or peacekeeping operations. In recent years, the growth in the volume of open-source data, coupled with wide-scale advances in machine learning, suggests that computational methods may help the international community forecast intrastate conflict more accurately, and in doing so stem the rise of conflict. In this commentary, we argue that conflict forecasting is promising under several technical and policy conditions. From a technical perspective, its success depends on improvements in the quality of conflict-related data and an increased focus on model interpretability. In terms of policy implementation, we suggest that this technology should be used primarily to aid policy analysis heuristically and to help identify unexpected conflicts.
In the burgeoning landscape of African smart cities, education stands as a cornerstone of sustainable development. Accurate prediction of student performance has immense social importance, enabling early intervention, improved learning outcomes, and equitable access to quality education, in line with the sustainable development goals. Traditional models often falter in Africa due to imbalanced datasets and irrelevant features. This research applies machine learning in Nigerian classrooms to predict underperforming students. Synthetic minority oversampling, edited nearest neighbors, and the Boruta algorithm for feature selection, together with genetic algorithms for efficiency, enhance model performance. The ensemble models achieve AUCs of 90–99.7%, effectively separating low-performing from high-performing students. Deployed via Streamlit and Heroku, the models support real-time, data-driven decisions, enhancing early intervention and personalized learning and informing policy and public service design. By leveraging ML, this research empowers universities to support struggling students, optimize educational costs, and promote inclusive development, paving the way for data-driven solutions and equitable educational opportunities across the continent.
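To make the oversampling step concrete, here is a stripped-down, stdlib-only sketch of SMOTE-style interpolation: a sampled minority point is nudged toward its nearest minority-class neighbour to synthesize a new sample. The feature names and data are illustrative assumptions; a real pipeline would use imbalanced-learn's SMOTE and edited-nearest-neighbors implementations.

```python
import random

def smote_like(minority, n_new, seed=0):
    """Toy SMOTE-style oversampling: interpolate each sampled minority
    point toward its nearest minority-class neighbour.
    A stdlib-only sketch, not the paper's pipeline."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # nearest neighbour among the *other* minority points
        nn = min((p for p in minority if p is not x),
                 key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

# Hypothetical features per student (e.g. attendance, test score),
# with only a few "at risk" cases to balance out.
minority = [(0.20, 0.35), (0.25, 0.30), (0.15, 0.40)]
new_points = smote_like(minority, 5)
```

Each synthetic point lies on the segment between two real minority points, so the new samples stay inside the minority class's region of feature space.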
Studies of court administration in India have so far focused largely on caseload management and judge strength in the higher judiciary. In-depth investigations of the performance of India’s lower courts, the primary locus of a citizen’s contact with the judiciary, are rarer, largely because of the lack of available data at scale. We conduct a quantitative analysis of a large dataset covering more than 1,700 Indian district courts between 2010 and 2018 to assess court performance through the timeliness of case disposal. Our measure is median days to decision—the median number of days it takes a district court in India to decide a case. We aim to understand the impact of well-established factors—working strength and tenure of judges, case administration, age distribution of cases, and case category or type—on district courts’ performance. We find that court type and the nature of cases are important predictors of a district court’s performance, and that the total number of judge working days and average bench strength are not: the workload per judge is actually lower in low-performing district courts than in high-performing ones. Our study also reveals the strengths and weaknesses of the available judicial data platforms and points toward reforms in judicial administration to address these concerns.
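The median-days-to-decision measure is simple to state precisely. A minimal sketch, assuming cases are given as (filing date, decision date) pairs; the input shape and function name are illustrative, not the paper's actual schema:

```python
from datetime import date
from statistics import median

def median_days_to_decision(cases):
    """Median number of days between filing and decision.
    `cases` is an iterable of (filing_date, decision_date) pairs;
    this shape is a hypothetical stand-in for the paper's dataset."""
    return median((decided - filed).days for filed, decided in cases)

# Toy example with three decided cases.
cases = [
    (date(2014, 1, 10), date(2014, 7, 9)),   # 180 days
    (date(2015, 3, 1), date(2016, 3, 1)),    # 366 days (leap year)
    (date(2016, 6, 1), date(2016, 6, 29)),   # 28 days
]
print(median_days_to_decision(cases))  # 180
```

The median (rather than the mean) keeps the measure robust to the small number of cases that remain pending for many years.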
This innovative introduction to the foundations of signals, systems, and transforms emphasises discrete-time concepts, smoothing the transition towards more advanced study in digital signal processing (DSP). A digital-first approach, introducing discrete-time concepts from the beginning, equips students with a firm theoretical foundation in signals and systems while emphasising topics fundamental to understanding DSP. Continuous-time approaches are introduced in later chapters, giving students a well-rounded understanding that maintains a strong digital emphasis. Real-world applications, including music signals, signal denoising systems, and digital communication systems, are introduced to motivate students. Early introduction of core concepts in digital filtering, the DFT, and the FFT provides a frictionless transition to more advanced study. The book includes over 325 end-of-chapter problems and over 50 computational problems using MATLAB. Accompanied online by solutions and code for instructors, this rigorous textbook is ideal for undergraduate students in electrical engineering taking an introductory course in signals, systems, and signal processing.
Diabetic retinopathy (DR) is a complication of diabetes that causes blindness, and the early detection of DR from retinopathy images remains a challenging task. Hence, a novel smart method for early prediction of DR from fundus images, based on an innovative ResNet optimization, is introduced. Initially, the fundus image is scaled during preprocessing and converted to grayscale. Because existing studies neglect unique features that are crucial for predicting the earliest signs of DR, a novel Fractional Radon Transform with Visibility Graph is introduced to extract features such as microaneurysm count, dot and blot hemorrhage count, statistical measures, and retinal layer thickness. A Generalized Cosine Fractional Radon Transform captures the image’s fine-scale texture information, and thereby the statistical measures, while a weighted Horizontal Visibility Graph examines the spatial relationships between pixel pairs based on their gray-level values. Existing works also fail to identify the small, fine dark areas that are discarded by the morphological opening process. To overcome this, a Morphological Black Hat Transform with an Optimized ResNet Algorithm is implemented: segmentation uses an Enriched Black Hat Transform-based morphological operation to identify fine dark regions among the pixels of the eye samples, and classification uses a ResNet-driven Socio Grasshopper Optimization Algorithm (S-GOA) to optimally predict the stages of DR. The results show that the proposed model outperforms existing techniques in performance and accuracy.
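To illustrate why a black-hat operation picks out small dark lesions, here is a minimal, stdlib-only sketch of the classical grayscale black-hat (morphological closing minus the image) on a toy 5x5 patch. This is the textbook operation, not the paper's Enriched Black Hat Transform; the image values are invented for illustration.

```python
def dilate(img, k=1):
    """Grayscale dilation: max over a (2k+1)x(2k+1) neighborhood."""
    h, w = len(img), len(img[0])
    return [[max(img[a][b]
                 for a in range(max(0, i - k), min(h, i + k + 1))
                 for b in range(max(0, j - k), min(w, j + k + 1)))
             for j in range(w)] for i in range(h)]

def erode(img, k=1):
    """Grayscale erosion: min over a (2k+1)x(2k+1) neighborhood."""
    h, w = len(img), len(img[0])
    return [[min(img[a][b]
                 for a in range(max(0, i - k), min(h, i + k + 1))
                 for b in range(max(0, j - k), min(w, j + k + 1)))
             for j in range(w)] for i in range(h)]

def black_hat(img):
    """Black-hat = closing (dilate then erode) minus the image;
    it highlights small dark spots against a brighter background."""
    closed = erode(dilate(img))
    return [[c - v for c, v in zip(crow, vrow)]
            for crow, vrow in zip(closed, img)]

# A bright 5x5 patch with one small dark spot (e.g. a blot hemorrhage).
img = [[200] * 5 for _ in range(5)]
img[2][2] = 50
bh = black_hat(img)
# The dark spot stands out (bh[2][2] == 150) while the background is 0.
```

The closing fills the small dark spot with the surrounding bright value, so subtracting the original image leaves a strong response exactly at the spot, which an opening-based pipeline would have discarded.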
New scientific knowledge is needed more urgently than ever, to address global challenges such as climate change, sustainability, health, and societal well-being. Could artificial intelligence (AI) accelerate science to meet these global challenges in time? AI is already revolutionizing individual scientific disciplines, but we argue here that it could be more holistic and encompassing. We introduce the concept of virtual laboratories as a new perspective on scientific knowledge generation and a means to incentivize new AI research and development. Despite the often perceived domain-specific research practices and inherent tacit knowledge, we argue that many elements of the research process recur across scientific domains and that even common software platforms for serving different domains may be possible. We outline how virtual laboratories will make it easier for AI researchers to contribute to a broad range of scientific domains, and highlight the mutual benefits virtual laboratories offer to both AI and domain scientists.
Prostate cancer is the second most common malignancy in American men. High-dose-rate brachytherapy is a popular treatment technique in which a large, localized radiation dose is used to kill cancerous tissue. Curvilinear catheter implantation inside the prostate gland, providing access channels that host the radiation source, has shown superior dosimetric constraints compared to straight needles. To this end, we have introduced an active needle that curves inside the prostate conformally to the patient’s specific anatomy, for improved dose distribution to the prostate and reduced toxicity to the organs at risk. This work presents closed-loop control of our tendon-driven active needle in a water medium and in air, using real-time position feedback of the tip from an ultrasound (US) or an electromagnetic (EM) tracking sensor, respectively. The active needle consists of a compliant flexure section that realizes bending in two directions via actuation of two internal tendons. Tracking errors using the US and EM trackers are estimated and compared. Results show that the bending angle of the active needle can be controlled using position feedback from the US or EM tracking system with a bending angle error of less than 1.00 degree when delay is disregarded. We conclude that the actuation system and controller presented in this work can realize a desired bending angle at the active needle tip with reasonable accuracy, paving the way for tip tracking and manipulation control evaluations in prostate brachytherapy.
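A closed-loop bending-angle controller of this kind can be sketched in a few lines. The proportional gain and the first-order plant response below are invented stand-ins, not the paper's actuation dynamics or controller; the sketch only shows the feedback structure (measure angle, compute error, command tendons):

```python
def control_bending(target_deg, kp=0.5, steps=200):
    """Toy proportional controller driving a needle-tip bending angle
    toward `target_deg` using simulated position feedback.
    Gain `kp` and the plant response factor are illustrative assumptions."""
    angle = 0.0  # measured bending angle, e.g. from US/EM tip tracking
    history = []
    for _ in range(steps):
        error = target_deg - angle   # feedback error
        command = kp * error         # tendon actuation command
        angle += 0.8 * command       # simplistic first-order plant response
        history.append(angle)
    return angle, history

final_angle, history = control_bending(20.0)
```

With these numbers the error shrinks by a constant factor each step, so the simulated tip angle settles at the 20-degree target; the real system additionally contends with sensing delay and tendon friction, which is why the paper reports errors "when delay is disregarded".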