In order to be effective mathematics educators, teachers need more than content knowledge: they need to be able to make mathematics comprehensible and accessible to their students. Teaching Key Concepts in the Australian Mathematics Curriculum Years 7 to 10 ensures that pre-service and practising teachers in Australia have the tools and resources required to teach lower secondary mathematics.
By simplifying the underlying concepts of mathematics, this book equips teachers to design and deliver mathematics lessons at the lower secondary level. The text provides a variety of practical activities and teaching ideas that translate the latest version of the Australian Curriculum into classroom practice. It covers the challenges of middle year mathematics, including the current decline in student numeracy, as well as complex theories which teachers can struggle to explain clearly. Topics include number, algebra, measurement, space, statistics and probability. Whether educators have recently studied more complicated mathematics or are teaching out of field, they are supported to recall ideas and concepts that they may have forgotten – or that may not have been made explicit in their own education.
Authored by experienced classroom educators and academics, this book is a vital resource for pre-service and practising Years 7 to 10 mathematics teachers, regardless of their backgrounds and experiences.
Picture, for a moment, enlisting the help of automatic translation when you seek medical attention in a foreign country and need to explain, in no uncertain terms, where you experience pain and with what intensity. I experienced this during my first year in the US after moving there from Israel. Now consider that I’m not only a user of language technologies but also a researcher working on these technologies. As such, I’m also aware of their limitations. For example, I know that translation systems may translate figurative expressions literally, or that certain inputs can make them generate incorrect “translations” in the form of a religious text.
Although the internet has removed geographical boundaries, transforming the world into a global village, English remains the dominant language online. New forms of online communication such as emoji and memes have become an integral part of internet language. While it’s tempting to think of such visual communication formats as removing cultural barriers – after all, emoji look like a universal alphabet – their interpretation may still rely on cultural references.
We are entering a new phase in the information revolution driven by the introduction of innovative artificial intelligence (AI) technologies. From the rise of mass media, mass communications and the expansion of the internet, to mobile computing, social networks and Generative AI, this important and authoritative book outlines the key changes over the last thirty years that have led to this moment.
Drawing on established frameworks, theories, historical research and empirical evidence, this book argues that the current wave of AI-driven innovations represents a step-change in how organisations can extract value from data, and that this will have significant implications for business innovation and how companies compete. Individual chapters explore (a) the history of the information industry and key milestones in artificial intelligence, (b) an overview of the data and AI landscape, (c) the opportunities and challenges of the AI revolution, (d) the ethical, policy and legal issues of data-driven AI, and (e) scenarios for where the data revolution is heading up to 2030.
Let $\Sigma$ be an alphabet and $\mu$ be a distribution on $\Sigma ^k$ for some $k \geqslant 2$. Let $\alpha \gt 0$ be the minimum probability of a tuple in the support of $\mu$ (denoted $\mathsf{supp}(\mu )$). We treat the parameters $\Sigma , k, \mu , \alpha$ as fixed and constant. We say that the distribution $\mu$ has a linear embedding if there exist an Abelian group $G$ (with the identity element $0_G$) and mappings $\sigma _i : \Sigma \rightarrow G$, $1 \leqslant i \leqslant k$, such that at least one of the mappings is non-constant and for every $(a_1, a_2, \ldots , a_k)\in \mathsf{supp}(\mu )$, $\sum _{i=1}^k \sigma _i(a_i) = 0_G$. In [Bhangale-Khot-Minzer, STOC 2022], the authors asked the following analytical question. Let $f_i: \Sigma ^n\rightarrow [\!-1,1]$ be bounded functions, such that at least one of the functions $f_i$ essentially has degree at least $d$, meaning that the Fourier mass of $f_i$ on terms of degree less than $d$ is at most $\delta$. If $\mu$ has no linear embedding (over any Abelian group), then is it necessarily the case that
$$\Big | \mathop{\mathbb{E}}_{({\textbf {x}}_1, {\textbf {x}}_2, \ldots , {\textbf {x}}_k)\sim \mu ^{\otimes n}}\Big [ \prod _{i=1}^{k} f_i({\textbf {x}}_i)\Big ] \Big | \leqslant o_{d,\delta }(1),$$ where the right hand side $\to 0$ as the degree $d \to \infty$ and $\delta \to 0$?
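To make the notion of a linear embedding concrete, here is a small example of our own (not taken from the paper): take $\Sigma = \{0,1\}$, $k=3$, and let $\mu$ be uniform on the even-parity triples, so that $\mathsf{supp}(\mu ) = \{(a_1,a_2,a_3)\in \{0,1\}^3 : a_1\oplus a_2\oplus a_3 = 0\}$. Taking $G = \mathbb{Z}_2$ and $\sigma _1 = \sigma _2 = \sigma _3$ to be the identity map gives $\sigma _1(a_1)+\sigma _2(a_2)+\sigma _3(a_3) = 0_G$ for every tuple in the support, so this $\mu$ has a linear embedding; the question above concerns distributions that admit no such embedding over any Abelian group.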
In this paper, we answer this analytical question fully and in the affirmative for $k=3$. We also show the following two applications of the result.
1. The first application is related to hardness of approximation. Using the reduction from [5], we show that for every $3$-ary predicate $P:\Sigma ^3 \to \{0,1\}$ such that $P$ has no linear embedding, an SDP (semi-definite programming) integrality gap instance of a $P$-Constraint Satisfaction Problem (CSP) with gap $(1,s)$ can be translated into a dictatorship test with completeness $1$ and soundness $s+o(1)$, under certain additional conditions on the instance.
2. The second application is related to additive combinatorics. We show that if the distribution $\mu$ on $\Sigma ^3$ has no linear embedding, the marginals of $\mu$ are uniform on $\Sigma$, and $(a,a,a)\in \mathsf{supp}(\mu )$ for every $a\in \Sigma$, then every large enough subset of $\Sigma ^n$ contains a triple $({\textbf {x}}_1, {\textbf {x}}_2,{\textbf {x}}_3)$ from $\mu ^{\otimes n}$ (and in fact a significant density of such triples).
The aim of this study is to explore how large language models (LLMs) integrated with structured versus unstructured concept generation techniques (CGTs) influence designers’ creative thinking processes and outputs. Using human–human collaboration (HHC) as a baseline, a 2 × 2 mixed factorial design was adopted to investigate the effects of collaborator type (between-subjects: LLM-based agents vs. experienced designers) and CGT type (within-subjects: brainstorming vs. TRIZ). Two LLM-based agents, IntelliStorm and EvoluTRIZ, were developed for the study, with 32 participants randomly assigned to either the HHC or human–agent collaboration (HAC) groups. Brain activity was measured using functional near-infrared spectroscopy, while outputs were assessed through expert evaluations. Results showed that, compared with HHC, designers in HAC exhibited lower cognitive load, better coordination of cognitive resources, and greater fluency and flexibility of thinking. Moreover, the two CGTs revealed distinct patterns: brainstorming activated the right dorsolateral prefrontal cortex (PFC) as the core connectivity region, enhancing ideational fluency, whereas TRIZ activated the left dorsolateral PFC, facilitating refined thinking. Although HAC demonstrated stronger overall performance, HHC retained unique advantages in originality. This research offers novel neuroscientific insights and provides evidence-based guidance for developing more effective LLM-based design agents.
Probabilistic Logic Programming (PLP) under the distribution semantics is a leading approach to practical reasoning under uncertainty. An advantage of the distribution semantics is its suitability for implementation as a Prolog or Python library, available through two well-maintained implementations, namely ProbLog and cplint/PITA. However, current formulations of the distribution semantics use point probabilities, making it difficult to express epistemic uncertainty, such as that arising from hierarchical classifications produced by computer vision models. Belief functions generalize probability measures as non-additive capacities and address epistemic uncertainty via interval probabilities. This paper introduces interval-based Capacity Logic Programs based on an extension of the distribution semantics to include belief functions and describes properties of the new framework that make it amenable to practical applications.
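To illustrate the distribution semantics with point probabilities that the paper starts from, here is a minimal, self-contained Python sketch (our illustration, not the ProbLog or cplint/PITA implementation; the fact names are made up): it enumerates all total choices of the probabilistic facts and sums the probabilities of the possible worlds in which a query succeeds.

```python
# Minimal illustration of the distribution semantics with point probabilities.
# Not the ProbLog or cplint/PITA implementation; fact names are hypothetical.
from itertools import product

# Probabilistic facts: name -> probability of the fact being true.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # Deterministic rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

def query_probability(query):
    """Sum the probabilities of all total choices (possible worlds)
    in which the query succeeds."""
    total = 0.0
    names = list(prob_facts)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name, value in world.items():
            p = prob_facts[name]
            weight *= p if value else (1.0 - p)
        if query(world):
            total += weight
    return total

print(query_probability(alarm))  # 0.28 = 1 - 0.9 * 0.8
```

Replacing each point probability with an interval, as belief functions do, turns this single number into lower and upper bounds on the query probability; that interval-based setting is what the paper's Capacity Logic Programs formalize.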
6D pose estimation recovers an object’s position and orientation in 3D space, playing a critical role in robotic grasping. However, traditional sparse keypoint-based methods generally rely on a limited number of feature points, restricting their performance under occlusion and viewpoint variations. To address this issue, we propose a novel Neighborhood-aware Graph Aggregation Network (NGANet) for precise pose estimation, which combines fully convolutional networks and graph convolutional networks (GCNs) to establish dense correspondences between 2D–3D and 3D–3D spaces. The $K$-nearest neighbor algorithm is integrated to build neighborhood relationships within isolated point clouds, followed by GCNs that aggregate local geometric features. When combined with mesh data, both surface details and topological shapes can be modeled. A positional encoding attention mechanism is introduced to adaptively fuse these multimodal features into a unified, spatially coherent representation of pose-specific features. Extensive experiments indicate that our proposed NGANet achieves higher estimation accuracy on the LINEMOD and Occlusion-LINEMOD datasets. Its effectiveness is also validated in real-world scenarios.
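As a rough illustration of the neighborhood-building step described above (a NumPy sketch under our own simplifying assumptions, not the NGANet architecture), k-nearest-neighbor indices can be computed from pairwise distances and each point's feature can then be aggregated with those of its neighbors:

```python
# Illustrative sketch of k-nearest-neighbor graph construction and a simple
# neighborhood feature aggregation step; not the NGANet architecture itself.
import numpy as np

def knn_indices(points, k):
    """For each 3D point, return the indices of its k nearest neighbors."""
    diffs = points[:, None, :] - points[None, :, :]   # (N, N, 3) pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)            # (N, N) pairwise distances
    np.fill_diagonal(dists, np.inf)                   # exclude each point itself
    return np.argsort(dists, axis=1)[:, :k]           # (N, k) neighbor indices

def aggregate_neighborhood(features, neighbors):
    """Concatenate each point's feature with the mean of its neighbors' features,
    a minimal stand-in for one graph-convolution-style aggregation step."""
    neighbor_feats = features[neighbors]              # (N, k, C)
    return np.concatenate([features, neighbor_feats.mean(axis=1)], axis=-1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))       # toy point cloud
feats = rng.normal(size=(128, 16))    # toy per-point features
out = aggregate_neighborhood(feats, knn_indices(pts, k=8))
print(out.shape)                      # (128, 32)
```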
The Fregean ontology can be naturally interpreted within set theory with urelements, where objects correspond to sets and urelements, and concepts to classes. Consequently, Fregean abstraction principles can be formulated as set-theoretic principles. We investigate how the size of reality—i.e., the number of urelements—interacts with these principles. We show that Basic Law V implies that for some well-ordered cardinal $\kappa $, there is no set of urelements of size $\kappa $. Building on recent work by Hamkins [10], we show that, under certain additional axioms, Basic Law V holds if and only if the urelements form a set. We construct models of urelement set theory in which the Reflection Principle holds while Hume’s Principle fails for sets. Additionally, assuming the consistency of an inaccessible cardinal, we produce a model of Kelley–Morse class theory with urelements that has a global well-ordering but lacks a definable map satisfying Hume’s Principle for classes.
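For reference (our addition, giving the standard formulations that the paper adapts to the set/class setting), the two abstraction principles mentioned above are $$\text{Basic Law V:}\quad \varepsilon F = \varepsilon G \;\leftrightarrow\; \forall x\,(Fx \leftrightarrow Gx), \qquad \text{Hume's Principle:}\quad \#F = \#G \;\leftrightarrow\; F \approx G,$$ where $\varepsilon F$ denotes the extension of the concept $F$, $\#F$ its cardinal number, and $F \approx G$ means that the objects falling under $F$ and those falling under $G$ can be put in one-to-one correspondence.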
We study randomized generation of sequences of test inputs to a system using Prolog. Prolog is a natural fit for generating test sequences that have a complex, logically interdependent structure. To counter the problems posed by a large (or infinite) set of possible tests, randomization is a natural choice. We study the impact that randomization, in conjunction with SLD resolution, has on test performance. To this end, this paper proposes two strategies for adding randomization to a test-generating program. One strategy works on top of standard Prolog semantics, whereas the other alters the SLD selection function. We analyze the mean time to reach a test case and the mean number of generated test cases in the framework of Markov chains. Finally, we provide an additional empirical evaluation and comparison of both approaches.
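The Markov-chain analysis mentioned above can be illustrated with the standard computation of expected time to absorption; the sketch below (a generic example with made-up transition probabilities, not one of the paper's models) solves for the mean number of derivation steps until a "test found" state is reached.

```python
# Expected number of steps until absorption in a Markov chain (generic example;
# the transition probabilities here are made up, not taken from the paper).
import numpy as np

# Q holds transition probabilities among the transient states only; the
# remaining probability mass in each row leads to the absorbing "test found" state.
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])

# Fundamental matrix N = (I - Q)^{-1}; expected steps to absorption t = N @ 1.
N = np.linalg.inv(np.eye(len(Q)) - Q)
t = N @ np.ones(len(Q))
print(t)   # mean number of steps from each transient state (here [5., 5.])
```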
Where dual-numbers forward-mode automatic differentiation (AD) pairs each scalar value with its tangent value, dual-numbers reverse-mode AD attempts to achieve reverse AD using a similarly simple idea: by pairing each scalar value with a backpropagator function. Its correctness and efficiency on higher-order input languages have been analysed by Brunel, Mazza and Pagani, but this analysis used a custom operational semantics for which it is unclear whether it can be implemented efficiently. We take inspiration from their use of linear factoring to optimise dual-numbers reverse-mode AD to an algorithm that has the correct complexity and enjoys an efficient implementation in a standard functional language with support for mutable arrays, such as Haskell. Aside from the linear factoring ingredient, our optimisation steps consist of well-known ideas from the functional programming community. We demonstrate the use of our technique by providing a practical implementation that differentiates most of Haskell98. Where previous work on dual-numbers reverse AD has required sequentialisation to construct the reverse pass, we demonstrate that we can apply our technique to task-parallel source programs and generate a task-parallel derivative computation.
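The basic idea, in its naive form, can be sketched in a few lines (our Python sketch of the unoptimised technique; the paper's contribution is precisely the optimisation of this idea to the right complexity in Haskell): every scalar carries a backpropagator mapping an incoming cotangent to its contributions to the gradient.

```python
# Naive dual-numbers reverse-mode AD via backpropagators (illustrative sketch;
# this unoptimised form duplicates work, which the paper's technique avoids).

def merge(g1, g2):
    """Pointwise sum of two sparse gradient maps."""
    out = dict(g1)
    for name, v in g2.items():
        out[name] = out.get(name, 0.0) + v
    return out

class Dual:
    def __init__(self, primal, bp):
        self.primal = primal   # the scalar value
        self.bp = bp           # cotangent -> {input name: gradient contribution}

def var(name, x):
    return Dual(x, lambda d: {name: d})

def add(a, b):
    return Dual(a.primal + b.primal, lambda d: merge(a.bp(d), b.bp(d)))

def mul(a, b):
    return Dual(a.primal * b.primal,
                lambda d: merge(a.bp(d * b.primal), b.bp(d * a.primal)))

# f(x, y) = x * y + x has gradient (y + 1, x) = (5, 3) at (x, y) = (3, 4).
x, y = var("x", 3.0), var("y", 4.0)
out = add(mul(x, y), x)
print(out.primal, out.bp(1.0))   # 15.0 {'x': 5.0, 'y': 3.0}
```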
One common approach to solving multi-objective reinforcement learning (MORL) problems is to extend conventional Q-learning by using vector Q-values in combination with a utility function. However, issues can arise with this approach in the context of stochastic environments, particularly when optimising for the scalarised expected reward (SER) criterion. This paper extends prior research, providing a detailed examination of the factors influencing the frequency with which value-based MORL Q-learning algorithms learn the SER-optimal policy for an environment with stochastic state transitions. We empirically examine several variations of the core multi-objective Q-learning algorithm as well as reward engineering approaches, and demonstrate the limitations of these methods. In particular, we highlight the critical impact that noisy Q-value estimates have on the stability and convergence of these algorithms.
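The core algorithm being varied can be sketched generically (our Python sketch under simplifying assumptions, with a linear utility chosen purely for illustration; it is not any of the specific variants evaluated in the paper): Q-values are vectors, and actions are chosen greedily with respect to the utility of those vectors.

```python
# Generic sketch of multi-objective Q-learning with vector Q-values and a
# utility function for action selection (illustrative; not the paper's variants).
import numpy as np

n_states, n_actions, n_objectives = 5, 2, 2
Q = np.zeros((n_states, n_actions, n_objectives))    # one Q-vector per (state, action)
rng = np.random.default_rng(0)

def utility(q_vec):
    # Illustrative linear scalarisation; nonlinear utilities are where the
    # SER-related difficulties discussed above arise.
    return 0.7 * q_vec[0] + 0.3 * q_vec[1]

def greedy_action(state):
    return int(np.argmax([utility(Q[state, a]) for a in range(n_actions)]))

def select_action(state, epsilon=0.1):
    return int(rng.integers(n_actions)) if rng.random() < epsilon else greedy_action(state)

def update(state, action, reward_vec, next_state, alpha=0.1, gamma=0.95):
    # Component-wise TD update of the vector Q-value; the bootstrap action is
    # the one that is greedy with respect to the utility function.
    target = np.asarray(reward_vec) + gamma * Q[next_state, greedy_action(next_state)]
    Q[state, action] += alpha * (target - Q[state, action])
```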
Poor socket fit is the leading cause of prosthetic limb discomfort. However, clinicians currently have limited objective data with which to support and improve socket design. Finite element analysis predictions might help improve the fit, but this requires models of the internal and external anatomy. While external 3D surface scans are often collected in routine clinical computer-aided design practice, detailed internal anatomy imaging (e.g., MRI or CT) is not. We present a prototype statistical shape model (SSM) describing the transtibial amputated residual limb, generated using a sparse dataset of 33 MRI and CT scans. To describe the maximal shape variance, training scans are size-normalized to their estimated intact tibia length. A mean limb is calculated, and principal component analysis is used to extract the principal modes of shape variation. In an illustrative use case, the model is interrogated to predict internal bone shapes given a skin surface shape. The model attributes ~52% of shape variance to amputation height and ~17% to the slender-bulbous soft tissue profile. In cross-validation, left-out shapes influenced the mean by 0.14–0.88 mm root mean square error (RMSE) surface deviation (median 0.42 mm), and left-out shapes were recreated with 1.82–5.75 mm RMSE (median 3.40 mm). Linear regression between mode scores from skin-only and full-model SSMs allowed prediction of bone shapes from the skin with 3.56–10.9 mm RMSE (median 6.66 mm). The model demonstrates the feasibility of predicting bone shapes from surface scans, which addresses a key barrier to implementing simulation within clinical practice and enables more representative prosthetic biomechanics research.
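The generic SSM pipeline described above (size-normalised training shapes, a mean shape, and principal modes of variation) can be sketched compactly; the NumPy code below uses randomly generated stand-in data, is only illustrative rather than the authors' trained residual-limb model, and omits the skin-to-bone regression step.

```python
# Generic statistical shape model pipeline (illustrative NumPy sketch with
# random stand-in data; not the trained residual-limb SSM described above).
import numpy as np

rng = np.random.default_rng(0)
# One row per size-normalised training shape; columns are the concatenated
# (x, y, z) coordinates of corresponding surface points.
training_shapes = rng.normal(size=(33, 3000))        # 33 scans, 1000 points each

mean_shape = training_shapes.mean(axis=0)
centered = training_shapes - mean_shape

# Principal modes of shape variation via singular value decomposition (PCA).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
modes = Vt                                            # rows are modes of variation
variance_explained = S**2 / np.sum(S**2)

# Mode scores of a new shape, and its reconstruction from the first m modes.
new_shape = rng.normal(size=3000)
m = 5
scores = (new_shape - mean_shape) @ modes[:m].T
reconstruction = mean_shape + scores @ modes[:m]
print(variance_explained[:m], scores.shape, reconstruction.shape)
```

In the paper's use case, a second, skin-only SSM is built and linear regression maps its mode scores to those of the full model, from which the internal bone shapes are reconstructed.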
We present a novel approach to synthesizing recursive functional programs from input–output examples. Synthesizing a recursive function is challenging because recursive subexpressions must be constructed while the target function has not yet been fully defined. We address this challenge with a new technique we call block-based pruning. A block refers to a recursion- and conditional-free expression (i.e., straight-line code) that yields an output from a particular input. We first synthesize as many blocks as possible for each input–output example, and then we explore the space of recursive programs, pruning candidates that are inconsistent with the blocks. Our method is based on an efficient version space learning technique, thereby effectively dealing with a possibly enormous number of blocks. In addition, we present a method that uses sampled input–output behaviors of library functions to enable a goal-directed search for a recursive program using the library. We have implemented our approach in a system called Trio and evaluated it on synthesis tasks from prior work and on new tasks. Our experiments show that Trio significantly outperforms prior work.
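The notion of a block can be made concrete with a small sketch (our Python illustration with a toy primitive set; the version-space encoding and the pruning of recursive candidates used by Trio are not reproduced here): for each input–output example, enumerate short straight-line compositions of primitives that map the input to the output.

```python
# Toy illustration of block enumeration: straight-line (recursion- and
# conditional-free) compositions of primitives mapping an input to its output.
# The primitive set is made up; Trio's version-space machinery is omitted.
from itertools import product

PRIMITIVES = {
    "tail": lambda xs: xs[1:],
    "sort": lambda xs: sorted(xs),
    "reverse": lambda xs: list(reversed(xs)),
}

def enumerate_blocks(inp, out, max_len=3):
    """Return all primitive sequences of length <= max_len that map inp to out."""
    blocks = []
    for length in range(1, max_len + 1):
        for seq in product(PRIMITIVES, repeat=length):
            value = inp
            for name in seq:
                value = PRIMITIVES[name](value)
            if value == out:
                blocks.append(seq)
    return blocks

print(enumerate_blocks([3, 1, 2], [3, 2, 1]))   # includes ('sort', 'reverse')
```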
We give a simple diagrammatic proof of the Frobenius property for generic fibrations that does not depend on any additional structure on the interval object such as connections.