Critical cascades are found in many self-organizing systems. Here, we examine critical cascades as a design paradigm for logic and learning under the linear threshold model (LTM), treating simple biologically inspired variants of it as sources of computational power, learning efficiency, and robustness. First, we show that the LTM can compute logic, and, with a small modification, universal Boolean logic, and we examine its stability and cascade frequency. We then frame it formally as a binary classifier and remark on implications for accuracy. Second, we examine the LTM as a statistical learning model, studying the benefits of spatial constraints and criticality for efficiency. We also discuss implications for robustness in information encoding. Our experiments show that spatial constraints can greatly increase efficiency. Theoretical investigation and initial experimental results also indicate that criticality can result in a sudden increase in accuracy.
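To make the logic-computation claim concrete, the following is a minimal sketch of how a cascade under a threshold model can implement AND and OR gates. The count-based thresholds, the three-node wiring, and all function names are our own illustrative assumptions, not the authors' construction (the abstract notes a small modification of the LTM is needed for universal Boolean logic):

```python
import numpy as np

def ltm_cascade(adj, thresholds, active):
    """Run a threshold-model cascade to its fixed point.

    adj[u, v] = 1 means u feeds into v. A node activates (and stays
    active) once its count of active in-neighbours reaches its
    threshold -- a count-based simplification; the LTM is often stated
    with weighted fractions instead.
    """
    active = active.copy()
    while True:
        counts = adj.T @ active              # active in-neighbour counts
        new = active | (counts >= thresholds)
        if np.array_equal(new, active):
            return active
        active = new

# Hypothetical 3-node gate: nodes 0 and 1 are inputs, node 2 the output.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 0, 0]])
AND = np.array([np.inf, np.inf, 2.0])  # output fires only if both inputs fire
OR  = np.array([np.inf, np.inf, 1.0])  # output fires if either input fires
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    seed = np.array([a, b, 0], dtype=bool)
    print((a, b),
          "AND:", int(ltm_cascade(adj, AND, seed)[2]),
          "OR:",  int(ltm_cascade(adj, OR,  seed)[2]))
```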
In a disruptive media landscape characterized by the relentless death of legacy newspapers, Nigeria's Digital Diaspora shows that a country's transnational elite can shake its media ecosystem through distant online citizen journalism.
Erdős asked whether, for every pair of positive integers r and k, there exists a graph H having girth(H) = k and the property that every r-colouring of the edges of H yields a monochromatic cycle $C_k$. The existence of such graphs H was confirmed by the third author and Ruciński.
We consider the related numerical problem of estimating the order of the smallest graph H with this property for given integers r and k. We show that there exists a graph H on $R^{10k^2} k^{15k^3}$ vertices (where $R = R(C_k; r)$ is the r-colour Ramsey number for the cycle $C_k$) having girth(H) = k and the Ramsey property that every r-colouring of the edges of H yields a monochromatic $C_k$. Two related numerical problems regarding arithmetic progressions in subsets of the integers and cliques in graphs are also considered.
The integration of Artificial Neural Networks (ANNs) and Feature Extraction (FE) in the context of the Sample-Partitioning Adaptive Reduced Chemistry approach was investigated in this work, to increase the on-the-fly classification accuracy for very large thermochemical states. The proposed methodology was first compared with an on-the-fly classifier based on the Principal Component Analysis reconstruction error, as well as with a standard ANN (s-ANN) classifier operating on the full thermochemical space, for the adaptive simulation of a steady laminar flame fed with a nitrogen-diluted stream of n-heptane in air. The numerical simulations were carried out with a kinetic mechanism accounting for 172 species and 6,067 reactions, which includes the chemistry of Polycyclic Aromatic Hydrocarbons (PAHs) up to C$_{20}$. Among all the aforementioned classifiers, the one combining an FE step with an ANN proved to be the most efficient for the classification of high-dimensional spaces, leading to a higher speed-up factor and a higher accuracy of the adaptive simulation in the description of the PAH and soot-precursor chemistry. Finally, the investigation of the classifier's performance was extended to flames with boundary conditions different from the training one, obtained by imposing a higher Reynolds number or time-dependent sinusoidal perturbations. Satisfactory results were observed on all the test flames.
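For readers unfamiliar with classification by PCA reconstruction error, the baseline mentioned above can be sketched roughly as follows. This is a generic illustration on synthetic data; the cluster definitions, dimensions, and function names are hypothetical and not taken from the implementation the paper describes:

```python
import numpy as np

def pca_basis(X, q):
    """Mean and top-q principal directions of the rows of X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:q]

def classify_by_reconstruction(x, bases):
    """Assign x to the cluster whose local PCA basis reconstructs it
    with the smallest error."""
    errors = []
    for mean, V in bases:
        xc = x - mean
        residual = xc - V.T @ (V @ xc)   # part of x the basis cannot explain
        errors.append(np.linalg.norm(residual))
    return int(np.argmin(errors))

# Toy usage: two synthetic 5-dimensional "state" clusters.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(100, 5)) * [3.0, 1.0, 0.1, 0.1, 0.1]
X1 = rng.normal(size=(100, 5)) * [0.1, 0.1, 3.0, 1.0, 0.1] + 5.0
bases = [pca_basis(X0, 2), pca_basis(X1, 2)]
print(classify_by_reconstruction(X1[0], bases))   # expected: 1
```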
We present a polynomial-time Markov chain Monte Carlo algorithm for estimating the partition function of the antiferromagnetic Ising model on any line graph. The analysis of the algorithm exploits the ‘winding’ technology devised by McQuillan [CoRR abs/1301.2880 (2013)] and developed by Huang, Lu and Zhang [Proc. 27th Symp. on Disc. Algorithms (SODA16), 514–527]. We show that exact computation of the partition function is #P-hard, even for line graphs, indicating that an approximation algorithm is the best that can be expected. We also show that Glauber dynamics for the Ising model is rapidly mixing on line graphs, an example being the kagome lattice.
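As a point of reference, single-site Glauber dynamics for the Ising model can be sketched as below. This illustrates the Markov chain whose mixing time the paper analyses, not the paper's partition-function estimator; the sign convention (beta < 0 for the antiferromagnet) and the toy graph are our assumptions:

```python
import math, random

def glauber_step(spins, adj, beta):
    """One update of single-site Glauber dynamics for the Ising model
    with Gibbs measure pi(sigma) ~ exp(beta * sum over edges of
    sigma_u * sigma_v); beta < 0 gives the antiferromagnet."""
    v = random.randrange(len(spins))
    s = sum(spins[u] for u in adj[v])                 # neighbour spin sum
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))  # P(sigma_v = +1 | rest)
    spins[v] = 1 if random.random() < p_plus else -1

# Toy run on a 4-cycle (whose line graph is again a 4-cycle).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
spins = [random.choice([-1, 1]) for _ in adj]
for _ in range(10_000):
    glauber_step(spins, adj, beta=-0.5)
print(spins)   # a sample approximately distributed as pi
```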
Neuroscience has begun to intrude deeply into what it means to be human, an intrusion that offers profound benefits but will demolish our present understanding of privacy. In Privacy in the Age of Neuroscience, David Grant argues that we need to reconceptualize privacy in a manner that will allow us to reap the rewards of neuroscience while still protecting our privacy and, ultimately, our humanity. Grant delves into our relationship with technology, the latest in what he describes as a historical series of 'magnitudes', following Deity, the State and the Market, proposing that we must control this new magnitude (Technology) rather than be subjected to it. In this provocative work, Grant unveils a radical account of privacy and an equally radical proposal to create the social infrastructure we need to support it.
Malicious hackers utilize the World Wide Web to share knowledge. Analyzing the online communication of these threat actors can help reduce the risk of attacks. This book shifts attention from the defender environment to the attacker environment, offering a new security paradigm of 'proactive cyber threat intelligence' that allows defenders of computer networks to gain a better understanding of their adversaries by analyzing the assets, capabilities, and interests of malicious hackers. The authors propose models, techniques, and frameworks based on threat intelligence mined from the heart of the underground cyber world: the malicious hacker communities. They provide insights into the hackers themselves and the groups they form dynamically in the act of exchanging ideas and techniques and buying or selling malware and exploits. The book covers both methodology - a hybridization of machine learning, artificial intelligence, and social network analysis methods - and the resulting conclusions, detailing how a deep understanding of malicious hacker communities can be the key to designing better attack prediction systems.
This chapter provides an introduction to uncertainty relations underlying sparse signal recovery. We start with the seminal work by Donoho and Stark (1989), which defines uncertainty relations as upper bounds on the operator norm of the band-limitation operator followed by the time-limitation operator, generalize this theory to arbitrary pairs of operators, and then develop, out of this generalization, the coherence-based uncertainty relations due to Elad and Bruckstein (2002), plus uncertainty relations in terms of concentration of the 1-norm or 2-norm. The theory is completed with set-theoretic uncertainty relations which lead to best possible recovery thresholds in terms of a general measure of parsimony, the Minkowski dimension. We also elaborate on the remarkable connection between uncertainty relations and the “large sieve,” a family of inequalities developed in analytic number theory. We show how uncertainty relations allow one to establish fundamental limits of practical signal recovery problems such as inpainting, declipping, super-resolution, and denoising of signals corrupted by impulse noise or narrowband interference.
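For concreteness, the coherence-based uncertainty relation of Elad and Bruckstein (2002) referenced above takes the following standard form (notation ours, not the chapter's):

```latex
% Elad--Bruckstein coherence-based uncertainty relation (standard form;
% notation ours). Let A and B be orthonormal bases of C^n with coherence
% mu = max_{i,j} |<a_i, b_j>|, and let a nonzero x have the two
% representations x = Au = Bv. Then
\[
  \|u\|_0 \, \|v\|_0 \;\ge\; \frac{1}{\mu^2},
  \qquad\text{and hence, by the AM--GM inequality,}\qquad
  \|u\|_0 + \|v\|_0 \;\ge\; \frac{2}{\mu}.
\]
```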
In compressed sensing (CS) a signal $x \in \mathbb{R}^n$ is measured as $y = Ax + z$, where $A \in \mathbb{R}^{m \times n}$ ($m < n$) and $z \in \mathbb{R}^m$ denote the sensing matrix and measurement noise. The goal is to recover x from the measurements y when $m < n$. CS is possible because we typically want to capture highly structured signals, and recovery algorithms take advantage of a signal's structure to solve the under-determined system of linear equations. As in CS, data-compression codes take advantage of a signal's structure to encode it efficiently. Structures used by compression codes are much more elaborate than those used by CS algorithms. Using more complex structures in CS, like those employed by data-compression codes, potentially leads to more efficient recovery methods requiring fewer linear measurements or giving better reconstruction quality. We establish connections between data compression and CS, giving CS recovery methods based on compression codes, which indirectly take advantage of all structures used by compression codes. This elevates the class of structures used by CS algorithms to those used by compression codes, leading to more efficient CS recovery methods.
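A minimal sketch of the compression-based recovery idea follows: pick the codeword of the compression code that is most consistent with the measurements. The brute-force codebook search and the toy one-sparse code are our own illustration of the principle, not the paper's algorithm; practical schemes in this line of work avoid enumerating the codebook:

```python
import numpy as np

def compression_based_recovery(y, A, codebook):
    """Pick the codeword of a compression code most consistent with the
    measurements y = A x + z. Brute force over the codebook, purely for
    illustration; practical schemes avoid this enumeration."""
    residuals = [np.linalg.norm(y - A @ c) for c in codebook]
    return codebook[int(np.argmin(residuals))]

# Toy usage: n = 6, m = 3, and a hypothetical code whose codewords are
# the 1-sparse canonical vectors.
rng = np.random.default_rng(1)
n, m = 6, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
codebook = [np.eye(n)[i] for i in range(n)]
x = codebook[4]
y = A @ x + 0.01 * rng.normal(size=m)
x_hat = compression_based_recovery(y, A, codebook)
print(int(np.argmax(x_hat)))   # expected: 4
```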
Facilitation style appears to be an important determinant of design team effectiveness. The neutrality of the group facilitator may be a key factor; however, the characteristics and impact of neutrality are relatively understudied. In a designed classroom setting, we examine the impact on team trust, trust in the facilitator, and team potency of two different approaches to group facilitation: (i) the facilitator's neutrality expressed as low equidistance and high impartiality and (ii) the facilitator's neutrality expressed as high equidistance and low impartiality. To do this, we conducted a repeated-measures experiment with a student sample. Our results indicate that facilitators expressing neutrality through low equidistance and high impartiality had a greater positive impact on team trust. The two approaches did not differ in their effects on team potency and trust in the facilitator. These results contribute to developing theories of design facilitation and team effectiveness by suggesting how facilitation may shape team trust and potency in group design. Based on our findings, we point to the need for future work to further examine the impact of the facilitator's process awareness and neutrality, and we show how facilitation methods may benefit teams during creative design teamwork.
Social media and electronic communications dominate modern life. Workplaces have been transformed by email, teleconferencing and an array of new applications, along with our homes and social lives. Fewer people today go to a travel agent to book flights, subscribe to newspaper delivery, or even watch free-to-air television. All of this can be done more conveniently and with greater individual choice and control online, often guided by social media applications that channel information in ways not mediated or filtered as in the past. Social relationships have changed along the way, with many people now exchanging texts rather than speaking face-to-face or by phone.
Prototyping constitutes a major theme of design education and an integral part of engineering design academic courses. Physical prototypes and the model-building process, in particular, have been shown to boost students' creativity and resourcefulness and to assist in the better evaluation of concepts. However, students' usage of prototypes has not yet been explored in depth with the aim of translating the findings into educational guidelines. This paper presents an investigation of students' reasoning behind prototyping activities based on the concept of Purposeful Prototyping, developed in the authors' previous work. This is done by identifying instances of prototype use in students' design projects, discovering which types of prototyping purposes they apply and to what extent, and studying the relationships between purposes, early design stages, academic performance and project planning. The analysis of the results shows that prototyping can support students' learning objectives by acting as a project-scheduling tool and highlights the contribution of early-stage prototyping to academic performance. It is also confirmed that students' limited prototyping scope prevents them from gaining prototyping's maximum benefits and that they require strategic guidelines tailored to their needs. A new, improved list of prototyping purposes is proposed based on the study's results.
This chapter provides a survey of the common techniques for determining the sharp statistical and computational limits in high-dimensional statistical problems with planted structures, using community detection and submatrix detection problems as illustrative examples. We discuss tools including the first- and second-moment methods for analyzing the maximum-likelihood estimator, information-theoretic methods for proving impossibility results using mutual information and rate-distortion theory, and methods originating from statistical physics such as the interpolation method. To investigate computational limits, we describe a common recipe to construct a randomized polynomial-time reduction scheme that approximately maps instances of the planted clique problem to the problem of interest in total variation distance.
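As a concrete instance of the second-moment method mentioned above, arguments for the possibility side of a detection threshold often rest on the Paley-Zygmund inequality, stated here in a standard form (notation ours, not the chapter's):

```latex
% Paley--Zygmund inequality (standard form; notation ours): for a
% nonnegative random variable Z with E[Z^2] < infinity and 0 <= t < 1,
\[
  \Pr\!\left[ Z > t\, \mathbb{E}[Z] \right]
  \;\ge\; (1-t)^2 \, \frac{\left(\mathbb{E}[Z]\right)^2}{\mathbb{E}[Z^2]} .
\]
% Taking Z to be a count of candidate hidden structures, showing
% E[Z^2] = (1 + o(1)) (E[Z])^2 yields Z > 0 with high probability.
```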
We prove that if $A \subseteq [X,\,2X]$ and $B \subseteq [Y,\,2Y]$ are sets of integers such that $\gcd(a, b) \geqslant D$ for at least $\delta |A||B|$ pairs $(a, b) \in A \times B$, then $|A||B| \ll_{\varepsilon} \delta^{-2-\varepsilon} XY/D^2$. This is a new result even when $\delta = 1$. The proof uses ideas of Koukoulopoulos and Maynard and some additional combinatorial arguments.