We give an enumeration of all positive definite primitive $\mathbb{Z}$-lattices in dimension $n\geq 3$ whose genus consists of a single isometry class. This is achieved by using bounds obtained from the Smith–Minkowski–Siegel mass formula to computationally construct the square-free determinant lattices with this property, and then repeatedly calculating pre-images under a mapping first introduced by G. L. Watson.
We hereby complete the classification of single-class genera in dimensions 4 and 5 and correct some mistakes in Watson’s classifications in other dimensions. A list of all single-class primitive $\mathbb{Z}$-lattices has been compiled and incorporated into the Catalogue of Lattices.
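For orientation, the mass of a genus weights each isometry class in it by the reciprocal of the order of its automorphism group; a single-class genus therefore has mass equal to a single such term. The following is the standard definition, supplied for context rather than quoted from the paper:
$$\operatorname{mass}(\operatorname{gen}(L)) = \sum_{[M]\in \operatorname{gen}(L)} \frac{1}{|\operatorname{Aut}(M)|},$$
so a single-class genus satisfies $\operatorname{mass}(\operatorname{gen}(L)) = 1/|\operatorname{Aut}(L)|$. Since the Smith–Minkowski–Siegel formula evaluates this mass from purely local data, it yields the computational bounds mentioned above.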
For a given cusp form $\phi$ of even integral weight satisfying certain hypotheses, Waldspurger’s theorem relates the critical value of the $L$-function of the $n$th quadratic twist of $\phi$ to the $n$th coefficient of a certain modular form of half-integral weight. Waldspurger’s recipes for these modular forms of half-integral weight are far from being explicit. In particular, they are expressed in the language of automorphic representations and Hecke characters. We translate these recipes into congruence conditions involving easily computable values of Dirichlet characters. We illustrate the practicality of our ‘simplified Waldspurger’ by giving several examples.
We consider the 33 conjugacy classes of genus zero, torsion-free modular subgroups, computing ramification data and Grothendieck’s dessins d’enfants. In the particular case of the index 36 subgroups, the corresponding Calabi–Yau threefolds are identified, in analogy with the index 24 cases being associated to K3 surfaces. In a parallel vein, we study the 112 semi-stable elliptic fibrations over $\mathbb{P}^{1}$ as extremal K3 surfaces with six singular fibres. In each case, a representative of the corresponding class of subgroups is identified by specifying a generating set for that representative.
Self-identification with peer crowds (jocks, popular kids, druggies, etc.) has an important influence on adolescent substance use behavior. However, little is known about the impact of the shared nature of crowd identification on different stages of adolescent drinking behavior, or about the way crowd identification interacts with participation in school-sponsored sports activities. This study examines drinking influences from (1) peers with shared crowd identities, and (2) peers who jointly participate in organized sports at their school (activity members). This study introduces a new network analytic approach that can disentangle the effects of crowd identification and sports participation on individual behavior. Using survey data from adolescents in five high schools in a predominantly Hispanic/Latino district (N = 1,707), this paper examines the association between social influences and each stage of drinking behavior (intention to drink, lifetime, past-month, and binge drinking) by conducting an ordinal regression analysis. The results show that both shared identities and joint participation were associated with all stages of drinking, controlling for friends' influence. Additionally, shared crowd identification that overlapped with joint sports participation was associated with more frequent drinking. Related policy implications are discussed.
Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of another. Accordingly, many NLP applications would benefit from high-coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for more scalable rule extraction across various domains. However, the scalability of state-of-the-art approaches to entailment rule acquisition from the Web is still limited. We present a fully unsupervised learning algorithm for Web-based extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of building a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results: we achieve performance similar to that of a state-of-the-art unsupervised algorithm operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.
This paper presents a variable neighborhood search (VNS) algorithm that is specially designed for the blockmodeling of two-mode binary network matrices in accordance with structural equivalence. Computational results for 768 synthetic test networks revealed that the VNS heuristic outperformed a relocation heuristic (RH) and a tabu search (TS) method for the same problem. Next, the three heuristics were applied to two-mode network data pertaining to the votes of member countries on resolutions in the United Nations General Assembly. A comparative analysis revealed that the VNS heuristic often provided slightly better criterion function values than RH and TS, and that these small differences in criterion function values could sometimes be associated with substantial differences in the actual partitions obtained. Overall, the results suggest that the VNS heuristic is a promising approach for blockmodeling of two-mode binary networks. Recommendations for extensions to stochastic blockmodeling applications are provided.
Biologists depend on visual representations, and their use of diagrams has drawn the attention of philosophers, historians, and sociologists interested in understanding how these images are involved in biological reasoning. These studies, however, proceed from identification of diagrams on the basis of their spare visual appearance, and do not draw on a foundational theory of the nature of diagrams as representations. This approach has limited the extent to which we understand how these diagrams are involved in biological reasoning. In this paper, I characterize three different kinds of figures among those previously identified as diagrams. The features that make these figures distinctive as representational types, furthermore, illuminate the ways in which they are involved in biological reasoning.
The concept of manipulative abduction is intended to capture the role of action in many interesting cognitive situations: action provides otherwise unavailable information that enables the agent to solve problems by starting and performing a suitable abductive process of generation or selection of hypotheses. We observe that many external things, usually inert from an epistemological point of view, can be transformed into epistemic mediators. I will present some details derived from the history of the discovery of the non-Euclidean geometries that illustrate the relationships between strategies for anomaly resolution and visual thinking. Geometrical diagrams are external representations that play both a mirror role (externalizing rough mental models) and an unveiling role (serving as gateways to imaginary entities). I describe them as epistemic mediators able to perform various explanatory, non-explanatory, and instrumental abductive tasks (discovering new properties or new propositions/hypotheses, providing suitable sequences of models able to convincingly verify theorems, etc.). I am also convinced that they can be exploited and studied in everyday non-mathematical applications, with the aim of promoting new trends in the artificial intelligence modeling of various aspects of hypothetical reasoning: finding routes, reading road signs, and interpreting building maps, for example, in connection with the various zooming effects of spatial reasoning. I also think that the cognitive activities involving optical, mirror, and unveiling diagrams can be studied in other areas of manipulative and model-based reasoning, such as those involving creative, analogical, and spatial inferences, in both science and everyday situations, so as to extend the related epistemological, computational, and psychological theory.
We use data on a real, large-scale social network of 27 million individuals interacting daily, together with the day-by-day adoption of a new mobile service product, to inform, build, and analyze data-driven simulations of the effectiveness of seeding (network targeting) strategies under different social conditions. Three main results emerge from our simulations. First, failure to consider homophily creates significant overestimation of the effectiveness of seeding strategies, casting doubt on conclusions drawn by simulation studies that do not model homophily. Second, seeding is constrained by the small fraction of potential influencers that exist in the network. We find that seeding more than 0.2% of the population is wasteful because the gain from their adoption is lower than the gain from their natural adoption (without seeding). Third, seeding is more effective in the presence of greater social influence. Stronger peer influence creates a greater than additive effect when combined with seeding. Our findings call into question some conventional wisdom about these strategies and suggest that their overall effectiveness may be overestimated.
We've talked about how difficult it can be to find or construct an optimal solution to real-world management problems, in which we're faced with nonlinear relationships and constraints that make it difficult to predict how specific decisions work together to impact performance. But in a certain way we've continued to simplify these real-world problems. There may be some shortcomings in the approaches we've taken in finding solutions, but what about the approaches we use to come up with the problems that we're trying to solve?
When we create a mathematical form to represent reality so that we can ultimately use analytics to provide an applicable real-world solution, are we missing something? And how much does this impact the real-world applicability and effectiveness of the solution we develop? These are critical questions for managers who want additional support in their decision making. Project managers don't want suggestions that come out of inappropriate assumptions.
What steps can we take to help ensure that we are, in fact, providing appropriate characterizations of reality when we structure problems and make sense of solutions? Although there are a lot of good places to start, one obvious place is an attempt to take into account the uncertainty associated with just about everything that takes place in the real world.
There is a clear truism in George Box's 1979 statement that “all models are wrong, but some are useful.” We attempt to model reality to see how changes can affect it – hopefully, for the better. But models of reality are, by their very nature, incomplete depictions, and such incompleteness can mislead. Still worse are models and associated solutions that attempt to do justice to reality by faithfully incorporating as many of its facets as possible into their structures. Unfortunately, a common result is an overemphasis on certain issues in decision making that, although interesting, are far less relevant to effective decisions than other issues that have been overlooked.
Ultimately, any approach to decision making is a balancing act between an appropriate accounting of relevant reality (that is, the objectives, decision variables, and constraints discussed in Chapter 4) and avoiding details that only obscure or mislead. When we attempt to rationalize all of the factors that might go into a decision-making process, as well as possible solutions that might be practically viable, we often “satisfice,” a term used to describe making a decision about a set of alternatives that respects the limitations of human time and knowledge.
As an extension to the discussion in Chapter 6, it's relevant at this point to reconsider how a feature such as Solver can come up with a solution. Although it's not necessarily critical for developers to understand the detailed technicalities of these packaged programs, any developer worth his or her salt should understand at least the limitations of these algorithms.
How Solver “Solves”
Many people use Solver with the expectation that it can find the optimal solution for any kind of problem (of reasonable size). But even small problems can have nuances that make the job extremely difficult for the standard Solver add-in, and the resulting solutions are prone to poor performance (substantially less-than-optimal managerial recommendations). Engines like Solver commonly use hill-climbing algorithms to search for optimal solutions. In reality, this is just another heuristic (see Chapter 5). It starts with a guess of what the solution might be and then sees whether small changes to any of the decision variables can produce a better value for the objective function while still respecting the constraints.
Hill-climbing algorithms typically consider only one candidate solution at a time. For example, consider the following hypothetical performance surface (where performance along the z-axis is some function of the two decision variables x and y). In Figure 7.1, a shaded dot represents a possible solution, one that, at this point, appears to be less than ideal. From a local perspective, it certainly doesn’t represent the apparent peak value of z attainable (shown by an ellipse).
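To make the mechanics concrete, here is a minimal sketch of a hill-climbing heuristic in Python. This is not Solver's actual implementation; the performance surface, step size, and bounds below are illustrative assumptions:

```python
import random

def objective(x, y):
    # Illustrative performance surface z = f(x, y); in Solver, the
    # objective would come from the spreadsheet model instead.
    return -(x - 3) ** 2 - (y + 1) ** 2 + 10

def feasible(x, y):
    # Stand-in for the problem's constraints.
    return -10 <= x <= 10 and -10 <= y <= 10

def hill_climb(x, y, step=0.1, max_iters=10_000):
    best = objective(x, y)
    for _ in range(max_iters):
        improved = False
        # Try small changes to each decision variable in turn.
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            nx, ny = x + dx, y + dy
            if feasible(nx, ny) and objective(nx, ny) > best:
                x, y, best = nx, ny, objective(nx, ny)
                improved = True
        if not improved:
            break  # No neighboring move helps: a local peak.
    return x, y, best

# Start from a random guess and walk uphill until stuck.
print(hill_climb(x=random.uniform(-10, 10), y=random.uniform(-10, 10)))
```

Because the search stops at the first point where no neighboring move improves the objective, the answer it returns depends entirely on the starting guess – exactly the weakness that a multi-peaked surface like the one in Figure 7.1 exposes.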
The world changes all the time. Technological resources advance at such a clip it seems hard to keep up. But not all technological developments really need shadowing. What we really need to keep up with are developments that help us do the great things we've been doing all along…only more effectively.
Such is the case with spreadsheet applications. New interfaces and bells and whistles appear in each new version. Some are wonderful; however, not all are particularly useful. And sadly, some of the best features often get left behind (I always point, as an example, to the macro-writing and graphical direct-editing capabilities we lost when MS Excel 2007 replaced Excel 2003). But that's history. We need to keep looking forward, and where new capabilities replace older ones, we are not necessarily left in the lurch. After all, MS Excel and VBA provide the remarkable ability to be expanded and customized to our particular needs, even in ways that can replicate some of what we loved about previous versions, and even in ways we might expect future versions to automate (but which are not yet in place).
With that in mind, the second edition of Excel Basics to Blackbelt capitalizes on many of the recent developments made available by more contemporary versions of Excel (while retaining a nod to the features of the past). Furthermore, this edition looks forward to capabilities that are not currently standard through the introduction of a series of newly developed Excel add-ins. Applications and descriptions of these are peppered throughout the second-edition text, with the aim of augmenting the text and filling in capability gaps that were present in the original edition.