Dempster (1958, 1960) proposed a non-exact test for the two-sample significance problem that applies when the dimension of the data exceeds the degrees of freedom. He thereby raised the question of what statisticians should do when the data dimension is so large that traditional multivariate statistical theory no longer applies. Later, Bai and Saranadasa (1996) found that even when the traditional approaches remain applicable, they are much less powerful than the non-exact test once the data dimension is large. This raised a second question: how can classical multivariate statistical procedures be adapted and improved when the data dimension is large? These problems have attracted considerable attention since the mid-2000s. Efforts towards solving them have proceeded in two directions. The first is to devise special statistical procedures for specific large-dimensional hypotheses where traditional multivariate procedures are inapplicable or perform poorly; the family of non-exact tests follows this approach. The second, following the work of Bai et al. (2009a), is to correct the classical multivariate procedures systematically so that the effect of large dimension is overcome. This goal is achieved by employing new and powerful asymptotic tools borrowed from the theory of random matrices, such as the central limit theorems of Bai and Silverstein (2004) and Zheng (2012).
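To make the contrast concrete, here is a minimal Python sketch of a Bai–Saranadasa-type two-sample statistic. It is an illustration only: the function name and interface are hypothetical, and the normalization constants follow the form commonly cited for Bai and Saranadasa (1996) rather than anything stated in this text.

```python
import numpy as np

def bs_two_sample_stat(X, Y):
    """Sketch of a Bai-Saranadasa-type two-sample statistic.

    X: (n1, p) array, Y: (n2, p) array, assumed to share a common
    covariance matrix.  Usable even when p exceeds n1 + n2 - 2, where
    Hotelling's T^2 is undefined.  Under H0 (equal means) the returned
    value is asymptotically standard normal.
    """
    n1, _ = X.shape
    n2, _ = Y.shape
    n = n1 + n2 - 2
    tau = 1.0 / n1 + 1.0 / n2

    diff = X.mean(axis=0) - Y.mean(axis=0)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    S = (Xc.T @ Xc + Yc.T @ Yc) / n            # pooled sample covariance

    tr_S = np.trace(S)
    tr_S2 = np.trace(S @ S)
    # Ratio-consistent estimator of tr(Sigma^2), correcting the bias of tr(S^2).
    B2 = n**2 / ((n + 2) * (n - 1)) * (tr_S2 - tr_S**2 / n)

    M = diff @ diff - tau * tr_S               # centred squared mean distance
    return M / np.sqrt(2.0 * (n + 1) / n * tau**2 * B2)
```

The key point of the construction is that it replaces the inverse of the sample covariance matrix, which degenerates when the dimension is large, with trace functionals that remain well behaved.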
Recently, research along these two directions has become very active in response to an increasingly important need for analysis of massive and large-dimensional data. Indeed, such “big data” are nowadays routinely collected owing to rapid advances in computer-based or web-based commerce and data-collection technology.
To accommodate this need, this monograph collects existing results along the aforementioned second direction of large-dimensional data analysis. In Chapters 2 and 3, the core fundamental results from random matrix theory on sample covariance matrices and random Fisher matrices are presented in detail. Chapters 4–12 collect large-dimensional statistical problems in which the classical large-sample methods fail and in which new asymptotic methods, built on the fundamental results of the preceding chapters, provide a valuable remedy.
The concept of personalization in its many forms has gained traction, driven by the demands of computer-mediated interactions that are generally implemented in large-scale distributed systems and ad hoc wireless networks. Personalization requires the identification and selection of entities based on a defined profile (a context), where an entity may be a person, a place, or a physical or computational object. A context is built from contextual information that, taken together, describes an entity's current state. Historically, the range of contextual information utilized in context-aware systems has been limited to identity, location, and proximate data; there have, however, been advances in the range of data and information addressed. Context can therefore be highly dynamic and inherently complex. In addition, context-aware systems must accommodate constraint satisfaction and preference compliance.
This article addresses personalization and context, considering the domains and systems to which context has been applied and the nature of the contextual data. Developments in computing and service provision are addressed with consideration of the relationship between the evolving computing landscape and context. Rule strategies and conditional relationships in decision support are discussed. Logic systems are addressed with an overview of the open-world versus the closed-world assumption and their relationship with the Semantic Web. The event-driven rule-based approach, which forms the basis upon which intelligent context processing can be realized, is presented with an evaluation and a proof of concept. The issues and challenges identified in the research are considered together with potential solutions and research directions, and alternative approaches to context processing are discussed. The article closes with conclusions and open research questions.
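As a rough illustration of the event-driven rule-based idea, the Python sketch below treats each context update as an event matched against a rule base, with rule conditions standing in for constraint satisfaction and rule actions for preference compliance. The class names, rule, and scenario are hypothetical and are not drawn from the article.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    """The current state of an entity as attribute-value pairs."""
    entity: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Rule:
    """An event-condition-action rule: run `action` when `condition` holds."""
    name: str
    condition: Callable[[Context], bool]
    action: Callable[[Context], None]

class ContextProcessor:
    """Event-driven dispatcher: each context update is an event that is
    matched against every rule in the rule base."""

    def __init__(self) -> None:
        self.rules: list[Rule] = []

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)

    def on_event(self, ctx: Context, attribute: str, value) -> None:
        ctx.attributes[attribute] = value      # apply the context update
        for rule in self.rules:
            if rule.condition(ctx):            # constraint satisfied?
                rule.action(ctx)               # apply the preference

# Example: a location-triggered preference rule.
processor = ContextProcessor()
processor.add_rule(Rule(
    name="switch-to-work-profile",
    condition=lambda c: c.attributes.get("location") == "office",
    action=lambda c: print(f"{c.entity}: applying 'work' preferences"),
))
processor.on_event(Context(entity="alice"), "location", "office")
```

Because rules fire only in response to context events, such a design copes naturally with the highly dynamic contexts described above: the rule base can be extended at runtime without altering the dispatch loop.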
Multi-agent systems have been widely applied, including to the monitoring of distributed systems. However, one unresolved issue in this technology is the reassignment of the responsibilities of monitoring agents when some of them become unable to meet their obligations. This paper proposes a new approach to this problem based on (a) gathering evidence on whether an agent can fulfil the tasks it has been assigned and (b) reassigning tasks to alternative agents using their trust level as the selection parameter. A weather station case study is presented as an instantiation of the proposed model.
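A minimal Python sketch of the reassignment idea follows, assuming trust is maintained as a running success rate over gathered evidence. The names, the trust-update rule, and the weather-station task are illustrative placeholders, not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    trust: float                                  # trust level in [0, 1]
    evidence: list = field(default_factory=list)  # observed task outcomes

    def record_outcome(self, success: bool) -> None:
        """Gather evidence and refresh trust as the running success rate."""
        self.evidence.append(success)
        self.trust = sum(self.evidence) / len(self.evidence)

def reassign(task: str, failing: Agent, agents: list[Agent]) -> Agent:
    """Move `task` away from a failing agent to the most trusted alternative."""
    failing.record_outcome(False)                 # evidence of non-fulfilment
    candidates = [a for a in agents if a is not failing]
    best = max(candidates, key=lambda a: a.trust)
    print(f"task '{task}' reassigned from {failing.name} to {best.name}")
    return best

# Example in a weather-station setting: a sensor-reading task.
agents = [Agent("a1", 0.9), Agent("a2", 0.6), Agent("a3", 0.8)]
reassign("read-temperature", agents[1], agents)
```

The design choice worth noting is that evidence gathering and selection are decoupled: any trust model that maps evidence to a comparable score could replace the running success rate without changing the reassignment step.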