Frequency hopping sequence sets are required in frequency hopping code division multiple access systems. For anti-jamming purposes, frequency hopping sequences are required to have a large linear span. In this paper, by using a permutation polynomial δ(x) over a finite field, we transform several optimal sets of frequency hopping sequences with small linear span into ones with large linear span. The exact values of the linear span are determined by counting the terms of the sequence representations. The results show that the transformed frequency hopping sequences remain optimal with respect to the Peng-Fan bound and can resist analysis by the Berlekamp-Massey algorithm.
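As a toy illustration of the idea (not the construction from the paper), the sketch below applies a power permutation δ(x) = x^5 over GF(7), a permutation polynomial since gcd(5, 7−1) = 1, elementwise to a hopping sequence, and measures the linear span of both sequences with the Berlekamp-Massey algorithm over GF(p); the sequence values and the choice of δ are illustrative assumptions.

```python
# Toy illustration: transform a frequency hopping sequence with a
# permutation polynomial and measure linear span via Berlekamp-Massey.
# The sequence and delta(x) = x^5 over GF(7) are illustrative choices,
# not the construction from the paper.

P = 7  # prime field size; x^5 permutes GF(7) because gcd(5, 6) = 1

def delta(x):
    """Permutation polynomial delta(x) = x^5 over GF(P)."""
    return pow(x, 5, P)

def linear_span(seq, p):
    """Linear complexity of seq over GF(p) (Berlekamp-Massey)."""
    n = len(seq)
    c = [1] + [0] * n  # current connection polynomial
    b = [1] + [0] * n  # polynomial before the last length change
    L, m, prev_d = 0, 1, 1
    for i in range(n):
        # Discrepancy between the sequence and the current LFSR prediction.
        d = seq[i]
        for j in range(1, L + 1):
            d = (d + c[j] * seq[i - j]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(prev_d, p - 2, p) % p  # d / prev_d in GF(p)
        if 2 * L <= i:
            t = c[:]
            for j in range(n - m + 1):
                c[j + m] = (c[j + m] - coef * b[j]) % p
            L, b, prev_d, m = i + 1 - L, t, d, 1
        else:
            for j in range(n - m + 1):
                c[j + m] = (c[j + m] - coef * b[j]) % p
            m += 1
    return L

seq = [3, 1, 4, 1, 5, 2, 6, 2, 5, 3, 5, 0]  # toy hopping sequence over GF(7)
transformed = [delta(x) for x in seq]       # delta only relabels frequencies
print("original span:   ", linear_span(seq, P))
print("transformed span:", linear_span(transformed, P))
```

Because δ is a bijection on the frequency alphabet, applying it elementwise preserves the Hamming correlation properties on which Peng-Fan optimality is based, while it can change the linear span.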
In a previous paper [L. Giambruno and S. Mantaci, Theoret. Comput. Sci. 411 (2010) 1785–1792] a bideterministic transducer is defined for the bidirectional deciphering of words by the method introduced by Girod [IEEE Commun. Lett. 3 (1999) 245–247]. Such a method is defined using prefix codes. Moreover, a coding method inspired by Girod's is introduced, and a transducer that allows both right-to-left and left-to-right decoding by this method is defined. It is also proved that this transducer is minimal. Here we consider the number of states of such a transducer, related to some features of the considered prefix code X. We find bounds on this number of states in relation to different notions of “size” of X. In particular, we give an exact formula for the number of states of transducers associated with maximal prefix codes. We moreover consider two special cases of codes: maximal uniform codes and a class of codes that we name string-codes. We show that they represent, for maximal codes, the extreme cases with regard to the number of states in terms of the different sizes. Moreover, we prove that prefix codes corresponding to isomorphic trees have transducers that are isomorphic as unlabeled graphs.
A closed λ-term M is easy if, for any other closed term N, the lambda theory generated by M = N is consistent. Recently, a general technique has been introduced to prove the easiness of λ-terms through the semantical notion of simple easiness. Simple easiness implies easiness and allows one to prove consistency results via the construction of suitable filter models of λ-calculus living in the category of complete partial orderings: given a simple easy term M and an arbitrary closed term N, it is possible to build (in a canonical way) a non-trivial filter model which equates the interpretations of M and N. The question of whether easiness implies simple easiness constitutes Problem 19 in the TLCA list of open problems. In this paper we answer the question negatively, providing a non-empty co-r.e. (complement of a recursively enumerable) set of easy, but not simple easy, λ-terms.
In this paper, we model knowledge dynamics in agents’ belief bases in a collaborative multi-agent system (MAS). Four change operators are introduced: expansion, contraction, prioritized revision, and non-prioritized revision. For all of them, both constructive definitions and an axiomatic characterization by representation theorems are given. We formally justify the principles of minimal change, consistency maintenance, and non-prioritization. These operators are based on an epistemic model for multi-source belief revision in which a rational way to weigh beliefs, using a credibility order among agents, is developed. The defined operators can be seen as skills added to the agents, improving the collective reasoning of a MAS.
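As a rough, hypothetical sketch of how a credibility order can drive such operators (the paper's actual constructions and postulates are not reproduced here), consider a belief base of (formula, source) pairs where an incoming belief is accepted only if its source is at least as credible as every source defending a contradicting belief; the agent names, the credibility order, and the literal-based conflict test below are all illustrative assumptions.

```python
# Hypothetical sketch of credibility-based belief change operators.
# Beliefs are literals ("p" or "~p") tagged with their source agent;
# the credibility order and the conflict test are toy assumptions.

CREDIBILITY = {"sensor": 1, "peer": 2, "supervisor": 3}  # higher = more credible

def negation(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def expand(base, literal, source):
    """Expansion: add the belief without any consistency check."""
    return base | {(literal, source)}

def contract(base, literal):
    """Contraction: give up every occurrence of the belief."""
    return {(l, s) for (l, s) in base if l != literal}

def revise(base, literal, source):
    """Non-prioritized revision: the input is accepted only if its
    source is at least as credible as every conflicting source."""
    conflicts = {(l, s) for (l, s) in base if l == negation(literal)}
    if any(CREDIBILITY[s] > CREDIBILITY[source] for (_, s) in conflicts):
        return base                      # input rejected: not credible enough
    return expand(base - conflicts, literal, source)  # minimal change

base = {("p", "sensor")}
base = revise(base, "~p", "supervisor")  # accepted: supervisor outranks sensor
base = revise(base, "p", "peer")         # rejected: peer below supervisor
print(base)                              # {('~p', 'supervisor')}
```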
This paper describes and compares the most relevant approaches for associating tags with semantics in order to make explicit the meaning of those tags. We identify a common set of steps that are usually considered across all these approaches and frame our descriptions according to them, providing a unified view of how each approach tackles the different problems that appear during the semantic association process. Furthermore, we provide some recommendations on (a) how and when to use each of the approaches according to the characteristics of the data source, and (b) how to improve results by leveraging the strengths of the different approaches.
In recent years, digital content of three-dimensional (3D) scientific objects has become widespread and is made available in a plethora of online scientific repositories. A systematic and formal approach becomes necessary to represent the knowledge and information related to these objects, in order to facilitate their search, retrieval, and reuse. In this paper, we employ semantic interoperability to manage 3D scientific object content in a large-scale framework of scientific applications that consists of 3D shape models and associated tools and resources. We use an advanced ontological organization for their metadata and powerful reasoning engines for their search and retrieval, and we elucidate several crucial issues in the design and implementation of 3D knowledge-based management systems.
In the framework of fully cooperative multi-agent systems, independent (non-communicative) agents that learn by reinforcement must overcome several difficulties to manage to coordinate. This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration and shadowed equilibria. A selection of multi-agent domains is classified according to those challenges: matrix games, Boutilier's coordination game, predator pursuit domains and a special multi-state game. Moreover, the performance of a range of algorithms for independent reinforcement learners is evaluated empirically. Those algorithms are Q-learning variants: decentralized Q-learning, distributed Q-learning, hysteretic Q-learning, recursive frequency maximum Q-value and win-or-learn fast policy hill climbing. An overview of the learning algorithms’ strengths and weaknesses against each challenge concludes the paper and can serve as a basis for choosing the appropriate algorithm for a new domain. Furthermore, the distilled challenges may assist in the design of new learning algorithms that overcome these problems and achieve higher performance in multi-agent applications.
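To make one of these variants concrete, the sketch below implements the hysteretic Q-learning update, in which an optimistic independent learner uses a larger learning rate for positive temporal-difference errors than for negative ones, so that a teammate's exploratory mistakes do not erase good estimates; the two rates and the environment interface are illustrative assumptions, while the update rule follows the usual presentation of the algorithm.

```python
import random
from collections import defaultdict

# Hysteretic Q-learning update for an independent learner.
# Positive TD errors are learned at rate ALPHA, negative ones at the
# smaller rate BETA, making the agent optimistic about teammates'
# exploration. ALPHA, BETA, GAMMA values are illustrative.
ALPHA, BETA, GAMMA = 0.1, 0.01, 0.95

Q = defaultdict(float)  # tabular Q-values keyed by (state, action)

def hysteretic_update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    delta = reward + GAMMA * best_next - Q[(state, action)]
    rate = ALPHA if delta >= 0 else BETA  # hysteresis: slow to unlearn
    Q[(state, action)] += rate * delta

def epsilon_greedy(state, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```

With BETA = ALPHA this reduces to decentralized Q-learning, and with BETA = 0 it approaches the purely optimistic update of distributed Q-learning, so the pair of rates trades off robustness to stochasticity against robustness to a teammate's exploration.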
The paper is concerned with the way in which “ontology” and “realism” are to be interpreted and applied so as to give us a deeper philosophical understanding of mathematical theories and practice. Rather than argue for or against some particular realistic position, I shall be concerned with possible coherent positions, their strengths and weaknesses. I shall also discuss related but different aspects of these problems. The terms in the title are the common thread that connects the various sections.
Starting from the late 90s, public administrations have devoted a considerable share of their budgets to developing ICT solutions to better deliver services to citizens. In spite of this effort, many statistics show that the mere availability of ICT-based services does not per se guarantee their usage. Citizens have continued to largely access services through “traditional” means. In our study we suggest that this situation is partly due to the fact that relevant domain-dependent requirements, mainly related to the delivery process of e-government digital services, are often ignored in the development of e-government solutions. We provide here a domain-related quality framework and encode it in a set of formal statements, so that we can apply automatic verification techniques to assess and improve ICT solutions adopted by public administrations. The paper discusses both the defined quality framework and the tool chain we developed to enable automatic assessment of ICT solutions. The tool chain is based on a denotational mapping of business process modeling notation elements into process algebraic descriptions and on the encoding of quality requirements in linear temporal logic formulas. The resulting approach has been applied to real case studies with encouraging results.
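As a hypothetical illustration of the kind of delivery-process requirement such a framework can encode (the paper's actual formulas are not given in the abstract), a liveness property stating that every submitted citizen request is eventually answered, and a precedence property stating that no answer is delivered before authentication, could be written in linear temporal logic as follows; the atomic propositions are invented for the example.

```latex
% Hypothetical LTL encodings of delivery-process quality requirements;
% the atomic propositions are illustrative, not the paper's.
% Every submitted request is eventually answered:
\mathbf{G}\,\bigl(\mathit{requestSubmitted} \rightarrow \mathbf{F}\,\mathit{responseDelivered}\bigr)
% No response is delivered before the citizen is authenticated:
\neg\,\mathit{responseDelivered} \;\mathbf{W}\; \mathit{citizenAuthenticated}
```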
In this part we address theoretical aspects of density-ratio estimation.
In Chapter 13, we analyze the asymptotic properties of density-ratio estimation. We first establish the consistency and asymptotic normality of the KLIEP method (see Chapter 5) in Section 13.1, and we elucidate the asymptotic learning curve of the LSIF method (see Chapter 6) in Section 13.2. Then, in Section 13.3, we explain that the logistic regression method (see Chapter 4) achieves the minimum asymptotic variance when the parametric model is specified correctly. Finally, in Section 13.4, we compare theoretically the performance of density-ratio estimation methods, showing that separate density estimation (see Chapter 2) is favorable if correct density models are available, and direct density-ratio estimation is favorable otherwise.
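As a concrete aside on the classification view mentioned above (a standard construction, not code from the book): if the numerator and denominator samples are pooled and labeled y = 1 and y = 0 respectively, Bayes' theorem gives p_nu(x)/p_de(x) = (n_de/n_nu) · P(y=1|x)/P(y=0|x), so a fitted logistic regression yields a density-ratio estimate. The sketch below assumes scikit-learn and Gaussian toy data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Density-ratio estimation via probabilistic classification (toy data).
# Label numerator samples y=1 and denominator samples y=0; then
#   p_nu(x)/p_de(x) = (n_de/n_nu) * P(y=1|x) / P(y=0|x).
rng = np.random.default_rng(0)
x_nu = rng.normal(loc=0.0, scale=1.0, size=(1000, 1))  # numerator samples
x_de = rng.normal(loc=0.5, scale=1.2, size=(1000, 1))  # denominator samples

X = np.vstack([x_nu, x_de])
y = np.concatenate([np.ones(len(x_nu)), np.zeros(len(x_de))])
clf = LogisticRegression().fit(X, y)

def ratio(x):
    """Estimated p_nu(x)/p_de(x) from the class-posterior odds."""
    p = clf.predict_proba(x)[:, 1]
    return (len(x_de) / len(x_nu)) * p / (1.0 - p)

print(ratio(np.array([[0.0], [2.0]])))  # ratio estimates at two test points
```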
In Chapter 14, the convergence rates of KLIEP (see Chapter 5) and uLSIF (see Chapter 6) are investigated theoretically under the non-parametric setup.
In Chapter 15, a parametric method of a two-sample test is described, and its properties are analyzed. We derive an optimal estimator of the divergence in the sense of the asymptotic variance, which is based on parametric density-ratio estimation. Then we provide a statistic for two-sample tests based on the optimal divergence estimator, which is proved to dominate the existing empirical likelihood-score test.
Finally, in Chapter 16, the numerical stability of kernelized density-ratio estimators is analyzed. As shown in Section 7.2.2, the ratio-fitting and moment-matching methods share the same solution in theory, although the optimization criteria are different.
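For concreteness, the sketch below implements a kernelized ratio-fitting estimator in the uLSIF style of Chapter 6: with Gaussian kernel basis functions centered at numerator samples, the regularized least-squares problem has the closed-form solution α̂ = (Ĥ + λI)⁻¹ĥ. The kernel width, regularization constant, and toy data are assumptions (in practice they would be chosen by cross-validation); Chapter 16's stability analysis concerns estimators of this form.

```python
import numpy as np

# uLSIF-style kernelized ratio fitting, toy setup.
# Gaussian kernels centered at numerator samples; sigma and lam are
# illustrative choices (in practice chosen by cross-validation).
rng = np.random.default_rng(1)
x_nu = rng.normal(0.0, 1.0, size=200)   # numerator samples
x_de = rng.normal(0.5, 1.2, size=200)   # denominator samples
centers = x_nu[:50]
sigma, lam = 0.5, 0.1

def phi(x):
    """Gaussian kernel design matrix: phi[i, l] = K(x_i, c_l)."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

H = phi(x_de).T @ phi(x_de) / len(x_de)       # denominator second moments
h = phi(x_nu).mean(axis=0)                    # numerator first moments
alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)

r_hat = np.maximum(phi(x_de) @ alpha, 0.0)    # ratio estimates, clipped at 0
print(r_hat[:5])
```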
In this part we show how density-ratio estimation methods can be used for solving various machine learning problems.
In the context of importance sampling (Fishman, 1996), where the expectation over one distribution is computed by the importance-weighted expectation over another distribution, density ratios play an essential role. In Chapter 9, the importance sampling technique is applied to non-stationarity/domain adaptation in the semi-supervised learning setup (Shimodaira, 2000; Zadrozny, 2004; Sugiyama and Müller, 2005; Storkey and Sugiyama, 2007; Sugiyama et al., 2007; Quiñonero-Candela et al., 2009; Sugiyama and Kawanabe, 2011). It is also shown that the same importance-weighting idea can be used for solving multi-task learning (Bickel et al., 2008).
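The identity behind this, E_p[f(x)] = E_q[(p(x)/q(x)) f(x)], is easy to check numerically; the minimal sketch below does so with two known Gaussians, so the true density ratio is available in closed form (in the applications above it would instead have to be estimated).

```python
import numpy as np
from scipy.stats import norm

# Importance sampling: estimate E_p[f(x)] using samples from q,
# reweighted by the density ratio w(x) = p(x)/q(x).
# Both densities are known Gaussians here, so w is exact; in
# covariate-shift applications w itself must be estimated.
rng = np.random.default_rng(0)
p = norm(loc=0.0, scale=1.0)    # target distribution
q = norm(loc=1.0, scale=1.5)    # sampling distribution
f = lambda x: x ** 2            # E_p[f] = Var_p = 1

x = q.rvs(size=100_000, random_state=rng)
w = p.pdf(x) / q.pdf(x)         # density ratio (importance weights)

print("importance-weighted estimate:", np.mean(w * f(x)))  # ~1.0
print("unweighted (biased) estimate:", np.mean(f(x)))      # ~ E_q[f] = 3.25
```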
Another major usage of density ratios is distribution comparison. In Chapter 10, two methods of distribution comparison based on density-ratio estimation are described: inlier-based outlier detection, where distributions are compared in a pointwise manner (Smola et al., 2009; Hido et al., 2011), and two-sample tests, where the overall difference between distributions is compared within the framework of hypothesis testing (Sugiyama et al., 2011c).
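As a minimal sketch of the inlier-based idea (with the ratio computed from known densities rather than estimated, which the cited methods would do directly from samples), each test point is scored by w(x) = p_inlier(x)/p_test(x): points far less likely under the inlier density than under the test density get a small ratio and are flagged. The mixture weights and threshold below are arbitrary toy choices.

```python
import numpy as np
from scipy.stats import norm

# Inlier-based outlier detection (toy): score each test point by the
# density ratio w(x) = p_inlier(x) / p_test(x); small scores flag
# outliers. Densities are known here, whereas the cited methods
# estimate the ratio directly from samples.
p_inlier = norm(loc=0.0, scale=1.0)

rng = np.random.default_rng(0)
test = np.concatenate([rng.normal(0.0, 1.0, 95),    # mostly inliers
                       rng.normal(6.0, 0.5, 5)])    # a few outliers
# Test density: mixture matching how the test set was generated.
p_test = lambda x: 0.95 * norm.pdf(x, 0.0, 1.0) + 0.05 * norm.pdf(x, 6.0, 0.5)

scores = p_inlier.pdf(test) / p_test(test)
flagged = test[scores < 0.05]       # threshold is an arbitrary choice
print(np.sort(flagged))             # the points near 6 are flagged
```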
In Chapter 11 we show that density-ratio methods allow one to accurately estimate mutual information (Suzuki et al., 2008, 2009a). Mutual information is a key quantity in information theory (Cover and Thomas, 2006), and it can be used for detecting statistical independence between random variables.
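Concretely, mutual information is the expectation of the log density ratio between the joint density and the product of the marginals, MI = E_{p(x,y)}[log p(x,y)/(p(x)p(y))]. The toy check below evaluates this with known Gaussian densities, for which MI = −½ log(1 − ρ²) in closed form; the cited estimators instead learn the ratio directly from samples.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Mutual information as the mean log density ratio between the joint
# density and the product of marginals. Densities are known Gaussians
# here; the cited estimators learn the ratio directly from samples.
rho = 0.8
rng = np.random.default_rng(0)
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

joint = multivariate_normal([0.0, 0.0], cov).pdf(xy)
marginals = norm.pdf(xy[:, 0]) * norm.pdf(xy[:, 1])

mi_mc = np.mean(np.log(joint / marginals))      # Monte Carlo estimate
mi_true = -0.5 * np.log(1 - rho ** 2)           # closed form for Gaussians
print(mi_mc, mi_true)                           # both ~0.51
```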