We have used the Automated Plate Scanner (APS) at the University of Minnesota to digitize glass copies of the blue and red plates of the original Palomar Observatory Sky Survey (POSS I) with |b| > 20°. The APS Image Database is a database of all digitized images larger than the photographic noise threshold. It includes all of the matched images in the object catalog, as well as those unmatched images above the noise threshold. The matched image data of the catalog has the advantage of confirming the reality of the image. This is especially important for small images near the plate limit. But these are not all of the detected real images; very blue or very red faint objects may be excluded by this matching requirement. The image database allows information on them to be retrieved, and is therefore a valuable complement to the object catalog. The operation of the APS and the scanning procedures are described in detail in Pennington et al. (1993). We are now processing plate data into the image database. A set of query forms, a tutorial and documentation can be found at http://isis.spa.umn.edu/IDB/homepage.idb.html.
This paper highlights the role that the World Wide Web (WWW) has to play as an aid to psychiatry. A basic history of the WWW is provided, as is an introduction to some WWW search techniques. The literature on applications potentially relevant to psychiatry is reviewed using computer search facilities (BIDS, PsychLit and Medline). The WWW is one of the aspects of the Internet with huge potential for exploitation; both clinical and research psychiatrists can benefit from its use.
Fourier coefficients are a valuable tool in the study of a wide variety of pulsating stars. They can be used to derive various physical parameters, including mass, luminosity, metallicity and effective temperature, and are frequently used to discriminate between different pulsation modes. With the growth of large-scale surveys and the availability of data on the Internet, the number of published Fourier coefficients has expanded greatly, and it is difficult to find all current data for an individual star or a subset of stars. To assist others in obtaining and making use of Fourier coefficients, an archive of published values has been set up. Users can search for data on individual stars or for a range of parameters. Several Java programs are used to display the data in a variety of ways. The archive is located at the Web site http://www.earth.uni.edu/fourier/.
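As a concrete illustration of how such coefficients are obtained, the sketch below fits a truncated Fourier series to a light curve by linear least squares and converts the raw cosine/sine terms into amplitudes A_k and phases phi_k, plus the amplitude ratio R21 and phase difference phi21 commonly used for mode discrimination. This is a generic sketch of the standard technique, not the archive's own pipeline; all function names are illustrative.

```python
import numpy as np

def fit_fourier(t, mag, period, order=4):
    """Least-squares fit of m(t) = A0 + sum_k [a_k cos(k*w*t) + b_k sin(k*w*t)],
    returned as amplitudes A_k and phases phi_k in the sine convention
    m(t) = A0 + sum_k A_k sin(k*w*t + phi_k)."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * w * t))
        cols.append(np.sin(k * w * t))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
    A0 = coef[0]
    amps, phases = [], []
    for k in range(1, order + 1):
        a, b = coef[2 * k - 1], coef[2 * k]      # cos and sin coefficients
        amps.append(np.hypot(a, b))              # A_k
        phases.append(np.arctan2(a, b) % (2 * np.pi))  # phi_k (sine convention)
    return A0, np.array(amps), np.array(phases)

def fourier_ratios(amps, phases):
    """Low-order Fourier parameters used to discriminate pulsation modes."""
    R21 = amps[1] / amps[0]
    phi21 = (phases[1] - 2.0 * phases[0]) % (2.0 * np.pi)
    return R21, phi21
```

Because the model is linear in the cosine/sine coefficients, no iterative fitting is needed once the period is known; period search itself is a separate problem.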
The observation of an area of 120° × 56° centered on RA=8h DEC=20° at 408 MHz was the first astronomical use of the MPIfR 100-m telescope (1970) and was designed to compile a complete sky survey, also using data from Jodrell Bank and Parkes (Haslam et al., 1982). The observation of the northern sky at 1420 MHz started in 1972 using the Stockert 25-m telescope and was finished in 1976 (Reich and Reich 1986). This survey has since been extended to an all-sky survey using data from Villa Elisa (Argentina). The two surveys are absolutely calibrated; their angular resolutions are 0.8° and 0.59°, respectively. A number of surveys of the Galactic plane have been made with the 100-m telescope at arc-minute angular resolution. Surveys at 2695 MHz (|b| ≤ 5°) (Reich et al. 1990, Fürst et al. 1990) and at 1410 MHz (|b| < 4°) (Reich et al. 1990) are public.
At medium Galactic latitudes (up to |b| = 20°) the emission consists mainly of faint extended ridges or arcs superimposed on the still-dominant diffuse Galactic emission, which is about 10 times stronger. These structures have never been investigated systematically, although they provide important clues to the understanding of the “disk-halo connection”. This region is covered by new observations at 1400 MHz with the 100-m telescope.
The status of the WWW-based Fourier Coefficient web site is presented. Currently the database has coefficients for not only galactic field variables, but also those found in globular clusters and other galaxies, including the Magellanic Clouds. The database can be used to show various correlations between physical characteristics of the stars and the coefficients, as well as inter-relationships between the coefficients themselves. The database is accessible at http://nitro9.earth.uni.edu/fourier/.
Economists choose theories and they choose ways of pursuing theories, and they leave others unchosen. Why do economists choose the way they do? How should economists choose? What are the objectives and what are the constraints? What should they be? The questions are both descriptive and prescriptive.
There are two broad classes of “criteria of choice” that have been somewhat systematically considered in the recent literature on economic methodology:
Empirical criteria. There are several possible ways of incorporating empirical criteria in one's theory of science. The respective methodology of theory assessment may be static or dynamic, it may be deductivist or inductivist, it may include various ideas of what constitutes empirical evidence, and so on. What they all share is the general idea that scientific theories are, or are to be, checked against empirical evidence according to some rules, and that this determines the choice of theory.
Social criteria. Again, there are several options. The social criteria may be related to the social interests of scientists or larger social collectives, they may be based on the persuasiveness and tradition-boundedness of theories, they may involve social or moral norms, they may be derived from various costs and benefits of holding a theory in a given research community, and so on. If they involve empirical data, it is the social aspects of the data that matter. What all these views share is that scientific theories are taken to have social attributes (functions, consequences) that play or should play a major role in theory choice.
We discuss from a practical point of view a number of issues involved in writing distributed Internet and WWW applications using LP/CLP systems. We describe PiLLoW, a public-domain Internet and WWW programming library for LP/CLP systems that we have designed to simplify the process of writing such applications. PiLLoW provides facilities for accessing documents and code on the WWW; parsing, manipulating and generating HTML and XML structured documents and data; producing HTML forms; writing form handlers and CGI scripts; and processing HTML/XML templates. An important contribution of PiLLoW is to model HTML/XML code (and, thus, the content of WWW pages) as terms. The PiLLoW library has been developed in the context of the Ciao Prolog system, but it has been adapted to a number of popular LP/CLP systems, which support most of its functionality. We also describe the use of concurrency and a high-level model of client-server interaction, Ciao Prolog's active modules, in the context of WWW programming. We propose a solution for client-side downloading and execution of Prolog code, using generic browsers. Finally, we also provide an overview of related work on the topic.
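The core idea of modeling markup as structured terms rather than strings can be illustrated outside Prolog as well. The hypothetical Python analogue below (not PiLLoW's actual API, which is a Prolog library of terms such as `env/3`) represents a page as nested (tag, attributes, children) structures and renders them to HTML text:

```python
# Hypothetical analogue of PiLLoW's terms-as-HTML idea: a page is a
# nested data structure, and rendering is a simple recursive walk.
# This is an illustration of the concept, not PiLLoW itself.
def render(term):
    """Render a (tag, attrs, children) structure, or a plain string, to HTML."""
    if isinstance(term, str):
        return term
    tag, attrs, children = term
    attr_txt = "".join(f' {k}="{v}"' for k, v in attrs.items())
    inner = "".join(render(c) for c in children)
    return f"<{tag}{attr_txt}>{inner}</{tag}>"

page = ("html", {}, [
    ("body", {}, [
        ("p", {"class": "greeting"}, ["Hello, WWW"]),
    ]),
])
```

Working on terms rather than strings is what makes generating, templating and transforming documents composable: a form handler can pattern-match and rewrite subterms instead of splicing text.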
We investigate the problem of complex answers in question answering. Complex answers consist of several simple answers. We describe the online question answering system SHAPAQA, and using data from this system we show that the problem of complex answers is quite common. We define nine types of complex questions, and suggest two approaches, based on answer frequencies, that allow question answering systems to tackle the problem.
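One of the frequency-based approaches can be sketched as follows: collect the simple candidate answers proposed by many independent retrieved snippets, and for a complex question return every candidate with sufficient support rather than only the single most frequent one. This is a minimal sketch of the general idea, with illustrative names and data; it is not SHAPAQA's actual implementation.

```python
from collections import Counter

def rank_answers(candidates, min_support=2):
    """Rank candidate answers by how many independent snippets propose them.
    For a complex question, every candidate meeting the support threshold
    is kept as one part of the multi-part answer."""
    counts = Counter(candidates)
    return [answer for answer, n in counts.most_common() if n >= min_support]

# E.g. "Which countries border Germany?" has several correct simple answers,
# each repeated across snippets, while noise answers occur rarely:
candidates = ["France", "Poland", "France", "Austria",
              "Poland", "France", "Atlantis"]
```

The support threshold trades recall for precision: raising it suppresses spurious answers but may drop rarely mentioned parts of the complex answer.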
Since 1997, the EuroSOMNET project, funded by the EU-ENRICH programme, has assembled a metadatabase, and separate experimental databases, of European long-term experiments that investigate changes in soil organic matter. In this paper, we describe the WWW-based metadatabase, which is a product of this project. The database holds detailed records of 110 long-term soil organic matter experiments, giving a wide geographical coverage of Europe, and includes experiments from the European part of the former Soviet Union, many of which have not been available previously. For speed of access, records are stored as Hypertext Markup Language (HTML) files. In this paper, we describe the metadatabase, the experiments for which records are held, the information stored about each experiment, and summarize the main characteristics of these experiments. Details from the metadatabase have already been used to examine regional trends in soil organic matter in Germany and eastern Europe, to construct and calibrate a regional statistical model of humus balance in Russia, to examine the effects of climatic conditions on soil organic matter dynamics, to estimate the potential for carbon sequestration in agricultural soils in Europe, and to test and improve soil organic matter models. The EuroSOMNET metadatabase provides information applicable to a wide range of agricultural and environmental questions and can be accessed freely via the EuroSOMNET home page at URL: http://www.iacr.bbsrc.ac.uk/aen/eusomnet/index.htm.
Logical frameworks and meta-languages are intended as a common substrate for representing and implementing a wide variety of logics and formal systems. Their definition and implementation have been the focus of considerable work over the last decade. At the heart of this work is a quest for generality: a logical framework provides a basis for capturing uniformities across deductive systems and support for implementing particular systems. Similarly, a meta-language supports reasoning about and using languages.
Logical frameworks have been based on a variety of different languages including higher-order logics, type theories with dependent types, linear logic, and modal logic. Techniques of representation of logics include higher-order abstract syntax, inductive definitions or some form of equational or rewriting logic in which substitution is explicitly encoded.
Examples of systems that implement logical frameworks include Alf, Coq, NuPrl, HOL, Isabelle, Maude, lambda-Prolog and Twelf. An active area of research in such systems is the study of automated reasoning techniques. Current work includes the development of various automated procedures as well as the investigation of rewriting tools that use reflection or make use of links with systems that already have sophisticated rewriting systems. Program extraction and optimization are additional topics of ongoing work.