The Sharjah Lunar Impact Observatory (SLIO), established in 2020, is the only observatory in the Middle East and North Africa region fully dedicated to lunar impact observation and analysis. The observatory is located inside the cosmic garden of the Sharjah Academy for Astronomy, Space Sciences and Technology, University of Sharjah, Sharjah, United Arab Emirates, at coordinates 25°17’02.1”N 55°27’48.4”E and an altitude of 80 m above sea level. We present five lunar impact events detected by SLIO in 2020. The properties of the events were deduced from a comprehensive analysis, which yielded apparent magnitudes of 7.94, 8.92, 9.54, 10.06, and 7.79 and durations of 0.04, 0.08, 0.08, 0.04, and 0.08 seconds, respectively. Since the Moon is Earth’s closest companion, these impactors represent a possible danger to Earth as well as to the Moon. A continuous monitoring system that estimates the number, size, and distribution of meteoroids hitting the lunar surface can therefore help predict threats to Earth, as it provides information about meteoroid activity in Earth’s neighborhood and can considerably help prevent potential disasters.
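To illustrate how magnitudes and durations like those above translate into an energy scale, the sketch below converts an apparent magnitude to an irradiance at Earth and scales it by the flash duration, assuming isotropic emission at the Earth–Moon distance. The zero-point flux and the isotropy assumption are illustrative choices, not SLIO's actual reduction pipeline.

```python
import math

def flash_luminous_energy(apparent_mag, duration_s,
                          flux_zero_point=1.4e-10,   # W/m^2 at mag 0 (assumed value)
                          distance_m=3.844e8):       # mean Earth-Moon distance
    """Rough luminous energy of a lunar impact flash.

    Converts apparent magnitude to irradiance at Earth via the
    Pogson relation, then scales by flash duration and assumes
    isotropic emission over a sphere of Earth-Moon radius.
    """
    flux = flux_zero_point * 10 ** (-0.4 * apparent_mag)    # W/m^2 at Earth
    power = flux * 4.0 * math.pi * distance_m ** 2          # W, isotropic
    return power * duration_s                               # J

# Magnitude/duration pairs of the five events reported above
events = [(7.94, 0.04), (8.92, 0.08), (9.54, 0.08), (10.06, 0.04), (7.79, 0.08)]
for m, dt in events:
    print(f"m={m:5.2f}, dt={dt:.2f} s -> E_lum ~ {flash_luminous_energy(m, dt):.2e} J")
```

A full analysis would further divide by a luminous efficiency to recover the impactor's kinetic energy; that factor is omitted here.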
Life on Earth can be (and has been) affected by various phenomena of non-biological or extraterrestrial origin, such as worldwide volcanic eruptions, asteroid and comet impacts, solar storms, the evolution of the Sun, supernova explosions, and cosmic ray showers, among others.
These phenomena and the associated catastrophes can be grouped into categories according to their origin: terrestrial, solar system, galactic, extragalactic, and the final destiny of the Universe. We briefly describe many of the identified risks and compare them by their degree of impact on life and humanity.
The time scales and the areas on Earth affected by each phenomenon vary considerably. We list phenomena that can affect a region the size of a country, a continent, or the entire globe. However, we note that, given humanity’s degree of global economic and social interdependence, even a local-scale phenomenon can have global consequences.
The risks can be further classified as random or deterministic. Random threats are those associated with an event that has a certain probability of occurrence on a given time scale, but whose exact future date we do not know, e.g. an asteroid impact or a supernova explosion. Deterministic threats are those that will certainly occur within a known range of time in the future, e.g. the evolution of the Sun into a red giant.
This comparative study will analyze what the “certainties” are (in statistical terms) about the different phenomena of extraterrestrial origin that will affect life on Earth on different geographical and temporal scales.
Could Machine Learning (ML) make fundamental discoveries and tackle unsolved problems in Cosmology? Detailed observations of the present contents of the universe are consistent with the Cosmological Constant Λ and Cold Dark Matter (ΛCDM) model, subject to some unresolved inconsistencies (‘tensions’) among observations of the Hubble Constant and the clumpiness factor. To understand these issues further, large surveys of billions of galaxies and other probes require new statistical approaches. In recent years the power of ML, and in particular ‘Deep Learning’, has been demonstrated for object classification, photometric redshifts, anomaly detection, enhanced simulations, and inference of cosmological parameters. It is argued that the more traditional ‘shallow learning’ (i.e. with pre-processing feature extraction) is actually quite deep, as it brings in human knowledge, while ‘deep learning’ might be perceived as a black box unless supplemented by explainability tools. The ‘killer applications’ of ML for Cosmology are still to come. New ways to train the next generation of scientists for the Data Intensive Science challenges ahead are also discussed. Finally, the chatbot ChatGPT is challenged to address the question posed in this article’s title.
We performed astrometric and multicolor photometric observations of near-Earth asteroids at the SBG telescope of the Kourovka Astronomical Observatory of the Ural Federal University and the Zeiss-1000 telescope of the Simeiz Observatory of the Institute of Astronomy of the Russian Academy of Sciences. We improved orbital elements and estimated the A2 acceleration due to the Yarkovsky effect for asteroids (52768) 1998 OR2, (65690) 1991 DG, (159857) 2004 LJ1, (326732) 2003 HB6, (332446) 2008 AF4, (388945) 2008 TZ3, 2015 NU13 from astrometric observations with the SBG telescope. Furthermore, we estimated the axial rotation periods of the asteroids (137170) 1999 HF1, (159857) 2004 LJ1, (326732) 2003 HB6 from photometric observations with the SBG telescope. We obtained color indices for the asteroids (137170) 1999 HF1, (138127) 2000 EE14, (153591) 2001 SN263, (159857) 2004 LJ1, (326732) 2003 HB6, 2010 TV149 from multicolor photometric observations with the Zeiss-1000 telescope. Furthermore, we estimated the taxonomic classes for three asteroids, according to the color indices: the asteroid (153591) 2001 SN263 has class C, (159857) 2004 LJ1 has class S, and (326732) 2003 HB6 has class D.
As surveys grow, the challenge is how to explore and interpret the increasing quantity of data. Integral field spectroscopic (IFS) galaxy surveys are a good example of how data have grown in complexity and in volume. In order to find complex relations between the spatially resolved structures of galaxies and their accretion histories, we combine high-dimensional IFS data, deep learning, and numerical simulations to infer the evolutionary paths of galaxies. In this work we generate 10,000 simulated galaxies from the TNG50 hydro-cosmological simulation to compare with the 10,000 galaxies observed in MaNGA, thus generating a mock MaNGA sample. We then analyse how well the simulated galaxies reproduce the properties of MaNGA galaxies and study how the evolutionary paths of the mock galaxies relate to their observable properties. We finish by outlining our next steps, which include using contrastive learning.
The ecological well-being of the Earth is closely connected with the prevention of asteroid, comet, and meteoroid hazards. Asteroids and comets are the parent bodies of many meteoroids. Meteoroids, observed as meteors in the Earth’s atmosphere, constantly collide with the Earth. This means that the orbits of such meteoroids can signal the presence of potentially dangerous larger bodies in similar orbits. At the same time, the dynamics of the complex of meteoroid orbits is intricate. Meteor science is also engaged in unraveling the patterns in the orbital paths of interplanetary bodies potentially dangerous for life on Earth. A separate branch of meteor science concerns the chemistry of the influx of meteoric matter. This report is devoted to the analysis of the above problems, as well as related issues, using open meteor databases and other sources, with an emphasis on radio data.
The ability of any Machine Learning method to classify the spectra of galaxies depending on the properties of the stellar component rests on the information content of the data. The well-known degeneracies found in population synthesis models suggest this information might be so entangled as to challenge the most sophisticated Deep Learning approaches. This contribution focuses on the traditional definition of entropy to explore this problem from a fundamental viewpoint. We find that the information content – when interpreting the spectrum as a probability distribution function – is reduced to a few spectral intervals that are strongly correlated. Dimensionality reduction via PCA suggests the standard 4000Å break strength and Balmer absorption are the two most informative regions in the analysis of galaxy spectra.
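The two measurements this abstract rests on can be made concrete: treat a (non-negative, normalized) spectrum as a probability distribution over wavelength bins and compute its Shannon entropy, then apply PCA across a sample of spectra to see where the informative variance is concentrated. The sketch below uses purely synthetic spectra with a variable break at 4000Å; it illustrates the approach, not the authors' actual data or pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def spectral_entropy(flux):
    """Shannon entropy of a spectrum interpreted as a
    probability distribution over wavelength bins."""
    p = np.clip(flux, 1e-12, None)   # guard against noise-induced negatives
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# Toy galaxy spectra: a smooth continuum plus a 'break' below
# 4000 A whose strength varies from object to object.
wave = np.linspace(3500.0, 7000.0, 500)
spectra = []
for _ in range(200):
    strength = rng.uniform(0.2, 1.0)
    continuum = 1.0 + 1e-4 * (wave - wave.min())
    brk = np.where(wave < 4000.0, 1.0 - strength, 1.0)
    spectra.append(continuum * brk + rng.normal(0, 0.01, wave.size))
X = np.array(spectra)

entropies = np.array([spectral_entropy(s) for s in X])
print("median spectral entropy:", np.median(entropies))

# PCA: the first component should load mostly on the region
# around the synthetic 4000 A break, where the variance lives.
pca = PCA(n_components=2).fit(X)
peak_region = wave[np.argmax(np.abs(pca.components_[0]))]
print("wavelength of strongest PC1 loading:", peak_region)
```

With real spectra the interesting result is exactly the one the abstract reports: most bins carry little independent information, and the loadings concentrate on a few correlated intervals.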
With the volume and availability of astronomical data growing rapidly, astronomers will soon rely on machine learning algorithms in their daily work. This proceeding aims to give an overview of what machine learning is, delve into the different types of learning algorithms, and examine two astronomical use cases. Machine learning has opened a world of possibilities for astronomers working with large amounts of data; however, if not careful, users can fall into common pitfalls. Here we focus on solving problems related to time-series light curve data and optical imaging data, mainly from the Deeper, Wider, Faster Program (DWF). Alongside the written examples, online notebooks are provided to demonstrate these different techniques. This guide aims to help you build a small toolkit of knowledge and tools to take back with you for use on your own future machine learning projects.
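As a minimal illustration of the light-curve use case, the sketch below extracts a few hand-crafted features from synthetic light curves and trains a random forest to separate periodic variables from single-burst transients. The data, features, and classifier are illustrative assumptions, not the DWF pipeline or the notebooks referenced above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def features(lc):
    """Simple hand-crafted summary features for a light curve."""
    return [lc.mean(), lc.std(), lc.max() - lc.min(),
            np.abs(np.diff(lc)).mean()]

# Synthetic light curves: class 0 = periodic variable,
# class 1 = single Gaussian burst transient, both with noise.
t = np.linspace(0.0, 10.0, 200)
X, y = [], []
for _ in range(300):
    if rng.random() < 0.5:
        lc = np.sin(2 * np.pi * t / rng.uniform(0.5, 3.0))
        y.append(0)
    else:
        t0 = rng.uniform(2.0, 8.0)
        lc = 3.0 * np.exp(-0.5 * ((t - t0) / 0.3) ** 2)
        y.append(1)
    X.append(features(lc + rng.normal(0, 0.1, t.size)))

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The held-out split is the kind of guard against a common pitfall (evaluating on training data) that the proceeding warns about.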
Large area astronomical surveys will almost certainly contain new objects of a type that have never been seen before. The detection of ‘unknown unknowns’ by an algorithm is a difficult problem to solve, as unusual things are often easier for a human to spot than a machine. We use the concept of apparent complexity, previously applied to detect multi-component radio sources, to scan the radio continuum Evolutionary Map of the Universe (EMU) Pilot Survey data for complex and interesting objects in a fully automated and blind manner. Here we describe how the complexity is defined and measured, how we applied it to the Pilot Survey data, and how we calibrated the completeness and purity of these interesting objects using a crowd-sourced ‘zoo’. The results are also compared to unexpected and unusual sources already detected in the EMU Pilot Survey, including Odd Radio Circles, that were found by human inspection.
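The precise complexity definition used for EMU is given in the paper itself; as a generic stand-in, compressed size is a common proxy for apparent complexity (smooth single sources compress well, multi-component structure does not). The sketch below quantizes a synthetic image cutout and scores it by its gzip-compressed size; this is an assumed illustration, not the EMU measure.

```python
import gzip
import numpy as np

def apparent_complexity(image, levels=16):
    """Compression-based proxy for apparent complexity:
    normalize the image, quantize to a few grey levels, and
    return the gzip-compressed size of the result in bytes."""
    img = np.asarray(image, dtype=float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    quantized = (img * (levels - 1)).astype(np.uint8)
    return len(gzip.compress(quantized.tobytes()))

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]

# A single smooth Gaussian 'point source'...
simple = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)

# ...versus several overlapping components plus noise.
complex_src = sum(np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20.0)
                  for cx, cy in [(16, 20), (40, 44), (50, 12)])
complex_src = complex_src + rng.normal(0, 0.05, complex_src.shape)

print("simple source score :", apparent_complexity(simple))
print("complex source score:", apparent_complexity(complex_src))
```

Ranking cutouts by such a score and inspecting the tail is one way to surface candidate 'unknown unknowns' blindly; calibration against human labels, as with the crowd-sourced 'zoo' above, is still needed to control purity.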
Current and upcoming large optical and near-infrared astronomical surveys have fundamental science as their primary driver. To serve it, these missions scan large fractions of the entire sky at multiple wavelengths and epochs, which also makes their data sets valuable for investigations into astronomical hazards for life on Earth. The Netherlands Research School for Astronomy (NOVA) is a partner in several optical / near-infrared surveys. In this paper we focus on the astronomical hazard value of two of those: the surveys with the OmegaCAM wide-field imager at the VST and with the Euclid Mission. For each of them we provide a brief overview of the survey hardware, the data, and the information systems. We present first results related to the astronomical hazard investigations and evaluate to what extent the existing functionality of the information systems covers their needs.
Most of our knowledge about the Universe comes from the careful analysis of the light that reaches us. Spectroscopy, the most detailed method of spectrum analysis, when applied to stars provides information on the parameters of their atmospheres, including effective temperature, surface gravity, velocity fields, and chemical composition. Stellar classification brought forth the understanding of which physical parameters are critical in shaping stellar atmospheres, and it is a key element linking numerical modelling of atmospheres with observations. We present preliminary results on a method of stellar spectra classification based on large-scale unsupervised pre-training. The applied deep neural network of the auto-encoder type, thanks to the use of differentiable elements of physical modelling in the decoder, can work with medium- to high-resolution spectra, is insensitive to normalization errors and to differences in radial and rotational velocity, and operates over a wide range of signal-to-noise ratios.
Determination of the membership of star clusters is a very important step in their study, as membership affects the determination of cluster parameters such as radius, age, distance, mass function, etc. In this paper, we apply the unsupervised method of Gaussian Mixture Modelling (GMM) to find the membership of 9 open star clusters of varying ages and locations in the Galaxy using Gaia DR3 data. We compare our results to help understand the efficiency of GMM. We find that this method works well for relaxed clusters with ages larger than their relaxation times, as such clusters approximate Gaussians better.
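The core of the GMM membership approach can be sketched in a few lines: fit a two-component mixture to astrometric quantities (here, synthetic proper motions standing in for Gaia DR3 data), identify the compact component as the cluster, and read memberships off the posterior probabilities. The data and the 0.9 membership threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Toy proper-motion data (mas/yr): a compact cluster overdensity
# on top of a broad field population, mimicking Gaia astrometry.
cluster = rng.normal(loc=[-5.0, 3.0], scale=0.3, size=(200, 2))
field = rng.normal(loc=[0.0, 0.0], scale=5.0, size=(800, 2))
pm = np.vstack([cluster, field])

# Two-component GMM: one tight cluster, one extended field.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pm)

# The cluster component is the one with the smaller covariance;
# posterior probabilities then serve as membership probabilities.
cluster_idx = np.argmin([np.trace(c) for c in gmm.covariances_])
membership = gmm.predict_proba(pm)[:, cluster_idx]
members = membership > 0.9
print("stars with P(member) > 0.9:", members.sum())
```

The abstract's finding maps directly onto this picture: a relaxed cluster is well approximated by the tight Gaussian component, so the posterior separation between cluster and field is clean.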
Machine Learning is a powerful tool for astrophysicists and has already seen significant uptake in the community. But some barriers to entry remain, relating to proper understanding, the difficulty of interpretability, and the lack of cohesive training. In this discussion session we addressed some of these questions and suggested how the field may move forward.
During IAU Symposium 368, “Machine Learning in Astronomy: Possibilities and Pitfalls”, in Busan, we organized a panel discussion on the different aspects of data fusion for large data sets. Driven by the needs of scientists, data-fusion techniques have been introduced to enable multi-wavelength as well as multi-messenger approaches, which are necessary to obtain a more detailed and complete representation of physical phenomena. We identified six aspects related to data fusion: missing data, heterogeneous data, data access in general, challenges related to data size, FAIR data, and future challenges.
In the field of gravitational-wave (GW) interferometers, the most severe limitation on the detection of transient signals from astrophysical sources comes from transient noise artefacts, known as glitches, which occur at a rate of roughly one per minute. Because glitches reduce the amount of scientific data available, there is a need for better modelling and inclusion of glitches in large-scale studies, such as stress testing the search pipelines and increasing the confidence of detections. In this work, we employ a Generative Adversarial Network (GAN) to produce a particular class of glitches (blips) in the time domain. We share the trained network through a user-friendly open-source software package called <monospace>gengli</monospace> and provide practical examples of its usage.