
RFI detection with spiking neural networks

Published online by Cambridge University Press:  04 April 2024

N.J. Pritchard*
Affiliation:
International Centre for Radio Astronomy Research, University of Western Australia, Perth, WA, Australia
A. Wicenec
Affiliation:
International Centre for Radio Astronomy Research, University of Western Australia, Perth, WA, Australia
M. Bennamoun
Affiliation:
School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
R. Dodson
Affiliation:
International Centre for Radio Astronomy Research, University of Western Australia, Perth, WA, Australia
*
Corresponding author: N.J. Pritchard; Email: nicholas.pritchard@icrar.org

Abstract

Detecting and mitigating radio frequency interference (RFI) is critical for enabling and maximising the scientific output of radio telescopes. The emergence of machine learning (ML) methods capable of handling large datasets has led to their application in radio astronomy, particularly in RFI detection. Spiking neural networks (SNNs), inspired by biological systems, are well suited for processing spatio-temporal data. This study introduces the first exploratory application of SNNs to an astronomical data processing task, specifically RFI detection. We adapt the nearest latent neighbours (NLN) algorithm and auto-encoder architecture proposed by previous authors to SNN execution by direct ANN2SNN conversion, enabling simplified downstream RFI detection by sampling the naturally varying latent space from the internal spiking neurons. Our subsequent evaluation aims to determine whether SNNs are viable for future RFI detection schemes. We evaluate detection performance on the simulated HERA telescope dataset and the hand-labelled LOFAR observation dataset provided by the original authors. We additionally evaluate detection performance on a new MeerKAT-inspired simulation dataset that provides a technical challenge for machine-learnt RFI detection methods. This dataset focuses on satellite-based RFI, an increasingly important class of interference, and is an additional contribution of this work. Our SNN approach remains competitive with the original NLN algorithm and AOFlagger in AUROC, AUPRC, and F1-scores on the HERA dataset but exhibits difficulty on the LOFAR and Tabascal datasets. However, our method maintains this accuracy while completely removing the compute- and memory-intensive latent sampling step found in NLN. This work demonstrates the viability of SNNs as a promising avenue for ML-based RFI detection in radio telescopes by establishing a minimal performance baseline on traditional and nascent satellite-based RFI sources and is, to our knowledge, the first work to apply SNNs in astronomy.
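The abstract evaluates detection quality with AUROC, AUPRC, and F1-scores over per-pixel RFI masks. As a minimal illustrative sketch (not the authors' evaluation code), the F1-score of a predicted boolean flag mask against a ground-truth mask can be computed as follows; the `f1_score` helper and the toy masks are hypothetical:

```python
def f1_score(pred, truth):
    """F1-score of a predicted RFI mask against a ground-truth mask.

    Both arguments are flat sequences of booleans (True = pixel flagged
    as RFI). Returns 0.0 when there are no true positives.
    """
    tp = sum(p and t for p, t in zip(pred, truth))        # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))    # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))    # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: four time-frequency pixels.
pred  = [True, True, False, False]
truth = [True, False, True, False]
print(f1_score(pred, truth))  # 0.5
```

In practice a library implementation (e.g. scikit-learn's `f1_score`, `roc_auc_score`, and `average_precision_score`) would be used on the flattened masks.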

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Astronomical Society of Australia

Table 1. Results of the hyperparameter search. The number of trials conducted for each dataset is listed in parentheses next to the dataset name. LOFAR optimisation is limited owing to its extensive training time.


Table 2. Description of each dataset used for training and testing. We reproduce values for the HERA and LOFAR datasets from Mesarcik et al. (2022a).


Figure 1. Comparison of the NLN and SNLN methods on HERA data with all available noise sources. The AOFlagger threshold sets the baseline amplitude used to determine noise. The SNLN example shown runs for 256 inference steps, averaging over the last 128. Both NLN and SNLN outperform AOFlagger at low thresholds: NLN outperforms AOFlagger in all metrics at all thresholds, while SNLN outperforms NLN in F1-score in most cases and in AUPRC in some. This demonstrates that SNLN retains NLN's principal benefit: the ability to train and perform well on over-flagged data.


Figure 2. Out-of-distribution (OOD) performance comparison between AOFlagger and the NLN and SNLN methods. For each RFI morphology listed, all examples of that noise are withheld from the training data and present exclusively in the testing set. This test demonstrates the ability of NLN and SNLN to flag RFI that is completely unknown to the auto-encoder. The SNLN example shown runs for 256 inference steps, averaging over the last 128. Broad-band transient RFI is modelled on events like lightning, present across all frequencies but isolated in time. Broad-band continuous RFI is modelled on satellite communications present across a wide range of contiguous frequencies and continuous in time. Narrow-band burst RFI is modelled on ground-station communication that is isolated in frequency but present over all time. Blips are isolated in frequency and time to a single impulse. See Mesarcik et al. (2022a) for further details on the RFI included in the HERA dataset.


Table 3. Performance comparison between the NLN and SNLN methods on the HERA dataset with an AOFlagger threshold of 10. Best scores in bold. The first number in SNLN entries indicates the number of inference timesteps; the second is the number of averaged inference frames.
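The tables report each SNLN configuration as a pair: the total number of inference timesteps and the number of trailing output frames averaged to form the final detection map. As an illustrative sketch only (not the authors' implementation), averaging the last K per-timestep output frames of a spiking network might look like this; `averaged_output` and the toy frames are hypothetical:

```python
def averaged_output(frames, n_avg):
    """Average the last `n_avg` of T per-timestep SNN output frames.

    `frames` is a list of T equal-length lists of per-pixel spike
    activations; the result is one averaged frame, e.g. 256 timesteps
    with the last 128 averaged, as in the SNLN configurations above.
    """
    tail = frames[-n_avg:]               # keep only the last n_avg frames
    n = len(tail)
    return [sum(vals) / n for vals in zip(*tail)]

# Toy example: 4 timesteps of a 2-pixel output; average the last 2 frames.
frames = [[0, 0], [1, 0], [1, 1], [1, 1]]
print(averaged_output(frames, 2))  # [1.0, 1.0]
```

Averaging over later timesteps gives the spiking dynamics time to settle before the output is read, which is why the averaged window trails the start of inference.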


Figure 3. An example HERA spectrogram, the original mask, the output mask of the NLN algorithm, and the output mask of the SNLN algorithm.


Table 4. Performance comparison between the NLN and SNLN methods on the LOFAR dataset with an AOFlagger threshold of 10. Best scores in bold. The first number in SNLN entries indicates the number of inference timesteps; the second is the number of averaged inference frames. The AOFlagger results are taken from Mesarcik et al. (2022a).


Figure 4. An example LOFAR spectrogram, the original mask, the output mask of the NLN algorithm, and the output mask of the SNLN algorithm.


Table 5. Performance comparison between the NLN and SNLN methods on the Tabascal dataset with an AOFlagger threshold of 10. Best scores in bold. The first number in SNLN entries indicates the number of inference timesteps; the second is the number of averaged inference frames.


Figure 5. An example Tabascal spectrogram, the original mask, the output mask of the NLN algorithm, and the output mask of the SNLN algorithm.