
A comparison between regional sea-level projections and observations

Published online by Cambridge University Press:  04 February 2026

Carolina M. L. Camargo*
Affiliation:
Department of Physical Oceanography, Woods Hole Oceanographic Institution, Woods Hole, USA
Victor Malagón-Santos
Affiliation:
Department of Estuarine and Delta Systems, NIOZ Royal Netherlands Institute for Sea Research, Yerseke, The Netherlands
Jeemijn Scheen
Affiliation:
Department of Estuarine and Delta Systems, NIOZ Royal Netherlands Institute for Sea Research, Yerseke, The Netherlands
Brendan Oerlemans
Affiliation:
Department of Estuarine and Delta Systems, NIOZ Royal Netherlands Institute for Sea Research, Yerseke, The Netherlands; Department of Earth Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Aimee B.A. Slangen
Affiliation:
Department of Estuarine and Delta Systems, NIOZ Royal Netherlands Institute for Sea Research, Yerseke, The Netherlands; Department of Physical Geography, Utrecht University, Utrecht, The Netherlands
*
Corresponding author: Carolina M.L. Camargo; Email: carolina.camargo@whoi.edu

Abstract

Sea-level projections are highly anticipated outcomes of climate model simulations, relevant for coastal management worldwide. Ideally, any model simulation should be validated against observations, but this is impossible for the most recent sets of future climate model simulations, which start near the present day (2020). Here, we compare satellite observations of regional sea-level change against projections from the Intergovernmental Panel on Climate Change Fifth Assessment Report for total sea-level change and its individual components over their overlapping period, 2007–2022. We first test and compare three different methods to reduce the internal variability in the observations, which hampers the comparison with models, in particular for shorter time periods. While all three methods reduce the internal variability, we find that low-frequency component analysis (LFCA) removes most of it. The regional projections are in good agreement with the LFCA-filtered observations, with 96% of the ocean area falling within the 90% confidence interval. For total sea level and the sterodynamic component, the projections both under- and overestimate the observations, depending on the region. For mass-driven sea-level change, the regional projections tend to overestimate the observations. Our analysis gives confidence in sea-level projections for the instrumental era.

Information

Type
Review
Creative Commons
CC BY-NC
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

Figure 1. Trends from observed (a) regional sea-level change, (c) sterodynamic sea-level (SDSL) change and (e) mass redistribution, for the period 2007–2022 (see Methods section for observation details). The corresponding IPCC AR5 projections under the RCP4.5 scenario are shown in (b), (d) and (f), respectively.


Figure 2. (a–c) Altimetry-observed regional sea-level change trends (mm/yr) for the period 2007–2022 after removing internal variability with (a) low-frequency component analysis (LFCA), (b) multiple variable linear regression (MVLR) and (c) self-organizing maps (SOM). (d) Global mean sea-level change with respect to the year 2007, showing the altimetry-observed changes before and after removing variability, and the IPCC AR5 model projections for the RCP4.5 scenario. The shaded area represents the interval between the 5th and 95th percentiles of the ensemble members of the AR5 RCP4.5 scenario projections.


Table 1. Area-weighted global mean trends of total sea level from IPCC AR5 projections, satellite altimetry observations and altimetry (alt.) observations after filtering using MVLR, SOM and LFCA, over the period 2007–2022


Figure 3. RMSE between IPCC AR5 total sea-level projections and (a) unfiltered, (b) MVLR-filtered, (c) SOM-filtered and (d) LFCA-filtered satellite altimetry observations.


Figure 4. Difference between IPCC AR5 total, SDSL and mass-driven projections (RCP4.5) and (LFCA-filtered) observations, expressed as RMSE (a, d, g), trend anomalies (projections minus observations; b, e, h) and trend histograms (c, f, i).

Supplementary material: File

Camargo et al. supplementary material (6.2 MB)

Author comment: A comparison between regional sea-level projections and observations — R0/PR1

Comments

No accompanying comment.

Review: A comparison between regional sea-level projections and observations — R0/PR2

Conflict of interest statement

Reviewer declares none.

Comments

Recommendation: Major revision

Review of: “A comparison between regional sea-level projections and observations” by Carolina M. L. Camargo, Victor Malagón-Santos, Jeemijn Scheen, Brendan Oerlemans, and Aimée B. A. Slangen

General comments:

This paper makes a potentially valuable contribution for researchers and decision-makers interested in regional sea-level projections. A notable strength is the use of different statistical methods to separate internal variability from the forced signal in satellite observations and reanalysis products, thereby allowing a more direct comparison with the scenario-based projections in IPCC AR5.

The study focuses on the 15-year period 2007–2022 and applies three filtering techniques to reduce internal variability in the observational datasets: Low-Frequency Component Analysis (LFCA), Multiple Variable Linear Regression (MVLR) and Self-Organizing Maps (SOM). The manuscript could be strengthened by slightly shifting the emphasis toward a more direct comparison of the filtering techniques, which would make the AR5 assessment more robust. I also encourage the authors to add methodological detail in the Data and Methods sections. This would not only support reproducibility but also make the results more convincing. The most significant limitation of the study is the relatively short observational record.

The manuscript is generally well written. However, some figures are difficult to interpret due to plotting styles and missing legends.

Please see the comments below for my detailed review:

Section: Data, Observations

The authors use total sea level from satellite altimetry, while the sterodynamic sea level (SDSL) and barystatic components are derived from CMEMS and GRACE/GRACE-FO. This raises concerns, as the sum of the SDSL and barystatic contributions may not close the sea-level budget against the altimetry total. Consequently, the AR5 assessment of the total and component contributions is less robust, though SDSL indeed explains most of the spatial variability in Fig. 3.
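One simple way to quantify this concern is to compute the budget residual (altimetry total minus the sum of the SDSL and barystatic series) and check whether its trend differs from zero. The sketch below does this on synthetic monthly series; all names, trends and noise levels are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)
nt = 180                     # 15 years of monthly data (synthetic)
t = np.arange(nt)

# Hypothetical regional time series in mm; trends and amplitudes are invented
sdsl = (1.5 / 12) * t + 5 * np.sin(2 * np.pi * t / 60) + rng.standard_normal(nt)
barystatic = (2.0 / 12) * t + rng.standard_normal(nt)
total_alt = (3.3 / 12) * t + 5 * np.sin(2 * np.pi * t / 60) + rng.standard_normal(nt)

# Budget residual: a non-zero residual trend means the budget does not close,
# so any "total vs. SDSL + barystatic" assessment inherits that inconsistency
residual = total_alt - (sdsl + barystatic)
residual_trend = np.polyfit(t, residual, 1)[0] * 12   # mm/yr
print(round(residual_trend, 2))
```

Here the residual trend is non-zero by construction (the component trends sum to 3.5 mm/yr while the "total" rises at 3.3 mm/yr), mimicking the kind of non-closure that independent products can introduce.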

Section: Data, IPCC AR5 sea-level projections

• It is unclear whether pre-industrial control drifts were removed from the CMIP5 models in the AR5 projections. Otherwise, model drift could contaminate both global and regional trend estimates.

• Given the short study period, would it be possible to extend the analysis by including an additional decade from CMIP5 historical simulations?

• The authors note that discrepancies largely stem from internal variability, which is suppressed in the multi-model ensemble mean. Filtering observations is therefore a sensible step toward consistency. However, a methodological asymmetry remains: filtering is applied only to observations, not to individual CMIP5 runs before averaging. While internal variability should be cancelled in the ensemble mean, a sensitivity check (e.g., filtered vs. unfiltered ensemble mean) would increase confidence that the assessment is not biased by processing choices. At minimum, the rationale for applying filtering only to observations should be clarified.

Section: Methods, Low-frequency Component Analysis (LFCA)

Because the observational record (1993–2022) is short, EOF-based methods like LFCA are strongly influenced by internal variability. Over this time span, modes such as ENSO and PDO dominate variance, while the externally forced trend is relatively weak. This raises two concerns:

1. Leakage of the forced signal into variability modes.

2. Temporal instability of EOF patterns depending on which strong events occur within the analysis window.

As a result, there is ambiguity as to whether the leading low-frequency components truly represent the forced response or simply realized internal variability. The robustness of the results would be enhanced by:

I. Including sensitivity tests that shift the analysis period (e.g., early vs. late altimetry years).

II. Demonstrating the LFCA approach on long control or single-forcing model runs, where the forced signal is known.

III. At minimum, clarifying in the discussion that the limited record length constrains the robustness of the LFCA separation.
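To illustrate point II on a case where the forced signal is known by construction, a stripped-down version of the LFCA idea (truncated EOF decomposition, low-pass filtering of the whitened principal components, then an eigen-rotation that maximizes the low-frequency variance ratio) can be run on synthetic data. The field dimensions, amplitudes and the running-mean filter below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx = 360, 50   # 30 years of monthly data on 50 grid points (synthetic)
t = np.arange(nt)

# Synthetic field: a slow forced trend pattern, an ENSO-like oscillation, noise
trend_pat = np.linspace(-1.0, 1.0, nx)
osc_pat = np.sin(np.linspace(0.0, 3.0 * np.pi, nx))
X = (0.005 * t)[:, None] * trend_pat \
    + np.sin(2 * np.pi * t / 48)[:, None] * osc_pat \
    + 0.3 * rng.standard_normal((nt, nx))
X = X - X.mean(axis=0)

def lowpass(a, win=120):
    """Running-mean low-pass filter (10-year window) along the time axis."""
    kern = np.ones(win) / win
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), 0, a)

# 1) Truncated EOF decomposition; whiten the leading principal components
k = 5
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :k] * np.sqrt(nt)            # unit-variance PCs

# 2) Low-pass the whitened PCs; the eigenvectors of their covariance give the
#    linear combinations maximizing the ratio of low-frequency to total variance
pcs_lp = lowpass(pcs)
R = pcs_lp.T @ pcs_lp / nt
evals, evecs = np.linalg.eigh(R)
order = np.argsort(evals)[::-1]
lfcs = pcs @ evecs[:, order]            # low-frequency components

# The leading LFC should track the slow forced signal, not the oscillation
corr = np.corrcoef(lfcs[:, 0], t)[0, 1]
print(round(abs(corr), 2))
```

In this controlled setting the leading low-frequency component correlates strongly with the imposed trend, which is exactly the kind of check suggested in point II; on a short real record, by contrast, there is no such ground truth.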

Section: Methods, Multiple Variable Linear Regression (MVLR)

Equation (2) shows that MVLR reconstructs sea level as a function of climate indices plus a linear trend. This suggests the MVLR output reflects patterns of internal variability rather than the forced signal. This distinction should be made clearer, as it has implications for interpreting the RMSE results.
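As a minimal illustration of this distinction, a regression of the form of Equation (2) can be sketched as an ordinary least-squares fit of a sea-level series onto an intercept, a linear trend and climate indices; the index-related part of the fit is the reconstructed internal variability that gets subtracted. The indices, amplitudes and noise levels below are invented stand-ins, not the paper's regressors:

```python
import numpy as np

rng = np.random.default_rng(1)
nt = 312                     # 26 years of monthly data (synthetic)
t = np.arange(nt)

# Hypothetical climate indices, stand-ins for e.g. ENSO- and PDO-like series
enso = np.sin(2 * np.pi * t / 44) + 0.2 * rng.standard_normal(nt)
pdo = np.sin(2 * np.pi * t / 240 + 1.0)

# Synthetic sea-level anomaly: 3 mm/yr trend + index-driven variability + noise
true_trend = 3.0 / 12        # mm per month
sla = true_trend * t + 8.0 * enso + 4.0 * pdo + 2.0 * rng.standard_normal(nt)

# MVLR design matrix: intercept, linear trend and the climate indices
G = np.column_stack([np.ones(nt), t, enso, pdo])
coef, *_ = np.linalg.lstsq(G, sla, rcond=None)

# The index-related part of the fit is the reconstructed internal variability;
# subtracting it leaves the trend plus residual noise (the "filtered" series)
variability = G[:, 2:] @ coef[2:]
filtered = sla - variability

print(round(coef[1] * 12, 2))   # recovered trend, mm/yr
```

The point raised above is visible in the construction: the method only removes variability that projects onto the supplied indices, so `filtered` is "forced signal plus whatever variability the indices miss", not the forced signal itself.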

Section: Methods, Self-organizing maps (SOM)

The authors cite previous work for the SOM clustering. For clarity, it would be helpful to briefly summarize the procedure here, including how many clusters were created and how they were determined.


Section: Results, comparing regional AR5 projections to filtered observations

Figures 1 and 3 highlight regional discrepancies between AR5 and observations, with LFCA filtering emphasized in the main text. However, Appendix Fig. 5 is especially instructive, as it illustrates how internal variability explains much of the mismatch. The similarity in RMSE patterns between the raw comparison (panel a) and the MVLR case (panel c) indicates that the discrepancies largely reflect internal variability that is present in the observations but absent in the AR5 ensemble mean. This reinforces that ensemble averaging suppresses internal variability, leaving only the forced response. The MVLR method essentially reconstructs the observed internal variability, which explains why its comparison to the AR5 ensemble mean resembles the raw case. Bringing Appendix Fig. 5 into the main text and highlighting this point more explicitly would strengthen the manuscript. It would also clarify that the high RMSE in unfiltered comparisons is not evidence of model failure, but rather the expected result of comparing a forced-only signal to observations containing substantial internal variability.

Figures

Please add units to the y-axis of the histograms in Fig. 3.

It would be helpful to explain why a two-sided histogram was chosen rather than a

Review: A comparison between regional sea-level projections and observations — R0/PR3

Conflict of interest statement

Reviewer declares none.

Comments

This is a review for the manuscript titled: A comparison between regional sea-level projections and observations. I think that this paper addresses some interesting details regarding how one can remove natural, low-frequency variability arising from the dynamics of an unperturbed climate system, in order to isolate the changes due to perturbations such as GHG, solar and volcanic forcing anomalies. I believe I would have chosen the low-pass filtered data as giving the best results a priori, since, for the MVLR, the climate indices are surely missing some unperturbed low-frequency climate-system responses that happen not to be indexed (weakly connected regions, delays in regional responses that are non-stationary, etc.). I am also still skeptical about how well neural-network analyses can inform complex non-linear physical processes, though I am pleased to see that there is at least some utility for this method presented here in this paper.

Even though the LFCA does the best job of reducing the RMSE between observations and projections, there remains the difficulty of estimating ‘true’ trends from low-pass filtered variability, since some of it may well reflect real monotonic changes in sea level due to climate change, while some of it will be unresolved intrinsic climate variability. This does not seem to be stated explicitly in the paper, and I would ask that it be briefly addressed in some way (and hopefully better than I did here!).

The results as they are presented are interesting and I believe haven’t been done in this way in other work. I would like to make a suggestion that can be dismissed, because I don’t want to impose more analytical work on the authors. But I do think it could be worth examining briefly. Maybe you could test out how well regions match between trends from obs and projections by doing a trend analysis on the ‘raw’ (unfiltered) data, but reducing the degrees of freedom by the integral time scale method via the autocorrelation (see, e.g., Emery & Thomson, Data Analysis Methods in Physical Oceanography, Sec. 3.15.1). This has the advantage of not having to remove variability that may be partly intrinsic and partly forced response, but you can still test the trend’s difference from zero at a confidence level (say, 90% CL). It’s just a thought, and this is a well-accepted statistical test for trends in data with high levels of autocorrelation.
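For concreteness, the suggested test can be sketched as follows: fit an OLS trend to the unfiltered series, estimate the decorrelation scale from the autocorrelation of the residuals, and inflate the trend's standard error by the reduced effective sample size. The sketch below uses the common AR(1)-style shortcut T ≈ (1 + r1)/(1 − r1) as a stand-in for the full integral time scale, on a synthetic series with an invented trend and noise model:

```python
import numpy as np

rng = np.random.default_rng(2)
nt = 192                     # 16 years of monthly data (synthetic)
t = np.arange(nt)

# Synthetic series: weak trend plus strongly autocorrelated AR(1) noise
noise = np.zeros(nt)
for i in range(1, nt):
    noise[i] = 0.8 * noise[i - 1] + rng.standard_normal()
y = 0.02 * t + noise

# OLS trend and its naive standard error (assumes independent residuals)
G = np.column_stack([np.ones(nt), t])
coef, *_ = np.linalg.lstsq(G, y, rcond=None)
resid = y - G @ coef
s2 = resid @ resid / (nt - 2)
se_naive = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))

# Effective degrees of freedom via a decorrelation time scale estimated
# from the lag-1 autocorrelation of the residuals (AR(1) approximation)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
T = (1 + r1) / (1 - r1)       # decorrelation scale, in time steps
n_eff = nt / T
se_adj = se_naive * np.sqrt((nt - 2) / max(n_eff - 2, 1))

print(se_adj > se_naive)      # expect True for positively autocorrelated residuals
```

One can then test the trend against zero at, say, the 90% confidence level using `se_adj` instead of `se_naive`, without ever removing the variability from the series.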

Anyway, I think that the paper overall looks good, and is comprehensible and delivers valuable results. I recommend publication after a little more description of limits of the methods as outlined above (2nd paragraph) and fixing a few errors below.

Page 1, line 42: ‘published in 2007’ - the AR5 main document for sea level (the Church et al. one) was 2013. The dataset is 2007 onward.

Page 3, line 37: ‘average of the four products’ - I see a global ocean physics ensemble of three products. Even if I’m missing something, please reference the names and / or IDs of these products either in the main text, or in the appendix (and if the latter, reference that they are listed there).

Recommendation: A comparison between regional sea-level projections and observations — R0/PR4

Comments

This manuscript was very well written, and I believe it makes a valuable contribution to the literature. In addition to the reviewer’s comments, I would encourage more methodological detail in regard to the SOM approach and the AR5 projections and a clearer discussion on limitations of the approach/methods/data used.

Decision: A comparison between regional sea-level projections and observations — R0/PR5

Comments

No accompanying comment.

Author comment: A comparison between regional sea-level projections and observations — R1/PR6

Comments

Dear Editor,

We had already prepared a rebuttal PDF and a track-changes version of the manuscript for the revision process. I did not see where to upload those files, and the rebuttal field was a simple text box, which makes it hard to follow. I have therefore uploaded the rebuttal PDF and the track-changes manuscript as supplementary material via the ‘File Upload’ option.

Review: A comparison between regional sea-level projections and observations — R1/PR7

Conflict of interest statement

Reviewer declares none.

Comments

I have read through all the responses and changes to the manuscript. I am satisfied with all of these.

Review: A comparison between regional sea-level projections and observations — R1/PR8

Conflict of interest statement

Reviewer declares none.

Comments

The authors address all my questions and concerns. Now I am confident to suggest that this manuscript be accepted as is.

Recommendation: A comparison between regional sea-level projections and observations — R1/PR9

Comments

The authors have satisfactorily addressed all comments and made changes where necessary. I believe the manuscript is now suitable for publication.

Decision: A comparison between regional sea-level projections and observations — R1/PR10

Comments

No accompanying comment.