
Exploring unstable approaches in aviation: utilising functional resonance analysis method

Published online by Cambridge University Press:  22 December 2025

G. K. Kaya*
Affiliation:
Safety and Accident Investigation Centre, Cranfield University , Cranfield, UK
R. Stallard
Affiliation:
UK Civil Aviation Authority, London, UK
M. St-Laurent
Affiliation:
Safety and Accident Investigation Centre, Cranfield University , Cranfield, UK
W.-C. Li
Affiliation:
Safety and Accident Investigation Centre, Cranfield University , Cranfield, UK
M. Sujan
Affiliation:
Centre for Assuring Autonomy, Department of Computer Science, University of York, Heslington, UK
*
Corresponding author: G. K. Kaya; Email: kubra.kaya@cranfield.ac.uk

Abstract

Unstable approaches are one of the main safety concerns that contribute to approach and landing accidents. The International Air Transport Association reports that, between 2012 and 2016, 61% of accidents occurred during the approach and landing phase, of which 16% involved unstable approaches. This study addresses this issue by applying the Functional Resonance Analysis Method to examine the dynamics of stable approaches. A total of 195 aviation safety reports, which referred to near-miss data from a single airline, were used in the analysis to identify both actual and aggregated variability. The findings revealed that variability mainly occurred in the following functions: control speed, configure aircraft for landing, communicate with air traffic control and manage flight paths. Effective communication, coordination and collaboration, as well as monitoring, briefings and checklists, were key factors in managing the variability of a stable approach. The study reveals how adopting a perspective of ‘how things go right’ provides insightful findings regarding approach stability, complementing traditional approaches focused on ‘what went wrong’. This study also highlights the value of utilising the Functional Resonance Analysis Method to analyse near-miss data and uncover systemic patterns in everyday flight operations.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Royal Aeronautical Society

Nomenclature

ATC

air traffic control

Cn

Case n

ECAM

electronic centralised aircraft monitor

EFB

electronic flight bag

EGPWS

enhanced ground proximity warning system

fpm

feet per minute

FRAM

Functional Resonance Analysis Method

ft

feet

IATA

International Air Transport Association

min

minute

nm

nautical mile

PF

pilot flying

PM

pilot monitoring

Rn

recommendation n

RA

radar altimeter

SOP

standard operating procedure

Vapp

approach speed

VLS

lowest selectable speed

WAD

work-as-done

WAI

work-as-imagined

1.0 Introduction

The aviation industry is held up as the model of a safety-critical sector that prioritises safety. However, despite its high safety performance, the aviation industry continues to experience catastrophic accidents, most of which occur during the approach and landing phases of flight [12, 27, 64]. An unstable approach is one of the primary safety concerns, as it increases the risk of accidents during the approach and landing phases [Reference Dai, Liu and Hansen6, 27, Reference Moriarty and Jarvis43]. Although not all unstable approaches lead to adverse outcomes, they can be considered precursors of accidents [Reference Wischmeyer69]. An unstable approach occurs when the aircraft does not follow a stable and predictable flight path along prescribed parameters. In a stable approach, the aircraft is correctly configured for landing, descends at a consistent rate, is on the correct glide path, and maintains the proper airspeed and power setting; all briefings and checklists are completed [Reference Blajev and Curtis3, Reference Moriarty and Jarvis43].

Significant efforts have been made to manage the unstable approach problem in the aviation industry. Various parts of the industry, including regulators and international organisations, have launched numerous campaigns and action plans. For instance, the Flight Safety Foundation convened a task force to explore the unstable approach problem. The task force's outputs precipitated two decades of corrective actions across the parts of the aviation industry that have an interest in, and an impact on, unstable approaches [15]. However, despite the success of these activities and reduced unstable approach rates, IATA reports that 61% of the accidents between 2012 and 2016 occurred during the approach and landing phase, of which 16% involved unstable approaches [26]. More recent assessments have shown that the issue remains a prevalent risk in the aviation industry [Reference Jarry, Delahaye and Feron28]. Albeit at exceptionally low rates, unstable approaches remain a safety concern [27]. The question then arises as to why, despite significant overall reductions in accidents, unstable approaches continue to occur.

In the literature, only a few studies have explored the unstable approach problem in depth, although the issue is widely recognised. Moriarty and Jarvis [Reference Moriarty and Jarvis43] interviewed pilots to understand unstable approaches and how pilots adjust their performance to ensure flight stability. Duchevet et al. [Reference Duchevet, Imbert, Hogue, Ferreira, Moens, Colomer and Vázquez11] developed a digital assistant to support pilot decision-making. Carroll [Reference Carroll4] examined unstable approaches using flight data monitoring, with a focus on energy management. Other studies used flight monitoring data to predict unstable approaches [Reference Martínez, Fernández, Hernández, Cristóbal, Schwaiger, Nuñez and Ruiz40, Reference Wang, Sherry and Shortle67]. Lai et al. [Reference Lai, Chen, Zheng and Khoo35] developed an agent-based model to analyse the impact of mental model disconnects between pilots and air traffic controllers in unstable approach scenarios.

So far, efforts to reduce unstable approaches in aviation have focused on analysing ‘what went wrong’, which aligns with the Safety I approach. Safety I focuses on component-level failures by considering the reliability of each component and aims to create systems that work as imagined (WAI) [Reference Hollnagel21]. This approach has limitations in understanding complex systems and the interactions of system elements. Hollnagel [Reference Hollnagel21] introduced the Safety II approach as a complement to Safety I, shifting the focus from analysing failures to understanding how everyday operations succeed under varying conditions [Reference Papadimitriou, Pooyan Afghari, Tselentis and van Gelder46, Reference Sujan, Huang and Braithwaite58]. Safety II focuses on system-level success by ensuring system resilience and understanding work-as-done (WAD). Under Safety II, WAD may differ from WAI in order to sustain success in a changing work context, whereas Safety I assumes they should be the same, and any deviation from the WAI is considered a failure [Reference Hollnagel21].

In cases involving ‘how things go right’ and revealing system component interactions, the Functional Resonance Analysis Method (FRAM) has proved to be a fruitful method for analysing systems in various industries, including aviation [Reference De Carvalho7, Reference Reiser and Villani51, Reference Studic, Majumdar, Schuster and Ochieng57], healthcare [Reference Kaya, Ovali and Ozturk32, Reference Sujan, Lounsbury, Pickup, Kaya, Earl and Mcculloch59], industrial operations [Reference Hollnagel and Fujita23] and oil and gas [Reference Yousefi, Rodriguez Hernandez and Lopez Peña71], despite the criticisms of Safety II [Reference Leveson36] or Safety I, II and III concepts [Reference Aven2]. FRAM has been used for various purposes [Reference Patriarca, Di Gravio, Woltjer, Costantino, Praetorius, Ferreira and Hollnagel47, Reference Salehi, Smith, Veitch and Hanson53], including revealing the interrelationship of system elements [Reference Hollnagel, Pruchnicki, Woltjer and Etcher24], identifying gaps between WAD and WAI [Reference McNab, Freestone, Black, Carson-Stevens and Bowie42], supporting risk assessment [Reference Kaya and Hocaoglu30], assessing resilience [Reference Ransolin, Saurin and Formoso50], analysing incidents [Reference Nouvel, Travadel and Hollnagel45], addressing industrial problems [Reference Li, He, Sun and Cao38]), analysing near misses and integrating Safety I and Safety II approaches [Reference De Leo, Elia, Gnoni and Tornese8].

FRAM is built on four principles: (1) equivalence of failures and success, (2) approximate adjustments, (3) emergence and (4) resonance. The first principle emphasises that failures and successes have the same origin and are thus equivalent. In other words, things could go wrong and right for the same reasons. The second principle explains that human performance in everyday work is variable due to various factors, including physiological, psychological and organisational factors. FRAM views performance variability as a strength rather than a liability, arguing that it is a key factor in the success of socio-technical systems. People adjust their performance to match working conditions [Reference Hollnagel20]. The third principle recognises that safety is an emergent property in socio-technical systems [Reference Hollnagel, Woods and Levenson22, Reference Yang, Tian and Zhao70]. In complex systems, it might not be possible to explain how things happen by focusing on system components. In those cases, the outcome is emergent rather than resultant [Reference Hollnagel20]. The final principle emphasises that the functions of a system interact with each other and work together to achieve the overall system’s goal [Reference Furniss, Curzon and Blandford16]. The variability of two or more functions can interact and resonate, resulting in amplifying or dampening effects within the system [Reference Hollnagel19].

FRAM identifies system functions, examines variability in each function, explores how they may resonate and proposes ways to manage performance variability [Reference Furniss, Curzon and Blandford16, Reference Hollnagel20]. This study applies FRAM to investigate the unstable approach problem in commercial aviation, utilising aviation safety report data from a single airline. The study has two contributions: (1) revisiting the unstable approach problem in aviation by using near-miss data and (2) exploring functional variabilities in a stable approach.

2.0 Methods

2.1 Data collection

This study applies FRAM to explore the unstable approach problem, using aviation safety reports, standard operating procedure (SOP) documents and interview data.

This study collected aviation safety reports from a single airline operator for unstable approach occurrences between 2012 and 2023. In total, we received 6461 aviation safety reports from various event locations. To minimise the variability of external factors, we selected a single event location and runway by choosing the most frequently used one. We first selected reports involving London Gatwick Airport (n = 447), then filtered for those related to runway 26L (n = 196), and finally included only arrival flights (n = 195). Among these, 106 involved the A319 aircraft, and 89 involved the A320 aircraft.
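The successive filters described above can be sketched as a simple chain. The field names and miniature dataset below are hypothetical, since the airline's actual 40-parameter schema is not published; only the filtering logic mirrors the study.

```python
# Hypothetical miniature report set illustrating the filtering chain:
# all reports -> London Gatwick -> runway 26L -> arrivals.
reports = [
    {"location": "London Gatwick", "runway": "26L", "type": "arrival"},
    {"location": "London Gatwick", "runway": "08R", "type": "arrival"},
    {"location": "Manchester",     "runway": "23R", "type": "arrival"},
    {"location": "London Gatwick", "runway": "26L", "type": "departure"},
]

gatwick   = [r for r in reports if r["location"] == "London Gatwick"]
runway26l = [r for r in gatwick if r["runway"] == "26L"]
arrivals  = [r for r in runway26l if r["type"] == "arrival"]

print(len(gatwick), len(runway26l), len(arrivals))  # 3 2 1
```

On the real dataset, the same three filters reduce 6461 reports to 447, 196 and finally 195 events.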

The aviation safety report dataset contained 40 parameters, with the event description being the key parameter for understanding the aircraft’s landing configuration (at the 1000ft radio altitude gate), speed, power and sink rate (vertical profile at 1000ft) for approach stability. The rest of the parameters included event date and reference, event location, route, departure/arrival flights, aircraft registration, flight phase, event summary, descriptor type, descriptor name, speed, altitude, visibility, light conditions (e.g., daylight and night), weather conditions (e.g., temperature, turbulent, wind, cloud, ice, rain and fog) and runway state (e.g., wet and dry). The dataset represented the WAD practice and was used in the FRAM application to identify functional variabilities and determine aggregated variabilities.

In addition to aviation safety reports, we reviewed Flight Safety Foundation reports on the unstable approach [Reference Blajev and Curtis3, 14, 15] and the airline’s SOP to understand the WAI practice and to identify functions.

Furthermore, we conducted semi-structured interviews with seven subject matter experts via face-to-face and online meetings to undertake the FRAM analysis and review findings. Participants were purposively selected based on their experience in flying and safety management (see Table 1). Interviews took between 45 minutes and 120 minutes. The interviews provided insights into the WAD practice, informing the FRAM analysis steps.

The research elements of this study were conducted in accordance with the ethical procedures of Cranfield University (approval reference 18126/2023). In this study, no personal data was explicitly requested or collected. All data gathered was securely stored.

2.2 FRAM application

In this study, FRAM was applied in four steps: (1) identify and describe functions, (2) identify variability, (3) aggregate variability and (4) analyse consequences.

Step 1: Identify and describe functions

In FRAM, functions are identified as ‘a set of activities’ and characterised by six aspects: inputs, outputs, preconditions, controls, resources and time [Reference Hollnagel20]. In this study, we identified functions for a stable approach. Initially, we reviewed the airline’s SOP and Flight Safety Foundation reports to understand the WAI practice. Then, we interviewed subject matter experts and watched three recorded videos, comprising two complete cockpit recordings of flights into airports and one recreation of a near miss resulting from an unstable approach, to understand the WAD practice. Building on these, we identified background and foreground functions for a stable approach. Foreground functions were described considering five aspects: inputs, outputs, preconditions, resources and control. The ‘time’ aspect was intentionally left blank, not because it was irrelevant, but because it was too dynamic to meaningfully assign fixed temporal constraints to individual functions. Background functions were identified to represent activities that produce outputs used by the foreground functions.
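A foreground function and its six FRAM aspects might be recorded as follows. This is a minimal sketch; the example values loosely paraphrase the ‘To control speed’ function for illustration and are not the study's exact descriptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FramFunction:
    """One FRAM function, characterised by Hollnagel's six aspects."""
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    preconditions: List[str] = field(default_factory=list)
    resources: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    time: Optional[str] = None  # intentionally left blank in this study

# Illustrative (not verbatim) description of one foreground function.
control_speed = FramFunction(
    name="To control speed",
    inputs=["target approach speed (Vapp)"],
    outputs=["aircraft at target speed"],
    preconditions=["aircraft configured for landing"],
    resources=["autothrust", "pilot flying"],
    controls=["SOP speed gates"],
)
print(control_speed.name, control_speed.time)
```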

Table 1. Participant characteristics

After identifying all functions, they were refined through an iterative process based on inputs from subject matter experts. We used the FRAM model visualiser to describe the functions and to build a FRAM model, representing the everyday successful performance of a stable approach (Fig. 1).

Figure 1. FRAM model for a stable approach.

Step 2: Identify variability

We used aviation safety reports to identify actual variability within the function, primarily using the event description parameter, which pilots enter following the events. We created an Excel sheet (see Table 2) to list all functions.

We considered functional variability in terms of time (i.e., too early, on time, too late and not at all) and precision (i.e., precise, acceptable and imprecise). The output of a function can be too early, on time, too late or not at all, where ‘not at all’ is an extreme version of too late. A precise output satisfies the needs of the downstream function, thereby not increasing its variability. An acceptable output requires adjustments and is likely to increase a downstream function’s variability. An imprecise output is inaccurate, incomplete or misleading [Reference Hollnagel20]. For instance, one event description states, ‘I decided to use flap 3, given the potential for wind shear…’. The variability type for the ‘configure aircraft for landing’ function was recorded as ‘precise’ to maintain a stable approach. For the same function, we categorised the variability as ‘too late’ for an event description stating ‘…late configuration full selection, believing that they did by the 1000 ft gate’. In some event descriptions, we identified multiple variability types, whereas in others, no variabilities were identified (see Table 2).
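The coding scheme above can be made explicit with two small enumerations. Only the labels are fixed here; how a given event description maps onto them (including the acceptable/too-late combination shown) remains the analyst's judgement.

```python
from enum import Enum

class TimeVariability(Enum):
    TOO_EARLY = "too early"
    ON_TIME = "on time"
    TOO_LATE = "too late"
    NOT_AT_ALL = "not at all"   # an extreme version of too late

class PrecisionVariability(Enum):
    PRECISE = "precise"         # satisfies the downstream function's needs
    ACCEPTABLE = "acceptable"   # usable, but requires downstream adjustment
    IMPRECISE = "imprecise"     # inaccurate, incomplete or misleading

# Illustrative coding of the late flap-selection event quoted above;
# the precision label here is an assumed example, not the study's coding.
event = {
    "function": "To configure aircraft for landing",
    "time": TimeVariability.TOO_LATE,
    "precision": PrecisionVariability.ACCEPTABLE,
}
print(event["time"].value)  # too late
```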

Step 3: Aggregate variability

This step analyses how variability in one function may propagate, combine or resonate with variability in other functions [Reference Patriarca, Di Gravio, Costantino and Tronci48]. Variability has three sources: internal (resulting from the function itself), external (such as weather conditions) and variability from an upstream function [Reference Hollnagel20]. This study used aviation safety reports to identify couplings between upstream and downstream functions. For instance, in one case, the decision to use flap 3 dampened the variability of downstream functions by aiding speed control. In contrast, delayed configuration was found either to amplify variability, creating time pressure on subsequent tasks, or to have minimal downstream impact, depending on the context. There might also be cases where one upstream function has an amplifying effect and another a dampening effect on the same function. All these variabilities reveal how the variability of a function influences others.
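The upstream-downstream couplings can be pictured as a small directed graph with signed effects. The couplings and effect labels below are invented for illustration; in the study they were derived qualitatively from the safety-report narratives, not from a numeric model.

```python
# Hypothetical coupling graph: upstream function -> (downstream, effect).
couplings = {
    "To communicate with ATC": [
        ("To configure aircraft for landing", "amplify"),
        ("To control speed", "amplify"),
    ],
    "To configure aircraft for landing": [
        ("To manage flight path", "dampen"),
    ],
}

def downstream_effects(upstream):
    """Return the first-order effects of variability in one function."""
    return couplings.get(upstream, [])

for fn, effect in downstream_effects("To communicate with ATC"):
    print(f"{fn}: {effect}")
```

The same variability can thus be amplified along one coupling and dampened along another, which is the resonance behaviour FRAM is designed to surface.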

Step 4: Analyse consequences

Variability can lead to both desirable and undesirable outcomes. This step proposes ways to manage variability by sustaining desirable variability and preventing undesirable variability [Reference Hollnagel20]. In this step, we interviewed four subject-matter experts to review aggregated variability findings and propose ways to manage variability. In addition to expert inputs, the authors generated suggestions considering variability aggregation for each event.

Table 2. The template used to identify functional variability

3.0 Results

In this study, we used FRAM to explore the unstable approach. The stable approach was modelled with 10 foreground functions (shown as hexagons in Fig. 1) and 11 background functions (shown as grey-background rectangles in Fig. 1). Table 3 provides an example of the function description.

Table 3. Description of control speed function

The FRAM model was developed after all functions had been identified and described. Figure 1 shows all functions of a stable approach, revealing their interactions. The FRAM model visualiser is used to examine the interactions between functions and to explore how variability might aggregate across the system.

In the second step, performance variability was identified by analysing the aviation safety reports. The findings revealed that variability mainly occurred in the following functions: control speed, configure aircraft for landing, communicate with air traffic controllers and manage flight path. Considering the source of variability, weather conditions, such as wind, turbulence and gusts, were the primary sources of external variability, especially regarding the control power function. For instance, an air traffic controller reporting to pilots about ‘wind, gusts and previous aircraft going around’ is labelled as ‘precise’, and this supports crew decision-making on whether to continue the landing or effect a go-around. Similarly, pilots changing the role of pilot flying when necessary was another example of variability, which is labelled as ‘acceptable’, contributing to safe operations and sustaining a stable approach. Figure 2 presents the actual variability in all background (B1–B11) and foreground (F1–F10) functions. Variability is categorised by both time (too early, on time, too late and not at all) and precision (precise, acceptable and imprecise), with colour coding as shown in the legend. Notably, certain functions (e.g., F6: To configure aircraft for landing and F5: To control speed) exhibit high levels of temporal and precision variability, highlighting critical operational points. In contrast, others (e.g., B7: To establish procedures) show minimal variability. This could be due to various factors, including B7 being an organisation-related function, where variability tends to be lower than in human-related functions, or simply a lack of data capturing the variability of the function.

Figure 2. The variability of the functions in terms of time and precision. The y-axis lists all background and foreground functions, and the x-axis illustrates the frequency of actual variabilities in these functions, with colour coding shown in the legend.

In the third step, the authors explored aggregated variability. For example, in one of the cases analysed, an air traffic controller issued an unusually expeditious approach to approximately 11nm final. The variability of the outputs from the functions ‘To communicate with air traffic controller’ and ‘To control air traffic’ amplified the variability of the functions ‘To configure aircraft for landing’, ‘To command input by pilot’ and ‘To control speed’; the crew used the speed brake to keep the variability of the ‘To manage flight path’ function at an acceptable level. While the variability of the ‘To manage flight path’ function was dampened, the amplified effects on the ‘To control speed’ function triggered the ‘To decide landing/going around’ function, in which case the pilot decided to go around. Figure 3 illustrates the aggregated variability effects for this instance. The highlighted connections show the links between the ‘To communicate with ATC’ function and other functions, and the waveforms in certain hexagons (e.g., ‘To communicate with ATC’) indicate points where variability was observed in this instance. As shown in Fig. 3, the variability of the selected function does not always lead to amplified variability in connected functions, showing that variability does not always propagate linearly or predictably.

Figure 3. Aggregated variability effects from the ‘communicate with ATC’ function.

In this study, several instances occurred where performance variability in the stable approach led to go-around decisions (C1 to C6 in Fig. 4). While these actions may appear inconsistent with standard procedures or WAI practice, they reflect the adaptive strategies pilots employ in real-world conditions, aligning more closely with WAD practice. We further analysed these events by considering the decision gates identified in the airline’s SOP, representing the WAI practice. In the SOP, two gate points were identified at 1000ft and 500ft radio altitude, associated with the following prescribed speeds: speed at 1000ft should be Vapp (approx. 140kts) + 30 and at 500ft between Vapp − 5 and Vapp + 10 for Airbus A320 and A319 aircraft types. An approach is considered stable when the aircraft meets the gate criteria; if the gate criteria are not met, the approach should be deemed unstable, and the pilot should initiate a go-around. Figure 4 illustrates the categorisation of events in the analysed aviation safety reports. Events were categorised depending on whether they met or failed to meet the 1000ft and 500ft gates, and whether this resulted in a go-around or a landing. For instance, C3 represents 24 events in the dataset, where a go-around was initiated between 1000ft and 500ft, and pilots did not meet the 1000ft gate criteria. C9 represents 37 events where the flight did not meet the 1000ft gate criteria (N) but continued landing and met the 500ft gate criteria (Y).

Figure 4. Categorisation of events in the analysed aviation safety reports.
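The speed element of the two SOP gates can be sketched as simple predicates. This sketch reads the 1000 ft criterion as a maximum of Vapp + 30 kt (an assumption about the SOP's intent) and models only speed; the full stability criteria also cover configuration, sink rate, glide path and checklist completion.

```python
# Speed-only sketch of the A319/A320 SOP gate criteria described above.
def meets_1000ft_gate(speed_kt, vapp_kt):
    """1000 ft gate: speed at most Vapp + 30 kt (assumed maximum)."""
    return speed_kt <= vapp_kt + 30

def meets_500ft_gate(speed_kt, vapp_kt):
    """500 ft gate: speed between Vapp - 5 and Vapp + 10 kt."""
    return vapp_kt - 5 <= speed_kt <= vapp_kt + 10

vapp = 140  # approximate Vapp quoted in the text
print(meets_1000ft_gate(165, vapp))  # True: within Vapp + 30
print(meets_500ft_gate(155, vapp))   # False: above Vapp + 10
```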

From the perspective of crew compliance with the SOP, the cases can be interpreted as follows: (1) C1, C2 and C4 (43 events in total) were compliant and cautious – pre-emptive initiation of a go-around; (2) C3, C5 and C7 (100 events in total) were compliant – crew initiation of a go-around in accordance with the SOP, or a continued approach to land within the SOP; and (3) C6, C8, C9 and C10 (52 events in total) were non-compliant – crew continued the approach to land or delayed the initiation of a go-around in contravention of the SOP. From the WAD aspect, all these cases were reported as having landed or gone around successfully, with only two of the flights stating that the landing was slightly firmer than usual.
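As a quick consistency check, the three compliance categories account for all of the analysed reports:

```python
# Event counts per compliance category, as reported above.
compliant_cautious = 43   # C1, C2, C4: pre-emptive go-around
compliant = 100           # C3, C5, C7: go-around or landing within SOP
non_compliant = 52        # C6, C8, C9, C10: continued/delayed against SOP

total = compliant_cautious + compliant + non_compliant
print(total)  # 195, matching the number of analysed reports
```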

Table 4 summarises the FRAM analysis findings by listing all functions, providing variability examples (linking them with the cases demonstrated in Fig. 4), explaining the variability context and its aggregated impacts, and proposing measures for managing variability. In Table 4, the authors aimed to provide examples of variability with both dampening and amplifying effects; however, Table 4 predominantly lists examples of undesired variability. This was due to the nature of the aviation safety reports, which represented near misses. Near misses are events that could have led to accidents. Near-miss data provided more information on ‘how things went wrong’ than ‘how things went right’ or ‘how they recovered from the situation’.

Table 4. FRAM findings summary

In most cases where variability had an amplifying effect, it led to delays in the operation by initiating a go-around and repeating all activities for the subsequent approach and landing. In cases where variability had a dampening effect, this was often linked to effective communication, whether between pilots, between pilots and ATC, or between pilots and cabin crew. These interactions increased the situational awareness and preparedness of pilots to make landing/go-around decisions. Results showed various cases (e.g., C1, C2 and C4 in Fig. 4) where a trade-off between efficiency and safety was made by prioritising safety.

With inputs from subject-matter experts, Table 4 presents measures to manage performance variabilities. The recommendations encompass various aspects, including checklist use, situational awareness, pilot training, communication and coordination, fatigue risk management, crew rostering, pilot callouts, monitoring, equipment maintenance, energy and speed management and the use of automation.

4.0 Discussion

The stability of an approach is identified by meeting a set of criteria; however, various factors impact approach stability, including pilot experience, automation, energy management and environmental factors [Reference Carroll4]. Not all unstable approaches lead to unsuccessful landings [Reference Wischmeyer69].

In this study, the authors analysed 195 aviation safety reports from a single airline to explore unstable approaches. Of these, the authors identified 43 events as being compliant and cautious, 100 as compliant, and 52 non-compliant with the SOP (see Fig. 4). Despite the variability in performance, all events resulted in successful landings, with only two being considered as hard landings. The analysis revealed specific cases in which pilots mitigated earlier imprecisions or timing deviations, such as late checklists or incorrect configurations, through timely communication, role reassignment or energy management. These actions often resulted in dampening variability in other functions, allowing the approach to stabilise. This aligns well with a previous study in which Wischmeyer [Reference Wischmeyer69] suggested that pilot skills and aircraft performance enable recovery from an unstable approach.

The following subsections discuss recommendations for managing variability (see Table 4) in a stable approach, the use of FRAM to analyse near-miss data, and the limitations of this study.

4.1 Managing variabilities in a stable approach

Performance variability can lead to both desirable and undesirable outcomes. FRAM explores variability, enabling the proposal of measures to sustain the desirable ones and mitigate the undesirable ones. This study provided numerous suggestions (Table 4) related to checklist use, situational awareness, pilot training, communication and coordination, fatigue risk management, crew rostering, pilot callouts, monitoring, equipment maintenance, energy and speed management and automation use to manage variabilities. Among these suggestions, crew resource management training, including effective communication, coordination, collaboration, monitoring, briefings and checklists, was the key factor in managing variability (see Table 4).

Effective communication, coordination and collaboration were found to be key to a stable approach, which requires shared situational awareness. Data analysis revealed that pilots adjust their performance when they have shared situational awareness, which occurs when they perceive the necessary information from each other and link all the information to understand the status of the approach, despite the time pressure involved in the flying task. Routine callouts and corresponding acknowledgements were also found to play an essential role in perceiving the required information and providing feedback. All of these contributed to making relevant trade-offs and decisions to ensure the stability of the approach and manage variability. Indeed, the importance of shared situational awareness between pilots and between pilots and ATCs is widely acknowledged in the literature [Reference Lai, Chen, Khoo and Zheng34, Reference Stanton, Stewart, Harris, Houghton, Baber, McMaster and Green55]. Studies have found that impaired situational awareness negatively impacts decision-making processes and is linked to poor system performance [Reference Dhief, Alam, Lilith and Mean10, Reference Lai, Chen, Zheng and Khoo35, Reference Li, Zhang, Court, Kearney and Braithwaite37]. In this study, the authors proposed several suggestions to maintain situational awareness during flight by ensuring effective communication, enhancing pilot competencies, creating training opportunities and improving organisational culture.

Effective monitoring was found to be another factor that contributes to managing variability in a stable approach. Data analysis revealed that pilots could manage variability when effective monitoring was in place. Monitoring tasks involve monitoring pilot actions and flight instruments, as well as communicating with ATC, cabin crew and the pilot. All require cognitive efforts and the distribution of situational awareness. Monitoring tasks can be improved when effective communication and shared situational awareness exist and effective cross-checks are completed [26]. Furthermore, monitoring tasks can be enhanced by design changes and the implementation of real-time monitoring tools [Reference Dai, Liu and Hansen6]. Based on the FRAM findings, the authors suggest improving workload management, making design changes, reducing distractions and enhancing pilot competencies to maintain adequate monitoring.

Briefings and checklists were also found to be another key factor in managing variability. The findings from the FRAM analysis showed that variabilities in the ‘conduct all briefings and checklists’ function (see Fig. 2) led to both dampening and amplifying impacts on downstream functions. Non-rushed briefings and checklists can increase situational awareness and mitigate risks. For instance, the landing checklist ensures that all tasks are completed safely. However, checklists and briefings are most valuable when a strong safety culture, team building and effective training are in place. The influence of safety culture on safety outcomes has been well-recognised in the literature [Reference Nævestad, Storesund Hesjevoll and Elvik44, Reference Terzioglu61]. From a Safety-I perspective, improper or missed briefings and checklists are identified as contributory factors for accidents [Reference Chang, Yang and Hsiao5, 26].

Previous studies have found that speed changes, aircraft configuration and deviations in flight path and sink rate were key factors contributing to an unstable approach [3]. This study, however, found that variability in these factors contributed to both stable and unstable approaches. Our study also revealed the influence of external factors, such as turbulence, tailwinds, gusts and crosswinds, which contribute to undesirable outcomes. In addition, ATC requirements, such as shortcuts or minimum runway time, or other aircraft or runway conditions, occasionally increased the variability of the functions. In contrast, clear communication between pilots and ATC contributed to a stable approach. In the traditional safety approach, any deviation from procedures is considered a failure, and individuals are punished for non-compliance [39]. Our findings demonstrate that not all variability leads to failures or undesirable outcomes. Understanding the trade-offs made by front-line staff has significant value in everyday operations [18].

In this study, there were cases, such as C3 and C5 in Fig. 4, where pilots displayed safety-conscious behaviour by initiating and conducting a go-around when they determined the approach to be unrecoverable. However, there were also cases, such as C8, C9 and C10 in Fig. 4, where the crew continued the approach despite not meeting the SOP stable approach criteria and still completed the landing within the acceptable flight data monitoring criteria. This highlights an opportunity for further research into how the crew regains or maintains the approach within stability parameters that ensure a successful landing (i.e., well within the acceptable flight data monitoring parameters). It could also allow the stable approach criteria to be redefined based on these crew skills and abilities. Here, we can question whether policies, as work-as-imagined (WAI), are well reflected in the actual working environment, or work-as-done (WAD). Indeed, Blajev and Curtis [3] found that pilots do not think policies reflect the actual working environment well and that training does not adequately cover the challenges they face. The findings from the FRAM analysis can be used to identify the gap between WAI and WAD, and new training scenarios can be generated.

4.2 Discussion on the use of FRAM in analysing near-miss events

In this study, the authors utilised near-miss data to conduct a FRAM analysis exploring unstable approaches. The FRAM model successfully linked the functions involved in a stable approach, revealed the complexity of the process and enabled the identification and management of variability. However, the use of near-miss data and the nature of the reports led the authors to focus primarily on uncontrolled variability. Thus, the results revealed more variability in cases where the outputs of functions were too late and imprecise, often due to omissions and mistakes, rather than explaining trade-offs. Recommendations were made around managing uncontrolled variability. This naturally led to the FRAM analysis being a combination of Safety-I and Safety-II. An analyst’s mindset can align a traditional method with Safety-II thinking [59], or the analyst can use FRAM with Safety-I thinking. The use of variability types in FRAM may need to be reconsidered, as current safety reports and practices are built on the Safety-I approach, focusing on how things go wrong. Prompting questions about the sources of variability can help focus on the lessons learned from good practices, which in turn raises the need to report such practices. This can be challenging, however, as variability often goes unnoticed during everyday work and is only likely to be noticed when things go wrong [24].
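FRAM characterises output variability along timing (too early, on time, too late, not at all) and precision (precise, acceptable, imprecise). The following is a minimal sketch of how coded near-miss reports can be aggregated into frequency counts of the kind plotted in Fig. 2; the record structure and example codings are hypothetical illustrations, not the study’s data.

```python
from collections import Counter

# FRAM phenotypes of output variability (Hollnagel's timing and
# precision dimensions).
TIMING = {"too early", "on time", "too late", "not at all"}
PRECISION = {"precise", "acceptable", "imprecise"}

def tally_variability(records):
    """Count (function, timing, precision) codings across coded reports."""
    counts = Counter()
    for rec in records:
        if rec["timing"] not in TIMING or rec["precision"] not in PRECISION:
            raise ValueError(f"unrecognised coding in {rec}")
        counts[(rec["function"], rec["timing"], rec["precision"])] += 1
    return counts

# Hypothetical codings of three report excerpts.
records = [
    {"function": "control speed", "timing": "too late", "precision": "imprecise"},
    {"function": "control speed", "timing": "too late", "precision": "imprecise"},
    {"function": "communicate with ATC", "timing": "on time", "precision": "acceptable"},
]
counts = tally_variability(records)
print(counts[("control speed", "too late", "imprecise")])  # 2
```

Such tallies make visible which functions most often produce late or imprecise outputs, which is how the actual variability shown in Fig. 2 can be summarised from individual report codings.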

While this research provided recommendations for managing variability (see Table 4), such as suggesting that airlines provide flight simulator training on specific scenarios, some remained high-level or resembled requirements rather than actionable recommendations. Many of the suggestions from the FRAM analysis were similar to the findings of other studies [3, 26]. This raises the question of how the outputs of a Safety-II approach differ from those of a Safety-I approach. Sujan et al. [60] highlighted the tendency of FRAM applications to focus on recommendations that control performance variability rather than ensure system resilience, and they suggest deriving recommendations from the resilience abilities of monitoring, responding, anticipating and learning. Although FRAM can lead to common improvement suggestions, it can also provide further suggestions around trade-offs and system interactions, which help in learning from near-miss events. This study aimed to explore unstable approaches by examining how a stable approach is typically achieved, rather than focusing solely on unstable approaches and identifying contributory factors. This approach helped in understanding the complexity and context-conditioned variability.

Despite the successful application of FRAM in various industries, the Safety-II approach has been criticised [2, 8, 13, 29, 36, 66]. Karanikas and Zerguine [29] highlight that FRAM shifts attention from safety to resilience, which can be criticised from a safety perspective. Leveson [36] disagrees with the definitions of Safety-I and Safety-II, highlighting that the systems approach has already been applied in safety-critical industries. Others question the empirical evidence for the validity of Safety-II. Farooqi et al. [13] and Underwood and Waterson [65] identify a gap between research and practice in the use of FRAM. Indeed, researchers have often adjusted FRAM to improve its practical applicability. Such challenges and particular industrial needs have encouraged researchers to propose improvements to FRAM. For instance, Li et al. [38] integrated accident causation analysis and taxonomy into FRAM to provide more systematic function identification based on control constraints, Rosa et al. [52] integrated multi-criteria decision-making methods to reduce subjectivity in identifying variability, and Patriarca et al. [49] used Monte Carlo simulation to quantify variability. Suggestions have also been made for integrating the Safety-I and Safety-II approaches to propose practical solutions to industrial problems [8, 41].
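To give a hedged illustration of the quantification idea, the sketch below, far simpler than the cited semi-quantitative approaches and using purely illustrative probabilities, applies Monte Carlo sampling to estimate how often a downstream function’s output is late when it depends on a variable upstream output.

```python
import random

def estimate_downstream_lateness(n_runs=100_000, p_upstream_late=0.10,
                                 p_late_given_upstream_late=0.50,
                                 p_late_otherwise=0.05, seed=1):
    """Monte Carlo estimate of the downstream 'too late' rate when the
    downstream function's input comes from a variable upstream function.
    All probabilities are illustrative assumptions, not empirical values."""
    rng = random.Random(seed)
    late_count = 0
    for _ in range(n_runs):
        # Sample whether the upstream output was too late on this run.
        upstream_late = rng.random() < p_upstream_late
        # Downstream lateness depends on the upstream timing (coupling).
        p_late = p_late_given_upstream_late if upstream_late else p_late_otherwise
        if rng.random() < p_late:
            late_count += 1
    return late_count / n_runs

# Analytically: 0.10 * 0.50 + 0.90 * 0.05 = 0.095; the estimate is close.
print(estimate_downstream_lateness())
```

Chaining such conditional dependencies over many coupled functions is one way variability propagation, and hence functional resonance, can be explored numerically; a full model would, of course, need empirically grounded probabilities.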

Limitations of FRAM have also been identified, including the complexity of the method, difficulties in understanding its fundamentals, the resources required for analysis and the reliability of the findings [13]. However, these criticisms should not undervalue the method’s strength in analysing complex systems. We believe the value of FRAM lies in its ability to explore complex systems, a claim also made by other researchers [16, 47, 53, 66]. This study revealed an additional value of FRAM in understanding near misses.

Near misses are considered an opportunity to predict and analyse accidents [31, 63]. This understanding builds on the early work of Heinrich [17] and later Salminen et al. [54] on the iceberg model, which proposed ratios between near misses and accidents, although later non-linear accident models explain accidents through systemic causes [20, 63]. The latter models recognise that the same factors can lead to both successful and unsuccessful outcomes [62]. This study revealed that the variability of a function can lead to both desirable and undesirable outcomes, which aligns well with Safety-II [21] and related concepts such as resilience engineering [25] and safety differently [9]. Using near-miss data to understand the impacts of variability and make performance adjustments enabled learning from near misses, which may support the prevention of accidents. In various industries, analysing near-miss events has enabled the prospective identification of causal factors of accidents [1, 33, 68]. Thoroman et al. [62] highlight the value of analysing near misses to identify protective factors and claim that these protective factors share many properties with the causal factors identified in accidents.

In this study, using near-miss data in the FRAM analysis not only identified valuable lessons that would traditionally have been learnt from analysing accident data, but also shifted the focus from human error to system behaviour, or the ‘systemic ability to do safety’, which is central to Safety-II [21, 39].

4.3 Limitations

This study has limitations. Although the aviation safety reports were of great value, in several cases the narrative elements of the reports provided limited insight into the circumstances, especially regarding good practices. Moreover, event cases were reported at different levels of detail. Future studies could interview the reporting pilots and debrief each case with the crew, particularly regarding their actions. In-depth interviews with reporting pilots, in addition to snapshots of the flight data, would provide richer information and help identify critical human factors elements. This would be essential to identify the specific crew skills, behaviours and decision-making strategies that positively affect their ability not only to operate the aircraft, but also to maintain situational awareness; to anticipate and decide the best course of action (i.e., abandon or continue the approach) under demanding or adverse circumstances (environmental or ATC); to manage aircraft energy effectively when constrained by time or distance; and to maintain or regain sufficient control over the stability of the approach to ensure that the landing is completed well within flight data monitoring parameters. Finally, while the FRAM model provides valuable insights into complex systems, understanding the interactions between functions requires the FRAM model visualiser, as Fig. 1 does not clearly show the individual couplings. For this reason, the authors included Fig. 3, highlighting the couplings between ‘To communicate with ATC’ and other related functions.

5.0 Conclusion

This study applied FRAM as a complementary analysis to explore the issue of unstable approaches. The FRAM model enabled the exploration of complex interactions between the functions involved in the approach phase by tracing how variability emerged, propagated and was dampened or amplified across functions. The analysis was based on textual aviation safety report descriptions, aligned with SOP-derived functions and SME inputs, and it revealed system behaviours that were not observable through structured aviation safety report data alone. The study identified various sources of variability that contribute to a stable approach. Effective communication, monitoring, briefings and checklists were key factors in sustaining or regaining a stable approach.

This study contributes to aviation safety by revisiting the unstable approach problem and using FRAM to analyse near-miss data. The lessons learned from this study can also be applied in other industries. Using FRAM to analyse near-miss data allows for a more balanced analysis that yields greater opportunities to identify what the crew did to prevent or recover from an undesirable situation, and what specific behaviours, skills and abilities the organisation needs to reinforce to improve safety. This creates an opportunity for further research into how higher-level human factors skills and abilities contribute to operational resilience. In turn, findings from such research could form the basis of improved crew training and operational procedures, leading to marked improvements in flight safety during the approach and landing phases and further reducing the number of approach and landing accidents.

Furthermore, this study highlights the need to develop effective reporting systems for Safety II applications, acknowledging the practical challenges of using FRAM. Further studies can investigate the optimal application of FRAM in various industries and its long-term effects on industrial practices.

Data availability statement

Data supporting this study cannot be made available due to the confidentiality agreement with the data provider.

References

Andriulo, S. and Gnoni, M.G. Measuring the effectiveness of a near-miss management system: An application in an automotive firm supplier, Reliab. Eng. Syst. Saf., 2014, 132, pp 154–162. https://doi.org/10.1016/j.ress.2014.07.022
Aven, T. A risk science perspective on the discussion concerning safety I, safety II and safety III, Reliab. Eng. Syst. Saf., 2022, 217, p 108077. https://doi.org/10.1016/j.ress.2021.108077
Blajev, T. and Curtis, W. Go-Around Decision-Making and Execution Project report, Flight Safety Foundation, 2017.
Carroll, D.A. Examining Unstable Approach Predictors Using Flight Data Monitoring Information, PhD dissertation, Embry-Riddle Aeronautical University, Daytona Beach, FL, 2020. https://commons.erau.edu/edt/546
Chang, Y.H., Yang, H.H. and Hsiao, Y.J. Human risk factors associated with pilots in runway excursions, Accid. Anal. Prev., 2016, 94, pp 227–237. https://doi.org/10.1016/j.aap.2016.06.007
Dai, L., Liu, Y. and Hansen, M. Modeling go-around occurrence using principal component logistic regression, Transp. Res. Part C Emerg. Technol., 2021, 129, p 103262. https://doi.org/10.1016/j.trc.2021.103262
De Carvalho, P.V.R. The use of Functional Resonance Analysis Method (FRAM) in a mid-air collision to understand some characteristics of the air traffic management system resilience, Reliab. Eng. Syst. Saf., 2011, 96, pp 1482–1498. https://doi.org/10.1016/j.ress.2011.05.009
De Leo, F., Elia, V., Gnoni, M.G. and Tornese, F. Integrating safety-I and safety-II approaches in near miss management: A critical analysis, Sustainability, 2023, 15, p 2130. https://doi.org/10.3390/su15032130
Dekker, S. Safety Differently (2nd ed), CRC Press, 2014, London. https://doi.org/10.1201/b17126
Dhief, I., Alam, S., Lilith, N. and Mean, C.C. A machine learned go-around prediction model using pilot-in-the-loop simulations, Transp. Res. Part C Emerg. Technol., 2022, 140, p 103704. https://doi.org/10.1016/j.trc.2022.103704
Duchevet, A., Imbert, J.P., De La Hogue, T., Ferreira, A., Moens, L., Colomer, A., … Vázquez, A.L.R. HARVIS: A digital assistant based on cognitive computing for non-stabilized approaches in single pilot operations, Transp. Res. Procedia, 2022, 66, pp 253–261. https://doi.org/10.1016/j.trpro.2022.12.025
EASA. Annual Safety Review, 2022, Koln, Germany. https://doi.org/10.2822/056444
Farooqi, A., Ryan, B. and Cobb, S. Using expert perspectives to explore factors affecting choice of methods in safety analysis, Saf. Sci., 2022, 146, p 105571. https://doi.org/10.1016/j.ssci.2021.105571
Flight Safety Foundation. ALAR Tool Kit: FSF ALAR Briefing Note 7.1 – Stabilized Approach, 2000.
Flight Safety Foundation ALAR Task Force. Analysis of critical factors during approach and landing in accidents and normal flight, Flight Safety Digest, 1999.
Furniss, D., Curzon, P. and Blandford, A. Using FRAM beyond safety: A case study to explore how sociotechnical systems can flourish or stall, Theor. Issues Ergon. Sci., 2016, 17, pp 507–532. https://doi.org/10.1080/1463922X.2016.1155238
Heinrich, H.W. Industrial Accident Prevention: A Scientific Approach (2nd ed), McGraw-Hill, 1941, New York.
Hollnagel, E. The ETTO Principle: Efficiency-Thoroughness Trade-Off, CRC Press, 2017, London.
Hollnagel, E. The Functional Resonance Analysis Method, CRC Press, 2018, London.
Hollnagel, E. FRAM: The Functional Resonance Analysis Method: Modelling Complex Socio-Technical Systems, Ashgate, 2012, Surrey.
Hollnagel, E. Safety-I and Safety-II: The Past and Future of Safety Management, Ashgate, 2014, Surrey.
Hollnagel, E., Woods, D. and Leveson, N. Resilience Engineering: Concepts and Precepts, CRC Press, 2006, London.
Hollnagel, E. and Fujita, Y. The Fukushima disaster: Systemic failures as the lack of resilience, Nucl. Eng. Technol., 2013, 45, pp 13–20. https://doi.org/10.5516/NET.03.2011.078
Hollnagel, E., Pruchnicki, S., Woltjer, R. and Etcher, S. Analysis of Comair flight 5191 with the functional resonance accident model, Proceedings of the 8th International Symposium of the Australian Aviation Psychology Association, 2008. http://hal.archives-ouvertes.fr/docs/00/61/42/54/PDF/Hollnagel-et-al--FRAM-analysis-flight-5191.pdf
Hollnagel, E., Woods, D.D. and Leveson, N. Resilience Engineering: Concepts and Precepts, Ashgate, 2012. https://doi.org/10.1136/qshc.2006.018390
IATA. Unstable Approaches: Risk Mitigation Policies, Procedures and Best Practices (3rd ed), International Air Transport Association, 2017, Montreal, Quebec, Canada.
IATA. Annual Safety Report 2022: Recommendations for Accident Prevention in Aviation, IATA, 2022, Montreal, Quebec, Canada.
Jarry, G., Delahaye, D. and Feron, E. Flight safety during Covid-19: A study of Charles de Gaulle airport atypical energy approaches, Transp. Res. Interdiscip. Perspect., 2021, 9, p 100327. https://doi.org/10.1016/j.trip.2021.100327
Karanikas, N. and Zerguine, H. Are the new safety paradigms (only) about safety and sufficient to ensure it? An overview and critical commentary, Saf. Sci., 2024, 170, p 106367. https://doi.org/10.1016/j.ssci.2023.106367
Kaya, G.K. and Hocaoglu, M.F. Semi-quantitative application to the Functional Resonance Analysis Method for supporting safety management in a complex health-care process, Reliab. Eng. Syst. Saf., 2020, 202, p 106970. https://doi.org/10.1016/j.ress.2020.106970
Kaya, G.K., Humphreys, M., Camelia, F. and Chatzimichailidou, M. Integrating causal analysis based on system theory with network modelling to enhance accident analysis, Ergonomics, 2025, pp 1–28. https://doi.org/10.1080/00140139.2025.2516060
Kaya, G.K., Ovali, H.F. and Ozturk, F. Using the functional resonance analysis method on the drug administration process to assess performance variability, Saf. Sci., 2019, 118, pp 835–840. https://doi.org/10.1016/j.ssci.2019.06.020
Konstandinidou, M., Nivolianitou, Z., Kefalogianni, E. and Caroni, C. In-depth analysis of the causal factors of incidents reported in the Greek petrochemical industry, Reliab. Eng. Syst. Saf., 2011, 96, pp 1448–1455. https://doi.org/10.1016/j.ress.2011.07.010
Lai, H.Y., Chen, C.H., Khoo, L.P. and Zheng, P. Unstable approach in aviation: Mental model disconnects between pilots and air traffic controllers and interaction conflicts, Reliab. Eng. Syst. Saf., 2019, 185, pp 383–391. https://doi.org/10.1016/j.ress.2019.01.009
Lai, H.Y., Chen, C.H., Zheng, P. and Khoo, L.P. Investigating the evolving context of an unstable approach in aviation from mental model disconnects with an agent-based model, Reliab. Eng. Syst. Saf., 2020, 193, p 106657. https://doi.org/10.1016/j.ress.2019.106657
Leveson, N.G. Safety III: A systems approach to safety and resilience, MIT Engineering Systems Lab, 2020.
Li, W.C., Zhang, J., Court, S., Kearney, P. and Braithwaite, G. The influence of augmented reality interaction design on pilot’s perceived workload and situation awareness, Int. J. Ind. Ergon., 2022, 92, p 103382. https://doi.org/10.1016/j.ergon.2022.103382
Li, W., He, M., Sun, Y. and Cao, Q. A proactive operational risk identification and analysis framework based on the integration of ACAT and FRAM, Reliab. Eng. Syst. Saf., 2019, 186, pp 101–109. https://doi.org/10.1016/j.ress.2019.02.012
Lima Brugnara, R., de Andrade, D., de Souza Fontes, R. and Soares Leão, M. Safety-II: Building safety capacity and aeronautical decision-making skills to commit better mistakes, Aeronaut. J., 2023, 127, pp 511–536. https://doi.org/10.1017/aer.2022.74
Martínez, D., Fernández, A., Hernández, P., Cristóbal, S., Schwaiger, F., Nuñez, J.M. and Ruiz, J.M. Forecasting unstable approaches with boosting frameworks and LSTM networks, SESAR Innovation Days, 2019.
Martins, J.B., Carim, G., Saurin, T.A. and Costella, M.F. Integrating safety-I and safety-II: Learning from failure and success in construction sites, Saf. Sci., 2022, 148, p 105672. https://doi.org/10.1016/j.ssci.2022.105672
McNab, D., Freestone, J., Black, C., Carson-Stevens, A. and Bowie, P. Participatory design of an improvement intervention for the primary care management of possible sepsis using the Functional Resonance Analysis Method, BMC Med., 2018, 16, pp 1–20. https://doi.org/10.1186/s12916-018-1164-x
Moriarty, D. and Jarvis, S. A systems perspective on the unstable approach in commercial aviation, Reliab. Eng. Syst. Saf., 2014, 131, pp 197–202. https://doi.org/10.1016/j.ress.2014.06.019
Nævestad, T.O., Storesund Hesjevoll, I. and Elvik, R. How can regulatory authorities improve safety in organizations by influencing safety culture? A conceptual model of the relationships and a discussion of implications, Accid. Anal. Prev., 2021, 159, p 106228. https://doi.org/10.1016/j.aap.2021.106228
Nouvel, D., Travadel, S. and Hollnagel, E. Introduction of the concept of functional resonance in the analysis of a near-accident in aviation, 33rd ESReDA Seminar: Future Challenges of Accident Investigation, 2007, p 9. https://hal.archives-ouvertes.fr/hal-00614258
Papadimitriou, E., Pooyan Afghari, A., Tselentis, D. and van Gelder, P. Road-safety-II: Opportunities and barriers for an enhanced road safety vision, Accid. Anal. Prev., 2022, 174, p 106723. https://doi.org/10.1016/j.aap.2022.106723
Patriarca, R., Di Gravio, G., Woltjer, R., Costantino, F., Praetorius, G., Ferreira, P. and Hollnagel, E. Framing the FRAM: A literature review on the functional resonance analysis method, Saf. Sci., 2020, 129, p 104827. https://doi.org/10.1016/j.ssci.2020.104827
Patriarca, R., Di Gravio, G., Costantino, F. and Tronci, M. The Functional Resonance Analysis Method for a systemic risk based environmental auditing in a sinter plant: A semi-quantitative approach, Environ. Impact Assess. Rev., 2017, 63, pp 72–86. https://doi.org/10.1016/j.eiar.2016.12.002
Patriarca, R., Falegnami, A., Costantino, F. and Bilotta, F. Resilience engineering for socio-technical risk analysis: Application in neuro-surgery, Reliab. Eng. Syst. Saf., 2018, 180, pp 321–335. https://doi.org/10.1016/j.ress.2018.08.001
Ransolin, N., Saurin, T.A. and Formoso, C.T. Integrated modelling of built environment and functional requirements: Implications for resilience, Appl. Ergon., 2020, 88, p 103154. https://doi.org/10.1016/j.apergo.2020.103154
Reiser, C. and Villani, E. A novel approach to runway overrun risk assessment using FRAM and flight data monitoring, Aeronaut. J., 2024, 128, pp 1–19. https://doi.org/10.1017/aer.2024.37
Rosa, L.V., Haddad, A.N. and de Carvalho, P.V.R. Assessing risk in sustainable construction using the Functional Resonance Analysis Method (FRAM), Cogn. Technol. Work, 2015, 17, pp 559–573. https://doi.org/10.1007/s10111-015-0337-z
Salehi, V., Smith, D., Veitch, B. and Hanson, N. A dynamic version of the FRAM for capturing variability in complex operations, MethodsX, 2021, 8, p 101333. https://doi.org/10.1016/j.mex.2021.101333
Salminen, S., Saari, J., Saarela, K.L. and Räsänen, T. Fatal and non-fatal occupational accidents: Identical versus differential causation, Saf. Sci., 1992, 15, pp 109–118. https://doi.org/10.1016/0925-7535(92)90011-N
Stanton, N.A., Stewart, R., Harris, D., Houghton, R.J., Baber, C., McMaster, R., … Green, D. Distributed situation awareness in dynamic systems: Theoretical development and application of an ergonomics methodology, Ergonomics, 2006, 49, pp 1288–1311. https://doi.org/10.1080/00140130600612762
Stanton, N.A. and Piggott, J. Situational awareness and safety, Saf. Sci., 2001, 39, pp 189–204.
Studic, M., Majumdar, A., Schuster, W. and Ochieng, W.Y. A systemic modelling of ground handling services using the functional resonance analysis method, Transp. Res. Part C Emerg. Technol., 2017, 74, pp 245–260. https://doi.org/10.1016/j.trc.2016.11.004
Sujan, M.A., Huang, H. and Braithwaite, J. Learning from incidents in health care: Critique from a safety-II perspective, Saf. Sci., 2017, 99, pp 115–121. https://doi.org/10.1016/j.ssci.2016.08.005
Sujan, M., Lounsbury, O., Pickup, L., Kaya, G., Earl, L. and McCulloch, P. What kinds of insights do safety-I and safety-II approaches provide? A critical reflection on the use of SHERPA and FRAM in healthcare, Saf. Sci., 2024, 173, p 106450. https://doi.org/10.1016/j.ssci.2024.106450
Sujan, M., Pickup, L., de Vos, M.S., Patriarca, R., Konwinski, L., Ross, A. and McCulloch, P. Operationalising FRAM in healthcare: A critical reflection on practice, Saf. Sci., 2023, 158, p 105994. https://doi.org/10.1016/j.ssci.2022.105994
Terzioglu, M. The effects of crew resource management on flight safety culture: Corporate crew resource management (CRM 7.0), Aeronaut. J., 2023, 128, pp 1743–1766. https://doi.org/10.1017/aer.2023.113
Thoroman, B., Goode, N., Salmon, P. and Wooley, M. What went right? An analysis of the protective factors in aviation near misses, Ergonomics, 2019, 62, pp 192–203. https://doi.org/10.1080/00140139.2018.1472804
Thoroman, B., Salmon, P. and Goode, N. Applying AcciMap to test the common cause hypothesis using aviation near misses, Appl. Ergon., 2020, 87, p 103110. https://doi.org/10.1016/j.apergo.2020.103110
UK CAA. Annual Safety Review, CAP 2590, West Sussex, 2022.
Underwood, P. and Waterson, P. Systemic accident analysis: Examining the gap between research and practice, Accid. Anal. Prev., 2013, 55, pp 154–164. https://doi.org/10.1016/j.aap.2013.02.041
Verhagen, M.J., De Vos, M.S., Sujan, M. and Hamming, J.F. The problem with making Safety-II work in healthcare, BMJ Qual. Saf., 2022, 31, pp 402–408. https://doi.org/10.1136/bmjqs-2021-014396
Wang, Z., Sherry, L. and Shortle, J. Feasibility of using historical flight track data to nowcast unstable approaches, ICNS 2016: Secur. Integr. CNS Syst. Meet Future Chall., 2016, pp 4C1-1–4C1-7. https://doi.org/10.1109/ICNSURV.2016.7486345
Wears, R.L. Learning from near misses in aviation: So much more to it than you thought, BMJ Qual. Saf., 2017, 26, pp 513–514. https://doi.org/10.1136/bmjqs-2016-005990
Wischmeyer, E. The myth of the unstable approach, International Society of Air Safety Investigators, 2004, pp 1–8. https://asasi.org/papers/2004/Wischmeyer_Unstable Approach_ISASI04.pdf
Yang, Q., Tian, J. and Zhao, T. Safety is an emergent property: Illustrating functional resonance in air traffic management with formal verification, Saf. Sci., 2017, 93, pp 162–177. https://doi.org/10.1016/j.ssci.2016.12.006
Yousefi, A., Rodriguez Hernandez, M. and Lopez Peña, V. Systemic accident analysis models: A comparison study between AcciMap, FRAM, and STAMP, Process Saf. Prog., 2019, 38, p e12002. https://doi.org/10.1002/prs.12002
Table 1. Participants characteristics

Figure 1. FRAM model for a stable approach.

Table 2. The template used to identify functional variability

Table 3. Description of control speed function

Figure 2. The variability of the functions in terms of time and precision. The y-axis lists all background and foreground functions, and the x-axis illustrates the frequency of actual variabilities in these functions, with colour coding shown in the legend.

Figure 3. Aggregated variability effects from the ‘communicate with ATC’ function.

Figure 4. Categorisation of events in the analysed aviation safety reports.

Table 4. FRAM findings summary