The scope of unconscious processing has long been, and remains, a hotly debated issue, driven in part by the current diversity of methods for manipulating and measuring perceptual consciousness. Here, we provide ten recommendations, and identify nine outstanding issues, concerning the design of experimental paradigms, the analysis of data, and the reporting of results in studies of unconscious processing. These were formed through dialogue among a group of researchers representing a range of theoretical backgrounds. We acknowledge that some of these recommendations naturally do not align with some existing approaches and are likely to change with theoretical and methodological developments. Nevertheless, we hold that, at this stage of the field, they are instrumental in evoking a much-needed discussion about the norms of studying unconscious processes and in helping researchers make more informed decisions when designing experiments. In the long run, we aim for this paper and future discussions around the outstanding issues to lead to a more convergent corpus of knowledge about the extent – and limits – of unconscious processing.
Diagnostic stewardship of urine cultures from patients with indwelling urinary catheters may improve the diagnostic specificity and clinical relevance of the test, but the risk of patient harm is uncertain.
Methods:
We retrospectively evaluated the impact of a computerized clinical decision support tool to promote institutional appropriateness criteria (neutropenia, kidney transplant, recent urologic surgery, or radiologic evidence of urinary tract obstruction) for urine cultures from patients with an indwelling urinary catheter. The primary outcome was the change in the catheter-associated urinary tract infection (CAUTI) rate from the baseline period (34 months) to the intervention period (30 months, including a 2-month wash-in period). We also analyzed patient-level outcomes and adverse events.
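To make the primary-outcome metric concrete, here is a minimal sketch of the rate arithmetic and a simple unadjusted two-rate comparison. The event counts and catheter-day denominators below are hypothetical, chosen only to illustrate the calculation; the study's actual analysis used adjusted models.

```python
from scipy.stats import binomtest

def cauti_rate(events: int, catheter_days: int) -> float:
    """CAUTI rate expressed per 1,000 catheter-days."""
    return 1000 * events / catheter_days

# Hypothetical counts for illustration only (not the study's data).
baseline_events, baseline_days = 12, 10_000          # baseline period
intervention_events, intervention_days = 9, 12_000   # intervention period

print(f"baseline:     {cauti_rate(baseline_events, baseline_days):.2f} per 1,000 catheter-days")
print(f"intervention: {cauti_rate(intervention_events, intervention_days):.2f} per 1,000 catheter-days")

# Conditional exact comparison of two Poisson rates: given the total event
# count, the baseline share of events is Binomial(n_total, p0) under H0,
# where p0 is the baseline share of total catheter-day exposure.
total_events = baseline_events + intervention_events
p0 = baseline_days / (baseline_days + intervention_days)
print(f"two-sided p = {binomtest(baseline_events, total_events, p0).pvalue:.3f}")
```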
Results:
The adjusted CAUTI rate decreased from 1.203 to 0.75 per 1,000 catheter-days (P = 0.52). Of 598 patients triggering decision support, urine cultures were collected in agreement with institutional criteria for 284 (47.5%) and averted for 314 (52.5%). Of the 314 patients whose urine cultures were averted, 2 had a subsequent urine culture within 7 days that resulted in a change in antimicrobial therapy and 2 had a diagnosis of bacteremia with a suspected urinary source, but there were no delays in effective treatment.
Conclusion:
A diagnostic stewardship intervention was associated with an approximately 50% decrease in urine culture testing for inpatients with a urinary catheter. However, the overall CAUTI rate did not decrease significantly. Adverse outcomes were rare and minor among patients who had a urine culture averted. Diagnostic stewardship may be safe and effective as part of a multimodal program to reduce unnecessary urine cultures among patients with indwelling urinary catheters.
Background: External comparisons of antimicrobial use (AU) may be more informative if adjusted for encounter characteristics. Optimal methods to define input variables for encounter-level risk-adjustment models of AU are not established.
Methods: This retrospective analysis of electronic health record data included 50 US hospitals in 2020–2021. We used NHSN definitions for all-antibacterial days of therapy (DOT), including adult and pediatric encounters with at least 1 day present in inpatient locations. We assessed 4 methods to define input variables: 1) diagnosis-related group (DRG) categories by Yu et al.; 2) adjudicated Elixhauser comorbidity categories by Goodman et al.; 3) all Clinical Classification Software Refined (CCSR) diagnosis and procedure categories; and 4) adjudicated CCSR categories, in which codes not appropriate for AU risk-adjustment were excluded by expert consensus, requiring review of 867 codes over 4 months to attain consensus. Data were split randomly, stratified by bed size, as follows: 1) a training dataset including two-thirds of encounters among two-thirds of hospitals; 2) an internal testing set including the remaining one-third of encounters within the training hospitals; and 3) an external testing set including the remaining one-third of hospitals. We used a gradient-boosted machine (GBM) tree-based model and a two-stage approach: first identify encounters with zero DOT, then estimate DOT among those with >0.5 probability of receiving antibiotics (a sketch follows this abstract). Accuracy was assessed using mean absolute error (MAE) in the testing datasets. Correlation plots compared model estimates and observed DOT in the testing datasets. The top 20 most influential variables were identified using modeled variable importance.
Results: Our datasets included 629,445 training, 314,971 internal testing, and 419,109 external testing encounters. Demographic data included 41% male, 59% non-Hispanic White, 25% non-Hispanic Black, 9% Hispanic, and 5% pediatric encounters. DRG was missing in 29% of encounters. MAE was lower in pediatric than in adult encounters and was lowest for models incorporating CCSR inputs (Figure 1). Performance in internal and external testing was similar, although the Goodman/Elixhauser variable strategies were less accurate in external testing and underestimated long-DOT outliers (Figure 2). Agnostic and adjudicated CCSR model estimates were highly correlated, and their lists of influential variables were similar (Figure 3).
Conclusion: Larger numbers of CCSR diagnosis and procedure inputs improved risk-adjustment model accuracy compared with prior strategies. Variable importance and accuracy were similar for the agnostic and adjudicated approaches. However, maintaining expert adjudications would require significant time and could introduce personal bias. If these findings are confirmed, the need for expert adjudication of input variables should be reconsidered.
Disclosure: Elizabeth Dodds Ashley: Advisor- HealthTrackRx. David J Weber: Consultant on vaccines: Pfizer; DSMB chair: GSK; Consultant on disinfection: BD, GAMA, PDI, Germitec
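As an illustration of the two-stage ("hurdle") GBM approach described in the abstract above, here is a minimal, self-contained sketch using scikit-learn's gradient-boosted trees on synthetic data. The feature matrix and DOT outcome are stand-ins, not the study's NHSN data, and the authors' actual GBM implementation may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic encounter-level data standing in for the DRG/CCSR input matrix.
rng = np.random.default_rng(1)
n, p = 5000, 20
X = rng.standard_normal((n, p))
receives_abx = rng.random(n) < 0.6  # ~60% of encounters receive antibiotics
dot = np.where(receives_abx,
               np.round(np.exp(1 + 0.5 * X[:, 0] + rng.normal(0, 0.3, n))),
               0)

X_train, X_test = X[:4000], X[4000:]
dot_train, dot_test = dot[:4000], dot[4000:]

# Stage 1: classify whether an encounter has any DOT at all.
clf = GradientBoostingClassifier().fit(X_train, dot_train > 0)
# Stage 2: regress DOT, trained only on encounters that received antibiotics.
reg = GradientBoostingRegressor().fit(X_train[dot_train > 0], dot_train[dot_train > 0])

# Estimate DOT only where P(DOT > 0) exceeds 0.5, as in the abstract; else zero.
p_any = clf.predict_proba(X_test)[:, 1]
dot_hat = np.where(p_any > 0.5, np.clip(reg.predict(X_test), 0, None), 0.0)
print("MAE:", mean_absolute_error(dot_test, dot_hat))
```

The hurdle structure addresses the zero inflation of DOT: a single regressor would be pulled toward zero by the many untreated encounters, while separating "any antibiotics" from "how many days" lets each stage fit its own signal.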
Background: Indiscriminate urine culturing of patients with indwelling urinary catheters may lead to overdiagnosis of urinary tract infections, resulting in unnecessary antibiotic treatment and inaccurate reporting of catheter-associated urinary tract infections (CAUTIs) as a hospital quality metric. We evaluated the impact of a computerized diagnostic stewardship intervention to improve urine culture testing among patients with indwelling urinary catheters.
Methods: We performed a single-center retrospective observational study at Rush University Medical Center from April 2018 to July 2023. In February 2021, we implemented a computerized clinical decision support tool to promote adherence to our internal urine culture guidelines for patients with indwelling urinary catheters. Providers were required to select one guideline criterion: 1) neutropenia, 2) kidney transplant, 3) recent urologic procedure, or 4) urinary tract obstruction; if none of the criteria were met, an infectious diseases consultation was required for approval. We compared the facility-wide CAUTI rate per 10,000 catheter-days and the standardized infection ratio (SIR) during the baseline and intervention periods using ecologic models, controlling for time and for monthly COVID-19 hospitalizations. In the intervention period, we evaluated how providers responded to the intervention. Potential harm was defined as collection of a urine culture within 7 days of the intervention that resulted in a change in clinical management.
Results: In unadjusted models, the CAUTI rate decreased from 12.5 to 7.6 per 10,000 catheter-days (p=0.04) and the SIR decreased from 0.77 to 0.49 (p=0.09) between the baseline and intervention periods. In adjusted models, the CAUTI rate decreased from 6.9 to 5.5 per 10,000 catheter-days (p=0.60) (Figure 1) and the SIR decreased from 0.41 to 0.35 (p=0.65). The urinary catheter standardized utilization ratio (SUR) did not change (p=0.36). There were 598 patient encounters with ≥1 intervention. Considering the first intervention for each encounter, 284 (47.5%) urine cultures met our guidelines for testing and 314 (52.5%) were averted (Figure 2). Of these, only 3 (<1%) had a urine culture collected in the subsequent 7 days that resulted in a change in clinical management.
Conclusion: We observed a trend of decreased CAUTIs over time, but the effect of our diagnostic stewardship intervention was difficult to assess owing to the healthcare disruption caused by COVID-19. Adverse outcomes were rare among patients who had a urine culture averted. A computerized clinical decision support tool may be safe and effective as part of a multimodal program to reduce unnecessary urine cultures in patients with indwelling urinary catheters.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer's disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a marker of reactive astrocytosis, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with that of plasma p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer's Disease Research Center (ADRC) Longitudinal Clinical Core Registry (all participants who had a blood draw), comprising individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses, using predicted probabilities from binary logistic regression, examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
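A minimal sketch of the binary logistic regression and ROC analysis described above, run on synthetic stand-in data. The column names (gfap_z, impaired, and so on) are placeholders, not the BU ADRC variable names, and the simulated effect sizes are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data; all column names and effects are placeholders.
rng = np.random.default_rng(0)
n = 567
df = pd.DataFrame({
    "gfap_z": rng.standard_normal(n),
    "age": rng.normal(74, 7.5, n),
    "sex": rng.choice(["F", "M"], n),
    "race": rng.choice(["Black", "White", "Other"], n),
    "education": rng.integers(8, 21, n),
    "apoe_e4": rng.choice([0, 1], n),
})
# Impairment probability loosely increasing with GFAP, for illustration only.
p = 1 / (1 + np.exp(-(0.8 * df["gfap_z"] - 0.3)))
df["impaired"] = rng.binomial(1, p)

# Logistic regression of diagnostic status on GFAP plus covariates.
model = smf.logit(
    "impaired ~ gfap_z + age + C(sex) + C(race) + education + C(apoe_e4)",
    data=df,
).fit(disp=0)

# Odds ratio per 1-SD increase in GFAP, with its 95% CI.
or_gfap = np.exp(model.params["gfap_z"])
ci_low, ci_high = np.exp(model.conf_int().loc["gfap_z"])
print(f"OR per SD GFAP = {or_gfap:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")

# AUC from the model's predicted probabilities, as in the ROC analyses above.
print("AUC =", round(roc_auc_score(df["impaired"], model.predict(df)), 2))
```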
Results:
The mean (SD) age of the sample was 74.34 (7.54) years; 319 participants (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, including GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, though slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP detected cognitive impairment with accuracy similar to that of p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer's disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be validated against postmortem neuropathological outcomes. Research has shown the utility of plasma markers within the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer's Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed National Alzheimer's Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the associations of GFAP with autopsy-confirmed AD status and with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the associations between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
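A minimal sketch of the ordinal logistic regression step described above, using statsmodels' OrderedModel on synthetic stand-in data. All column names and simulated effects are placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic stand-in data; column names and effects are placeholders.
rng = np.random.default_rng(2)
n = 45
df = pd.DataFrame({
    "log_gfap": rng.normal(4.5, 0.6, n),
    "sex": rng.choice([0, 1], n),
    "age_at_death": rng.normal(81, 8, n),
    "years_blood_to_death": rng.normal(2.8, 1.2, n),
    "apoe_e4": rng.choice([0, 1], n),
})
# Braak stage (0-6) loosely increasing with log-GFAP, for illustration only.
latent = 2 * df["log_gfap"] + rng.normal(0, 1.5, n)
df["braak"] = pd.Categorical(pd.qcut(latent, 7, labels=range(7)), ordered=True)

# Proportional-odds (ordinal logistic) model of Braak stage on GFAP + covariates.
exog = df[["log_gfap", "sex", "age_at_death", "years_blood_to_death", "apoe_e4"]]
model = OrderedModel(df["braak"], exog, distr="logit").fit(method="bfgs", disp=0)

# OR for a one-unit increase in log-transformed GFAP.
print("OR per unit log-GFAP:", np.exp(model.params["log_gfap"]))
```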
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) were White and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed that plasma GFAP on its own accurately discriminated those with and without autopsy-confirmed AD (AUC=0.75), and discrimination strengthened when the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005) but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Neurocognitive impairment and quality of life are two important long-term challenges for patients with complex CHD. The impact of re-interventions during adolescence and young adulthood on neurocognition and quality of life is not well understood.
Methods:
In this prospective, longitudinal, multi-institutional study, patients 13–30 years old with severe CHD referred for surgical or transcatheter pulmonary valve replacement were enrolled. Clinical characteristics were collected, and executive function and quality of life were assessed prior to the planned pulmonary re-intervention. These results were compared with normative data and between treatment strategies.
Results:
Among 68 patients enrolled from 2016 to 2020, a nearly equal proportion were referred for surgical and transcatheter pulmonary valve replacement (53% versus 47%). Tetralogy of Fallot was the most common diagnosis (59%) and pulmonary re-intervention indications included stenosis (25%), insufficiency (40%), and mixed disease (35%). There were no substantial differences between patients referred for surgical and transcatheter therapy. Executive functioning deficits were evident in 19–31% of patients and quality of life was universally lower compared to normative sample data. However, measures of executive function and quality of life did not differ between the surgical and transcatheter patients.
Conclusion:
In this patient group, impairments in neurocognitive function and quality of life are common and can be significant. Given similar baseline characteristics, comparing changes in neurocognitive outcomes and quality of life after surgical versus transcatheter pulmonary valve replacement will offer unique insights into how treatment approaches impact these important long-term patient outcomes.
Recombinant angiotensin II is an emerging drug therapy for refractory hypotension. Its use is relevant to patients with disruption of the renin–angiotensin–aldosterone system, as indicated by elevated direct renin levels. We present a child who responded to recombinant angiotensin II in the setting of right ventricular hypertension and multi-organism septic shock.
While ethics has been identified as a core component of health technology assessment (HTA), there are few examples of practical, systematic inclusion of ethics analysis in HTA. Some attribute the scarcity of ethics analysis in HTA to debates about appropriate methodology and the need for ethics frameworks that are relevant to local social values. The “South African Values and Ethics for Universal Health Coverage” (SAVE-UHC) project models an approach that countries can use to develop HTA ethics frameworks that are specific to their national contexts.
Methods
The SAVE-UHC approach consisted of two phases. In Phase I, the research team convened and facilitated a national multistakeholder working group to develop a provisional ethics framework through a collaborative, engagement-driven process. In Phase II, the research team refined the model framework by piloting it through three simulated HTA appraisal committee meetings. Each simulated committee reviewed two case studies of sample health interventions: opioid substitution therapy and either a novel contraceptive implant or seasonal influenza immunization for children under five.
Results
The methodology was fit-for-purpose, resulting in a context-specified ethics framework and producing relevant findings to inform application of the framework for the given HTA context.
Conclusions
The SAVE-UHC approach provides a model for developing, piloting, and refining an ethics framework for health priority-setting that is responsive to national social values. This approach also helps identify key facilitators and challenges for integrating ethics analysis into HTA processes.
Multicentre research databases can provide insights into healthcare processes to improve outcomes and make practice recommendations for novel approaches. Effective audits can establish a framework for reporting research efforts, ensuring accurate reporting, and spearheading quality improvement. Although a variety of data auditing models and standards exist, barriers to effective auditing, including costs, regulatory requirements, travel, and design complexity, must be considered.
Materials and methods:
The Congenital Cardiac Research Collaborative (CCRC) conducted a virtual data training initiative and remote source data verification audit on a retrospective multicentre dataset. CCRC investigators across nine institutions were trained to extract and enter data into a robust dataset on patients with tetralogy of Fallot who required neonatal intervention. Centres provided de-identified source files for a randomised 10% patient sample audit. Key auditing variables, discrepancy types, and severity levels were analysed across the two study groups, primary repair and staged repair.
Results:
Of the 572 study patients, data from 58 (31 staged repairs and 27 primary repairs) were source data verified. Among the 1790 variables audited, 45 discrepancies were discovered, yielding an overall accuracy rate of 97.5%. Accuracy was consistently high across all CCRC institutions, ranging from 94.6% to 99.4%, with discrepancies reported in both the minor (1.5%) and major (1.1%) classifications.
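For reference, the reported overall accuracy follows directly from the audited counts, as this small sketch reproduces:

```python
# Accuracy arithmetic from the audit counts reported above.
variables_audited = 1790
discrepancies = 45

accuracy = 1 - discrepancies / variables_audited
print(f"overall accuracy: {accuracy:.1%}")  # -> 97.5%
```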
Conclusion:
Findings indicate that implementing a virtual multicentre training initiative and remote source data verification audit can identify data quality concerns and produce a reliable, high-quality dataset. Remote auditing capacity is especially important during the current COVID-19 pandemic.
In this chapter, we review the major conservation issues bears face and highlight management actions that can address them. The future of bears across the world is bright for some species but dark for others. In some areas, such as North America and parts of Europe and Asia, bear populations have grown and stabilized because of greater management effort and increasing support for bears and their needs among the humans who share habitat with them. However, for most bear species the future is uncertain. Andean bears continue to be threatened by habitat loss and human encroachment. In much of Asia outside Japan, Asiatic black bear, sloth bear, and sun bear populations are increasingly threatened by unmanaged, excessive mortality combined with habitat loss to timber harvest, plantation agriculture, and human encroachment. The long-term future of polar bears is jeopardized by the unmanageable threat of climate change. Giant pandas remain fragmented into small populations despite intense conservation efforts. Improving public and political support for bears is the most important need if we are to achieve successful bear conservation and management.
The Fontan Outcomes Network was created to improve outcomes for children and adults with single ventricle CHD living with Fontan circulation. The network's mission is to optimise longevity and quality of life by improving physical health, neurodevelopmental outcomes, resilience, and emotional health for these individuals and their families. This manuscript describes the systematic design of this new learning health network, including the initial steps in developing a national, lifespan registry and pilot testing data collection forms at 10 congenital heart centres.
What is the function of babbling in language learning? We examined the structure of parental speech as a function of its contingency on infants' non-cry prelinguistic vocalizations, analyzing several acoustic and linguistic measures of caregivers' speech. Contingent speech was less lexically diverse and shorter in utterance length than non-contingent speech. We also found that only the lexical diversity of contingent parental speech predicted infant vocal maturity. These findings illustrate a new form of influence infants have over their ambient language in everyday learning environments: by vocalizing, infants catalyze the production of simplified, more easily learnable language from caregivers.
Drawing on a landscape analysis of existing data-sharing initiatives, in-depth interviews with expert stakeholders, and public deliberations with community advisory panels across the U.S., we describe features of the evolving medical information commons (MIC). We identify participant-centricity and trustworthiness as the most important features of an MIC and discuss the implications for those seeking to create a sustainable, useful, and widely available collection of linked resources for research and other purposes.
Transcatheter right ventricle decompression in neonates with pulmonary atresia and intact ventricular septum is technically challenging, with risk of cardiac perforation and death. Further, despite successful right ventricle decompression, re-intervention on the pulmonary valve is common. The association between technical factors during right ventricle decompression and the risks of complications and re-intervention are not well described.
Methods
This is a multicentre retrospective study among the participating centres of the Congenital Catheterization Research Collaborative. All neonates with pulmonary atresia and intact ventricular septum in whom transcatheter right ventricle decompression was attempted between 2005 and 2015 were included. Technical factors evaluated included the use and characteristics of radiofrequency energy, maximal balloon-to-pulmonary valve annulus ratio, infundibular diameter, and right ventricle systolic pressure before and after balloon pulmonary valvuloplasty (BPV). The primary end point was cardiac perforation or death; the secondary end point was re-intervention.
Results
A total of 99 neonates underwent transcatheter right ventricle decompression at a median age of 3 days (IQR 2–5), including 63 patients by radiofrequency and 32 by wire perforation of the pulmonary valve. There were 32 complications, including 10 (10.5%) cardiac perforations, of which two resulted in death. Cardiac perforation was associated with the use of radiofrequency (p=0.047), longer radiofrequency duration (3.5 versus 2.0 seconds, p=0.02), and higher maximal radiofrequency energy (7.5 versus 5.0 J, p<0.01), but not with patient weight (p=0.09), pulmonary valve diameter (p=0.23), or infundibular diameter (p=0.57). Re-intervention was performed in 36 patients and was associated with higher post-intervention right ventricle pressure (median 60 versus 50 mmHg, p=0.041) and residual valve gradient (median 15 versus 10 mmHg, p=0.046), but not with balloon-to-pulmonary valve annulus ratio, atmospheric pressure used during BPV, or the presence of a residual balloon waist during BPV. Re-intervention was not associated with any right ventricle anatomic characteristics, including pulmonary valve diameter.
Conclusion
Technical factors surrounding transcatheter right ventricle decompression in pulmonary atresia and intact ventricular septum influence the risk of procedural complications but not the risk of future re-intervention. Cardiac perforation is associated with the use of radiofrequency energy, as well as radiofrequency application characteristics. Re-intervention after right ventricle decompression for pulmonary atresia and intact ventricular septum is common and relates to haemodynamic measures surrounding initial BPV.
The Lothagam harpoon site in north-west Kenya's Lake Turkana Basin provides a stratified Holocene sequence capturing changes in African fisher-hunter-gatherer strategies through a series of subtle and dramatic climate shifts (Figure 1). The site rose to archaeological prominence following Robbins's 1965–1966 excavations, which yielded sizeable lithic and ceramic assemblages and one of the largest collections of Early Holocene human remains from Eastern Africa (Robbins 1974; Angel et al. 1980).
As part of an investigation of a suspected "outbreak" of Bell's palsy in the Greater Toronto Area, a population-based sample of patients with Bell's palsy was investigated electrophysiologically to help understand the spectrum of abnormalities that can be seen in this setting.
Methods:
Two hundred and twenty-four patients were surveyed, of whom 91 underwent formal neurological assessment. Of the latter, 44 were studied electrophysiologically using standard techniques. Thirty-two of the 44 patients fulfilled clinical criteria for Bell's palsy.
Results:
A wide range of electrophysiological changes was observed. Blink responses were the most useful test, showing a diagnostic sensitivity of 81% and a specificity of 94% compared with the contralateral control side. Needle electromyography was additionally helpful in only one of six patients with normal conduction studies.
Conclusions:
There is a wide spectrum of electrophysiological abnormalities in Bell's palsy. Blink reflex latencies may be under-utilized in the assessment of the facial nerve in Bell's palsy. Facial EMG is not generally useful in routine assessment.
The aim of this study was to describe previously unrecognised or under-recognised adverse events associated with Melody® valve implantation.
Background
In rare diseases and conditions, it is typically not feasible to conduct large-scale safety trials before drug or device approval. Therefore, post-market surveillance mechanisms are necessary to detect rare but potentially serious adverse events.
Methods
We reviewed the United States Food and Drug Administration’s Manufacturer and User Facility Device Experience (MAUDE) database and conducted a structured literature review to evaluate adverse events associated with on- and off-label Melody® valve implantation. Adverse events were compared with those described in the prospective Investigational Device Exemption and Post-Market Approval Melody® transcatheter pulmonary valve trials.
Results
We identified 631 adverse events associated with “on-label” Melody® valve implants and 84 adverse events associated with “off-label” implants. The most frequent “on-label” adverse events were similar to those described in the prospective trials including stent fracture (n=210) and endocarditis (n=104). Previously unrecognised or under-recognised adverse events included stent fragment embolisation (n=5), device erosion (n=4), immediate post-implant severe valvar insufficiency (n=2), and late coronary compression (n=2 cases at 5 days and 3 months after implantation). Under-recognised adverse events associated with off-label implantation included early valve failure due to insufficiency when implanted in the tricuspid position (n=7) and embolisation with percutaneous implantation in the mitral position (n=5).
Conclusion
Post-market passive surveillance does not demonstrate a high frequency of previously unrecognised serious adverse events with “on-label” Melody® valve implantation. Further study is needed to evaluate safety of “off-label” uses.