
Editor's Choice

Guideline: Management of Urinary Tract Infections in Pediatrics and Adults

5 Nov, 2024 | 18:59h | UTC

Introduction: Urinary tract infections (UTIs) are among the most common infections worldwide, significantly impacting patient quality of life and imposing substantial clinical and economic burdens. Despite advancements in diagnosis and treatment, UTIs continue to cause high morbidity and mortality, with presentations ranging from simple cystitis to life-threatening sepsis. To address the discrepancy between evidence quality and recommendation strength in existing guidelines, the WikiGuidelines Group has developed a consensus statement. This guideline aims to provide evidence-based recommendations for the prevention, diagnosis, and management of UTIs across diverse clinical settings.

Key Recommendations:

  1. Cranberry Products:
    • Recommendation: Cranberry juice or supplements are recommended for preventing symptomatic, culture-verified UTIs in women with recurrent UTIs, in children, and in individuals susceptible to UTIs after interventions.
    • Quality of Evidence: Moderate
    • Recommendation Strength: Strong
  2. Methenamine Hippurate:
    • Recommendation: Methenamine hippurate is recommended as an alternative to prophylactic antibiotics for preventing recurrent UTIs in patients with intact bladder anatomy.
    • Quality of Evidence: Moderate
    • Recommendation Strength: Strong
  3. Topical Estrogen:
    • Recommendation: Vaginal estrogen therapy is recommended for postmenopausal women to reduce recurrent UTIs by restoring the vaginal microbiome.
    • Quality of Evidence: High
    • Recommendation Strength: Strong
  4. Empirical Treatment Regimens:
    • Recommendation: For uncomplicated cystitis, nitrofurantoin is recommended as a first-line agent. For pyelonephritis, either trimethoprim/sulfamethoxazole or a first-generation cephalosporin is a reasonable first-line agent, depending on local resistance rates.
    • Quality of Evidence: Moderate
    • Recommendation Strength: Strong
  5. Treatment Duration for Acute Cystitis in Adults:
    • Recommendation:
      • Nitrofurantoin: 5 days
      • Trimethoprim/sulfamethoxazole: 3 days
      • Oral fosfomycin: Single dose
    • Quality of Evidence: High
    • Recommendation Strength: Strong
  6. Treatment Duration for Acute Pyelonephritis in Adults:
    • Recommendation:
      • Fluoroquinolones: 5–7 days
      • Dose-optimized β-lactams: 7 days
    • Quality of Evidence: High
    • Recommendation Strength: Strong
  7. Antimicrobial Stewardship:
    • Recommendation: De-escalation of antibiotics and the use of predominantly or entirely oral treatment regimens are recommended to optimize antimicrobial use and reduce adverse effects.
    • Quality of Evidence: High
    • Recommendation Strength: Strong

Conclusion: The consensus highlights a significant lack of high-quality prospective data in many areas related to UTIs, limiting the ability to provide clear recommendations. Implementing these evidence-based guidelines can enhance patient care by promoting effective prevention strategies, accurate diagnosis based on clinical symptoms, appropriate treatment durations, and robust antimicrobial stewardship. This approach is expected to improve clinical outcomes, reduce antimicrobial resistance, and preserve the effectiveness of current treatments.

Reference: Nelson Z, Aslan AT, Beahm NP, et al. Guidelines for the Prevention, Diagnosis, and Management of Urinary Tract Infections in Pediatrics and Adults: A WikiGuidelines Group Consensus Statement. JAMA Network Open. 2024;7(11). DOI: http://doi.org/10.1001/jamanetworkopen.2024.44495

 


Clinical Trial Follow-up: Empagliflozin Continues to Reduce Cardiorenal Risks Post-Discontinuation in CKD Patients

3 Nov, 2024 | 13:08h | UTC

Background: Chronic kidney disease (CKD) progression leads to end-stage kidney disease, adversely affecting quality of life, increasing cardiovascular morbidity and mortality, and imposing high economic costs. Previous trials, including the EMPA-KIDNEY trial, demonstrated that empagliflozin, a sodium–glucose cotransporter 2 (SGLT2) inhibitor, provides cardiorenal benefits in CKD patients at risk of progression. The persistence of these benefits after discontinuation of the drug remains uncertain.

Objective: To assess how the effects of empagliflozin on kidney disease progression and cardiovascular outcomes evolve after discontinuation in patients with CKD.

Methods: In this randomized, double-blind, placebo-controlled trial, 6,609 patients with CKD were assigned to receive empagliflozin 10 mg daily or placebo and followed for a median of 2 years during the active trial period. Eligible patients had an estimated glomerular filtration rate (eGFR) of at least 20 but less than 45 ml/min/1.73 m², or of at least 45 but less than 90 ml/min/1.73 m² with a urinary albumin-to-creatinine ratio of at least 200 mg/g. After the active trial, 4,891 surviving patients consented to a 2-year post-trial follow-up without the trial drug, during which local practitioners could prescribe open-label SGLT2 inhibitors. The primary composite outcome was kidney disease progression or cardiovascular death from the start of the active trial to the end of the post-trial period.

Results: During the combined active and post-trial periods, a primary outcome event occurred in 26.2% of patients in the empagliflozin group and 30.3% in the placebo group (hazard ratio [HR], 0.79; 95% confidence interval [CI], 0.72–0.87). During the post-trial period alone, the HR was 0.87 (95% CI, 0.76–0.99), indicating continued benefit after drug discontinuation. The risk of kidney disease progression was 23.5% in the empagliflozin group versus 27.1% in the placebo group (HR, 0.79; 95% CI, 0.72–0.87). Cardiovascular death occurred in 3.8% and 4.9% of patients, respectively (HR, 0.75; 95% CI, 0.59–0.95). No significant effect was observed on death from noncardiovascular causes (5.3% in both groups).
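As a rough illustration of the absolute effect size in the results above, the absolute risk reduction and number needed to treat can be derived from the reported event rates (26.2% vs. 30.3%). These are derived figures for illustration, not quantities stated in the trial report:

```python
# Illustrative arithmetic from the reported primary-outcome event rates.
# ARR and NNT are derived here; they are not reported in the paper itself.
def abs_risk_reduction(p_control: float, p_treatment: float) -> float:
    """Absolute risk reduction, as a fraction (e.g., 0.041 = 4.1 points)."""
    return p_control - p_treatment

def number_needed_to_treat(arr: float) -> float:
    """Patients treated for one additional patient to avoid the outcome."""
    return 1.0 / arr

arr = abs_risk_reduction(0.303, 0.262)   # ~4.1 percentage points
nnt = number_needed_to_treat(arr)        # ~24 patients
print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}")
```

With roughly 24 patients treated per primary-outcome event prevented over the combined follow-up, the relative effect (HR 0.79) translates into a clinically meaningful absolute benefit in this population.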

Conclusions: In patients with CKD at risk for progression, empagliflozin continued to confer cardiorenal benefits for up to 12 months after discontinuation. These findings suggest that short-term treatment with empagliflozin has lasting effects on kidney and cardiovascular outcomes.

Implications for Practice: Empagliflozin should be considered for a broad range of CKD patients to slow disease progression and reduce cardiovascular risk, with benefits that extend beyond the period of active treatment. Initiating therapy in eligible patients maximizes long-term cardiorenal protection.

Study Strengths and Limitations: Strengths include the large sample size, broad eligibility criteria, and high follow-up rates. Limitations involve the exclusion of certain regions during post-trial follow-up and reliance on local eGFR measurements during this period.

Future Research: Further studies are needed to understand the mechanisms behind the sustained benefits of empagliflozin after discontinuation and to explore the long-term effects of extended treatment durations.

Reference: The EMPA-KIDNEY Collaborative Group. Long-Term Effects of Empagliflozin in Patients with Chronic Kidney Disease. New England Journal of Medicine. Published October 25, 2024. DOI: http://doi.org/10.1056/NEJMoa2409183

 


RCT: No Significant Difference Between Intraosseous and Intravenous Vascular Access in Out-of-Hospital Cardiac Arrest Outcomes

3 Nov, 2024 | 12:58h | UTC

Background: Out-of-hospital cardiac arrest (OHCA) is a major global health concern, resulting in high mortality rates despite advancements in emergency care. In Denmark alone, approximately 5,000 cases occur annually, with a 30-day survival rate of only about 14%. Rapid vascular access during cardiopulmonary resuscitation (CPR) is crucial for administering medications like epinephrine, as recommended by international guidelines. Both intraosseous (IO) and intravenous (IV) routes are routinely used, but their comparative effectiveness remains unclear. Current guidelines favor IV access for initial attempts, yet this recommendation is based on very low-certainty evidence, highlighting the need for well-designed clinical trials.

Objective: To compare the effectiveness of initial intraosseous versus intravenous vascular access on sustained return of spontaneous circulation (ROSC) in adults experiencing nontraumatic OHCA.

Methods: This randomized, parallel-group superiority trial was conducted across all five regions of Denmark, covering 5.9 million inhabitants. Adults aged 18 years or older with nontraumatic OHCA requiring vascular access during CPR were randomized to receive either initial IO or IV access. The IO group was further randomized to humeral or tibial access for a secondary comparison. The primary outcome was sustained ROSC, defined as no need for chest compressions for at least 20 minutes. Key secondary outcomes included 30-day survival and survival with favorable neurologic outcome (modified Rankin scale score of 0–3). Procedural outcomes such as success rates of vascular access within two attempts, time to successful access, and time to first epinephrine administration were also assessed.

Results: Among 1,479 patients included in the primary analysis (731 in the IO group and 748 in the IV group), successful vascular access within two attempts was achieved in 92% of the IO group versus 80% of the IV group. Despite the higher success rate with IO access, the time to first successful access and time to first epinephrine dose were similar between groups. Sustained ROSC occurred in 30% of patients in the IO group and 29% in the IV group (risk ratio [RR], 1.06; 95% confidence interval [CI], 0.90–1.24; P=0.49). At 30 days, survival rates were 12% in the IO group and 10% in the IV group (RR, 1.16; 95% CI, 0.87–1.56), with favorable neurologic outcomes observed in 9% and 8% of patients, respectively (RR, 1.16; 95% CI, 0.83–1.62). No significant differences were found in procedural times, adverse events, or quality-of-life measures among survivors.

Conclusions: In adults with nontraumatic OHCA, initial intraosseous vascular access did not result in a significant difference in sustained ROSC compared to intravenous access. Both methods yielded comparable survival rates and neurologic outcomes at 30 days, suggesting that the choice of vascular access route may not critically impact immediate resuscitation success.

Implications for Practice: These findings indicate that emergency medical services can opt for either intraosseous or intravenous vascular access during resuscitation based on provider expertise, patient anatomy, and situational considerations without adversely affecting patient outcomes. Emphasizing flexibility in vascular access approach may facilitate quicker access and streamline resuscitation efforts in the prehospital setting.

Study Strengths and Limitations: Strengths include the randomized design, large sample size, and nationwide implementation, enhancing generalizability. Limitations involve potential crossover between groups, lack of blinding among clinicians, and the study being underpowered to detect small differences in long-term outcomes.

Future Research: Further studies are needed to assess long-term survival and neurologic outcomes, and to explore whether specific patient subgroups may benefit more from one vascular access method over the other during cardiac arrest resuscitation.

Reference: Vallentin MF, Granfeldt A, Klitgaard TL, et al. Intraosseous or Intravenous Vascular Access for Out-of-Hospital Cardiac Arrest. New England Journal of Medicine. 2024 Oct 31; DOI: http://doi.org/10.1056/NEJMoa2407616

 


RCT: Intraosseous vs. Intravenous Drug Administration in Out-of-Hospital Cardiac Arrest Shows No Difference in 30-Day Survival

3 Nov, 2024 | 12:48h | UTC

Background: Out-of-hospital cardiac arrest requires rapid drug administration, with medications like epinephrine being highly time-dependent. Intravenous access can be challenging prehospital due to environmental and patient factors, potentially delaying treatment. Intraosseous access may offer faster drug delivery, but its impact on clinical outcomes is unclear.

Objective: To compare the effectiveness of an intraosseous-first versus intravenous-first vascular access strategy on 30-day survival in adults experiencing out-of-hospital cardiac arrest requiring drug therapy.

Methods: In this multicenter, open-label, randomized trial across 11 UK emergency medical systems, 6,082 adults were assigned to receive either intraosseous-first or intravenous-first vascular access during resuscitation. The primary outcome was survival at 30 days. Secondary outcomes included return of spontaneous circulation and favorable neurologic function at hospital discharge (modified Rankin scale score ≤3).

Results: At 30 days, survival was 4.5% in the intraosseous group and 5.1% in the intravenous group (adjusted odds ratio [OR], 0.94; 95% confidence interval [CI], 0.68–1.32; P=0.74). Favorable neurologic outcome at discharge was similar between groups (2.7% vs. 2.8%; adjusted OR, 0.91; 95% CI, 0.57–1.47). Return of spontaneous circulation was lower in the intraosseous group (36.0% vs. 39.1%; adjusted OR, 0.86; 95% CI, 0.76–0.97).

Conclusions: An intraosseous-first vascular access strategy did not improve 30-day survival compared to an intravenous-first strategy in adults with out-of-hospital cardiac arrest. The intraosseous route was associated with a lower rate of return of spontaneous circulation.

Implications for Practice: Paramedics should consider that intraosseous access may not offer a survival advantage over intravenous access and may be linked to a reduced return of spontaneous circulation. This finding may influence decisions on vascular access during resuscitation efforts.

Study Strengths and Limitations: Strengths include a large, multicenter randomized design; limitations involve early termination reducing statistical power and inability to blind prehospital providers.

Future Research: Further studies should investigate why intraosseous access is associated with lower return of spontaneous circulation and assess if specific intraosseous techniques or sites affect outcomes.

Reference: Couper K, Ji C, Deakin CD, et al. A Randomized Trial of Drug Route in Out-of-Hospital Cardiac Arrest. N Engl J Med. 2024; DOI: http://doi.org/10.1056/NEJMoa2407780

 


RCT: Transcatheter Tricuspid-Valve Replacement Improves Symptoms and Quality of Life in Severe Tricuspid Regurgitation

3 Nov, 2024 | 12:37h | UTC

Background: Severe tricuspid regurgitation is associated with debilitating symptoms and increased mortality. Surgical intervention is infrequently performed due to high operative risks and late patient presentation, leading to poor outcomes. Transcatheter tricuspid-valve replacement offers a less invasive alternative to surgery, but data on its efficacy are limited.

Objective: To compare the safety and effectiveness of transcatheter tricuspid-valve replacement plus medical therapy versus medical therapy alone in patients with severe symptomatic tricuspid regurgitation.

Methods: In this international, multicenter randomized controlled trial, 400 patients with severe symptomatic tricuspid regurgitation despite optimal medical therapy were randomized in a 2:1 ratio to receive transcatheter tricuspid-valve replacement plus medical therapy (valve-replacement group, n=267) or medical therapy alone (control group, n=133). The primary outcome was a hierarchical composite of death from any cause, implantation of a right ventricular assist device or heart transplantation, postindex tricuspid-valve intervention, hospitalization for heart failure, and improvements in the Kansas City Cardiomyopathy Questionnaire overall summary (KCCQ-OS) score by at least 10 points, New York Heart Association (NYHA) functional class by at least one class, and 6-minute walk distance by at least 30 meters.

Results: At one year, the win ratio favoring valve replacement was 2.02 (95% confidence interval [CI], 1.56 to 2.62; P<0.001), indicating superiority over medical therapy alone. Patients in the valve-replacement group had significant improvements in quality of life, with 66.4% achieving an increase of at least 10 points in the KCCQ-OS score compared to 36.5% in the control group. Improvement of at least one NYHA class was observed in 78.9% of the valve-replacement group versus 24.0% of the control group. Reduction of tricuspid regurgitation to mild or less was achieved in 95.2% of patients in the valve-replacement group, compared to 2.3% in the control group. Severe bleeding occurred more frequently in the valve-replacement group (15.4% vs. 5.3%; P=0.003), as did new permanent pacemaker implantation (17.8% vs. 2.3%; P<0.001).
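The win-ratio analysis reported above compares every valve-replacement patient with every control patient on the hierarchical outcome, scoring each pair on the highest-priority component that differs. A minimal sketch of the calculation, with hypothetical patient data (not the trial's):

```python
# Toy sketch of a win-ratio calculation on a hierarchical composite.
# Each patient is summarized as (died, hf_hospitalization, kccq_gain);
# earlier fields take priority. All data below are hypothetical.
from itertools import product

def compare(t, c):
    """'win' if the treated patient fares better than the control patient."""
    if t[0] != c[0]:                      # tier 1: death (lower is better)
        return "loss" if t[0] else "win"
    if t[1] != c[1]:                      # tier 2: HF hospitalization
        return "loss" if t[1] else "win"
    if t[2] != c[2]:                      # tier 3: KCCQ-OS gain (higher wins)
        return "win" if t[2] > c[2] else "loss"
    return "tie"

treated = [(False, False, 15), (False, True, 12), (True, False, 0)]
control = [(False, False, 5), (True, False, 0), (False, True, 8)]

pairs = list(product(treated, control))
wins = sum(compare(t, c) == "win" for t, c in pairs)
losses = sum(compare(t, c) == "loss" for t, c in pairs)
print(f"wins={wins}, losses={losses}, win ratio={wins / losses:.2f}")
```

A win ratio above 1 favors the treatment arm; in the trial, the observed 2.02 means treated patients "won" roughly twice as many pairwise comparisons as they lost.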

Conclusions: Transcatheter tricuspid-valve replacement significantly improved clinical outcomes, symptoms, functional capacity, and quality of life in patients with severe tricuspid regurgitation compared to medical therapy alone, despite higher risks of severe bleeding and pacemaker implantation.

Implications for Practice: Transcatheter tricuspid-valve replacement offers a promising therapeutic option for patients with severe symptomatic tricuspid regurgitation who are at high surgical risk. Clinicians should consider this intervention to improve patient symptoms and quality of life, while carefully weighing the procedural risks, particularly bleeding and arrhythmias requiring pacemaker implantation.

Study Strengths and Limitations: Strengths of the study include its randomized controlled design and comprehensive evaluation of both clinical and patient-reported outcomes. Limitations involve the smaller control group due to the 2:1 randomization and a one-year follow-up period that may not capture long-term benefits or risks.

Future Research: Further studies with longer follow-up are needed to assess the durability of transcatheter tricuspid-valve replacement, its long-term impact on survival and hospitalization rates, and strategies to minimize procedural complications.

Reference: Hahn RT, et al. Transcatheter Valve Replacement in Severe Tricuspid Regurgitation. New England Journal of Medicine. 2024 Oct 30; DOI: http://doi.org/10.1056/NEJMoa2401918

 


RCT: Early TAVR Improves Clinical Outcomes in Asymptomatic Severe Aortic Stenosis

29 Oct, 2024 | 13:04h | UTC

Background: Severe aortic stenosis is prevalent among adults aged 65 and older. Current guidelines recommend aortic-valve replacement for symptomatic patients or asymptomatic patients with specific high-risk features. For other asymptomatic patients, routine clinical surveillance is standard due to limited evidence supporting early intervention, particularly with transcatheter aortic-valve replacement (TAVR).

Objective: To determine whether early TAVR reduces the incidence of death, stroke, or unplanned cardiovascular hospitalization compared to standard clinical surveillance in patients with asymptomatic severe aortic stenosis.

Methods: In this prospective, multicenter, randomized controlled trial, 901 asymptomatic patients aged ≥65 years with severe aortic stenosis and preserved left ventricular ejection fraction were randomized 1:1 to undergo early TAVR or to receive guideline-directed clinical surveillance. The mean age was 75.8 years, and the mean Society of Thoracic Surgeons Predicted Risk of Mortality score was 1.8%, indicating low surgical risk. The primary endpoint was a composite of death from any cause, stroke, or unplanned hospitalization for cardiovascular causes.

Results: Over a median follow-up of 3.8 years, the primary endpoint occurred in 26.8% of the TAVR group compared to 45.3% of the surveillance group (hazard ratio [HR], 0.50; 95% confidence interval [CI], 0.40–0.63; P<0.001). Individual components showed lower rates in the TAVR group: death (8.4% vs. 9.2%), stroke (4.2% vs. 6.7%), and unplanned cardiovascular hospitalizations (20.9% vs. 41.7%). Early TAVR patients also maintained better quality of life, with 86.6% achieving favorable outcomes at 2 years compared to 68.0% in the surveillance group (P<0.001). By 2 years, 87.0% of patients in the surveillance group underwent aortic-valve replacement, many presenting with advanced symptoms and cardiac damage. Procedural complications and periprocedural adverse events were similar between groups.

Conclusions: Early TAVR significantly reduced death, stroke, and unplanned cardiovascular hospitalizations in asymptomatic patients with severe aortic stenosis compared to clinical surveillance. Early intervention preserved quality of life and cardiac function, suggesting that early TAVR may benefit this patient population.

Implications for Practice: These findings support considering early TAVR in asymptomatic patients with severe aortic stenosis to improve clinical outcomes and quality of life. This may challenge current guidelines that recommend surveillance over early intervention.

Study Strengths and Limitations: Strengths include the randomized design, large sample size, and multicenter participation. Limitations involve the study population being predominantly low surgical risk patients aged ≥65 years with anatomy suitable for transfemoral TAVR, which may limit generalizability to younger patients, those with higher surgical risk, or those unsuitable for TAVR.

Future Research: Further research is needed to assess long-term valve durability, outcomes in diverse patient populations, and comparisons with surgical aortic-valve replacement. Studies on cost-effectiveness and the impact on guidelines are also warranted.

Reference: Généreux P, et al. Transcatheter Aortic-Valve Replacement for Asymptomatic Severe Aortic Stenosis. New England Journal of Medicine. 2024 Oct 28; DOI: http://doi.org/10.1056/NEJMoa2405880

 


RCT: Vitamin K2 Reduces Nocturnal Leg Cramps in Older Adults

28 Oct, 2024 | 18:59h | UTC

Background: Nocturnal leg cramps (NLCs) affect 50% to 60% of adults, causing significant discomfort, sleep disturbances, and reduced quality of life. Current treatments lack robust evidence for efficacy and safety, with quinine no longer recommended due to severe adverse effects. Vitamin K2 has shown promise in reducing muscle cramps in dialysis patients, suggesting potential benefits for managing NLCs.

Objective: To evaluate whether vitamin K2 supplementation reduces the frequency, duration, and severity of nocturnal leg cramps in older adults compared with placebo.

Methods: In this multicenter, double-blind, placebo-controlled randomized clinical trial conducted in China from September 2022 to December 2023, 199 community-dwelling individuals aged 65 years or older with at least two episodes of NLCs over a two-week screening period were enrolled. Participants were randomly assigned in a 1:1 ratio to receive daily oral vitamin K2 (menaquinone-7, 180 μg) or placebo for eight weeks. The primary outcome was the mean number of NLCs per week. Secondary outcomes included cramp duration and severity, measured on a 1 to 10 analog scale.

Results: Of the 199 participants (mean age 72.3 ± 5.5 years; 54.3% female), 103 received vitamin K2 and 96 received placebo. Baseline weekly cramp frequency was similar between groups (vitamin K2: 2.60 ± 0.81; placebo: 2.71 ± 0.80). Over eight weeks, the vitamin K2 group experienced a significant reduction in mean weekly cramps to 0.96 ± 1.41, while the placebo group increased to 3.63 ± 2.20 (between-group difference: −2.67; 95% CI, −2.86 to −2.49; P < .001). The vitamin K2 group also showed greater reductions in cramp severity (mean decrease of 2.55 ± 2.12 points vs 1.24 ± 1.16 points in placebo) and duration (mean decrease of 0.90 ± 0.88 minutes vs 0.32 ± 0.78 minutes in placebo). No adverse events related to vitamin K2 were reported.

Conclusions: Vitamin K2 supplementation significantly reduced the frequency, severity, and duration of nocturnal leg cramps in older adults, demonstrating both efficacy and safety.

Implications for Practice: Vitamin K2 may offer an effective and safe therapeutic option for managing NLCs in older individuals, addressing a significant unmet clinical need in primary care.

Study Strengths and Limitations: Strengths include the randomized, double-blind design and focus on an older population; limitations involve the relatively mild symptoms of participants and lack of assessment of quality of life or sleep improvements.

Future Research: Further studies should assess the impact of vitamin K2 on sleep quality and quality of life in patients with more severe NLCs and explore the underlying mechanisms of its muscle-relaxing effects.

Reference: Tan J, Zhu R, Li Y, et al. Vitamin K2 in Managing Nocturnal Leg Cramps: A Randomized Clinical Trial. JAMA Internal Medicine. Published online October 28, 2024. DOI: http://doi.org/10.1001/jamainternmed.2024.5726


RCT: ICS-Formoterol and ICS-SABA Reduce Severe Asthma Exacerbations Compared With SABA Alone

28 Oct, 2024 | 18:12h | UTC

Background: Asthma affects millions worldwide and is managed using inhaled relievers to alleviate acute symptoms. While short-acting β agonists (SABA) are commonly used, combining inhaled corticosteroids (ICS) with SABA or formoterol may enhance outcomes. Recent guidelines recommend ICS-formoterol as the preferred reliever, but the optimal choice remains uncertain, especially following the recent FDA approval of ICS-SABA.

Objective: To compare the efficacy and safety of SABA alone, ICS-SABA, and ICS-formoterol as reliever therapies in asthma.

Methods: This systematic review and network meta-analysis included 27 randomized controlled trials involving 50,496 adult and pediatric asthma patients. Trials compared SABA alone, ICS-SABA, and ICS-formoterol as reliever therapies, ensuring similar maintenance treatments across groups. Outcomes assessed were severe asthma exacerbations, asthma symptom control (Asthma Control Questionnaire-5 [ACQ-5]), asthma-related quality of life (Asthma Quality of Life Questionnaire [AQLQ]), adverse events, and mortality.

Results: Compared with SABA alone, both ICS-containing relievers significantly reduced severe exacerbations:

  • ICS-formoterol: Risk ratio (RR) 0.65 (95% CI, 0.60–0.72); risk difference (RD) –10.3% (95% CI, –11.8% to –8.3%).
  • ICS-SABA: RR 0.84 (95% CI, 0.73–0.95); RD –4.7% (95% CI, –8.0% to –1.5%).

Compared with ICS-SABA, ICS-formoterol further reduced severe exacerbations (RR 0.78; RD –5.5%). Both ICS-containing relievers modestly improved asthma symptom control compared with SABA alone. No increase in adverse events was observed with either ICS-containing therapy.
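The pooled relative and absolute estimates above are internally consistent: since RD = p0 × (RR − 1), where p0 is the baseline (SABA-alone) risk, each pair of estimates implies the same baseline severe-exacerbation risk. This back-calculation is illustrative only; the implied baseline risk is derived, not reported in the meta-analysis:

```python
# Back-calculate the implied baseline risk p0 from a risk ratio (RR)
# and risk difference (RD), using RD = p0 * (RR - 1).
# Derived figures for illustration; not reported in the meta-analysis.
def implied_baseline_risk(rr: float, rd: float) -> float:
    return rd / (rr - 1.0)

p0_formoterol = implied_baseline_risk(0.65, -0.103)  # ICS-formoterol vs SABA
p0_ics_saba = implied_baseline_risk(0.84, -0.047)    # ICS-SABA vs SABA
print(f"{p0_formoterol:.3f}, {p0_ics_saba:.3f}")     # both near 0.29
```

Both comparisons imply a baseline severe-exacerbation risk of roughly 29% with SABA alone, which is a useful sanity check when reading paired RR/RD estimates.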

Conclusions: Both ICS-formoterol and ICS-SABA as reliever therapies reduce severe asthma exacerbations and improve symptom control compared with SABA alone, without increasing adverse events. ICS-formoterol may offer additional benefits over ICS-SABA in reducing exacerbations.

Implications for Practice: These findings support the use of ICS-containing reliever therapies over SABA alone in asthma management to reduce severe exacerbations and improve control. ICS-formoterol may be preferred when a greater reduction in exacerbations is desired.

Study Strengths and Limitations: High-certainty evidence strengthens these conclusions, but the lack of direct comparisons between ICS-formoterol and ICS-SABA and limited pediatric data are notable limitations.

Future Research: Direct head-to-head trials comparing ICS-formoterol and ICS-SABA, particularly in pediatric populations, are needed to confirm these findings.

Reference: Rayner DG, Ferri DM, Guyatt GH, et al. Inhaled Reliever Therapies for Asthma: A Systematic Review and Meta-Analysis. JAMA. Published online October 28, 2024. DOI: http://doi.org/10.1001/jama.2024.22700

 


RCT: Early DOACs Safe and Non-Inferior to Delayed Initiation Post-Stroke with Atrial Fibrillation

28 Oct, 2024 | 17:52h | UTC

Background: Atrial fibrillation increases ischaemic stroke risk, and patients are prone to recurrence. Prompt anticoagulation post-stroke is critical, but optimal timing is unclear due to bleeding concerns. Guidelines often delay DOAC initiation without strong evidence.

Objective: To determine if early DOAC initiation (≤4 days) is non-inferior to delayed initiation (7–14 days) in preventing recurrent ischaemic events without increasing intracranial haemorrhage risk in patients with acute ischaemic stroke and atrial fibrillation.

Methods: In this multicentre, open-label, blinded-endpoint, phase 4 randomised controlled trial at 100 UK hospitals, 3,621 adults with atrial fibrillation and acute ischaemic stroke were randomised to early or delayed DOAC initiation. Eligibility required physician uncertainty about timing. Participants and clinicians were unmasked; outcomes were adjudicated by a masked committee. The primary outcome was a composite of recurrent ischaemic stroke, symptomatic intracranial haemorrhage, unclassifiable stroke, or systemic embolism within 90 days.

Results: Among 3,621 patients (mean age 78.5 years; 45% female), the primary outcome occurred in 59 patients (3.3%) in both the early and delayed groups (adjusted risk difference 0.0%; 95% CI, –1.1 to 1.2). The upper confidence limit fell below the 2% non-inferiority margin (p=0.0003), confirming non-inferiority. Symptomatic intracranial haemorrhage rates were similar (0.6% early vs 0.7% delayed; p=0.78), and there were no significant differences in mortality and no heterogeneity across subgroups.
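The non-inferiority logic in these results is simple: early initiation is declared non-inferior when the upper bound of the confidence interval for the risk difference (early minus delayed) lies below the prespecified margin. A minimal sketch of that decision rule, using the trial's reported figures:

```python
# Decision rule for a non-inferiority comparison on a risk difference.
# The function is an illustrative sketch; the 1.2% upper bound and 2%
# margin are the figures reported in the OPTIMAS trial.
def non_inferior(ci_upper_pct: float, margin_pct: float) -> bool:
    """True if the CI upper bound for the risk difference is below the margin."""
    return ci_upper_pct < margin_pct

print(non_inferior(1.2, 2.0))  # upper bound 1.2% < 2% margin
```

Note that non-inferiority does not imply superiority: the interval (–1.1 to 1.2) also includes zero, so early initiation is neither better nor worse by this analysis, only demonstrably "not unacceptably worse."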

Conclusions: Early DOAC initiation within 4 days is non-inferior to delayed initiation in preventing recurrent events without increasing intracranial haemorrhage risk. Findings challenge guidelines advising delayed anticoagulation and support early initiation regardless of stroke severity.

Implications for Practice: Clinicians should consider starting DOACs within 4 days post-stroke in atrial fibrillation patients. Early initiation is safe and effective, potentially improving outcomes and suggesting guidelines may need revision.

Study Strengths and Limitations: Strengths include large sample size and masked outcome adjudication. Limitations include exclusion of patients with very severe strokes and low event rates, potentially limiting detection of rare adverse events.

Future Research: Further studies should explore optimal DOAC timing within 4 days and assess safety in patients with severe strokes or extensive haemorrhagic transformation.

Reference: Werring DJ, Dehbi HM, Ahmed N, et al. Optimal timing of anticoagulation after acute ischaemic stroke with atrial fibrillation (OPTIMAS): a multicentre, blinded-endpoint, phase 4, randomised controlled trial. Lancet. 2024; DOI: http://doi.org/10.1016/S0140-6736(24)02197-4

 


Post-trial Follow-up: Empagliflozin Shows Sustained Benefits Post-Discontinuation in Chronic Kidney Disease

25 Oct, 2024 | 20:29h | UTC

Background: Chronic kidney disease (CKD) progression leads to end-stage kidney disease, affecting quality of life and increasing cardiovascular morbidity and mortality. Empagliflozin, an SGLT2 inhibitor, has shown renal and cardiovascular benefits during active treatment. The persistence of these effects post-discontinuation is uncertain.

Objective: To evaluate how the cardiorenal benefits of empagliflozin evolve after stopping the medication, by assessing the composite outcome of kidney disease progression or cardiovascular death during both the active trial and a subsequent post-trial follow-up.

Methods: In the EMPA-KIDNEY trial, 6609 patients with CKD were randomized to receive empagliflozin 10 mg daily or placebo and were followed for a median of 2 years during the active trial. Eligible patients had an eGFR of 20–45 ml/min/1.73 m² or an eGFR of 45–90 ml/min/1.73 m² with a urinary albumin-to-creatinine ratio ≥200 mg/g. After the active trial, 4891 surviving patients (74%) consented to a 2-year post-trial follow-up without the trial medication, although open-label SGLT2 inhibitors could be prescribed by local practitioners. The primary outcome was a composite of kidney disease progression or cardiovascular death assessed from the start of the active trial to the end of the post-trial period.

Results: During the combined active and post-trial periods, a primary outcome event occurred in 26.2% of patients in the empagliflozin group and 30.3% in the placebo group (hazard ratio [HR], 0.79; 95% confidence interval [CI], 0.72–0.87). During the post-trial period alone, the HR was 0.87 (95% CI, 0.76–0.99), indicating sustained benefits after discontinuation. The risk of kidney disease progression was 23.5% with empagliflozin versus 27.1% with placebo. Cardiovascular death occurred in 3.8% of the empagliflozin group and 4.9% of the placebo group (HR, 0.75; 95% CI, 0.59–0.95). There was no significant difference in noncardiovascular mortality.

Conclusions: Empagliflozin continued to confer cardiorenal benefits for up to 12 months after discontinuation in patients with CKD at risk for progression. The sustained reduction in kidney disease progression and cardiovascular death suggests long-term advantages of empagliflozin beyond active treatment, supporting its role in CKD management.

Implications for Practice: These findings support the early initiation and continued use of empagliflozin in patients with CKD to maximize long-term cardiorenal benefits. Clinicians should consider empagliflozin as part of standard care for a broad range of CKD patients, regardless of diabetes status, to slow disease progression and reduce cardiovascular risk.

Study Strengths and Limitations: While the study’s large, diverse CKD population and extended follow-up enhance its generalizability, reliance on local creatinine measurements and lack of hospitalization data during post-trial follow-up are limitations.

Future Research: Further studies should explore the mechanisms underlying the sustained benefits of empagliflozin after discontinuation and assess long-term effects on hospitalization and quality of life in CKD patients.

Reference: The EMPA-KIDNEY Collaborative Group. Long-Term Effects of Empagliflozin in Patients with Chronic Kidney Disease. New England Journal of Medicine. 2024; DOI: http://doi.org/10.1056/NEJMoa2409183

 


Cohort Study: Levonorgestrel IUD Use Linked to Increased Breast Cancer Risk in Premenopausal Women

20 Oct, 2024 | 18:13h | UTC

Background: Levonorgestrel-releasing intrauterine systems (LNG-IUSs) are increasingly used, especially among Danish premenopausal women over 30 years old, as a preferred method of hormonal contraception. Previous studies have suggested an increased risk of breast cancer with LNG-IUS use but did not adequately address the duration of continuous use or account for other hormonal contraceptive exposures.

Objective: To assess the risk of breast cancer associated with continuous use of LNG-IUSs, accounting for other hormonal exposures.

Methods: In this nationwide Danish cohort study, 78,595 first-time LNG-IUS users aged 15–49 years from 2000 to 2019 were identified and matched 1:1 by birth year to nonusers of hormonal contraceptives. Exclusion criteria included prior hormonal contraceptive use within 5 years, previous cancer, postmenopausal hormone therapy, and pregnancy at baseline. Participants were followed from initiation until breast cancer diagnosis, other cancer, pregnancy, hormone therapy initiation, emigration, death, or December 31, 2022. Cox proportional hazards models adjusted for confounders estimated hazard ratios (HRs) for breast cancer associated with continuous LNG-IUS use.

Results: During a mean follow-up of 6.8 years, 1,617 breast cancer cases occurred: 720 among LNG-IUS users and 897 among nonusers. The mean age was 38 years. Continuous LNG-IUS use was associated with a higher breast cancer risk compared to nonuse (HR, 1.4; 95% CI, 1.2–1.5). HRs by duration were 1.3 (95% CI, 1.1–1.5) for 0–5 years, 1.4 (95% CI, 1.1–1.7) for >5–10 years, and 1.8 (95% CI, 1.2–2.6) for >10–15 years. Excess breast cancer cases per 10,000 users were 14 (95% CI, 6–23), 29 (95% CI, 9–50), and 71 (95% CI, 15–127), respectively. The trend test for duration was not statistically significant (P = .15).

Conclusions: Continuous use of LNG-IUSs was associated with an increased risk of breast cancer among women aged 15–49 years compared to nonuse of hormonal contraceptives. The absolute increase in risk was low.

Implications for Practice: Healthcare providers should inform women about the potential increased breast cancer risk associated with LNG-IUS use, especially considering its widespread and long-term use among premenopausal women. While the absolute risk increase is small, this information is essential for making informed contraceptive choices.

Study Strengths and Limitations: Strengths include the large, nationwide cohort and adjustment for multiple confounders. Limitations include potential underestimation of risk due to unrecorded LNG-IUS removals before the recommended duration, lack of a statistically significant trend with duration suggesting possible low statistical precision or non-causal association, and the possibility of unmeasured confounding.

Future Research: Further studies are needed to confirm these findings, clarify the causal relationship, and understand the mechanisms underlying the potential increased breast cancer risk with LNG-IUS use.

Reference: Mørch LS, Meaidi A, Corn G, et al. Breast Cancer in Users of Levonorgestrel-Releasing Intrauterine Systems. JAMA. Published online October 16, 2024. DOI: http://doi.org/10.1001/jama.2024.18575

 


Observational Study: Kidney Transplantation from Donors with HIV Safe for Recipients with HIV

20 Oct, 2024 | 17:30h | UTC

Background: Kidney transplantation improves survival for persons with HIV and end-stage renal disease but is limited by organ shortages. Transplantation from donors with HIV to recipients with HIV is emerging under the HIV Organ Policy Equity (HOPE) Act but is currently approved only for research. The Department of Health and Human Services is considering expanding this practice to clinical care, but data are limited to small case series without control groups.

Objective: To assess whether kidney transplantation from donors with HIV is noninferior to transplantation from donors without HIV regarding safety outcomes in recipients with HIV.

Methods: In an observational, noninferiority study at 26 U.S. centers, 408 transplantation candidates with HIV were enrolled. Of these, 198 received a kidney transplant: 99 from deceased donors with HIV and 99 from deceased donors without HIV. The primary outcome was a composite safety event (death, graft loss, serious adverse event, HIV breakthrough infection, persistent failure of HIV treatment, or opportunistic infection), assessed for noninferiority (upper bound of 95% CI for hazard ratio ≤3.00). Secondary outcomes included overall survival, survival without graft loss, rejection rates, infections, cancer, and HIV superinfection.

Results: The adjusted hazard ratio for the composite primary outcome was 1.00 (95% CI, 0.73 to 1.38), demonstrating noninferiority. Overall survival at 1 year was 94% for recipients of kidneys from donors with HIV and 95% for those from donors without HIV; at 3 years, survival was 85% and 87%, respectively. Survival without graft loss at 1 year was 93% vs. 90%; at 3 years, 84% vs. 81%. Rejection rates were similar at 1 year (13% vs. 21%) and 3 years (21% vs. 24%). The incidence of serious adverse events, infections, surgical or vascular complications, and cancer did not differ significantly between groups. HIV breakthrough infection occurred more frequently among recipients of kidneys from donors with HIV (incidence rate ratio 3.14; 95% CI, 1.02 to 9.63), primarily due to nonadherence to antiretroviral therapy; viral suppression was regained in all cases. One potential HIV superinfection occurred without clinical consequences.
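For readers who want to verify how noninferiority is declared here, the criterion stated in the Methods (upper bound of the 95% CI for the hazard ratio ≤3.00) can be checked directly against the reported result. A minimal illustrative sketch, not part of the trial report:

```python
# Noninferiority check as pre-specified in the study design:
# the upper bound of the 95% CI for the hazard ratio must be <= 3.00.
NONINFERIORITY_MARGIN = 3.00

def is_noninferior(hr_upper_ci: float, margin: float = NONINFERIORITY_MARGIN) -> bool:
    """Return True if the upper confidence bound stays within the margin."""
    return hr_upper_ci <= margin

# Reported result: adjusted HR 1.00 (95% CI, 0.73 to 1.38)
print(is_noninferior(1.38))  # True -> noninferiority demonstrated
```

Because the upper bound (1.38) sits well below the prespecified margin of 3.00, the safety outcome meets the noninferiority criterion.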

Conclusions: Kidney transplantation from donors with HIV to recipients with HIV was noninferior to transplantation from donors without HIV regarding safety outcomes, supporting the expansion of this practice from research to clinical care.

Implications for Practice: Expanding kidney transplantation involving donors and recipients with HIV to clinical practice could increase organ availability and reduce disparities in transplantation access for persons with HIV. Clinicians should monitor for HIV breakthrough infections and encourage adherence to antiretroviral therapy.

Study Strengths and Limitations: Strengths include a multicenter design and direct comparison groups. Limitations involve the observational design, inability to randomize due to allocation constraints, and heterogeneity in immunosuppression protocols.

Future Research: Further studies are needed to confirm these findings, evaluate long-term outcomes, and assess potential risks such as HIV superinfection.

Reference: Durand CM, et al., for the HOPE in Action Investigators. Safety of Kidney Transplantation from Donors with HIV. New England Journal of Medicine. 2024; DOI: http://doi.org/10.1056/NEJMoa2403733

 


RCT: Low-Dose Amitriptyline Effective as Second-Line Treatment for Irritable Bowel Syndrome

20 Oct, 2024 | 15:56h | UTC

Background: Most patients with irritable bowel syndrome (IBS) are managed in primary care. When first-line therapies—such as dietary changes and antispasmodic drugs—are ineffective, the UK National Institute for Health and Care Excellence (NICE) recommends considering low-dose tricyclic antidepressants as second-line treatment. However, their effectiveness in primary care is uncertain, and they are infrequently prescribed in this setting.

Objective: To determine whether titrated low-dose amitriptyline is effective as a second-line treatment for IBS in primary care.

Methods: In a randomized, double-blind, placebo-controlled, phase 3 trial (ATLANTIS) conducted at 55 general practices in England, 463 adults aged 18 years or older with Rome IV IBS and ongoing symptoms despite first-line therapies were randomized 1:1 to receive low-dose oral amitriptyline (10 mg once daily) or placebo for 6 months. Dose titration over 3 weeks up to 30 mg once daily was allowed according to symptoms and tolerability. The primary outcome was the IBS Severity Scoring System (IBS-SSS) score at 6 months. Secondary outcomes included subjective global assessment (SGA) of relief of IBS symptoms, adequate relief for at least 50% of weeks, and adverse events.

Results: Among 463 participants (mean age 48.5 years; 68% female), low-dose amitriptyline was superior to placebo at 6 months, with a significant mean difference in IBS-SSS score between groups (–27.0; 95% CI, –46.9 to –7.1; P = .0079). More participants reported relief of IBS symptoms with amitriptyline compared to placebo (61% vs 45%; odds ratio [OR] 1.78; 95% CI, 1.19–2.66; P = .0050). Adequate relief of IBS symptoms for at least 50% of weeks was higher with amitriptyline (41% vs 30%; OR 1.56; 95% CI, 1.20–2.03; P = .0008). Adverse events were more frequent with amitriptyline, mainly related to anticholinergic effects such as dry mouth (54%) and drowsiness (53%), but most were mild. Withdrawals due to adverse events were slightly higher with amitriptyline (13% vs 9%).

Conclusions: Low-dose amitriptyline was superior to placebo as a second-line treatment for IBS in primary care and was safe and well tolerated.

Implications for Practice: General practitioners should consider prescribing low-dose amitriptyline to patients with IBS whose symptoms do not improve with first-line therapies, providing appropriate support for patient-led dose titration.

Study Strengths and Limitations: Strengths include the large sample size, primary care setting, and extended treatment duration. Limitations involve underrepresentation of patients with IBS with constipation, potential unblinding due to side effects, and a predominantly White participant population.

Future Research: Further trials assessing amitriptyline as a first-line therapy for IBS in primary care and studies on long-term outcomes are recommended.

Reference: Ford AC, Wright-Hughes A, Alderson SL, et al. Amitriptyline at Low-Dose and Titrated for Irritable Bowel Syndrome as Second-Line Treatment in Primary Care (ATLANTIS): a Randomised, Double-Blind, Placebo-Controlled, Phase 3 Trial. Lancet. 2023; DOI: http://doi.org/10.1016/S0140-6736(23)01523-4

 


RCT: Five-Fraction SBRT Noninferior to Conventional Radiotherapy in Localized Prostate Cancer

20 Oct, 2024 | 15:46h | UTC

Background: Prostate cancer poses a significant global health challenge, with radiotherapy being a common curative treatment for localized disease. Hypofractionation, delivering higher doses per session over fewer treatments, has potential benefits in efficacy and convenience. While moderately hypofractionated radiotherapy is established, the efficacy of stereotactic body radiotherapy (SBRT) delivering radiation in just five fractions remains uncertain.

Objective: To assess whether five-fraction SBRT is noninferior to conventionally or moderately hypofractionated radiotherapy regarding freedom from biochemical or clinical failure in patients with low-to-intermediate-risk localized prostate cancer.

Methods: In this phase 3, international, open-label randomized controlled trial (PACE-B), 874 men with stage T1–T2 prostate cancer, Gleason score ≤3+4, and prostate-specific antigen (PSA) ≤20 ng/mL were randomized 1:1 to receive SBRT (36.25 Gy in 5 fractions over 1–2 weeks) or control radiotherapy (78 Gy in 39 fractions over 7.5 weeks or 62 Gy in 20 fractions over 4 weeks). Androgen-deprivation therapy was not permitted. The primary endpoint was freedom from biochemical or clinical failure.

Results: Between August 2012 and January 2018, 874 patients were randomized (433 to SBRT and 441 to control radiotherapy) at 38 centers. Median age was 69.8 years, median PSA was 8.0 ng/mL, and 91.6% had intermediate-risk disease. At a median follow-up of 74.0 months, the 5-year incidence of freedom from biochemical or clinical failure was 95.8% in the SBRT group and 94.6% in the control group (unadjusted HR 0.73; 90% CI, 0.48 to 1.12; P=0.004 for noninferiority). Cumulative incidence of late Radiation Therapy Oncology Group (RTOG) grade 2 or higher genitourinary toxic effects at 5 years was higher with SBRT (26.9% vs. 18.3%; P<0.001), while gastrointestinal toxic effects were similar between groups (10.7% vs. 10.2%; P=0.94). Overall survival did not differ significantly (HR for death, 1.41; 95% CI, 0.90 to 2.20).

Conclusions: Five-fraction SBRT was noninferior to conventional or moderately hypofractionated radiotherapy in terms of biochemical or clinical failure in patients with low-to-intermediate-risk localized prostate cancer. SBRT may be an effective treatment option but is associated with a higher incidence of medium-term genitourinary toxic effects.

Implications for Practice: SBRT offers equivalent oncologic efficacy with the convenience of fewer treatment sessions, potentially reducing patient burden and healthcare resource utilization. Clinicians should consider SBRT for eligible patients but must inform them about the increased medium-term risk of genitourinary toxic effects.

Study Strengths and Limitations: Strengths include a large sample size, multicenter design, standardized radiotherapy protocols, and exclusion of hormonal therapy, minimizing confounding factors. Limitations involve the applicability of findings only to patients similar to those in the trial; some may now opt for active surveillance, and results may not extend to higher-risk populations.

Future Research: Further studies are needed to evaluate long-term outcomes of SBRT, its role in higher-risk patients, and strategies to mitigate genitourinary toxic effects.

Reference: van As N, Griffin C, Tree A, et al. Phase 3 Trial of Stereotactic Body Radiotherapy in Localized Prostate Cancer. New England Journal of Medicine. 2024; DOI: http://doi.org/10.1056/NEJMoa2403365

 


RCT: More Frequent Screening with Pressure-Supported SBTs Delayed Extubation in Mechanically Ventilated Adults

13 Oct, 2024 | 13:15h | UTC

Background: Prompt liberation from mechanical ventilation is crucial to reduce complications associated with prolonged ventilator use. The optimal frequency of weaning readiness screening and the most effective spontaneous breathing trial (SBT) technique are not well established.

Objective: To evaluate whether the frequency of screening for weaning readiness (once-daily vs more frequent) and the SBT technique used (pressure-supported vs T-piece) affect the time to successful extubation in adults receiving invasive mechanical ventilation.

Methods: In a multicenter randomized clinical trial with a 2×2 factorial design, 797 critically ill adults who had been mechanically ventilated for at least 24 hours were enrolled. Participants were randomized to either once-daily or more frequent screening for weaning readiness and to undergo either pressure-supported SBTs (pressure support >0 to ≤8 cm H₂O with PEEP >0 to ≤5 cm H₂O) or T-piece SBTs, each lasting 30–120 minutes. The primary outcome was time to successful extubation, defined as the start of unsupported spontaneous breathing that was sustained for at least 48 hours after extubation.

Results: Among the 797 patients (mean age 62.4 years; 59.2% male), there was no significant difference in time to successful extubation when comparing screening frequencies (hazard ratio [HR] 0.88; 95% CI, 0.76–1.03; P = .12) or SBT techniques (HR 1.06; 95% CI, 0.91–1.23; P = .45) individually. However, a significant interaction between screening frequency and SBT technique was identified (P = .009). Specifically, in patients undergoing pressure-supported SBTs, more frequent screening delayed time to successful extubation compared to once-daily screening (HR 0.70; 95% CI, 0.50–0.96; P = .02). Conversely, when T-piece SBTs were used, the frequency of screening did not significantly affect extubation time. The median time to successful extubation was shortest in the once-daily screening with pressure-supported SBT group (2.0 days) and longest in the more frequent screening with pressure-supported SBT group (3.9 days).

Conclusions: More frequent screening combined with pressure-supported SBTs resulted in a longer time to successful extubation, suggesting this combination may delay weaning from mechanical ventilation. Once-daily screening with pressure-supported SBTs showed a trend toward faster extubation compared with other strategies, although this was not statistically significant.

Implications for Practice: Clinicians should be cautious about combining more frequent screening with pressure-supported SBTs, as this may unintentionally prolong mechanical ventilation. Adopting once-daily screening with pressure-supported SBTs might facilitate earlier extubation.

Study Strengths and Limitations: Strengths of the study include its large sample size, multicenter design, and high adherence to the intervention protocols. Limitations involve the unexpected significant interaction between interventions, which may limit the generalizability of the results.

Future Research: Additional studies are warranted to confirm the interaction between screening frequency and SBT technique and to explore the mechanisms underlying the delayed extubation with more frequent screening and pressure-supported SBTs.

Reference: Burns KEA, et al. Frequency of Screening and Spontaneous Breathing Trial Techniques: A Randomized Clinical Trial. JAMA. 2024; DOI: http://doi.org/10.1001/jama.2024.20631

 


Meta-analysis: Colchicine Reduces Ischemic Stroke and MACE in Patients with Prior Stroke or Coronary Disease

13 Oct, 2024 | 12:10h | UTC

Background: Colchicine is recommended for secondary prevention in cardiovascular disease, but its efficacy in preventing ischemic stroke and benefits in key patient subgroups remain uncertain. Inflammation plays a significant role in stroke and cardiovascular events, and colchicine’s anti-inflammatory properties may confer additional protective effects.

Objective: To evaluate the efficacy of colchicine for secondary prevention of ischemic stroke and major adverse cardiovascular events (MACE) in patients with prior stroke or coronary disease, and to assess its safety profile across key clinical subgroups.

Methods: A trial-level meta-analysis was conducted, including six randomized controlled trials with a total of 14,934 patients with prior stroke or coronary disease. Trials comparing colchicine with placebo or no colchicine for at least three months were included. The primary efficacy outcomes were ischemic stroke and MACE, defined as a composite of ischemic stroke, myocardial infarction, coronary revascularization, or cardiovascular death. Secondary outcomes included serious safety events and mortality.

Results: Colchicine reduced the risk of ischemic stroke by 27% (1.8% vs. 2.5% [186 events]; RR 0.73, 95% CI 0.58–0.90; P = .004) and MACE by 27% (505 events [6.8%] vs. 693 events [9.4%]; RR 0.73, 95% CI 0.65–0.81; P < .001). The efficacy was consistent across key subgroups, including sex, age (<70 vs. ≥70 years), diabetes status, and statin use at baseline. Colchicine was not associated with an increase in serious safety outcomes, all-cause mortality (201 deaths [2.7%] vs. 181 deaths [2.4%]; RR 1.09, 95% CI 0.89–1.33; P = .39), cardiovascular death, or non-cardiovascular death.

Conclusions: In patients with prior stroke or coronary disease, colchicine significantly reduces the risk of ischemic stroke and MACE without increasing serious adverse events or mortality. These findings support the use of low-dose colchicine as an effective secondary prevention therapy in a broad cardiovascular patient population.

Implications for Practice: Clinicians should consider incorporating low-dose colchicine into secondary prevention regimens for patients with prior stroke or coronary disease, given its demonstrated efficacy and acceptable safety profile. Its affordability and widespread availability make it a practical option for diverse healthcare settings, including low- and middle-income countries.

Study Strengths and Limitations: Strengths include a large pooled sample size and inclusion of multiple trials across various patient populations, enhancing generalizability. Limitations involve potential performance bias in non-placebo-controlled trials, differences in stroke outcome definitions among studies, and the inability to perform individual patient data analyses.

Future Research: Further studies are needed to evaluate the long-term effects of colchicine on vascular events and cognitive decline in stroke patients, assess safety in patients with renal impairment, and explore its impact on non-cardiovascular mortality.

Reference: Fiolet ATL, et al. Colchicine for secondary prevention of ischaemic stroke and atherosclerotic events: a meta-analysis of randomised trials. eClinicalMedicine. 2024; DOI: https://doi.org/10.1016/j.eclinm.2024.102835

 


RCT: Liberal Transfusion Strategy Reduced Unfavorable Neurological Outcomes in Acute Brain Injury

12 Oct, 2024 | 11:01h | UTC

Background: Patients with acute brain injury frequently develop anemia, and the optimal hemoglobin threshold for red blood cell transfusion in this population remains uncertain. Previous studies have shown conflicting results regarding the benefits of liberal versus restrictive transfusion strategies on neurological outcomes.

Objective: To determine whether a liberal transfusion strategy (hemoglobin threshold <9 g/dL) reduces the occurrence of unfavorable neurological outcomes at 180 days compared to a restrictive strategy (hemoglobin threshold <7 g/dL) in patients with acute brain injury.

Methods: The TRAIN trial, a multicenter, phase 3, randomized clinical trial, was conducted across 72 ICUs in 22 countries. It included patients with traumatic brain injury, aneurysmal subarachnoid hemorrhage, or intracerebral hemorrhage who had hemoglobin levels below 9 g/dL within the first 10 days post-injury. Participants were randomized to a liberal strategy (transfusion triggered by hemoglobin <9 g/dL) or a restrictive strategy (transfusion triggered by hemoglobin <7 g/dL). The primary outcome was the occurrence of an unfavorable neurological outcome, defined as a Glasgow Outcome Scale Extended score of 1–5 at 180 days.

Results: Among 820 patients who completed the trial (mean age 51 years; 45.9% women), 806 had data on the primary outcome (393 liberal, 413 restrictive). The liberal group received a median of 2 units of blood (IQR, 1–3), while the restrictive group received a median of 0 units (IQR, 0–1), with an absolute mean difference of 1.0 unit (95% CI, 0.87–1.12 units). At 180 days, 62.6% of patients in the liberal group had an unfavorable neurological outcome compared to 72.6% in the restrictive group (absolute difference –10.0%; 95% CI, –16.5% to –3.6%; adjusted relative risk 0.86; P = .002). The effect was consistent across prespecified subgroups. Cerebral ischemic events were lower in the liberal group (8.8% vs 13.5%; relative risk 0.65; 95% CI, 0.44–0.97). No significant differences were observed in 28-day survival or other secondary outcomes.
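To put the reported absolute difference in context, the number needed to treat can be derived from the published proportions. This is a small illustrative calculation, not part of the trial report:

```python
# Illustrative arithmetic from the reported TRAIN outcomes (not in the paper):
# proportion with an unfavorable neurological outcome at 180 days.
liberal = 0.626      # 62.6% with the liberal transfusion strategy
restrictive = 0.726  # 72.6% with the restrictive strategy

arr = restrictive - liberal  # absolute risk reduction
nnt = 1 / arr                # number needed to treat
print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")  # ARR = 10.0%, NNT ≈ 10
```

In other words, roughly 10 patients would need to be managed with the liberal threshold for one additional patient to avoid an unfavorable neurological outcome, consistent with the reported absolute difference of –10.0%.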

Conclusions: In patients with acute brain injury and anemia, a liberal transfusion strategy resulted in a lower rate of unfavorable neurological outcomes at 180 days compared to a restrictive strategy.

Implications for Practice: A liberal transfusion threshold of 9 g/dL may improve neurological outcomes in patients with acute brain injury by reducing cerebral ischemic events. Clinicians should consider adopting a higher hemoglobin threshold for transfusion in this population, weighing the benefits against potential risks associated with transfusions, such as infection or lung injury.

Study Strengths and Limitations: Strengths include the large, multicenter international design and blinding of outcome assessors. Limitations involve the open-label nature, potential detection bias in assessing cerebral ischemic events, lack of standardized neuroprognostication, and incomplete assessment of concomitant interventions.

Future Research: Further studies are needed to confirm these findings in specific subgroups of acute brain injury, to explore optimal transfusion strategies, and to assess long-term outcomes and potential risks associated with liberal transfusion thresholds.

Reference: Taccone FS, et al. Restrictive vs Liberal Transfusion Strategy in Patients With Acute Brain Injury: The TRAIN Randomized Clinical Trial. JAMA. 2024; DOI: http://doi.org/10.1001/jama.2024.20424

 


RCT: Tele-ICU Intervention Did Not Significantly Reduce ICU Length of Stay in Critically Ill Patients

10 Oct, 2024 | 17:40h | UTC

Background: Telemedicine in critical care, particularly through tele-ICU interventions, has gained traction as a potential solution to the global shortage of intensivists. These systems, which include remote intensivist-led care, have shown promise in improving outcomes, but robust evidence from randomized clinical trials is lacking. The TELESCOPE trial was conducted to assess whether daily remote multidisciplinary rounds combined with monthly audit and feedback meetings could reduce ICU length of stay (LOS) compared with standard care.

Objective: The primary objective of the TELESCOPE trial was to determine if a tele-ICU intervention, involving remote daily multidisciplinary rounds and monthly performance audits led by a board-certified intensivist, could reduce ICU LOS compared to usual care.

Methods: This was a cluster randomized clinical trial involving 30 general ICUs in Brazil, enrolling all consecutive adult patients admitted between June 2019 and April 2021. A total of 17,024 patients were included, with 15 ICUs receiving the tele-ICU intervention and 15 receiving standard care. The intervention consisted of daily remote rounds led by an intensivist, monthly audit meetings, and the provision of evidence-based protocols. The primary outcome was ICU LOS, and secondary outcomes included hospital mortality, ICU efficiency, and various infection rates.

Results: There was no statistically significant difference in ICU LOS between the intervention and control groups (mean LOS: 8.1 days in the tele-ICU group vs. 7.1 days in the usual care group; percentage change, 8.2%; 95% CI, −5.4% to 23.8%; P = .24). Hospital mortality was also similar (41.6% vs. 40.2%; odds ratio, 0.93; 95% CI, 0.78-1.12). No significant differences were found in secondary outcomes, including rates of central line-associated bloodstream infections, ventilator-associated events, or ventilator-free days at 28 days.

Conclusions: The tele-ICU intervention did not reduce ICU LOS in critically ill patients. The lack of observed benefit may be due to suboptimal implementation, variable adherence by local teams, and the high severity of illness in the patient population.

Implications for Practice: While tele-ICU models hold potential, this study suggests that remote intensivist-led care, as implemented in the TELESCOPE trial, may not be sufficient to improve outcomes in high-resource ICU settings with critically ill patients.

Study Strengths and Limitations: The study’s strengths include its pragmatic design, the large number of patients enrolled, and its reflection of real-world ICU settings. However, limitations include the unblinded nature of the trial, suboptimal adherence to the tele-ICU protocol in some centers, and the strain on ICU resources during the COVID-19 pandemic, which may have affected the trial’s outcomes.

Future Research: Further studies should explore how tele-ICU interventions can be optimized, with a focus on identifying the ICU environments and patient populations most likely to benefit. Trials should also address potential barriers to effective implementation, such as staff engagement and local resource constraints.

Reference: Pereira AJ, et al. Effect of Tele-ICU on Clinical Outcomes of Critically Ill Patients: The TELESCOPE Randomized Clinical Trial. JAMA. 2024; DOI: http://doi.org/10.1001/jama.2024.20651


Meta-Analysis: Oral Anticoagulant Monotherapy Reduced Bleeding Without Increasing Ischemic Events in AF and Stable CAD

9 Oct, 2024 | 11:13h | UTC

Background: Atrial fibrillation (AF) patients with stable coronary artery disease (CAD) often require both oral anticoagulants (OACs) for stroke prevention and antiplatelet therapy for CAD management. However, dual antithrombotic therapy (DAT) increases bleeding risk. The optimal antithrombotic regimen in this population remains unclear.

Objective: To evaluate whether OAC monotherapy reduces major bleeding without increasing ischemic events compared to DAT in patients with AF and stable CAD.

Methods: This meta-analysis followed PRISMA guidelines, pooling data from three randomized controlled trials (RCTs) involving 3,945 patients with AF and stable CAD. The included trials used various OACs (rivaroxaban, edoxaban, or warfarin/DOAC) and compared them with DAT. The primary outcomes were all-cause death, cardiovascular death, and major bleeding. Secondary outcomes included stroke (ischemic and hemorrhagic) and myocardial infarction (MI).

Results: OAC monotherapy significantly reduced the risk of major bleeding compared to DAT (3.4% vs 5.8%; RR: 0.55; 95% CI: 0.32–0.95; p=0.03). There were no significant differences between groups in all-cause death (4.2% vs 5.4%; RR: 0.85; 95% CI: 0.49–1.48; p=0.57), cardiovascular death (2.4% vs 3.0%; RR: 0.84; 95% CI: 0.50–1.41; p=0.50), any stroke event (2.2% vs 3.1%; RR: 0.74; 95% CI: 0.46–1.18; p=0.21), or myocardial infarction (RR: 1.57; 95% CI: 0.79–3.12; p=0.20).

Conclusions: In patients with AF and stable CAD, OAC monotherapy significantly reduces major bleeding risk compared to DAT without increasing the risk of ischemic events or mortality.

Implications for Practice: OAC monotherapy may be a preferable antithrombotic strategy in patients with AF and stable CAD, balancing effective thromboembolic protection with a lower bleeding risk. Clinicians should consider OAC monotherapy to simplify antithrombotic regimens and reduce bleeding complications, especially beyond one year after coronary events or interventions.

Study Strengths and Limitations: Strengths include the inclusion of recent large-scale RCTs and the focus on a clinically relevant patient population. Limitations involve reliance on study-level data, the small number of included trials, and potential heterogeneity among studies. The duration of DAT was not consistently reported across trials; an individual patient data meta-analysis could provide more detailed insights.

Future Research: Additional large-scale RCTs and individual patient data meta-analyses are needed to confirm these findings and to determine the optimal duration and type of antithrombotic therapy in patients with AF and stable CAD.

Reference: Ahmed M., et al. (2024). Meta-Analysis Comparing Oral Anticoagulant Monotherapy Versus Dual Antithrombotic Therapy in Patients With Atrial Fibrillation and Stable Coronary Artery Disease. Clin Cardiol, 47(10), e70026. DOI: https://doi.org/10.1002/clc.70026

 


Umbrella Review: 5-Day Antibiotic Courses Effective for Non-ICU Community-Acquired Pneumonia or Exacerbations of COPD

6 Oct, 2024 | 17:12h | UTC

Background: Respiratory tract infections (RTIs) significantly contribute to global disease burden and antibiotic usage. Optimizing antibiotic treatment duration is crucial for antimicrobial stewardship to minimize resistance. Despite evidence supporting shorter antibiotic courses for RTIs, prolonged treatment durations persist in clinical practice.

Objective: To evaluate the current evidence base for optimal antibiotic treatment durations in RTIs and determine whether shorter courses are supported.

Methods: An umbrella review was conducted by searching Ovid MEDLINE, Embase, and Web of Science up to May 1, 2024, without language restrictions. Systematic reviews comparing antibiotic treatment durations for community-acquired pneumonia (CAP), acute exacerbations of chronic obstructive pulmonary disease (AECOPD), hospital-acquired pneumonia (HAP), acute sinusitis, and streptococcal pharyngitis, tonsillitis, or pharyngotonsillitis in adults were included. Pediatric-focused reviews were excluded. Quality assessments utilized the AMSTAR 2 tool for reviews and the Cochrane risk-of-bias tool (version 1) for randomized controlled trials (RCTs). The GRADE approach determined the overall quality of evidence.

Results: Thirty systematic reviews were included, generally of low to critically low quality. For non-ICU CAP (14 reviews), moderate-quality evidence supports a 5-day antibiotic course, with insufficient data for shorter durations. In AECOPD (eight reviews), a 5-day treatment was non-inferior to longer courses regarding clinical and microbiological cure, with similar or fewer adverse events. Evidence for non-ventilator-associated HAP is lacking. In acute sinusitis, shorter regimens appear effective, but further research is needed in the subset of patients who genuinely require antibiotics. For pharyngotonsillitis (eight reviews), evidence supports short-course cephalosporin therapy but not short-course penicillin when dosed three times daily.

Conclusions: Evidence supports a 5-day antibiotic treatment duration for non-ICU CAP and AECOPD in clinically improving patients. Implementing this evidence in practice is essential. High-quality RCTs are needed to assess shorter durations for CAP and AECOPD, establish optimal durations for HAP and acute sinusitis, and evaluate short-course penicillin with optimal dosing in pharyngotonsillitis.

Implications for Practice: Clinicians should adopt 5-day antibiotic courses for non-ICU CAP and AECOPD in patients showing clinical improvement, aligning with antimicrobial stewardship objectives to reduce unnecessary antibiotic exposure and resistance development.

Study Strengths and Limitations: Strengths include a comprehensive search and assessment of systematic reviews and meta-analyses. Limitations involve the generally low quality of included reviews and RCTs, with many studies exhibiting unclear or high risk of bias. Heterogeneity in definitions of short-course treatment and variability in patient populations and settings were also noted.

Future Research: High-quality RCTs are required to investigate antibiotic durations shorter than 5 days for CAP and AECOPD, determine optimal treatment lengths for HAP and acute sinusitis, and assess short-course penicillin therapy with optimal dosing schedules in pharyngotonsillitis.

Reference: Kuijpers SME, et al. (2024). The evidence base for the optimal antibiotic treatment duration of upper and lower respiratory tract infections: an umbrella review. Lancet Infect Dis. DOI: https://doi.org/10.1016/S1473-3099(24)00456-0


Aspirin vs. Clopidogrel Monotherapy After PCI: 1-Year Follow-Up of the STOPDAPT-3 Trial

6 Oct, 2024 | 16:51h | UTC

Background: Following percutaneous coronary intervention (PCI) with drug-eluting stents (DES), patients are typically managed with dual antiplatelet therapy (DAPT). Recent evidence suggests that monotherapy with a P2Y12 inhibitor may reduce bleeding risks compared to aspirin monotherapy, but no prior trials have directly compared these regimens beyond one month of DAPT. The STOPDAPT-3 trial aimed to evaluate the cardiovascular and bleeding outcomes of aspirin versus clopidogrel monotherapy following a short duration of DAPT.

Objective: To compare the efficacy and safety of aspirin monotherapy with clopidogrel monotherapy from 1 month to 1 year after PCI with DES, focusing on cardiovascular and bleeding outcomes.

Methods: The STOPDAPT-3 trial was a prospective, multicenter, open-label, randomized clinical trial conducted in Japan. A total of 6002 patients with acute coronary syndrome (ACS) or high bleeding risk (HBR) were randomized to either a 1-month DAPT regimen followed by aspirin monotherapy (aspirin group, n=2920) or 1-month prasugrel monotherapy followed by clopidogrel monotherapy (clopidogrel group, n=2913). The primary endpoints were a composite of cardiovascular events (cardiovascular death, myocardial infarction, stent thrombosis, or ischemic stroke) and major bleeding (Bleeding Academic Research Consortium 3 or 5).

Results: At the 1-year follow-up, both the aspirin and clopidogrel groups had comparable cardiovascular outcomes (4.5% incidence in both groups; HR 1.00, 95% CI 0.77–1.30, P=0.97). Bleeding rates were also similar between groups (aspirin: 2.0%; clopidogrel: 1.9%; HR 1.02, 95% CI 0.69–1.52, P=0.92). No significant differences were observed in secondary outcomes, including all-cause mortality, myocardial infarction, stent thrombosis, or revascularization. Additionally, adherence to the assigned monotherapy at 1 year was high in both groups (87.5% for aspirin; 87.2% for clopidogrel).

Conclusions: Aspirin monotherapy, compared to clopidogrel monotherapy, resulted in similar cardiovascular and bleeding outcomes during the 1-year follow-up after PCI with DES. Both therapies appear equally effective and safe for use following short-duration DAPT.

Implications for Practice: These findings suggest that either aspirin or clopidogrel monotherapy could be safely used following a short course of DAPT, with similar clinical outcomes. In regions where more potent P2Y12 inhibitors are not widely used, aspirin monotherapy remains a cost-effective and safe alternative.

Study Strengths and Limitations: The study’s strengths include a large sample size and a well-structured, multicenter design. Limitations include the lack of randomization after 1 month and the high prescription of proton pump inhibitors, which may have affected bleeding outcomes. Additionally, the follow-up period of 1 year may be too short to detect long-term differences.

Future Research: Longer-term studies are needed to confirm the findings, particularly regarding cardiovascular outcomes beyond 1 year. Further research is also required to evaluate the impact of aspirin versus more potent P2Y12 inhibitors in diverse populations and clinical settings.

Reference: Watanabe H., et al. (2024). Aspirin vs. clopidogrel monotherapy after percutaneous coronary intervention: 1-year follow-up of the STOPDAPT-3 trial. European Heart Journal. DOI: https://doi.org/10.1093/eurheartj/ehae617

 


RCT: H. pylori Screening Added to Fecal Immunochemical Testing Did Not Reduce Gastric Cancer Incidence or Mortality

4 Oct, 2024 | 11:00h | UTC

Background: Gastric cancer is a leading cause of cancer-related mortality worldwide, particularly in East Asia. Helicobacter pylori infection is a well-established risk factor for gastric cancer development. While eradication therapy may prevent gastric cancer, the effectiveness of community-based H. pylori screening on gastric cancer incidence and mortality remains uncertain.

Objective: To determine whether adding H. pylori stool antigen (HPSA) testing to fecal immunochemical test (FIT) screening reduces gastric cancer incidence and mortality compared to FIT screening alone.

Methods: In a pragmatic randomized clinical trial conducted in Changhua County, Taiwan (2014–2018), 152,503 residents aged 50 to 69 years eligible for biennial FIT screening were randomized to receive an invitation for HPSA testing plus FIT (n = 63,508) or FIT alone (n = 88,995). Participants in the HPSA + FIT group with positive HPSA results were offered antibiotic eradication therapy. Primary outcomes were gastric cancer incidence and mortality, assessed via national cancer and death registries.

Results: Participation rates were higher in the HPSA + FIT group (49.6%) than in the FIT-alone group (35.7%). In the HPSA + FIT group, 38.5% tested positive for HPSA, and 71.4% of these received antibiotic treatment, achieving a 91.9% eradication rate. Over a median follow-up of approximately 5 years, gastric cancer incidence did not differ significantly between the HPSA + FIT and FIT-alone groups (0.032% vs 0.037%; mean difference –0.005%; 95% CI, –0.013% to 0.003%; P = .23). Gastric cancer mortality rates were also similar (0.015% vs 0.013%; mean difference 0.002%; 95% CI, –0.004% to 0.007%; P = .57). Adjusted analyses accounting for participation rates, follow-up duration, and baseline characteristics showed a lower gastric cancer incidence in the HPSA + FIT group (RR 0.79; 95% CI, 0.63–0.98; P = .04), but no difference in mortality (RR 1.02; 95% CI, 0.73–1.40; P = .91). Adverse effects from antibiotics were mild, with abdominal pain or diarrhea occurring in 2.1%.
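The tiny absolute differences reported above can be reproduced with the standard Wald interval for a risk difference between two proportions. The case counts below are back-calculated approximations from the reported percentages and group sizes, for illustration only; the paper's exact counts and covariate-adjusted analyses differ.

```python
import math

def risk_difference_ci(e1, n1, e2, n2, z=1.96):
    """Absolute risk difference (group 1 minus group 2) with a Wald 95% CI."""
    p1, p2 = e1 / n1, e2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Approximate counts: ~0.032% of 63,508 vs ~0.037% of 88,995 (illustrative)
rd, lo, hi = risk_difference_ci(20, 63508, 33, 88995)
print(f"Risk difference {rd:+.4%} (95% CI {lo:+.4%} to {hi:+.4%})")
```

With events this rare, the confidence interval comfortably spans zero, which is why even a 152,503-person trial can be underpowered for an outcome like gastric cancer mortality over 5 years.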

Conclusions: An invitation to HPSA testing combined with FIT did not significantly reduce gastric cancer incidence or mortality compared to FIT alone over a median follow-up of about 5 years. Adjusted analyses suggest a potential reduction in gastric cancer incidence but not mortality when accounting for participation rates and follow-up duration.

Implications for Practice: Adding H. pylori screening to existing FIT programs may not significantly reduce gastric cancer incidence or mortality in the short term, possibly due to low participation rates, incomplete eradication, and limited follow-up. Clinicians should consider these factors when implementing community-based H. pylori screening and weigh the benefits against resource utilization and patient adherence.

Study Strengths and Limitations: Strengths include a large sample size and integration of HPSA testing into an existing FIT screening infrastructure. Limitations encompass differences in participation rates and baseline characteristics between groups, a relatively short follow-up period, and only 71.4% of HPSA-positive participants receiving eradication therapy, which may have reduced the ability to detect significant effects.

Future Research: Longer-term studies with higher participation and eradication rates are needed to assess the long-term benefits of H. pylori screening on gastric cancer incidence and mortality. Research should explore strategies to improve screening uptake and treatment adherence.

Reference: Lee Y-C, et al. (2024). Screening for Helicobacter pylori to Prevent Gastric Cancer: A Pragmatic Randomized Clinical Trial. JAMA. DOI: https://doi.org/10.1001/jama.2024.14887

 


RCT: MRI-Guided Biopsy Reduces Overdiagnosis of Clinically Insignificant Prostate Cancer

26 Sep, 2024 | 12:22h | UTC

Background: Overdiagnosis of clinically insignificant prostate cancer is a significant issue in population-based screening programs, particularly when prostate-specific antigen (PSA) testing is followed by systematic biopsy. Magnetic resonance imaging (MRI)-guided biopsies, which avoid systematic biopsies in men with negative MRI results, have shown potential in reducing unnecessary cancer diagnoses. However, long-term data are needed to confirm the safety and efficacy of this approach.

Objective: To evaluate whether MRI-targeted biopsies, when combined with PSA screening, can reduce the detection of clinically insignificant prostate cancer without compromising the identification of clinically significant or advanced disease.

Methods: This population-based, randomized trial in Sweden (GÖTEBORG-2) enrolled 13,153 men aged 50-60 years who underwent PSA screening. Men with PSA levels ≥3 ng/mL were randomized into two groups: (1) MRI-targeted biopsy only in cases with suspicious lesions, or (2) systematic biopsy in all cases with PSA elevation. Screening occurred every 2, 4, or 8 years depending on PSA levels, with follow-up for up to four years. The primary outcome was the detection of clinically insignificant prostate cancer, and secondary outcomes included clinically significant and advanced or high-risk prostate cancer.

Results: After a median follow-up of 3.9 years, the detection of clinically insignificant prostate cancer was significantly lower in the MRI-targeted biopsy group (2.8%) compared to the systematic biopsy group (4.5%), with a relative risk (RR) of 0.43 (95% CI, 0.32-0.57; P < 0.001). The relative risk of detecting clinically significant cancer was 0.84 (95% CI, 0.66-1.07), indicating no significant difference between the two groups. Advanced or high-risk cancers were detected in 15 men in the MRI group and 23 men in the systematic group (RR, 0.65; 95% CI, 0.34-1.24). Severe adverse events occurred in five patients (three in the systematic biopsy group, two in the MRI-targeted biopsy group).

Conclusions: Omitting biopsies in men with negative MRI results substantially reduced the diagnosis of clinically insignificant prostate cancer without increasing the risk of missing clinically significant or advanced cancers. MRI-targeted biopsy strategies can effectively limit overdiagnosis while maintaining safety in screening programs.

Implications for Practice: MRI-targeted biopsies offer a promising strategy to reduce unnecessary cancer diagnoses and avoid overtreatment in prostate cancer screening. Clinicians should consider integrating MRI into prostate cancer screening algorithms, especially in cases with elevated PSA but no MRI-detected lesions. This approach may also decrease biopsy-related complications and patient anxiety.

Study Strengths and Limitations: Strengths of this trial include its population-based design, large sample size, and thorough follow-up. Limitations include its single-center setting in Sweden, which may limit generalizability to more diverse populations, and a modest participation rate of 50%.

Future Research: Further studies should assess the cost-effectiveness of widespread MRI use in prostate cancer screening and explore its utility in diverse populations. Investigations into novel biomarkers that could further refine patient selection for MRI-targeted biopsy are also warranted.

Reference: Hugosson J., et al. (2024). Results after Four Years of Screening for Prostate Cancer with PSA and MRI. N Engl J Med. DOI: https://doi.org/10.1056/NEJMoa2406050

 


Network Meta-Analysis: Eletriptan, Rizatriptan, Sumatriptan, and Zolmitriptan Most Effective for Acute Migraine Episodes

23 Sep, 2024 | 22:34h | UTC

Background: Migraine, a highly prevalent neurological disorder, is a leading cause of disability, especially among women aged 15 to 49. Effective acute management is critical, with current guidelines recommending non-steroidal anti-inflammatory drugs (NSAIDs) and triptans for moderate to severe episodes. However, the relative efficacy of various drug interventions remains unclear, especially with newer treatments like lasmiditan and gepants entering the market.

Objective: To evaluate and compare the efficacy and tolerability of all licensed oral drugs for the acute treatment of migraine episodes in adults.

Methods: A systematic review and network meta-analysis was conducted, including 137 randomized controlled trials (RCTs) involving 89,445 participants. The study analyzed 17 drug interventions, including NSAIDs, triptans, ditans, and gepants, and compared them with placebo. Primary outcomes included pain freedom at two hours post-dose and sustained pain freedom from two to 24 hours post-dose. Certainty of evidence was assessed using the CINeMA framework, and sensitivity analyses were conducted to confirm the robustness of the findings.

Results: All active interventions outperformed placebo for pain freedom at two hours, with odds ratios ranging from 1.73 (95% CI 1.27 to 2.34) for naratriptan to 5.19 (4.25 to 6.33) for eletriptan. The most effective drugs for sustained pain freedom were eletriptan and ibuprofen. Among head-to-head comparisons, eletriptan was the most effective for pain freedom at two hours, followed by rizatriptan, sumatriptan, and zolmitriptan. Newer drugs like lasmiditan, rimegepant, and ubrogepant were less effective than the triptans and showed adverse effects like dizziness and nausea.
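Odds ratios are easier to interpret as absolute response rates once a comparator rate is fixed. The sketch below converts an OR to an absolute probability given an assumed placebo pain-freedom rate; the ~18% placebo rate is an illustrative assumption, not a figure from the review.

```python
def or_to_risk(odds_ratio, baseline_risk):
    """Convert an odds ratio to an absolute risk, given the comparator risk."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    treated_odds = odds_ratio * baseline_odds
    return treated_odds / (1 + treated_odds)

# Assumed placebo pain-freedom rate of ~18% (illustrative only)
for name, o in [("naratriptan", 1.73), ("eletriptan", 5.19)]:
    print(f"{name}: {or_to_risk(o, 0.18):.0%} pain-free at 2 h")
```

Under that assumption, the reported ORs of 1.73 and 5.19 correspond to roughly 28% and 53% of patients pain-free at two hours, which conveys the spread across interventions more intuitively than the ORs alone.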

Conclusions: Triptans—specifically eletriptan, rizatriptan, sumatriptan, and zolmitriptan—demonstrated superior efficacy and tolerability profiles compared to newer treatments like lasmiditan and gepants. Given their efficacy, these triptans should be prioritized in acute migraine management. However, triptans are underused, and barriers to access should be addressed to ensure broader utilization. Lasmiditan and gepants may still serve as alternatives for patients contraindicated for triptans due to cardiovascular risks.

Implications for Practice: Clinicians should prioritize triptans, particularly eletriptan, rizatriptan, sumatriptan, and zolmitriptan, in managing acute migraine episodes due to their superior efficacy. Careful consideration is needed when selecting newer drugs like lasmiditan and gepants, as they may be less effective and have higher costs and adverse event risks. Cost-effectiveness and patient cardiovascular profiles should guide decision-making.

Study Strengths and Limitations: Strengths include the comprehensive inclusion of both published and unpublished data, as well as the large sample size and robust methodological framework. Limitations include moderate heterogeneity and low confidence in some comparisons due to reporting biases and imprecise treatment effects in older studies.

Future Research: Future studies should focus on re-evaluating the cardiovascular contraindications of triptans to ensure broader access. Additional research is also needed to assess the cost-effectiveness of newer treatments like lasmiditan and gepants, particularly in patients for whom triptans are unsuitable.

Reference: Karlsson WK, et al. (2024). Comparative effects of drug interventions for the acute management of migraine episodes in adults: systematic review and network meta-analysis. BMJ. DOI: https://doi.org/10.1136/bmj-2024-080107

 


Phase 2 RCT: Ponsegromab Shows Promise for the Treatment of Cancer Cachexia

23 Sep, 2024 | 21:48h | UTC

Background: Cancer cachexia is a multifactorial syndrome characterized by weight loss, muscle wasting, and reduced quality of life. Elevated levels of growth differentiation factor 15 (GDF-15), a cytokine, have been associated with cachexia. Ponsegromab, a monoclonal antibody that inhibits GDF-15, has shown potential in reversing cachexia in early studies by improving weight, appetite, and physical activity. This phase 2, randomized, double-blind trial aimed to assess the efficacy and safety of ponsegromab in patients with cancer cachexia and elevated GDF-15 levels.

Objective: To evaluate the impact of ponsegromab on body weight, cachexia symptoms, appetite, physical activity, and safety in cancer cachexia patients with elevated GDF-15 levels.

Methods: In this 12-week study, 187 patients with non-small-cell lung cancer, pancreatic cancer, or colorectal cancer and elevated GDF-15 levels (≥1500 pg/mL) were randomized into four groups: ponsegromab 100 mg, 200 mg, 400 mg, or placebo, administered subcutaneously every four weeks. The primary endpoint was the change in body weight from baseline to week 12. Secondary outcomes included appetite and cachexia symptoms, physical activity measured via digital devices, and safety. The trial also included exploratory endpoints like changes in skeletal muscle mass.

Results: At 12 weeks, patients treated with ponsegromab showed significant weight gain compared to placebo. The median weight-gain differences versus placebo were 1.22 kg in the 100-mg group, 1.92 kg in the 200-mg group, and 2.81 kg in the 400-mg group. Significant improvements in appetite and cachexia symptoms were observed in the 400-mg group compared to placebo. Physical activity, measured by nonsedentary time, also increased by 72 minutes per day in the 400-mg group. Adverse events were reported by 70% of ponsegromab patients and 80% of placebo patients. Serious adverse events occurred at similar rates across groups, but none were deemed related to treatment. No significant safety concerns were identified.

Conclusions: Ponsegromab effectively increased body weight and improved cachexia symptoms in patients with cancer cachexia and elevated GDF-15 levels, supporting GDF-15’s role as a key driver of cachexia. The findings suggest that ponsegromab may be a promising therapeutic option for managing cancer cachexia, with a favorable safety profile.

Implications for Practice: Ponsegromab could represent a targeted therapy for cancer cachexia, addressing weight loss, appetite, and physical function. Clinicians may consider its use for patients with advanced cancers experiencing cachexia, particularly those with elevated GDF-15 levels.

Study Strengths and Limitations: Strengths of the study include its randomized, double-blind design and the use of objective measures such as digital physical activity tracking. Limitations include the relatively short trial duration and missing physical activity data for some patients. Additionally, the efficacy of ponsegromab across different baseline levels of GDF-15 requires further investigation.

Future Research: Larger and longer-term trials are necessary to confirm the therapeutic benefit of ponsegromab in cancer cachexia. Future research should explore its impact on survival and assess whether GDF-15 inhibition benefits other conditions associated with elevated GDF-15, such as heart failure and chronic kidney disease.

Reference: Groarke, J. D., et al. (2024). Ponsegromab for the Treatment of Cancer Cachexia. New England Journal of Medicine. DOI: https://doi.org/10.1056/NEJMoa2409515

 

