When Context Overwhelms: A Study of Long-Context vs. Retrieval-Based QA Under Noise

28 April 2026, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

The optimal strategy for supplying external knowledge to a reasoning-oriented Large Language Model (LLM) remains an open question in applied NLP. This paper presents an empirical evaluation of MBZUAI-IFM/K2-Think-v2, a reasoning-focused model with a 1-million-token context window and explicit intermediate reasoning traces, across three context-provision architectures: Long Context (LC), Retrieval-Augmented Generation (RAG), and Hybrid. We evaluate performance on NarrativeQA, HotpotQA, and MultiDoc2Dial under a fully crossed grid of document counts (k ∈ {5, 10, 20, 50, 100}) and noise ratios (0.0–0.75), yielding 120 configuration-level observations across 7,160+ model evaluations. Our results show several consistent patterns across this experimental setup. (1) On NarrativeQA, Hybrid achieves the highest average F1 (0.0497), closely followed by LC (0.0481), while RAG performs lower (0.0351), suggesting that chunk-level retrieval may limit global narrative coherence in long-form comprehension tasks. (2) On HotpotQA, RAG yields slightly lower hallucination rates (10.3% vs. 12.3% for LC) at comparable F1 scores, indicating a potential noise-filtering effect that does not clearly translate into improved accuracy. (3) On MultiDoc2Dial, all architectures exhibit elevated hallucination rates (approximately 20–33%), with no consistent monotonic relationship to noise level, suggesting difficulty in maintaining grounded reasoning under multi-document conversational settings. (4) Exact Match evaluates to 0.0 across all configurations, reflecting a mismatch between strict string-based evaluation and the verbose, multi-step output format of reasoning-oriented models. These findings suggest that context-provision strategy plays a meaningful role in performance and robustness, but that its effect varies substantially with task structure.
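Finding (4) above — Exact Match at 0.0 despite nonzero F1 — can be illustrated with a minimal sketch of SQuAD-style metrics. This is a hypothetical example, not the paper's evaluation code: a verbose, multi-step answer fails strict string matching but still earns partial token-overlap credit.

```python
# Hypothetical sketch (not the authors' evaluation harness): SQuAD-style
# Exact Match vs. token-level F1, showing why verbose reasoning outputs
# score EM = 0.0 while retaining a small, nonzero F1.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    # 1.0 only if the normalized strings are identical.
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    # Token-level precision/recall over the bag of normalized tokens.
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

gold = "Paris"
verbose = ("Based on the retrieved passages, the capital of France "
           "is Paris, as stated in document 3.")
print(exact_match(verbose, gold))  # 0.0: strict match fails on verbose output
print(f1_score(verbose, gold))     # nonzero: one overlapping token out of many
```

The short gold answer is swamped by the model's explanatory tokens, so precision (and hence F1) stays low even when the answer is correct — consistent with the low absolute F1 values reported above.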

Keywords

Large Language Models
Retrieval-Augmented Generation
Long Context
Chain-of-Thought
Hallucination
K2-Think-v2
NarrativeQA
HotpotQA
MultiDoc2Dial
Evaluation Metrics
