Abstract
The optimal strategy for supplying external knowledge to a reasoning-oriented Large Language Model (LLM) remains an open question in applied NLP. This paper presents an empirical evaluation of MBZUAI-IFM/K2-Think-v2, a reasoning-focused model with a 1-million-token context window and explicit intermediate reasoning traces, across three context-provision architectures: Long Context (LC), Retrieval-Augmented Generation (RAG), and Hybrid. We evaluate performance on NarrativeQA, HotpotQA, and MultiDoc2Dial under a fully crossed grid of document counts (k ∈ {5, 10, 20, 50, 100}) and noise ratios (0.0–0.75), yielding 120 configuration-level observations across more than 7,160 model evaluations. Our results show several consistent patterns. (1) On NarrativeQA, Hybrid achieves the highest average F1 (0.0497), closely followed by LC (0.0481), while RAG performs lower (0.0351), suggesting that chunk-level retrieval may limit global narrative coherence in long-form comprehension tasks. (2) On HotpotQA, RAG yields slightly lower hallucination rates (10.3% vs. 12.3% for LC) at comparable F1 scores, indicating a potential noise-filtering effect that does not clearly translate into improved accuracy. (3) On MultiDoc2Dial, all architectures exhibit elevated hallucination rates (approximately 20–33%), with no consistent monotonic relationship to noise level, suggesting difficulty in maintaining grounded reasoning in multi-document conversational settings. (4) Exact Match is 0.0 across all configurations, reflecting a mismatch between strict string-based evaluation and the verbose, multi-step output format of reasoning-oriented models. These findings suggest that context-provision strategy plays a meaningful role in performance and robustness, but that its effect varies substantially with task structure.
