Object detection models, such as those in the YOLO family, are generally effective at identifying individual weeds, but their performance can be limited by occlusion of target structures in high-density scenes. These models are typically trained on images with low weed densities, where individual plants are clearly visible and easy to annotate, yet they are deployed under field conditions where patches of high weed density and dense vegetation can occur. Greenhouse experiments were conducted at the University of Florida, Wimauma, FL, to evaluate how purple nutsedge (Cyperus rotundus L.) density affects the performance of a YOLOv8 model. A greenhouse density test dataset was developed by collecting 480 images spanning twelve densities, from 7 to 331 plants m⁻², at transplant and at 3, 6, 10, and 17 days after transplanting. An independent dataset of 2,221 field and greenhouse images was used to train a YOLOv8 extra-large model to detect C. rotundus. A logistic sigmoidal model was fitted to describe the F1 score as a function of C. rotundus density at each data collection time point. The F1 score showed a sigmoidal relationship with density at all time points, exceeding 0.9 at low densities but dropping sharply toward 0 at the highest densities and later time points. The performance decline was driven primarily by increased false negatives as density and occlusion increased, with minimal contribution from false positives except at the highest densities. The density threshold for optimal performance (F1 ≥ 0.90) decreased from 157 to 86 plants m⁻² as canopy coverage increased, while the threshold for marginal performance (F1 = 0.50) dropped from 322 to 140 plants m⁻². Our findings suggest that the performance of object detection models for C. rotundus is strongly influenced by increased occlusion and by morphological changes resulting from greater plant proximity and canopy coverage in high-density scenes.
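To make the threshold analysis concrete, the sketch below fits a decreasing logistic curve to F1-versus-density observations and inverts it to recover the densities at which F1 crosses 0.90 and 0.50. This is a minimal illustration only: the abstract does not specify the exact parameterization used in the study, so a three-parameter logistic of the form F1(d) = top / (1 + exp(slope·(d − d50))) is assumed here, and the (density, F1) pairs are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_f1(density, top, d50, slope):
    """Decreasing logistic: F1 falls from `top` toward 0 as density rises.
    d50 is the density (plants m^-2) at which F1 = top / 2."""
    return top / (1.0 + np.exp(slope * (density - d50)))

# Hypothetical (density, F1) pairs for one time point -- illustrative only.
density = np.array([7, 30, 60, 100, 150, 200, 260, 331], dtype=float)
f1 = np.array([0.95, 0.94, 0.92, 0.88, 0.75, 0.50, 0.20, 0.05])

params, _ = curve_fit(logistic_f1, density, f1, p0=[0.95, 200.0, 0.02])
top, d50, slope = params

def density_at_f1(target):
    """Invert the fitted curve: density at which predicted F1 equals `target`.
    Valid only for target < top, so the log argument stays positive."""
    return d50 + np.log(top / target - 1.0) / slope

print(f"Optimal performance (F1 >= 0.90) up to ~{density_at_f1(0.90):.0f} plants m^-2")
print(f"Marginal performance (F1 = 0.50) at ~{density_at_f1(0.50):.0f} plants m^-2")
```

Fitting one such curve per data collection time point, as described above, would yield the per-time-point thresholds reported in the abstract (e.g., the F1 ≥ 0.90 threshold shrinking from 157 to 86 plants m⁻² as canopy coverage increases).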