
Precision weed detection and mapping in vegetables using deep learning

Published online by Cambridge University Press:  08 September 2025

Weili Li
Affiliation:
Assistant Professor, Jincheng College, Nanjing University of Aeronautics and Astronautics, Nanjing, China; current: Visiting Scholar, Peking University Institute of Advanced Agricultural Sciences, Shandong Laboratory of Advanced Agricultural Sciences in Weifang, Shandong, China
Wenpeng Zhu
Affiliation:
Intern, Peking University Institute of Advanced Agricultural Sciences, Shandong Laboratory of Advanced Agricultural Sciences in Weifang, Shandong, China
Jinxu Wang
Affiliation:
Research Assistant, Peking University Institute of Advanced Agricultural Sciences, Shandong Laboratory of Advanced Agricultural Sciences in Weifang, Shandong, China
Kang Han
Affiliation:
Research Assistant, Peking University Institute of Advanced Agricultural Sciences, Shandong Laboratory of Advanced Agricultural Sciences in Weifang, Shandong, China
Xiaojun Jin*
Affiliation:
Associate Professor, National Engineering Research Center of Biomaterials, Nanjing Forestry University, Nanjing, China
Jialin Yu
Affiliation:
Professor and Principal Investigator, Peking University Institute of Advanced Agricultural Sciences, Shandong Laboratory of Advanced Agricultural Sciences in Weifang, Shandong, China
*
Corresponding author: Xiaojun Jin; Email: xiaojunjin@njfu.edu.cn

Abstract

Precision weed detection and mapping in vegetable crops are beneficial for improving the effectiveness of weed control. This study proposes a novel method for indirect weed detection and mapping using a detection network based on the You-Only-Look-Once-v8 (YOLOv8) architecture. The approach detects weeds indirectly by first identifying vegetables and then segmenting the remaining vegetation from the background as weeds using image processing techniques. Weed maps are then generated, and path planning algorithms are applied to optimize actuator trajectories along the shortest possible path. Experimental results demonstrated significant improvements in both precision and computational efficiency compared with the original YOLOv8 network. The mean average precision at 0.5 (mAP50) increased by 0.2, while the number of parameters, giga floating-point operations (GFLOPs), and model size decreased by 0.57 million, 1.8 GFLOPs, and 1.1 MB, respectively, highlighting enhanced accuracy and reduced computational costs. Among the path planning algorithms analyzed, including Christofides, Dijkstra, and dynamic programming (DP), the Dijkstra algorithm was the most efficient, producing the shortest path for guiding the weeding system. This method enhances the robustness and adaptability of weed detection by eliminating the need to detect diverse weed species. By integrating precision weed mapping and efficient path planning, mechanical actuators can target weed-infested areas with optimal precision. This approach offers a scalable solution that can be adapted to other precision weeding applications.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Weed Science Society of America

Figure 1. The workflow illustrating the detection and mapping process for bok choy (Brassica rapa ssp. chinensis) using the improved vegetable detection (IVD) model. Target vegetables are first identified, and the remaining green vegetation is segmented as weeds through image processing and area filtering. The processed images are divided into grid cells, with weed-containing cells marked in red to generate a distribution map. A path planning algorithm is then applied to optimize the route for weed control operations.
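The grid-based mapping step in the workflow above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cell size and the binary weed mask passed in are hypothetical, since the actual grid parameters are not given in this excerpt.

```python
def weed_grid_map(weed_mask, cell=32):
    """Divide a binary weed mask (rows of 0/1 values) into square grid
    cells and mark any cell that contains at least one weed pixel."""
    h, w = len(weed_mask), len(weed_mask[0])
    rows = (h + cell - 1) // cell            # ceil division for partial cells
    cols = (w + cell - 1) // cell
    grid = [[0] * cols for _ in range(rows)]
    for y in range(h):
        for x in range(w):
            if weed_mask[y][x]:
                grid[y // cell][x // cell] = 1   # flag cell as weed-infested
    return grid
```

The flagged cells (shown in red in Figure 1) become the waypoints that the path planning stage must visit.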


Figure 2. The architecture of YOLOv8-C2f-Faster-EMA. In the backbone network, the bottleneck operators in the original C2f (Cross Stage Partial bottleneck with two convolutions) modules at stages 3, 5, 7, and 9 are replaced with the proposed C2f-Faster-EMA units to improve feature extraction, information flow, and computational efficiency. SPPF denotes Spatial Pyramid Pooling–Fast, a module that performs pooling operations at multiple scales.


Figure 3. Architecture of the group shuffle convolution (GSConv) module. The standard convolution operators in the neck module were systematically replaced with GSConv units, which are specifically designed to enhance cross-level feature fusion through a lightweight channel-spatial attention mechanism.


Figure 4. Overall architecture of the improved vegetable detection (IVD) model. The group shuffle convolution (GSConv) units were introduced for Slim-neck construction, and VoV-GSCSP modules were integrated into the You-Only-Look-Once-v8 (YOLOv8) framework. During inference, multiscale feature maps undergo channel compression via GSConv, followed by bilinear upsampling and concatenation to establish cross-resolution connections. These features are further refined through secondary GSConv filtering and final consolidation via single-stage aggregation modules based on VoV-GSCSP fusion gates. In the backbone, computational redundancy is reduced by replacing conventional bottlenecks in the C2f (Cross Stage Partial bottleneck with two convolutions) modules with Faster-EMA blocks, which apply efficient multiscale attention (EMA) mechanisms to enhance salient spatial-frequency feature extraction.


Table 1. Ablation study results evaluating the impact of C2f-Faster-EMA and Slim-neck modules on detection performance and model complexity.


Figure 5. Detection results of the improved vegetable detection (IVD) model on vegetables under challenging conditions, including complex backgrounds and dense weed–vegetable clusters.


Figure 6. Training loss curve of the improved vegetable detection (IVD) model over 100 epochs. The IVD model exhibits a steeper loss curve with faster convergence compared with You-Only-Look-Once-v8 (YOLOv8), indicating more efficient optimization during training.


Figure 7. Weed mapping workflow from original images to trajectory planning. The first row shows the original images of vegetable fields. The second row displays the detection results from the improved vegetable detection (IVD) network, with vegetables highlighted by bounding boxes. The third row presents binary segmentation images generated through excess green (ExG)-based vegetation enhancement followed by Otsu thresholding. The fourth row shows the results after vegetable removal and area filtering to isolate true weed regions. The fifth row displays the generated weeding trajectories used to guide precision weed control operations.
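The segmentation step in the third row, excess green (ExG) enhancement followed by Otsu thresholding, can be sketched with NumPy alone. This is a simplified illustration of the standard ExG + Otsu pipeline, not the paper's code; the 8-bit rescaling of the ExG index is an assumption made so that a histogram-based Otsu search applies directly.

```python
import numpy as np

def exg_otsu_mask(rgb):
    """Segment green vegetation: compute the excess-green index
    ExG = 2g - r - b on chromatic coordinates, then binarize it with
    Otsu's between-class-variance threshold."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9                 # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b                            # ranges over [-1, 2]
    # Rescale ExG to 8-bit levels so Otsu can run on a 256-bin histogram.
    levels = np.clip((exg + 1) / 3 * 255, 0, 255).astype(np.uint8)
    hist = np.bincount(levels.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                           # class-0 probability
    mu = np.cumsum(p * np.arange(256))             # class-0 mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    t = int(np.nanargmax(sigma_b))                 # maximize between-class variance
    return levels > t                              # True where vegetation
```

Removing the detected vegetable bounding boxes from this mask and filtering small connected regions by area would then leave the weed pixels, as in the fourth row of Figure 7.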


Figure 8. Path planning results for precision weeding based on weed mapping. The blue lines represent the optimized weeding trajectories generated by different path planning algorithms (Christofides, Dijkstra, and dynamic programming [DP]) across four sample images. These results illustrate the application of trajectory optimization for efficient weed control operations.
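Of the three algorithms compared above, Dijkstra's algorithm is the classic shortest-path method on a weighted graph. How the paper builds its graph from weed-cell centers is not given in this excerpt, so the following is a textbook priority-queue sketch on a hypothetical adjacency-list graph, not the authors' implementation.

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` in a weighted graph given as
    {node: [(neighbor, weight), ...]}. Returns {node: distance}."""
    dist = {start: 0}
    pq = [(0, start)]                          # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                           # stale queue entry; skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")): # found a shorter route to v
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

For example, with `graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}`, `dijkstra(graph, 'A')` routes A to C through B at cost 3 rather than taking the direct edge of weight 4.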


Table 2. Performance comparison of three path planning algorithms, reporting execution time and shortest path length (in pixels) for each algorithm on four sample weed maps labeled A, B, C, and D
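The dynamic programming entry in this comparison is, in the classic formulation, the Held-Karp algorithm for the exact shortest tour through all waypoints. Whether the paper uses precisely this formulation is not stated in the excerpt; the sketch below assumes a symmetric distance matrix over the weed-cell centers and is exponential in the number of cells, which is why exact DP is only practical for small maps.

```python
from itertools import combinations

def held_karp(dist):
    """Exact DP (Held-Karp) length of the shortest closed tour visiting
    every node once, given a symmetric distance matrix. O(n^2 * 2^n)."""
    n = len(dist)
    # dp[(mask, j)]: cost of the best path from node 0 through the node
    # set `mask` (a bitmask over nodes 1..n-1), ending at node j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)         # same set without endpoint j
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2                        # all nodes except node 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The trade-off visible in Table 2 follows from the algorithms' guarantees: exact DP always finds the optimum but scales exponentially, while Christofides and graph-search heuristics run faster at the cost of possibly longer paths.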