Grasp detection is a significant research direction in robotics. Traditional analytical methods typically require prior knowledge of object parameters, restricting grasp detection to structured environments and yielding suboptimal performance. In recent years, generative convolutional neural networks (GCNNs) have gained increasing attention, but they suffer from insufficient feature extraction capability and redundant noise. We therefore proposed an improved GCNN-based method aimed at fast and accurate grasp detection. First, a two-dimensional (2D) Gaussian kernel was introduced to re-encode grasp quality, mitigating the false positives produced by the grasp rectangle metric and emphasizing high-quality grasp poses near the center point. Second, to compensate for the limited feature extraction capability of the shallow network, a receptive field module was added at the neck to enhance the network’s ability to extract discriminative features. Third, because the rich feature information in the decoding phase often contains redundant noise, a global-local feature fusion module was introduced to suppress noise and enhance salient features, enabling the model to focus on target information. Finally, evaluation experiments were conducted on the public grasping datasets Cornell, Jacquard, and GraspNet-1Billion, as well as in real-world robotic grasping scenarios. The results showed that the proposed method achieves excellent prediction accuracy and inference speed and is practically feasible for robotic grasping.
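To illustrate the Gaussian re-encoding idea summarized above, the minimal sketch below builds a grasp-quality map in which each annotated grasp-rectangle center contributes a 2D Gaussian peak instead of a uniformly weighted rectangle, so that quality is highest at the center and decays toward the rectangle edges. The function name, the sigma value, and the map size are illustrative assumptions, not the exact implementation described in the paper.

```python
import numpy as np

def gaussian_quality_map(height, width, centers, sigma=4.0):
    """Re-encode grasp quality with 2D Gaussian peaks.

    centers : iterable of (row, col) grasp-rectangle centers
    sigma   : Gaussian spread in pixels (hypothetical value)
    """
    ys, xs = np.mgrid[0:height, 0:width]
    quality = np.zeros((height, width), dtype=np.float32)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        # keep the maximum response where Gaussians from nearby grasps overlap
        quality = np.maximum(quality, g)
    return quality

# usage: a 224x224 quality map supervised by two annotated grasp centers
q = gaussian_quality_map(224, 224, [(100, 120), (60, 200)])
```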