Abstract
Quantitative characterization of complete canopy architecture is essential for accurate evaluation of crop photosynthesis and yield potential, thereby supporting crop ideotype design. Although various sensing technologies enable three-dimensional (3D) reconstruction of individual plants and canopies, they often fail to describe canopy architecture accurately because of severe occlusion in dense populations. To address this limitation, we developed an effective framework for the 3D reconstruction of complex and dynamic population-scale canopy architecture in rapeseed using unmanned aerial vehicle multi-view imagery combined with a novel point cloud completion model. A complete point cloud generation pipeline was first established to enable automated training data annotation, allowing discrimination between surface points and occluded points within the canopy. The proposed crop population point cloud completion network (CP-PCN) integrates a multi-resolution dynamic graph convolutional encoder, a point pyramid decoder, a dynamic graph convolutional feature extractor, and a generative adversarial network-based loss function to predict occluded canopy points. CP-PCN achieved Chamfer distance values of 3.35 to 4.51 cm across four growth stages, outperforming the state-of-the-art transformer-based method PoinTr. Ablation analyses confirmed that each of the four modules contributes to overall model accuracy. In addition, validation experiments showed that the improved architectural completeness achieved by CP-PCN resulted in more accurate yield estimation compared with incomplete and PoinTr-completed point clouds. CP-PCN also demonstrated strong cross-crop generalizability by successfully reconstructing mature rice canopies. Overall, this framework provides a scalable approach for quantitative analysis of complex canopy architectures in field-grown crops.
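For reference, the Chamfer distance values reported above presumably follow the standard symmetric formulation between the completed point cloud \(P\) and the ground-truth canopy point cloud \(Q\); the abstract does not specify the exact variant, so the unsquared mean form, whose units are length (here centimetres), is sketched below as an assumption:

\[
d_{\mathrm{CD}}(P, Q) \;=\; \frac{1}{|P|} \sum_{p \in P} \min_{q \in Q} \lVert p - q \rVert_2 \;+\; \frac{1}{|Q|} \sum_{q \in Q} \min_{p \in P} \lVert q - p \rVert_2
\]

Under this formulation, lower values indicate that the completed canopy points lie closer, on average, to the annotated complete canopy and vice versa.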


