Metaverse applications often require many new 3D point cloud models that are unlabeled and have never been seen before; this limited information results in difficulties for data-driven model analyses. In this paper, we propose a novel data-driven 3D point cloud analysis network, GN-CNN, that is suitable for such scenarios. We tackle the difficulties with a few-shot learning (FSL) approach by proposing an unsupervised generative adversarial network, GN-GAN, to generate prior knowledge and perform warm-start pre-training for GN-CNN. Furthermore, the 3D models in the Metaverse are mostly acquired with a focus on the models' visual appearances instead of their exact positions. Thus, conceptually, we also propose to augment the information by unleashing and incorporating local variance information, which conveys the appearance of the model. This is realized by introducing a graph convolution-enhanced combined multilayer perceptron (CMLP) operation, namely GCMLP, to capture the local geometric relationship, as well as a local normal-aware GeoConv, namely GNConv. The GN-GAN adopts an encoder–decoder structure, and the GCMLP is used as the core operation of the encoder. The GNConv is used as the convolution-like operation in GN-CNN. The classification performance of GN-CNN is evaluated on ModelNet10, reaching an overall accuracy of 95.9%. Its few-shot learning performance is evaluated on ModelNet40: when the training set size is reduced to 30%, the overall classification accuracy can reach 91.8%, which is 2.5% higher than Geo-CNN.

Furthermore, the 3D models in the Metaverse are usually acquired with a focus on the model's visual appearance rather than the exact point positions. For example, in driving and indoor navigation, more attention is usually paid to the model's appearance, to make sure the models look good, than to the model's actual accuracy, which is usually the focus of computer-aided design (CAD). The model's local surface variance and normal information determine how light is reflected and how the model's appearance is perceived by viewers. Based on this, unlike existing (few-shot) learning methods, we propose unleashing the model's appearance to further augment the information. Therefore, these cues are utilized in the proposed pipeline: First, in the encoder of the encoder/decoder framework of the GN-GAN generator, we introduce a graph convolution-enhanced combined multilayer perceptron (CMLP) operation, namely GCMLP, by adding several graph convolution layers. Second, in GN-CNN, we utilize local neighborhood normal information. Experiments show that the proposed method improves accuracy in 3D point cloud classification tasks and under few-shot learning scenarios, compared with existing methods such as PointNet, PointNet++, DGCNN, and Geo-CNN, making it a beneficial method for Metaverse applications.
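The GCMLP is described only at this level of detail here. As a concrete illustration, the following is a minimal sketch of what a graph convolution-enhanced CMLP encoder block could look like, assuming a PyTorch implementation with a k-NN graph over the input points; the layer widths, the value of k, and the mean-aggregation graph convolution are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a GCMLP-style encoder block (assumptions, not the
# paper's exact design): a CMLP backbone (shared per-point MLP layers
# whose intermediate features are each max-pooled and concatenated into
# a combined latent code) interleaved with simple k-NN graph convolutions.
import torch
import torch.nn as nn

def knn_graph(x, k=16):
    # x: (B, N, 3) point coordinates -> (B, N, k) neighbor indices.
    dist = torch.cdist(x, x)                                  # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]  # drop self

def gather_neighbors(feat, idx):
    # feat: (B, N, C), idx: (B, N, k) -> (B, N, k, C) neighbor features.
    batch = torch.arange(feat.size(0), device=feat.device).view(-1, 1, 1)
    return feat[batch, idx]

class GraphConv(nn.Module):
    # One graph convolution layer: mix each point's feature with the
    # mean of its neighbors' features (an illustrative aggregation).
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.lin = nn.Linear(2 * dim_in, dim_out)

    def forward(self, feat, idx):
        agg = gather_neighbors(feat, idx).mean(dim=2)         # neighbor average
        return torch.relu(self.lin(torch.cat([feat, agg], dim=-1)))

class GCMLP(nn.Module):
    # CMLP with graph convolutions inserted between the per-point MLPs.
    def __init__(self, k=16):
        super().__init__()
        self.k = k
        self.mlp1, self.gc1 = nn.Linear(3, 64), GraphConv(64, 64)
        self.mlp2, self.gc2 = nn.Linear(64, 128), GraphConv(128, 128)
        self.mlp3 = nn.Linear(128, 256)

    def forward(self, xyz):                                   # xyz: (B, N, 3)
        idx = knn_graph(xyz, self.k)
        f1 = self.gc1(torch.relu(self.mlp1(xyz)), idx)
        f2 = self.gc2(torch.relu(self.mlp2(f1)), idx)
        f3 = torch.relu(self.mlp3(f2))
        # Combined latent: concatenate the max-pooled features of each scale.
        return torch.cat([f.max(dim=1).values for f in (f1, f2, f3)], dim=-1)
```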
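Similarly, how local neighborhood normals could enter a GNConv-like operation can be shown with a reduced, hypothetical sketch (reusing knn_graph and gather_neighbors from above): each edge feature is augmented with the relative position and the agreement between the center's and the neighbor's unit normals, a proxy for the local surface variance that drives perceived appearance. The actual GNConv builds on Geo-CNN's GeoConv basis decomposition, which is omitted here.

```python
# Hypothetical normal-aware local convolution in the spirit of GNConv;
# the Geo-CNN basis decomposition of the real operation is not modeled.
class NormalAwareConv(nn.Module):
    def __init__(self, dim_in, dim_out, k=16):
        super().__init__()
        self.k = k
        # Edge input: neighbor feature + relative position (3) + normal cue (1).
        self.edge_mlp = nn.Linear(dim_in + 4, dim_out)

    def forward(self, xyz, normals, feat):
        # xyz, normals: (B, N, 3) positions and unit normals; feat: (B, N, C).
        idx = knn_graph(xyz, self.k)
        n_feat = gather_neighbors(feat, idx)                    # (B, N, k, C)
        rel = gather_neighbors(xyz, idx) - xyz.unsqueeze(2)     # (B, N, k, 3)
        # Normal agreement with each neighbor: near 1 on flat regions,
        # smaller where the surface varies sharply.
        cos = (gather_neighbors(normals, idx)
               * normals.unsqueeze(2)).sum(-1, keepdim=True)    # (B, N, k, 1)
        edges = torch.cat([n_feat, rel, cos], dim=-1)
        return torch.relu(self.edge_mlp(edges)).max(dim=2).values
```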