Abstract

Machining feature recognition is a key technology for realizing CAD/CAPP/CAM system integration. To improve on the limited robustness of traditional machining feature recognition methods based on graph reasoning, an automatic machining feature recognition method based on deep learning of machining-surface point cloud data is proposed. The method is built on PointNet, a point cloud deep learning framework that originated from convolutional neural networks. A point cloud data generation scheme for feature machining surfaces is designed for automatic feature recognition, and a point cloud data sample library is constructed. A machining feature recognizer is then obtained through sample training and realizes automatic recognition of 24 types of machining features with a recognition accuracy above 90%. The method is simple and efficient, is not sensitive to noise and defects in the point cloud data, and retains good robustness and recognition performance when feature boundaries are damaged by intersecting features.

1. Introduction

A machining feature is a set of related geometric shapes on a part that carry machining semantics, such as holes, blind holes, and keyways. A general machined part can be viewed as the combination of a blank and a series of machining features. Machining features are the basis of computer-aided process planning (CAPP) and the link between computer-aided design (CAD) and computer-aided manufacturing (CAM). Since the 1980s, feature recognition has been an active research direction in both industry and academia, and a variety of feature recognition methods have been developed, such as rule-based methods, methods based on feature adjacency graphs, and hybrid methods that combine graphs and rules. These methods operate on the CAD model data and rely on predefined logic judgments; even the graph-based methods cannot avoid hand-designed logical rules when constructing feature definitions. At present, many problems remain in feature recognition, including recognition efficiency, robustness of the recognition results, and recognition of intersecting features. Among them, intersecting feature recognition is a recognized difficulty in the field. For example, when a groove feature intersects with other features, its machined surface may be split or extended, the topological relationship between the feature boundary and adjacent faces may change, and recognition with traditional rule-based or graph-based methods tends to fail.

Deep learning has achieved great success in the field of computer vision in recent years. Among deep learning models, convolutional neural networks (CNNs) are widely used for their powerful feature extraction capabilities [4]. At the same time, 3D CNN methods have been preliminarily applied to feature recognition; Singh et al. provided a new idea for addressing the feature recognition problem. The input of a typical CNN is two-dimensional (2D), 3D, or even higher-dimensional matrix data, such as images and 3D voxels. Zhang et al. [7] proposed FeatureNet, a feature recognition method that uses 3D voxel grid data and a 3D CNN: the part model is meshed into a 3D voxel structure, which is then classified by the CNN to identify predefined machining features. However, voxelized 3D data suffer from the curse of dimensionality, and the computation is expensive because of the sparsity of the decomposed voxels. Shi et al. [8] instead used a multi-view approach to build the feature recognizer MsvNet; this model has a large number of network parameters, and training and recognition are not efficient. 3D point clouds are a common way of representing CAD models [10]. A Stanford group proposed PointNet, a convolutional neural network model that takes 3D point cloud data directly as input and learns features from the points themselves. CNN structures for machining feature recognition based on PointNet have been discussed before, but mainly for models containing isolated features, which is still far from practical applications.

This article proposes a deep learning model based on the 3D point cloud data of machined surfaces that can automatically identify machining features. The main contributions are as follows: (1) a rule for generating feature machining-surface samples and converting them into 3D point cloud data suitable for CNN learning; (2) a PointNet-based CNN model for machining feature recognition with an optimized network architecture. Extensive data construction work was carried out so that the machining features of machined-surface face sets can be recognized robustly.

2.1. Point Cloud Data Characteristics

A point cloud is a large collection of points sampled from the target surface, that is, an unordered set of 3D point vectors [17]. The main characteristics of point cloud data are as follows. First, disorder: a point cloud is a collection of point data, and the order of the points carries no meaning; presenting the points in any order describes the same geometry, so it is the set, not the sequence, that forms the 3D representation. Second, local structure: the points of a part surface come from the same metric space, so points are not isolated, and the distances between them mean that neighbouring points form meaningful subsets, that is, local structures. Third, transformation invariance: the point set is invariant under rigid geometric transformations, so the classification of a point cloud and the segmentation of its points should not change when the whole cloud is rotated or translated.
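As a small illustration of the disorder property, the following NumPy snippet (added here for clarity, not taken from the paper) shows that a symmetric aggregation such as the coordinate-wise maximum is unaffected by the order of the points, while the raw coordinate sequence is order dependent:

```python
# Illustrative NumPy snippet (not from the paper): a symmetric aggregation such
# as the coordinate-wise maximum is unaffected by the order of the points,
# whereas the raw coordinate sequence is order dependent.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((800, 3))                    # toy point cloud: 800 points, (x, y, z)
shuffled = rng.permutation(points, axis=0)       # same geometry, different order

print(np.array_equal(points, shuffled))                       # generally False
print(np.allclose(points.max(axis=0), shuffled.max(axis=0)))  # True: max is symmetric
```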

2.2. Point Cloud Deep Learning
2.2.1. PointNet

PointNet is a CNN model that can directly take an unordered point set as input. In PointNet, a shared multilayer perceptron (MLP) maps each input point x_i in R^D to a feature vector. A symmetric function, implemented by max pooling, then aggregates all per-point feature vectors into a global feature vector that is invariant to the order of the input points, which solves the disorder problem of point clouds. To handle the invariance of the point cloud under geometric transformations, PointNet adds a T-Net data alignment network at the input. This small network predicts a linear transformation matrix, transforms the input point cloud into a suitable canonical space, and feature extraction and classification are then performed. PointNet performs well in point cloud classification and semantic segmentation, but it extracts features on individual points only and is limited in its ability to capture local structure. Qi et al. later built on PointNet to propose a hierarchical CNN, PointNet++, which captures local spatial structure better, although at a higher computational cost.
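The core of this structure can be sketched in a few lines of TensorFlow/Keras. The following is an illustrative sketch, not the authors' exact network: a shared per-point MLP built from 1×1 one-dimensional convolutions, followed by max pooling as the symmetric aggregation function.

```python
# Minimal PointNet-style backbone (illustrative sketch, not the authors' model).
import tensorflow as tf
from tensorflow.keras import layers

def pointnet_global_feature():
    pts = tf.keras.Input(shape=(None, 3))              # n unordered (x, y, z) points
    x = layers.Conv1D(64, 1, activation="relu")(pts)   # shared MLP applied to each point
    x = layers.Conv1D(128, 1, activation="relu")(x)
    x = layers.Conv1D(1024, 1, activation="relu")(x)
    g = layers.GlobalMaxPooling1D()(x)                 # order-invariant global feature
    return tf.keras.Model(pts, g)
```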

2.3. Machining Features

As mentioned above, a machining feature can be defined as a group of shapes on a part with specific machining semantics. Different machining contexts lead to different definitions of machining features. The machining features discussed in this article are mainly the negative geometric volumes formed by cutting, such as holes, through slots, rounded corners, and notches. Machining features are not affected by geometric transformations of the model (e.g., rotation, translation, etc.). Zhang et al. [7] defined 24 commonly used machining features in the FeatureNet library; for comparison, this article adopts the same feature classification.

The attention mechanism works by making the network focus on essential information and ignore secondary information. Attention was first introduced in networks for 2D images; because it is order independent, does not depend on connections between points, and therefore matches the requirements of point cloud processing, many scholars have introduced attention into point cloud algorithms. One work proposed a point attention transformer for point cloud processing that uses Group Shuffle Attention (GSA), a mechanism similar to self-attention, to model the relationship between points; it also introduces Gumbel Subset Sampling (GSS), so that the GSA module mines the geometric relationships between points while GSS selects a representative subset of points. Inspired by graph convolution, some scholars combine graph convolution and attention mechanisms to form new classification strategies. GAPNet uses self-attention: it learns neighbourhood semantic information of the raw input points by embedding a graph attention mechanism in stacked MLP layers and uses multi-head attention aggregation to fuse features from different GAPLayer heads. The GAPLayer and attention layers can be integrated into existing networks to better extract local geometry and improve performance.
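To make the idea concrete, the following is a generic single-head self-attention layer over per-point features, in the spirit of the point attention mechanisms discussed above. It is not the GSA or GAPLayer implementation, and the layer width is arbitrary.

```python
# Generic single-head self-attention over per-point features (sketch only).
import tensorflow as tf
from tensorflow.keras import layers

class PointSelfAttention(layers.Layer):
    def __init__(self, dim):
        super().__init__()
        self.q = layers.Dense(dim)
        self.k = layers.Dense(dim)
        self.v = layers.Dense(dim)

    def call(self, feats):                            # feats: (batch, n_points, d_in)
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        scores = tf.matmul(q, k, transpose_b=True)    # point-to-point affinities
        scores /= tf.math.sqrt(tf.cast(tf.shape(k)[-1], tf.float32))
        weights = tf.nn.softmax(scores, axis=-1)
        return tf.matmul(weights, v)                  # attention-weighted point features
```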

GACNet builds graph attention convolutions whose kernels adapt to the structure of different objects; it takes local features and global context into account while focusing on the relevant local regions. Tsinghua scholars introduced the Transformer concept into point cloud processing and proposed the PCT network, which has few parameters and high accuracy. The network first encodes the input point cloud into a higher-dimensional feature space, then passes it through four stacked attention modules (offset-attention in place of standard self-attention) and concatenates their outputs at different scales, and finally aggregates the point-wise features and the global feature to complete classification and segmentation tasks.

2.4. Feature Machining Surfaces

According to the machining feature definition adopted in this article, all faces that form a machining feature are machined faces. Except for the machined surfaces of convex (rounded) features, the machined surfaces are collectively referred to as concave face sets, that is, groups of geometric faces connected by concave edges. A complete set of adjacent machined faces constitutes a candidate machining feature. When machining features are isolated, their machined surfaces are intact. On real parts, however, machining features often intersect, and the intersection causes the geometry of a machined surface to be split or extended. Consider a model that initially contains a single through-slot feature, denoted A; after another through slot B is cut across it, a feature recognition system with good robustness should still recognize the damaged feature A as a through slot and feature B as a through slot.
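One common way to decide whether a shared edge is concave on a solid model (stated here only as an illustrative assumption, not a procedure quoted from the paper) is to check whether the neighbouring face folds toward the outward normal of the other face across the shared edge:

```python
# Concavity test for a shared edge on a solid model (illustrative assumption).
import numpy as np

def edge_is_concave(n1, into_f2, tol=1e-8):
    """n1      : outward unit normal of face F1.
    into_f2 : unit vector lying in face F2, perpendicular to the shared edge,
              pointing from the edge into the interior of F2.
    A positive projection of into_f2 onto n1 means F2 bends toward the material
    side of F1, i.e. the shared edge is concave."""
    return float(np.dot(n1, into_f2)) > tol
```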

2.5. Recognition System Framework

Based on PointNet and the 3D point cloud data of machined surfaces, this article builds a deep-learning-based automatic recognition framework for machining features. The main steps are as follows. (1) Construct 3D point cloud data samples based on feature machining surfaces. Considering that deep learning needs as much data as possible, this study uses the STL library of 24 feature types provided by FeatureNet [7] as the basic data source and builds the point cloud sample library from it by feature-surface extraction and point cloud sampling. Training a neural network well depends on both the quality and the quantity of the data, so techniques such as data normalization and data augmentation are applied. Finally, the samples are randomly split into training and test sets to form the final machining-feature point cloud dataset. (2) Construct and train the CNN machining feature recognizer. On the basis of PointNet, a CNN deep learning model is designed and trained on the point cloud dataset; according to the results on the training and test sets, the network structure is improved and the parameters are adjusted, and the network model with the best test results is used as the machining feature recognizer. (3) Recognize features with the CNN-based recognizer. The system first performs boundary segmentation on the part to be recognized: according to the formation principle of concave machining features, the faces of the part are divided into groups of machined faces (the specific segmentation algorithm will be introduced in other work). During recognition, each group of machined faces is treated as one recognition unit: the face set is first converted to a point cloud, then transformed into a normalized input through the data sample generation method of this article, and finally classified automatically by the neural network, which outputs the recognition result.
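The following sketch outlines this three-stage flow. The boundary segmentation and surface sampling steps are supplied by the caller through a callback, since the segmentation algorithm is deferred to other work; only the Keras prediction call is standard TensorFlow usage, and the function names are illustrative.

```python
# Sketch of the three-stage recognition flow (helper callback is hypothetical).
import numpy as np
import tensorflow as tf

def recognize_features(face_sets, sample_fn, recognizer: tf.keras.Model,
                       class_names, n_points=512):
    """face_sets: iterable of machined-face groups from boundary segmentation.
    sample_fn(face_set, n) -> (n, 3) array of surface sample points."""
    results = []
    for face_set in face_sets:
        cloud = np.asarray(sample_fn(face_set, n_points), dtype=np.float32)
        cloud -= cloud.mean(axis=0)                           # centre at the origin
        cloud /= np.linalg.norm(cloud, axis=1).max() + 1e-9   # scale into the unit sphere
        probs = recognizer.predict(cloud[np.newaxis, ...], verbose=0)
        results.append(class_names[int(np.argmax(probs))])    # predicted feature type
    return results
```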

3. Our Proposed Method

Compared with traditional algorithms, the benefit of deep learning is that features can be learned automatically from large amounts of data, without hand-crafted feature engineering. In this section, deep-learning-based point cloud classification algorithms are divided into two categories, projection-based methods and methods that operate directly on the raw point cloud, and representative, influential methods of each type are reviewed. A classical CNN performs convolution on regular, ordered, structured 2D images; for irregular, unstructured point cloud data, projection-based methods first project the points into a specific intermediate representation and then extract features from it. The methods in this category are further divided into two groups: voxel-grid-based methods and multi-view-based methods.

3.1. Voxel-Grid-Based Methods

Drawing on the success of convolutional neural networks in semantic understanding of 2D images, and on the similarity between the voxel grid structure and image data, researchers voxelize unstructured point cloud data so that it can be processed by 3D CNNs.

3.2. Voxelization

Voxelization of a point cloud uses a volumetric occupancy grid to describe the spatial occupancy state as a 3D grid.
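A minimal sketch of such a voxelization is shown below; the 32³ resolution is an assumption made for illustration.

```python
# Minimal occupancy-grid voxelization of a point cloud (illustrative sketch).
import numpy as np

def occupancy_grid(points, resolution=32):
    """Return a (resolution, resolution, resolution) binary grid where a cell
    is 1.0 if at least one point falls inside it."""
    pts = np.asarray(points, dtype=np.float32)
    mins = pts.min(axis=0)
    scale = (pts.max(axis=0) - mins).max() + 1e-9       # uniform scale keeps proportions
    idx = ((pts - mins) / scale * (resolution - 1)).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```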

3.3. Representative Models

The VoxNet model makes full use of point cloud information to process large point cloud data efficiently by combining a volumetric occupancy grid with a 3D convolutional network. The VoxNet structure has few parameters and a simple form, and it classifies point clouds through stacked convolutional layers. Inspired by deep belief networks, 3D ShapeNets was proposed as a model based on convolutional deep belief networks; it represents the geometry of a point cloud as a binary probability distribution on a voxel grid and exploits convolutional weight sharing to alleviate the problem of an excessive number of parameters, so the model can be trained robustly. To address the sparsity of point clouds and the computational expense mentioned above, scholars have attempted to replace fixed-resolution voxel grids with octree structures, such as the OctNet [10] and O-CNN [1] networks. OctNet adopts a hybrid grid-octree structure to divide space hierarchically, and each leaf node corresponds to a storage element; this approach not only avoids heavy computation and huge memory consumption but also allows deeper networks. Inspired by OctNet, the authors of [11] proposed O-CNN, which introduces octree-based feature representations into 3D CNNs and reduces the computational burden, improving efficiency to a certain extent. KD-tree structures, similar in spirit to octrees, have also been used in classification models; KD-Net leverages the KD-tree structure to combine features in a bottom-up manner, and since the network does not rely on a convolutional structure, pooling operations can be eliminated. However, the method is sensitive to rotations of the point cloud, and new point cloud data must be reorganized into a KD-tree, which increases the computational burden. Although a flexible, well-chosen tree structure can reduce memory waste during computation, the training process does not fully utilize the local geometric structure, and the boundaries of voxels can affect the computation results. MeshCNN combines a mesh representation with a classical CNN: it defines convolution on mesh edges and optimizes the pooling step by collapsing edges with small feature values, so that salient shape information is retained while redundant detail is discarded and classification can be carried out automatically.
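For reference, a VoxNet-style 3D CNN operating on such an occupancy grid can be sketched as follows; the layer sizes are assumptions and this is not the original VoxNet configuration.

```python
# Small VoxNet-style 3D CNN over an occupancy grid (layer sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers

def voxnet_like(resolution=32, num_classes=24):
    grid = tf.keras.Input(shape=(resolution, resolution, resolution, 1))
    x = layers.Conv3D(32, 5, strides=2, activation="relu")(grid)   # coarse 3D features
    x = layers.Conv3D(32, 3, activation="relu")(x)
    x = layers.MaxPooling3D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(grid, outputs)
```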

As one of the most successful deep learning models, convolutional neural networks excel in 2D image tasks such as object detection, semantic segmentation, and edge detection. The hierarchical structure of CNNs provides inspiration for extracting high-dimensional features from 3D data, and the classification and segmentation tasks in point cloud processing can be achieved by adapting the CNN structure. One study built a one-dimensional fully convolutional classification network that directly uses the 3D coordinates of the point cloud data and the corresponding spectral features. Wang et al. designed a deep neural network, DNNSP, that uses distance-based spatial pooling and max pooling to convert point-based features into cluster-based features and then completes classification through an MLP. The authors of [10] argued that the neighbourhood information of local regions is processed redundantly when PointNet++ handles point clouds, so they embedded annular convolutions into a hierarchical network to form A-CNN: the local neighbourhood features around each point are extracted by ring convolution, and the global shape and local features are aggregated in subsequent point cloud processing to complete the classification task. Because the ring construction avoids counting neighbourhood points repeatedly, A-CNN performs stably in large-scale scene applications. The irregular nature of point clouds also makes them different from 2D images for convolution: each pixel in a 2D image has a fixed position, whereas the positions of points can vary arbitrarily, so convolution applied to points at different positions is affected by the input order of the point cloud, which in turn affects the convolution result. PointCNN attributes this obstacle to the input order of the points and defines a χ-transformation in the network that converts the order-dependent input into an order-independent form: the network uses the χ-transformation to reorganize the input point cloud coordinates, learns shape information through MLPs, and then processes the χ-transformed features further. Dilated convolutions are employed in the classification network to preserve network capacity and enlarge the receptive field. PointCNN addresses the local region feature extraction problem for point cloud classification, but the learned χ-transformation is still far from ideal permutation invariance, and the method needs further improvement.

4. Experimental Results and Analysis

Training a neural network requires a large amount of sample data. According to the sample generation scheme described above, a large number of 3D point cloud samples containing machining features are needed, and building such a set (more than 10,000 samples) by modelling parts one by one would be prohibitively laborious. This paper therefore uses the feature library provided by FeatureNet [7], which contains 24,000 STL files generated with random parameters for the 24 machining feature types. Based on these STL models, the point cloud sample library is obtained as follows. FeatureNet generates each model by cutting a feature with random parameters out of a cube with a side length of 0.1 m, so the surfaces created by removing material are the feature machining surfaces. Because the 3D models are stored in STL format, a feature machining surface is a subset of the triangular facets of the model, and a simple screening rule can remove the facets that do not belong to feature faces: if the vertices of a triangular facet all lie on a face of the original cube (a shared coordinate of 0 or 0.1 m, i.e., 10 cm), the facet cannot be a feature machining facet. The neural network in this article learns directly from point clouds, so the quality of the sampled points directly affects the quality and recognition accuracy of the network. This article uses the uniform point cloud sampling function provided by the PCL (Point Cloud Library) to sample each feature surface set uniformly, with 800 points sampled per feature surface set. (The sampled point cloud of a rectangular through-slot machining surface is shown as an example.) In practice, a machining feature may appear anywhere on a part, in any pose and at any size; moreover, the neural network requires a fixed number of input points, and sampling feature surfaces of different sizes produces point clouds of different densities. Therefore, to enable the network to recognize features in various poses and positions and to remove the influence of density differences, this paper normalizes each point cloud so that it is centred at the origin with a radius of 1, and augments the data by rotating it by 90°, 180°, and 270° around two of the coordinate axes. Through the above processing, training data for the 24 types of machining features are obtained: each feature type has 1,000 samples, giving 24,000 point cloud samples of 800 points each. As explained below, the number of points actually fed to the network is 512, so to make full use of each sample, its 800 points are randomly subsampled into 6 subsets of 512 points, yielding a final training dataset of 144,000 samples.
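The normalization and augmentation steps described above can be sketched as follows; the 800-point input, the 512-point subsets, and the right-angle rotations follow the text, and everything else is an assumption.

```python
# Sketch of the sample preparation: unit-sphere normalization, right-angle
# rotation augmentation, and random 512-point subsets of the 800 sampled points.
import numpy as np

def normalize_to_unit_sphere(points):
    pts = np.asarray(points, dtype=np.float32)
    pts = pts - pts.mean(axis=0)                               # centre at the origin
    return pts / (np.linalg.norm(pts, axis=1).max() + 1e-9)    # scale radius to 1

def rotate(points, axis, angle_deg):
    """Rotate the cloud about one coordinate axis (0 = x, 1 = y, 2 = z)."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    i, j = [k for k in range(3) if k != axis]
    rot = np.eye(3, dtype=np.float32)
    rot[i, i], rot[i, j], rot[j, i], rot[j, j] = c, -s, s, c
    return points @ rot.T

def make_training_samples(points_800, n_subsets=6, n_points=512, rng=None):
    """Draw n_subsets random clouds of n_points points from one sampled surface."""
    rng = rng or np.random.default_rng()
    cloud = normalize_to_unit_sphere(points_800)
    return [cloud[rng.choice(len(cloud), n_points, replace=False)]
            for _ in range(n_subsets)]
```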
PointNet is a neural network that can directly consume unordered 3D point data. In this article, a point cloud sampled from a machining-feature face set is represented as a set of 3D points {P_i}, where each point P_i is described by its (x, y, z) coordinates. On the basis of the PointNet framework, this paper builds a CNN architecture that mainly consists of five parts: a data alignment module (T-Net), convolutional layers, a pooling layer, fully connected layers, and an output layer.

A classification network is essentially a mapping from input feature vectors to category vectors. In this paper, the input is a set of n 3D points, that is, an n × 3 matrix; in practice the data are fed to the network in batches during training, so the input tensor has the shape batch × n × 3. To ensure that the original geometric features and the corresponding category labels remain unchanged after the point cloud undergoes a rigid geometric transformation, a data alignment layer (T-Net) is added before feature extraction. In this layer, the point cloud is first mapped to a feature vector through shared MLPs, a nine-dimensional vector is then produced and reshaped into a 3 × 3 matrix, and this matrix is multiplied with the input point cloud to align it.
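Putting the pieces together, a PointNet-style recognizer consistent with the five-part structure described above can be sketched in TensorFlow/Keras as follows. Layer widths are assumptions, and the full PointNet additionally biases the predicted matrix toward the identity and regularizes it toward orthogonality, which is omitted here for brevity.

```python
# PointNet-style recognizer sketch: T-Net alignment, shared convolutions,
# max pooling, fully connected layers, and a 24-way softmax output.
import tensorflow as tf
from tensorflow.keras import layers

def tnet(points):
    """Predict a 3 x 3 alignment matrix from the (batch, n, 3) input and apply it."""
    x = layers.Conv1D(64, 1, activation="relu")(points)
    x = layers.Conv1D(128, 1, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dense(9)(x)                                # nine values ...
    transform = layers.Reshape((3, 3))(x)                 # ... reshaped into 3 x 3
    return layers.Dot(axes=(2, 1))([points, transform])   # align the input cloud

def feature_recognizer(num_points=512, num_classes=24):
    inputs = tf.keras.Input(shape=(num_points, 3))        # batch x n x 3 input
    x = tnet(inputs)                                      # data alignment (T-Net)
    x = layers.Conv1D(64, 1, activation="relu")(x)        # shared per-point MLP
    x = layers.Conv1D(128, 1, activation="relu")(x)
    x = layers.Conv1D(1024, 1, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)                    # global feature (pooling layer)
    x = layers.Dense(512, activation="relu")(x)           # fully connected layers
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)   # output layer
    return tf.keras.Model(inputs, outputs)
```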

4.1. Implementation

Experiments are conducted on the network model described above. The experimental dataset contains 144,000 samples, the number of points per sample is fixed at 512, and each sample carries a label indicating its feature type. The data are divided into subsets of 65%, 15%, and 20%, used as the training set, validation set, and test set, respectively: the model is trained on the training set, the training effect is checked on the validation set, and the final performance is evaluated on the test set. The experimental environment is a high-performance notebook running Windows 10 with an AMD R5-4600H CPU, a GTX 1650 4 GB GPU, and 16 GB of memory, and the network model is implemented on the TensorFlow 2.1 deep learning framework. The recognition results for the 24 machining feature types are recorded, and the method in this article is compared with FeatureNet and MsvNet; because the operating environments differ, the comparison of training time is for reference only. The CNN in this paper has only 64,824 training parameters and converges after 109 minutes of training; after 106 epochs, the validation accuracy is stable. The results on the test set show that the overall recognition accuracy for the 24 feature types is 99.12%, reaching an advanced level. Compared with FeatureNet [7] and MsvNet [8], the method in this article reduces the number of training parameters by factors of 485 and 1,971, respectively, and the training time by factors of 3.6 and 8, respectively. These results indicate that both the space complexity and the time complexity of this method are lower than those of the other two methods. The results are shown in Table 1.
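A training and evaluation sketch matching the reported setup (65/15/20 split, 512-point clouds, 24 classes) is given below. The optimizer, batch size, and epoch count are assumptions, and build_model could be the feature_recognizer sketch shown earlier.

```python
# Training/evaluation sketch for the reported 65/15/20 split (hyperparameters assumed).
import numpy as np
import tensorflow as tf

def train_recognizer(clouds, labels, build_model, epochs=120, batch_size=32):
    """clouds: (N, 512, 3) float32 array; labels: (N,) integer class ids."""
    n = len(clouds)
    idx = np.random.permutation(n)
    n_train, n_val = int(0.65 * n), int(0.15 * n)
    tr, va, te = np.split(idx, [n_train, n_train + n_val])

    model = build_model(num_points=512, num_classes=24)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(clouds[tr], labels[tr],
              validation_data=(clouds[va], labels[va]),
              epochs=epochs, batch_size=batch_size)
    return model, model.evaluate(clouds[te], labels[te])   # held-out test accuracy
```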

The face sets of the part mentioned above are converted into machined-surface point cloud data through the data generation method proposed in this article and then fed into the recognizer. In the test, except for a rectangular through slot that was mistakenly recognized as a triangular through hole, the other five features were identified accurately. The misrecognized through slot had an abnormally shortened bottom face caused by the feature intersection, a case not covered by the training dataset; it was identified correctly after the face set was further divided in preprocessing.

5. Conclusions

Based on PointNet, this article proposes a machining feature recognition method that applies a CNN to machining-surface point cloud data. The work includes the construction of 3D point cloud data samples of machining surfaces adapted to the learning of the machining feature recognition CNN, the design and improvement of the network structure of the recognition model, and the optimization of the recognition parameters. Features can still be recognized accurately even when their machined surfaces are damaged, realizing robust recognition of machined-surface point cloud data. The network model structure is highly versatile, with high recognition accuracy and fast recognition speed. Compared with voxel-based CNN structures and multi-view methods, the method in this article performs better.

Data Availability

The data can be obtained on request from the corresponding author. The figures and tables used to support the findings of this study are included in the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to sincerely thank the technicians who have contributed to this research. (1) This research work was supported by the Basic Public Welfare Research Project of Zhejiang Province (LGG19E050010). (2) This research work was also supported by the Science and Technology Program of Jinhua, China (2020-1-004a). (3) This research work was supported by the Natural Science Foundation of Zhejiang Province (LQ21E050006).