Recent Progress in Materials (ISSN 2689-5846) is an international peer-reviewed Open Access journal published quarterly online by LIDSEN Publishing Inc. This periodical is devoted to publishing high-quality papers that describe the most significant and cutting-edge research in all areas of materials science. Its aim is to provide timely, authoritative introductions to current thinking, developments, and research in carefully selected topics. It also aims to enhance the international exchange of scientific activities in materials science and technology.
Recent Progress in Materials publishes original, high-quality experimental and theoretical papers and reviews on basic and applied research in the field of materials science and engineering, with a focus on the synthesis, processing, constitution, and properties of all classes of materials. Particular emphasis is placed on microstructural design, phase relations, computational thermodynamics, and kinetics at the nano to macro scale. Contributions may also focus on progress in advanced characterization techniques.

Main research areas include (but are not limited to):
Characterization and evaluation of materials
Metallic materials
Inorganic nonmetallic materials
Composite materials
Polymer materials
Biomaterials
Sustainable materials and technologies
Special types of materials
Macro-, micro-, and nanostructure of materials
Environmental interactions, process modeling
Novel applications of materials

Publication Speed (median values for papers published in 2023): Submission to First Decision: 5.3 weeks; Submission to Acceptance: 12.6 weeks; Acceptance to Publication: 7.5 days (1-2 days of FREE language polishing included)

Open Access Original Research

Automated Quality and Process Control for Additive Manufacturing using Deep Convolutional Neural Networks

Yaser Banadaki 1,*, Nariman Razaviarab 1, Hadi Fekrmandi 2, Guoqiang Li 3,4, Patrick Mensah 4, Shuju Bai 5, Safura Sharifi 6

1. Department of Computer Science, Southern University and A&M College, Baton Rouge, Louisiana 70813, USA

2. Department of Mechanical Engineering, South Dakota School of Mines and Technology, Rapid City, SD 57701, USA

3. Department of Mechanical & Industrial Engineering, Louisiana State University, Baton Rouge, LA 70803, USA

4. Department of Mechanical Engineering, Southern University and A&M College, Baton Rouge, Louisiana 70813, USA

5. Department of Computer Science and Information Technology, Clayton State University, Morrow, GA 30260, USA

6. Grainger College of Engineering, Physics, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, USA

Correspondence: Yaser Banadaki

Academic Editor: Seyed Ghaffar

Special Issue: Additive Manufacturing Technology in Construction

Received: November 24, 2021 | Accepted: February 23, 2022 | Published: February 28, 2022

Recent Progress in Materials 2022, Volume 4, Issue 1, doi:10.21926/rpm.2201005

Recommended citation: Banadaki Y, Razaviarab N, Fekrmandi H, Li G, Mensah P, Bai S, Sharifi S. Automated Quality and Process Control for Additive Manufacturing using Deep Convolutional Neural Networks. Recent Progress in Materials 2022; 4(1): 005; doi:10.21926/rpm.2201005.

© 2022 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.

Abstract

Additive Manufacturing (AM) is a crucial component of the smart manufacturing industry. In this paper, we propose an automated quality grading system for the fused deposition modeling (FDM) process, one of the major AM processes, using a real-time deep convolutional neural network (CNN) model. The CNN model is trained offline using images of the internal and surface defects in the layer-by-layer deposition of materials and tested online by studying its performance in detecting and grading failures in the AM process at different extruder speeds and temperatures. The model demonstrates an accuracy of 94% and a specificity of 96%, as well as above 75% in the F-score, the sensitivity, and the precision, for classifying the quality of the AM process into five grades in real-time. The high performance of the model is achieved not only for the values usually used for printing temperature and printing speed but also for much higher values. The proposed online model adds an automated, consistent, and non-contact quality control signal to the AM process. The quality monitoring signal can also be used by the AM machine to stop the AM process, eliminating the sophisticated inspection of the printed parts for internal defects. The proposed quality control model ensures reliable parts with fewer quality hiccups while improving performance in time and material consumption.

Keywords

Additive manufacturing; real-time monitoring; deep convolutional neural network; quality and process control

1. Introduction

Additive Manufacturing (AM) technology is a crucial component of the smart manufacturing system (also known as Industry 4.0 [1,2,3,4]), enabling flexible configuration and dynamic processes [5] that quickly adapt products to new demands and potentially change traditional supply chains. For instance, the design of jewelry and artifacts can reap enormous benefits from AM technologies [6]. Previously established applications of AM demonstrate its interdisciplinary nature and its lasting appeal, as it continues to be applied throughout various sectors ranging from the fabrication of physical manufacturing prototypes [7,8] to health care and biological products [9,10]. In an intelligent manufacturing system, machines and robots must provide a high automation level with the ability to process information [11,12], visualize performance in real-time [13], and enable a predictive maintenance system [14]. Despite the tremendous potential of AM for producing custom-designed parts on demand with minimal material waste [15], widespread adoption of AM is hampered by poor process reliability and throughput stemming from the lack of condition-awareness and automation in the AM process. The parts built using current state-of-the-art AM machines have noticeable inconsistency and unpredictable mechanical properties [16,17].

Future AM machines must be intelligent systems that can perform self-monitoring, self-calibration, and quality self-control in real-time. The gap between the smart factory and existing manufacturing systems needs to be bridged, paying critical attention to the automation, flexibility, and reconfigurability of AM machines in a computer-integrated manufacturing system. As such, researchers have studied how to improve the sensing capability of the AM process and introduced feedback signals accessible to the machine or user. A sensor-based predictive model was proposed [18] to assess the surface integrity of additively manufactured parts, and a monitoring system was presented [19] to predict strain and temperature profiles. Such sensor-based monitoring systems require multiple sensors to precisely monitor and recognize product quality during the AM process. An analytical expression for surface roughness prediction was also developed [20] using geometrical information to investigate the effects of static machine setting parameters on part quality. However, that predictive model did not consider the layer-by-layer nature of the AM process or the in-process quality variation during the AM process. Monitoring techniques using an optical camera were presented to detect the defects caused by residual stress [21], study the force required for the proper filament feed rate [22], and control the flow temperature and pressure of the Fused Filament Fabrication (F.F.F.) nozzle [23]. A monitoring system using X-ray computer tomography was developed [24] to study the effect of nozzle blockage on the formation of large pores in built parts.

Machine learning (ML) [25] has also been explored by researchers to create predictive models for detecting defects in the AM process for various applications. An ML model was developed [8] to control powder quality in the process by using computational data obtained from the Discrete Element Method. Research demonstrates that an ML model can also characterize, compare, and analyze powder feedstock materials and micrographs in the metal AM process [26]. The quality of 3D inkjet printing was assessed with an ML model for designing electronic circuits [27]. As inappropriate parameter settings or configurations could lead to build defects in the AM process, such as large pores and rough surfaces, ML models were developed to optimize the parameters of a Binder Jetting (B.J.) process [28] and to locate an optimal region of machine setting combinations at high temperature, low layer thickness, and high feed/flow rate ratio [29]. In-situ monitoring and diagnosis of the AM process were developed using conventional data-driven models such as the support vector machine (SVM) [30,31,32], the hidden semi-Markov model (HSMM) [33], and clustering methods [34].

In this paper, we employ a deep learning model to develop an automated and accurate quality grading system for the additive manufacturing process. The residual pressure of the melted filament within the extrusion chamber may cause the material to overfill or underfill, which could lead to visible surface defects and/or invisible internal defects and, consequently, degradation in the quality and mechanical performance of the printed parts. The quality control (Q.C.) system can predict and flag manufacturing failures as soon as they happen, so that the AM process can be stopped or adjusted, leading to a better chance of reaching 100 percent yield. To implement the real-time Q.C. system, we first collected printing data for offline training of the predictive model and then evaluated and tested the predictive model for online quality monitoring of the AM process. The signal is used as feedback for the machine to decide to "go" or "no-go" based on the quality of the ongoing printing process, possibly eliminating the need to inspect parts after they are entirely built [14,35]. As post-process inspection is an expensive and time-consuming quality control step, adding an automated in-process quality control that keeps track of printed interlayers can greatly reduce the waste of time and materials. The deep learning-based Q.C. system provides an automated, fast, consistent, and more precise measure of printing quality to optimize the AM process for better parts with fewer quality hiccups, saving time and materials. The proposed model of the AM process presented in this paper can serve as a proof-of-concept for any type of AM machine, such as 3D bio-printers or metal and liquid-based printers.

The rest of the paper is organized as follows: Section 2 presents the experimental arrangement, data collection, and training process used to develop an efficient predictive model for automated fault detection and quality classification in the AM process. Section 3 discusses the performance measures for detecting and classifying failures in the AM process, including the model's prediction of AM quality for different printing speeds and extruder temperatures, the two important controllable parameters with the most significant impact on the quality of built parts. The last section draws summarizing conclusions and outlines potential endeavors for future work.

2. Developing a Deep Convolutional Neural Network Model

Figure 1 shows the procedure for developing a data-driven quality predictive model of the 3D printing process. This approach implements the automated fault detection and quality classification model by collecting the process data and training a deep convolutional neural network.


Figure 1 Procedure to implement the fault detection and quality classification system for the AM process, including collecting data, training the CNN model, and evaluating the model performance.

2.1 Data Collection from the AM Process

An image acquisition system is established to capture images of each layer of the product as it is being printed. The images are used to prepare the dataset for training the non-contact quality predictive model using a deep learning-based machine vision system. In AM technology, the geometry of parts is formed layer by layer by joining filament materials. During this process, the geometrical deviation of each layer could affect the quality of the whole part, as shown in Figure 2(a). The deep learning-based AM quality monitoring system can capture frames of two main failure categories: surface and internal defects resulting from overfill and underfill of material, which may be visible on the part surface as excess material or voids, respectively [32]. The residual pressure of the melted filament within the extrusion chamber may lead to excess material deposition and, thus, overfilling. To avoid these kinds of errors, a "go" or "no go" decision can be made at each point of the printed layer based on a feedback signal generated in real-time from the quality of the printing process [36]. The deep learning-based Q.C. system has recently emerged as a monitoring technology that can rapidly and automatically process a huge number of samples for real-time control of product profiles in manufacturing processes.


Figure 2 (a) Examples of defects in the printed objects. (b) Experimental setup, including the 3D printer and the AM monitoring system. The training process is conducted on an Intel® i5 desktop without parallel processing devices. (c) Sample frames of printed parts captured for training the CNN model. Data are collected at six different printing speeds and four different printing temperatures. The collected data are annotated into five classes based on the printing quality, from A (highest quality) to E (lowest quality). The crosses show printer settings that completely fail to print any objects. Note: the figures show the quality of the finished surface of the printed specimens, while the training database includes the hidden defect information in the interlayers of the objects, which is invisible to human inspectors and can be very important for the mechanical performance of the printed parts.

Figure 2(b) shows the experimental arrangement, including the Creality3D Ender printer, a Lumens DC125 camera, and the real-time Q.C. model developed as described in Section 2.2. The videos of the AM process are captured by the high-definition C.C.D. camera to produce the training data. The data is collected by filming the build of every layer in the AM process and converting the videos to frames. As such, the training data includes the occurrences of voids (internal bubbles) within the part that affect its structural integrity and cannot be easily eliminated by post-processing [37]. The specimens are fabricated on a commercial desktop 3D printer (Creality3D Ender-3) to comply with modern trends in the development of additive technologies for personal applications, which are within an affordable price range and are often able to produce parts that find applications in various walks of life.

The printer operates on the fused deposition modeling method, in which a thermoplastic filament is heated to a semi-liquid state and deposited layer by layer on a heated bed to construct a 3D object [38]. The printer uses 1.75 mm Polylactic Acid (P.L.A.) build material, which makes the solid-to-liquid transition by melting at the extrusion temperature. The dataset is generated by printing objects of different shapes and angles at varying printing speeds (in mm/s) and extruder temperatures (in °C), two crucial machine settings [39]. The printer allows the printing speed to be adjusted from 50 mm/s to 1000 mm/s and the printing temperature from 185 °C to 260 °C. To collect the training and test datasets, we print objects at six printing speeds (50 mm/s, 100 mm/s, 200 mm/s, 400 mm/s, 800 mm/s, and 1000 mm/s) and four printing temperatures (185 °C, 200 °C, 230 °C, and 260 °C), resulting in 24 speed-temperature settings in total.

The speed and temperature of the AM process significantly affect the quality of the printed parts, as shown in the example in Figure 2(c). Increasing the speed and decreasing the temperature of the AM process degrade the quality of parts and produce different scenarios of defect generation in the AM process. The printer fails to print objects at the temperature of 185 °C for the three speeds above 200 mm/s because the extrusion temperature needs to be higher to melt the plastic quickly enough as it is pulled through the extruder. As such, the videos of the AM process are converted to frames for 21 printer settings. The frame extraction rates are adjusted for different printing speeds to ensure that at least ten frames are captured from the AM process of each layer. Capturing this number of frames enables the inclusion of data from the initial production of internal defects in the layer-by-layer deposition. The training database includes the hidden information of defects, such as excess material or voids, that are mostly invisible to human inspectors but are very critical for the mechanical performance of the printed parts. The images are manually inspected to exclude those in which the printer nozzle blocks the proper view of the printing area. After normalizing the intensity of the images, 5000 images are chosen for the training process (238 images for each printer setting), and 100 images per class are randomly selected from the 5000 images as test data to evaluate the quality predictive model. The sampled images, with a size of 600×600 pixels, are captured to ensure they are large enough to detect small-sized defects in the AM process. A minimal sketch of this frame-extraction step is given below.
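The following is an illustrative sketch of the extraction step, not the authors' code: the video file names, directory layout, and the seconds-per-layer figure are hypothetical, and OpenCV is assumed for video handling.

```python
import os
import cv2  # OpenCV, assumed available for video capture

def extract_frames(video_path, out_dir, frames_per_layer=10, seconds_per_layer=30.0):
    """Sample frames from an AM process video so that at least
    `frames_per_layer` images are kept per deposited layer.
    `seconds_per_layer` depends on the printer and speed setting (assumed)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    # Keep one frame every (seconds_per_layer / frames_per_layer) seconds.
    step = max(1, int(fps * seconds_per_layer / frames_per_layer))
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            # Resize to the 600x600 patches used for training and
            # normalize the intensity to [0, 1] before saving.
            patch = cv2.resize(frame, (600, 600)).astype("float32") / 255.0
            out = os.path.join(out_dir, f"frame_{saved:05d}.png")
            cv2.imwrite(out, (patch * 255).astype("uint8"))
            saved += 1
        idx += 1
    cap.release()
    return saved

# e.g. extract_frames("am_200mms_230C.mp4", "frames/200mms_230C/")
```

Adjusting `seconds_per_layer` per speed setting mirrors the speed-dependent extraction rates described above.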

2.2 Training CNN Model

Conventional machine-learning techniques need to learn effective features to extract feature vectors from input patterns through a feature extraction (F.E.) algorithm. The F.E. procedure requires human intervention in the training process, which may affect the accuracy of the classification algorithm. In this paper, we use convolutional neural network (CNN) architectures within a deep learning framework [40] that address the shortcomings of existing machine learning approaches. A deep CNN performs multilayer convolution to extract features and combine them automatically at the same time in a single network. The deep CNN extracts spatial features in low-level layers that are then passed to aggregation layers (convolutional, pooling, etc.) and additional layers of filters that extract higher-order features (patterns). The higher-order features are combined at the top layers, and fully connected (F.C.) layers in the output part of the network perform image interpretation and classification, as shown in Figure 3(a). As feature extraction and classification are performed simultaneously in one neural network, features suited to the classification are learned automatically, which further improves performance.
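To make this structure concrete, the following is a minimal, illustrative Keras sketch of such a network (not the architecture used in this study, which is Inception-v3, as described below): stacked convolution and pooling stages extract and aggregate spatial features, and fully connected layers with a softmax output perform the five-grade classification.

```python
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(600, 600, 3), num_classes=5):
    """Toy CNN mirroring the conv -> pool -> FC -> softmax pipeline."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Low-level spatial feature extraction
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        # Aggregation stages extracting higher-order patterns
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        # Global pooling keeps the fully connected head small
        layers.GlobalAveragePooling2D(),
        # Fully connected layers interpret and classify the features
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```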


Figure 3 Training and testing procedure of deep neural networks. (a) Training of convolutional neural networks, including procedures to extract spatial features that are passed to aggregation layers (averaging and pooling), followed by extraction of higher-order features that are combined at the top layer for AM fault detection and quality classification; (b) The architecture of Inception-v3 used in this study, which has an initial module, auxiliary and final classifiers, and 11 inception modules, where each module consists of pooling layers and convolutional filters with rectified linear units (ReLU) as the activation function; and (c) Schematic of the deep neural network, including an input layer, multiple hidden layers, and an output layer that classifies test images of the AM process into five classes of AM quality in real-time.

The image patches of the AM process are fed into a deep CNN for efficient AM quality detection. The composition of the hidden layers is defined by the number of convolution and pooling layers, the number of nodes in a convolution layer, and the kernel sizes of the pooling and convolution masks. The image size and the data size determine the optimal number of layers in the deep CNN, such that the size of the images and the number of classes affect the layer masks and the number of nodes, respectively. The performance and reliability of a CNN are directly associated with the amount of sample data and the depth of the layers. Without a public dataset, it is difficult to find AM images suitable for the different scenarios of defect creation in the AM process, and thus increasing the depth of the network with a limited number of sample images leads to over-fitting, further lowering the reliability of the model. Increasing the size of the images allows the depth of the neural network to be expanded by adding more layers, possibly improving CNN performance. However, more layers lead to an exponential increase in computation cost, so the repetitive convolution-pooling structure must be parallelized effectively to reduce the computing time.

Our model uses the ReLU (Rectified Linear Unit) activation function for the input and hidden layers, while a logistic regression (softmax) function is used in the final layer to generate a normalized exponential distribution from which the learned probabilities and predicted labels are obtained. ReLU overcomes the vanishing gradient problem, allowing models to train faster and perform better; its use in the hidden layers makes a significant difference to the training and inference times of neural networks. The softmax output lies between 0 and 1, allowing it to be interpreted as a predicted probability. A deep CNN has many hidden layers. To learn all the weights in the layers, the loss function is minimized by a batch gradient descent algorithm, with the error propagated through the network by the chain rule. During the training steps, our deep CNN model learns the optimal weights of all layers using forward- and backward-propagation through the neural network architecture. The architecture is employed by retraining a pre-trained model, the Inception-v3 architecture [41,42], as shown in Figure 3(b). Inception-v3 allows deeper networks while keeping the number of parameters from growing too large: it has about 25 million parameters, compared to 60 million in AlexNet. The architecture has an initial module, auxiliary and final classifiers, and 11 inception modules, where each module consists of pooling layers and convolutional filters with the ReLU activation function. The architecture is implemented in the TensorFlow (TF) platform. TF is a collection of workflows to develop and train models, providing the advantages of high availability, high flexibility, and high efficiency. The model is pre-trained on the ImageNet dataset to gain the advantage of transfer learning (T.L.). ImageNet is a large dataset of annotated photographs, over 14 million images, used for computer vision research. The T.L. approach transfers existing knowledge learned in one environment to solve new problems, such that the pre-trained CNN benefits from training with a smaller amount of data on the new problem and a significantly shortened training time.
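A minimal sketch of this transfer-learning setup, assuming the TensorFlow/Keras API (the paper names the TF platform but does not give the training script; the new head layer, the optimizer choice, and the input resizing below are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Inception-v3 backbone pre-trained on ImageNet; the classification top is
# dropped and replaced with a new softmax head for the five AM quality grades.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3),  # patches resized to Inception's
                   pooling="avg")              # native input size (assumption)
base.trainable = False  # reuse ImageNet features; retrain only the new head

model = models.Sequential([
    base,
    layers.Dense(5, activation="softmax"),  # quality grades A-E
])

# Learning rate 0.01 and batch size 32 follow the convergence study below;
# the optimizer choice (SGD) is an assumption.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=test_ds, epochs=280)
```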

To test and optimize the performance of the deep CNN model, we conduct systematic convergence studies with respect to the number of epochs, the learning rate, and the batch size. The training and test accuracies of the quality predictive model versus the epoch for two different learning rates are shown in Figure 4(a). It is observed that both the training and test accuracies increase with the number of epochs, and the higher learning rate accelerates the convergence of the deep CNN model. Another significant observation in Figure 4(a) is that the fluctuations of the test accuracies are very small as the number of iterations increases beyond 150 epochs, showing that the sizes of the datasets and of the deep CNN model are appropriately selected and that the model does not suffer from overfitting.


Figure 4 Convergence of the deep learning model for different numbers of epochs, learning rates, and batch sizes. (a) Training and test accuracy versus epoch for the deep CNN predictive model of printing quality in the AM process at a batch size of 8 with different learning rates of 0.01 and 0.001 (unitless); (b) Accuracy and training time of the deep CNN model versus learning rate at epoch = 280 and batch size = 32; and (c) Accuracy and training time of the deep CNN model versus batch size at epoch = 280 and learning rate = 0.01.

After preparing and preprocessing the dataset, hyperparameter tuning (HT) can optimize the model for the best classification metrics. HT is the process of choosing a set of optimal hyperparameters for a learning algorithm. The learning rate is the most critical hyperparameter for the performance of deep neural networks; it determines the step size taken at each iteration toward a minimum of the loss function. This parameter affects how quickly the quality predictive model of the AM process can converge to its best accuracy. Figure 4(b) shows the plot of the model accuracy and training time versus learning rate. As the learning rate increases, the accuracy peaks and then starts to decrease at 0.01. The maximum accuracy of 91% is achieved at a learning rate of 0.01 for a batch size of 32. While choosing higher learning rates increases the accuracy faster, it can make the optimization process unable to converge to the global minimum of the loss function, lowering the model accuracy. Batch size is also an important hyperparameter to tune in modern deep learning systems. A small batch size allows the model to start learning before having seen all the data, but it may not converge to the global optimum, resulting in lower accuracy of the quality predictive model. As shown in Figure 4(c), the accuracy of our model is 87% for a batch size of 8, while the model accuracy rises to 91% at a batch size of 32 with roughly the same computational time for training the model. Increasing the batch size further does not lead to additional improvement in the accuracy of quality control or to computational speedups on the non-parallel computer systems used in this research; in many cases, depending on the size of the training database, increasing the batch size will decrease model generalization, resulting in lower model accuracy.
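The convergence study can be reproduced in outline with a simple grid sweep. This sketch assumes two hypothetical helpers, `build_model` (returning a compiled Keras model for a given learning rate, e.g., the transfer-learning model above) and `make_datasets` (returning batched training and test sets):

```python
import time

def sweep(build_model, make_datasets, learning_rates, batch_sizes, epochs=280):
    """Grid over learning rate and batch size, recording test accuracy
    and wall-clock training time, as in Figure 4(b) and 4(c)."""
    results = []
    for lr in learning_rates:
        for bs in batch_sizes:
            train_ds, test_ds = make_datasets(batch_size=bs)
            model = build_model(learning_rate=lr)
            t0 = time.time()
            model.fit(train_ds, epochs=epochs, verbose=0)
            train_time = time.time() - t0
            _, acc = model.evaluate(test_ds, verbose=0)
            results.append({"lr": lr, "batch_size": bs,
                            "test_acc": acc, "train_time_s": train_time})
    return results

# e.g. sweep(build_model, make_datasets,
#            learning_rates=[0.001, 0.01, 0.1], batch_sizes=[8, 32, 128])
```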

3. Evaluating the Model for Automated Detection of the AM Quality

We calculate five metrics for the final evaluation of the model's performance in predicting printing quality in the AM process: Precision = TP / (TP + FP), Sensitivity = TP / (TP + FN), F-score = 2TP / (2TP + FP + FN), Accuracy = (TP + TN) / (TP + FP + FN + TN), and Specificity = TN / (FP + TN), where TP, TN, FP, and FN are, respectively, the numbers of true positive, true negative, false positive, and false negative classifications of the printed objects for each class. Precision can be viewed as a measure of a classifier's exactness and sensitivity (or recall) as a measure of its completeness, such that low precision indicates many false positives while low sensitivity indicates many false negatives. Specificity measures the proportion of correctly identified negatives, and the F-score considers both precision and recall, indicating the worst accuracy when it reaches 0 and the best when it reaches 1.
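As a sketch, the five per-class (one-vs-rest) measures defined above can be computed directly from a multi-class confusion matrix; this minimal NumPy helper is illustrative, not the authors' evaluation code:

```python
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = number of test images of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                   # correctly classified counts
    fp = cm.sum(axis=0) - tp           # predicted as the class, but wrongly
    fn = cm.sum(axis=1) - tp           # missed members of the class
    tn = cm.sum() - tp - fp - fn       # everything else
    return {
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "f_score":     2 * tp / (2 * tp + fp + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "specificity": tn / (fp + tn),
    }
```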

3.1 Performance Measures for Classification on Speeds and Temperatures

In this section, we study whether the part-quality detections from our classification model can be used to determine the printing temperature and speed. Extrusion speed and extrusion temperature are two controllable factors in the AM process that have a dominant impact on printing quality. A confusion matrix can be constructed to observe the dominant confusing classes for the classification model. Figure 5 shows the confusion matrix for the printed objects with six different speeds and four temperatures classified into 21 classes (the non-applicable classes are eliminated) [see Figure 2(c)]. The arrow in Figure 5(a), for example, shows where the classification algorithm has difficulty in correctly predicting the classes: forty test images are misclassified because the classification is prone to error at the same high temperature of 260 °C and the close extrusion speeds of 50 mm/s and 100 mm/s. In the confusion matrix, the diagonal represents the correctly predicted number of each observation. The quality of all the test parts is correctly related to the printing temperature of 230 °C and the speed of 800 mm/s, while 68% of the printed test parts are incorrectly predicted at the temperature of 200 °C and the speed of 200 mm/s.
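A hedged sketch of this step, assuming scikit-learn and a trained Keras model (`model`, `test_images`, and `y_true` are illustrative names for the test images and their integer speed-temperature labels):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def speed_temp_confusion(model, test_images, y_true, num_classes=21):
    """Confusion matrix over the 21 speed-temperature classes; the diagonal
    counts correct predictions, and large off-diagonal entries expose the
    dominant confusions (e.g., 50 vs 100 mm/s at 260 degrees C)."""
    y_pred = np.argmax(model.predict(test_images), axis=1)
    return confusion_matrix(y_true, y_pred, labels=np.arange(num_classes))
```

Its output can be fed to the `per_class_metrics` helper sketched earlier to produce the five measures plotted in Figure 5(b-f).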


Figure 5 Performance analysis for identifying the 21 categories annotated based on the temperature and speed settings of the AM process. (a) Confusion matrix showing the exact numbers of correctly classified and misclassified AM images; (b-f) 5 statistic metrics for the model prediction assigning the printed parts to the 21 classes.

Figure 5(b-f) depicts the five measures computed for the performance analysis of the classification relating the printed parts to the extrusion temperature and speed. It is noticed that the accuracies of all the classes are above 93%, and the difference in accuracy among the classes is small, as the accuracy counts the true predictions (TP and TN) among the total validation samples. Besides high accuracy or high specificity, a good classifier must also demonstrate high performance on the other measures. The F-score, sensitivity, and precision of the quality predictive model [Figure 5(b-d)] reveal that these classification measures can be as low as ~0.3 for speeds slower than 200 mm/s. Moreover, the maximum values of these three measures are not larger than 0.71, indicating that the model predicts the part quality with many false positives and many false negatives. The F-scores are lower than 0.5 for 11 of the 21 classes, indicating the low accuracy of the deep CNN model for the classification of the 21 AM process categories, especially for the classes printed at speeds slower than 200 mm/s.

Overall, the performance of the classification model in relating the detected part quality to the printing temperature and speed is not satisfactory because similar qualities of printed parts occur at different settings of printing temperature and speed. The study shows that no distinguishing defect signatures can be detected in the printed parts to tell different settings of printing temperature and speed apart.

3.2 Performance Measures for Predicting the Quality of AM Process

In this section, the classification model is developed to grade the quality of the printed parts, and we study how the printing temperature and speed impact the classification performance. The quality predictive model of the AM process can be employed to detect the significance of the defects in the 3D-printing process and automatically grade the quality of the printing process. The signal can be used as feedback to the machine to decide whether the quality of the printing process is satisfactory for a given application. Figure 6 shows the performance analysis for the deep CNN model classifying the five quality grades (A to E) of the AM process annotated in Figure 2(c). Figure 6(a) shows the confusion matrix for the printed objects with six different speeds and four temperatures classified into five quality grades, which guides us in observing the dominant confusing classes of the developed classification model. Out of 100 test images in each class, the maximum number of correct predictions is 91, for objects printed with quality grade C, and the minimum is 81, for objects printed with quality grade E.


Figure 6 Classification performance analysis for identifying the five quality grades of the AM process. (a) Confusion matrix showing the detailed numbers of correctly classified and misclassified images of the AM process; (b) 5 statistic metrics for the model prediction to classify the printing quality of the AM process into five grades, including F-score, sensitivity, precision, specificity, and accuracy analysis; and (c) Comparison of the F-score, sensitivity, and precision results for the 21-class and 5-class classifications, showing the significant increase in these statistic metrics for the A-E grade classification.

Figure 6(b) depicts the five measures computed for the performance analysis of the classification based on the quality grade of the AM process. It is noticed that the accuracies of the five quality classes of the AM process, A to E, are 96%, 93.6%, 92%, 94.5%, and 94%, respectively. The specificity of the classifier is also high, equal to 98%, 96%, 92%, 97.5%, and 97.5% for grades A to E, respectively. Calculating the F-score, sensitivity, and precision of the predictive model reveals that the average of the three measures is higher than 80%, indicating that the model predicts with few false positives and few false negatives. Figure 6(c) shows how the performance measures of the classification model improve when training over the five quality classes of the AM process. The graph demonstrates the challenge of revealing the printing temperature and speed based on the quality of the printed parts, because two or more temperatures and speeds may result in the same quality of printed parts.

Figure 7 illustrates the model evaluation of the quality prediction of the AM process as a function of printing speed and temperature, including the true quality label of AM quality that corresponds to the annotation of the training data [Figure 7(a)] and the predicted quality label of AM quality [Figure 7(b)]. The comparison of the two graphs indicates that the quality prediction model can reach an average accuracy of 98.2% in region 1, which corresponds to printer settings with low speeds, largely regardless of the printing temperature. It is noticed that the deep CNN model of the AM process predicts the printing quality well when printing at low speeds regardless of the printing temperature of the AM process. It is further noticed that the model also predicts the printing quality well when printing at higher temperatures over a wider range of printing speeds. The printing quality of A (in yellow) is predicted in a smaller area, so that the prediction model downgrades the quality of the AM process in region 3. Moving to higher speeds in region 4, the average accuracy of the deep CNN model decreases to 83%. In region 2, the model upgrades the true quality labels of E and D to D and C, respectively (blue to green), which corresponds to the printing process with very high speeds above 500 mm/s.


Figure 7 Evaluation of the predicted printing quality as a function of printing speed and temperature. (a) True quality label of AM quality versus speed and temperature, following the data annotation in Figure 2(c); and (b) Predicted quality label of AM quality versus speed and temperature, including the regions of high prediction accuracy and quality upgrade error.

4. Limitations and Future Scope of the Work

One limitation of the classification algorithm is its difficulty in correctly predicting the classes based on printing temperature and printing speed. The model misclassifies the collected printing images because no distinguishing defect signatures can be detected in the printed parts to separate different settings of printing temperature and speed. The model developed to relate the detected part quality to printing temperature and speed is not satisfactory because similar qualities of printed parts occur at different settings of printing temperature and speed. Future work on this model could aim to find distinguishable defect signatures in printed parts for different settings of printing temperature and speed.

Another limitation of the classification algorithm concerns the prediction accuracy of the printing quality. For the model developed based on the quality grade of the AM process, the prediction accuracy of the printing quality decreases as the printing speed increases. For instance, the model downgrades the quality of the AM process in region 3 and upgrades the quality of the printed objects in region 2 for printing speeds above 500 mm/s. Future work on this quality grading model could develop methods and algorithms to improve the prediction accuracy of the printing quality at high printing speeds; for instance, one could investigate the influence of defect diversity in the training dataset or increase the amount of training data.

5. Summary and Conclusion

Additive manufacturing has tremendous potential to make custom-designed parts on demand with minimal material waste. However, it is currently hampered by poor process reliability and throughput due to the lack of an in-process feedback signal from the AM process. In most industrial fields, AM defect inspection systems still depend on expensive and time-consuming post-process inspections. Applying machine learning to AM technology could increase the quality and yield of the process, ensuring the technology's continued rise. In this paper, we proposed a deep learning-based predictive model, trained as a deep convolutional neural network, to create real-time grading and monitoring for the AM process. We found that the model is unable to correlate the signature of the AM process to the printing temperature and speed, due to the similar qualities of printed parts at different settings of printing temperature and speed. However, the model trained to classify the AM process into five quality grades reaches an average accuracy of 94% and an average specificity of 96%, and an average accuracy above 80% for the AM process at higher speeds. The proposed predictive model presented in this paper serves as a proof-of-concept for any AM machine. The concept and model of automated and real-time quality monitoring can be used to develop bio-, polymer-, and liquid-based printers in the future. The findings of this paper on automated and real-time quality monitoring improve the speed, material efficiency, reliability, and productivity of the AM process.

Author Contributions

Yaser Banadaki and Safura Sharifi: idea, experiment, concept, writing; Nariman Razaviarab: experiment, simulation; Hadi Fekrmandi, Guoqiang Li, Patrick Mensah, and Shuju Bai: writing, review, and editing.

Funding

This work was supported in part by the National Science Foundation (N.S.F.) (Award Number: 2011900), the Louisiana Board of Regents (BoR), and the Louisiana Consortium for Innovation in Manufacturing and Materials.

Competing Interests

The authors have declared that no competing interests exist.

References

  1. Hermann M, Pentek T, Otto B. Design principles for industrie 4.0 scenarios. Proceedings of the 2016 49th Hawaii international conference on system sciences; 2016 January 5th; Koloa, HI, USA. Piscataway: Institute of Electrical and Electronics Engineers. [CrossRef]
  2. Ammar M, Haleem A, Javaid M, Bahl S, Verma AS. Implementing Industry 4.0 technologies in self-healing materials and digitally managing the quality of manufacturing. Mater Today Proc. 2021. doi: 10.1016/j.matpr.2021.09.248. [CrossRef]
  3. Ammar M, Haleem A, Javaid M, Walia R, Bahl S. Improving material quality management and manufacturing organizations system through industry 4.0 technologies. Mater Today Proc. 2021; 45: 5089-5096. [CrossRef]
  4. Ashima R, Haleem A, Bahl S, Javaid M, Mahla SK, Singh S. Automation and manufacturing of smart materials in additive manufacturing technologies using internet of things towards the adoption of industry 4.0. Mater Today Proc. 2021; 45: 5081-5088. [CrossRef]
  5. Scholz-Reiter B, Weimer D, Thamer H. Automated surface inspection of cold-formed micro-parts. CIRP Ann Manuf Technol. 2012; 61: 531-534. [CrossRef]
  6. Fatma N, Haleem A, Bahl S, Javaid M. Prospects of jewelry designing and production by additive manufacturing. In: Current advances in mechanical engineering. Singapore: Springer; 2021. pp.869-879. [CrossRef]
  7. Bhushan B, Caspers M. An overview of additive manufacturing (3D printing) for microfabrication. Microsyst Technol. 2017; 23: 1117-1124. [CrossRef]
  8. Chua CK, Leong KF. 3D printing and additive manufacturing: Principles and applications (with companion media pack)-of rapid prototyping. Singapore: World Scientific Publishing Company; 2014. [CrossRef]
  9. Ventola CL. Medical applications for 3D printing: Current and projected uses. Pharm Ther. 2014; 39: 704-711.
  10. Ho CM, Ng SH, Li KH, Yoon YJ. 3D printed microfluidics for biological applications. Lab Chip. 2015; 15: 3627-3637. [CrossRef]
  11. Lin YC, Hung MH, Huang HC, Chen CC, Yang HC, Hsieh YS, et al. Development of advanced manufacturing cloud of things (AMCoT)--a smart manufacturing platform. IEEE Robot Autom Lett. 2017; 2: 1809-1816. [CrossRef]
  12. Zhang W, Mehta A, Desai PS, Higgs III CF. Machine learning enabled powder spreading process map for metal additive manufacturing (AM). Proceedings of the 2017 International Solid Freeform Fabrication Symposium; 2017 August 7th-9th; Austin, TX, USA. Austin: University of Texas at Austin.
  13. Xu P, Mei H, Ren L, Chen W. ViDX: Visual diagnostics of assembly line performance in smart factories. IEEE Trans Vis Comput Graph. 2016; 23: 291-300. [CrossRef]
  14. Wang KS, Li Z, Braaten J, Yu Q. Interpretation and compensation of backlash error data in machine centers for intelligent predictive maintenance using ANNs. Adv Manuf. 2015; 3: 97-104. [CrossRef]
  15. Ford S, Despeisse M. Additive manufacturing and sustainability: An exploratory study of the advantages and challenges. J Clean Prod. 2016; 137: 1573-1587. [CrossRef]
  16. Guessasma S, Zhang W, Zhu J, Belhabib S, Nouri H. Challenges of additive manufacturing technologies from an optimisation perspective. Int J Simul Multidiscip Des Optim. 2015; 6: A9. [CrossRef]
  17. Dantan JY, Huang Z, Goka E, Homri L, Etienne A, Bonnet N, et al. Geometrical variations management for additive manufactured product. CIRP Ann Manuf Technol. 2017; 66: 161-164. [CrossRef]
  18. Kousiatza C, Karalekas D. In-situ monitoring of strain and temperature distributions during fused deposition modeling process. Mater Des. 2016; 97: 400-406. [CrossRef]
  19. Li Z, Zhang Z, Shi J, Wu D. Prediction of surface roughness in extrusion-based additive manufacturing with machine learning. Robot Comput Integr Manuf. 2019; 57: 488-495. [CrossRef]
  20. Ahn D, Kweon JH, Kwon S, Song J, Lee S. Representation of surface roughness in fused deposition modeling. J Mater Process Technol. 2009; 209: 5593-5600. [CrossRef]
  21. Holzmond O, Li X. In situ real time defect detection of 3D printed parts. Addit Manuf. 2017; 17: 135-142. [CrossRef]
  22. Greeff GP, Schilling M. Closed loop control of slippage during filament transport in molten material extrusion. Addit Manuf. 2017; 14: 31-38. [CrossRef]
  23. Anderegg DA, Bryant HA, Ruffin DC, Skrip Jr SM, Fallon JJ, Gilmer EL, et al. In-situ monitoring of polymer flow temperature and pressure in extrusion based additive manufacturing. Addit Manuf. 2019; 26: 76-83. [CrossRef]
  24. du Plessis A, le Roux SG, Steyn F. Quality Investigation of 3D printer filament using laboratory X-ray tomography. 3D Print Addit Manuf. 2016; 3: 262-267. [CrossRef]
  25. Michalski RS, Carbonell JG, Mitchell TM. Machine learning: An artificial intelligence approach. Berlin, Heidelberg: Springer; 2013.
  26. DeCost BL, Jain H, Rollett AD, Holm EA. Computer vision and machine learning for autonomous characterization of am powder feedstocks. JOM. 2017; 69: 456-465. [CrossRef]
  27. Stoyanov S, Bailey C. Machine learning for additive manufacturing of electronics. Proceedings of the 2017 40th international spring seminar on electronics technology; 2017 May 10th; Sofia, Bulgaria. Piscataway: Institute of Electrical and Electronics Engineers. [CrossRef]
  28. Chen H, Zhao YF. Learning algorithm based modeling and process parameters recommendation system for binder jetting additive manufacturing process. Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference; 2015 August 2nd; Boston, Massachusetts, USA. New York: American Society of Mechanical Engineers. [CrossRef]
  29. Rao PK, Liu JP, Roberson D, Kong ZJ, Williams C. Online real-time quality monitoring in additive manufacturing processes using heterogeneous sensors. J Manuf Sci Eng. 2015; 137: 061007. [CrossRef]
  30. Li Y, Zhao W, Li Q, Wang T, Wang G. In-situ monitoring and diagnosing for fused filament fabrication process based on vibration sensors. Sensors. 2019; 19: 2589. [CrossRef]
  31. Kim JS, Lee CS, Kim SM, Lee SW. Development of data-driven in-situ monitoring and diagnosis system of fused deposition modeling (FDM) process based on support vector machine algorithm. Int J Precis Eng Manuf Green Technol. 2018; 5: 479-486. [CrossRef]
  32. Wu H, Wang Y, Yu Z. In situ monitoring of FDM machine condition via acoustic emission. Int J Adv Manuf Technol. 2016; 84: 1483-1495. [CrossRef]
  33. Wu H, Yu Z, Wang Y. Real-time FDM machine condition monitoring and diagnosis based on acoustic emission and hidden semi-Markov model. Int J Adv Manuf Technol. 2017; 90: 2027-2036. [CrossRef]
  34. Liu J, Hu Y, Wu B, Wang Y. An improved fault diagnosis approach for FDM process with acoustic emission. J Manuf Process. 2018; 35: 570-579. [CrossRef]
  35. Xue X, Kou YM, Wang SF, Liu ZZ. Computational experiment research on the equalization-oriented service strategy in collaborative manufacturing. IEEE Trans Serv Comput. 2016; 11: 369-383. [CrossRef]
  36. Peng A, Xiao X. Investigation on reasons inducing error and measures improving accuracy in fused deposition modeling. Adv Inf Sci Serv Sci. 2012; 4: 149-157. [CrossRef]
  37. Agarwala MK, Jamalabad VR, Langrana NA, Safari A, Whalen PJ, Danforth SC. Structural quality of parts processed by fused deposition. Rapid Prototyp J. 1996; 2: 4-19. [CrossRef]
  38. Chong L, Ramakrishna S, Singh S. A review of digital manufacturing-based hybrid additive manufacturing processes. Int J Adv Manuf Technol. 2018; 95: 2281-2300. [CrossRef]
  39. Peng AH, Wang ZM. Researches into influence of process parameters on FDM parts precision. Appl Mech Mater. 2010; 34: 338-343. [CrossRef]
  40. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016; 35: 1285-1298. [CrossRef]
  41. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016 June 27th-30th; Las Vegas, NV, USA. Piscataway: Institute of Electrical and Electronics Engineers. [CrossRef]
  42. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. TensorFlow: A system for large-scale machine learning. Proceedings of the 12th USENIX symposium on operating systems design and implementation (OSDI ’16); 2016 November 2nd-4th; Savannah, GA, USA. Berkeley: USENIX Association.