An Edge Extraction Algorithm for Weld Pool Based on Component Tree


Abstract: In order to realize the automation and intelligence of the welding process, visual sensing and image processing of the weld pool edge have become key technologies. Gas metal arc welding (GMAW) requires a large welding current, which produces a very strong arc together with considerable droplet transfer and spatter interference, making it difficult to extract the weld pool edge. A new edge extraction algorithm based on the component tree is proposed in this paper. It segments the image adaptively using local features, effectively retains the useful edge, and removes false edges and noise. Experiments show that the algorithm obtains more accurate edge information. Copyright © 2013 IFSA.



Keywords: Gas Metal Arc Welding, Pool Image, Edge Extraction, Component Tree.

1. Introduction

Since the quality and formation of the welding seam are decisively affected by the size, shape and other characteristics of the weld pool, studying how the weld pool changes during welding is important for welding process automation, seam improvement and welding quality [1]. Fig. 1 shows a typical weld pool image captured by passive optical sensing. In recent years, researchers have extensively studied weld pool edge extraction as well as seam and wire positioning. For instance, Xiangfeng Zhen, Qing Wang and Xiaoguang Liu applied grayscale morphology to weld pool edge extraction and obtained clear results [2]. Jiaxiang Xue, Lin Jia and Haibao Li from the South China University of Technology extracted the weld pool edge and the seam position using the M-band wavelet transform [3]. In 2009, Liu et al. introduced the active contour model into the field of GMAW (gas metal arc welding) and proposed a snake-based active contour algorithm for seam extraction during GMAW [4]. Besides, there are several conventional extraction algorithms, such as the Sobel and Canny operators. These methods work well for welding images with little interference and small spatter, such as those from TIG (tungsten inert gas) welding [5]. For GMAW, however, the image background is considerably more complicated because of the strong arc light, droplets and spatter produced during welding, so it is hard to locate the weld pool edge precisely, and relatively little research has addressed this problem [6, 7].


Because of its high production efficiency, GMAW has attracted great attention in engineering. Recently, Jing Li et al. proposed an edge extraction algorithm for the weld pool based on region positioning and the CV model [8], which preliminarily validated the effectiveness of rough region positioning for weld pool edge extraction. In 2010, Michael Donoser put forward a new edge extraction method, Linked Edges as Stable Region Boundaries [9], whose effectiveness had already been validated on the well-known ETHZ Shape Classes and Weizmann datasets. These two studies opened a new phase of research in this field. According to the characteristics of GMAW, this paper introduces the component tree model into image edge extraction and proposes an edge extraction algorithm based on the component tree. The component tree represents the character and distribution of the image grayscale, and regions are merged according to local features, so that the weld pool edge can be extracted precisely while the false edges and noise caused by arc light are eliminated. Finally, the algorithm is compared with other edge extraction methods, such as the Sobel operator and the CV active contour model, in real experiments.

2. Algorithm Design

The component tree concept was initially proposed for classification and clustering in statistics [10]. The component tree model uses a hierarchical data structure to describe the internal components of a specific object and their relative spatial positions. Reference [11] applies the component tree to image representation and filtering, and this paper uses it for weld pool edge extraction. Edge extraction is a kind of image segmentation: all pixels in the image are divided into two categories, background pixels and edge pixels, which coincides with the purpose and output of the component tree algorithm. The following steps are required to realize image segmentation with a component tree.

2.1. Component Construction and Graph

During image analysis, images are quantized in order to simplify the calculation and speed up the response. For instance, the grayscale ranges from 0 to 255, and these 256 grades can be quantized into a smaller number of gray levels. Fig. 2 shows a 5 x 5 grayscale image. If it is quantized into 10 gray levels, then, according to the principle of connectivity in digital image processing, the set of connected pixels with the same gray level I is defined as a component Vj(I), where j is the component label. By this definition, Fig. 2 contains 11 components V1(1), V2(2), V3(3), V4(4), V5(5), V6(5), V7(6), V8(7), V9(8), V10(8), V11(9), among which V5(5) and V6(5) are the gray level 5 components on the left and right respectively, and V9(8) and V10(8) are the gray level 8 components at the top left and bottom right respectively. A graph is a set of vertices and edges that reflects the connections between vertices. If each component is regarded as a vertex and, following the principle of connectivity, an edge is placed between two vertices whose components are connected, the undirected graph shown in Fig. 3 can be derived from the 11 components in Fig. 2.
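To make this step concrete, the sketch below is a minimal Python illustration (the function names build_components and build_adjacency, the numpy/scipy dependencies and the 4-connectivity choice are assumptions for illustration, not details given in the paper). It quantizes a grayscale image, labels the connected pixel sets of each quantized gray level as components, and records which components touch in order to form the undirected graph.

```python
import numpy as np
from scipy import ndimage

def build_components(img, levels=10):
    """Quantize a grayscale image and label the connected components
    of each quantized gray level (4-connectivity)."""
    img_q = (img.astype(np.int32) * levels) // 256    # map 0..255 to 0..levels-1

    comp_id = np.full(img.shape, -1, dtype=np.int32)  # component index of each pixel
    comp_gray, comp_size = [], []                     # gray level and pixel count per component
    for g in np.unique(img_q):
        labels, n = ndimage.label(img_q == g)         # connected regions of this gray level
        for k in range(1, n + 1):
            mask = labels == k
            comp_id[mask] = len(comp_gray)
            comp_gray.append(int(g))
            comp_size.append(int(mask.sum()))
    return comp_id, comp_gray, comp_size

def build_adjacency(comp_id):
    """Undirected graph over components: an edge whenever two different
    components contain 4-neighbouring pixels."""
    edges = set()
    h, w = comp_id.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):           # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and comp_id[y, x] != comp_id[ny, nx]:
                    edges.add(tuple(sorted((int(comp_id[y, x]), int(comp_id[ny, nx])))))
    return edges
```

Applied to the 5 x 5 image of Fig. 2 quantized to 10 levels, this construction would yield the 11 components and the adjacency relations drawn in Fig. 3.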

2.2. Component Tree Construction

If a graph is connected, then there must be a tree containing all the vertices in the graph [12]. Because of image connectivity, Fig. 3 must be a connected graph, so a tree, called here a component tree, can be obtained from it. The rules for constructing a component tree are as follows: 1) From the bottom to the top: the leaf nodes come first, then layer by layer, until the final root node.

2) In descending order of grayscale: the component with the higher gray level comes first. If two or more components have the same gray level, process them from top to bottom and from left to right.

3) If Vj(I) is the component currently being processed, search the graph for all the components connected with Vj(I) that have a lower gray level, and choose the one with the highest gray level, Vk(I'), as the parent node of Vj(I). All the components are processed in this way.

Following these rules, the component tree shown in Fig. 4 can be generated from Fig. 3. The component tree reflects the spatial relations of all the components in the graph: sibling nodes are not connected to each other, child nodes are connected to their parent nodes, and the gray level of a child node is always higher than that of its parent node.
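The construction rules above can be sketched as follows, continuing the previous illustration (parent links are stored as an index list, -1 marks the root; the function name is again an assumption made here for illustration):

```python
def build_component_tree(comp_gray, edges):
    """Component tree as parent links: the parent of a component is the
    connected component with the highest gray level that is still strictly
    lower than its own (rule 3); the root gets parent -1."""
    n = len(comp_gray)
    neighbours = [[] for _ in range(n)]
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)

    parent = [-1] * n
    for i in sorted(range(n), key=lambda i: -comp_gray[i]):   # descending gray level (rule 2)
        lower = [j for j in neighbours[i] if comp_gray[j] < comp_gray[i]]
        if lower:
            parent[i] = max(lower, key=lambda j: comp_gray[j])
    return parent
```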

2.3. Component Tree Merge

The purpose of image segmentation is to group connected pixels with similar characteristics into one class. In a component tree, the gray level of each component describes its region property, and the overall structure of the tree represents the spatial relations of all the regions.

Thus, image segmentation can be achieved by merging connected and similar components. The rules for merging components are: 1) Start from the bottom of the tree and process the parent nodes one by one. As shown in Fig. 4, V5(5) is the last parent node, so merging starts from this node and continues up to the root node.

2) If Vj(Ip) is the component currently being processed, check its grayscale relation with its child node Vk(Iq). If it satisfies

Iq - Ip < ρ, (1)

where ρ is the merge threshold, then all the child and parent nodes that satisfy this condition are merged to form a new component. Otherwise there is no similarity between the components, they cannot be merged into one region, and these nodes are disconnected from each other.

3) How should the gray level of the newly constructed component be calculated when two components are merged? If the two gray levels are simply averaged, then, when the pixel counts of the two components differ considerably, the component containing more pixels should contribute more to the merged gray level, so a plain average cannot reflect the real gray level of the new component. The merged gray level is therefore calculated by the weighted method

I = (np Ip + nq Iq) / (np + nq), (2)

where np and nq are the numbers of pixels contained in the two components, respectively.

4) Process all of the components one by one until no nodes can be merged. The nodes left in the component tree are then isolated and differ considerably in gray level; these isolated nodes are the final result of the image segmentation. A code sketch of this merging procedure is given below.
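A minimal sketch of this merging procedure, continuing the earlier illustrations and assuming the reconstructed condition (1) and formula (2) above; the union-find style bookkeeping is a convenience chosen here, not something prescribed by the paper:

```python
def merge_components(parent, comp_gray, comp_size, rho=3):
    """Bottom-up merging: a child is merged into its parent's segment when the
    gray level difference satisfies condition (1); the merged gray level is the
    pixel-count weighted average of formula (2). Returns, for every component,
    the segment it ends up in, plus the segment gray levels and sizes."""
    n = len(comp_gray)
    gray = [float(g) for g in comp_gray]
    size = list(comp_size)
    seg = list(range(n))                 # union-find style segment pointer

    def find(i):
        while seg[i] != i:
            seg[i] = seg[seg[i]]         # path compression
            i = seg[i]
        return i

    # Children of higher gray level sit deeper in the tree, so process them first.
    for child in sorted(range(n), key=lambda i: -comp_gray[i]):
        if parent[child] < 0:
            continue
        c, p = find(child), find(parent[child])
        if c != p and gray[c] - gray[p] < rho:          # condition (1)
            gray[p] = (size[p] * gray[p] + size[c] * gray[c]) / (size[p] + size[c])  # formula (2)
            size[p] += size[c]
            seg[c] = p
        # otherwise the link is cut and the child stays a separate region
    return [find(i) for i in range(n)], gray, size
```

Run on the tree of Fig. 4 with ρ = 3, this sketch should reproduce the behaviour of the worked example that follows.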

Take the component tree in Fig. 4 as an example and assume that ρ = 3. First, process the last node V10(8) in the sixth layer. V10(8) cannot be merged with its parent node V5(5) because their grayscale relationship does not satisfy condition (1), so V10(8) is disconnected and becomes an isolated node. Next, process node V4(4). Of all its child nodes, only V5(5) can be merged with V4(4): first disconnect V11(9) from V4(4), then merge V4(4) and V5(5) according to formula (2), giving a new node with gray level 4.5. After that, process the next parent node V7(6) in the same way. By analogy, the nodes are processed one by one up to the root node V1(1). The final result is shown in Fig. 5, and the corresponding division of the graph into 5 regions is shown in Fig. 6. Regions 1, 9 and 8 turn out to be noise because each contains only one pixel, while the other two regions are normal segmentation areas containing 5 and 16 pixels respectively. This segmentation result agrees well visually with the original image in Fig. 2, so it can be concluded that the component tree makes full use of local image characteristics and segments the image well.

3. Weld Pool Edge Extraction by Component Tree

The component tree model can realize image segmentation, and weld pool edge extraction is itself an image segmentation process that distinguishes boundary pixels from all others. Introducing the component tree model into GMAW weld pool edge extraction can therefore detect the weld pool precisely. The steps are as follows: 1) Process the weld pool image with a gradient operator to obtain the gradient map shown in Fig. 7.

2) Quantize the gradient map. 3) Build the components for the quantized gradient map and construct the component tree.

4) Merge the component tree. The weld pool edge is the region of interest, and it is the largest connected area apart from the background; in the merged result, the largest region is therefore the background and the second largest is the weld pool edge. Any other connected regions must be noise or false edges caused by the strong arc; they can be ignored because they contain far fewer pixels than the real weld pool edge.
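Putting the four steps together, an end-to-end sketch might look as follows. It reuses the functions from the Section 2 sketches, uses a Sobel magnitude as one possible gradient operator, and ranks the merged regions by pixel count so that only the second largest is kept as the weld pool edge; all names and defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def extract_pool_edge(img, levels=20, rho=3):
    """End-to-end sketch of the steps above: gradient map -> quantization ->
    component tree -> merging -> keep the second largest region as the edge."""
    # Step 1: gradient map (Sobel magnitude used here as one possible operator).
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    grad = (255.0 * grad / (grad.max() + 1e-9)).astype(np.uint8)

    # Steps 2-4: quantize, build components and tree, merge (earlier sketches).
    comp_id, comp_gray, comp_size = build_components(grad, levels)
    edges = build_adjacency(comp_id)
    parent = build_component_tree(comp_gray, edges)
    seg_of, _, _ = merge_components(parent, comp_gray, comp_size, rho)

    # Rank merged regions by pixel count: largest = background,
    # second largest = weld pool edge, everything else = noise / false edges.
    seg_map = np.asarray(seg_of)[comp_id]
    ids, counts = np.unique(seg_map, return_counts=True)
    order = ids[np.argsort(-counts)]
    if len(order) < 2:
        return np.zeros(img.shape, dtype=bool)
    return seg_map == order[1]
```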

4. Experiment Results and Analysis

To verify the algorithm proposed in this paper, seam images were captured during welding with the CCD camera of a passive optical GMAW pipeline backing welding system, and the component tree based weld pool edge extraction algorithm was tested on them. A further comparison was made with other edge extraction methods, such as the Sobel operator and the CV active contour model.

Different grayscale quantization settings in the component tree model give different results. The more gray levels used, the higher the precision of the edge extraction, but the algorithm then becomes too complex to meet the real-time requirement; with few gray levels the real-time requirement is met, but the edge cannot be extracted precisely. Numerous experiments showed that when the number of gray levels I is set to 20, both precision and real-time performance are satisfied. The merge threshold ρ is also very important and directly determines the extraction result. Fig. 8 shows the experimental result.
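As a usage illustration only (the image file name, the cv2 loader and the ρ value are assumptions; the paper reports I = 20 but does not state the experimental value of ρ), the sketch above could be invoked as:

```python
import cv2  # any grayscale image loader would do; cv2 is just an example

# Hypothetical call: 20 quantized gray levels, as settled on in the experiments;
# the merge threshold below is illustrative only.
frame = cv2.imread("weld_pool_frame.png", cv2.IMREAD_GRAYSCALE)
edge_mask = extract_pool_edge(frame, levels=20, rho=3)
```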

Fig. 8 shows that edge extraction based on the Sobel operator first calculates the gradient and then segments the image by binarization, so it depends heavily on the choice of threshold. When the background illumination is uneven, the extracted edge is hard to keep continuous. Moreover, the Sobel operator is quite sensitive to noise, so smoothing is needed to remove the noise, which in turn blurs the edge. The CV active contour model suppresses noise well but lacks an effective means of removing false edges. Edge extraction based on the component tree model does not depend on a single global threshold, because of the grayscale quantization; instead, regions are merged step by step according to local features. Since it makes full use of local image features, the background and the edge can be distinguished well even under uneven illumination, and the required region is selected according to the number of pixels it contains, which effectively removes noise and false edges. This is why the component tree model achieves a comparatively better segmentation. Table 1 lists the run times of the three methods; although the component tree model is the most time-consuming, it still meets the real-time requirement.
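For reference, the Sobel comparison described above amounts to a gradient magnitude followed by a single global threshold, which is the source of its threshold sensitivity; a sketch is given below (the threshold value is illustrative, not the setting used in the experiments):

```python
import numpy as np
from scipy import ndimage

def sobel_baseline(img, thresh=60):
    """Sobel comparison baseline: gradient magnitude followed by one global
    threshold, which makes the result sensitive to uneven illumination."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy) > thresh
```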

5. Conclusion

The original CCD images taken during welding contain a large amount of noise due to interference from dust, spatter, the arc, etc. Finding an accurate, effective and fast method for weld pool image processing therefore remains an active and difficult topic in welding quality control. According to the characteristics of GMAW, this paper proposes an edge extraction algorithm based on the component tree that restrains the noise and false edges in the weld pool image well, and a good segmentation is achieved by step-by-step, hierarchical merging based on local image attributes. Experiments show that the algorithm is well suited to images with heavy noise, uneven illumination and a clear target area, such as GMAW images. The algorithm can also be applied to general image segmentation; in particular, for images containing a small number of distinct target areas, the component tree algorithm achieves a very good segmentation. However, when the image is complex and has no clear target areas, the efficiency of the algorithm suffers because of the large number of components.

Acknowledgements

We would like to thank Qinbin Han and Wei Tang for constructive suggestions on the proposed framework and for providing the data. This work was supported by the National Natural Science Foundation of China (No. 61202135), the Natural Science Foundation of Jiangsu Province (No. BK2012472), and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (No. 11KJB520007).

References

[1]. D. B. Zhao, S. B. Chen, L. Wu, et al., Intelligent control for the double sided shape of the weld pool in pulsed GTAW with wire filler, Welding Journal, Vol. 80, Issue 11, 2001, pp. 253-260.

[2]. Xiangfeng Zhen, Qing Wang, Xiaoguang Liu, Extracting borderline of CO2 arc welding molten pool image using gray-scale morphology, Transactions of the China Welding Institution, Vol. 28, Issue 1, 2007, pp. 105-108.

[3]. Jiaxiang Xue, Lin Jia, Haibao Li, Edge detection of welding molten pool image based on M-band wavelet transform, China Mechanical Engineering, Vol. 15, Issue 13, 2004, pp. 1144-1146.

[4]. J. Liu, Z. Fan, S. Olsen, Using active contour models for feature extraction in camera-based seam tracking of arc welding, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'09), 2009, pp. 5948-5955.

[5]. Kristina Toutanova, Aria Haghighi, Christopher D. Manning, Joint learning improves semantic role labeling, in Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'05), 2005, pp. 589-596.

[6]. Ruifang Ge, Raymond J. Mooney, Discriminative reranking for semantic parsing, in Proceedings of the Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics (COLING/ACL'06), 2006, pp. 263-270.

[7]. M. Collins, Ranking algorithms for named-entity extraction: Boosting and the voted perceptron, in Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL '02), 2002, pp. 489-496.

[8]. Hongbo Deng, Michael R. Lyu, Irwin King, Effective latent space graph-based re-ranking model with global consistency, in Proceedings of the Second ACM International Conference on Web Search and Data Mining, Barcelona, Spain, 2009, pp. 212-221.

[9]. Michael Donoser, Linked edges as stable region boundaries, Computer Vision and Pattern Recognition, Vol. 18, Issue 6, 2010, pp. 1665-1672.

[10]. J. Hartigan, Statistical theory in clustering, Journal of Classification, No. 2, 1985, pp. 63-76.

[11]. L. Najman, M. Couprie, Building the component tree in quasi-linear time, IEEE Transactions on Image Processing, Vol. 15, Issue 11, 2006, pp. 167-181.

[12]. Mark Allen Weiss, Data structures and algorithm analysis in C++, China Machine Press, Beijing, 2011, pp. 313-314.

[13]. L. Wong. PIES, a protein interaction extraction system, in Proceedings of the Pacific Symposium on Biocomputing, Hawaii, USA, 2001, pp. 520-531.

[14]. Christian Blaschke, Alfonso Valencia, The framebased module of the SUISEKI information extraction system, IEEE Intelligent Systems, Vol. 17, Issue 2, 2002, pp. 14-20.

[15]. I. Donaldson, J. Martin, B. de Bruijn, C. Wolting, PreBIND and Textomy - mining the biomedical literature for protein-protein interactions using a support vector machine, BMC Bioinformatics, Vol. 4, Issue 11, 2003.

[16]. Jung-Hsien Chiang, Hsu-Chun Yu, Huai-Jen Hsu, GIS: a biomedical text-mining system for gene information discovery, Bioinformatics, Vol. 20, Issue 1, 2004, pp. 120-121.

[17]. Syed Toufeeq Ahmed, Deepthi Chidambaram, Hasan Davulcu, Chitta Baral, IntEx: a syntactic role driven protein-protein interaction extractor for biomedical text, in Proceedings of the ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases, 2005, pp. 54-61.

[18]. T. C. Rindflesch, L. Tanabe, J. N. Weinstein, L. Hunter, EDGAR: extraction of drugs, genes and relations from the biomedical literature, in Proceedings of the Pacific Symposium on Biocomputing, 2000, pp. 517-528.

[19]. David P. A. Corney, Bernard F. Buxton, William B. Langdon, David T. Jones, BioRAT: extracting biological information from full-length papers, Bioinformatics, Vol. 20, Issue 17, 2004, pp. 3206-3213.

[20]. A. Rzhetsky, I. Iossifov, T. Koike, M. Krauthammer, P. Kra, M. Morris, H. Yu, P. A. Duboué, W. Weng, W. J. Wilbur, V. Hatzivassiloglou, C. Friedman, GeneWays: a system for extracting, analyzing, visualizing, and integrating molecular pathway data, Journal of Biomedical Informatics, Vol. 37, Issue 1, February 2004, pp. 43-53.

[21]. Deyu Zhou, Yulan He, Discriminative training of the hidden vector state model for semantic parsing, IEEE Transactions on Knowledge and Data Engineering, Vol. 21, No. 1, 2009, pp. 66-77.

[22]. Y. He, S. Young, Semantic processing using the hidden vector state model, Computer Speech and Language, Vol. 19, Issue 1, 2005, pp. 85-106.

[23]. Dafeng Chen, Deyu Zhou, Yuliang Zhuang, Analyzing ChIP-seq data based on multiple knowledge sources for Histone modification, Journal of Software, Vol. 7, No. 6, 2012, pp. 1179-1187.

1 Dafeng CHEN, 2 Zuojin HU, 1 Yifei CHEN, 3 Yitong LI

1 School of Information Science, Nanjing Audit University, Nanjing 211815, China
2 School of Information Science, Nanjing Technical College of Special Education, Nanjing 210038, China
3 College of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300000, China

E-mail: [email protected], [email protected], [email protected], [email protected]

Received: 15 September 2013 / Accepted: 25 October 2013 / Published: 30 December 2013

(c) 2013 International Frequency Sensor Association
