An Image Processing Pipeline in Sewing Technology

An image processing pipeline was designed for thread detection. At all times, knowledge about the desired thread pattern model is available.

A. Two Partial Luminance Images

The composed camera RGB image is converted to a single-channel luminance image

L(x, y) = 0.3 ∗ R(x, y) + 0.59 ∗ G(x, y) + 0.11 ∗ B(x, y).   (1)

The conversion is based on the assumption that the thread appears either brighter or darker than the tissue background. However, it is unknown in advance which appearance is given. Therefore, two partial images are generated. The positive image I+(x, y) only contains pixels brighter than the tissue mean, whereas the negative image I−(x, y) only contains pixels darker than the mean,

I+(x, y) = max(0, L(x, y) − m)   (2)
I−(x, y) = max(0, m − L(x, y))   (3)
m = mean(L(x, y)).   (4)

By separating into partial images, the pixels representing the thread are only visible in one of them and always appear as a bright structure. Next to the actual thread pixels, spurious pixels from noisy tissue structures will also appear. They resemble thread-like structure parts and are stochastically distributed.
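The conversion and separation can be written in a few lines. The following sketch assumes a floating-point RGB array with channel order R, G, B; the function name is illustrative.

```python
import numpy as np

def partial_luminance_images(rgb):
    """Split an RGB image into positive/negative partial luminance images.

    rgb: float array of shape (H, W, 3) with channels R, G, B.
    Returns (I_plus, I_minus) as defined in Eqs. (1)-(4).
    """
    # Eq. (1): single-channel luminance image
    L = 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

    # Eq. (4): mean luminance over the whole image
    m = L.mean()

    # Eqs. (2) and (3): keep only pixels brighter / darker than the mean
    I_plus = np.maximum(0.0, L - m)
    I_minus = np.maximum(0.0, m - L)
    return I_plus, I_minus
```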

B. Frangi Filtering

A Frangi filter [3] is applied to both partial images. The filter operation basically consists of a pixel-wise computation of the Hessian matrix combined with a Gaussian-shaped smoothing kernel. It is also known as a vesselness filter and was originally introduced in the context of medical image analysis to emphasize pixels that are embedded in vessel-like structures. However, an elongated and thin appearance is characteristic not only of vessels inside the human body but also of the thread considered within this work. It is therefore natural to adopt the established methodology for the given task. The result is two images, IFr+ and IFr−, with each pixel value representing the probability that the pixel is embedded in a thread-like structure.
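A readily available implementation of the filter exists in scikit-image; the sketch below applies it to both partial images. The sigma range is an assumption and should be chosen to match the expected thread width in pixels.

```python
from skimage.filters import frangi

def frangi_responses(I_plus, I_minus, sigmas=(1, 2, 3)):
    """Apply the Frangi vesselness filter to both partial images.

    The thread appears as a bright, elongated structure in exactly one of
    the partial images, so bright ridges are enhanced in both responses.
    """
    I_fr_plus = frangi(I_plus, sigmas=sigmas, black_ridges=False)
    I_fr_minus = frangi(I_minus, sigmas=sigmas, black_ridges=False)
    return I_fr_plus, I_fr_minus
```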

C. Selection of the Thread Image

Based on the supplied model prior, an approximate number of expected thread pixels can be estimated. This expectation can be turned into a thresholding operation that is performed on both filtered images. Since one of the filtered images contains both the thread and background noise, while the other image only contains background noise, a robust detection of the thread image is straightforward. The result is a single binary image, wFr(x, y), with a pixel value of 1 indicating a thread pixel.
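The exact decision rule is not spelled out above, so the following sketch uses one plausible heuristic: both responses are thresholded so that roughly the expected number of pixels survive, and the image with the stronger surviving response is assumed to contain the thread.

```python
import numpy as np

def select_thread_image(I_fr_plus, I_fr_minus, n_expected):
    """Pick the filtered image that most likely contains the thread.

    n_expected: approximate number of thread pixels from the model prior.
    This decision rule is an illustrative assumption, not the paper's
    exact criterion.
    """
    def binarize(I):
        # Keep the n_expected strongest filter responses.
        thresh = np.partition(I.ravel(), -n_expected)[-n_expected]
        mask = (I >= thresh).astype(np.uint8)
        return mask, I[I >= thresh].sum()

    w_plus, score_plus = binarize(I_fr_plus)
    w_minus, score_minus = binarize(I_fr_minus)
    # The noise-only image yields a weaker above-threshold response.
    return w_plus if score_plus >= score_minus else w_minus
```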

D. Classification of Pure Thread Pixels

The previous step provides a mask for pixels that are embedded in a thread-like, i.e. elongated and thin, structure. Yet, pixels that do not lie directly on the thread but nearby may be included. Therefore, the mask is refined using an expectation maximization (EM) algorithm [1]. The refinement is no longer performed on the luminance image, but on the RGB image. As initialization, the tissue RGB values from the background removal step are taken for the tissue mean and covariance values. The thread mean and covariance values are derived from all pixels masked by wFr. The iterative EM algorithm results in a single binary image, wges(x, y), with a pixel value of 1 denoting a pure thread pixel.
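A two-component Gaussian mixture fitted by EM captures this refinement. The sketch below uses scikit-learn's GaussianMixture with the initialization described above; the function and variable names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def refine_thread_mask(rgb, w_fr, tissue_mean, tissue_cov):
    """Refine the Frangi mask with a two-class EM (Gaussian mixture) on RGB.

    Component 0 is tissue (initialized from the background-removal
    statistics), component 1 is thread (initialized from the pixels
    masked by w_fr). Returns a binary image w_ges.
    """
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)

    thread_pixels = rgb[w_fr.astype(bool)].reshape(-1, 3).astype(float)
    thread_mean = thread_pixels.mean(axis=0)
    thread_cov = np.cov(thread_pixels, rowvar=False) + 1e-6 * np.eye(3)

    means_init = np.stack([tissue_mean, thread_mean])
    precisions_init = np.stack([
        np.linalg.inv(tissue_cov + 1e-6 * np.eye(3)),
        np.linalg.inv(thread_cov),
    ])

    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          means_init=means_init,
                          precisions_init=precisions_init)
    labels = gmm.fit_predict(pixels)

    # Component 1 was initialized with the thread statistics.
    return (labels == 1).reshape(h, w).astype(np.uint8)
```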

After identifying image pixels as being part of the thread structure, it is necessary to assemble them into an actual thread pattern. This is achieved by performing a registration of the model prior, describing the desired thread pattern, to the actually detected thread pattern. It is essential for this step to be robust against potential outliers and/or missed stitch positions. Since the observed thread pattern will not equal the model prior in general due to possible tissue distortions, an adaptation of the model is necessary. The adaptation process corresponds to finding a unique deformation vector for each model stitch point.

A. Thread Representation

Positions of the thread appearing in the image are extracted using a blob detection on the binary image wges(x, y). Every position found this way will in the following be called a thread representative. The representatives are preferably distributed in equidistant steps over the length of the thread. Furthermore, they can be ordered to form a sequence of points representing the entirety of the thread. The result is displayed in Figure 5c, with the circles visualizing the found representatives. However, it can be seen that outliers are possible (visualized in red). Additionally, individual thread positions might be missed. The model fitting and adaptation is not performed using each individual stitch, since such an approach would be highly susceptible to interferences like outliers. Instead, more abstract features are considered that can be robustly recognized within both the set of representatives and the pattern model:

• Characteristic points may be thread endings, points where the thread pattern abruptly changes its direction, or the intersection of lines.
• Polygons may be formed by the linear connection of neighboring representatives. The corresponding polygons within the pattern model are formed by the linear connection of neighboring model stitch positions.

Next to the improved robustness, the computational complexity is heavily reduced.
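The extraction and ordering of representatives can be sketched with standard connected-component tools. The greedy nearest-neighbour ordering below is an illustrative simplification; the equidistant sampling and the outlier handling described above are not reproduced.

```python
import numpy as np
from skimage.measure import label, regionprops

def thread_representatives(w_ges):
    """Extract thread representatives as blob centroids from w_ges and
    order them into a point sequence by a greedy nearest-neighbour chain.
    """
    blobs = regionprops(label(w_ges))
    points = np.array([b.centroid for b in blobs])  # (row, col) centroids
    if len(points) == 0:
        return points

    # Greedy ordering: start at an arbitrary representative, then always
    # jump to the nearest unvisited one.
    order = [0]
    remaining = set(range(1, len(points)))
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return points[order]
```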

B. Initial Model-Based Registration

The specimen placed inside the inspection system may be arbitrarily rotated and shifted. Therefore, the rough placement of the thread pattern relative to the prior model is estimated initially. The estimation utilizes the generalized Hough transformation (GHT) [2] [6], which has proven suitable for this task. The goal is to compute a global shift vector minimizing the distance between model and real-world thread features.
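In the spirit of the GHT, a global shift can be found by letting every (thread, model) feature pair vote for the shift that would align them and taking the most frequent binned vote. The sketch below omits rotation for brevity and is not the exact formulation used in the system.

```python
import numpy as np

def initial_shift(model_points, thread_points, bin_size=5.0):
    """Estimate a global shift between model and detected features by a
    voting scheme in the spirit of the generalized Hough transform.
    """
    votes = {}
    for t in thread_points:
        for m in model_points:
            shift = np.asarray(t, dtype=float) - np.asarray(m, dtype=float)
            # Quantize the candidate shift so similar votes accumulate.
            key = tuple(np.round(shift / bin_size).astype(int))
            votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)
    return np.array(best) * bin_size
```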

C. Iterative Registration

Based on the initial model-based registration, the model adaptation is performed in an iterative procedure. Each iteration consists of multiple steps. First, an assignment between features within the model and the representatives is established. The assignment needs to be considered separately for both feature types. The target of a characteristic model point is a characteristic thread representative having the same type and being at the minimum Euclidean distance. The search for corresponding target polygons is based on the polygon center and its direction.

Once the assignments are established, a transformation of the current model features is performed to approximate the thread pattern. Within the first iterations, the assignments are rather unreliable. Therefore, only a rigid registration having few transformation parameters but many assigned feature points is determined. Over the course of the iterations, the assignments become more reliable and the number of free transformation parameters is increased. In the end, the assignments become extremely reliable. Only then are individual model points allowed to be shifted towards individual thread points, resulting in a controlled deformation of the model to adapt to the real thread pattern.

Every transformation is estimated using an energy minimization approach, independent of the degrees of freedom. There exist two types of energies, internal and external. The internal energy is a measure of the deformation of the model: the more a model polygon vector deviates from its original vector in size and direction, the higher the cost. The contribution of the external energy depends on the type of feature. If an assignment between a characteristic point in the model and its counterpart on the thread is possible, the model point can be directly attracted to its target. The strength of attraction is independent of the direction and depends only on the Euclidean distance between both. In most cases, an exact assignment of model polygons to individual thread polygons is not useful, since their high number makes them easy to confuse. However, a direction of attraction can be determined. Therefore, the external energy for polygons and the points spanning them is computed in dependence on the direction of the model polygon normal at its center point and the projection of that normal onto a thread polygon vector.
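Closed-form expressions for the energies are not given above, so the sketch below uses simple quadratic penalties as an assumption: the internal term penalizes deviations of the model edge vectors from their original ones, and the external term attracts assigned characteristic points by squared Euclidean distance.

```python
import numpy as np

def internal_energy(points, original_points):
    """Penalize deviation of the model polygon edge vectors from their
    original length and direction (illustrative quadratic penalty)."""
    v = np.diff(points, axis=0)       # current edge vectors
    v0 = np.diff(original_points, axis=0)  # undeformed edge vectors
    return np.sum((v - v0) ** 2)

def external_energy_points(points, targets, assigned):
    """Attraction of characteristic model points to their assigned
    characteristic thread representatives (Euclidean distance only).

    `assigned` maps a model point index to a target index; unmatched
    points contribute nothing.
    """
    e = 0.0
    for i, j in assigned.items():
        e += np.sum((points[i] - targets[j]) ** 2)
    return e
```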

1) Rigid registration: A rigid registration consists of a rotation and a translation. Scaling is determined and applied as part of the deformable registration later on. The transformation is the result of an optimization over all model features and their associated targets, such that a non-linear equation system needs to be solved. An optimal solution is found using the conjugate gradient method [7].

2) Deformable registration: New positions for the model points are the result of an optimization procedure that minimizes the total energy depending on the new positions. A linear equation system in the coordinates of all model points needs to be solved. The relative weighting between external and internal energy determines how closely a model point is attracted to its target, with the model stiffness as the constraint. A large weight for the internal energy means a very stiff model that is robust but fails to capture each local detail of the thread deformation. A small weight means a very flexible but not very robust model. Hence, the improvement of the quality of the found targets with each iteration allows a dynamic adaptation of the weights, resulting in a model adapting better and better to the local deformations recorded in the thread image.
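A minimal version of the rigid step, under the assumption of one-to-one point assignments and a squared-distance cost, can be written with SciPy's conjugate gradient optimizer; parameter names and the cost function are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def rigid_registration(model_points, target_points):
    """Estimate a rotation angle and translation minimizing the summed
    squared distance between transformed model points and their assigned
    targets, using the conjugate gradient method.

    model_points, target_points: arrays of shape (N, 2) with established
    one-to-one assignments.
    """
    def transform(params, pts):
        theta, tx, ty = params
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        return pts @ R.T + np.array([tx, ty])

    def cost(params):
        return np.sum((transform(params, model_points) - target_points) ** 2)

    res = minimize(cost, x0=np.zeros(3), method="CG")
    return res.x  # (theta, tx, ty)
```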

Given arbitrarily complex thread patterns, the system automatically detects the real-world thread pattern and compares it with the pattern intended by the manufacturer. A deformation vector can be calculated for every individual stitch with an accuracy of up to 50 µm. The system may serve as a stand-alone solution for quality assurance. However, it also lays the foundation for the automated correction of stitch positions in textiles distorted by the elasticity of the material. Based on the computed deformation vector field, an automated correction of the CNC program to create the desired pattern is possible. An obvious limitation of the proposed system is the requirement for the thread to appear visually different from the background tissue. This is not given for a variety of products in which the thread color is intended to be identical to the background color. A typical example is a black car seat sewn with black thread. We are currently working on replacing the RGB camera with a multi-spectral image acquisition system in order to distinguish metameric, i.e. identically appearing, objects.

A camera-based inspection system was introduced to automate the quality inspection process within the area of CNC sewing. Created thread patterns are automatically detected and compared against the intended pattern model. A deformation vector is computed for every individual stitch position. For both the thread detection and the model-based registration and adaptation, dedicated image processing pipelines were proposed.