Use Case: Fabric Stain Classification

Martin Isaksson · Published in Nerd For Tech · Aug 10, 2021


The global textile industry impacts nearly every human being on the planet and had an estimated market size of USD 1,000.3 billion in 2020¹. It encompasses the production, refinement, and sale of both synthetic and natural fibers used in thousands of industries.

The high demand for quality textiles has led to the adoption of automated, AI-based quality control in textile production in recent years. This is due in part to technical developments, the growing use of modeling and simulation, and the high probability of errors and defects in textile production.

With the growing use of machine learning (ML) in the Industrial IoT (IIoT) and Industry 4.0, we set out to build an image recognition model in PerceptiLabs that could analyze images of textiles to determine whether or not they contain stains. A model like this could be used in conjunction with real-time camera or video feeds in textile manufacturing plants to quickly catch defects and improve quality control.

Dataset

To train our model, we used images from the Fabric Stain Dataset on Kaggle. The original dataset comprises 466 images depicting normal and stained fabric textiles.

The original dataset is imbalanced, with 68 non-defect images and 398 images showing different types of stain defects in polyester and cotton fabrics. To eliminate potential biases during training, we created a balanced dataset using data augmentation (applying random rotations along with vertical and horizontal flips) to increase the number of non-defect images from 68 to 408. Figure 1 shows some example images from this dataset:

Figure 1: Examples of images from the dataset.
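
For readers who want to reproduce the augmentation step outside of PerceptiLabs, a minimal Python sketch using Pillow might look like the following (the folder names, file extension, and exact transform parameters are assumptions, not the script used to build the dataset):

import random
from pathlib import Path

from PIL import Image

SRC = Path("defect_free")            # hypothetical folder with the 68 originals
DST = Path("defect_free_augmented")  # augmented copies are written here
DST.mkdir(exist_ok=True)
TARGET = 408                         # desired total number of non-defect images

originals = sorted(SRC.glob("*.jpg"))

for i in range(TARGET - len(originals)):
    src = random.choice(originals)
    img = Image.open(src)
    # Random rotation plus optional vertical and horizontal flips.
    img = img.rotate(random.uniform(0, 360))
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
    img.save(DST / f"{src.stem}_aug{i}.jpg")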

When loading the data via PerceptiLabs’ Data Wizard, we resized the images to 224x224 to improve computation times, keeping all three (RGB) channels to feed into the pre-trained model. To map the classifications to the images, we created a .csv file that associates each image file with the appropriate classification label (stain or defect_free) for loading the data into PerceptiLabs. Below is a partial example of how the .csv file looks:

Example of the .csv file to load data into PerceptiLabs that maps the image files to their classification labels.
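
For illustration, rows of that file might look like this (the header and file names are hypothetical; only the two labels, stain and defect_free, come from the dataset):

image_path,label
images/stain/stain_001.jpg,stain
images/defect_free/defect_free_001.jpg,defect_free
images/defect_free/defect_free_001_aug0.jpg,defect_free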

Model Summary

Our model was built with three Components:

Component 1: MobileNetV2, include_top=false, pretrained=imagenet

Component 2: Dense, Activation=ReLU, Neurons=128

Component 3: Dense, Activation=ReLU, Neurons=2

The model uses transfer learning via MobileNetV2 as shown in Figure 2:

Figure 2: Topology of the model in PerceptiLabs.
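
As a rough Keras equivalent of this topology (a sketch under assumptions: the pooling between the backbone and the Dense layers, and whether the backbone is frozen, are not specified in the article):

import tensorflow as tf

# Component 1: MobileNetV2 feature extractor with ImageNet weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
    pooling="avg",        # assumption: global average pooling before the Dense layers
)
base.trainable = False    # assumption: backbone frozen for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),   # Component 2
    # Component 3: the article configures ReLU here; this sketch uses
    # softmax, the conventional choice for two-class probabilities.
    tf.keras.layers.Dense(2, activation="softmax"),
])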

Training and Results

We trained the model in batches of 32 across 10 epochs, using the Adam optimizer, a learning rate of 0.001, and a cross-entropy loss function. With a training time of around 122 seconds, we achieved a training accuracy of 93.79% and a validation accuracy of 83.23%.
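
Continuing the Keras sketch above, the equivalent training configuration would be roughly as follows (train_ds and val_ds are placeholders for datasets built from the .csv file):

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",  # integer labels assumed
    metrics=["accuracy"],
)

# train_ds / val_ds stand in for tf.data.Dataset objects yielding
# (image, label) pairs already batched at 32; 10 epochs as in the article.
history = model.fit(train_ds, validation_data=val_ds, epochs=10)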

Figure 3 shows PerceptiLabs’ Statistics view during training:

Figure 3: PerceptiLabs’ Statistics View during training.

Figures 4 and 5 below show the loss and accuracy across the 10 epochs during training:

Figure 4: Loss during training.

In Figure 4 we can see that both training and validation loss rapidly decreased in the first epoch. Training loss remained fairly stable for the remainder of the epochs while validation loss steadily increased, indicating that we could have stopped the training earlier to reduce overfitting.
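
In Keras terms, stopping earlier could be automated with an early-stopping callback that halts training once validation loss stops improving (a sketch; the original run used a fixed 10 epochs):

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch validation loss for overfitting
    patience=2,                  # stop after 2 epochs without improvement
    restore_best_weights=True,   # roll back to the best weights seen
)
history = model.fit(train_ds, validation_data=val_ds, epochs=10,
                    callbacks=[early_stop])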

Figure 5: Accuracy during training.

In Figure 5, we can see that training accuracy remained relatively stable after the first two epochs, with a gradual decrease towards the end, while validation accuracy remained fairly stable throughout.

Vertical Applications

A model like this could be used for computer vision-based quality control in manufacturing. When paired with real-time images or video feeds of materials on production lines, this ML model provides a strong foundation for automating the identification of material defects. The model itself could also be used as the basis for transfer learning to create models for detecting defects in other types of materials.
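
As a hypothetical illustration of that real-time pairing, a frame-by-frame check with OpenCV might look like this (the camera index, decision threshold, class ordering, and simplified preprocessing are all assumptions):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)        # hypothetical camera on the production line
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert BGR to RGB and resize to the model's 224x224 input; scaling
    # to [0, 1] is a simplification of MobileNetV2's usual preprocessing.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    if probs[0] > 0.5:           # assumption: class index 0 == "stain"
        print("Possible stain detected")
cap.release()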

Summary

This use case is an example of how image recognition can be used in manufacturing. If you want to build a deep learning model similar to this, run PerceptiLabs and check out the repo we created for this use case on GitHub. Also be sure to check out our other material-defect blog: Use Case: Defect Detection in Metal Surfaces.

¹ https://www.grandviewresearch.com/industry-analysis/textile-market

Martin Isaksson is Co-Founder and CEO of PerceptiLabs, a startup focused on making machine learning easy.