Portable AIFIBRES - Portable Artificial Intelligence for Textile and Fibres Recycling
Innovate UK
May 2024 - May 2026
Total Budget £1.68M, KU: £121,000
Kingston University, working in partnership with offcuts company KAPDAA, textile sorter Choose2Reuse CIC and the
Royal Opera House, a large charity, is developing Ai4Fibres - the world's first portable plug-and-play recycling system for
garments and fabrics. The system will scan, sort, and segregate 10 tonnes of garment waste per week, using
advanced AI to address critical inefficiencies in the current garment and fabric recycling process.
Key milestones include:
Deployment of hyperspectral imaging systems and development of AI models for textile material classification.
This crucial step forms the backbone of our sorting process, helping to differentiate textiles for recycling at an
advanced level.
We were honoured to welcome Michelle Donelan, Secretary of State for Science, Innovation, and Technology, to the
project facilities, where we discussed the future of sustainable textile recycling.
Successful deployment of our robotics lab dedicated to textile and cloth manipulation for button and zipper
removal - an essential step toward automating the sorting and recycling process.
Our first live pilot at the Royal Borough of Kingston upon Thames - a vibrant hub of green business initiatives.
The pilot showcased the full integration of all systems into a portable recycling solution, bringing sustainable textile
management directly to the community.
Overview of Data Collection
Setup / tools used
RGB camera: 4K
Multispectral Camera: Parrot SEQUOIA
Depth sensor: Azure Kinect
Circular white LED lights: for uniform lighting
Infrared lamp: to aid multispectral imaging and highlight material properties
Raspberry Pis: to automate light control
GUI: initiates the capture pipeline for each sample
Automated capture pipeline: built in Python to integrate the cameras, sensors and light automation
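The capture pipeline above can be sketched schematically. All function names and return values here are illustrative stand-ins, not the project's actual code: real hardware drivers for the 4K RGB camera, Parrot SEQUOIA, Azure Kinect and Raspberry Pi light control would replace the stubs.

```python
# Schematic sketch of the GUI-triggered capture pipeline. Every helper
# below is a hypothetical stand-in for the real hardware interface.

def set_lights(led_on: bool, ir_on: bool) -> None:
    """Stand-in for the Raspberry Pi calls switching the LED ring and IR lamp."""
    print(f"LED ring {'on' if led_on else 'off'}, IR lamp {'on' if ir_on else 'off'}")

def capture_rgb() -> dict:
    """Stand-in for a 4K RGB frame grab."""
    return {"sensor": "rgb", "resolution": (3840, 2160)}

def capture_multispectral() -> dict:
    """Stand-in for a Parrot SEQUOIA multispectral capture."""
    return {"sensor": "multispectral", "bands": 4}

def capture_depth() -> dict:
    """Stand-in for an Azure Kinect depth frame."""
    return {"sensor": "depth"}

def capture_sample(sample_id: str) -> dict:
    """One GUI-triggered pass: lights on, capture from each sensor, lights off."""
    set_lights(led_on=True, ir_on=True)
    record = {
        "sample_id": sample_id,
        "rgb": capture_rgb(),
        "multispectral": capture_multispectral(),
        "depth": capture_depth(),
    }
    set_lights(led_on=False, ir_on=False)
    return record

record = capture_sample("sample_001")
```

The key design point the sketch illustrates is sequencing: lighting is fixed before any sensor fires, so all three modalities see the sample under the same illumination.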
Dataset Statistics
Includes four classes, two pure and two blends
Includes different patterned fabrics, textures and colours
Captured under diverse lighting conditions
Includes samples with wrinkles
Overview of Classification Methods
Computer-vision powered scanning system
Classification is the determination of an object's or image's type.
Here the AI is trained to determine the type of fabric from an image.
The model then outputs a set of prediction scores, where each score is the model's confidence that the sample belongs to that class.
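The scoring step can be illustrated with the standard softmax function, which turns a model's raw outputs into per-class confidences. The class names and numbers below are illustrative only, not results from the project:

```python
# How raw classifier outputs (logits) become per-class confidence scores.
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Four classes, as in the dataset (names and logits are hypothetical).
classes = ["cotton", "polyester", "blend A", "blend B"]
logits = [2.1, 0.3, 1.0, -0.5]
scores = softmax(logits)
prediction = classes[scores.index(max(scores))]
```

The predicted class is simply the one with the highest confidence; the full score vector is still useful for flagging low-confidence samples for manual sorting.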
Transfer Learning using VGG19
Two-phase training:
Frozen base model with custom top layers
Fine-tuning with unfrozen layers
Structure:
VGG19 pre-trained backbone
Global Average Pooling
Dense Layer
Dropout
Dense Layer
Dropout
Dense Layer with softmax activation
Initial phase:
Learning rate: 1e-3
Frozen VGG19 layers
Fine-tuning phase:
Learning rate: 1e-5
Unfrozen last layers
Callbacks: Early Stopping, ReduceLROnPlateau, ModelCheckpoint
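The two-phase recipe above can be sketched with the Keras API. The layer widths, dropout rates, and number of unfrozen layers below are assumptions, since the report does not give the exact configuration; in practice the backbone would load `weights="imagenet"` (here `weights=None` keeps the sketch offline-runnable).

```python
# Two-phase VGG19 transfer-learning sketch (hyperparameters assumed).
import tensorflow as tf
from tensorflow.keras import layers, callbacks

NUM_CLASSES = 4  # two pure fabrics + two blends

base = tf.keras.applications.VGG19(
    weights=None, include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # Phase 1: frozen backbone

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # widths are assumptions
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

cbs = [
    callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    callbacks.ReduceLROnPlateau(factor=0.1, patience=3),
    callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
]

# Phase 1: train only the custom top layers at 1e-3.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=cbs)

# Phase 2: unfreeze the last layers and fine-tune at 1e-5.
base.trainable = True
for layer in base.layers[:-4]:  # how many layers stay frozen is assumed
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=cbs)
```

Recompiling between phases matters: the much lower fine-tuning rate (1e-5) prevents the pre-trained features from being destroyed once they are unfrozen.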
Overview of Segmentation Methods
Segmentation locates an object in an image and separates it from the rest of the image.
Here the AI is trained to detect the locations of buttons and zips, as they require separate recycling from the clothing fabric.
The images below show the original (bottom) and the segmentation (top).
CV Contaminants (Zips, Buttons, etc.) Identification System
Pipeline Development
Utilisation of SAM for Cloth Segmentation
SAM enables high-quality mask generation for all objects in an image.
Integrated with Grounding DINO for zero-shot detection, leveraging (image, text) input pairs for object bounding box generation.
Facilitates button and zipper localisation without task-specific training.
Output Capabilities
Bounding Boxes: Precise object localisation
Masks: Detailed segmentation of buttons and zippers
Contours: Accurate shape delineation for automated processing
Application
Outputs guide the laser cutting process for efficient button extraction from clothing materials.
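The output stages above (mask to bounding box to contour) can be illustrated on a toy binary mask. This is a self-contained sketch, not the project's code: the mask and helpers are synthetic, and a real pipeline would take SAM's masks and use a library routine such as OpenCV contour extraction.

```python
# Reducing a binary segmentation mask (e.g. a button mask) to a bounding
# box and an outline for the laser-cutting step. Toy data, not SAM output.
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x_min, y_min, x_max, y_max) of the mask's foreground."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def mask_outline(mask: np.ndarray) -> np.ndarray:
    """Foreground pixels with at least one background 4-neighbour -
    a simple stand-in for proper contour extraction."""
    padded = np.pad(mask, 1)
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    return mask & ~interior

# Synthetic 6x6 "button" mask: a 3x3 foreground block.
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True

bbox = mask_to_bbox(mask)   # (x_min, y_min, x_max, y_max)
outline = mask_outline(mask)
```

The bounding box gives a coarse cut region, while the outline pixels trace the shape itself, which is what the laser cutter ultimately needs.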
SAM Workflow
Image encoder extracts embeddings from input image.
Prompt encoder integrates text-based input.
Mask decoder combines image embeddings and text prompts to generate precise masks.
Outputs only specified masks, bypassing default behaviour of generating masks for all objects in the image.
No manual annotation needed - eliminates the need for task-specific segmentation training.