Sometimes it seems like computer vision gets all the attention when it comes to input data for machine learning algorithms. Image sensors are the sensors that all other sensors want to be. And that's understandable: after all, visual information is very rich and has been put to good use in areas ranging from autonomous driving and medical diagnostics to assistive technologies. These successes can easily make us forget how important the other senses are in our everyday activities. Consider driving, for example. It's certainly important to see where you're going, but the feel of the road through the steering wheel also provides information about road conditions when it's raining or snowing. Similarly, getting dressed or folding laundry would be very difficult without the feel of the fabric in your hands.
On this last point, several methods have recently been proposed to help robots manipulate fabric in different ways. However, these approaches have not been especially effective for many common tasks, such as grasping a desired number of fabric layers from a stack. To address these shortcomings, a team of researchers from Carnegie Mellon University's Robotics Institute developed a system called ReSkin, which uses magnetometer-based sensors to give robots an accurate sense of touch, even when working with deformable materials like fabric. This is a deceptively difficult problem, due to the flexibility of fabric and the unpredictable ways it can wrinkle when handled.
The proposed pipeline (📷: S. Tirumala et al.)
The ReSkin artificial skin is a thin elastic polymer with embedded magnetic particles that enable three-axis magnetic field measurements. As this artificial skin moves or is depressed, the changes it produces in the surrounding magnetic field can be interpreted as a tactile signal. Since ReSkin sensors are very thin and don't rely on imaging, they can fit into very small, dark areas to gather information inaccessible by other means, such as the folds between layers of fabric.
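The basic idea, deformation of the skin shifting the magnetic field read by a magnetometer underneath, can be illustrated with a short sketch. This is not the researchers' code; the readings, baseline values, and threshold here are hypothetical, but the structure (subtract a no-contact baseline, treat the deviation magnitude as the tactile signal) follows the description above.

```python
import numpy as np

def tactile_signal(readings, baseline):
    """Deviation of three-axis magnetometer readings from a no-contact baseline.

    readings: (N, 3) array of magnetic field samples (x, y, z), arbitrary units.
    baseline: (3,) field captured while nothing is touching the skin.
    Returns the per-sample deviation magnitude; larger values correspond to
    stronger deformation of the elastomer above the magnetometer.
    """
    delta = np.asarray(readings, dtype=float) - np.asarray(baseline, dtype=float)
    return np.linalg.norm(delta, axis=1)

def in_contact(readings, baseline, threshold=5.0):
    """Naive contact detector: any sample whose deviation exceeds the threshold."""
    return bool(np.any(tactile_signal(readings, baseline) > threshold))
```

In practice a calibrated model would map field changes to contact forces, but even this thresholded magnitude is enough to tell "touching" from "not touching."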
This new touch sensor was installed on a finger of a mini-Delta gripper mounted on a Franka robotic arm with seven degrees of freedom. The team wanted to teach this robot to perform tasks such as folding laundry, textile manufacturing, and assisted dressing, so they started with an important first step: grabbing a specific number of pieces of fabric from a pile. Manually programming rules to accomplish this task would be difficult, and such rules couldn't adapt to situations that hadn't been planned for in advance, so the team decided to use a machine learning classifier to recognize how many layers of fabric the robot grabs.
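Conceptually, the closed loop this enables is simple: pinch, ask the classifier how many layers were caught, and adjust the grasp until the count matches the target. The sketch below is a hypothetical illustration of that loop, not the team's pipeline; `classify` and `adjust_pinch` stand in for the real perception and motion-primitive calls.

```python
def grasp_n_layers(target_layers, classify, adjust_pinch, max_attempts=10):
    """Regrasp loop: pinch, estimate the grasped layer count, nudge the pinch.

    classify():           returns the estimated number of grasped layers (0-3).
    adjust_pinch(error):  hypothetical motion primitive; a positive error moves
                          the pinch deeper into the stack, negative shallower.
    Returns the number of attempts used on success, or None on failure.
    """
    for attempt in range(max_attempts):
        grasped = classify()
        if grasped == target_layers:
            return attempt + 1
        adjust_pinch(target_layers - grasped)
    return None
```

The key design point is that the tactile classifier closes the loop: the robot doesn't need to model the cloth, only to sense the outcome of each pinch and correct.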
Testing the classifier in various scenarios (📷: S. Tirumala et al.)
After collecting a training dataset and training a k-means classification model, the system was ready to recognize four distinct scenarios: pinching with no fabric between the fingers, or pinching with one, two, or three layers of fabric between them. This information was then fed into a pipeline that modifies the robotic arm's grasping policy to adjust the positioning of the fingers. An average classification accuracy of 84% was observed across a series of trials, which shows that ReSkin combined with a machine learning classifier is an effective tool. It was noted, however, that the system became less accurate as the number of fabric layers increased.
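A k-means-style, centroid-based classifier for the four scenarios can be sketched in a few lines: compute one centroid per class from labeled tactile feature vectors, then assign a new reading to the class with the nearest centroid. The feature values below are made up for illustration; the researchers' actual features and training procedure are in their paper.

```python
import numpy as np

def fit_centroids(features, labels):
    """One centroid per class (e.g. 0-3 grasped layers) from labeled features."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_layers(centroids, sample):
    """Assign a tactile feature vector to the class with the nearest centroid."""
    sample = np.asarray(sample, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))
```

With only four well-separated classes, even this simple nearest-centroid rule can work; the reported drop in accuracy at higher layer counts is consistent with those classes' tactile signatures overlapping more.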
ReSkin has shown that it can be a better way forward for touch sensing, providing accurate results where other methods, especially optical sensing, aren't well suited to the task. The scope of this initial work was quite limited, but the team hopes it will inspire future research leading to robots capable of performing a wide variety of useful tasks through touch sensing.