Tackling Embedded Sensing Using Soft Robotics And Machine Learning
The human body maintains a sense of its own position and motion by combining information from multiple sensory modalities: the visual, vestibular, and somatosensory systems. Among these, the somatosensory system is the most challenging to emulate in a robotic system.
The somatosensory system uses internally embedded receptors to estimate body configuration and external contacts, and it forms an essential component of self-awareness and motor control. Since these receptors grow, multiply, and deteriorate along with the enclosing body, the perceptive system must constantly adapt. The fact that the signals are high-dimensional and complex to interpret makes developing such a perceptive system even more challenging.
Traditional robots were built with morphologies that simplify control and sensing: rigid links connected by simple joints. With rotary encoders and simple geometry, the state of a rigid robot can be predicted with high accuracy and precision. The use of flexible materials and unconstrained joints is the central theme of soft robotics, and the growing consensus is that such compliance is crucial for emulating the complex and efficient feats of nature. However, many of the classic tools roboticists use for control and sensing cannot be directly transferred to these soft systems.
Our work relies on a sensing architecture similar to our own perceptive system: a dense, redundant arrangement of unobtrusive sensing elements. The redundant layout helps reduce signal noise and tolerate unexpected damage. The embedded sensing elements have to be unobtrusive because we do not want them to affect the motion of the body, which is exactly why soft sensors, sensors made of highly compliant materials, are essential.
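The noise-reduction benefit of a redundant layout can be illustrated with a small sketch. The channel count, noise level, and "true" strain signal below are illustrative assumptions, not values from the work: averaging N independent noisy channels shrinks the noise roughly by a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" strain signal experienced by the body over 200 time steps.
true_signal = np.sin(np.linspace(0, 2 * np.pi, 200))

# 16 redundant sensors, each observing the same strain with independent noise.
readings = true_signal + rng.normal(0.0, 0.3, size=(16, 200))

# Error of a single channel vs. error of the fused (averaged) estimate.
single_err = np.abs(readings[0] - true_signal).mean()
fused_err = np.abs(readings.mean(axis=0) - true_signal).mean()

print(fused_err < single_err)  # averaging redundant channels cuts the noise
```

The same redundancy also provides graceful degradation: if a channel fails, the average over the remaining channels is still a usable estimate.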
Designing soft sensors is a vast field, with new technological solutions invented regularly. Given a soft sensor with the desired properties, the next challenge is to place it inside the target system and extract meaningful physical quantities from its signals. Current approaches rely on precise fabrication and placement of the soft sensors in a system whose dynamics are known beforehand, so that the sensor signals can be calibrated accordingly. However, this significantly slows down manufacturing and requires a priori knowledge about the body itself. The method becomes impractical once we scale to high-dimensional systems or complex geometries, as in the case of wearable robotics.
A simple solution to this problem, as found in nature, is to distribute sensors of varying dimensions freely and randomly along the body and learn to interpret their signals using alternate sensory information as the "teacher." Humans use the visual system; we similarly rely on an infrared-based visual tracking system. We use a long short-term memory (LSTM) network, a neural network architecture commonly used for time-series prediction, to learn the mapping from the sensor signals to the ground truth obtained from the visual system. Once learning is complete, the visual system can be removed, and the sensor signals fed through the LSTM network will predict the same physical parameters (in our case, the position of the body).
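The shape of this mapping can be sketched with a minimal, from-scratch LSTM forward pass. Everything here is an illustrative assumption rather than the trained network from the work: the channel counts are made up, the weights are random, and a single linear readout stands in for the full architecture. It only shows how a sequence of raw sensor readings is folded into a hidden state that is then decoded into a 3-D position.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (forward pass only)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, candidate, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.n_hidden
        i = sigmoid(z[:H])            # input gate
        f = sigmoid(z[H:2 * H])       # forget gate
        g = np.tanh(z[2 * H:3 * H])   # candidate cell state
        o = sigmoid(z[3 * H:])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

# Hypothetical setup: 12 resistive sensor channels, 16 hidden units,
# and a linear readout to a 3-D position (the visual system's ground truth).
n_sensors, n_hidden = 12, 16
cell = LSTMCell(n_sensors, n_hidden)
W_out = np.random.default_rng(1).normal(0.0, 0.1, (3, n_hidden))

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
sequence = np.random.default_rng(2).normal(size=(50, n_sensors))  # 50 time steps
for x in sequence:
    h, c = cell.step(x, h, c)

position = W_out @ h  # predicted 3-D position; shape (3,)
```

In training, `position` would be compared against the tracking-system measurement and the weights updated by backpropagation through time; after training, the camera can be unplugged and the readout alone supplies the state estimate.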
The soft sensor we use is an elastomeric material impregnated with carbon nanotube rods (cPDMS). The carbon nanotubes make the elastomer conductive, and its conductivity changes in response to stretching. Hence, by measuring the resistance of the material, information about the strain and/or stress in the elastomeric body can be estimated.
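This readout principle can be sketched with the simplest piezoresistive model, in which the relative resistance change is proportional to strain (ΔR/R₀ = GF · ε). This linear model and the gauge factor below are illustrative assumptions, not characterization data from the work; real cPDMS sensors are typically nonlinear and hysteretic, which is part of why a learned mapping is attractive.

```python
def strain_from_resistance(R, R0, gauge_factor):
    """Estimate strain from a stretchable resistive sensor,
    assuming a linear piezoresistive model: (R - R0) / R0 = GF * strain."""
    return (R - R0) / (R0 * gauge_factor)

# Hypothetical example: resistance rises from 100 ohm (rest) to 110 ohm
# under load, with an assumed gauge factor of 2.0.
strain = strain_from_resistance(R=110.0, R0=100.0, gauge_factor=2.0)
print(strain)  # 0.05, i.e. 5% strain
```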
With this approach, no prior knowledge is required about the enclosing system or about the sensors themselves. As in nature, the architecture is robust to damage and exploits redundancy for noise reduction. Moreover, the visual information can be swapped with other sensory modalities, such as force data, to estimate contact forces using the same sensor network.
These findings are described in the article entitled "Soft robot perception using embedded soft sensors and recurrent neural networks," recently published in the journal Science Robotics.