OK. It's a couple of minutes past the hour, so let's get started. It is my pleasure to introduce our first robotics seminar speaker of the semester, Professor Cecilia Laschi. Cecilia is the Provost Chair Professor of Robotics at the National University of Singapore, where she leads the Soft Robotics Lab.
Cecilia holds many leadership positions in the robotics community. She is editor-in-chief of Bioinspiration & Biomimetics and specialty chief editor of the Soft Robotics section of Frontiers in Robotics and AI. She is on the editorial boards of Science Robotics, RA-L, IJRR, and many others.
Cecilia is an IEEE Fellow and a member of other societies, such as AAAS, I-RIM, and GNB. She has served twice as an administrative committee member of the IEEE Robotics and Automation Society, where she founded the RoboSoft conference, chairing it in 2018, and she now serves on its steering committee. She was also program chair of IROS 2018 and 2024, as well as co-chair of the Gordon Research Conference on Robotics in 2024. Cecilia is best known for her foundational research in soft robotics, an area that she pioneered and helped develop at the international level.
She investigates the fundamental challenges of creating robots with soft materials, with a bioinspired approach in which her team studies octopus manipulation and locomotion. She also explores applications of soft robots in marine and biomedical settings. She has also worked in humanoid and neurorobotics, applying new brain models to humanoid robots. Today, the title of Cecilia's talk is "Robotics Goes Soft: Methods and Technologies for New Robotics Scenarios."
With that, we welcome you, Cecilia. The floor is yours. Thank you. Thank you very much for your introduction. Thank you for inviting me.
I'm a bit nervous to speak here. I see many familiar faces. And really, thank you. It's my honor to be here and to tell you a little bit about soft robotics.
As you see, this is my title. I am from the National University of Singapore. I'm a professor there. I'm also leading the Advanced Robotics Centre, but I'm on leave from the BioRobotics Institute of Scuola Sant'Anna in Italy, where I spent most of my career, actually. So again, I'll speak about soft robotics, but I was not born a soft roboticist, because before that, in my previous life, my research was in neurorobotics, in the early stages of my career. So this is the humanoid robot that I used to work with when I was a PhD student and a young assistant professor.
So it looks a bit out of date today, but I'm very fond of it. And you see, I was focused on sensorimotor coordination for manipulation. And then we built a platform, a biped, a humanoid, in collaboration with Waseda University.
So I moved a little bit toward locomotion, but the focus was still on the way the robot could learn and the way we could implement the brain models on those robots. So they were, basically, platforms for research. They were not aimed at any application.
But despite that, I even managed to found a company, which was not really in my plan with this kind of fundamental research, but it happened at some point. And as you see, this implementation of brain models is very much what we call the bioinspired approach to robotics. But it's not only that; it's actually a double relation between biology and robotics. Of course, the first meaning is the easier one. We are more familiar with that: from the biological system, we observe or derive some models, and then we build our robots.
This is what we call bioinspiration and biomimetics. There is a journal. It was mentioned.
I'm the editor-in-chief. It's not only for robotics, of course. It applies to many other disciplines, from materials to many other fields of engineering.
But when you do that, if you have experience in your own research, you also gain more insight into the original model. So it's a double relation, and sometimes we refer to this as biorobotics when it is about robotics. But I like this definition from the first cyberneticists, actually, in the early 20th century: a sort of unified approach to the study of biological systems and machines, in our case, the robots. And there is a very nice editorial in Science Robotics explaining this in one of the first issues of the journal, explaining the difference between the science of biorobotics, when we discover new principles and explain them, and the engineering of biorobotics, when we invent new technologies, new robots. So going to the first arrow, which is the most familiar to us-- sorry. This is not supposed to-- OK.
This is my personal view of the way we should proceed, which starts from the observation of the biological system. This is where we should get insight into how it works in principle. As roboticists, we can build something, but don't call them "robots." I call them "mockups," something that helps us verify the principle that we observe.
And it's very much trial and error. But if and when we can describe the principles in mathematical terms and abstract them, it can become engineering and robotics. So this step is very, very important.
We have to build some models. And of course, we also like to build prototypes here. Again, prototypes, not the final robot, but something that helps us verify the model at this point. At this point, with the model, with the equations, we can design the robot. And the final robot, for some application, is our bioinspired robot, even if its shape or appearance is very different from the initial biological model. But it will be bioinspired, because it contains the principles that we observed and described mathematically.
So what are the principles that we can take from nature? Of course, there are many, but I'm trying to distill a main message here. Let me see if I can-- so what we observe, when we observe biological systems, is that they are very complex. We think that our robots are complex, but any single biological system is way more complex than our robots in terms of the sensors and actuators, the receptors and muscles to control. They also live, survive, and thrive in natural environments, which are much more complex than the environments where we put our robots to work.
But their behavior is very efficient, smooth, fast. So this is what I think we should learn from nature. In robotics, we should learn how to build robots as complex as needed. Maybe they are too simple for real environments.
We should be able to let them work in real, complex environments, and we should understand what the simplifying principles are that nature has put in place to make this possible. So: complex systems, complex environments, but simple behavior. The way they work is, in the end, somehow simple and fast and efficient. So in my previous life, when I was studying neuroscience, working with neuroscientists, I learned this word, "simplexity," which is different from "simplification." Simplification is what we do, traditionally, in robotics: we try to simplify the environment and put robots in structured environments.
And we also simplify the system, because we have a limited number of actuators and sensors in our robots. Simplexity is something different. We don't want to simplify the system or the environment. We want to use simplifying principles to make it work in a fast and efficient way. And there are many of them in our nervous system, in our brains, and not only in human brains. I won't go through them now, but I pick just one that I find particularly interesting, which is the capability of brains for prediction and anticipation. So we are used to sensory feedback on our actions in our robots.
But actually, this is not always the case in brains; there is a lot that is based on prediction, anticipation, and open loop. If we want to close the loop with sensory information, sometimes it's too slow to really explain sensorimotor coordination. So in robotics, we generally have sensors on one side, actuators on the other side. Sensors receive something from the environment, it is processed by some form of brain or controller, and an action is generated. This drawing is from a book by Rodney Brooks from MIT.
Of course, I know him very well. And he actually tried to disruptively change this classical architecture by saying, oh, we don't necessarily need cognition, because if we have sensors, the world can be explored by the robot, can be learned by the robot, and we don't need a model of the world inside the robot. We can just connect perception and action directly, which is a big change. It was really a revolution. And it is a big simplifying principle, because we simplify this perception-action control loop a lot. But here, it's clear that perception comes first, and then there is some processing, and then action.
So it's a pretty long and slow loop. In this other case, it's much faster, but still perception comes first. So you still have to wait for the sensory input, which is, apparently, not what happens in our brains.
In our brains, we can actually predict the sensory input before we receive it at our receptors. So if I take the same picture, what I have to do is add something. I'm going to make this system more complex, because I have to add something that generates an expected perception. And it is generated, you see, from the same signal for the movement, for the action. So when we generate a motor command, at the same time, we generate the sensory input that we will receive after the motor command is executed. This is possible because we have internal models, because we have a lot of experience of our world, of our daily actions.
Of course, we have sensors. We don't switch the sensors off, but we take the raw sensory data and compare it with the expected perception. Comparisons are always very fast in brains, and they can be fast in robots too. And if the perception matches, which is probably the case most of the time, because we have experience of what we do, there is nothing more we need to do, and the action can just go on.
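To make this loop concrete, here is a minimal sketch of the predict-then-compare idea in code. Everything here is illustrative: the forward model, the threshold, and the function names are assumptions, not the implementation used in the work.

```python
import numpy as np

MISMATCH_THRESHOLD = 0.05  # tolerance on prediction error (arbitrary)

def forward_model(motor_command, state):
    """Internal model: predicts the sensory input that should follow
    the motor command. A learned or analytical model would go here;
    this toy dynamics is purely illustrative."""
    return state + motor_command

def control_step(motor_command, state, read_sensors):
    expected = forward_model(motor_command, state)  # expected perception
    actual = read_sensors()                         # raw sensory data
    # The comparison is cheap and always runs.
    if np.linalg.norm(actual - expected) < MISMATCH_THRESHOLD:
        # Perception matches expectation: keep acting, effectively
        # in open loop, with no further sensory processing.
        return "continue", expected
    # Mismatch: something unexpected (an obstacle, a contact);
    # only now fall back to full sensory processing / replanning.
    return "replan", actual
```

The point of the design is that the expensive branch only runs on a mismatch; in the common, well-predicted case, the robot acts essentially in open loop.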
If there is a mismatch, then I have to do some processing, and I go back to some of the previous cases, but only when there is a mismatch. And we implemented this long ago in the tactile space. This is a model from Roland Johansson, a neuroscientist who explained what we do when we see an object and try to grasp it; you see that at the same time as we generate the motor action, we generate the tactile image. And this is what we implemented in the old humanoid platform that I showed. And we also did it in the visual space.
So when we have to process images, that's a bit more computationally expensive, so it may make even more sense. What you see are two images from the robot. But the one in red is not real.
It's synthetic. It's generated by the robot itself. It's what the robot expects to receive on its cameras. And the black one is the comparison: a perfect match would be a fully black image. Here, some pixels are colored.
It means that there is something unexpected, and it's the hand waving in front. So we have an advantage when everything is expected, because the behavior is much faster. But we can also detect when there is something unexpected; for example, we can detect obstacles.
So when I did this research, which was probably almost 20 years ago, it was not very successful, because in the end, you don't see a real advantage: you just save some computing time, some computation. But if you have enough computing power, you don't care too much. And this is what actually happened in robotics, especially with vision. Twenty years ago, it was cumbersome to process an image, but today, it is not. So it didn't have much success. But recently, I saw something that actually brought me back to those times, because this is not my work.
Unfortunately. It's Davide Scaramuzza's, in Switzerland. He actually uses the same principle. He is doing a lot of prediction, and he found a task where the efficiency of this is well demonstrated. You probably know the work: it's autonomous drone racing, completely autonomous drones that he demonstrated. They won against human pilots, and actually, the reason why they could find the gates so fast and race so fast is that they could predict and anticipate. So it's a better task to demonstrate this principle.
Now that I'm in soft robotics, I also found a nice task where I can use prediction in a very helpful way. This is a soft robot. The details are not very interesting, but it's actuated by rods. These rods can either push or pull and can bend the arm in both directions. There are three of them, so we can bend in any direction.
But we also have some sensors along the arm, on the surface, so we can detect deformations and even external contacts. But this is a fundamental problem in soft robotics: how can you distinguish whether the arm is deformed because I'm moving it, or because it is deformed by an external force?
I consider it a fundamental problem, and actually, the way I solved it is by using prediction. With some internal models, I can predict the response, the output from the sensors. And I can understand whether it is what I was expecting because I'm actuating and moving the arm, or whether it's something else, an external force deforming it. We just gave a couple of demonstrations of it, so we are now using this external tactile perception to explore and then solve this kind of problem.
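A minimal sketch of how such a discrimination could look, assuming a simple (here linear, purely hypothetical) internal model from actuation to sensor readings; the real system would use a learned or mechanics-based model:

```python
import numpy as np

def predicted_strain(rod_displacements, W, b):
    """Internal model mapping actuation (rod push/pull) to the
    sensor readings expected from self-motion alone. A linear map
    is assumed here purely for illustration; the real model could
    be learned or derived from the rod mechanics."""
    return W @ rod_displacements + b

def classify_deformation(rod_displacements, sensor_readings, W, b, tol=0.1):
    # Residual: the part of the reading not explained by actuation.
    residual = sensor_readings - predicted_strain(rod_displacements, W, b)
    if np.linalg.norm(residual) <= tol:
        return "self-motion"       # deformation caused by the rods
    return "external contact"      # unexplained part: external force
```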
So going back to my experience in neurorobotics: as I said, I was implementing these brain models, and I was pretty happy with that, but I was not happy with the body, because at some point, I had to translate what was coming out of those models to talk to a body that was completely different. In the end, those humanoids were built in a pretty traditional robotics way. And those were also the years when embodied intelligence was becoming popular, saying that intelligence is not only in the brain; the body plays a very important role. So the brain is not all.
And so I had a lot of doubts, and I decided that there was a need for a new body, where this bioinspired brain and the body would be more in agreement. I'm sure you are familiar with embodied intelligence. It says that the body itself plays a role in intelligent behavior. Intelligent behavior means sensorimotor coordination, adaptation. And if we consider the role of the body, we can simplify the tasks of our robots very much.
And this resonates very much, for me, with the idea of simplexity, the idea of simplifying principles. So if I go back to the previous architectures, and if I want to show embodied intelligence there, what I would do is just add an arrow here. Embodied intelligence is the way the body is subject to external forces in the interaction with the environment, and these forces generate the behavior that we want. So I add this arrow, and in a sense, I remove all the rest. It's a super powerful simplifying principle.
Because I'm going to close a very short control loop, which is only mechanical. It's only the physical body and the environment. And the same here: I can remove everything else, so it's just the mechanical system, the motor part. But of course, if we want to use these external forces, these interaction forces, to simplify our behavior, we need a body that can receive those external forces, that can deform under those forces. So we need compliance.
And this is pretty obvious, but compliance is something that, generally, in robotics is avoided. Especially in classical industrial robots, you don't want compliance, because you want the robot to be able to do precise, accurate tasks. But instead, we are reversing this. We want compliance. We want soft robots. And if we look back at nature again, we see that, basically, it's a soft animal world.
Of course, completely soft animals either live underwater or underground, where gravity is limited, or they are very small. If you have to negotiate gravity, you need some form of skeleton. But still, the mass of the skeleton compared to the rest is a very small percentage, like 11% in human beings. So for me, soft robotics derives from all that. And I like this definition, which was given by the RoboSoft community. It came at the start of the community, in the first meetings, and you see that this definition is very much focused on deformation.
So this is what defines a robot as a soft robot: it undergoes large deformations, not necessarily because the material is soft, but because the geometrical structure is deformable. So in my own research, the model that I chose at that point was the octopus, together with Barbara, who is here. Of course, it's an excellent model for soft robotics, but also for embodied intelligence. Because it's a relatively-- I wouldn't say simple, but it's a mollusk, and they demonstrate a lot of intelligence.
They are smart. But focusing just on the movement: this is the typical movement they do for reaching an object, which is based on a stiffening wave from the proximal part to the distal part. As you can imagine, there are many muscles involved. It's like controlling an infinite number of degrees of freedom. So in robotics terms, it's pretty difficult. And yet it seems that the animal only uses three parameters from the brain, to set the height and the angle of the shoulder, let's say.
Then there are many neurons in the arm, of course, controlling this movement. It's a sequential activation. So it's also a simplification.
But there is also a big part which is done by the mechanical structure: the fact that net buoyancy is zero, and the fact that this bending, as you see, is also a way to reduce the water drag, or on the contrary, to be helped by the water drag. We also found a lot of very interesting principles in the way this muscular structure works. It's called a muscular hydrostat, because the volume is constant during contraction. So we can have elongation, when the transverse muscles are contracted, and shortening, when the longitudinal muscles are contracted.
We have bending, of course, in any direction, because of the longitudinal muscles. We have muscles for torsion. But what is, for me, very, very interesting for robotics is that co-contractions increase the stiffness. So the octopus is a soft animal, but it is not always soft. The octopus arm can stiffen a lot. And this is exactly what we are looking for in soft robotics.
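The constant-volume principle behind all of this can be stated in one line. Treating an arm segment as a cylinder, here is a sketch of the argument (an idealization, of course):

```latex
% Muscular hydrostat: constant volume of a cylindrical segment
% of radius r and length \ell.
V = \pi r^{2} \ell = \text{const}
\quad\Rightarrow\quad
\ell = \frac{V}{\pi r^{2}}.
% Contracting the transverse muscles reduces r, so \ell must grow:
% the arm elongates. Contracting the longitudinal muscles reduces
% \ell, so r must grow. Co-contracting both changes neither, but
% raises internal pressure and stiffens the arm in place.
```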
So I said we need to observe the principles, but also describe them mathematically. So how far are we from describing mathematically the principles of embodied intelligence, the principles we observe in the octopus? Again, for me, embodied intelligence, in the end, is this arrow, this additional arrow, in any architecture that you want to consider for your robot. It's this mechanical feedback from the environment. So are we ready to make this transition, and is it enough to model this arrow, to model embodied intelligence? And actually, we argue that yes, in soft robotics, we should be able to model not only the internal interactions.
This is what we do normally in robotics: we model the effect of the actuators on the movement of the robot. We also need to model the effect of external forces, in our case, the deformation of the robot. So we proposed a very general equation, which you see here, where you have a term describing the soft body and two more terms: one accounting for the internal interactions, so the effect of the actuators, and a second one accounting for the external interaction forces. There are techniques for both terms. Some of them are in the realm of continuum mechanics.
Others are based on lumped parameters or reduced-order models. The techniques are there; we just have to learn how to put them together. And I have a project which is exactly on this reaching movement of the octopus arm, where I want to demonstrate that it is energetically efficient, because of the shape of the movement and because the water drag is used to help the movement. So I want to demonstrate that by considering not just the deformation of the octopus arm, but coupling that with the fluid dynamics of the movement in water. And we also have another project, a collaboration with Italy, where we want to compare different kinds of muscular hydrostats and, also in this case, model their behavior.
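Going back to the general equation for a moment: in spirit, it looks something like this. The notation here is generic and mine, not exactly what was on the slide, but it captures the three-part structure.

```latex
% One plausible reduced-order form (generic notation, not the
% slide's): q are generalized coordinates of the soft body.
\underbrace{M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + D\,\dot{q} + K\,q}_{\text{soft-body dynamics}}
=
\underbrace{\tau_{\mathrm{int}}(u)}_{\text{actuators}}
+\;
\underbrace{J(q)^{\top} f_{\mathrm{ext}}}_{\text{environment}}
% M: inertia, C: Coriolis/centrifugal, D: damping, K: elasticity;
% u: actuator inputs; f_ext: external interaction forces mapped
% through a contact Jacobian J(q).
```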
And in this collaboration, we actually have-- this is unpublished, but it's a very, very interesting result. In collaboration with Politecnico di Milano, they developed a modeling framework for the cardiac muscle. And this modeling framework allows us to simulate the activation of the muscle with the motor neuron, so the electrical activation, the contraction, and the fluid dynamics of the blood flow.
So in our case, we replace the cardiac muscles with the octopus muscles, arranged in the same way, longitudinally and transversally. And so we can play with the patterns of activation. We can simulate different activation patterns, and this is the best one that we found, which produces a reaching movement similar to the animal's. And we found that to obtain this similar movement, we need to activate the transverse muscles sequentially, as was hypothesized in biology. We also found a very nice match with another finding in biology, which is the invariant velocity profile of the bending point.
We also modeled, let's say, the internal interactions. We used the Cosserat approach to model not only the longitudinal muscles but also the transverse muscles. This method was used already by my former group, long ago, but now we could finally add the transverse muscles. And again, we demonstrated their important role in obtaining this stiffening wave. And we used a similar approach for the arm that I presented before, so that we have a very nice forward model: from the rod lengths, we can actually generate the final shape and also the sensor readings.
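To give a flavor of what "from rod lengths to final shape" can mean, here is a minimal forward model under the piecewise-constant-curvature approximation, a standard textbook simplification rather than the Cosserat model we actually used; the rod numbering and sign conventions below are one common choice.

```python
import numpy as np

def pcc_forward(l1, l2, l3, d):
    """Tip position of one soft segment driven by three rods placed
    120 degrees apart at radius d from the centerline, under the
    piecewise-constant-curvature (PCC) assumption. Standard
    constant-curvature formulas (cf. Webster & Jones, 2010)."""
    s = (l1 + l2 + l3) / 3.0                         # mean arc length
    g = max(l1**2 + l2**2 + l3**2 - l1*l2 - l2*l3 - l1*l3, 0.0)
    kappa = 2.0 * np.sqrt(g) / (d * (l1 + l2 + l3))  # curvature
    phi = np.arctan2(np.sqrt(3.0) * (l3 - l2),       # bending plane
                     l2 + l3 - 2.0 * l1)
    if kappa < 1e-9:                                 # straight arm
        return np.array([0.0, 0.0, s])
    r = 1.0 / kappa                                  # bending radius
    return np.array([np.cos(phi) * r * (1.0 - np.cos(kappa * s)),
                     np.sin(phi) * r * (1.0 - np.cos(kappa * s)),
                     r * np.sin(kappa * s)])

# Example: shortening rod 1 relative to rods 2 and 3 bends the
# segment toward rod 1.
print(pcc_forward(0.09, 0.10, 0.10, d=0.01))
```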
Of course, we are building, because this is our ultimate goal: building an octopus-like arm. For this one, I cannot show the internal structure, because it is being patented, but you see that it generates exactly the stiffening wave from the proximal part to the distal part. What you don't see in the video is that the proximal part, from the base to the bending point, is stiffened, and the rest, from the bending point to the tip, is completely passive, which is exactly what we observed in the animal. So again, I want to demonstrate the energetic efficiency of this movement in water. For the moment, we found that it is also very fast and very efficient. We are testing it on some underwater robots, and we are also going to test it in the deep sea, 2,000 meters deep.
For the moment, we went into a pressure chamber, down to the equivalent of 1,000 meters. Still playing with stiffness: this is a student project, actually, where a student tried to implement this swimming-like movement-- octopus-like swimming-- where we want anisotropic stiffness, because we want the arm to be stiff in the stroke, but very, very soft in the recovery.
And so he tested different ways to build this robot. Let me go a bit faster with the video. Let's see.
OK. No need to go faster. This is just the way it swims. Since the arms are all moved at the same time, it's basically one actuator.
Then we have a prototype that can steer, but with just two actuators, with an asymmetric activation of the arms, and that was featured in a technical journal. We're also testing another hypothesis that we found in biology. It seems that octopus arms, elephant trunks, and many other appendages, like many tails and even, maybe, the human arm-- but I will ask Armada about that-- follow a spiraling movement, which is a logarithmic spiral.
And so we did a little bit of mathematical modeling. We played with some of the basic parameters, and we also built a prototype, which is pretty simple. Let me go back to the prototype that you see in this picture. You see it is actuated by three tendons-- again, three to have omnidirectional movement, so only three tendons.
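For reference, the spiral itself is simple to write down; here is a small sketch for generating it (the parameters are arbitrary, just for illustration):

```python
import numpy as np

def log_spiral(a, b, theta):
    """Logarithmic spiral r = a * exp(b * theta) as (x, y) points.
    Its defining property is self-similarity: the angle between the
    curve and the radial direction is constant, which is why curling
    appendages that taper toward the tip trace this kind of shape."""
    r = a * np.exp(b * theta)
    return r * np.cos(theta), r * np.sin(theta)

theta = np.linspace(0.0, 4.0 * np.pi, 400)  # two full turns
x, y = log_spiral(1.0, -0.2, theta)         # b < 0: tightens inward
```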
So we don't have much in terms of actuation. But since the morphology is designed based on this logarithmic spiraling, we have very nice movements. And we actually demonstrated that we can reproduce some of the motion primitives of the elephant trunk. We have data from biology because there were experiments done on basic movements of the elephant trunk when grasping different kinds of objects. So we used those data to do this comparison.
And even if the robot is completely different in terms of actuation, in terms of working principle, the final shape of the movements is very, very similar, because of this logarithmic spiraling. So the last step of our, let's say, bioinspired robotics method is the application, the final robot. What kinds of applications are there for all those robots? Of course, the first idea is to go back to nature. Actually, we know that robots are very helpful underwater, but they basically operate in the water column. They don't go to the sea bottom, generally.
And the sea bottom is very, very interesting, because it's where, for example, many human activities happen. It's where pollution tends to accumulate. It's home to 98% of marine species. And from the octopus, of course, we learned a lot about how to operate on the sea bottom. It is a benthic animal. They live on the sea floor, and they have very nice strategies for locomotion, for walking underwater, for swimming.
Again, I'm not going into the details, but there are a lot of simplifying principles in those movements that we can reproduce with robots, with very few actuators and very simple control. So we especially focused on walking, underwater walking. And walking underwater with legs is not a good idea, generally. You have probably experienced that when you walk in water, you have a lot of water drag on your legs, because they are rigid, because we have a skeleton. But we found a very smart way of doing that in the octopus.
And we describe it with a model, this reduced model here, called the U-SLIP. And we found that it also describes the walking of crabs, and especially the running of crabs underwater. So the idea is very simple.
We reverse the angle of attack, and we push from the back. The octopus, basically, pushes from the back. And having completely soft arms, it can shorten and elongate them, reducing the water drag completely. The crab does something similar, and based on this model, we could actually design and build this robot that can walk underwater. The compliance here is in the joints, in the motors. The control is basically zero.
There is no leg coordination. There is nothing here in terms of control. But then, thanks to the mechanics, to this compliance, we also have self-stabilizing gaits. And the robot can keep station without using any energy, without using the motors, and it can negotiate obstacles, also without any computing. It's just mechanics.
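To give a flavor of the U-SLIP idea, here is a crude two-dimensional sketch: a spring-loaded body with reduced effective gravity (near-neutral buoyancy) and quadratic water drag, with the foot planted behind the body so the spring pushes it forward. All parameters and the exact formulation here are assumptions for illustration; the published model differs in its details.

```python
import numpy as np

# Assumed parameters, chosen only to make the sketch run.
G_EFF = 9.81 * 0.1   # effective gravity: weight minus buoyancy
C_D = 2.0            # lumped water-drag coefficient
M = 1.0              # body mass (kg)
K_LEG = 200.0        # leg/arm spring stiffness (N/m)
L0 = 0.15            # spring rest length (m)

def flight_step(pos, vel, dt):
    """Ballistic (pushed-off) phase: reduced gravity plus
    quadratic water drag, nothing else."""
    drag = -C_D * np.linalg.norm(vel) * vel
    acc = np.array([0.0, -G_EFF]) + drag / M
    return pos + vel * dt, vel + acc * dt

def stance_step(pos, vel, foot, dt):
    """Stance phase: the limb is a radial spring between body and
    foot. Planting the foot *behind* the body (reversed angle of
    attack) makes the spring push the body forward, like the
    octopus pushing from the back."""
    leg = pos - foot
    length = np.linalg.norm(leg)
    spring = K_LEG * (L0 - length) * (leg / length)
    drag = -C_D * np.linalg.norm(vel) * vel
    acc = (spring + drag) / M + np.array([0.0, -G_EFF])
    return pos + vel * dt, vel + acc * dt

# One stance push with the foot behind and below the body.
pos, vel = np.array([0.0, 0.12]), np.array([0.1, 0.0])
foot = np.array([-0.05, 0.0])
for _ in range(50):
    pos, vel = stance_step(pos, vel, foot, dt=0.002)
```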
So we can envisage a lot of marine applications for soft robots. They can walk on the sea bottom, and we can use materials that are biodegradable, materials that are self-healing. We can use all the adaptation capabilities of soft robots, and we can really come up with very complex scenarios for marine applications. In the biomedical field, there are also many applications for soft robots, from artificial organs that can be built with the same technologies we develop for soft robots, to assistive devices, prostheses, or soft exoskeletons, or even surgical devices, like endoscopes that can navigate safely inside the human body. Assistive robotics is one of those fields. Of course, if you have to assist a patient through physical interaction, soft robotics technologies are especially well suited for that, and this is where I'm focusing my research.
And now, the field is growing. Actually, we made this analysis with Ritu Raman. Recently, we published this paper.
So it's a big growth, not only in terms of publications, but also in terms of technology. There has been an evolution of the technologies, and we also tried to give an outlook on the future, where we think many of the concepts can be pushed further, and maybe we can reach physical digital twins of the human body by using soft robotics technologies. So I'm focusing a little bit on assistive robotics in one of my projects in Singapore, CARTIN, which is the Centre for Advanced Robotics.
So now we are no longer on the science side; we are on the application side. So you have to consider the users very much, especially in cases like this, where the users are, let's say, ordinary people. So we use methods from industrial design, user-centric design. The idea is that you have studies of the users and descriptions of the users and their needs at the very beginning, before designing the robot.
This is supposed to increase the final acceptability of the robots. In our case, the users are not only the elderly that we want to assist, but also their caregivers, so the clinical staff. And ultimately, we hope for a wider adoption of robots for elder care.
So we did this study in a hospital in Singapore, where there is a very interesting geriatric day hospital, where the elderly go during the day. So it's not a hospital ward with beds. They go after hospitalization to relearn some daily tasks, and this is perfect for us because we wanted to address daily tasks.
So we had a lot of observations of the way they do the tasks. We had meetings and discussions with the clinical staff, and they came up with a couple of typical users for us. One is an elderly person who is still independent, but of course needs some assistance with some tasks. And the other is an elderly person with a caregiver. Generally, the caregiver is a family member, maybe the wife or the husband, who are also old.
So the robot, in this case, is more like an aid for the caregiver than for the elderly person. And in terms of tasks, we were surprised, because we were thinking of feeding, maybe dressing. But it was clear that the very big priority is transfers, so transitions: from bed to sitting, from sitting to standing, from chair to wheelchair.
These kinds of transitions are where they have the greatest need for help from robots. And of course, these are also the most difficult tasks for robots, for soft robots. So in terms of technologies, our problem was to have something that is pretty stiff along the vertical axis, but very soft and movable in the plane, let's say the horizontal plane. We collaborated with the Division of Industrial Design of our university.
They came up with this nice rendering, and we have already made a couple of prototypes. Again, I cannot show the internal structure, because we are patenting it. I can just spoil that it is based on what we learned from the muscular hydrostat of the octopus, in terms of how to change and increase the stiffness of our arm when it has to carry a high payload, like the weight of a person. We tested other solutions, some of them based on origami structures and layer jamming. This is also unpublished, and you're probably familiar with the jamming transition: if you put layers of material inside a membrane and remove the air, the whole structure becomes very stiff, and we could actually measure the stiffness. Again, we are especially interested in the stiffness along the z-axis, which is where we want to carry the weight.
A similar structure, based on origami deformations, we are using in another project, in collaboration with MIT in Singapore, through SMART, the Singapore-MIT Alliance for Research and Technology.
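Why jamming gives such a large, switchable stiffness change can be seen with elementary beam theory. Here is a sketch of the standard idealized argument (an idealization, not our measured data):

```latex
% Bending stiffness of a stack of n layers, each of thickness h,
% width w, modulus E (Euler-Bernoulli beam idealization).
% Unjammed: layers slide freely, stiffnesses simply add:
(EI)_{\text{free}} = n \, E \, \frac{w h^{3}}{12}
% Jammed: vacuum + friction lock the stack into one solid beam
% of thickness nh:
(EI)_{\text{jam}} = E \, \frac{w (n h)^{3}}{12}
\quad\Rightarrow\quad
\frac{(EI)_{\text{jam}}}{(EI)_{\text{free}}} = n^{2}.
% Even a modest stack gives a large, reversible stiffness jump.
```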
I cannot talk too much about the use case that we are addressing here, because it's still confidential. And again, I'm not giving many details of the arm either, but I can show a couple of videos where you see just the performance of the arm, which is completely retractable. You see, it can fold completely, and it can also bend widely in all directions; it can bend, basically, even more than 180 degrees. So when we develop these kinds of arms for applications, assistive or otherwise, we want to control them.
Even if we develop soft robots for embodied intelligence, where we say we don't want to control everything, in the end, we do want to control them. And using the same approach that we use with traditional robots is a bit difficult, for me at least; we have to consider more transformations than we are used to. Generally, it's between task space and joint space. With soft robots, the concept of a joint is a bit blurred.
And so we have some actuators. Somewhere they produce a deformation. This deformation will bring the end effector somewhere else, but it's a bit more difficult to really specify those transformations, for sure.
There are some more transformations, in fact. Of course, we can model all of them. We saw before that there are techniques to model the internal deformations, the effect of the actuators. But in soft robotics, it also makes a lot of sense to use learning, to replace the model-based controller with some learning technique, where we give the task, the desired position for the end effector, and, for example, a neural network can generate the commands for the actuators. So we don't use any robot model.
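A minimal sketch of this model-free recipe, assuming we can command the arm and read back its tip position; the forward function below is a toy stand-in for the physical robot, and all names are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def robot_tip_position(u):
    """Toy stand-in for the physical soft arm: send actuator
    commands u, observe the tip position. Purely illustrative."""
    return np.array([np.sin(u[0]) + 0.3 * u[2],
                     np.cos(u[1]) - 0.3 * u[2],
                     0.5 * (u[0] + u[1])])

# 1. Motor babbling on the (here simulated) physical arm:
#    random commands in, observed tip positions out.
rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=(2000, 3))        # actuator commands
X = np.array([robot_tip_position(u) for u in U])  # observed positions

# 2. Fit the *inverse* mapping, desired position -> commands.
#    No robot model anywhere: only data from the arm itself.
inverse_model = MLPRegressor(hidden_layer_sizes=(64, 64),
                             max_iter=3000, random_state=0).fit(X, U)

# 3. Use it: ask for a target tip position, get actuator commands.
target = np.array([0.5, 0.5, 0.2])
u_cmd = inverse_model.predict(target.reshape(1, -1))[0]
```

One caveat worth keeping in mind: the inverse mapping can be ill-posed, since several actuator commands may reach the same tip position, so in practice the data collection and learning setup have to deal with that.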
This is, again, not because the models are impossible to have, but because they are generally complex and not accurate. Not because the math is not accurate, but because the soft robots themselves are not accurate: there is always a bit of mismatch between the model and the robot. We actually demonstrated this a few years ago, where we compared, let's say, a model-based approach with a very simple neural network on the same prototype. What we demonstrated, by simulating increasing inaccuracy of the model, and which is pretty expected, is that with the model-based approach, the error grows with the inaccuracy of the model.
The neural network is less accurate at first. You see the error here: initially, when the model is very good, the neural network has a higher error. But it stays the same.
It's always the same, irrespective of the inaccuracy of the model, because it doesn't consider a model at all; it learns on the physical robot arm. So with these learning approaches, we can also control some dynamic tasks, like this very nice throwing. And we can control the stiffness, when the robot can change its stiffness. In this robot, we have two different kinds of actuation, and they are antagonistic: there are pneumatic chambers that tend to elongate, and cables, tendons, that tend to shorten.
So they have an antagonistic effect, and we can tune the stiffness to some extent. And with some learning-based control, we can control not just the trajectory, but also the force that the end effector applies. We are testing some imitation learning in soft robot control, for example, with, let's say, demonstration-- let me open it-- by a person here. The experimenter is drawing some letters, like the letter B here, and then the robot can learn from this demonstration. Or we can use some tele-demonstration.
Or we can try to learn by imitation of some biological model, as you see: the elephant, with the data from the elephant trunk that I mentioned before, or even climbing plants. This is a project we had with Barbara. And we are also testing reinforcement learning. We had very nice results in reducing the sim-to-real gap and in adapting to damage.
And we have some still-unpublished results on the use of reinforcement learning to control not just the position of the end effector, but also the position of other points along the arm, as well as to handle external disturbances, like the external deformations you see here. The robot can still try to keep the end effector at the target position. So, let's say, the field of controlling soft robots is growing, with more publications.
And the use of learning in those control systems is also growing. Recently, we contributed to a special issue on the control of soft robots. But we also like-- Kevin, let me know if I can still-- I have another very short thing that I find super interesting, because we also like robots where we don't control anything at all. We don't have any electronics.
We don't have any computing, so these are completely mechanical robots. This is the work of one of my PhD students, and it is based on a bistable valve. So we have just two states. What you see is a small tube; it can be kinked here and here, or here and here.
So you can switch mechanically between these two states. And if you have a constant input of air pressure, and you have an actuator, then by switching between the two states, you either actuate or don't actuate your pneumatic actuator. A very nice demonstration is this gripper, where there is a mechanical sensor, if you want, a mechanical switch, here.
So when there is a contact, we switch to the other state. And with this constant air pressure input, you get a bending of the finger, and you get the grasping. We can make the system a bit more complex and have a sort of small pneumatic actuator that does this mechanical switching, and in this way, we can generate some rhythmic movements.
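In software terms, the valve is a one-bit mechanical memory. Here is a toy state-machine rendering of the logic, purely illustrative, since the whole point of the real device is that it computes this with no electronics at all:

```python
class BistableValve:
    """One-bit mechanical memory: a kinked tube routes a constant
    supply pressure either to the actuator or to exhaust. Snapping
    between the two kinked states is the only 'computation'."""
    def __init__(self):
        self.state = 0  # 0: actuator vented, 1: actuator pressurized

    def toggle(self):
        # In hardware this flip is done by a mechanical contact or
        # by a small pneumatic pilot actuator, not by code.
        self.state = 1 - self.state

    def actuator_pressurized(self, supply_on=True):
        return supply_on and self.state == 1

# If a pilot actuator re-toggles the valve each time it finishes
# its stroke, the circuit oscillates: a rhythmic on/off pattern from
# a constant pressure input, usable as a gait clock.
valve = BistableValve()
pattern = []
for _ in range(6):              # six half-cycles
    pattern.append(valve.actuator_pressurized())
    valve.toggle()              # pilot flips the valve
print(pattern)                  # [False, True, False, True, False, True]
```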
And with rhythmic movements, you can drive rhythmic patterns of locomotion: it can be with legs, it can be for swimming, it can be for flying. This is an example of a simple walking robot. And here, it's untethered.
So you see there is a small air bottle, which the robot carries with it. You just open the bottle, and that's it. You don't have to control anything. You don't do anything else. So for me, this is really interesting, because you can think, for example, of using completely biodegradable materials.
For this robot, there is no electronics. And you can create a robot that is completely biodegradable. And for me, it's one of the steps that we should take for greening robotics. It's time to green robotics.
So we cannot let robotics become something like this, like some other digital industries. For me, the very first step is reducing computation. I understand we have the computing power to do all the processing that we want. But we can take some of the elegant lessons from nature, be efficient, and reduce computation. It is the very first step to reduce the pollution that our robots can create. With reduced energy needs, we can try some energy scavenging, energy harvesting techniques, and with soft robotics, we can also explore biodegradable materials.
And all these can contribute to a vision for future bioinspired robots that we published a few years ago with Barbara. It's a simple concept. We should think of robots with a life cycle, like living beings, which means they are generated at some point. They can grow. They can learn.
They can adapt to their tasks. They should be autonomous in self-healing, in finding their energy. And they should have an end of life, where they integrate back into the environment without adding e-waste. The field of soft robotics is an excellent place to do that, because it's interdisciplinary. We can address all those different aspects.
It's a growing field. The number of publications-- this plot is not updated, it's one year old, but you see, the shape of the plot is impressive. And it's not just the number of publications; it's the quality, the journals where soft robotics publications appear. It's one of the top topics in Science Robotics and in the IEEE Robotics and Automation Letters. It seems that about 10% of all IEEE publications in robotics are about soft robotics. The community is very lively.
It started with RoboSoft, which was a European grant, but it very soon became a conference, RoboSoft, which already has 500 participants and is constantly growing. And in the Robotics and Automation Society, there is also a TC, a Technical Committee. Those of you who are interested in soft robotics, please join all these activities. So I finish with a small announcement: if you are involved in teaching soft robotics, stay tuned, because there is a new textbook coming out from MIT Press. It will be available in August.
And I just conclude by acknowledging all the funding agencies, but especially, thanking my group in Singapore, our students, PhD students, postdocs who did this wonderful work. You see how nice the group is. Singapore is super nice. We are hiring.
So those of you who are interested: we have big plans for robotics at NUS. We have the Advanced Robotics Centre. We have internal seed funding for robotics, a new master's in robotics, and a new bachelor's in robotics and machine intelligence. So there is a lot going on in robotics at NUS, so please come; there are very, very appealing conditions for young researchers, even those fresh from a PhD. And of course, thanks to you for listening so long, and thanks a lot. Thank you so much, Cecilia, for the very nice talk.
I think we probably have a little bit of time. Are there any questions from the audience? Well, on the last part, we feel big jealousy, because on the US side, the Trump administration is really killing research in universities. So maybe that's a big allure for young people to find jobs out there. But back to a more serious question. I really like your octopus work and, before that, the elephant trunk; it's wonderful work. And when you look at the octopus catching some object, you have to include the fluid dynamics.
And that's really fascinating, because in fluid mechanics, my colleague in the mechanical engineering department has been working on fluid versus solid surface interactions, where the solid surface moves. It's a very hard problem. And that's, actually, soft robotics: basically, the boundary is not so clear-- a chicken-and-egg problem. I'm just wondering, do we have the right tools to analyze and predict those kinds of motions? You did, to some extent, using some kind of machine learning or something.
But nonetheless, the first-principles part is still very complex and not solved yet. Do we have enough tools for doing that? That's a very good question. Since I'm not in the field, for me, it was very, very difficult. But then I talked to some colleagues in the field, and they said, OK, there are tools that probably your community doesn't know, is not familiar with.
But we can probably collaborate, and we can try to find out. So actually, they opened a world to me, because they model very complex systems, like the atmosphere for climate change studies. So you cannot say that the problems we have are so complex that they cannot be addressed. We can try to address them.
So now, I'm trying to bring this community a little bit closer to soft robotics and try to address the problem together. What we are doing with the octopus arm in water is classical FSI, fluid-structure interaction. We are doing some experiments with PIV techniques that I don't know, but these colleagues have them, and they also provide us the equipment to do the experiments. So I'm using tools that are there. They are not new tools, to my knowledge.
Of course, there is a bit of AI today that you can use for solving some of the equations, some of the models. But yeah, I'm happy that I attracted some people from that community, and I'm sure we can go a bit further together, if they like the problem of soft robotics and its interaction with fluids and solids for locomotion. That's also very, very interesting. Are there more questions from the audience? Cecilia, good to see you again.
Yes. Quick question. Did you find a stiffness ratio, either in the octopus arm or in the trunk, so that as you go down, you know how the stiffness changes? Is there a natural ratio? Sorry? Like your ellipsoids of stiffness? Yes.
Good question. I'm not quite sure. We are doing something that I didn't show, because I removed the slides at the last minute; it's still a work in progress.
We are trying to estimate the force at the tip. I had the same idea. Maybe we can find something similar, but it's still very much in progress, so I cannot answer now.
I don't know, but yeah, that's a very good point that we may be able to demonstrate at some point. I hope so, and I would be super happy to collaborate with you on that. Maybe I can follow up with some questions. So I think we are soft; most animals are soft by the definition of having large strains or large deformations.
Animals, and we ourselves, are very fast. We can run very fast. Some animals can fly. But most soft robots today are quite slow. I saw a couple of slides in your presentation where the oscillator allows the soft robotic actuator to move very, very quickly.
So what do you see as the main challenges or opportunities for building future soft robots that can be fast, and is there any opportunity for creating soft robots that can compete with animals or humans? Yeah, that's a big question. A super big question. Actually, with the octopus-like arm, when we do this reaching movement with the stiffening wave that propagates from the base, we were not interested in speed. The animal itself is not very fast in this movement, but in the end, our arm turned out to be very, very fast. In the video I showed, we had to reduce the speed, because it's really, really fast. Why is that? I don't know.
It's a combination of the mechanical structure and the actuation, which is also very simple, because it's only one actuator generating this stiffening wave, thanks to the mechanical structure. So maybe this can be an idea: to focus on the mechanics and reduce the burden of actuation, and especially the burden of control. Yeah, the other example is fast because it's completely unstable. It's an instability, so yeah.
Great. Are there more questions from the audience? All right. If there are no questions, let me first mention that we have a short reception on the third floor.
So after this, if you have time, please let's all head to the third floor together. Otherwise, let's thank our speaker again. Thank you, Cecilia.
Thank you so much. Thanks for listening.