Aerobatics Control of Flying Creatures
Jungdam Won
Jungnam Park
Jehee Lee
Figure: The imaginary dragon learned to perform aerobatic maneuvers. The dragon is physically simulated in real time and is interactively controllable.
Abstract  
Flying creatures in animated films often perform highly dynamic aerobatic maneuvers, which push their exercise capabilities to the extreme and demand skillful control. Designing physics-based controllers (a.k.a. control policies) for aerobatic maneuvers is very challenging because the dynamic state remains in unstable equilibrium most of the time during aerobatics. Recently, Deep Reinforcement Learning (DRL) has shown its potential for constructing physics-based controllers. In this paper, we present a new concept, Self-Regulated Learning (SRL), which is combined with DRL to address the aerobatics control problem. The key idea of SRL is to allow the agent to take control over its own learning using an additional self-regulation policy. This policy lets the agent regulate its goals according to the capability of the current control policy. The control and self-regulation policies are learned jointly as training progresses. Self-regulated learning can be viewed as the agent building its own curriculum and seeking a compromise on its goals. The effectiveness of our method is demonstrated with physically simulated creatures performing aerobatic skills of sharp turning, rapid winding, rolling, soaring, and diving.
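To make the two-policy structure concrete, the sketch below shows a control policy acting on a goal that a self-regulation policy has adjusted, with both policies improved jointly. Everything in it, the ToyFlightEnv point-mass environment, the LinearPolicy class, and the random-search update, is an illustrative stand-in chosen to keep the example self-contained and runnable; the paper itself trains neural-network policies with deep reinforcement learning on simulated flying creatures.

```python
# Illustrative sketch only: ToyFlightEnv, LinearPolicy, and the
# random-search update are stand-ins, not the paper's implementation.
import numpy as np


class LinearPolicy:
    """Tiny linear policy used as a stand-in for a neural network."""

    def __init__(self, in_dim, out_dim, rng):
        self.W = 0.01 * rng.standard_normal((out_dim, in_dim))

    def act(self, x):
        return self.W @ x

    def perturbed(self, sigma, rng):
        clone = LinearPolicy.__new__(LinearPolicy)
        clone.W = self.W + sigma * rng.standard_normal(self.W.shape)
        return clone


class ToyFlightEnv:
    """2D point-mass surrogate for the flying creature (illustrative only)."""

    def __init__(self, rng):
        self.rng = rng

    def reset(self):
        self.pos = np.zeros(2)
        self.vel = np.zeros(2)
        self.goal = self.rng.uniform(-1.0, 1.0, size=2)
        return self._obs(), self.goal.copy()

    def _obs(self):
        return np.concatenate([self.pos, self.vel])

    def step(self, action, regulated_goal):
        self.vel += 0.1 * np.clip(action, -1.0, 1.0)
        self.pos += 0.1 * self.vel
        # Reward tracking of the user goal, with a small penalty on how far
        # the regulator moved it (the "compromise" on the goal).
        reward = (-np.linalg.norm(self.pos - self.goal)
                  - 0.1 * np.linalg.norm(regulated_goal - self.goal))
        return self._obs(), self.goal.copy(), reward, False


def rollout(env, control, regulator, horizon=50):
    """One episode: the self-regulation policy adjusts the goal,
    then the control policy acts on the regulated goal."""
    obs, user_goal = env.reset()
    ret = 0.0
    for _ in range(horizon):
        regulated_goal = user_goal + regulator.act(np.concatenate([obs, user_goal]))
        action = control.act(np.concatenate([obs, regulated_goal]))
        obs, user_goal, reward, done = env.step(action, regulated_goal)
        ret += reward
        if done:
            break
    return ret


def train(iters=300, sigma=0.1, seed=0):
    """Jointly improve both policies by simple stochastic hill climbing
    (standing in for the deep RL training used in the paper)."""
    rng = np.random.default_rng(seed)
    env = ToyFlightEnv(rng)
    obs_dim, goal_dim, act_dim = 4, 2, 2
    control = LinearPolicy(obs_dim + goal_dim, act_dim, rng)
    regulator = LinearPolicy(obs_dim + goal_dim, goal_dim, rng)
    best = rollout(env, control, regulator)
    for _ in range(iters):
        cand_control = control.perturbed(sigma, rng)
        cand_regulator = regulator.perturbed(sigma, rng)
        ret = rollout(env, cand_control, cand_regulator)
        if ret > best:  # keep the joint perturbation if it did better
            control, regulator, best = cand_control, cand_regulator, ret
    return control, regulator, best


if __name__ == "__main__":
    _, _, best_return = train()
    print(f"best return after training: {best_return:.3f}")
```

The point of the sketch is the structural split: the regulator outputs a state-dependent adjustment to the user's goal, so as the control policy becomes more capable the learned adjustments can shrink toward the original goal, which is roughly the curriculum-building and goal-compromising behavior the abstract describes.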
Publication  
 
Jungdam Won, Jungnam Park, and Jehee Lee. 2018. Aerobatics Control of Flying Creatures via Self-Regulated Learning. ACM Trans. Graph. 37, 6 (SIGGRAPH Asia 2018). Download Paper (4.0 MB)
Presentation  
 
Demo video
Download Video (Main) (162 MB)
Download Video (Supplemental) (48 MB)
Thanks
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the SW STARLab support program (IITP-2017-0536-20170040) supervised by the IITP (Institute for Information & communications Technology Promotion).