History of Robotics Research and Development of Japan
2011
Integration, Intelligence, etc.
Co-player robot that plays music with humans in an ensemble

Takeshi Mizumoto, Kyoto University
Angelica Lim, Kyoto University
Takuma Otsuka, Kyoto University
Tatsuhiko Itohara, Kyoto University
Kazumasa Murata, Tokyo Institute of Technology
Kazuhiro Nakadai, Tokyo Institute of Technology
Hiroshi G. Okuno, Kyoto University
Gökhan Ince, Honda Research Institute Japan Co., Ltd.
João Lobato Oliveira, Faculty of Engineering, University of Porto
Kazuyoshi Yoshii, Kyoto University
Music-playing robots have been studied for decades, and many sophisticated robots have been developed, e.g., flute-playing and piano-playing robots. From an entertainment point of view, however, such performances are unidirectional: people can only listen. In this study, we develop robots that play and dance together with humans, adjusting their expression by listening to the human's performance, to realize participatory entertainment in which people take an active part. One application of co-playing robots is to help people separated by linguistic or cultural barriers communicate with each other by playing music together with the robot.

Three problems arise in developing co-playing robots: (1) designing a portable music-playing system so that various robots can play music; (2) developing methods to extract and predict information about the human co-player's performance in a noisy environment, for synchronous playing; and (3) adding rich musical expression.

For the first problem, we designed a general-purpose co-player robot [R1] and developed a theremin-playing robot. The theremin is a monophonic instrument with a continuous musical scale. Since its pitch and volume are controlled solely by arm movements in the air, without any physical contact, we could build a portable music-playing system that requires no special hardware [C1, R5].

For the second problem, we developed information extraction methods for specific human-robot ensemble situations: a beat tracking method robust to the robot's motor noise [C2, C4], a gesture recognition method for flute playing [C5, R2], a multimodal beat tracking method for guitar playing [C4, R6, R8], and a beat tracking method with ego-noise cancellation [C6]. We also developed methods to predict a human player's musical timing by building a mathematical ensemble model based on coupled oscillators.
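The intuition behind the coupled-oscillator ensemble model can be illustrated with a minimal Kuramoto-style simulation: the robot's beat phase is nudged toward the human's beat phase by a coupling term, so the two tempos lock despite a frequency mismatch. This is only an illustrative sketch; the function name, parameter values, and the one-way coupling are assumptions, not the model from the referenced papers.

```python
import math

def simulate_ensemble(steps=2000, dt=0.01, coupling=3.0,
                      robot_freq=2.0, human_freq=2.2):
    """Two phase oscillators (human player and robot) with one-way
    coupling: the robot corrects its phase toward the human's beat.
    All numbers are illustrative, not taken from the papers."""
    phi_r, phi_h = 0.0, 0.5  # initial beat phases in radians
    for _ in range(steps):
        # The human keeps a steady tempo; the robot adds a
        # sinusoidal correction proportional to the phase difference.
        dphi_h = 2 * math.pi * human_freq
        dphi_r = 2 * math.pi * robot_freq + coupling * math.sin(phi_h - phi_r)
        phi_h += dphi_h * dt
        phi_r += dphi_r * dt
    # Return the wrapped phase error after the synchronization transient.
    return math.atan2(math.sin(phi_h - phi_r), math.cos(phi_h - phi_r))
```

With a 0.2 Hz tempo mismatch and this coupling strength, the robot phase-locks to the human with a small constant lag rather than drifting freely, which is the behavior an ensemble model needs before onset times can be predicted.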
Using this model, we developed onset prediction methods for duo ensembles [R4] and for multi-person ensembles [R9]. For the third problem, we developed a cross-modal emotion model that uniformly represents emotional expression in both sound and motion [R7]. Based on these results, we developed various ensemble systems that play with a human drummer, guitarist, or flutist [C3, R2, R3] and dance to the music [C6]. ([C]: correspondence papers, [R]: reference papers)

IROS 2008 NTF Award for Entertainment Robots and Systems, Nomination Finalist (2008).
IROS 2010 NTF Award for Entertainment Robots and Systems (2010).
International Society for Applied Intelligence, Best Paper Award (2010).
IEEE Robotics and Automation Society Japan Chapter Young Award (2011).
Architecture of a co-player robot that plays the theremin
An ensemble with a flutist and a drummer
An ensemble with a guitarist and a rhythm instrument player
