
History of Robotics Research and Development of Japan
1996 / Integration, Intelligence, etc.
Task-Oriented Generation of Visual Sensing Strategies

Jun Miura (Osaka University; currently Toyohashi University of Technology)
Katsushi Ikeuchi (The University of Tokyo)
In vision-guided robotic operations, vision is used to extract the information necessary for proper task execution. Since visual sensing is usually performed with limited resources, visual sensing strategies should be planned so that only the necessary information is obtained, efficiently. Determining an efficient visual sensing strategy requires knowledge of the task: without it, it is often difficult to select the appropriate visual features to observe, and resources may be wasted in tracking uninformative features. Generating an appropriate visual sensing strategy entails answering three questions: 1. What visual information is necessary for the current task? 2. Which visual features carry that information? 3. How can the necessary information be obtained with the sensors used? Answering them is facilitated by knowledge of the task, which describes what objects are involved in the operation and how they are assembled. This paper proposes a novel method of systematically generating visual sensing strategies based on knowledge of the task to be performed.

We deal with visual sensing strategy generation in assembly tasks, in which the environment is known; that is, the shape, the size, and the approximate location of every object are known to the system. In this setting, the role of visual sensing is to determine the position of the object currently being assembled with sufficient accuracy that it can, with a high degree of certainty, be mated with other objects.

In the proposed method, the information necessary for the current operation is first extracted by a task analysis based on face contact relations between objects. Fig. 1 illustrates three classes of possible face contact transitions; transition (c) requires visual information. Fig. 2 enumerates the possible face contact states and the admissible state transitions for three-dimensional translational motions with planar face contacts. Based on the classification in Fig. 1, the transitions drawn with bold lines in Fig. 2 require visual information. An extended analysis, which includes cylindrical objects and an additional rotational motion about the z axis, found that the six state transitions shown in Fig. 3 require visual information. The bold arrows in the figure indicate the degrees of freedom to be observed.

Next, the visual features to be observed are determined using knowledge of the sensor, which describes the relationship between a visual feature and the information obtainable from it (see Fig. 4).

Finally, the feasible visual sensing strategies are evaluated by their predicted success probability, and the best strategy is selected. The predicted success probability is calculated from object CAD models and uncertainty models of the visual information (see Fig. 5).

Our method has been implemented using a laser range finder as the sensor. Experimental results show the feasibility of the method and point out the importance of task-oriented evaluation of visual sensing strategies.

11th (1997) RSJ Best Paper Award
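The feature-selection step can be illustrated with a toy sketch (the table, feature names, and function below are hypothetical illustrations, not the paper's code): sensor knowledge maps each observable visual feature to the degrees of freedom it can constrain, and a sensing strategy is feasible if its features jointly cover the degrees of freedom the current transition requires.

```python
from itertools import combinations

# Hypothetical "sensor knowledge" table: feature -> DOFs it constrains.
SENSOR_KNOWLEDGE = {
    "top_edge":      {"x"},
    "side_face":     {"y", "theta_z"},
    "cylinder_axis": {"x", "y"},
}

def feasible_strategies(required_dofs, max_features=2):
    """Enumerate feature subsets (up to max_features) that cover required_dofs."""
    feats = list(SENSOR_KNOWLEDGE)
    out = []
    for k in range(1, max_features + 1):
        for subset in combinations(feats, k):
            covered = set().union(*(SENSOR_KNOWLEDGE[f] for f in subset))
            if required_dofs <= covered:
                out.append(subset)
    return out

# Transition requiring x and y: single cylinder_axis suffices, or pairs.
print(feasible_strategies({"x", "y"}))
```

In the paper the candidate strategies produced at this stage are then ranked by predicted success probability rather than, say, by feature count.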
Fig. 1: Three contact state transitions.
Fig. 2: State transition graph and transitions which require visual information.
Fig. 3: Six transition groups which require visual information.
Fig. 4: Selection of features to observe using task description and sensing primitives.
Fig. 5: Calculation of the predicted success probability (two-dimensional case).
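The idea behind the predicted success probability (cf. Fig. 5) can be sketched under a simplifying assumption not taken from the paper: if the visual measurement leaves a zero-mean Gaussian error with standard deviation sigma along one axis, and mating tolerates errors up to +/- tol along that axis, the success probability is the Gaussian mass inside the tolerance interval.

```python
import math

def success_probability(sigma, tol):
    """P(|error| <= tol) for error ~ N(0, sigma^2).

    Hypothetical 1D sketch of a predicted success probability; the paper
    derives it from object CAD models and sensor uncertainty models.
    """
    return math.erf(tol / (sigma * math.sqrt(2.0)))

# A strategy that yields a lower measurement uncertainty predicts
# a higher chance that the mating operation succeeds:
print(success_probability(sigma=1.0, tol=2.0))  # ~0.954
print(success_probability(sigma=0.5, tol=2.0))  # ~1.000
```

With independent errors along several axes, the per-axis probabilities multiply, which is one way such a criterion extends to the two-dimensional case of Fig. 5.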
 


Correspondence papers


Jun Miura and Katsushi Ikeuchi: Task-Oriented Generation of Visual Sensing Strategies

Journal of the Robotics Society of Japan, Vol. 14, No. 4, pp. 574-585, 1996. (in Japanese)

J. Miura and K. Ikeuchi: Task-Oriented Generation of Visual Sensing Strategies in Assembly Tasks

IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 2, pp. 126-137, 1998.

Related papers


[1] K. Ikeuchi and T. Suehiro: Task Model for Assembly Plan from Observation System, Journal of the Robotics Society of Japan, Vol. 11, No. 2, pp. 281-290, 1993. (in Japanese)

[2] J. Miura and K. Ikeuchi: Task Planning of Assembly of Flexible Objects and Vision-Based Verification, Robotica, Vol. 16, No. 3, pp. 297-307, 1998.

Related Article

There are no related articles.