Applying Motion Capture in Computer Animation Education

Authors: Xiaoting Wang, Chenglei Yang, Lu Wang

Journal: International Journal of Engineering and Manufacturing (IJEM)

Issue: 4, vol. 1, 2011

Open access

This paper introduces motion capture technology and its use in computer animation education. Motion capture is a powerful aid in a computer animation course and a supplement to traditional key-frame animation. We use professional cameras to record the motion of the actor and then manipulate the data in software to eliminate occlusion and confusion errors. Data that is still unsatisfactory is smoothed with a filter to remove distorted frames. We then import the captured data into MotionBuilder to adjust the motion and preview the animation in real time. Finally, in Maya we bind the motion data to the character model, let the character perform the captured motion, and add the scene model and music before exporting the complete animation. In our computer animation course, we used this method to produce animations of military boxing, basketball playing, and folk dancing.


Motion capture, computer animation, computer education, MotionBuilder, Maya

Short URL: https://sciup.org/15014146

IDR: 15014146


Published Online August 2011 in MECS


Some motion capture systems work in real time, while others are off-line. According to the underlying technology, motion capture can be divided into mechanical, acoustic, electromagnetic, and optical types. A typical motion capture system includes sensors, a signal capture device, a data transmission device, and a data processing device.

In this article, we use an optical motion capture system based on computer vision. In principle, a point in the 3D environment can be located by two different cameras, and the trajectory of this point can be obtained from image sequences shot continuously at high speed. The advantages of an optical motion capture system are that the actor can perform freely, without the restriction of wires or mechanical devices, and that its high sampling rate satisfies the needs of most high-speed motion. The disadvantages are that the post-processing workload is huge, that the illumination and reflections of the environment affect the capture result, and that nearby markers can be confused or occluded when capturing complex motions.
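The two-camera localization described above can be sketched as a linear (DLT) triangulation: each camera view contributes two linear constraints on the homogeneous 3D point. The following numpy sketch is illustrative only, with invented camera matrices and marker coordinates; it is not the Vicon system's actual reconstruction algorithm.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 : 3x4 projection matrices of the two calibrated cameras.
    x1, x2 : (u, v) image coordinates of the same marker in each view.
    Returns the 3D point as a length-3 array.
    """
    # Each view gives two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one shifted along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.3, -0.2, 4.0])          # ground-truth marker position
x1 = P1 @ np.append(point, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(point, 1.0)
x2 = x2[:2] / x2[2]

# Recovers the ground-truth marker position from the two projections.
print(triangulate(P1, P2, x1, x2))
```

With noisy detections from more than two cameras, the same least-squares system simply gains two rows per extra view, which is why adding cameras improves robustness to occlusion.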

The remainder of this paper is organized as follows: Section 2 introduces the history and applications of motion capture; our experimental environment is described in Section 3; Section 4 outlines the steps of motion capture, and the detailed capture and data processing workflow is explained in Section 5; finally, we give some samples.

  • 2.    Background

    Motion capture technology appeared in animation production in the 1970s. Disney tried tracing continuous photographs of actors to improve the animation; although the motion was very realistic, the animation lacked drama and cartoon appeal. Research grew in university labs in the 1980s, attracting more and more attention from researchers and industry, and gradually moved from trial studies into practice. With the rapid development of computer hardware and software and the growing demands of animation, motion capture has now entered a practical stage, and a number of manufacturers have introduced a variety of commercial motion capture equipment, such as Vicon [3], Polhemus, Sega Interactive, MAC, X-Ist, FilmBox, and MotionAnalysis. Its application area has moved beyond performance animation into virtual reality, games, ergonomic study, simulation training, and biomechanics research.

Motion capture is well suited to highly realistic animated films and games, where it makes an avatar perform like a real person. It can greatly improve the efficiency of animation production, reduce the cost, make the animation more realistic, and raise the overall quality of the animation. Motion capture also provides a new way of human-machine interaction [4]: gestures and expressions obtained from motion capture can serve as an input method to control the computer. The technology is likewise important in virtual reality [5]. To realize interaction between a human and a virtual environment, we must determine the location and orientation of the head, hands, or body to follow the performer's motion and send the information back to the display and control system. In robotic control [6], a robot in a dangerous environment can repeat the motion performed by an engineer in a safe environment, realizing remote control. In interactive games, motion can be captured and used to drive game characters, giving players a new experience of participation. Motion capture is also very useful in physical training, helping training move from a traditional, purely experience-based stage into a digital era: an athlete's motion can be recorded and processed in a computer for quantitative analysis and combined with human physiology and physics to improve training technique.

  • 3.    Environment

In our experimental environment, we use 8 VICON MX T40 cameras and 4 VICON MX T160 cameras (Fig.1) hanging from the ceiling over a 10×10 m² square. The effective capture area is 7×7 m².

Figure 1. VICON camera.

Figure 2. Marker’s grayscale image.

Figure 3. Character model in Maya.

Figure 4. Storyboard of motion capture.

Each marker can be identified accurately by the cameras because every marker has its own grayscale image (Fig.2) that distinguishes it from the other markers, just as different people have different faces.

  • 4.    Schedule of Motion Capture

  • 4.1.    Character model

Character models (Fig.3) are provided by the users and should suit the drama. In animation that pursues a realistic effect, the model should be made as similar as possible to the performer whose motion will be captured. The captured points then fit the model well, and the motion performed by the character model shows little distortion. In imaginative animation, however, the models are fanciful and do not match the performer, so the performer adjusts his or her motion to imitate the motion that the fanciful character would make.

  • 4.2.    Storyboard

Before motion capture, the users invite a director and actors, and the director provides the storyboard (Fig.4) of the drama. Storyboards are graphic organizers, such as a series of illustrations or images displayed in sequence, used to pre-visualize a motion picture, animation, motion graphic, or interactive media sequence. The director explains the content and the performance to the actors, and they practice until the requirements are met. Because of the special characteristics of motion capture, the actors sometimes cannot perform exactly as they would in a real film, and the director must guide them with his or her experience.

  • 4.3.    Motion capture and data manipulation

  • 1)    Capture the actors’ motion (Fig.5) under the direction of the director. The capture process is described in detail in the next section.

  • 2)    Manipulate the captured data and export it in the appropriate format. The captured data unavoidably contains errors. Some errors are introduced by the occlusion of points, and some by lighting effects that detect points but cannot identify them correctly. Part of the errors can be eliminated by manual labeling, and some can be smoothed out by frame interpolation.
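The frame interpolation and smoothing used in data cleanup can be illustrated with a small numpy sketch. Filling occlusion gaps by linear interpolation and filtering with a moving average are generic stand-ins chosen for the example; the actual capture software applies its own reconstruction and filtering tools.

```python
import numpy as np

def fill_gaps(track):
    """Fill occlusion gaps (NaN samples) in a 1-D marker coordinate track
    by linear interpolation between the surrounding valid frames."""
    track = np.asarray(track, dtype=float).copy()
    frames = np.arange(len(track))
    valid = ~np.isnan(track)
    track[~valid] = np.interp(frames[~valid], frames[valid], track[valid])
    return track

def smooth(track, window=5):
    """Smooth a track with a centered moving average to suppress jitter.
    Boundary frames are handled by repeating the edge values, so the
    frame count is preserved."""
    kernel = np.ones(window) / window
    padded = np.pad(track, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# A marker's x-coordinate with two occluded frames (NaN).
x = [0.0, 1.0, np.nan, np.nan, 4.0, 5.0, 6.0]
filled = fill_gaps(x)
print(filled)        # gaps replaced by 2.0 and 3.0
smoothed = smooth(filled)
```

Interpolation is only trustworthy for short gaps; long occlusions are better repaired by manual labeling, as noted in step 2 above.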

  • 4.4.    Using captured data to drive the model

Figure 5. Motion capture in our experiment environment.

Figure 6. Motion capture data driving characters.

Use the data to drive the character model so that the model performs the same motion as the actor (Fig.6). At this step, you can see the prototype of the motion capture animation.

  • 5.    The Process of Motion Capture

    • 5.1.    Calibrate the environment

First, we calibrate the environment. This step is important: any error in the calibration leads to faulty results, no matter how well the post-processing is done. The calibration includes camera calibration and calibration of the volume origin and axes, which determine the floor and the capture volume of the environment. When the scene displayed on the screen (Fig.7) matches the actual environment, this step has succeeded.
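Calibration quality is commonly judged by reprojection error: known 3D calibration points are projected through the estimated camera and compared with the observed marker pixels. The sketch below uses a hypothetical camera matrix and invented points; it illustrates the check, not the Vicon calibration procedure itself.

```python
import numpy as np

def reprojection_error(P, points_3d, points_2d):
    """Mean image-plane distance between observed marker positions and
    the projections of their known 3D positions through the 3x4 camera
    matrix P. A large value indicates a poor calibration."""
    pts = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]          # perspective divide
    return float(np.mean(np.linalg.norm(proj - points_2d, axis=1)))

# Hypothetical calibration-wand points seen by an ideal pinhole camera.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.5, 3.0], [-0.5, 0.2, 2.5]])
obs = np.array([[0.0, 0.0], [0.5 / 3.0, 0.5 / 3.0], [-0.2, 0.08]])
print(reprojection_error(P, pts3d, obs))   # prints 0.0 for this exact data
```

In practice a small residual (a fraction of a pixel) is expected, and the wand is waved through the whole volume so every camera sees enough points.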

Figure 7. Experiment environment.

Figure 8. Actor with markers.

  • 5.2.    Actor preparation

At the same time, the actor (Fig.8) puts on the tights, and markers are attached to the tights according to the selected body template.

  • 5.3.    Capture the preparatory motion

The aim of capturing the preparatory motion is to identify the markers to the fullest extent.

  • 5.4.    Capture and edit motion

Figure 9. Captured motion.

  • 5.5.    Motion capture data used in MotionBuilder

Figure 10. Motion capture data driving the actor in MotionBuilder.

Import the captured data into MotionBuilder (Fig.10) to view the real-time animation and perform motion editing and data cleanup. If the result is satisfying, we bind the motion to the character in the next step.

  • 5.6.    Motion in Maya

In this phase, we finish the motion capture animation in Maya and add the scene. Music and special effects are also added, and the final animation is exported (Fig.11).

Figure 11. Military boxing.

  • 6.    Samples

Besides the military boxing, we give two other samples of motion capture: one is basketball playing (Fig.12) and the other is Mongolian dancing (Fig.13).

Figure 12. Basketball playing.

Figure 13. Mongolian dancing.

  • 7.    Conclusion and Future Work

This paper describes the integration of motion capture into our computer animation education and gives some examples. Our experience shows that applying motion capture to animated characters produces vivid motion and saves time compared with traditional key-frame animation. In future work, we plan to combine motion capture with key-frame animation, taking advantage of both to produce effective animation.

References

  • [1] Alberto Menache, Understanding Motion Capture for Computer Animation and Video Games, Morgan Kaufmann, 1999.
  • [2] Esben Plenge, Facial Motion Capture with Sparse 2D-to-3D Active Appearance Models, Master's thesis, 2008.
  • [3] http://www.vicon.com/
  • [4] Qiong Wu, Maryia Kazakevich, Robyn Taylor and Pierre Boulanger, “Interaction with a Virtual Character through Performance Based Animation”, Lecture Notes in Computer Science, vol. 6133, 2010, pp. 285-288.
  • [5] E. Granum, B. Holmqvist, Soren Kolstrup, Madsen K. Halskov and Lars Qvortrup, Virtual Interaction: Interaction in Virtual Inhabited 3D Worlds, 1st edition, Springer, 2000.
  • [6] Ohta, A. and Amano, N., “Vision-Based Motion Capture for Human Support Robot in Action”, SICE-ICASE International Joint Conference, 2006.