Retrieval of Motion Capture Data Aids Efficient Digital Learning
Authors: Sheng-Chih Chen, Wei-Kuang Chen, Tsai-Sheng Kao, Jui-I Hsu
Journal: International Journal of Education and Management Engineering (IJEME)
Issue: Vol. 2, No. 5, 2012.
Free access
This study aims to integrate digital technology into the animation production process. By experiencing and learning how body joints move, with digital technology as the learning aid, students can produce results comparable to motion-captured body movements. These results can then be applied to animation design, with the hope of helping students in their future employment. The paper focuses on digital learning and technology to establish body-movement production principles and an integrative framework. The study results can support the training of front-end talent for the digital content industry. In addition, the main contribution lies in the research and development of training methods and in linking them with digital learning techniques, which can provide directions for developing future courses in the relevant cultural industries. Human-like character animation, which is comparatively difficult to represent in 3D computer animation, is used as the example, and the variations in results achieved by different production processes are discussed. Feasible training directions are also provided as references for learning digital technologies.
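The abstract does not describe any implementation; purely to illustrate the joint-movement idea it refers to, the sketch below uses a toy planar forward-kinematics chain in which hypothetical captured joint angles (for example, shoulder and elbow rotations retrieved from motion capture data) determine limb positions. All names and values here are assumptions made for this example, not the authors' method.

```python
# Illustrative sketch only: the paper's abstract does not specify an implementation.
# A toy planar forward-kinematics chain showing how joint rotations, such as those
# retrieved from motion capture data, determine the positions of body segments.
import math

def forward_kinematics(joint_angles_deg, bone_lengths):
    """Return the 2D position of each joint in a simple planar chain.

    joint_angles_deg: rotation of each bone relative to its parent, in degrees.
    bone_lengths: length of each bone segment, in arbitrary units.
    """
    positions = [(0.0, 0.0)]            # root joint placed at the origin
    x, y, angle = 0.0, 0.0, 0.0
    for theta, length in zip(joint_angles_deg, bone_lengths):
        angle += math.radians(theta)    # accumulate rotation along the chain
        x += length * math.cos(angle)   # step to the next joint position
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Hypothetical captured frame: a 45-degree shoulder rotation and a 30-degree elbow bend.
print(forward_kinematics([45.0, 30.0], [0.30, 0.25]))
```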
Keywords: Cross-disciplinary integration, digital learning, human-like characters, motion capture
Short address: https://sciup.org/15013693
IDR: 15013693