Application of the computer vision system reconstructing the three-dimensional form of space technology objects
Authors: Koshelev P. E., Yuev A. A., Konoplev Yu. V., Tolmachev A. S., Tishkov A. I.
Journal: Siberian Aerospace Journal
Section: Informatics, computer technology and management
Issue: no. 1, vol. 23, 2022.
Free access
Trends in the development of the spacecraft (SC) industry pose increasingly complex problems for mechatronic systems. Owing to the complexity of the assembly processes of large-sized transformable structures (LTS) and solar panels, current methods of structure deployment are becoming less reliable and less relevant. In addition, the development of orbital stations gives rise to a whole range of new problems. One promising approach is the use of mechatronic manipulators; however, their applicability is severely limited when the algorithms for moving the end effector and manipulating objects are fixed at the design stage. This creates a need for new approaches to locating a target object and determining its shape, from which the interaction of the manipulator gripper with the object can then be computed. The paper presents a method for reconstructing the three-dimensional shape of objects based on a computer vision system. A stereo camera is used to obtain the spatial characteristics of the scene. Based on the data obtained, the scene is divided into several independent geometric surfaces, which are then processed stage by stage by neural network algorithms; the required parameters of the target objects are extracted at each stage. YolactEdge is used as the neural network architecture, performing instance segmentation and classification of objects. Relating the spatial characteristics of a target object to the three-dimensional model that replaces it can be treated as a separate problem. To establish this correspondence, the network architecture was extended with a "Keypoints mask" branch that predicts the positions of object keypoints which unambiguously determine the spatial characteristics of the target object. As a result, the system is able to build a three-dimensional map of the surveyed area in real time. In addition, the telemetry obtained makes it possible to compute the trajectory of the manipulator's end effector and its interaction with objects.
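The sketch below is a rough, hypothetical illustration of the pipeline described in the abstract: dense stereo depth, a point cloud, an instance-segmentation step, and back-projection of predicted keypoints into 3-D. It is not the authors' implementation; the function names and the `segment_instances` placeholder (standing in for YolactEdge with its "Keypoints mask" branch) are assumptions for illustration only. Only standard OpenCV/NumPy calls are used.

```python
# Hypothetical sketch of the reconstruction pipeline (not the authors' code).
import cv2
import numpy as np


def disparity_map(left_gray, right_gray):
    """Dense disparity via OpenCV semi-global block matching (SGBM)."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,  # must be divisible by 16
        blockSize=5,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    return sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0


def to_point_cloud(disparity, Q):
    """Reproject disparity to 3-D points using the rectification matrix Q."""
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return points_3d, valid


def segment_instances(left_bgr):
    """Placeholder for the instance-segmentation network (e.g. YolactEdge):
    would return per-instance masks, class labels and 2-D keypoint predictions
    from the 'Keypoints mask' branch."""
    raise NotImplementedError("run the segmentation network here")


def object_keypoints_3d(keypoints_px, points_3d, valid):
    """Lift predicted 2-D keypoints (u, v) to 3-D using the stereo point cloud."""
    lifted = [points_3d[v, u] for (u, v) in keypoints_px if valid[v, u]]
    return np.asarray(lifted)
```

In such a scheme, the 3-D keypoints returned by `object_keypoints_3d` would be matched against the keypoints of a stored CAD model of the object, fixing its pose and thereby the spatial characteristics needed to plan the gripper trajectory.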
Keywords: computer vision system, machine learning, stereo vision, object detection
Short URL: https://sciup.org/148329603
IDR: 148329603 | DOI: 10.31772/2712-8970-2022-23-1-8-20
References
- Belonozhko P. P. [Space Robotics: Experience and Development Prospects]. VKS. 2018, Vol. 94, No. 1, P. 84–93 (In Russ.).
- Belonozhko P. P. [Space robotics for installation and service. Potential tasks, advanced systems concepts]. VKS. 2019, Vol. 99, No. 2, P. 84–97 (In Russ.).
- Beklemishev N. D. et al. Algoritmy upravleniya dvizheniem skhvata manipulyatora [Motion control algorithms of the manipulator gripper]. Preprinty IPM im. M. V. Keldysha. 2017, No. 47, 36 p.
- Buyanov A. A., Vlasov A. I., Makeev S. S. [Research on neural network algorithms used for pattern recognition]. Molodye uchenye – nauke, tekhnologiyam i profobrazovaniyu dlya ustoychivogo razvitiya: problemy i novye resheniya. 2-ya Mezhdunar. konf. SN [Young Scientists to Science, Technology, and Vocational Education for Sustainable Development: Problems and New Solutions. 2nd Int. Conf.]. 2000, P. 22–25 (In Russ.).
- Medvedev M. V., Kirpichnikov A. P. [Three-dimensional reconstruction of objects in the vision system of a mobile robot]. Vestnik Kazanskogo tekhnologicheskogo universiteta. 2014, No. 15, P. 326–330 (In Russ.).
- Li Y. et al. SplitFusion: Simultaneous tracking and mapping for non-rigid scenes. IEEE International Conference on Intelligent Robots and Systems. 2020, P. 5128–5134.
- Zhou Z. et al. Learning-based object detection and localization for a mobile robot manipulator in SME production. Robot. Comput. Integr. Manuf. 2022, No. 73, P. 102229–102241.
- Shi S. et al. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, P. 10529–10538.
- Oh J. K., Lee S., Lee C. H. Stereo vision based automation for a bin-picking solution. Int. J. Control. Autom. Syst. 2012, No. 10, P. 362–373.
- Klekho D. Yu., Karelina E. B., Batyev Yu. P. [Using convolutional neural network technology in image object segmentation]. Vestnik MGUL – Lesnoy vestnik. 2021, Vol. 25, No. 1, P. 140–145 (In Russ.).
- Makarenko A. A., Kalayda V. T. [Face Image Detection Method for Video Monitoring Systems Based on Neural Network]. Izvestiya TPU. 2006, Vol. 309, No. 8, P. 113–117 (In Russ.).
- Bolya D. et al. YOLACT: Real-time instance segmentation. Proceedings of the IEEE International Conference on Computer Vision, 2019, P. 9157–9166.
- Liu H. et al. YolactEdge: Real-time Instance Segmentation on the Edge. arXiv preprint arXiv:2012.12259, 2021. Available at: https://arxiv.org/abs/2012.12259.
- Golovinskiy A., Funkhouser T. Min-cut based segmentation of point clouds. IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, 2009, P. 39–46.
- Luu T. H., Tran T. H. 3D vision for mobile robot manipulator on detecting and tracking target. ICCAS 2015 – 2015 15th International Conference on Control, Automation and Systems, Proceedings, 2015, P. 1560–1565.
- Khomchenko V. G. Robototekhnicheskie sistemy [Robotic systems]. Omsk, OmGTU Publ., 2016, 195 p.