Vision-based attitude estimation for spacecraft docking operation through deep learning algorithm

Thaweerath Phisannupawong, Patcharin Kamsing, Peerapong Torteeka, Soemsak Yooyen

DOI: 10.23919/ICACT48636.2020.9061445

Conference Location: Phoenix Park, PyeongChang, Korea (South)

Abstract

On-orbit services, especially docking operations and other space-object interactions, require accurate, reliable, and robust pose detection to carry out any rendezvous safely. When two spacecraft lack a mathematical model for predicting their relative position and orientation, a computer-vision-based attitude estimation system that detects the pose of a spacecraft through a camera is a key option for the mission. In astronautics, position is normally represented in Cartesian coordinates, while orientation is represented with quaternions, because quaternions describe spacecraft orientation better than physical angles and overcome the singularity problem. This paper constructs a model for both position and orientation estimation using public data. The input images are a dataset of the Soyuz spacecraft at a resolution of 1280×960, simulated in Unreal Engine 4. The implementation uses GoogLeNet as the convolutional neural network model, with a loss function based on direct regression. The results show that position estimation is accurate, with distance errors smaller than 1 meter, and that these errors tend to decrease when a proper scaling factor is set for the loss function. Orientation estimation shows high error; however, the experiments indicate that both position and orientation estimation can be improved by selecting a suitable scaling factor for the loss function.
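The abstract does not give the exact loss, but a common direct-regression formulation for joint position and orientation estimation (as in PoseNet-style networks) combines a Cartesian position error with a scaled quaternion error. The sketch below illustrates this with NumPy; the function name `pose_loss` and the default scaling factor `beta` are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

def pose_loss(pos_pred, pos_true, quat_pred, quat_true, beta=500.0):
    """Direct-regression pose loss (illustrative sketch):
    Euclidean position error plus a scaled error between
    normalized quaternions. `beta` is the scaling factor
    that balances the position and orientation terms."""
    pos_err = np.linalg.norm(pos_pred - pos_true)
    # Normalize both quaternions so they lie on the unit sphere.
    q_pred = quat_pred / np.linalg.norm(quat_pred)
    q_true = quat_true / np.linalg.norm(quat_true)
    quat_err = np.linalg.norm(q_pred - q_true)
    return pos_err + beta * quat_err

# Identical poses give zero loss.
p = np.array([1.0, 2.0, 3.0])
q = np.array([1.0, 0.0, 0.0, 0.0])
print(pose_loss(p, p, q, q))  # → 0.0
```

Because position is measured in meters while the quaternion difference is dimensionless and bounded, the scaling factor strongly affects which term dominates training, which is consistent with the paper's observation that a suitable scaling factor improves both estimates.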