Transforming LiDAR Point Cloud Characteristics between different Datasets using Image-to-Image Translation
Felix Berens,
Yannick Knapp,
Markus Reischl,
Stefan Elser
Chapter/contribution from the book: Schulte, H. et al. 2020. Proceedings – 30. Workshop Computational Intelligence: Berlin, 26–27 November 2020.
In recent years, several new LiDAR datasets for object detection have been published. These datasets were recorded with different LiDAR setups and at different locations: KITTI, for example, has 64 channels and was recorded in Germany, whereas Lyft (Level 5) has only 40 channels and was recorded in the USA. This leads to different characteristics of the LiDAR point clouds. In this paper, we present and evaluate a way to transform KITTI BEV (bird's-eye-view) maps such that they look like Lyft BEV maps. For this transformation we use the state-of-the-art image-to-image translator CycleGAN. The transformation is evaluated with two strategies: first, we test whether the translated KITTI BEV maps work better for an object detector trained on Lyft; second, we test whether the characteristic structure of the Lyft dataset (number of channels, location of points) is adopted by the translated point clouds. The conducted experiments showed that after the translation the KITTI BEV maps are more similar to Lyft BEV maps, but the detection performance decreased.
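As context for the BEV representation mentioned above, the following is a minimal sketch of projecting a LiDAR point cloud into a bird's-eye-view map. The grid extents, the 0.1 m cell resolution, and the max-height cell encoding are illustrative assumptions, not necessarily the exact configuration used in the paper:

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 80.0), y_range=(-40.0, 40.0),
                       z_range=(-3.0, 1.0), resolution=0.1):
    """Project an (N, 3) LiDAR point cloud into a 2-D bird's-eye-view map.

    Each grid cell stores the maximum point height, normalized to [0, 1].
    Ranges and resolution here are placeholder values for illustration.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Keep only points inside the chosen crop volume.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z = x[mask], y[mask], z[mask]

    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((h, w), dtype=np.float32)

    # Discretize metric coordinates into grid indices.
    rows = ((x - x_range[0]) / resolution).astype(np.int32)
    cols = ((y - y_range[0]) / resolution).astype(np.int32)
    heights = (z - z_range[0]) / (z_range[1] - z_range[0])

    # Keep the highest normalized point height per cell.
    np.maximum.at(bev, (rows, cols), heights)
    return bev
```

Once both datasets are rendered into such image-like maps, an image-to-image translator like CycleGAN can be applied to them directly, which is what makes this representation convenient for cross-dataset translation.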