Understanding and interpreting complex 3D environments is a key challenge of autonomous driving. Lidar sensors and the point clouds they record are particularly interesting for this challenge since they provide accurate 3D information about the environment. This work presents a multimodal, deep-learning-based approach for panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.
Extent: XV, 213 pp.
Price: €43.00
Dürr, F. 2023. Multimodal Panoptic Segmentation of 3D Point Clouds. Karlsruhe: KIT Scientific Publishing. DOI: https://doi.org/10.5445/KSP/1000161158
This book is licensed under Creative Commons Attribution-ShareAlike 4.0
This book is peer reviewed.
Published on 9 October 2023
English
248
Paperback | ISBN 978-3-7315-1314-8