Semantic Mapping with Omnidirectional Vision

dc.contributor.author: Posada, Luis Felipe
dc.contributor.author: Velasquez-Lopez, Alejandro
dc.contributor.author: Hoffmann, Frank
dc.contributor.author: Bertram, Torsten
dc.date.accessioned: 2021-04-12T21:14:33Z
dc.date.available: 2021-04-12T21:14:33Z
dc.date.issued: 2018-01-01
dc.description.abstract: This paper presents a purely visual semantic mapping framework using omnidirectional images. The approach rests on the robust segmentation of the robot's local free space, replacing conventional range sensors for the generation of occupancy grid maps. The perceptions are mapped into a bird's-eye view, which removes the non-linear distortions of the omnidirectional camera mirror and allows a direct inverse sensor model. The system relies on a place category classifier to label the navigation-relevant categories: room, corridor, doorway, and open room. Each place class maintains a separate grid map, and these are fused with the range-based occupancy grid to build a dense semantic map.
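
The abstract outlines the pipeline but the record carries no code. Below is a minimal sketch of the described per-class grid mapping and fusion, assuming a standard log-odds occupancy update and vote-based fusion; the class names come from the abstract, while the SemanticGridMapper helper, the grid shape, and the log-odds increments are illustrative assumptions, not the authors' implementation.

import numpy as np

PLACE_CLASSES = ["room", "corridor", "doorway", "open room"]
L_FREE, L_OCC = -0.4, 0.85  # assumed log-odds increments, not from the paper

class SemanticGridMapper:
    """Hypothetical helper: one grid per place class, fused with a
    range-style occupancy grid as the abstract describes."""

    def __init__(self, shape):
        self.class_grids = {c: np.zeros(shape) for c in PLACE_CLASSES}
        self.occ_grid = np.zeros(shape)  # log odds, 0 = unknown

    def update(self, free_mask, place_label):
        # free_mask: boolean bird's-eye-view projection of the segmented
        # local free space; place_label: output of the place classifier
        self.occ_grid[free_mask] += L_FREE             # free space lowers log odds
        frontier = free_mask ^ np.roll(free_mask, 1, axis=0)
        self.occ_grid[frontier & ~free_mask] += L_OCC  # crude boundary evidence
        self.class_grids[place_label][free_mask] += 1.0

    def semantic_map(self):
        # label each likely-free cell with its most supported place class
        votes = np.stack([self.class_grids[c] for c in PLACE_CLASSES])
        labels = votes.argmax(axis=0) + 1              # 1..4; 0 reserved for unknown
        observed = votes.sum(axis=0) > 0
        return np.where((self.occ_grid < 0) & observed, labels, 0)

mapper = SemanticGridMapper((200, 200))
mask = np.zeros((200, 200), dtype=bool)
mask[90:110, 60:140] = True                            # toy corridor free space
mapper.update(mask, "corridor")
print(np.unique(mapper.semantic_map()))                # -> [0 2]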
dc.identifier: https://eafit.fundanetsuite.com/Publicaciones/ProdCientif/PublicacionFrw.aspx?id=8307
dc.identifier.doi: 10.1109/ICRA.2018.8461165
dc.identifier.issn: 1050-4729
dc.identifier.issn: 2577-087X
dc.identifier.other: WOS;000446394501070
dc.identifier.other: SCOPUS;2-s2.0-85063162565
dc.identifier.uri: http://hdl.handle.net/10784/28959
dc.language.iso: eng
dc.publisher: IEEE COMPUTER SOC
dc.relation.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85063162565&doi=10.1109%2fICRA.2018.8461165&partnerID=40&md5=605724ef18669fcc66eec17070286d69
dc.rights: https://v2.sherpa.ac.uk/id/publication/issn/1050-4729
dc.source: IEEE International Conference on Robotics and Automation (ICRA)
dc.subject.keyword: LOCALIZATION
dc.subject.keyword: FEATURES
dc.title: Semantic Mapping with Omnidirectional Vision
dc.type: info:eu-repo/semantics/conferencePaper
dc.type: conferencePaper
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: publishedVersion
dc.type.local: Conference paper
