Spatial Layout and Surface Reconstruction from Omnidirectional Images

dc.contributor.author: Posada, Luis Felipe
dc.contributor.author: Velásquez, Alejandro
dc.date.accessioned: 2021-04-12T21:14:32Z
dc.date.available: 2021-04-12T21:14:32Z
dc.date.issued: 2016-10-09
dc.description.abstract: This paper presents a spatial layout recovery approach from single omnidirectional images. Vertical structures in the scene are extracted via classification from heterogeneous features computed at the superpixel level. Vertical surfaces are further classified according to their main orientation by fusing oriented line features, floor-wall boundary features, and histograms of oriented gradients (HOG) with a Random Forest classifier. The oriented line features are used to build an orientation map that accounts for the main vanishing points. The floor-wall boundary feature attempts to reconstruct the scene shape as if it were observed from a bird's-eye view. Finally, the HOG descriptors are aggregated per superpixel and summarize the gradient distribution within regions of homogeneous appearance. Compared to existing methods in the literature that rely only on corners or lines, our method gains statistical support from multiple cues aggregated per superpixel, which provides more robustness against noise, occlusion, and clutter.
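
The abstract describes the core machinery: heterogeneous features pooled per superpixel and fused with a Random Forest. As a rough illustration of the HOG-per-superpixel step only, below is a minimal Python sketch using scikit-image and scikit-learn. The SLIC segmentation, mean pooling, and all parameter values are illustrative assumptions, not the authors' implementation, and the oriented-line and floor-wall cues are omitted.

    # Minimal sketch (not the authors' code): SLIC superpixels, dense HOG
    # pooled per superpixel, and a Random Forest over the pooled features.
    # All parameters below are illustrative assumptions.
    import numpy as np
    from skimage.segmentation import slic
    from skimage.feature import hog
    from skimage.color import rgb2gray
    from sklearn.ensemble import RandomForestClassifier

    def superpixel_hog_features(image, n_segments=200):
        """Aggregate dense HOG responses per SLIC superpixel (mean pooling)."""
        segments = slic(image, n_segments=n_segments, start_label=0)
        gray = rgb2gray(image)
        # Dense HOG: one 9-bin histogram per 8x8 cell; feature_vector=False
        # preserves the spatial cell layout.
        cells = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(1, 1), feature_vector=False)
        # cells shape: (n_cells_y, n_cells_x, 1, 1, 9) -> (ny, nx, 9)
        cells = cells.reshape(cells.shape[0], cells.shape[1], -1)
        feats = []
        for label in range(segments.max() + 1):
            ys, xs = np.nonzero(segments == label)
            # Map pixel coordinates to HOG cell indices, then mean-pool.
            cy = np.clip(ys // 8, 0, cells.shape[0] - 1)
            cx = np.clip(xs // 8, 0, cells.shape[1] - 1)
            feats.append(cells[cy, cx].mean(axis=0))
        return segments, np.vstack(feats)

    # Training: X is (n_superpixels, 9) pooled HOG; y holds per-superpixel
    # orientation labels, e.g. from annotated layouts (hypothetical data).
    # clf = RandomForestClassifier(n_estimators=100).fit(X, y)

Mean pooling gives each homogeneous region a single descriptor, matching the abstract's "aggregated per superpixel" idea; the paper's actual aggregation scheme and feature fusion may differ.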
dc.identifier: https://eafit.fundanetsuite.com/Publicaciones/ProdCientif/PublicacionFrw.aspx?id=5788
dc.identifier.isbn: 9781509037629
dc.identifier.other: WOS;000391921702102
dc.identifier.uri: http://hdl.handle.net/10784/28938
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: IEEE
dc.source: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016)
dc.title: Spatial Layout and Surface Reconstruction from Omnidirectional Images
dc.type: info:eu-repo/semantics/conferencePaper
dc.type: conferencePaper
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: publishedVersion
dc.type.local: Conference paper
