Relative Pose Estimation and Fusion of Omnidirectional and Lidar Cameras (bibtex)
by Levente Tamas, Robert Frohlich, Zoltan Kato
Abstract:
This paper presents a novel approach for the extrinsic parameter estimation of omnidirectional cameras with respect to a 3D Lidar coordinate frame. The method works without specific setup and calibration targets, using only a pair of 2D-3D data. Pose estimation is formulated as a 2D-3D nonlinear shape registration task which is solved without point correspondences or complex similarity metrics. It relies on a set of corresponding regions, and pose parameters are obtained by solving a small system of nonlinear equations. The efficiency and robustness of the proposed method were confirmed on both synthetic and real data in urban environments.
Reference:
Levente Tamas, Robert Frohlich, Zoltan Kato, Relative Pose Estimation and Fusion of Omnidirectional and Lidar Cameras, In Proceedings of the ECCV Workshop on Computer Vision for Road Scene Understanding and Autonomous Driving (Lourdes de Agapito, Michael M. Bronstein, Carsten Rother, eds.), volume 8926 of Lecture Notes in Computer Science, Zurich, Switzerland, pp. 640-651, 2014, Springer.
Bibtex Entry:
@string{eccv-cvrsuad="Proceedings of the ECCV Workshop on Computer Vision for Road Scene Understanding and Autonomous Driving"}
@string{lncs="Lecture Notes in Computer Science"}
@string{springer="Springer"}
@INPROCEEDINGS{Tamas-etal2014,
  author =	 {Levente Tamas and Robert Frohlich and Zoltan Kato},
  title =	 {Relative Pose Estimation and Fusion of
                  Omnidirectional and Lidar Cameras},
  booktitle =	 eccv-cvrsuad,
  year =	 2014,
  series =	 lncs,
  address =	 {Zurich, Switzerland},
  month =	 sep,
  publisher =	 springer,
  pages =	 {640--651},
  editor =	 {Lourdes de Agapito and Michael M. Bronstein and
                  Carsten Rother},
  volume =	 8926,
  isbn =	 {978-3-319-16180-8},
  abstract =	 {This paper presents a novel approach for the
                  extrinsic parameter estimation of omnidirectional
                  cameras with respect to a 3D Lidar coordinate
                  frame. The method works without specific setup and
                  calibration targets, using only a pair of 2D-3D
                  data. Pose estimation is formulated as a 2D-3D
                  nonlinear shape registration task which is solved
                  without point correspondences or complex similarity
                  metrics. It relies on a set of corresponding
                  regions, and pose parameters are obtained by solving
                  a small system of nonlinear equations. The
                  efficiency and robustness of the proposed method
                  were confirmed on both synthetic and real data in
                  urban environments.},
}