Laser and camera fusion for indoor robot localization

dc.contributor.authorMuhieddine, Ali Hussein
dc.contributor.departmentAmerican University of Beirut. Faculty of Engineering and Architecture. Department of Mechanical Engineering (degree granting institution).
dc.date2013
dc.date.accessioned2015-02-03T10:23:35Z
dc.date.available2015-02-03T10:23:35Z
dc.date.issued2013
dc.date.submitted2013
dc.descriptionThesis (M.E.)-- American University of Beirut, Department of Mechanical Engineering, 2013.
dc.descriptionAdvisor: Dr. Daniel Asmar, Assistant Professor, Mechanical Engineering; Members of Committee: Dr. Elie Shammas, Assistant Professor, Mechanical Engineering; Dr. Imad Elhajj, Associate Professor, Electrical and Computer Engineering.
dc.descriptionIncludes bibliographical references (leaves 45-50)
dc.description.abstractAlthough Iterative Closest Point (ICP) and Visual Odometry (VO) are under extensive development and improvement, both systems still face challenging limitations. While laser scanners provide accurate depth scans for ICP, the performance of this technique degrades when correspondences are ambiguous. VO systems, on the other hand, have robust feature-matching techniques but lack accurate depth measurements. Intuitively, by extracting visual features in the scene viewed by the laser, one may correlate the locations of these features with those of the laser scan points to robustify the 3D-to-3D egomotion estimation that is typically done via ICP. Towards this end, this thesis presents a system for egomotion estimation using a camera-laser pair. The camera-laser extrinsic calibration allows the transformation of the laser points into a line of pixels in the image, providing additional information from the scene. As long as the ground is flat, the presented method applies for matching laser points in different types of environments. In Manhattan environments, the vertical projections onto the laser line of two matching image features in two successive frames are found to be matching laser points. For environments with inclined walls and obstacles, the same projections provide a local search region for the correct laser match in the second frame; in this case, a set of three laser points is matched at a time via geometric descriptors. Contrary to prior art, the proposed system is not limited to Manhattan settings and succeeds as long as the ground is flat. Experiments conducted in real environments demonstrate accurate matching and successful robot localization.
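
A minimal sketch (not from the thesis) of the projection step the abstract describes, in which extrinsic calibration maps the laser scan into a line of image pixels. It assumes a planar scan in the laser frame, a hypothetical rotation R and translation t from the laser frame to the camera frame, and a pinhole intrinsic matrix K; all names are illustrative, not the author's implementation.

    import numpy as np

    def project_laser_to_image(laser_pts, R, t, K):
        # laser_pts: (N, 2) planar scan points (x, y) in the laser frame.
        # R (3x3), t (3,): hypothetical laser-to-camera extrinsics.
        # K (3x3): camera intrinsic matrix.
        # Lift the planar scan to 3D by appending z = 0 in the laser frame.
        pts_laser = np.column_stack([laser_pts, np.zeros(len(laser_pts))])
        # Rigidly transform the points into the camera frame.
        pts_cam = pts_laser @ R.T + t
        # Keep only points in front of the camera (positive depth).
        valid = pts_cam[:, 2] > 1e-6
        pts_cam = pts_cam[valid]
        # Pinhole projection: normalize by depth, then apply intrinsics.
        pix = (K @ (pts_cam / pts_cam[:, 2:3]).T).T
        return pix[:, :2], valid

Each returned pixel lies on the single image curve traced by the scan; matched image features in successive frames can then be projected vertically onto this line to hypothesize laser-point correspondences, as the abstract describes.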
dc.format.extentx, 50 leaves : illustrations (some color) ; 30 cm
dc.identifier.otherb17933419
dc.identifier.urihttp://hdl.handle.net/10938/10002
dc.language.isoen
dc.relation.ispartofTheses, Dissertations, and Projects
dc.subject.classificationET:005964 AUBNO
dc.subject.lcshComputer vision.
dc.subject.lcshRobotics.
dc.subject.lcshRobots -- Motion.
dc.subject.lcshRobot vision.
dc.titleLaser and camera fusion for indoor robot localization
dc.typeThesis
