AUB ScholarWorks

Laser and camera fusion for indoor robot localization

Show simple item record

dc.contributor.author Muhieddine, Ali Hussein
dc.date 2013
dc.date.accessioned 2015-02-03T10:23:35Z
dc.date.available 2015-02-03T10:23:35Z
dc.date.issued 2013
dc.date.submitted 2013
dc.identifier.other b17933419
dc.identifier.uri http://hdl.handle.net/10938/10002
dc.description Thesis (M.E.) -- American University of Beirut, Department of Mechanical Engineering, 2013.
dc.description Advisor: Dr. Daniel Asmar, Assistant Professor, Mechanical Engineering; Members of Committee: Dr. Elie Shammas, Assistant Professor, Mechanical Engineering; Dr. Imad Elhajj, Associate Professor, Electrical and Computer Engineering.
dc.description Includes bibliographical references (leaves 45-50)
dc.description.abstract Although Iterative Closest Point (ICP) and Visual Odometry (VO) are under extensive development and improvement, the limitations of both systems remain challenging. While laser scanners provide accurate depth scans for ICP, the performance of this technique degrades when correspondences are ambiguous. On the other hand, VO systems have robust feature-matching techniques but lack accurate depth measurements. Intuitively, by extracting visual features in the scene of the laser, one may correlate the locations of these features with those of the laser scan points to robustify the 3D-to-3D egomotion estimation that is typically done via ICP. Towards this end, this thesis presents a system for egomotion estimation using a camera-laser pair setup. The camera-laser extrinsic calibration allows the transformation of the laser points into a line of pixels in the image, providing additional information from the scene. As long as the ground is flat, the presented method applies for matching laser points in different types of environments. In Manhattan environments, the vertical projections onto the laser lines of two matching image features in two successive frames are found to be matching laser points. For environments with inclined walls and obstacles, the same projections provide a local search region for the correct laser match in the second frame; in this case, a set of three laser points is matched at a time via geometric descriptors. Contrary to prior art, the proposed system is not limited to Manhattan settings and succeeds as long as the ground is flat. Experiments conducted in real environments demonstrate accurate matching and successful robot localization.
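The abstract's key step is using the camera-laser extrinsic calibration to map laser scan points to a line of pixels in the image. A minimal sketch of that projection is given below, assuming a standard pinhole camera model with intrinsics K and extrinsics (R, t); the function name and parameters are illustrative and not taken from the thesis itself.

```python
import numpy as np

def project_laser_to_image(laser_pts_xy, R, t, K):
    """Project 2D laser scan points (x, y in the laser's scan plane,
    metres) into pixel coordinates, given the laser-to-camera rigid
    transform (R, t) and the 3x3 camera intrinsic matrix K.
    Returns an (N, 2) array of pixels and a boolean mask of points
    lying in front of the camera."""
    n = laser_pts_xy.shape[0]
    # Lift the planar scan to 3D: the scan plane is z = 0 in the laser frame.
    pts_laser = np.hstack([laser_pts_xy, np.zeros((n, 1))])   # (N, 3)
    # Rigid transform into the camera frame.
    pts_cam = (R @ pts_laser.T).T + t                         # (N, 3)
    in_front = pts_cam[:, 2] > 0
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv_h = (K @ pts_cam.T).T
    pixels = uv_h[:, :2] / uv_h[:, 2:3]
    return pixels, in_front

# Illustrative use: identity rotation, camera 1 m behind the scan plane.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pix, mask = project_laser_to_image(np.array([[0.1, 0.2]]),
                                   np.eye(3), np.array([0.0, 0.0, 1.0]), K)
```

For a flat-ground setup such as the one described, the projected points trace a single line of pixels in the image, which is what allows image features to be associated with laser returns by vertical projection.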
dc.format.extent x, 50 leaves : illustrations (some color) ; 30 cm
dc.language.iso eng
dc.relation.ispartof Theses, Dissertations, and Projects
dc.subject.classification ET:005964 AUBNO
dc.subject.lcsh Computer vision.
dc.subject.lcsh Robotics.
dc.subject.lcsh Robots -- Motion.
dc.subject.lcsh Robot vision.
dc.title Laser and camera fusion for indoor robot localization
dc.type Thesis
dc.contributor.department American University of Beirut. Faculty of Engineering and Architecture. Department of Mechanical Engineering. degree granting institution.

