
Forrest's Space

Reposted from: http://blog.csdn.net/zyh821351004/article/details/50081713

http://slamcn.org/index.php/%E9%A6%96%E9%A1%B5

===============================================================

SLAM resource roundup

  SLAM video courses and short PPT tutorials; books; SLAM papers (surveys, common methods, visual SLAM, laser SLAM, loop closure); OpenSLAM

  SLAM datasets;  SLAM researcher groups;  SLAM researchers

===================================Visual SLAM============================================================

==================================Courses==================================================================

== Robotics / mobile-robotics videos from abroad ==

==Machine Learning==

========Photogrammetry==========

========Vision==========

=======================================Books=============================================

Home: http://www.probabilistic-robotics.org/   Errata: http://probabilistic-robotics.informatik.uni-freiburg.de/errata.html

Multiple View Geometry in Computer Vision Second Edition   

The book threads robotics almost entirely through MATLAB, with corresponding code for every chapter.

Peter Corke of the Queensland University of Technology, Australia, is a leading figure in machine vision; his book Robotics, Vision and Control is a classic textbook in the field.

It ships with a companion MATLAB toolbox in two parts: one for robotics and one for vision.

The source code is open and free to download: http://petercorke.com/Toolbox_software.html


=======================================Papers==============================================================

Basic SLAM methods:

Filtering frameworks:   Kalman filters: EKF, UKF, EIF, etc.

                        Particle filters: PF, RBPF, FastSLAM 1.0/2.0, MCL
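To make the Kalman-filter branch above concrete, here is a minimal sketch of one EKF predict/update cycle in Python/NumPy, assuming a unicycle motion model and a range-bearing observation of a single known landmark at the origin (the state layout, landmark position, and noise values are illustrative, not from any particular library):

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, R, Q, dt=1.0):
    """One EKF predict + update step for a planar robot.

    mu    : state mean [x, y, theta]
    Sigma : 3x3 state covariance
    u     : control [v, omega]
    z     : measurement [range, bearing] of a landmark at the origin
    R, Q  : motion / measurement noise covariances
    """
    v, w = u
    x, y, th = mu
    # predict: unicycle motion model, Jacobian G = df/dstate
    mu_bar = np.array([x + v * dt * np.cos(th),
                       y + v * dt * np.sin(th),
                       th + w * dt])
    G = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    Sigma_bar = G @ Sigma @ G.T + R

    # update: expected range-bearing to the landmark at (0, 0)
    px, py, pth = mu_bar
    q = px ** 2 + py ** 2
    z_hat = np.array([np.sqrt(q), np.arctan2(-py, -px) - pth])
    H = np.array([[px / np.sqrt(q), py / np.sqrt(q),  0.0],
                  [-py / q,          px / q,         -1.0]])
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    mu_new = mu_bar + K @ innov
    Sigma_new = (np.eye(3) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

UKF/EIF keep the same predict/update structure but replace the linearization (sigma points) or the parameterization (information form); EKF-SLAM additionally stacks the landmark positions into the state vector.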

Graph-optimization framework:   Graph-SLAM;  tool: g2o
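The graph-optimization idea can be shown on a toy problem. Below is a sketch of Graph-SLAM on a 1-D pose chain (g2o does the same thing in full SE(2)/SE(3) with sparse solvers); the poses, edge values, and anchor weight are made up for illustration. In 1-D the residuals are linear, so a single Gauss-Newton step solves the problem exactly:

```python
import numpy as np

def solve_pose_graph(n, edges):
    """One Gauss-Newton step on a 1-D pose graph.

    edges: list of (i, j, z) meaning "measured x_j - x_i = z".
    The first pose is softly anchored at 0 to fix the gauge freedom.
    """
    x = np.zeros(n)           # initial guess
    H = np.zeros((n, n))      # approximate Hessian J^T J
    b = np.zeros(n)           # gradient J^T e
    for i, j, z in edges:
        e = (x[j] - x[i]) - z   # residual; Jacobian: -1 wrt x_i, +1 wrt x_j
        H[i, i] += 1.0; H[j, j] += 1.0
        H[i, j] -= 1.0; H[j, i] -= 1.0
        b[i] -= e
        b[j] += e
    H[0, 0] += 1e6              # soft anchor on x_0
    return x + np.linalg.solve(H, -b)

# odometry says each step moves 1.0; a loop closure says x3 - x0 = 2.9,
# so the optimizer spreads the 0.1 discrepancy over the three odometry edges
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.9)]
x = solve_pose_graph(4, edges)
```

The filtering frameworks above marginalize out past poses at every step; the graph framework keeps all poses as nodes and re-optimizes the whole trajectory, which is what makes loop closures so effective in it.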

Open-source algorithms:

              Laser:   gmapping,  Karto SLAM,  scan matching

              Vision:  MonoSLAM (SceneLib, Davison, C++),  EKF mono-SLAM (inverse-depth observation model, MATLAB),

                       PTAM,  SVO,  ORB-SLAM

Loop-closure detection
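Appearance-based loop closure (FabMap, or the bag-of-words schemes used in systems like ORB-SLAM) boils down to comparing a compact descriptor of the current image against those of earlier frames. A deliberately simplified sketch, assuming frames have already been reduced to bag-of-visual-words histograms; the threshold, gap, and data are illustrative, and real systems follow the appearance match with geometric verification:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-visual-words histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom > 0 else 0.0

def detect_loops(histograms, threshold=0.8, min_gap=5):
    """Flag (i, j) frame pairs whose appearance similarity exceeds the threshold.

    min_gap skips temporally nearby frames, which look similar trivially.
    """
    loops = []
    for j in range(len(histograms)):
        for i in range(j - min_gap):
            if bow_similarity(histograms[i], histograms[j]) > threshold:
                loops.append((i, j))
    return loops
```

A detected pair (i, j) would then be added as an extra edge in the pose graph above, which is exactly where loop closure pays off.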

Open-source code roundup

     openslam              https://www.openslam.org/   


If you are just starting out, read the survey and fundamentals papers first. Then, based on what you plan to work on, skim the papers in the relevant directions, and finally focus on resolving the questions that came up in your literature survey.

=======================================Datasets==============================================================

  • Logs of odometry, laser and sonar data taken from real robots.
  • Logs of all sorts of sensor data taken from simulated robots.
  • Environment maps generated by robots.
  • Environment maps generated by hand (i.e., re-touched floor-plans).
----- Reposted from:
  • SLAM benchmarking.  http://kaspar.informatik.uni-freiburg.de/~slamEvaluation/datasets.php
  • KITTI SLAM dataset.  http://www.cvlibs.net/datasets/kitti/eval_odometry.php. Includes monocular vision, stereo vision, Velodyne, and POS trajectories.
  • OpenSLAM .https://www.openslam.org/links.html
  • CMU Visual Localization Data Set: Dataset collected using the Navlab 11 equipped with IMU, GPS, Lidars and cameras.
  • NYU RGB-D Dataset: Indoor dataset captured with a Microsoft Kinect that provides semantic labels.
  • TUM RGB-D Dataset: Indoor dataset captured with Microsoft Kinect and high-accuracy motion capturing.
  • New College Dataset: 30 GB of data for 6 D.O.F. navigation and mapping (metric or topological) using vision and/or laser.
  • The Rawseeds Project: Indoor and outdoor datasets with GPS, odometry, stereo, omnicam and laser measurements for visual, laser-based, omnidirectional, sonar and multi-sensor SLAM evaluation.
  • Victoria Park Sequence: Widely used sequence for evaluating laser-based SLAM. Trees serve as landmarks, detection code is included.
  • Malaga Dataset 2009 and Malaga Dataset 2013: Dataset with GPS, Cameras and 3D laser information, recorded in the city of Malaga, Spain.
  • Ford Campus Vision and Lidar Dataset: Dataset collected by a Ford F-250 pickup, equipped with IMU, Velodyne and Ladybug.

------ Reposted from:
1. TUM dataset
Everyone who uses it knows this one: an RGB-D dataset with many sequences, shipped with ground-truth trajectories and error-measurement scripts (written in Python, along with some other useful functions).
Some sequences are very easy (the xyz and 360 series), while others are quite hard (the various SLAM scenes).
Since its target scenario is robot rescue (though you can hardly tell), the scenes are fairly empty; often the Kinect depth only reaches the floor. It places fairly high demands on the robustness of a visual algorithm.
URL: http://vision.in.tum.de/data/datasets/rgbd-dataset
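The core of those error-measurement scripts is the absolute trajectory error. Below is a simplified translation-only sketch of it; the official TUM tool additionally associates poses by timestamp and estimates the aligning rotation with Horn's method, which this sketch omits:

```python
import numpy as np

def ate_rmse(gt, est, align=True):
    """Absolute trajectory error (RMSE) between time-associated positions.

    gt, est : (N, 3) arrays of ground-truth / estimated positions,
              already associated by timestamp.
    align   : subtract the centroids first (translation-only alignment;
              a full evaluation also estimates the rotation).
    """
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    if align:
        gt = gt - gt.mean(axis=0)
        est = est - est.mean(axis=0)
    err = np.linalg.norm(gt - est, axis=1)   # per-pose position error
    return float(np.sqrt((err ** 2).mean()))
```

For example, a trajectory that is merely shifted by a constant offset scores zero after alignment, which is why the alignment step matters when comparing trajectories expressed in different frames.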


2. MRPT
Forum member SLAM_xian has already posted the address: see that thread.
Contains data from multiple sensors, including stereo, laser, and so on.
MRPT itself is a development package for robotics (though I have never used it); interested readers may give it a try.

3. KITTI
Forum member zhengshunkai posted the address: see that thread.
The well-known outdoor dataset: stereo, with ground truth. The scenes are large and so is the data volume (which makes it hard to use where I am, with my bandwidth cap...). If you work on outdoor SLAM, do try this dataset; even if you don't want to, the reviewers will make you.

4. Oxford datasets
Contains several FabMap-related datasets for validating loop-closure detection algorithms. Outdoor scenes. Ground-truth loop closures are provided (reportedly labeled by hand; what patience!).
URL: http://www.robots.ox.ac.uk/~mobile/wikisite/pmwiki/pmwiki.php?n=Main.Datasets#userconsent#

5. ICL-NUIM dataset
Another one from Imperial College: an RGB-D dataset, indoor-oriented. Provides ground truth and odometry.
URL: http://www.doc.ic.ac.uk/%7Eahanda/VaFRIC/iclnuim.html

6. NYUv2 dataset
An RGB-D dataset with semantic labels, originally intended for recognition but also usable for SLAM. Its distinguishing feature is a training set (1400+ hand-labeled images, apparently annotated by hired workers) plus a large pile of video sequences.
URL: http://cs.nyu.edu/silberman/datasets/nyu_depth_v2.html (access seems flaky; no idea whether it will be fixed)


7. KOS 3D-scan dataset
A laser-scan dataset.
URL: http://kos.informatik.uni-osnabrueck.de/3Dscans/


======================================Researchers=============================================================

SLAM researcher QQ group:   254787961

  • LSPI: Fast and efficient reinforcement learning with linear value function approximation for MDPs and multi-agent systems.
  • DPSLAM: Fast, accurate, truly simultaneous localization and mapping without landmarks.
  • Textured Occupancy Grids: Monocular localization without features. We provide some 3D data sets using a variety of sensors.

Dr. Thomas Whelan: his research focuses on real-time dense visual SLAM and, on a broader scale, general robotic perception.

Probabilistic Robotics: About the Authors

Sebastian Thrun is Associate Professor in the Computer Science Department at Stanford University and Director of the Stanford AI Lab. 

Wolfram Burgard is Associate Professor and Head of the Autonomous Intelligent Systems Research Lab in the Department of Computer Science at the University of Freiburg. 

Dieter Fox is Associate Professor and Director of the Robotics and State Estimation Lab in the Department of Computer Science and Engineering at the University of Washington.


===============================Companies===================================

Google Tango:   https://www.google.com/atap/project-tango/

SLAMTEC (laser SLAM):   http://www.slamtec.com/en
