
Dex-Net 3.0 Paper Translation

I. Introduction

1. Objective: study the application of deep learning to robotic suction grasping.
2. Significance: address the low robustness of suction grasps on objects with complex geometry.
3. Approach:
(1) Design a physical contact model.
(2) Construct the Dex-Net 3.0 dataset.
(3) Train a GQ-CNN network.

II. Compliant Suction Contact Model

(1) Problem Statement
1. Goal: given a point cloud from a depth camera, find the most robust suction grasp.
2. Assumptions: to simplify the model, we assume:
  (1) The system is quasi-static (inertial effects during suction-cup motion are negligible).
  (2) Objects are rigid and non-porous.
  (3) Each object rests singulated in a stable pose on a flat worksurface.
  (4) A single depth camera, orthogonal to the worksurface, with known position and orientation relative to the robot.
  (5) A vacuum end effector with known geometry and a single suction cup made of linear elastic material.
3. We therefore define:
  (1) The parameterization of a suction grasp: u = (p, v), where p is a 3D target point and v is an approach direction parameterized by two angles.
  (2) The latent state that affects grasp success (e.g. object material and friction): x.
  (3) A model of the grasp success distribution: p(S | u, x), where S is a binary grasp quality function; S = 1 if the grasp succeeds and S = 0 otherwise.
  (4) For a given point cloud y, the grasp robustness is the probability of success under this distribution: Q(u, y) = P(S | u, y).
4. Our goal is to find, for a given point cloud, the suction grasp that maximizes robustness, i.e.
  π*(y) = argmax_{u ∈ C} Q(u, y), where C is the set of feasible grasp candidates.

Because of the latent state x, we cannot compute the robustness function Q directly. Instead, we train a GQ-CNN on point clouds, suction grasps, and success labels from existing data, approximating π* by minimizing the cross-entropy loss L:
  θ* = argmin_θ Σ_{i=1}^{N} L(S_i, Q_θ(u_i, y_i))
We then evaluate grasp outcomes building on this prior work.
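
As a concrete illustration of this training objective, here is a minimal sketch, not the official GQ-CNN code, of fitting a small two-stream network Q_θ(u, y) with a cross-entropy loss on binary success labels S_i; the network layout, tensor shapes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyGQCNN(nn.Module):
    """Toy stand-in for GQ-CNN: an image stream plus a pose (grasp) stream."""
    def __init__(self):
        super().__init__()
        self.image_stream = nn.Sequential(
            nn.Conv2d(1, 16, 7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_stream = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        self.head = nn.Linear(32, 2)  # logits for S in {0, 1}

    def forward(self, depth_patch, grasp_pose):
        feat = torch.cat([self.image_stream(depth_patch),
                          self.pose_stream(grasp_pose)], dim=1)
        return self.head(feat)

model = TinyGQCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()  # the cross-entropy loss L in the text

# One gradient step on a fake mini-batch of (depth patch, grasp, label) tuples.
depth_patch = torch.randn(8, 1, 32, 32)   # aligned depth-image crops y_i
grasp_pose = torch.randn(8, 1)            # grasp parameters u_i (illustrative)
labels = torch.randint(0, 2, (8,))        # binary success labels S_i
loss = loss_fn(model(depth_patch, grasp_pose), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```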

(2) Seal Formation
1. To evaluate grasps, the paper builds a quasi-static spring model and uses it to assess the following two criteria:
  1) Whether a seal forms between the perimeter of the suction cup and the object surface.
  2) Given a formed seal, whether the suction cup can resist external wrenches on the object caused by gravity and disturbances.
2. The model replaces a full elasticity analysis with a spring system connecting the vertices { v1, v2, v3, ..., vn, a }; the springs fall into three classes (see the connectivity sketch after this list):
  Perimeter (structural) springs: connect adjacent base vertices: vi ~ vi+1
  Cone (structural) springs: connect base vertices to the cone apex: vi ~ a
  Flexion springs: connect base vertices two apart: vi ~ vi+2
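
To make the connectivity explicit, the following sketch (my own illustration, not code from the paper) enumerates the three spring sets for a base ring of n vertices v_0, ..., v_{n-1} plus the cone apex a:

```python
def suction_cup_springs(n):
    """Return (perimeter, cone, flexion) spring index pairs for an n-vertex base ring."""
    apex = n  # index of the cone apex vertex a
    perimeter = [(i, (i + 1) % n) for i in range(n)]   # v_i ~ v_{i+1}
    cone = [(i, apex) for i in range(n)]               # v_i ~ a
    flexion = [(i, (i + 2) % n) for i in range(n)]     # v_i ~ v_{i+2}
    return perimeter, cone, flexion

perimeter, cone, flexion = suction_cup_springs(8)
print(len(perimeter), len(cone), len(flexion))  # 8 8 8
```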
3. On this basis, the paper gives criteria for deciding whether a seal forms (here C denotes the suction cup contact model and M the object mesh):
  (1) The cone faces of C must not collide with M during the approach or contact configurations.
  (2) The surface of M has no gaps within the contact ring traced by C's perimeter springs.
  (3) The energy required in each spring to maintain the contact configuration of C is below a threshold.

(3) Wrench Space Analysis
Disturbance wrenches: in the grasp state, the wrenches on the object can be represented with a contact model consisting of m basis wrenches.
Suction contact model: the grasp wrench defined by this model has the following components:
  (1) Actuated normal force (fz): the force that the suction cup material applies to the object along the z axis.
  (2) Vacuum force (V): the magnitude of the constant force due to the air-pressure differential that holds the object to the cup.
  (3) Frictional force (ff = (fx, fy)): the force in the contact tangent plane due to the normal force between the suction cup and the object, fN = fz + V.
  (4) Torsional friction (τz): the torque resulting from frictional forces in the contact ring.
  (5) Elastic restoring torque (τe = (τx, τy)): the torque about axes in the contact tangent plane due to elastic restoring forces in the suction cup pushing on the object along the boundary of the contact ring.
From the derivation, a wrench F must satisfy a set of linear constraints for the contact to resist it:

  Friction: √3 |fx| ≤ μ fN, √3 |fy| ≤ μ fN
  Torsional friction: √3 |τz| ≤ r μ fN
  Elastic restoring: √2 |τx| ≤ π r κ, √2 |τy| ≤ π r κ
  Suction: fz ≥ −V

where μ is the friction coefficient, r is the radius of the contact ring, and κ is a material-dependent constant.
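
A small sketch of checking this constraint set for a single wrench F = (fx, fy, fz, τx, τy, τz) follows; the numeric values in the example call are made up for illustration.

```python
import math

def satisfies_suction_constraints(F, V, mu, r, kappa):
    """Return True if wrench F lies inside the suction contact wrench set above."""
    f_x, f_y, f_z, t_x, t_y, t_z = F
    f_n = f_z + V  # normal force between cup and object: fN = fz + V
    return (math.sqrt(3) * abs(f_x) <= mu * f_n and            # planar friction
            math.sqrt(3) * abs(f_y) <= mu * f_n and
            math.sqrt(3) * abs(t_z) <= r * mu * f_n and        # torsional friction
            math.sqrt(2) * abs(t_x) <= math.pi * r * kappa and # elastic restoring
            math.sqrt(2) * abs(t_y) <= math.pi * r * kappa and
            f_z >= -V)                                         # suction limit

# Example: a pure downward gravity load of 5 N on the grasped object.
print(satisfies_suction_constraints((0, 0, -5.0, 0, 0, 0),
                                    V=25.0, mu=0.5, r=0.01, kappa=0.005))
```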

(4) Robust Wrench Resistance
We evaluate the robustness of a candidate suction grasp by evaluating seal formation and wrench resistance over distributions of object pose, grasp pose, and disturbance wrenches:
Definition: the robust wrench resistance metric for u and x is

  ρ(u, x) = P(W | u, x),

the probability that the contact resists the disturbing wrench (W = 1) under perturbations in object pose, gripper pose, friction, and disturbing wrenches.

In practice, we compute robust wrench resistance by drawing M samples, evaluating wrench resistance for each sample, and taking the sample mean.
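
The sample-mean estimate might look like the sketch below; sample_perturbed_state and resists_gravity_wrench are hypothetical placeholders standing in for the paper's perturbation model and seal/wrench-resistance checks.

```python
import random

def resists_gravity_wrench(grasp, state):
    # Placeholder: in the paper this checks seal formation and whether the
    # gravity plus disturbance wrench lies in the contact wrench set.
    return random.random() < 0.8

def sample_perturbed_state(grasp, state):
    # Placeholder: perturb object pose, gripper pose, friction, disturbances.
    return grasp, state

def robust_wrench_resistance(grasp, state, num_samples=100):
    """Estimate rho(u, x) as the fraction of perturbed samples that resist the wrench."""
    successes = 0
    for _ in range(num_samples):
        g, s = sample_perturbed_state(grasp, state)
        successes += resists_gravity_wrench(g, s)
    return successes / num_samples

print(robust_wrench_resistance(grasp=None, state=None))
```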

III. The Dex-Net 3.0 Dataset

1. To learn to predict grasp robustness from noisy point clouds, we generate the Dex-Net 3.0 training dataset by sampling tuples (Si, ui, yi) from a joint distribution p(S, u, x, y). The dataset contains point clouds, suction grasps, and grasp success labels, and the joint distribution is composed of:
  • States p(x): a prior over the possible objects, object poses, and camera poses the robot will encounter.
  • Grasp candidates p(u | x): a prior constraining grasp candidates to target points on the object surface.
  • Grasp success p(S | u, x): a stochastic model of wrench resistance for the gravity wrench.
  • Observations p(y | x): a sensor noise model.
2. To sample from the model (a sketch of this loop follows the list):
  (1) we first select an object at random from a database of 3D CAD models;
  (2) then sample the latent state, e.g. the object pose and friction coefficient;
  (3) evaluate robust wrench resistance (ρ) and convert it to a binary label S with a threshold of 0.2;
  (4) sample a point cloud of the scene using rendering and an image-noise model, and associate the label S with a pixel location in the image via projection.
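
A sketch of the dataset-generation loop implementing steps (1)-(4) is given below. All helper functions are hypothetical stand-ins for the paper's mesh database, sampling, wrench analysis, and rendering components; only the 0.2 threshold comes from the text.

```python
import random

RHO_THRESHOLD = 0.2  # threshold used to binarize wrench resistance into labels S

# Hypothetical stand-ins; replace with real mesh loading, sampling, and rendering.
def load_random_mesh():            return "mesh"
def sample_state(mesh):            return {"pose": 0, "friction": 0.5}
def sample_grasps(mesh, n):        return [{"point": i} for i in range(n)]
def wrench_resistance(grasp, s):   return random.random()
def render_depth_image(state):     return [[0.0] * 32 for _ in range(32)]
def project_to_pixel(grasp, s):    return (16, 16)

def generate_datapoints(num_objects=10, grasps_per_object=5):
    """Sample (point cloud, grasp, pixel, S) tuples following steps (1)-(4)."""
    dataset = []
    for _ in range(num_objects):
        mesh = load_random_mesh()                    # (1) random 3D CAD model
        state = sample_state(mesh)                   # (2) latent state: pose, friction
        depth_image = render_depth_image(state)      # (4) rendered, noisy depth image
        for grasp in sample_grasps(mesh, grasps_per_object):
            rho = wrench_resistance(grasp, state)    # (3) robust wrench resistance
            S = int(rho > RHO_THRESHOLD)             #     binarize at 0.2
            pixel = project_to_pixel(grasp, state)   # (4) link S to an image pixel
            dataset.append((depth_image, grasp, pixel, S))
    return dataset

print(len(generate_datapoints()))  # 10 objects * 5 grasps = 50 datapoints
```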

IV. Deep Robust Suction Grasping Policy

We train a GQ-CNN on the Dex-Net 3.0 dataset. The architecture is similar to that of Dex-Net 2.0, with two differences:
  (1) the pose input is modified to include the angle between the approach direction and the table normal;
  (2) the pc1 layer is enlarged from 16 to 64 units.
The trained model achieves 93.5% classification accuracy on the held-out validation set.
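
The two modifications might be wired up as in the sketch below (not the official GQ-CNN implementation); treating the pose input as a pair (gripper depth, approach angle) and every layer size other than pc1 = 64 are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def approach_angle(v, table_normal=(0.0, 0.0, 1.0)):
    """Angle (radians) between the grasp approach direction v and the table normal."""
    v = np.asarray(v, dtype=float)
    n = np.asarray(table_normal, dtype=float)
    cos = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Pose stream: input = (gripper depth, approach angle), pc1 widened to 64 units.
pose_stream = nn.Sequential(nn.Linear(2, 64), nn.ReLU())

angle = approach_angle(v=(0.1, 0.0, 1.0))
pose_input = torch.tensor([[0.05, angle]], dtype=torch.float32)
print(pose_stream(pose_input).shape)  # torch.Size([1, 64])
```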
