
Summary of Adversarial Examples Papers

[1] Karpathy's blog post: Breaking Linear Classifiers on ImageNet

http://karpathy.github.io/2015/03/30/breaking-convnets/

 

[2] The paper by Christian Szegedy et al. that first proposed adversarial examples, published at ICLR 2014: Intriguing properties of neural networks

The 3rd paper in my local downloads

 

[3] Ian Goodfellow's paper explaining adversarial examples: Explaining and Harnessing Adversarial Examples

The 5th paper in my local downloads
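This paper introduced the Fast Gradient Sign Method (FGSM): perturb the input by a small step `eps` in the sign direction of the loss gradient with respect to the input. A minimal sketch below, using a toy logistic classifier of my own (the model, variable names, and `eps` value are illustrative assumptions, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM step: move every input dimension by +/-eps in the
    direction that increases the loss (Goodfellow et al., 2015)."""
    return x + eps * np.sign(grad)

# Toy logistic classifier with label y in {-1, +1}.
# Loss L = -log(sigmoid(y * w.x)), so dL/dx = -y * (1 - sigmoid(y * w.x)) * w.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # fixed "trained" weights (illustrative)
x = rng.normal(size=100)   # a clean input
y = 1.0                    # its true label

p = sigmoid(y * w @ x)               # confidence in the true label
grad_x = -y * (1.0 - p) * w          # gradient of the loss w.r.t. x
x_adv = fgsm_perturb(x, grad_x, eps=0.1)
p_adv = sigmoid(y * w @ x_adv)       # confidence drops on the adversarial input
```

For this linear model the margin shrinks by exactly `eps * sum(|w_i|)`, which illustrates the paper's argument that high-dimensional linearity alone is enough to make small per-pixel perturbations change the prediction.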

 

[4] A recent paper from Bengio's group showing that even images captured naturally with a camera exhibit this property: Adversarial examples in the physical world

The 4th paper in my local downloads

 

[5] The paper by Anh Nguyen et al. that first proposed fooling examples, at CVPR 2015: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
https://arxiv.org/pdf/1412.1897.pdf

The 18th paper in my local downloads

 

[6]Delving into Transferable Adversarial Examples and Black-box Attacks

The 17th paper in my local downloads

Study notes on adversarial example transferability and black-box attacks: https://blog.csdn.net/qq_35414569/article/details/82383788