Generative Adversarial Networks (GAN): A Curated Resource Pack
Summary
Generative adversarial networks (GANs) are among the most popular unsupervised learning methods of recent years. First proposed by Goodfellow et al. in 2014, the model combines the game-theoretic idea of a two-player minimax game with generative modeling, cleverly sidestepping the difficult probability-density estimation required by traditional generative models and allowing generative models to achieve strong results. This post collects learning resources on generative adversarial networks; anyone interested is encouraged to work through them.
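To make the adversarial setup concrete before diving into the links, here is a minimal, self-contained training-loop sketch in PyTorch. It is illustrative only: the toy networks, the Gaussian stand-in for "real" data, and every hyperparameter are arbitrary choices of this post, not taken from any paper listed below. The discriminator D learns to tell real samples from generated ones, while the generator G learns to fool D.

```python
# Minimal GAN training sketch (PyTorch). Illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# G maps noise z to fake samples; D outputs the probability that
# its input came from the real data distribution.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in "real" data: a fixed Gaussian. Replace with a real dataset.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(1000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()  # don't backprop into G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(z)) toward 1 (the "non-saturating" loss).
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```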
The GAN Resource Pack
1 Fundamentals
- Prof. Hung-yi Lee's GAN course at National Taiwan University:
  YouTube: https://www.youtube.com/watch?v=DQNNMiAP5lw&index=1&list=PLJV_el3uVTsMq6JEFPW35BCiOQTsoqwNw
  Bilibili: https://www.bilibili.com/video/av24011528?from=search&seid=11459671583323410876
- A beginner's introduction to GANs: the basic principles of GAN in one article
- GANs made simple: an introduction to GAN principles and applications
- PolyU PhD student Li Yanran's accessible introduction to GAN applications:
  https://pan.baidu.com/s/1o8n4UDk (password: 78wt)
- A cute-creature generator: how to make cat images with four kinds of GANs
- A GAN study guide: from first principles to building a generative demo:
  https://zhuanlan.zhihu.com/p/24767059
- Research progress on generative adversarial networks (GAN):
  http://blog.csdn.net/solomon1558/article/details/52537114
2 Talks and Reports
- Ian Goodfellow (Google, the father of GANs), ICCV 2017 talk: the principles and applications of generative adversarial networks
- NIPS 2016 tutorial: Generative Adversarial Networks
- Tips and tricks for training GANs (one of these tricks is sketched right after this list):
  https://github.com/soumith/ganhacks
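As a taste of what the ganhacks collection covers, here is one-sided label smoothing, a trick also described in "Improved Techniques for Training GANs" (listed in the next section): train the discriminator against soft targets (e.g. 0.9) for real samples while keeping fake targets at 0, which discourages an overconfident discriminator. This is a minimal PyTorch sketch; the tiny discriminator and the random stand-in batches are placeholders, not code from the repository.

```python
import torch
import torch.nn as nn

# Placeholder discriminator over 2-D points; any D with a sigmoid output works.
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

real = torch.randn(64, 2)  # stand-in real batch
fake = torch.randn(64, 2)  # stand-in generator output

real_targets = torch.full((64, 1), 0.9)  # smoothed: 0.9 instead of 1.0
fake_targets = torch.zeros(64, 1)        # "one-sided": fake targets stay at 0
loss_d = bce(D(real), real_targets) + bce(D(fake), fake_targets)
loss_d.backward()
```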
3 Papers
- Explaining and Harnessing Adversarial Examples (2014)
  https://arxiv.org/pdf/1412.6572.pdf
- Semi-Supervised Learning with Deep Generative Models (2014)
  https://arxiv.org/pdf/1406.5298v2.pdf
- Conditional Generative Adversarial Nets (2014)
  https://arxiv.org/pdf/1411.1784v1.pdf
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (DCGAN) (2015)
  https://arxiv.org/pdf/1511.06434v2.pdf
- Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks (2015)
  http://papers.nips.cc/paper/5773-deep-generative-image-models-using-a-laplacian-pyramid-of-adversarial-networks.pdf
- Generative Moment Matching Networks (2015)
  http://proceedings.mlr.press/v37/li15.pdf
- Deep multi-scale video prediction beyond mean square error (2015)
  https://arxiv.org/pdf/1511.05440.pdf
- Autoencoding beyond pixels using a learned similarity metric (2015)
  https://arxiv.org/pdf/1512.09300.pdf
- Adversarial Autoencoders (2015)
  https://arxiv.org/pdf/1511.05644.pdf
- Conditional Image Generation with PixelCNN Decoders (2016)
  https://arxiv.org/pdf/1606.05328.pdf
- Training generative neural networks via Maximum Mean Discrepancy optimization (2015)
  https://arxiv.org/pdf/1505.03906.pdf
- Improved Techniques for Training GANs (2016)
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (2016)
  https://arxiv.org/pdf/1606.03657v1.pdf
- Context Encoders: Feature Learning by Inpainting (2016)
- Generative Adversarial Text to Image Synthesis (2016)
  http://proceedings.mlr.press/v48/reed16.pdf
- Adversarial Feature Learning (2016)
  https://arxiv.org/pdf/1605.09782.pdf
- Improving Variational Inference with Inverse Autoregressive Flow (2016)
  https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow.pdf
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples (2016)
  https://arxiv.org/pdf/1602.02697.pdf
- Attend, infer, repeat: Fast scene understanding with generative models (2016)
  https://arxiv.org/pdf/1603.08575.pdf
- f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization (2016)
  https://arxiv.org/pdf/1606.00709.pdf
- Generative Visual Manipulation on the Natural Image Manifold (2016)
  https://arxiv.org/pdf/1609.03552.pdf
- Adversarially Learned Inference (2016)
  https://arxiv.org/pdf/1606.00704.pdf
- Generating images with recurrent adversarial networks (2016)
  https://arxiv.org/pdf/1602.05110.pdf
- Generative Adversarial Imitation Learning (2016)
  http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf
- Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (2016)
  https://arxiv.org/pdf/1610.07584.pdf
- Learning What and Where to Draw (2016)
  https://arxiv.org/pdf/1610.02454v1.pdf
- Conditional Image Synthesis with Auxiliary Classifier GANs (2016)
  https://arxiv.org/pdf/1610.09585.pdf
- Learning in Implicit Generative Models (2016)
  https://arxiv.org/pdf/1610.03483.pdf
- VIME: Variational Information Maximizing Exploration (2016)
  http://papers.nips.cc/paper/6591-vime-variational-information-maximizing-exploration.pdf
- Unrolled Generative Adversarial Networks (2016)
  https://arxiv.org/pdf/1611.02163.pdf
- Neural Photo Editing with Introspective Adversarial Networks (2016)
- On the Quantitative Analysis of Decoder-Based Generative Models (2016)
- Connecting Generative Adversarial Networks and Actor-Critic Methods (2016)
- Learning from Simulated and Unsupervised Images through Adversarial Training (2016)
- Contextual RNN-GANs for Abstract Reasoning Diagram Generation (2016)
- Generative Multi-Adversarial Networks (2016)
- Ensembles of Generative Adversarial Networks (2016)
- Improved generator objectives for GANs (2016)
- Towards Principled Methods for Training Generative Adversarial Networks (2017)
  https://arxiv.org/abs/1701.04862
- Precise Recovery of Latent Vectors from Generative Adversarial Networks (2017)
- Generative Mixture of Networks (2017)
- Generative Temporal Models with Memory (2017)
- AdaGAN: Boosting Generative Models (AdaGAN)
  https://arxiv.org/abs/1701.02386
- Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities (LS-GAN)
  Paper: https://arxiv.org/abs/1701.06264; code: https://github.com/guojunq/glsgan/
- Wasserstein GAN (WGAN; a sketch of its critic update follows this list)
  https://arxiv.org/abs/1701.07875
- Boundary-Seeking Generative Adversarial Networks (BS-GAN)
  https://arxiv.org/abs/1702.08431
- Generative Adversarial Nets with Labeled Data by Activation Maximization (AM-GAN)
- Triple Generative Adversarial Nets (Triple-GAN)
- BEGAN: Boundary Equilibrium Generative Adversarial Networks
  https://arxiv.org/abs/1703.10717
- Improved Training of Wasserstein GANs
  https://arxiv.org/abs/1704.00028
- MAGAN: Margin Adaptation for Generative Adversarial Networks
  https://arxiv.org/abs/1704.03817
- Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking
- Softmax GAN
- Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN
- Flow-GAN: Bridging implicit and prescribed learning in generative models
- Approximation and Convergence Properties of Generative Adversarial Learning
- Towards Consistency of Adversarial Training for Generative Models
- Good Semi-supervised Learning that Requires a Bad GAN
  https://arxiv.org/abs/1705.09783
- On Unifying Deep Generative Models
- DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data
  Paper: http://openaccess.thecvf.com/content_cvpr_2017/papers/Gurumurthy_DeLiGAN__Generative_CVPR_2017_paper.pdf; code: https://github.com/val-iisc/deligan
- Temporal Generative Adversarial Nets With Singular Value Clipping
- Least Squares Generative Adversarial Networks (LSGAN)
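For the Wasserstein GAN entry above, here is a minimal sketch of the critic update from the paper (arXiv:1701.07875): the critic has no sigmoid, its loss is a difference of means, and the 1-Lipschitz constraint is crudely enforced by clipping weights after every step. The paper runs several critic steps per generator step; the generator update is omitted here for brevity. Network shapes and the Gaussian stand-in data are placeholder choices of this post, not from the paper.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # no sigmoid
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)  # RMSProp, as in the paper

for step in range(100):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real" batch
    fake = G(torch.randn(64, 16)).detach()  # don't backprop into G here
    # The critic maximizes E[f(real)] - E[f(fake)]; we minimize the negation.
    loss_c = critic(fake).mean() - critic(real).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # Weight clipping to [-c, c] with c = 0.01 keeps f roughly Lipschitz;
    # "Improved Training of Wasserstein GANs" (also listed above) replaces
    # this with a gradient penalty.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-0.01, 0.01)
```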
4 Hands-On Projects
- A deep convolutional GAN (DCGAN)
- An MNIST generative adversarial model in Keras:
  https://oshearesearch.com/index.php/2016/07/01/mnist-generative-adversarial-model-in-keras/
- Image inpainting with deep learning in TensorFlow
- A TensorFlow implementation of DCGAN
- A Torch implementation of DCGAN
- A Keras implementation of DCGAN (a minimal generator sketch follows this list)
- Generating natural images with neural networks (Facebook's Eyescream project)
- An adversarial autoencoder (AdversarialAutoEncoder)
- Text-to-image synthesis with thought vectors
- An adversarial example generator:
  https://github.com/e-lab/torch-toolbox/tree/master/Adversarial
- Semi-supervised learning with deep generative models
- Techniques for training GANs
- Generative Moment Matching Networks (GMMNs)
- Adversarial video generation
- Image-to-image translation with conditional adversarial networks (pix2pix)
- CleverHans, a library for adversarial machine learning
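Several of the projects above implement DCGAN. As a rough illustration of what such a generator looks like, here is a minimal tf.keras sketch following the design rules from the DCGAN paper listed earlier (transposed convolutions, batch normalization, ReLU inside the generator, tanh output). The filter counts and the 7x7 seed, sized for 28x28 MNIST-like images, are placeholder choices of this post, not taken from any repository above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    # Project and reshape the noise vector to a 7x7 feature map, then
    # upsample with fractionally-strided convolutions: 7x7 -> 14x14 -> 28x28.
    return keras.Sequential([
        keras.Input(shape=(latent_dim,)),
        layers.Dense(7 * 7 * 128),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="same",
                               activation="tanh"),  # pixel values in [-1, 1]
    ])

generator = build_generator()
generator.summary()
```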
5 Further Resources
- A Zhuanzhi (專知) compendium on generative adversarial networks (GAN)
- The GAN Zoo: the full menagerie of GAN variants, all in one place (already close to a hundred of them)
- A collection of GAN resources
- A collection of GAN use cases in medicine
- A collection of GAN applications
- A roundup of cutting-edge GAN progress (papers, talks, frameworks, and GitHub resources):
  http://blog.csdn.net/love666666shen/article/details/74953970