
Semi-Supervised Learning (Based on Generative Models)

1.Classification of the main deep semi-supervised learning methods by loss function and model design

2.Semi-supervised GANs

[1] L. Schoneveld, “Semi-supervised learning with generative adversarial networks,” Ph.D. dissertation, 2017.
[2] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, “Generative adversarial nets,” in NIPS, 2014, pp. 2672–2680.
[3] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” in ICLR, 2016.
[4] J. T. Springenberg, “Unsupervised and semi-supervised learning with categorical generative adversarial networks,” in ICLR, 2016.
[5] E. L. Denton, S. Gross, and R. Fergus, “Semi-supervised learning with context-conditional generative adversarial networks,” CoRR, vol. abs/1611.06430, 2016.
[6] A. Odena, “Semi-supervised learning with generative adversarial networks,” CoRR, vol. abs/1606.01583, 2016.
[7] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” in NIPS, 2016, pp. 2226–2234.
[8] Z. Dai, Z. Yang, F. Yang, W. W. Cohen, and R. Salakhutdinov, “Good semi-supervised learning that requires a bad GAN,” in NIPS, 2017, pp. 6510–6520.
[9] G. Qi, L. Zhang, H. Hu, M. Edraki, J. Wang, and X. Hua, “Global versus localized generative adversarial nets,” in CVPR. IEEE Computer Society, 2018, pp. 1517–1525.
[10] X. Wei, B. Gong, Z. Liu, W. Lu, and L. Wang, “Improving the improved training of Wasserstein GANs: A consistency term and its dual effect,” in ICLR. OpenReview.net, 2018.
[11] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” CoRR, vol. abs/1701.07875, 2017.
[12] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of Wasserstein GANs,” in NIPS, 2017, pp. 5767–5777.
[13] J. Donahue, P. Krähenbühl, and T. Darrell, “Adversarial feature learning,” in ICLR. OpenReview.net, 2017.
[14] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. C. Courville, “Adversarially learned inference,” in ICLR. OpenReview.net, 2017.
[15] A. Kumar, P. Sattigeri, and T. Fletcher, “Semi-supervised learning with GANs: Manifold invariance with improved inference,” in NIPS, 2017, pp. 5534–5544.
[16] C. Li, T. Xu, J. Zhu, and B. Zhang, “Triple generative adversarial nets,” in NIPS, 2017, pp. 4088–4098.
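
The core recipe shared by several of the papers above (e.g., Odena [6] and Salimans et al. [7]) is to turn the discriminator into a (K+1)-class classifier: labeled real data supervise the first K classes, while unlabeled real data and generator samples provide a real-vs-fake signal. Below is a minimal PyTorch-style sketch of that discriminator loss, not taken from any of the papers' official code; the function and variable names (`ssl_gan_d_loss`, `logits_lab`, etc.) are placeholders of my own, and the trick of fixing the implicit "fake" logit at 0 follows Salimans et al. [7].

```python
# Minimal sketch (my own, not official code) of the (K+1)-class
# semi-supervised GAN discriminator loss in the spirit of [6] and [7].
import torch
import torch.nn.functional as F

def ssl_gan_d_loss(logits_lab, y_lab, logits_unl, logits_fake):
    """logits_* are (batch, K) class logits from the discriminator;
    y_lab holds the integer labels of the labeled batch."""
    # Supervised term: ordinary K-class cross-entropy on the labeled batch.
    loss_supervised = F.cross_entropy(logits_lab, y_lab)

    # Real/fake score via log-sum-exp of the K class logits; the implicit
    # (K+1)-th "fake" logit is fixed at 0, so D(x) = Z(x) / (Z(x) + 1)
    # with Z(x) = exp(logsumexp(logits)).
    lse_unl = torch.logsumexp(logits_unl, dim=1)
    lse_fake = torch.logsumexp(logits_fake, dim=1)

    # -log D(x) for real unlabeled data, -log(1 - D(G(z))) for generated data.
    loss_unl_real = torch.mean(F.softplus(lse_unl) - lse_unl)
    loss_unl_fake = torch.mean(F.softplus(lse_fake))

    return loss_supervised + loss_unl_real + loss_unl_fake
```

In [7] the generator is then trained with feature matching against an intermediate discriminator layer rather than with the standard adversarial loss; that part, and the networks themselves, are omitted from this sketch.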

3.Semi-supervised VAE

[1] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in ICLR, 2014.
[2] D. J. Rezende, S. Mohamed, and D. Wierstra, “Stochastic backpropagation and approximate inference in deep generative models,” in ICML, ser. JMLR Workshop and Conference Proceedings, vol. 32. JMLR.org, 2014, pp. 1278–1286.
[3] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, “Semi-supervised learning with deep generative models,” in NIPS, 2014, pp. 3581–3589.
[4] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther, “Auxiliary deep generative models,” in ICML, ser. JMLR Workshop and Conference Proceedings, vol. 48. JMLR.org, 2016, pp. 1445–1453.
[5] M. E. Abbasnejad, A. R. Dick, and A. van den Hengel, “Infinite variational autoencoder for semi-supervised learning,” in CVPR. IEEE Computer Society, 2017, pp. 781–790.
[6] S. Narayanaswamy, B. Paige, J. van de Meent, A. Desmaison, N. D. Goodman, P. Kohli, F. D. Wood, and P. H. S. Torr, “Learning disentangled representations with semi-supervised deep generative models,” in NIPS, 2017, pp. 5925–5935.
[7] J. Schulman, N. Heess, T. Weber, and P. Abbeel, “Gradient estimation using stochastic computation graphs,” in NIPS, 2015, pp. 3528–3536.
[8] Y. Li, Q. Pan, S. Wang, H. Peng, T. Yang, and E. Cambria, “Disentangled variational auto-encoder for semi-supervised learning,” Inf. Sci., vol. 482, pp. 73–85, 2019.
[9] T. Joy, S. M. Schmon, P. H. S. Torr, N. Siddharth, and T. Rainforth, “Rethinking semi-supervised learning in VAEs,” CoRR, vol. abs/2006.10102, 2020.
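
For orientation, the objective that most of the VAE-based methods above build on is the M2 model of Kingma et al. [3]: a classifier $q_\phi(y\mid x)$, an encoder $q_\phi(z\mid x,y)$, and a decoder $p_\theta(x\mid y,z)$ trained jointly. The equations below restate that objective; $\mathcal{D}_l$ and $\mathcal{D}_u$ are my shorthand for the labeled and unlabeled sets (the paper writes these as expectations over the empirical distributions), and $\alpha$ weights the explicit classification term.

$$
-\mathcal{L}(x,y)=\mathbb{E}_{q_\phi(z\mid x,y)}\bigl[\log p_\theta(x\mid y,z)+\log p_\theta(y)+\log p(z)-\log q_\phi(z\mid x,y)\bigr]
$$

$$
-\mathcal{U}(x)=\sum_{y} q_\phi(y\mid x)\bigl(-\mathcal{L}(x,y)\bigr)+\mathcal{H}\bigl(q_\phi(y\mid x)\bigr)
$$

$$
\mathcal{J}^{\alpha}=\sum_{(x,y)\in\mathcal{D}_l}\mathcal{L}(x,y)+\sum_{x\in\mathcal{D}_u}\mathcal{U}(x)+\alpha\sum_{(x,y)\in\mathcal{D}_l}\bigl[-\log q_\phi(y\mid x)\bigr]
$$

Labeled pairs use the ELBO $-\mathcal{L}(x,y)$ directly, while for unlabeled data the unknown $y$ is marginalized out through $q_\phi(y\mid x)$; this is how the classifier receives a training signal from unlabeled examples, with $\alpha$ additionally pushing it to fit the labeled data.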

Classic papers for the other categories of methods will continue to be added in future updates!!!