
TensorFlow (Part 2): Single-Machine Multi-GPU Distributed Training

Setting up distributed training:

# Compute the losses:
import tensorflow as tf  # TF 1.x graph-mode API

# D_logit_real / D_logit_fake are the discriminator outputs, and
# theta_D / theta_G are the discriminator / generator variable lists,
# all defined earlier when building the GAN graph.
with tf.device('/gpu:0'):
    # Discriminator loss: classify real samples as 1, generated samples as 0.
    D_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logit_real,
                                                                         labels=tf.ones_like(D_logit_real)))
    D_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logit_fake,
                                                                         labels=tf.zeros_like(D_logit_fake)))
    D_loss = D_fake_loss + D_real_loss
    D_solver = tf.train.AdamOptimizer().minimize(D_loss, var_list=theta_D)
with tf.device('/gpu:1'):
    # Generator loss: fool the discriminator into outputting 1 for fakes.
    G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logit_fake,
                                                                    labels=tf.ones_like(D_logit_fake)))
    G_solver = tf.train.AdamOptimizer().minimize(G_loss, var_list=theta_G)
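For the device pins above to run without errors when an op has no kernel for the pinned GPU, it helps to create the session with soft placement enabled; log_device_placement additionally prints which device each op actually lands on. A minimal sketch using the standard tf.ConfigProto options (the training loop itself is assumed to follow the usual alternating GAN pattern):

# Session setup for manual device placement (TF 1.x):
# allow_soft_placement lets TensorFlow fall back to another device when an
# op cannot run on the pinned GPU; log_device_placement logs the final
# placement of every op when the session is created.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    # ... alternate sess.run(D_solver, ...) and sess.run(G_solver, ...) ...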

Distributed training results:

When distributed training is not set up, all four GPUs are used by default, and the result is that training time is actually a bit shorter (wry smile).
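Part of why the default run touches all four cards is that TF 1.x grabs the memory of every visible GPU at session creation, even when ops are pinned to only two of them. To control which cards TensorFlow can see at all, the standard CUDA_VISIBLE_DEVICES environment variable can be set before the graph is built; a sketch (the value '0,1' is just an example):

import os

# Expose only GPUs 0 and 1 to TensorFlow; set this before importing
# TensorFlow / building the graph so CUDA never initializes the others.
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import tensorflow as tf  # inside the process, these appear as /gpu:0 and /gpu:1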