
Testing GPU acceleration on Ubuntu 16.04

  • Test 1: let the system assign devices automatically
#-*- coding:utf-8 -*-
import tensorflow as tf
# Build a new graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Create a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Run the op.
print(sess.run(c))

You should see output like the following:

Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/gpu:0
a: /job:localhost/replica:0/task:0/gpu:0
MatMul: /job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]
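Before relying on automatic placement, it is worth confirming that TensorFlow can actually see the GPU at all. The following is a minimal sketch, assuming TensorFlow 1.x, that lists the devices visible to the local process and checks whether a GPU kernel can be used:

#-*- coding:utf-8 -*-
import tensorflow as tf
from tensorflow.python.client import device_lib

# Print every device TensorFlow can see (CPU and, if CUDA is configured, GPU).
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)

# True if a GPU can be used; False means all ops will fall back to the CPU.
print('GPU available:', tf.test.is_gpu_available())

If no /gpu:0 device appears here, the tests above will silently run on the CPU even though they still print a device mapping.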
  • Test 2: assign devices manually
#-*- coding:utf-8 -*-
import tensorflow as tf
# Build a new graph, pinning the constants to the CPU.
with tf.device('/cpu:0'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Create a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Run the op.
print(sess.run(c))

You should see output like the following: operations a and b are now placed on cpu:0, while MatMul still runs on gpu:0.

Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/cpu:0
a: /job:localhost/replica:0/task:0/cpu:0
MatMul: /job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]
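When pinning ops to devices by hand, it is usually safer to also enable soft placement, so the same graph still runs on a machine without a GPU, and to let the session allocate GPU memory on demand instead of grabbing it all at once. A minimal sketch, assuming TensorFlow 1.x; the options used are standard ConfigProto fields:

#-*- coding:utf-8 -*-
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)

config = tf.ConfigProto(
    allow_soft_placement=True,   # fall back to another device if the requested one is missing
    log_device_placement=True)   # still log where each op ends up
config.gpu_options.allow_growth = True  # allocate GPU memory on demand

sess = tf.Session(config=config)
print(sess.run(c))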
  • Test 3: run a small TensorFlow program. The code is on Baidu Cloud; a stand-in sketch follows below.
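Since the original program is only distributed via Baidu Cloud, the sketch below is a hypothetical stand-in, not the author's code: a tiny linear-regression fit in TensorFlow 1.x whose device placement is logged the same way as in the tests above.

#-*- coding:utf-8 -*-
import numpy as np
import tensorflow as tf

# Synthetic data: y = 0.1 * x + 0.3 plus a fixed slope/intercept to recover.
x_data = np.random.rand(100).astype(np.float32)
y_data = 0.1 * x_data + 0.3

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
b = tf.Variable(tf.zeros([1]), name='b')
y = W * x_data + b

loss = tf.reduce_mean(tf.square(y - y_data))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# log_device_placement shows whether the kernels land on the GPU.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(201):
        sess.run(train_op)
        if step % 50 == 0:
            print(step, sess.run(W), sess.run(b))

If the GPU is working, the placement log should show the multiply/add kernels assigned to gpu:0, and W and b should converge towards 0.1 and 0.3.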
