Configuring TensorFlow's Compute Devices
阿新 • Published: 2018-11-10
Note: this article is a translation of the official guide "Using GPUs".
TensorFlow ops can run on the CPU or on the GPU. To see which device each op is assigned to, set log_device_placement:
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
This produces the following output, showing that my ops were placed on the CPU:
MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:27:31.185055: I tensorflow/core/common_runtime/simple_placer.cc:834] MatMul: (MatMul)/job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:27:31.185445: I tensorflow/core/common_runtime/simple_placer.cc:834] b: (Const)/job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:27:31.185854: I tensorflow/core/common_runtime/simple_placer.cc:834] a: (Const)/job:localhost/replica:0/task:0/cpu:0
[[22 28]
[49 64]]
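The log lines above follow TensorFlow's device-name format, /job:&lt;name&gt;/replica:&lt;id&gt;/task:&lt;id&gt;/&lt;device&gt;:&lt;id&gt;. To make the structure explicit, here is a small illustrative parser (a hypothetical helper, not part of the original article or the TensorFlow API):

```python
def parse_device(name):
    # Split a device string such as "/job:localhost/replica:0/task:0/cpu:0"
    # into its key:value components.
    parts = [p for p in name.split('/') if p]
    result = {}
    for p in parts:
        key, _, value = p.partition(':')
        result[key] = value
    return result

print(parse_device('/job:localhost/replica:0/task:0/cpu:0'))
# {'job': 'localhost', 'replica': '0', 'task': '0', 'cpu': '0'}
```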
How do you assign ops to a device yourself? Use with tf.device('...'). Note that this selects a CPU device, not a CPU core:
# Creates a graph.
with tf.device('/cpu:0'):
  a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
  b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
The output is:
MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:49:52.835533: I tensorflow/core/common_runtime/simple_placer.cc:834] MatMul: (MatMul)/job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:49:52.835888: I tensorflow/core/common_runtime/simple_placer.cc:834] b: (Const)/job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-09-20 16:49:52.836294: I tensorflow/core/common_runtime/simple_placer.cc:834] a: (Const)/job:localhost/replica:0/task:0/cpu:0
[[22 28]
[49 64]]
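As a sanity check, the printed result is just an ordinary 2×3 by 3×2 matrix product. A minimal pure-Python computation (no TensorFlow required) reproduces it:

```python
# Reproduce the matmul from the example in plain Python.
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
b = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]
# c[i][j] is the dot product of row i of a with column j of b.
c = [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(c)  # [[22.0, 28.0], [49.0, 64.0]]
```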
By default, when the GPU is used for computation, TensorFlow grabs nearly all of its memory. How can you control how much GPU memory is allocated? (There is no such option for the CPU.) There are two ways:
1. Allocate a small amount first, then let it grow as needed:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
2. Set a fixed fraction of the GPU memory:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)
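For reference, TensorFlow 2.x exposes the same two controls through tf.config rather than ConfigProto. A sketch, assuming a TF 2.x install (this is not part of the original article):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Option 1: start with a small allocation and grow on demand.
    # Must be called before the GPU is initialized.
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # Option 2 (alternative, cannot be combined with memory growth on the
    # same GPU): cap the device at a fixed memory budget in MB.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
```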
When multiple GPUs are available, how do you use a subset of them for computation?
# Creates a graph.
c = []
for d in ['/gpu:2', '/gpu:3']:
  with tf.device(d):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
    c.append(tf.matmul(a, b))
with tf.device('/cpu:0'):
  sum = tf.add_n(c)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(sum))
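Since both GPUs compute the same product, the tf.add_n on the CPU simply sums the two identical per-device results element-wise. A plain-Python check of what that final reduction produces (no GPU needed):

```python
# Each of the two devices computes the same [[22, 28], [49, 64]] product;
# tf.add_n then sums the per-device results element-wise on the CPU.
per_device = [[[22.0, 28.0], [49.0, 64.0]] for _ in range(2)]
total = [[sum(m[i][j] for m in per_device) for j in range(2)]
         for i in range(2)]
print(total)  # [[44.0, 56.0], [98.0, 128.0]]
```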