[Learn a Little Every Day] TensorFlow 2.x runtime error: Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
阿新 · Published: 2020-07-30
Probably you're running out of GPU memory.
If you're using TensorFlow 1.x:
1st option) Set allow_growth to true.
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
2nd option) Set a per-process memory fraction.
# change the memory fraction as you want
import tensorflow as tf
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
If you're using TensorFlow 2.x:
1st option) Set memory growth to true via set_memory_growth.
# Currently the 'memory growth' option should be the same for all GPUs.
# You should set the 'memory growth' option before initializing GPUs.
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)
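TensorFlow 2.x also exposes the memory-growth switch through an environment variable, which is handy when you cannot edit the script itself. A minimal sketch (train.py is a hypothetical entry point, not from the original post):

```shell
# Enable cuDNN-friendly on-demand GPU memory allocation without code changes.
export TF_FORCE_GPU_ALLOW_GROWTH=true
# python train.py   # train.py is a hypothetical training script
```

This has the same effect as calling set_memory_growth(gpu, True) for every GPU at startup.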
2nd option) Set memory_limit as you want. Just change the GPU index and the memory_limit value in the code below.
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        print(e)
Solution used:
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)
Problem solved.
Reference: https://stackoverflow.com/questions/48610132/tensorflow-crash-with-cudnn-status-alloc-failed