
NumPy arrays and TensorFlow Tensors: differences and connections


1. Characteristics of Tensors

  • Tensors can be backed by accelerator memory (such as GPU or TPU).
  • Tensors are immutable.
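The immutability point is easy to see in code. A minimal sketch, assuming TensorFlow 2.x with eager execution: a NumPy array can be modified in place, while item assignment on a Tensor fails, and Tensor operations always produce a new Tensor.

```python
import numpy as np
import tensorflow as tf

a = np.array([1, 2, 3])
a[0] = 99                # NumPy arrays are mutable in place
print(a)                 # [99  2  3]

t = tf.constant([1, 2, 3])
try:
    t[0] = 99            # Tensors do not support item assignment
except TypeError as e:
    print("immutable:", e)

t2 = t + 1               # operations return a new Tensor instead
print(t2)
```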

2. Bidirectional conversion

  • TensorFlow operations automatically convert NumPy ndarrays to Tensors.
  • NumPy operations automatically convert Tensors to NumPy ndarrays.
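A short sketch of this two-way interop, assuming TensorFlow 2.x with eager execution:

```python
import numpy as np
import tensorflow as tf

nd = np.ones([3, 3])

# A TensorFlow op applied to an ndarray: the ndarray is converted to a Tensor
t = tf.multiply(nd, 42)
print(type(t))       # a tf.Tensor (EagerTensor)

# A NumPy op applied to a Tensor: the Tensor is converted to an ndarray
back = np.add(t, 1)
print(type(back))    # numpy.ndarray
```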

3. The cost of conversion

Tensors can be explicitly converted to NumPy ndarrays by invoking the .numpy() method on them. These conversions are typically cheap, as the array and Tensor share the underlying memory representation if possible. However, sharing the underlying representation isn't always possible, since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory; in that case the conversion involves a copy from GPU to host memory.
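For example, a sketch assuming TensorFlow 2.x (on a CPU-only machine no device copy is involved):

```python
import numpy as np
import tensorflow as tf

t = tf.ones([2, 2])
arr = t.numpy()              # explicit Tensor -> ndarray conversion
print(type(arr), arr.dtype)  # a float32 numpy.ndarray

# If t lived on a GPU, .numpy() would copy device memory back to the
# host; on CPU the array can share the Tensor's underlying buffer.
```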

4. Detecting and selecting a GPU when using Tensors

x = tf.random_uniform([3, 3])

print("Is there a GPU available:")
print(tf.test.is_gpu_available())

print("Is the Tensor on GPU #0:")
print(x.device.endswith('GPU:0'))

print(tf.test.is_built_with_cuda())
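Note that tf.random_uniform and tf.test.is_gpu_available are TensorFlow 1.x APIs. A sketch of the TensorFlow 2.x equivalents, assuming TF 2.x is installed:

```python
import tensorflow as tf

x = tf.random.uniform([3, 3])                  # replaces tf.random_uniform

gpus = tf.config.list_physical_devices("GPU")  # replaces tf.test.is_gpu_available
print("Is there a GPU available:", len(gpus) > 0)
print("Is the Tensor on GPU #0:", x.device.endswith("GPU:0"))
print("Built with CUDA:", tf.test.is_built_with_cuda())
```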

5. Explicitly specifying the device to run on (CPU or GPU)

import time

def time_matmul(x):
    start = time.time()
    for loop in range(10):
        tf.matmul(x, x)
    result = time.time() - start
    print("10 loops: {:0.2f}ms".format(1000 * result))


# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
    x = tf.random_uniform([900, 900])
    assert x.device.endswith("CPU:0")
    time_matmul(x)

# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
    with tf.device("GPU:0"):  # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
        x = tf.random_uniform([1000, 1000])
        assert x.device.endswith("GPU:0")
        time_matmul(x)
