TensorFlow reduce-family functions (tf.reduce_mean, tf.reduce_sum, tf.reduce_prod, tf.reduce_max, tf.reduce_min)
In short, the reduce family of functions all perform their operation along specified dimensions of a tensor.
Input parameters
tf.reduce_all/reduce_any/reduce_max/reduce_min/reduce_mean/reduce_sum/reduce_prod/reduce_logsumexp(
input_tensor,
axis=None,
keepdims=None,
name=None,
reduction_indices=None,
keep_dims=None
)
- input_tensor: the tensor to reduce.
- axis: the dimensions to reduce. If None (the default), all dimensions are reduced (see the shape sketch after this list).
- keepdims: if true, retains the reduced dimensions with length 1.
- name: a name for the operation (optional).
- reduction_indices: the old, deprecated name for axis (kept for compatibility).
- keep_dims: the deprecated alias for keepdims (kept for compatibility).
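A minimal sketch of how axis and keepdims change the result shape (assuming TensorFlow 1.x; under eager execution the shapes are the same):
import tensorflow as tf
x = tf.constant([[1., 2., 3.], [4., 5., 6.]])    # x.shape == (2, 3)
tf.reduce_sum(x).shape                           # ()     -- every axis reduced to a scalar
tf.reduce_sum(x, axis=0).shape                   # (3,)   -- axis 0 (rows) collapsed
tf.reduce_sum(x, axis=1, keepdims=True).shape    # (2, 1) -- reduced axis kept with length 1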
tf.reduce_all computes the "logical AND" of elements across the dimensions of a boolean tensor
Reduces input_tensor (a boolean Tensor) along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis; if keepdims is true, the reduced dimensions are retained with length 1.
If axis is None, all dimensions are reduced and a tensor with a single element is returned.
x = tf.constant([[True, True], [False, False]])
tf.reduce_all(x) # False
tf.reduce_all(x, 0) # [False, False]
tf.reduce_all(x, 1) # [True, False]
tf.reduce_any computes the "logical OR" of elements across the dimensions of a boolean tensor
x = tf.constant([[True, True], [False, False]])
tf.reduce_any(x)     # True
tf.reduce_any(x, 0)  # [True, True]
tf.reduce_any(x, 1)  # [True, False]
tf.reduce_max computes the maximum of elements across the dimensions of a tensor
x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.reduce_max(x) # 6
tf.reduce_max(x, 0) # [4, 5, 6]
tf.reduce_max(x, 1) # [3, 6]
tf.reduce_max(x, 1, keepdims=True) # [[3],
# [6]]
tf.reduce_max(x, [0, 1]) # 6
tf.reduce_min computes the minimum of elements across the dimensions of a tensor
x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.reduce_min(x) # 1
tf.reduce_min(x, 0) # [1, 2, 3]
tf.reduce_min(x, 1) # [1, 4]
tf.reduce_min(x, 1, keepdims=True) # [[1],
# [4]]
tf.reduce_min(x, [0, 1]) # 1
tf.reduce_mean computes the mean of elements across the dimensions of a tensor
x = tf.constant([[1., 2., 3.], [4., 5., 6.]])
tf.reduce_mean(x) # 3.5
tf.reduce_mean(x, 0) # [2.5, 3.5, 4.5]
tf.reduce_mean(x, 1) # [2., 5.]
tf.reduce_mean(x, 1, keepdims=True) # [[2.],
# [5.]]
tf.reduce_mean(x, [0, 1]) # 3.5
Be careful about data type compatibility: the result dtype follows the input dtype, so the mean of an integer tensor is computed (and truncated) in integer arithmetic; a cast fixes this, as sketched after the example.
x = tf.constant([1, 0, 1, 0])
tf.reduce_mean(x) # 0
y = tf.constant([1., 0., 1., 0.])
tf.reduce_mean(y) # 0.5
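One simple fix, sketched below, is to cast to a floating dtype before taking the mean so the division is no longer integer division:
x = tf.constant([1, 0, 1, 0])
tf.reduce_mean(tf.cast(x, tf.float32))  # 0.5 -- mean computed in float32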
tf.reduce_sum computes the sum of elements across the dimensions of a tensor
x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.reduce_sum(x) # 21
tf.reduce_sum(x, 0) # [5, 7, 9]
tf.reduce_sum(x, 1) # [6, 15]
tf.reduce_sum(x, 1, keepdims=True) # [[ 6],
# [15]]
tf.reduce_sum(x, [0, 1]) # 21
tf.reduce_prod computes the product of elements across the dimensions of a tensor
x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.reduce_prod(x) # 720
tf.reduce_prod(x, 0) # [4, 10, 18]
tf.reduce_prod(x, 1) # [6, 120]
tf.reduce_prod(x, 1, keepdims=True) # [[ 6],
# [120]]
tf.reduce_prod(x, [0, 1]) # 720
tf.reduce_logsumexp computes log(sum(exp(elements across the dimensions of the tensor)))
x = tf.constant([[0., 0., 0.], [0., 0., 0.]])
tf.reduce_logsumexp(x) # log(6)
tf.reduce_logsumexp(x, 0) # [log(2), log(2), log(2)]
tf.reduce_logsumexp(x, 1) # [log(3), log(3)]
tf.reduce_logsumexp(x, 1, keepdims=True) # [[log(3)], [log(3)]]
tf.reduce_logsumexp(x, [0, 1]) # log(6)
This function is more numerically stable than computing log(sum(exp(input))) directly. It avoids the overflow caused by exponentiating large inputs and the underflow caused by taking the log of small inputs.
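A small sketch of that stability (the value 1000. is just an illustrative large input; tf.log is the TF 1.x name, tf.math.log in TF 2.x):
x = tf.constant([1000., 1000., 1000.])
tf.log(tf.reduce_sum(tf.exp(x)))   # inf  -- exp(1000.) overflows float32
tf.reduce_logsumexp(x)             # ~1001.0986, i.e. 1000 + log(3), computed without overflow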
tf.reduce_join joins a string Tensor across the given dimensions
tf.reduce_join(
inputs,
axis=None,
keep_dims=False,
separator='',
name=None,
reduction_indices=None
)
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) # ==> ["ac", "bd"]
tf.reduce_join(a, 1) # ==> ["ab", "cd"]
tf.reduce_join(a, -2) # = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) # = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) # ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) # ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") # ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) # ==> "acbd"
tf.reduce_join(a, [1, 0]) # ==> "abcd"
tf.reduce_join(a, []) # ==> [["a", "b"], ["c", "d"]]
tf.reduce_join(a) # = tf.reduce_join(a, [1, 0]) ==> "abcd"
Computes the string join across dimensions of a string tensor with the given shape. Returns a new Tensor created by joining the input strings with the given separator (default: the empty string). A negative axis counts back from the end, so -1 is equivalent to n - 1.
- separator: an optional string, defaulting to "". The separator to use when joining (see the sketch below).
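As a quick illustration of combining a multi-axis reduction with separator (reusing the example tensor `a` from above), joining first along axis 1 and then axis 0 with a space produces one flat string:
tf.reduce_join(a, [1, 0], separator=" ")  # ==> "a b c d"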