tf.group, tf.tuple, tf.control_dependencies, RandomForest

tf.group

tf.group(
    *inputs,
    **kwargs
)

This function combines the operations passed to it into a single group op. The arguments to tf.group are individual operations, not a list (which is exactly why inputs carries the * prefix; note they must be passed one by one). Return: a single op. When that op runs, all of its input operations finish; since a group op produces no output value, fetching it with Session.run yields None.

tf.tuple

tf.tuple(
    tensors,
    name=None,
    control_inputs=None
)

Parameter description: tensors: a list of tensors. name: (optional) a name for the operation. control_inputs: additional operations that must finish before the result is returned. Return: a list containing the same tensors as the input list, where each returned tensor only becomes available after every op in tensors (and in control_inputs) has run.

Code example for tf.group and tf.tuple

import tensorflow as tf

w = tf.Variable(1)
mul = tf.multiply(w, 2)
add = tf.add(w, 2)
group = tf.group(mul, add)     # note: the ops are passed one by one
tuple_ = tf.tuple([mul, add])  # note: the ops are passed as a list (renamed to avoid shadowing the built-in tuple)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run([mul, add, group, w]))
print(sess.run(tuple_))

Output:

[2, 3, None, 1]
[2, 3]
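
The control_inputs argument of tf.tuple is not exercised above. A minimal sketch (the variable names are illustrative) showing that the ops listed in control_inputs run before the returned tensors become available:

import tensorflow as tf

x = tf.Variable(0.0)
inc = tf.assign_add(x, 1)  # op that increments x as a side effect
a = tf.constant(10.0)

# inc must finish before the returned tensor is produced
out = tf.tuple([a * 2], control_inputs=[inc])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out))  # [20.0]
    print(sess.run(x))    # 1.0, because inc ran as a prerequisite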

tf.identity()

This is an assignment-like operation; note that it is an op. y = tf.identity(x) produces the same value as y = x, but the former is an operation and appears as a node in the graph, while the latter is a plain Python assignment and adds nothing to the graph.
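
A minimal sketch of the difference (the names are illustrative):

import tensorflow as tf

x = tf.constant(3)
y_ref = x              # plain Python assignment: y_ref is the very same tensor
y_op = tf.identity(x)  # adds an Identity op to the graph whose output equals x

print(y_ref is x)  # True: just an alias, no new graph node
print(y_op is x)   # False: a distinct tensor produced by a new op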

tf.control_dependencies():

tf.control_dependencies() is designed to control the computation graph: it imposes an execution order on selected computations in the graph. For example, if we want to read a parameter's value after it has been updated, we can organize our code as follows.

To spell it out:

The meaning of control_dependencies is: before the statements inside the with block run, the operations listed in control_dependencies (here, opt) run first. The body must create an op, otherwise it does not work; here that op is updated_weight = tf.identity(weight).

opt = tf.train.GradientDescentOptimizer(0.01).minimize(loss)  # any concrete optimizer works here
with tf.control_dependencies([opt]):  # note the syntax: a list of ops
    updated_weight = tf.identity(weight)
with tf.Session() as sess:
    tf.global_variables_initializer().run()  # this form only works inside the with block, which supplies the default session
    sess.run(updated_weight, feed_dict={...})  # every fetch now returns the updated weight
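
A complete runnable sketch of the same pattern, assuming a toy model (weight, loss, and the data values below are illustrative only):

import tensorflow as tf

weight = tf.Variable(2.0)
x = tf.placeholder(tf.float32)
loss = tf.square(weight * x - 1.0)

opt = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
with tf.control_dependencies([opt]):
    # tf.identity creates an op, so the control dependency takes effect:
    # fetching updated_weight first runs opt, then reads weight
    updated_weight = tf.identity(weight)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(updated_weight, feed_dict={x: 1.0}))  # prints the weight after each update step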

When what the with tf.control_dependencies() block contains is not an op:

x = tf.Variable(0.0)

x_plus_1 = tf.assign_add(x, 1)  # adds an op to the graph that increments x by 1

with tf.control_dependencies([x_plus_1]):
    y = x  # this is merely a Python assignment, not an op, so the dependency never fires
    
init = tf.global_variables_initializer()
with tf.Session() as session:
    init.run()
    for i in range(5):
        print(y.eval())

Output:

0.0
0.0
0.0
0.0
0.0

When what the with tf.control_dependencies() block contains is an op:

x = tf.Variable(0.0)

x_plus_1 = tf.assign_add(x, 1)  # adds an op to the graph that increments x by 1

with tf.control_dependencies([x_plus_1]):
    y = x
    op1 = tf.group(y)  # tf.group creates an op, so the control dependency attaches to it

init = tf.global_variables_initializer()
with tf.Session() as session:
    init.run()
    for i in range(5):
        session.run(op1)  # op1 is an op, so running it first runs x_plus_1, adding 1 to x
        print(y.eval())   # x has already been incremented, so y reads the updated value; note that evaluating y does not run x_plus_1 again

Output:

1.0
2.0
3.0
4.0
5.0
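
Equivalently, tying back to the tf.identity section above: creating the op with tf.identity inside the block makes fetching y itself trigger x_plus_1. A sketch:

import tensorflow as tf

x = tf.Variable(0.0)
x_plus_1 = tf.assign_add(x, 1)

with tf.control_dependencies([x_plus_1]):
    y = tf.identity(x)  # an op, so the dependency on x_plus_1 attaches to it

init = tf.global_variables_initializer()
with tf.Session() as session:
    init.run()
    for i in range(5):
        print(y.eval())  # prints 1.0, 2.0, 3.0, 4.0, 5.0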

Implementing RandomForest in TensorFlow

# -*- encoding:utf-8 -*-
import tensorflow as tf
from tensorflow.python.ops import resources
from tensorflow.contrib.tensor_forest.python import tensor_forest

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("../../mnist_data/", one_hot=False)

num_steps=500
batch_size=1024
num_features=784
num_trees=10
num_classes = 10 
max_nodes = 1000

X = tf.placeholder(tf.float32, shape=[None, num_features])
Y = tf.placeholder(tf.int32, shape=[None])


# Random Forest hyperparameters
hparams = tensor_forest.ForestHParams(num_classes=num_classes,
                                      num_features=num_features,
                                      num_trees=num_trees,
                                      max_nodes=max_nodes).fill()

forest_graph = tensor_forest.RandomForestGraphs(hparams)
train_op = forest_graph.training_graph(X, Y)
loss_op = forest_graph.training_loss(X, Y)

infer_op, _, _ = forest_graph.inference_graph(X)
correct_prediction = tf.equal(tf.argmax(infer_op, 1), tf.cast(Y, tf.int64))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize both ordinary variables and the forest's shared tree resources in one op
init_vars = tf.group(tf.global_variables_initializer(),
                     resources.initialize_resources(resources.shared_resources()))
sess = tf.Session()
sess.run(init_vars)


# Training
for i in range(1, num_steps + 1):
    # Prepare Data
    # Get the next batch of MNIST data (only images are needed, not labels)
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    _, l = sess.run([train_op, loss_op], feed_dict={X: batch_x, Y: batch_y})
    if i % 50 == 0 or i == 1:
        acc = sess.run(accuracy_op, feed_dict={X: batch_x, Y: batch_y})
        print('Step %i, Loss: %f, Acc: %f' % (i, l, acc))

# Test Model
test_x, test_y = mnist.test.images, mnist.test.labels
print("Test Accuracy:", sess.run(accuracy_op, feed_dict={X: test_x, Y: test_y}))

Output:

Step 1, Loss: -1.000000, Acc: 0.431641
Step 50, Loss: -254.800003, Acc: 0.870117
Step 100, Loss: -539.599976, Acc: 0.881836
Step 150, Loss: -829.599976, Acc: 0.911133
Step 200, Loss: -1001.000000, Acc: 0.921875
Step 250, Loss: -1001.000000, Acc: 0.922852
Step 300, Loss: -1001.000000, Acc: 0.928711
Step 350, Loss: -1001.000000, Acc: 0.924805
Step 400, Loss: -1001.000000, Acc: 0.911133
Step 450, Loss: -1001.000000, Acc: 0.900391
Step 500, Loss: -1001.000000, Acc: 0.921875
Test Accuracy: 0.9204
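
Note that tensorflow.contrib.tensor_forest only ships with TensorFlow 1.x; contrib was removed in TensorFlow 2.x, so this example has to run under a 1.x installation.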