
【Tensorflow】Cifar-10

Study notes on the official TensorFlow CIFAR-10 tutorial.

The notes cover the following four parts: cifar10, cifar10_input, cifar10_train, and cifar10_eval.

Environment:

  • python 3.6
  • tensorflow 1.8


cifar10

  • tf.app.flags.DEFINE_xxx() & tf.app.flags.FLAGS.flag_xxx

# tf.app.flags.DEFINE_string("param_name", "default_val", "description")

tf.app.flags.DEFINE_xxx() adds an optional command-line argument. It is mainly used to pass parameters when launching the program from the command line; if nothing is passed, the default value is used.

tf.app.flags.FLAGS.flag_xxx then reads back the value of that argument.
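A minimal sketch of how the two calls work together (the flag name simply mirrors the data_dir flag used below; run the snippet on its own):

import tensorflow as tf

# First argument is the flag name, second the default value, third the description.
tf.app.flags.DEFINE_string('data_dir', 'cifar10_data', 'Path to the CIFAR-10 data directory.')
FLAGS = tf.app.flags.FLAGS

print(FLAGS.data_dir)  # prints 'cifar10_data' unless --data_dir=... is passed on the command line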

  • _activation_summary(x)

Creates the TensorBoard summaries for a tensor: a histogram of its activations and a scalar measuring their sparsity.

  • _variable_on_cpu(name, shape, initializer)

Creates/fetches a variable that is explicitly placed on the CPU.

tf.device() specifies the device on which ops run: CPU or GPU, and which GPU. With the GPU build of TensorFlow installed, a supported GPU in the machine, and the driver, CUDA and cuDNN correctly installed, a Session runs on the GPU by default. Different GPUs are addressed as /gpu:0, /gpu:1, and so on, while CPUs are not distinguished by device number and are all referred to as /cpu:0.
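A small device-placement sketch (it assumes a GPU build; allow_soft_placement lets TensorFlow fall back to the CPU if no GPU is available):

import tensorflow as tf

# Keep the variable in CPU memory, as _variable_on_cpu does.
with tf.device('/cpu:0'):
    w = tf.get_variable('w', shape=[3, 3], initializer=tf.zeros_initializer())

# Place the computation on the first GPU.
with tf.device('/gpu:0'):
    y = tf.matmul(w, w)

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))
sess.run(tf.global_variables_initializer())
print(sess.run(y))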

  • _variable_with_weight_decay(name, shape, stddev, wd)

Creates/fetches a variable initialized from a truncated normal distribution and, if a weight-decay coefficient is given, adds the corresponding L2 penalty to the 'losses' collection so that it ends up in the total loss.
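Condensed, the helper amounts to the following (a sketch with illustrative shape and wd values):

import tensorflow as tf

wd = 0.004
var = tf.get_variable('weights', shape=[10, 5],
                      initializer=tf.truncated_normal_initializer(stddev=0.04))
# L2 penalty: wd * sum(var ** 2) / 2, registered in the 'losses' collection.
weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
tf.add_to_collection('losses', weight_decay)

# loss() later sums the cross entropy together with everything in 'losses'.
total_loss = tf.add_n(tf.get_collection('losses'), name='total_loss')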

  • distorted_inputs() & inputs()

CIFAR-10 data input. Both functions read image records from the CIFAR-10 binary files; distorted_inputs() additionally applies a series of random distortions (flipping, brightness/contrast changes, etc.) to artificially augment the data set.

  • inference(images)

Builds the forward pass: two convolutional layers (each followed by pooling and local response normalization), two fully connected layers, and a final softmax-linear layer.
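With the default 24x24 crop produced by cifar10_input (IMAGE_SIZE = 24) and SAME padding, the shapes work out roughly as: input [batch, 24, 24, 3] -> conv1 [batch, 24, 24, 64] -> pool1 [batch, 12, 12, 64] -> conv2 [batch, 12, 12, 64] -> pool2 [batch, 6, 6, 64] -> reshape [batch, 2304] -> local3 [batch, 384] -> local4 [batch, 192] -> softmax_linear [batch, NUM_CLASSES].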

  • loss(logits, labels)

Computes the total loss (cross entropy plus the weight-decay terms). The label handling works as follows:

 sparse_labels = tf.reshape(labels, [FLAGS.batch_size, 1])

  Reshape the row vector of labels into a column vector:

  Example: labels = [1,3,5,7,9]  ->  sparse_labels = [1,3,5,7,9]^T (shape [batch_size, 1])

 indices = tf.reshape(tf.range(FLAGS.batch_size), [FLAGS.batch_size, 1])

  Generate the row index of each example in the batch, also as a column vector:

  Example: indices = [0,1,2,3,4]^T

 concated = tf.concat([indices, sparse_labels], 1)

  Concatenate the two column vectors along dimension 1; concated has shape [batch_size, 2].

  Example: concated = [[0,1], [1,3], [2,5], [3,7], [4,9]] (each row is a (row index, label) pair)

dense_labels = tf.sparse_to_dense(concated,
                                    [FLAGS.batch_size, NUM_CLASSES],
                                    1.0, 0.0)

  This produces a one-hot label matrix of shape [FLAGS.batch_size, NUM_CLASSES].

  Example:

  [[0,1,0,0,0,0,0,0,0,0],
   [0,0,0,1,0,0,0,0,0,0],
   [0,0,0,0,0,1,0,0,0,0],
   [0,0,0,0,0,0,0,1,0,0],
   [0,0,0,0,0,0,0,0,0,1]]
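As an aside (not what the tutorial code does), the same one-hot matrix can be produced directly with tf.one_hot:

import tensorflow as tf

labels = tf.constant([1, 3, 5, 7, 9])
dense_labels = tf.one_hot(labels, depth=10, on_value=1.0, off_value=0.0)

with tf.Session() as sess:
    print(sess.run(dense_labels))  # the same 5 x 10 one-hot matrix as in the example above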

  • def _add_loss_summaries(total_loss)

Creates the op that maintains exponential moving averages of all the losses and attaches the corresponding summaries.

The apply() method creates a shadow variable for each variable it is given (specific variables can also be passed explicitly). It is called a shadow variable because it follows the model variable throughout training: it is initialized to the model variable's value and then updated once after every training step.
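A small sketch of the mechanism (the decay value and the assigned number are only for illustration):

import tensorflow as tf

v = tf.Variable(0.0)
ema = tf.train.ExponentialMovingAverage(decay=0.9)
maintain_avg_op = ema.apply([v])       # creates the shadow variable for v

update_v = tf.assign(v, 5.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_v)
    sess.run(maintain_avg_op)          # shadow <- 0.9 * shadow + 0.1 * v
    print(sess.run(ema.average(v)))    # 0.5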

  • train(total_loss, global_step)

Builds the training op for the CIFAR-10 model.

tf.control_dependencies() controls execution order in the computation graph: ops created inside the context run only after the listed ops have executed:

with g.control_dependencies([a, b, c]):
  # `d` and `e` will only run after `a`, `b`, and `c` have executed.
  d = ...
  e = ...
  •  maybe_download_and_extract()

Downloads and extracts the binary version of the CIFAR-10 data.

Complete code:

"""Builds the CIFAR-10 network.
Summary of available functions:
 # Compute input images and labels for training. If you would like to run
 # evaluations, use inputs() instead.
 inputs, labels = distorted_inputs()
 # Compute inference on the model inputs to make a prediction.
 predictions = inference(inputs)
 # Compute the total loss of the prediction with respect to the labels.
 loss = loss(predictions, labels)
 # Create a graph to run one step of training with respect to the loss.
 train_op = train(loss, global_step)
"""
# pylint: disable=missing-docstring
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gzip
import os
import re
import sys
import tarfile
import tensorflow.python.platform
from six.moves import urllib
import tensorflow as tf
import cifar10_input
FLAGS = tf.app.flags.FLAGS
# Basic model parameters.
tf.app.flags.DEFINE_integer('batch_size', 128,
                            """Number of images to process in a batch.""")
tf.app.flags.DEFINE_string('data_dir', 'cifar10_data',
                           """Path to the CIFAR-10 data directory.""")
# Global constants describing the CIFAR-10 data set.
IMAGE_SIZE = cifar10_input.IMAGE_SIZE
NUM_CLASSES = cifar10_input.NUM_CLASSES
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_EVAL
# Constants describing the training process.
MOVING_AVERAGE_DECAY = 0.9999     # The decay to use for the moving average.
NUM_EPOCHS_PER_DECAY = 350.0      # Epochs after which learning rate decays.
LEARNING_RATE_DECAY_FACTOR = 0.1  # Learning rate decay factor.
INITIAL_LEARNING_RATE = 0.1       # Initial learning rate.
# If a model is trained with multiple GPU's prefix all Op names with tower_name
# to differentiate the operations. Note that this prefix is removed from the
# names of the summaries when visualizing a model.
TOWER_NAME = 'tower'
DATA_URL = 'http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz'
def _activation_summary(x):
  """Helper to create summaries for activations.
  Creates a summary that provides a histogram of activations.
  Creates a summary that measure the sparsity of activations.
  Args:
    x: Tensor
  Returns:
    nothing
  """
  # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
  # session. This helps the clarity of presentation on tensorboard.
  # [0-9]* matches zero or more digits

  tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
  tf.summary.histogram(tensor_name + '/activations', x)
  tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x))

def _variable_on_cpu(name, shape, initializer):
  """Helper to create a Variable stored on CPU memory.
  Args:
    name: name of the variable
    shape: list of ints
    initializer: initializer for Variable
  Returns:
    Variable Tensor
  """
  # In TensorFlow a model can run on the local GPU or CPU; the user can pin ops to a specific device.
  with tf.device('/cpu:0'):
    # Get an existing variable (the name, shape, initializer, etc. must match) or create a new one if it
    # does not exist. Any initializer can be used, so no explicit value has to be specified.
    var = tf.get_variable(name, shape, initializer=initializer)
  return var

def _variable_with_weight_decay(name, shape, stddev, wd):
  """Helper to create an initialized Variable with weight decay.權重衰減
  Note that the Variable is initialized with a truncated normal distribution.
  A weight decay is added only if one is specified.
  Args:
    name: name of the variable
    shape: list of ints
    stddev: standard deviation of a truncated Gaussian
    wd: add L2Loss weight decay multiplied by this float. If None, weight
        decay is not added for this Variable.
  Returns:
    Variable Tensor
  """
  var = _variable_on_cpu(name, shape,
                         tf.truncated_normal_initializer(stddev=stddev))
  if wd:
    # To limit model complexity, a regularization term is added to the loss function.
    # Computes half the L2 norm of a tensor without the `sqrt`
    # output = sum(t ** 2) / 2
    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
    # Add the corresponding regularization term to the 'losses' collection;
    # a collection simply groups many tensors into one list.
    tf.add_to_collection('losses', weight_decay)
  return var

def distorted_inputs():
  """Construct distorted input for CIFAR training using the Reader ops.
  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  Raises:
    ValueError: If no data_dir
  """
  if not FLAGS.data_dir:
    raise ValueError('Please supply a data_dir')
  # Join the path components into the data directory path.
  data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
  return cifar10_input.distorted_inputs(data_dir=data_dir,
                                        batch_size=FLAGS.batch_size)

def inputs(eval_data):
  """Construct input for CIFAR evaluation using the Reader ops.
  Args:
    eval_data: bool, indicating if one should use the train or eval data set.
  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  Raises:
    ValueError: If no data_dir
  """
  if not FLAGS.data_dir:
    raise ValueError('Please supply a data_dir')
  data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
  return cifar10_input.inputs(eval_data=eval_data, data_dir=data_dir,
                              batch_size=FLAGS.batch_size)

# The model's inference (forward) pass
def inference(images):
  """Build the CIFAR-10 model.
  Args:
    images: Images returned from distorted_inputs() or inputs().
  Returns:
    Logits.
  """
  # We instantiate all variables using tf.get_variable() instead of
  # tf.Variable() in order to share variables across multiple GPU training runs.
  # If we only ran this model on a single GPU, we could simplify this function
  # by replacing all instances of tf.get_variable() with tf.Variable().
  #
  # conv1
  with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                         stddev=1e-4, wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope.name)
    _activation_summary(conv1)
  # pool1
  pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                         padding='SAME', name='pool1')
  # norm1
  norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                    name='norm1')
  # conv2
  with tf.variable_scope('conv2') as scope:
    kernel = _variable_with_weight_decay('weights', shape=[5, 5, 64, 64],
                                         stddev=1e-4, wd=0.0)
    conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
    bias = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(bias, name=scope.name)
    _activation_summary(conv2)
  # norm2
  norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                    name='norm2')
  # pool2
  pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1], padding='SAME', name='pool2')
  # local3: fully connected layer with ReLU activation.
  with tf.variable_scope('local3') as scope:
    # Move everything into depth so we can perform a single matrix multiply.
    dim = 1
    for d in pool2.get_shape()[1:].as_list():  # get_shape() returns the static shape; as_list() turns it into a list; [1:] keeps h * w * channels
      dim *= d
    reshape = tf.reshape(pool2, [FLAGS.batch_size, dim])
    weights = _variable_with_weight_decay('weights', shape=[dim, 384],
                                          stddev=0.04, wd=0.004)
    biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1))
    local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
    _activation_summary(local3)
  # local4
  with tf.variable_scope('local4') as scope:
    weights = _variable_with_weight_decay('weights', shape=[384, 192],
                                          stddev=0.04, wd=0.004)
    biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1))
    local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
    _activation_summary(local4)
  # softmax, i.e. softmax(WX + b)
  with tf.variable_scope('softmax_linear') as scope:
    weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
                                          stddev=1/192.0, wd=0.0)
    biases = _variable_on_cpu('biases', [NUM_CLASSES],
                              tf.constant_initializer(0.0))
    softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
    _activation_summary(softmax_linear)
  return softmax_linear


def loss(logits, labels):
  """Add L2Loss to all the trainable variables.
  Add summary for "Loss" and "Loss/avg".
  Args:
    logits: Logits from inference().
    labels: Labels from distorted_inputs or inputs(). 1-D tensor
            of shape [batch_size]
  Returns:
    Loss tensor of type float.
  """
  # Reshape the labels into a dense Tensor of
  # shape [batch_size, NUM_CLASSES].
  sparse_labels = tf.reshape(labels, [FLAGS.batch_size, 1])
  # Generate the row index of each example within the batch.
  indices = tf.reshape(tf.range(FLAGS.batch_size), [FLAGS.batch_size, 1])
  # Concatenate along dimension 1 -> shape [FLAGS.batch_size, 2]
  concated = tf.concat([indices, sparse_labels], 1)
  # tf.sparse_to_dense produces the one-hot label matrix.
  dense_labels = tf.sparse_to_dense(concated,
                                    [FLAGS.batch_size, NUM_CLASSES],
                                    1.0, 0.0)
  # Calculate the average cross entropy loss across the batch.
  cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
      logits=logits, labels=dense_labels, name='cross_entropy_per_example')
  cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
  tf.add_to_collection('losses', cross_entropy_mean)
  # The total loss is defined as the cross entropy loss plus all of the weight
  # decay terms (L2 loss).
  return tf.add_n(tf.get_collection('losses'), name='total_loss')

def _add_loss_summaries(total_loss):
  """Add summaries for losses in CIFAR-10 model.
  Generates moving average for all losses and associated summaries for
  visualizing the performance of the network.
  Args:
    total_loss: Total loss from loss().
  Returns:
    loss_averages_op: op for generating moving averages of losses.
  """
  # Compute the moving average of all individual losses and the total loss.
  loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
  losses = tf.get_collection('losses')
  loss_averages_op = loss_averages.apply(losses + [total_loss])
  '''
  The apply() method creates a shadow variable for each variable (specific variables can also be passed).
  It is called a shadow variable because it follows the model variable throughout training: it is
  initialized to the model variable's value and then updated once per training step.
  '''
  # Attach a scalar summary to all individual losses and the total loss; do the
  # same for the averaged version of the losses.
  for l in losses + [total_loss]:
    # Name each loss as '(raw)' and name the moving average version of the loss
    # as the original loss name.
    tf.summary.scalar(l.op.name +' (raw)', l)
    tf.summary.scalar(l.op.name, loss_averages.average(l))
  return loss_averages_op

def train(total_loss, global_step):
  """Train CIFAR-10 model.
  Create an optimizer and apply to all trainable variables. Add moving
  average for all trainable variables.
  Args:
    total_loss: Total loss from loss().
    global_step: Integer Variable counting the number of training steps
      processed.
  Returns:
    train_op: op for training.
  """
  # Variables that affect learning rate.
  num_batches_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN / FLAGS.batch_size
  decay_steps = int(num_batches_per_epoch * NUM_EPOCHS_PER_DECAY)
  # Decay the learning rate exponentially based on the number of steps.
  lr = tf.train.exponential_decay(INITIAL_LEARNING_RATE,  # The initial learning rate.
                                  global_step,  # Global step to use for the decay computation. Must not be negative.
                                  decay_steps,
                                  LEARNING_RATE_DECAY_FACTOR,  # decay rate: the learning rate is multiplied by this factor at each decay
                                  staircase=True)  # If True decay the learning rate at discrete intervals
  # With staircase=True the learning rate changes only every decay_steps steps; with False it decays at every step.

  tf.summary.scalar('learning_rate', lr)
  # Generate moving averages of all losses and associated summaries.
  loss_averages_op = _add_loss_summaries(total_loss)
  # Compute gradients.
  # Run the loss moving-average op before computing the gradients.
  with tf.control_dependencies([loss_averages_op]):
    opt = tf.train.GradientDescentOptimizer(lr)
    grads = opt.compute_gradients(total_loss)
  # Apply gradients.
  apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
  # Add histograms for trainable variables.
  # tf.trainable_variables() returns the list of variables that will be trained.
  for var in tf.trainable_variables():
    tf.summary.histogram(var.op.name, var)
  # Add histograms for gradients.
  for grad, var in grads:
    if grad is not None:
      tf.summary.histogram(var.op.name + '/gradients', grad)
  # Track the moving averages of all trainable variables.
  variable_averages = tf.train.ExponentialMovingAverage(
      MOVING_AVERAGE_DECAY, global_step)
  variables_averages_op = variable_averages.apply(tf.trainable_variables())
  with tf.control_dependencies([apply_gradient_op, variables_averages_op]):
    # Does nothing. Only useful as a placeholder for control edges
    train_op = tf.no_op(name='train')
  return train_op

def maybe_download_and_extract():
  """Download and extract the tarball from Alex's website."""
  dest_directory = FLAGS.data_dir
  if not os.path.exists(dest_directory):
    os.makedirs(dest_directory)
  filename = DATA_URL.split('/')[-1]
  filepath = os.path.join(dest_directory, filename)
  if not os.path.exists(filepath):
    def _progress(count, block_size, total_size):
      '''
      :param count: number of data blocks downloaded so far
      :param block_size: size of one data block in bytes
      :param total_size: total size of the remote file in bytes
      '''
      sys.stdout.write('\r>> Downloading %s %.1f%%' % (filename,
          float(count * block_size) / float(total_size) * 100.0))
      sys.stdout.flush()
    # urllib.request.urlretrieve(url, local_path, reporthook) downloads the remote file.
    filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath,
                                             reporthook=_progress)
    print()
    statinfo = os.stat(filepath)
    print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
    tarfile.open(filepath, 'r:gz').extractall(dest_directory)

cifar10_input

  • read_cifar10(filename_queue)

Reads one record from the filename queue and returns a CIFAR10Record object.

 class CIFAR10Record(object):
    pass

An empty class used as a plain record container.

# Read a fixed number of bytes per record
  reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
  # Each read() returns record_bytes bytes from filename_queue; the next call continues
  # where the previous read stopped.
  result.key, value = reader.read(filename_queue)

Each read returns a fixed-length record of 1 (label) + 3*32*32 (image) bytes.

 record_bytes = tf.decode_raw(value, tf.uint8)

tf.decode_raw converts a tensor that was encoded as a string (raw bytes) back into numeric values. It is widely used with datasets/TFRecords, where images are usually serialized to bytes (i.e., strings) before being written. When decoding, the output dtype must match the dtype the raw data was written with, otherwise the resulting shape will not match.
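A tiny sketch of tf.decode_raw on a hand-made byte string:

import tensorflow as tf

raw = tf.constant(b'\x01\x02\x03\x04')   # a scalar string tensor holding 4 raw bytes
decoded = tf.decode_raw(raw, tf.uint8)   # reinterpret the bytes as a uint8 vector

with tf.Session() as sess:
    print(sess.run(decoded))             # [1 2 3 4]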

  depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]),
                           [result.depth, result.height, result.width])
  # Convert from [depth, height, width] to [height, width, depth].
  # transpose to [height, width, depth]
  result.uint8image = tf.transpose(depth_major, [1, 2, 0])

The binary CIFAR-10 data has the following format:

<1 x label><3072 x pixel>
...
<1 x label><3072 x pixel>

The first byte of each record is the image's label, in the range 0-9 (ten classes). The remaining 3072 bytes (3*32*32), record_bytes[1:], are the pixels: the first 1024 hold the red channel, the next 1024 the green channel, and the last 1024 the blue channel, with values in 0-255. The convolution ops later expect data of shape 32 * 32 * 3, so the bytes are first reshaped to [3, 32, 32] and then transposed with tf.transpose, giving uint8image shape [32, 32, 3].
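The same reshape/transpose can be traced in plain NumPy on a made-up record (purely illustrative):

import numpy as np

# A fake CIFAR-10 record: 1 label byte followed by 3072 pixel bytes.
record = np.zeros(1 + 3 * 32 * 32, dtype=np.uint8)

label = record[0]
depth_major = record[1:].reshape(3, 32, 32)   # [depth, height, width]: R plane, G plane, B plane
image = depth_major.transpose(1, 2, 0)        # [height, width, depth], as the model expects
print(label, image.shape)                     # 0 (32, 32, 3)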

  •  _generate_image_and_label_batch(image, label, min_queue_examples,batch_size)

Builds a shuffled batch of images and labels using tf.train.shuffle_batch.

  • distorted_inputs(data_dir, batch_size) & inputs(eval_data, data_dir, batch_size)

Build the training (distorted) and evaluation input pipelines.

Complete code:

"""Routine for decoding the CIFAR-10 binary file format."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow.python.platform
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
from tensorflow.python.platform import gfile
# Process images of this size. Note that this differs from the original CIFAR
# image size of 32 x 32. If one alters this number, then the entire model
# architecture will change and any model would need to be retrained.
IMAGE_SIZE = 24
# Global constants describing the CIFAR-10 data set.
NUM_CLASSES = 10
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000

def read_cifar10(filename_queue):
  """Reads and parses examples from CIFAR10 data files.
  Recommendation: if you want N-way read parallelism, call this function
  N times.  This will give you N independent Readers reading different
  files & positions within those files, which will give better mixing of
  examples.
  Args:
    filename_queue: A queue of strings with the filenames to read from.
  Returns:
    An object representing a single example, with the following fields:
      height: number of rows in the result (32)
      width: number of columns in the result (32)
      depth: number of color channels in the result (3)
      key: a scalar string Tensor describing the filename & record number
        for this example.
      label: an int32 Tensor with the label in the range 0..9.
      uint8image: a [height, width, depth] uint8 Tensor with the image data

  """
  class CIFAR10Record(object):
    pass

  result = CIFAR10Record()
  # Dimensions of the images in the CIFAR-10 dataset.
  # See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the
  # input format.

  label_bytes = 1  # 2 for CIFAR-100
  result.height = 32
  result.width = 32
  result.depth = 3
  image_bytes = result.height * result.width * result.depth

  # Every record consists of a label followed by the image, with a
  # fixed number of bytes for each.
  # Every record holds a label followed by an image, with a fixed number of bytes (3073 = 1 + 3072, where 3072 = 3*32*32).

  record_bytes = label_bytes + image_bytes

  # Read a record, getting filenames from the filename_queue.  No
  # header or footer in the CIFAR-10 format, so we leave header_bytes
  # and footer_bytes at their default of 0.
  # Read a fixed number of bytes per record.
  reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
  # Each read() returns record_bytes bytes from filename_queue; the next call continues where the previous read stopped.
  result.key, value = reader.read(filename_queue)

  # Convert from a string to a vector of uint8 that is record_bytes long.
  # Convert the raw string back into a vector of uint8 values.
  record_bytes = tf.decode_raw(value, tf.uint8)

  # The first bytes represent the label, which we convert from uint8->int32.
  # Slice out the first label_bytes (1) bytes, starting at offset 0, and cast them to int32.
  result.label = tf.cast(
      tf.slice(record_bytes, [0], [label_bytes]), tf.int32)
  # The remaining bytes after the label represent the image, which we reshape
  # from [depth * height * width] to [depth, height, width].
  # Slice out the image_bytes bytes that follow the label.
  # Of those 3072 bytes the first 1024 are the R channel, the next 1024 the G channel and the
  # last 1024 the B channel, hence the reshape to [depth, height, width].
  depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]),
                           [result.depth, result.height, result.width])
  # Convert from [depth, height, width] to [height, width, depth].
  # transpose to [height, width, depth]
  result.uint8image = tf.transpose(depth_major, [1, 2, 0])
  return result


def _generate_image_and_label_batch(image, label, min_queue_examples,
                                    batch_size):
  """Construct a queued batch of images and labels.
  Args:
    image: 3-D Tensor of [height, width, 3] of type.float32.
    label: 1-D Tensor of type.int32
    min_queue_examples: int32, minimum number of samples to retain
      in the queue that provides of batches of examples.
    batch_size: Number of images per batch.
  Returns:
    images: Images. 4D tensor of [batch_size, height, width, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  # Create a queue that shuffles the examples, and then
  # read 'batch_size' images + labels from the example queue.
  num_preprocess_threads = 16
  # tf.train.shuffle_batch creates batches by randomly shuffling tensors and returns one batch of
  # examples and their labels; examples left in the queue remain shuffled as well.
  images, label_batch = tf.train.shuffle_batch(
      [image, label], # tensor_list
      batch_size=batch_size,  # number of examples in each returned batch
      num_threads=num_preprocess_threads,
      capacity=min_queue_examples + 3 * batch_size,
      min_after_dequeue=min_queue_examples)  # must be smaller than capacity, otherwise an error is raised
  # Display the training images in the visualizer.
  tf.summary.image('images', images)
  return images, tf.reshape(label_batch, [batch_size])


def distorted_inputs(data_dir, batch_size):
  """Construct distorted input for CIFAR training using the Reader ops.
  Args:
    data_dir: Path to the CIFAR-10 data directory.
    batch_size: Number of images per batch.
  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  # Build the list of training-batch file paths under data_dir.
  filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
               for i in xrange(1, 6)]
  for f in filenames:
    if not gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  # Create a queue that produces the filenames to read.
  # Pack all of the required files into a TensorFlow-internal queue.
  filename_queue = tf.train.string_input_producer(filenames)
  '''
  tf.train.string_input_producer adds a QueueRunner to the graph and produces a first-in-first-out
  queue of filenames from which the file reader will pull its work.
  '''
  # Read examples from files in the filename queue.
  read_input = read_cifar10(filename_queue)
  reshaped_image = tf.cast(read_input.uint8image, tf.float32)
  height = IMAGE_SIZE
  width = IMAGE_SIZE

  # Image processing for training the network. Note the many random
  # distortions applied to the image.
  # Randomly crop a [height, width] section of the image.
  distorted_image = tf.random_crop(reshaped_image, [height, width, 3])  # randomly crop to 24 * 24

  # Randomly flip the image horizontally.
  distorted_image = tf.image.random_flip_left_right(distorted_image)  # randomly flip left/right

  # Because these operations are not commutative, consider randomizing
  # the order in which they are applied.
  distorted_image = tf.image.random_brightness(distorted_image,
                                               max_delta=63)
  distorted_image = tf.image.random_contrast(distorted_image,
                                             lower=0.2, upper=1.8)

  # Subtract off the mean and divide by the variance of the pixels.
  # Whitening (standardization): tf.image.per_image_standardization rescales the image so that its pixels have zero mean and unit variance.
  float_image = tf.image.per_image_standardization(distorted_image)

  # Ensure that the random shuffling has good mixing properties.
  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN *
                           min_fraction_of_examples_in_queue)
  print ('Filling queue with %d CIFAR images before starting to train. '
         'This will take a few minutes.' % min_queue_examples)
  # Generate a batch of images and labels by building up a queue of examples.

  return _generate_image_and_label_batch(float_image, read_input.label,
                                         min_queue_examples, batch_size)


def inputs(eval_data, data_dir, batch_size):
  """Construct input for CIFAR evaluation using the Reader ops.
  Args:
    eval_data: bool, indicating if one should use the train or eval data set.
    data_dir: Path to the CIFAR-10 data directory.
    batch_size: Number of images per batch.
  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  # Choose the training or evaluation files depending on eval_data.
  if not eval_data:
    filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
                 for i in xrange(1, 6)]
    num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
  else:
    filenames = [os.path.join(data_dir, 'test_batch.bin')]
    num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL

  for f in filenames:
    if not gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  # Same as in distorted_inputs().
  # Create a queue that produces the filenames to read.
  filename_queue = tf.train.string_input_producer(filenames)

  # Read examples from files in the filename queue.
  read_input = read_cifar10(filename_queue)
  reshaped_image = tf.cast(read_input.uint8image, tf.float32)
  height = IMAGE_SIZE
  width = IMAGE_SIZE
  # Image processing for evaluation.
  # Crop the central [height, width] of the image.
  resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image,
                                                         width, height)
  # Subtract off the mean and divide by the variance of the pixels.
  float_image = tf.image.per_image_standardization(resized_image)
  # Ensure that the random shuffling has good mixing properties.
  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(num_examples_per_epoch *
                           min_fraction_of_examples_in_queue)
  # Generate a batch of images and labels by building up a queue of examples.
  return _generate_image_and_label_batch(float_image, read_input.label,
                                         min_queue_examples, batch_size)
tf.Graph().as_default():

tf.Graph() instantiates a dataflow graph used for TensorFlow computation. tf.Graph().as_default() makes this newly created graph the default graph of the TensorFlow runtime, so the ops defined afterwards are added to it.

tf.train.start_queue_runners(sess=sess)

Starts all of the QueueRunners registered in the graph (the threads that keep the input queues filled).
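Putting the two together, the typical TF 1.x input-pipeline boilerplate looks roughly like this (a sketch that assumes the CIFAR-10 binaries are already in FLAGS.data_dir; the Coordinator and error handling are omitted):

import tensorflow as tf
import cifar10

with tf.Graph().as_default():                     # all ops below are added to this graph
    images, labels = cifar10.distorted_inputs()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.train.start_queue_runners(sess=sess)   # start the threads that fill the input queues
        img_batch = sess.run(images)              # without the queue runners this call would hang
        print(img_batch.shape)                    # (128, 24, 24, 3) with the default batch_size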

cifar10_train

"""A binary to train CIFAR-10 using a single GPU.
Accuracy:
cifar10_train.py achieves ~86% accuracy after 100K steps (256 epochs of
data) as judged by cifar10_eval.py.
Speed: With batch_size 128.
System        | Step Time (sec/batch)  |     Accuracy
------------------------------------------------------------------
1 Tesla K20m  | 0.35-0.60              | ~86% at 60K steps  (5 hours)
1 Tesla K40m  | 0.25-0.35              | ~86% at 100K steps (4 hours)
Usage:
Please see the tutorial and website for how to download the CIFAR-10
data set, compile the program and train the model.
http://tensorflow.org/tutorials/deep_cnn/
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import os.path
import time
import tensorflow.python.platform
from tensorflow.python.platform import gfile
import numpy as np
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
import cifar10

FLAGS = tf.app.flags.FLAGS
# Command-line flags
tf.app.flags.DEFINE_string('train_dir', 'cifar10_train',
                           """Directory where to write event logs """
                           """and checkpoint.""")
tf.app.flags.DEFINE_integer('max_steps', 100000,
                            """Number of batches to run.""")
tf.app.flags.DEFINE_boolean('log_device_placement', False,
                            """Whether to log device placement.""")

def train():
  """Train CIFAR-10 for a number of steps."""
  with tf.Graph().as_default():
    global_step = tf.Variable(0, trainable=False)
    # Get images and labels for CIFAR-10.
    images, labels = cifar10.distorted_inputs()
    # Build a Graph that computes the logits predictions from the
    # inference model.
    logits = cifar10.inference(images)
    # Calculate loss.
    loss = cifar10.loss(logits, labels)
    # Build a Graph that trains the model with one batch of examples and
    # updates the model parameters.
    train_op = cifar10.train(loss, global_step)
    # Create a saver.
    saver = tf.train.Saver(tf.global_variables())
    # Build the summary operation based on the TF collection of Summaries.
    summary_op = tf.summary.merge_all()
    # Build an initialization operation to run below.
    init = tf.initialize_all_variables()
    # Start running operations on the Graph.
    sess = tf.Session(config=tf.ConfigProto(
        log_device_placement=FLAGS.log_device_placement))
    sess.run(init)
    # Start the queue runners.
    tf.train.start_queue_runners(sess=sess)
    summary_writer = tf.summary.FileWriter(FLAGS.train_dir,
                                            graph_def=sess.graph_def)
    for step in xrange(FLAGS.max_steps):
      start_time = time.time()
      _, loss_value = sess.run([train_op, loss])
      duration = time.time() - start_time
      assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
      if step % 10 == 0:
        num_examples_per_step = FLAGS.batch_size
        examples_per_sec = num_examples_per_step / duration
        sec_per_batch = float(duration)
        format_str = ('%s: step %d, loss = %.2f (%.1f examples/sec; %.3f '
                      'sec/batch)')
        print (format_str % (datetime.now(), step, loss_value,
                             examples_per_sec, sec_per_batch))
      if step % 100 == 0:
        summary_str = sess.run(summary_op)
        summary_writer.add_summary(summary_str, step)
      # Save the model checkpoint periodically.
      if step % 1000 == 0 or (step + 1) == FLAGS.max_steps:
        checkpoint_path = os.path.join(FLAGS.train_dir, 'model.ckpt')
        saver.save(sess, checkpoint_path, global_step=step)

def main(argv=None):  # pylint: disable=unused-argument
  cifar10.maybe_download_and_extract()
  if gfile.Exists(FLAGS.train_dir):
    gfile.DeleteRecursively(FLAGS.train_dir)  # deletes everything under train_dir recursively
  gfile.MakeDirs(FLAGS.train_dir)
  train()

if __name__ == '__main__':
  # tf.app.run() parses the command-line flags and then calls main()
  tf.app.run()

cifar10_eval

"""Evaluation for CIFAR-10.
Accuracy:
cifar10_train.py achieves 83.0% accuracy after 100K steps (256 epochs
of data) as judged by cifar10_eval.py.
Speed:
On a single Tesla K40, cifar10_train.py processes a single batch of 128 images
in 0.25-0.35 sec (i.e. 350 - 600 images /sec). The model reaches ~86%
accuracy after 100K steps in 8 hours of training time.
Usage:
Please see the tutorial and website for how to download the CIFAR-10
data set, compile the program and train the model.
http://tensorflow.org/tutorials/deep_cnn/
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import math
import time
import tensorflow.python.platform
from tensorflow.python.platform import gfile
import numpy as np
import tensorflow as tf
import cifar10
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('eval_dir', 'cifar10_eval',
                           """Directory where to write event logs.""")
tf.app.flags.DEFINE_string('eval_data', 'test',
                           """Either 'test' or 'train_eval'.""")
tf.app.flags.DEFINE_string('checkpoint_dir', 'cifar10_train',
                           """Directory where to read model checkpoints.""")
tf.app.flags.DEFINE_integer('eval_interval_secs', 60 * 5,
                            """How often to run the eval.""")
tf.app.flags.DEFINE_integer('num_examples', 10000,
                            """Number of examples to run.""")
tf.app.flags.DEFINE_boolean('run_once', False,
                         """Whether to run eval only once.""")
def eval_once(saver, summary_writer, top_k_op, summary_op):
  """Run Eval once.
  Args:
    saver: Saver.
    summary_writer: Summary writer.
    top_k_op: Top K op.
    summary_op: Summary op.
  """
  with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
      # Restores from checkpoint
      saver.restore(sess, ckpt.model_checkpoint_path)
      # Assuming model_checkpoint_path looks something like:
      #   /my-favorite-path/cifar10_train/model.ckpt-0,
      # extract global_step from it.
      global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
    else:
      print('No checkpoint file found')
      return
    # Start the queue runners.
    # tf.train.Coordinator() creates a coordinator object that manages the reader threads.
    coord = tf.train.Coordinator()
    try:
      threads = []
      # All queue runners are added to the graph's tf.GraphKeys.QUEUE_RUNNERS collection by default.
      for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):
        threads.extend(qr.create_threads(sess, coord=coord, daemon=True,
                                         start=True))
      num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
      true_count = 0  # Counts the number of correct predictions.
      total_sample_count = num_iter * FLAGS.batch_size
      step = 0
      while step < num_iter and not coord.should_stop():
        predictions = sess.run([top_k_op])
        true_count += np.sum(predictions)
        step += 1
      # Compute precision @ 1.
      precision = true_count / total_sample_count
      print('%s: precision @ 1 = %.3f' % (datetime.now(), precision))
      summary = tf.Summary()
      summary.ParseFromString(sess.run(summary_op))
      summary.value.add(tag='Precision @ 1', simple_value=precision)
      summary_writer.add_summary(summary, global_step)
    except Exception as e:  # pylint: disable=broad-except
      coord.request_stop(e)
    coord.request_stop()
    coord.join(threads, stop_grace_period_secs=10)
	
def evaluate():
  """Eval CIFAR-10 for a number of steps."""
  with tf.Graph().as_default():
    # Get images and labels for CIFAR-10.
    eval_data = FLAGS.eval_data == 'test'
    images, labels = cifar10.inputs(eval_data=eval_data)
    # Build a Graph that computes the logits predictions from the
    # inference model.
    logits = cifar10.inference(images)
    # Calculate predictions.
    top_k_op = tf.nn.in_top_k(logits, labels, 1)
    # Restore the moving average version of the learned variables for eval.
    variable_averages = tf.train.ExponentialMovingAverage(
        cifar10.MOVING_AVERAGE_DECAY)
    variables_to_restore = variable_averages.variables_to_restore()
    saver = tf.train.Saver(variables_to_restore)
    # Build the summary operation based on the TF collection of Summaries.
    summary_op = tf.summary.merge_all()
    graph_def = tf.get_default_graph().as_graph_def()
    summary_writer = tf.summary.FileWriter(FLAGS.eval_dir,
                                            graph_def=graph_def)
    while True:
      eval_once(saver, summary_writer, top_k_op, summary_op)
      if FLAGS.run_once:
        break
      time.sleep(FLAGS.eval_interval_secs)
	  
def main(argv=None):  # pylint: disable=unused-argument
  cifar10.maybe_download_and_extract()
  if gfile.Exists(FLAGS.eval_dir):
    gfile.DeleteRecursively(FLAGS.eval_dir)
  gfile.MakeDirs(FLAGS.eval_dir)
  evaluate()
if __name__ == '__main__':
  tf.app.run()

