CAPTCHA Character Segmentation & GUI Display
阿新 · Published: 2019-01-06
The overall CAPTCHA-recognition workflow:
1. Read the CAPTCHA image
2. Segment the CAPTCHA into characters
The CAPTCHAs handled here have only scattered noise dots in the background, without obvious interference bars.
As the 28*88 image shows, the four characters spread out evenly from the centre, so it suffices to find the leftmost and rightmost character columns and then expand outward on both sides until the width reaches 18*4 = 72 pixels.
a) First take the three RGB channels, binarize each one, and AND the three binary images together; this yields the fourth (combined) image.
b) With the binary image of the four CAPTCHA characters in hand, determine the leftmost and rightmost character coordinates.
Sum each column of the binary image; the distribution of these sums is shown in the figure.
Take the mode of the column sums: the column where the sums first rise above the mode marks the start of the first character on the left (the leftmost point), and the rightmost point is found in the same way. Then expand outward on both sides until the width reaches 18*4 = 72 pixels and split the strip evenly into four characters, as in the sketch below.
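As a rough illustration of steps a) and b), here is a minimal MATLAB sketch. It is not the code from the repository linked in section 6: the function name segment_captcha, the use of imbinarize for per-channel thresholding, and the assumption that the characters are darker than the background are all my own.

% Minimal segmentation sketch (assumed helper, not the original implementation).
% Assumes a 28x88 RGB CAPTCHA with dark characters on a lighter, noisy background.
function chars = segment_captcha(img)
  % a) binarize each of the three RGB channels and AND the masks to suppress noise
  mask = true(size(img, 1), size(img, 2));
  for c = 1:3
    mask = mask & imbinarize(img(:, :, c));  % per-channel Otsu threshold
  end
  bw = ~mask;  % assumed: dark characters become foreground (1)

  % b) column sums of the binary image; the mode of the sums is the background level
  colsum = sum(bw, 1);
  idx = find(colsum > mode(colsum));
  left = idx(1);      % leftmost character column
  right = idx(end);   % rightmost character column

  % expand outward so the strip is 18*4 = 72 columns wide (assumes the characters fit)
  pad = 72 - (right - left + 1);
  left = max(1, left - floor(pad / 2));
  left = min(left, size(img, 2) - 71);  % keep the 72-column strip inside the image

  % split the strip evenly into four 18-column character crops
  chars = cell(1, 4);
  for k = 1:4
    chars{k} = img(:, left + (k-1)*18 : left + k*18 - 1, :);
  end
end

Each crop is 28x18; how it is fed to the classifier is shown in the usage sketch after section 4's function.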
3. Recognize the CAPTCHA characters
This step uses the model _iter_1000.caffemodel trained in the previous article, the mean file mean.binaryproto, and the network definition deploy.prototxt obtained by modifying train_val.prototxt. (Note: deploy.prototxt contains no weight-initialization settings for the layers, since the weights are already trained; the last layer also changes, and the data layer is replaced by a specification of the input data shape.)
Content of deploy.prototxt (modified from train_val.prototxt):
name: "deploy"
input: "data"
input_shape {
dim: 1 # batchsize
dim: 3 # number of colour channels - rgb
dim: 28 # width
dim: 28 # height
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "pool1"
top: "pool1"
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
stride: 1
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3"
top: "pool3"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool3"
top: "ip1"
inner_product_param {
num_output: 64
}
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
inner_product_param {
num_output: 10
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "ip2"
top: "loss"
}
4. Calling the model from Matlab
The function below is adapted from Caffe's classification_demo.m; a usage sketch follows after the function.
function [scores, maxlabel] = classification_vic(im, use_gpu)
% [scores, maxlabel] = classification_vic(im, use_gpu)
%
% Single-character CAPTCHA classification using the model trained in the
% previous article; adapted from Caffe's classification_demo.m (BVLC CaffeNet).
%
% ****************************************************************************
% For detailed documentation and usage of Caffe's Matlab interface, please
% refer to the Caffe Interface Tutorial at
% http://caffe.berkeleyvision.org/tutorial/interfaces.html#matlab
% ****************************************************************************
%
% input
%   im        color image as uint8 HxWx3
%   use_gpu   1 to use the GPU, 0 to use the CPU
%
% output
%   scores    score vector over the network's output classes
%   maxlabel  the index (label) of the highest score
%
% You may need to do the following before you start matlab:
% $ export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:/usr/local/cuda-5.5/lib64
% $ export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6
% Or the equivalent based on where things are installed on your system
%
% Usage:
%   im = imread('test/00E714_E.bmp');
%   scores = classification_vic(im, 1);
%   [score, class] = max(scores);
% Five things to be aware of:
% caffe uses row-major order
% matlab uses column-major order
% caffe uses BGR color channel order
% matlab uses RGB color channel order
% images need to have the data mean subtracted
% Data coming in from matlab needs to be in the order
% [width, height, channels, images]
% where width is the fastest dimension.
% Here is the rough matlab for putting image data into the correct
% format in W x H x C with BGR channels:
% % permute channels from RGB to BGR
% im_data = im(:, :, [3, 2, 1]);
% % flip width and height to make width the fastest dimension
% im_data = permute(im_data, [2, 1, 3]);
% % convert from uint8 to single
% im_data = single(im_data);
% % reshape to a fixed size (e.g., 227x227).
% im_data = imresize(im_data, [IMAGE_DIM IMAGE_DIM], 'bilinear');
% % subtract mean_data (already in W x H x C with BGR channels)
% im_data = im_data - mean_data;
% If you have multiple images, cat them with cat(4, ...)
% Add caffe/matlab to you Matlab search PATH to use matcaffe
% if exist('../+caffe', 'dir')
% addpath('../../Build/x64/Release/matcaffe');
% else
% error('Please run this demo from caffe/matlab/demo');
% end
% Set caffe mode
if exist('use_gpu', 'var') && use_gpu
  caffe.set_mode_gpu();
  gpu_id = 0;  % we will use the first gpu in this demo
  caffe.set_device(gpu_id);
else
  caffe.set_mode_cpu();
end
% Initialize the network using the trained CAPTCHA model
% (deploy definition plus the weights produced by training).
model_dir = 'Datatest/';
net_model = [model_dir 'deploy.prototxt'];
net_weights = [model_dir '_iter_1000.caffemodel'];
phase = 'test'; % run with phase test (so that dropout isn't applied)
if ~exist(net_weights, 'file')
  error('Cannot find %s; train the model first (see the previous article)', net_weights);
end
% Initialize a network
net = caffe.Net(net_model, net_weights, phase);
if nargin < 1
  % default input: one already-segmented character image
  fprintf('using test/00E714_E.bmp as input image\n');
  im = imread('test/00E714_E.bmp');
end
% prepare oversampled input
% input_data is Height x Width x Channel x Num
tic;
input_data = {prepare_image(im)};
toc;
% do forward pass to get scores
% scores are now Channels x Num, where Channels equals the number of output classes (10 here)
tic;
% The net forward function. It takes in a cell array of N-D arrays
% (where N == 4 here) containing data of input blob(s) and outputs a cell
% array containing data from output blob(s)
scores = net.forward(input_data);
toc;
scores = scores{1};
scores = mean(scores, 2); % a single input here, so this averaging (kept from the demo) is a no-op
[~, maxlabel] = max(scores);
% call caffe.reset_all() to reset caffe
caffe.reset_all();
% ------------------------------------------------------------------------
function crops_data = prepare_image(im)
% ------------------------------------------------------------------------
% caffe/matlab/+caffe/imagenet/ilsvrc_2012_mean.mat contains mean_data that
% is already in W x H x C with BGR channels
% d = load('../+caffe/imagenet/ilsvrc_2012_mean.mat');
% mean_data = d.mean_data;
mean_data = caffe.io.read_mean('Datatest/mean.binaryproto');
%CROPPED_DIM = 227;
% Convert an image returned by Matlab's imread to im_data in caffe's data
% format: W x H x C with BGR channels
im_data = im(:, :, [3, 2, 1]); % permute channels from RGB to BGR
im_data = permute(im_data, [2, 1, 3]); % flip width and height
im_data = single(im_data); % convert from uint8 to single
%im_data = imresize(im_data, [IMAGE_DIM IMAGE_DIM], 'bilinear'); % resize im_data
im_data = im_data - mean_data; % subtract mean_data (already in W x H x C, BGR)
crops_data = im_data;
% % oversample (4 corners, center, and their x-axis flips)
% crops_data = zeros(CROPPED_DIM, CROPPED_DIM, 3, 10, 'single');
% indices = [0 IMAGE_DIM-CROPPED_DIM] + 1;
% n = 1;
% for i = indices
%   for j = indices
%     crops_data(:, :, :, n) = im_data(i:i+CROPPED_DIM-1, j:j+CROPPED_DIM-1, :);
%     crops_data(:, :, :, n+5) = crops_data(end:-1:1, :, :, n);
%     n = n + 1;
%   end
% end
% center = floor(indices(2) / 2) + 1;
% crops_data(:,:,:,5) = ...
%   im_data(center:center+CROPPED_DIM-1,center:center+CROPPED_DIM-1,:);
% crops_data(:,:,:,10) = crops_data(end:-1:1, :, :, 5);
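With the model loaded, recognizing a whole CAPTCHA amounts to one forward pass per segmented character. Below is a hypothetical usage sketch, not code from the original source: it relies on the segment_captcha helper sketched in section 2, assumes the ten outputs correspond to the digits '0'-'9' in order (matching num_output: 10 in deploy.prototxt), assumes each 28x18 crop is simply resized to the 28x28 input size, and uses a made-up file name.

% Hypothetical driver: segment a CAPTCHA and classify each character.
im = imread('Datatest/captcha_example.bmp');   % assumed 28x88 CAPTCHA image
chars = segment_captcha(im);                   % four 28x18 RGB crops (sketch in section 2)
labels = '0123456789';                         % assumed label order of the 10 outputs
result = blanks(4);
for k = 1:4
  crop = imresize(chars{k}, [28 28]);          % match the input shape in deploy.prototxt
  [~, maxlabel] = classification_vic(crop, 0); % CPU forward pass
  result(k) = labels(maxlabel);
end
fprintf('Predicted CAPTCHA: %s\n', result);

Note that classification_vic rebuilds the network and calls caffe.reset_all() on every call; that is fine for a sketch, but in real use the net would be created once outside the loop.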
5. GUI display
The recognition result is displayed in a simple GUI, as shown below.
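The actual GUI code is in the repository linked in section 6. As a minimal sketch of the display step (function name and layout are my assumptions, not the author's GUI), the CAPTCHA and the predicted string can simply be shown in a figure window:

% Minimal display sketch (not the original GUI): show the CAPTCHA image and
% the predicted string from the driver sketch above in one figure window.
function show_captcha_result(im, result)
  figure('Name', 'CAPTCHA recognition', 'NumberTitle', 'off');
  imshow(im, 'InitialMagnification', 'fit');
  title(['Recognized as: ' result], 'FontSize', 14);
end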
6. The full source code can be cloned from my GitHub.