Installing and configuring Caffe-SSD (GPU) on Ubuntu 18.04, with an MNIST model test and a MATLAB Caffe interface test
Install order: Ubuntu 18.04 -> Sogou input method -> NVIDIA graphics driver -> MATLAB 2018a -> CUDA 9.0 -> cuDNN 5.01 -> OpenCV 3.4.1 -> Caffe
OpenCV and Caffe are the hardest to install. OpenCV is the slowest, mainly because a single make takes a very long time.
Caffe installation
First, before make all, go into caffe's python directory and install the Python dependencies:
cd /…/caffe/python
for req in $(cat requirements.txt); do pip install $req; done
Reference: https://blog.csdn.net/CAU_Ayao/article/details/84000151
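That reference covers the full build. Before make all, Makefile.config must also match this setup; a minimal sketch of the relevant lines (copied from Makefile.config.example; the values below are assumptions matching CUDA 9.0 / OpenCV 3.4.1 / MATLAB 2018a, and the paths should be adjusted to your machine):

cp Makefile.config.example Makefile.config
# relevant lines inside Makefile.config:
# USE_CUDNN := 1                           (build against cuDNN)
# OPENCV_VERSION := 3                      (OpenCV 3.x installed above)
# CUDA_DIR := /usr/local/cuda              (CUDA 9.0)
# MATLAB_DIR := /usr/local/MATLAB/R2018a   (needed for the MATLAB interface below)
# WITH_PYTHON_LAYER := 1                   (optional, enables Python layers)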
An error occurred when running:
sudo make runtest
.build_release/tools/caffe
.build_release/tools/caffe: error while loading shared libraries: libcublas.so.9.0: cannot open shared object file: No such file or directory
Fix: edit the linker configuration:
sudo vim /etc/ld.so.conf
Add:
include /etc/ld.so.conf.d/*.conf
/usr/local/cuda/lib64
Then run:
sudo ldconfig
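To confirm the linker cache now resolves the CUDA libraries (a quick check for the library named in the error):

ldconfig -p | grep libcublas
# should list libcublas.so.9.0 under /usr/local/cuda/lib64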
vim ~/.bashrc
Add:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
Save and exit, then apply it:
source ~/.bashrc
Rebuild and rerun the tests:
sudo make clean
sudo make all
sudo make test
sudo make runtest
...
[       OK ] AnnotatedDataLayerTest/2.TestReadCropTrainSequenceUnseededLMDB (23 ms)
[ RUN      ] AnnotatedDataLayerTest/2.TestReadCropTestLevelDB
[       OK ] AnnotatedDataLayerTest/2.TestReadCropTestLevelDB (43 ms)
[----------] 12 tests from AnnotatedDataLayerTest/2 (732 ms total)
...
[----------] 2 tests from CuDNNSoftmaxLayerTest/1, where TypeParam = double
[ RUN      ] CuDNNSoftmaxLayerTest/1.TestForwardCuDNN
[       OK ] CuDNNSoftmaxLayerTest/1.TestForwardCuDNN (3 ms)
[ RUN      ] CuDNNSoftmaxLayerTest/1.TestGradientCuDNN
[       OK ] CuDNNSoftmaxLayerTest/1.TestGradientCuDNN (962 ms)
[----------] 2 tests from CuDNNSoftmaxLayerTest/1 (965 ms total)
[----------] Global test environment tear-down
[==========] 2361 tests from 309 test cases ran. (391325 ms total)
[  PASSED  ] 2361 tests.
Next, set up the Python interface. Edit ~/.bashrc:
vim ~/.bashrc
Add:
export PYTHONPATH=~/caffe-ssd/python:$PYTHONPATH
Apply it:
source ~/.bashrc
Importing caffe then failed with:
ImportError: No module named _caffe
Fix it by running:
sudo make pycaffe
sudo pip install -U scikit-image
sudo pip install protobuf
import caffe
Success!
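A one-line smoke test from any directory (this assumes the PYTHONPATH export above is active in the current shell):

python -c "import caffe; print(caffe.__file__)"
# should print a path under ~/caffe-ssd/python/caffe/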
MNIST test
0.1 Download the dataset
sudo ./data/mnist/get_mnist.sh
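After the script finishes, the four MNIST idx files should be in data/mnist (a quick check):

ls data/mnist/
# expect: t10k-images-idx3-ubyte  t10k-labels-idx1-ubyte  train-images-idx3-ubyte  train-labels-idx1-ubyte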
0.2 Format conversion
sudo ./examples/mnist/create_mnist.sh
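This produces the two LMDB databases that lenet_train_test.prototxt points at:

ls -d examples/mnist/*_lmdb
# expect: examples/mnist/mnist_test_lmdb  examples/mnist/mnist_train_lmdb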
1. Training
sudo ./examples/mnist/train_lenet.sh
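For reference, train_lenet.sh is a thin wrapper around the caffe binary; it essentially runs:

./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt

The training log follows: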
I1210 14:58:30.440436 343 caffe.cpp:217] Using GPUs 0
I1210 14:58:30.462970 343 caffe.cpp:222] GPU 0: GeForce GTX 1070
I1210 14:58:30.632964 343 solver.cpp:63] Initializing solver from parameters:
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
device_id: 0
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
level: 0
stage: ""
}
I1210 14:58:30.633087 343 solver.cpp:106] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
I1210 14:58:30.633352 343 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I1210 14:58:30.633363 343 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I1210 14:58:30.633415 343 net.cpp:58] Initializing net from parameters:
name: "LeNet"
state {
phase: TRAIN
level: 0
stage: ""
}
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/mnist_train_lmdb"
batch_size: 64
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
I1210 14:58:30.633489 343 layer_factory.hpp:77] Creating layer mnist
I1210 14:58:30.633653 343 net.cpp:100] Creating Layer mnist
I1210 14:58:30.633678 343 net.cpp:408] mnist -> data
I1210 14:58:30.633693 343 net.cpp:408] mnist -> label
I1210 14:58:30.634238 354 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I1210 14:58:30.644352 343 data_layer.cpp:41] output data size: 64,1,28,28
I1210 14:58:30.645308 343 net.cpp:150] Setting up mnist
I1210 14:58:30.645336 343 net.cpp:157] Top shape: 64 1 28 28 (50176)
I1210 14:58:30.645340 343 net.cpp:157] Top shape: 64 (64)
I1210 14:58:30.645342 343 net.cpp:165] Memory required for data: 200960
I1210 14:58:30.645349 343 layer_factory.hpp:77] Creating layer conv1
I1210 14:58:30.645370 343 net.cpp:100] Creating Layer conv1
I1210 14:58:30.645375 343 net.cpp:434] conv1 <- data
I1210 14:58:30.645383 343 net.cpp:408] conv1 -> conv1
I1210 14:58:31.067790 343 net.cpp:150] Setting up conv1
I1210 14:58:31.067811 343 net.cpp:157] Top shape: 64 20 24 24 (737280)
I1210 14:58:31.067814 343 net.cpp:165] Memory required for data: 3150080
I1210 14:58:31.067827 343 layer_factory.hpp:77] Creating layer pool1
I1210 14:58:31.067836 343 net.cpp:100] Creating Layer pool1
I1210 14:58:31.067859 343 net.cpp:434] pool1 <- conv1
I1210 14:58:31.067864 343 net.cpp:408] pool1 -> pool1
I1210 14:58:31.067948 343 net.cpp:150] Setting up pool1
I1210 14:58:31.067953 343 net.cpp:157] Top shape: 64 20 12 12 (184320)
I1210 14:58:31.067956 343 net.cpp:165] Memory required for data: 3887360
I1210 14:58:31.067960 343 layer_factory.hpp:77] Creating layer conv2
I1210 14:58:31.067967 343 net.cpp:100] Creating Layer conv2
I1210 14:58:31.067970 343 net.cpp:434] conv2 <- pool1
I1210 14:58:31.067975 343 net.cpp:408] conv2 -> conv2
I1210 14:58:31.069413 343 net.cpp:150] Setting up conv2
I1210 14:58:31.069423 343 net.cpp:157] Top shape: 64 50 8 8 (204800)
I1210 14:58:31.069427 343 net.cpp:165] Memory required for data: 4706560
I1210 14:58:31.069432 343 layer_factory.hpp:77] Creating layer pool2
I1210 14:58:31.069437 343 net.cpp:100] Creating Layer pool2
I1210 14:58:31.069440 343 net.cpp:434] pool2 <- conv2
I1210 14:58:31.069443 343 net.cpp:408] pool2 -> pool2
I1210 14:58:31.069475 343 net.cpp:150] Setting up pool2
I1210 14:58:31.069480 343 net.cpp:157] Top shape: 64 50 4 4 (51200)
I1210 14:58:31.069483 343 net.cpp:165] Memory required for data: 4911360
I1210 14:58:31.069485 343 layer_factory.hpp:77] Creating layer ip1
I1210 14:58:31.069491 343 net.cpp:100] Creating Layer ip1
I1210 14:58:31.069494 343 net.cpp:434] ip1 <- pool2
I1210 14:58:31.069499 343 net.cpp:408] ip1 -> ip1
I1210 14:58:31.071511 343 net.cpp:150] Setting up ip1
I1210 14:58:31.071521 343 net.cpp:157] Top shape: 64 500 (32000)
I1210 14:58:31.071522 343 net.cpp:165] Memory required for data: 5039360
I1210 14:58:31.071528 343 layer_factory.hpp:77] Creating layer relu1
I1210 14:58:31.071533 343 net.cpp:100] Creating Layer relu1
I1210 14:58:31.071537 343 net.cpp:434] relu1 <- ip1
I1210 14:58:31.071540 343 net.cpp:395] relu1 -> ip1 (in-place)
I1210 14:58:31.071892 343 net.cpp:150] Setting up relu1
I1210 14:58:31.071898 343 net.cpp:157] Top shape: 64 500 (32000)
I1210 14:58:31.071902 343 net.cpp:165] Memory required for data: 5167360
I1210 14:58:31.071904 343 layer_factory.hpp:77] Creating layer ip2
I1210 14:58:31.071910 343 net.cpp:100] Creating Layer ip2
I1210 14:58:31.071913 343 net.cpp:434] ip2 <- ip1
I1210 14:58:31.071918 343 net.cpp:408] ip2 -> ip2
I1210 14:58:31.072499 343 net.cpp:150] Setting up ip2
I1210 14:58:31.072506 343 net.cpp:157] Top shape: 64 10 (640)
I1210 14:58:31.072510 343 net.cpp:165] Memory required for data: 5169920
I1210 14:58:31.072515 343 layer_factory.hpp:77] Creating layer loss
I1210 14:58:31.072520 343 net.cpp:100] Creating Layer loss
I1210 14:58:31.072522 343 net.cpp:434] loss <- ip2
I1210 14:58:31.072525 343 net.cpp:434] loss <- label
I1210 14:58:31.072532 343 net.cpp:408] loss -> loss
I1210 14:58:31.072542 343 layer_factory.hpp:77] Creating layer loss
I1210 14:58:31.072957 343 net.cpp:150] Setting up loss
I1210 14:58:31.072964 343 net.cpp:157] Top shape: (1)
I1210 14:58:31.072966 343 net.cpp:160] with loss weight 1
I1210 14:58:31.072978 343 net.cpp:165] Memory required for data: 5169924
I1210 14:58:31.072981 343 net.cpp:226] loss needs backward computation.
I1210 14:58:31.072986 343 net.cpp:226] ip2 needs backward computation.
I1210 14:58:31.072989 343 net.cpp:226] relu1 needs backward computation.
I1210 14:58:31.072993 343 net.cpp:226] ip1 needs backward computation.
I1210 14:58:31.072996 343 net.cpp:226] pool2 needs backward computation.
I1210 14:58:31.072999 343 net.cpp:226] conv2 needs backward computation.
I1210 14:58:31.073002 343 net.cpp:226] pool1 needs backward computation.
I1210 14:58:31.073004 343 net.cpp:226] conv1 needs backward computation.
I1210 14:58:31.073010 343 net.cpp:228] mnist does not need backward computation.
I1210 14:58:31.073014 343 net.cpp:270] This network produces output loss
I1210 14:58:31.073019 343 net.cpp:283] Network initialization done.
I1210 14:58:31.073135 343 solver.cpp:196] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I1210 14:58:31.073166 343 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I1210 14:58:31.073215 343 net.cpp:58] Initializing net from parameters:
name: "LeNet"
state {
phase: TEST
}
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/mnist_test_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
I1210 14:58:31.073285 343 layer_factory.hpp:77] Creating layer mnist
I1210 14:58:31.073367 343 net.cpp:100] Creating Layer mnist
I1210 14:58:31.073374 343 net.cpp:408] mnist -> data
I1210 14:58:31.073380 343 net.cpp:408] mnist -> label
I1210 14:58:31.073958 356 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I1210 14:58:31.074043 343 data_layer.cpp:41] output data size: 100,1,28,28
I1210 14:58:31.074859 343 net.cpp:150] Setting up mnist
I1210 14:58:31.074869 343 net.cpp:157] Top shape: 100 1 28 28 (78400)
I1210 14:58:31.074873 343 net.cpp:157] Top shape: 100 (100)
I1210 14:58:31.074875 343 net.cpp:165] Memory required for data: 314000
I1210 14:58:31.074878 343 layer_factory.hpp:77] Creating layer label_mnist_1_split
I1210 14:58:31.074885 343 net.cpp:100] Creating Layer label_mnist_1_split
I1210 14:58:31.074888 343 net.cpp:434] label_mnist_1_split <- label
I1210 14:58:31.074892 343 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_0
I1210 14:58:31.074898 343 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_1
I1210 14:58:31.074932 343 net.cpp:150] Setting up label_mnist_1_split
I1210 14:58:31.074936 343 net.cpp:157] Top shape: 100 (100)
I1210 14:58:31.074960 343 net.cpp:157] Top shape: 100 (100)
...
I1210 14:58:45.304703 343 solver.cpp:259] Train net output #0: loss = 0.00905059 (* 1 = 0.00905059 loss)
I1210 14:58:45.304708 343 sgd_solver.cpp:138] Iteration 9300, lr = 0.00610706
I1210 14:58:45.447075 343 solver.cpp:243] Iteration 9400, loss = 0.0251583
I1210 14:58:45.447098 343 solver.cpp:259] Train net output #0: loss = 0.0251585 (* 1 = 0.0251585 loss)
I1210 14:58:45.447103 343 sgd_solver.cpp:138] Iteration 9400, lr = 0.00608343
I1210 14:58:45.587970 343 solver.cpp:358] Iteration 9500, Testing net (#0)
I1210 14:58:45.639950 343 solver.cpp:425] Test net output #0: accuracy = 0.989
I1210 14:58:45.639971 343 solver.cpp:425] Test net output #1: loss = 0.0345453 (* 1 = 0.0345453 loss)
I1210 14:58:45.640565 343 solver.cpp:243] Iteration 9500, loss = 0.00247001
I1210 14:58:45.640578 343 solver.cpp:259] Train net output #0: loss = 0.00247023 (* 1 = 0.00247023 loss)
I1210 14:58:45.640599 343 sgd_solver.cpp:138] Iteration 9500, lr = 0.00606002
I1210 14:58:45.781831 343 solver.cpp:243] Iteration 9600, loss = 0.00303816
I1210 14:58:45.781853 343 solver.cpp:259] Train net output #0: loss = 0.00303838 (* 1 = 0.00303838 loss)
I1210 14:58:45.781858 343 sgd_solver.cpp:138] Iteration 9600, lr = 0.00603682
I1210 14:58:45.923971 343 solver.cpp:243] Iteration 9700, loss = 0.002835
I1210 14:58:45.923993 343 solver.cpp:259] Train net output #0: loss = 0.00283521 (* 1 = 0.00283521 loss)
I1210 14:58:45.924000 343 sgd_solver.cpp:138] Iteration 9700, lr = 0.00601382
I1210 14:58:46.071753 343 solver.cpp:243] Iteration 9800, loss = 0.0137216
I1210 14:58:46.071775 343 solver.cpp:259] Train net output #0: loss = 0.0137218 (* 1 = 0.0137218 loss)
I1210 14:58:46.071780 343 sgd_solver.cpp:138] Iteration 9800, lr = 0.00599102
I1210 14:58:46.211102 343 solver.cpp:243] Iteration 9900, loss = 0.00450876
I1210 14:58:46.211122 343 solver.cpp:259] Train net output #0: loss = 0.00450896 (* 1 = 0.00450896 loss)
I1210 14:58:46.211127 343 sgd_solver.cpp:138] Iteration 9900, lr = 0.00596843
I1210 14:58:46.351517 343 solver.cpp:596] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I1210 14:58:46.356554 343 sgd_solver.cpp:307] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I1210 14:58:46.358690 343 solver.cpp:332] Iteration 10000, loss = 0.00307509
I1210 14:58:46.358705 343 solver.cpp:358] Iteration 10000, Testing net (#0)
I1210 14:58:46.410430 343 solver.cpp:425] Test net output #0: accuracy = 0.9909
I1210 14:58:46.410466 343 solver.cpp:425] Test net output #1: loss = 0.0283011 (* 1 = 0.0283011 loss)
I1210 14:58:46.410472 343 solver.cpp:337] Optimization Done.
I1210 14:58:46.410478 343 caffe.cpp:254] Optimization Done.
2. Testing
2.1 First create a new file test_lenet.sh:
sudo gedit test_lenet.sh
2.2 Add the following content, which specifies the model and the weights:
./build/tools/caffe.bin test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -iterations 100
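The log below reports "Use CPU." because no GPU flag is passed. To evaluate on the GPU instead, one can append -gpu to the same command (0 is the device id used for training above):

./build/tools/caffe.bin test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -iterations 100 -gpu 0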
Run the script:
sudo sh test_lenet.sh
I1210 15:04:57.474340 1817 caffe.cpp:279] Use CPU.
I1210 15:04:57.668058 1817 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I1210 15:04:57.668138 1817 net.cpp:58] Initializing net from parameters:
name: "LeNet"
state {
phase: TEST
level: 0
stage: ""
}
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/mnist_test_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
I1210 15:04:57.668231 1817 layer_factory.hpp:77] Creating layer mnist
I1210 15:04:57.668817 1817 net.cpp:100] Creating Layer mnist
I1210 15:04:57.668828 1817 net.cpp:408] mnist -> data
I1210 15:04:57.668843 1817 net.cpp:408] mnist -> label
I1210 15:04:57.669452 1842 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I1210 15:04:57.669580 1817 data_layer.cpp:41] output data size: 100,1,28,28
I1210 15:04:57.669999 1817 net.cpp:150] Setting up mnist
I1210 15:04:57.670009 1817 net.cpp:157] Top shape: 100 1 28 28 (78400)
I1210 15:04:57.670013 1817 net.cpp:157] Top shape: 100 (100)
I1210 15:04:57.670015 1817 net.cpp:165] Memory required for data: 314000
I1210 15:04:57.670020 1817 layer_factory.hpp:77] Creating layer label_mnist_1_split
I1210 15:04:57.670051 1817 net.cpp:100] Creating Layer label_mnist_1_split
I1210 15:04:57.670058 1817 net.cpp:434] label_mnist_1_split <- label
I1210 15:04:57.670066 1817 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_0
I1210 15:04:57.670074 1817 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_1
I1210 15:04:57.670081 1817 net.cpp:150] Setting up label_mnist_1_split
I1210 15:04:57.670085 1817 net.cpp:157] Top shape: 100 (100)
I1210 15:04:57.670089 1817 net.cpp:157] Top shape: 100 (100)
I1210 15:04:57.670090 1817 net.cpp:165] Memory required for data: 314800
I1210 15:04:57.670094 1817 layer_factory.hpp:77] Creating layer conv1
I1210 15:04:57.670104 1817 net.cpp:100] Creating Layer conv1
I1210 15:04:57.670107 1817 net.cpp:434] conv1 <- data
I1210 15:04:57.670111 1817 net.cpp:408] conv1 -> conv1
I1210 15:04:58.105351 1817 net.cpp:150] Setting up conv1
I1210 15:04:58.105371 1817 net.cpp:157] Top shape: 100 20 24 24 (1152000)
I1210 15:04:58.105376 1817 net.cpp:165] Memory required for data: 4922800
I1210 15:04:58.105406 1817 layer_factory.hpp:77] Creating layer pool1
I1210 15:04:58.105437 1817 net.cpp:100] Creating Layer pool1
I1210 15:04:58.105440 1817 net.cpp:434] pool1 <- conv1
I1210 15:04:58.105446 1817 net.cpp:408] pool1 -> pool1
I1210 15:04:58.105464 1817 net.cpp:150] Setting up pool1
I1210 15:04:58.105469 1817 net.cpp:157] Top shape: 100 20 12 12 (288000)
I1210 15:04:58.105471 1817 net.cpp:165] Memory required for data: 6074800
I1210 15:04:58.105475 1817 layer_factory.hpp:77] Creating layer conv2
I1210 15:04:58.105501 1817 net.cpp:100] Creating Layer conv2
I1210 15:04:58.105520 1817 net.cpp:434] conv2 <- pool1
I1210 15:04:58.105525 1817 net.cpp:408] conv2 -> conv2
I1210 15:04:58.106905 1817 net.cpp:150] Setting up conv2
I1210 15:04:58.106915 1817 net.cpp:157] Top shape: 100 50 8 8 (320000)
I1210 15:04:58.106918 1817 net.cpp:165] Memory required for data: 7354800
I1210 15:04:58.106925 1817 layer_factory.hpp:77] Creating layer pool2
I1210 15:04:58.106943 1817 net.cpp:100] Creating Layer pool2
I1210 15:04:58.106947 1817 net.cpp:434] pool2 <- conv2
I1210 15:04:58.106952 1817 net.cpp:408] pool2 -> pool2
I1210 15:04:58.106958 1817 net.cpp:150] Setting up pool2
I1210 15:04:58.106963 1817 net.cpp:157] Top shape: 100 50 4 4 (80000)
I1210 15:04:58.106966 1817 net.cpp:165] Memory required for data: 7674800
I1210 15:04:58.106968 1817 layer_factory.hpp:77] Creating layer ip1
I1210 15:04:58.106976 1817 net.cpp:100] Creating Layer ip1
I1210 15:04:58.106979 1817 net.cpp:434] ip1 <- pool2
I1210 15:04:58.106983 1817 net.cpp:408] ip1 -> ip1
I1210 15:04:58.108822 1817 net.cpp:150] Setting up ip1
I1210 15:04:58.108829 1817 net.cpp:157] Top shape: 100 500 (50000)
I1210 15:04:58.108831 1817 net.cpp:165] Memory required for data: 7874800
I1210 15:04:58.108836 1817 layer_factory.hpp:77] Creating layer relu1
I1210 15:04:58.108840 1817 net.cpp:100] Creating Layer relu1
I1210 15:04:58.108844 1817 net.cpp:434] relu1 <- ip1
I1210 15:04:58.108861 1817 net.cpp:395] relu1 -> ip1 (in-place)
I1210 15:04:58.109218 1817 net.cpp:150] Setting up relu1
I1210 15:04:58.109225 1817 net.cpp:157] Top shape: 100 500 (50000)
I1210 15:04:58.109228 1817 net.cpp:165] Memory required for data: 8074800
I1210 15:04:58.109231 1817 layer_factory.hpp:77] Creating layer ip2
I1210 15:04:58.109252 1817 net.cpp:100] Creating Layer ip2
I1210 15:04:58.109256 1817 net.cpp:434] ip2 <- ip1
I1210 15:04:58.109261 1817 net.cpp:408] ip2 -> ip2
I1210 15:04:58.109295 1817 net.cpp:150] Setting up ip2
I1210 15:04:58.109299 1817 net.cpp:157] Top shape: 100 10 (1000)
I1210 15:04:58.109302 1817 net.cpp:165] Memory required for data: 8078800
I1210 15:04:58.109306 1817 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I1210 15:04:58.109313 1817 net.cpp:100] Creating Layer ip2_ip2_0_split
I1210 15:04:58.109315 1817 net.cpp:434] ip2_ip2_0_split <- ip2
I1210 15:04:58.109319 1817 net.cpp:408] ip2_ip2_0_split -> ip2_ip2_0_split_0
I1210 15:04:58.109325 1817 net.cpp:408] ip2_ip2_0_split -> ip2_ip2_0_split_1
I1210 15:04:58.109330 1817 net.cpp:150] Setting up ip2_ip2_0_split
I1210 15:04:58.109334 1817 net.cpp:157] Top shape: 100 10 (1000)
I1210 15:04:58.109338 1817 net.cpp:157] Top shape: 100 10 (1000)
I1210 15:04:58.109340 1817 net.cpp:165] Memory required for data: 8086800
I1210 15:04:58.109344 1817 layer_factory.hpp:77] Creating layer accuracy
I1210 15:04:58.109349 1817 net.cpp:100] Creating Layer accuracy
I1210 15:04:58.109351 1817 net.cpp:434] accuracy <- ip2_ip2_0_split_0
I1210 15:04:58.109354 1817 net.cpp:434] accuracy <- label_mnist_1_split_0
I1210 15:04:58.109359 1817 net.cpp:408] accuracy -> accuracy
I1210 15:04:58.109364 1817 net.cpp:150] Setting up accuracy
I1210 15:04:58.109367 1817 net.cpp:157] Top shape: (1)
I1210 15:04:58.109370 1817 net.cpp:165] Memory required for data: 8086804
I1210 15:04:58.109374 1817 layer_factory.hpp:77] Creating layer loss
I1210 15:04:58.109380 1817 net.cpp:100] Creating Layer loss
I1210 15:04:58.109382 1817 net.cpp:434] loss <- ip2_ip2_0_split_1
I1210 15:04:58.109386 1817 net.cpp:434] loss <- label_mnist_1_split_1
I1210 15:04:58.109421 1817 net.cpp:408] loss -> loss
I1210 15:04:58.109429 1817 layer_factory.hpp:77] Creating layer loss
I1210 15:04:58.109791 1817 net.cpp:150] Setting up loss
I1210 15:04:58.109798 1817 net.cpp:157] Top shape: (1)
I1210 15:04:58.109802 1817 net.cpp:160] with loss weight 1
I1210 15:04:58.109814 1817 net.cpp:165] Memory required for data: 8086808
I1210 15:04:58.109817 1817 net.cpp:226] loss needs backward computation.
I1210 15:04:58.109823 1817 net.cpp:228] accuracy does not need backward computation.
I1210 15:04:58.109827 1817 net.cpp:226] ip2_ip2_0_split needs backward computation.
I1210 15:04:58.109830 1817 net.cpp:226] ip2 needs backward computation.
I1210 15:04:58.109833 1817 net.cpp:226] relu1 needs backward computation.
I1210 15:04:58.109838 1817 net.cpp:226] ip1 needs backward computation.
I1210 15:04:58.109840 1817 net.cpp:226] pool2 needs backward computation.
I1210 15:04:58.109843 1817 net.cpp:226] conv2 needs backward computation.
I1210 15:04:58.109846 1817 net.cpp:226] pool1 needs backward computation.
I1210 15:04:58.109850 1817 net.cpp:226] conv1 needs backward computation.
I1210 15:04:58.109853 1817 net.cpp:228] label_mnist_1_split does not need backward computation.
I1210 15:04:58.109858 1817 net.cpp:228] mnist does not need backward computation.
I1210 15:04:58.109860 1817 net.cpp:270] This network produces output accuracy
I1210 15:04:58.109864 1817 net.cpp:270] This network produces output loss
I1210 15:04:58.109874 1817 net.cpp:283] Network initialization done.
I1210 15:04:58.111215 1817 caffe.cpp:285] Running for 100 iterations.
I1210 15:04:58.132164 1817 caffe.cpp:308] Batch 0, accuracy = 1
I1210 15:04:58.132187 1817 caffe.cpp:308] Batch 0, loss = 0.00986392
I1210 15:04:58.150775 1817 caffe.cpp:308] Batch 1, accuracy = 1
I1210 15:04:58.150795 1817 caffe.cpp:308] Batch 1, loss = 0.00429982
I1210 15:04:58.169167 1817 caffe.cpp:308] Batch 2, accuracy = 0.99
I1210 15:04:58.169183 1817 caffe.cpp:308] Batch 2, loss = 0.025451
I1210 15:04:58.186553 1817 caffe.cpp:308] Batch 3, accuracy = 0.99
I1210 15:04:58.186573 1817 caffe.cpp:308] Batch 3, loss = 0.0285686
I1210 15:04:58.204509 1817 caffe.cpp:308] Batch 4, accuracy = 0.99
I1210 15:04:58.204527 1817 caffe.cpp:308] Batch 4, loss = 0.0695804
I1210 15:04:58.222080 1817 caffe.cpp:308] Batch 5, accuracy = 0.99
I1210 15:04:58.222098 1817 caffe.cpp:308] Batch 5, loss = 0.0460763
...
I1210 15:04:59.981992 1817 caffe.cpp:308] Batch 98, accuracy = 1
I1210 15:04:59.982015 1817 caffe.cpp:308] Batch 98, loss = 0.00464218
I1210 15:05:00.013013 1817 caffe.cpp:308] Batch 99, accuracy = 1
I1210 15:05:00.013046 1817 caffe.cpp:308] Batch 99, loss = 0.00950743
I1210 15:05:00.013056 1817 caffe.cpp:313] Loss: 0.0283011
I1210 15:05:00.013063 1817 caffe.cpp:325] accuracy = 0.9909
I1210 15:05:00.013078 1817 caffe.cpp:325] loss = 0.0283011 (* 1 = 0.0283011 loss)
MATLAB Caffe interface test
Download the model (about 200 MB) from:
https://github.com/BVLC/caffe/tree/master/models/bvlc_reference_caffenet
Place it in the corresponding location (models/bvlc_reference_caffenet).
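Alternatively, upstream Caffe ships a download helper that fetches the weights straight into that folder (run from the caffe-ssd root; the script is inherited from BVLC Caffe):

python scripts/download_model_binary.py models/bvlc_reference_caffenet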
Then go to the directory /home/XX/Downloads/caffe-ssd/matlab/demo and run
classification_demo.m
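To launch it from a terminal instead of the MATLAB desktop, something like this works (path as above; adjust to your install):

cd /home/XX/Downloads/caffe-ssd/matlab/demo
matlab -nodesktop -r "classification_demo; exit"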
This fails with:
Undefined function or variable 'caffe_'.
Error in caffe.set_mode_cpu (line 5)
caffe_('set_mode_cpu');
Error in classification_demo (line 70)
caffe.set_mode_cpu();
Fix: replace MATLAB's bundled libstdc++ with a symlink to the system copy:
sudo ln -sf /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /usr/local/MATLAB/R2018a/bin/glnxa64/libstdc++.so.6
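The crash happens because the bundled library is older and lacks GLIBCXX symbols that the caffe_ MEX file was compiled against. To compare what the two copies export (a quick diagnostic; after the symlink they should match):

strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX | tail -n 3
strings /usr/local/MATLAB/R2018a/bin/glnxa64/libstdc++.so.6 | grep GLIBCXX | tail -n 3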
Then, from the caffe folder, rebuild and test the MATLAB interface:
sudo make mattest -j8
To use the GPU, add a setting at line 65 of classification_demo.m so the existing check takes the GPU branch:

% Set caffe mode
use_gpu = 1;  % added at line 65
if exist('use_gpu', 'var') && use_gpu
Run classification_demo.m again:
using caffe/examples/images/cat.jpg as input image
Elapsed time is 0.011105 seconds.
Elapsed time is 0.028331 seconds.
Cleared 0 solvers and 1 stand-alone nets
ans =
1000×1 single column vector
0.0000
0.0000
0.0000
0.0000
0.0000
0.0000
...
0.0000
0.0000
0.0000