A quick speed test of Keras vs MXNet.
Both are very easy to use and a good fit for a beginner like me.
Setup: Windows 10 64-bit, CUDA 8.0, cuDNN 5.1, GTX 1060
Task: a CNN on MNIST
import gzip
import os
import struct

import numpy


def read_data(label_name, image_name):
    # Load an MNIST label/image pair from gzipped IDX files under %DATA%\MNIST.
    base = os.path.join(os.getenv('DATA'), 'MNIST')
    with gzip.open(os.path.join(base, label_name), 'rb') as flbl:
        magic, num = struct.unpack(">II", flbl.read(8))
        label = numpy.frombuffer(flbl.read(), dtype=numpy.int8)
    with gzip.open(os.path.join(base, image_name), 'rb') as fimg:
        magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
        image = numpy.frombuffer(fimg.read(), dtype=numpy.uint8).reshape(len(label), rows, cols)
    return (label, image)


(train_lbl, train_img) = read_data('train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz')
(val_lbl, val_img) = read_data('t10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz')


def to4d(img):
    # (n, 28, 28) uint8 -> (n, 1, 28, 28) float32 scaled to [0, 1]
    return img.reshape(img.shape[0], 1, 28, 28).astype(numpy.float32) / 255


def repack_data(d):
    # One-hot encode integer labels into an (n, 10) matrix.
    t = numpy.zeros((d.size, 10))
    for i in range(d.size):
        t[i][d[i]] = 1
    return t


train_img = to4d(train_img)
val_img = to4d(val_img)
batch_size = 100
num_epoch = 5

# backend = 'mxnet'
backend = 'keras'

if backend == 'keras':
    from keras.models import Sequential
    from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
    from keras.optimizers import Adadelta

    model = Sequential()
    model.add(Convolution2D(64, 5, 5, input_shape=(1, 28, 28), init='uniform', activation='relu'))
    model.add(MaxPooling2D())
    model.add(Convolution2D(128, 5, 5, init='uniform', activation='relu'))
    model.add(MaxPooling2D())
    model.add(Flatten())
    model.add(Dense(1024, init='uniform', activation='relu'))
    model.add(Dense(1024, init='uniform', activation='relu'))
    model.add(Dense(10, init='uniform', activation='softmax'))
    model.summary()
    model.compile(loss='categorical_crossentropy', optimizer=Adadelta(), metrics=['accuracy'])
    model.fit(train_img, repack_data(train_lbl), batch_size=batch_size, nb_epoch=num_epoch,
              validation_data=(val_img, repack_data(val_lbl)))
else:
    import logging
    import mxnet

    logging.getLogger().setLevel(logging.DEBUG)

    train_iter = mxnet.io.NDArrayIter(train_img, train_lbl, batch_size, shuffle=True)
    val_iter = mxnet.io.NDArrayIter(val_img, val_lbl, batch_size)

    # Same architecture, built with MXNet's symbolic API.
    data = mxnet.symbol.Variable('data')
    conv1 = mxnet.sym.Convolution(data=data, kernel=(5, 5), num_filter=64)
    relu1 = mxnet.sym.Activation(data=conv1, act_type="relu")
    pool1 = mxnet.sym.Pooling(data=relu1, pool_type="max", kernel=(2, 2), stride=(2, 2))
    conv2 = mxnet.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=128)
    relu2 = mxnet.sym.Activation(data=conv2, act_type="relu")
    pool2 = mxnet.sym.Pooling(data=relu2, pool_type="max", kernel=(2, 2), stride=(2, 2))
    flatten = mxnet.sym.Flatten(data=pool2)
    fc1 = mxnet.sym.FullyConnected(data=flatten, num_hidden=1024)
    relu3 = mxnet.sym.Activation(data=fc1, act_type="relu")
    fc2 = mxnet.sym.FullyConnected(data=relu3, num_hidden=1024)
    relu4 = mxnet.sym.Activation(data=fc2, act_type="relu")
    fc3 = mxnet.sym.FullyConnected(data=relu4, num_hidden=10)
    net = mxnet.sym.SoftmaxOutput(data=fc3, name='softmax')

    # Render the network graph to mxnet.pdf (requires graphviz).
    mxnet.viz.plot_network(symbol=net, shape={"data": (batch_size, 1, 28, 28)}).render('mxnet')

    model = mxnet.model.FeedForward(
        ctx=mxnet.gpu(0),  # train on GPU 0
        symbol=net,
        num_epoch=num_epoch,
        learning_rate=0.1,
        optimizer='AdaDelta',
        initializer=mxnet.initializer.Uniform())
    model.fit(
        X=train_iter,
        eval_data=val_iter,
        batch_end_callback=mxnet.callback.Speedometer(batch_size, 200))
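(As an aside, the hand-rolled repack_data loop above can be replaced by a Keras built-in; a small sketch, assuming the same Keras 1.x API the rest of the script uses:)

from keras.utils import np_utils

# One-hot encode integer labels into an (n, 10) matrix,
# equivalent to repack_data above.
train_lbl_onehot = np_utils.to_categorical(train_lbl, 10)
val_lbl_onehot = np_utils.to_categorical(val_lbl, 10)

The model.summary() call in the Keras branch prints the following: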
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
convolution2d_1 (Convolution2D)  (None, 64, 24, 24)    1664        convolution2d_input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 64, 12, 12)    0           convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 128, 8, 8)     204928      maxpooling2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)    (None, 128, 4, 4)     0           convolution2d_2[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 2048)          0           maxpooling2d_2[0][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 1024)          2098176     flatten_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 1024)          1049600     dense_1[0][0]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 10)            10250       dense_2[0][0]
====================================================================================================
Total params: 3364618
____________________________________________________________________________________________________
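The parameter counts check out by hand (weights plus biases per layer):

conv1:   5*5*1*64 + 64    = 1664
conv2:   5*5*64*128 + 128 = 204928
dense_1: 2048*1024 + 1024 = 2098176
dense_2: 1024*1024 + 1024 = 1049600
dense_3: 1024*10 + 10     = 10250
total: 1664 + 204928 + 2098176 + 1049600 + 10250 = 3364618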
Keras (Theano backend)
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 7s - loss: 0.1975 - acc: 0.9379 - val_loss: 0.0450 - val_acc: 0.9856
Epoch 2/5
60000/60000 [==============================] - 7s - loss: 0.0449 - acc: 0.9857 - val_loss: 0.0351 - val_acc: 0.9891
Epoch 3/5
60000/60000 [==============================] - 7s - loss: 0.0303 - acc: 0.9907 - val_loss: 0.0248 - val_acc: 0.9921
Epoch 4/5
60000/60000 [==============================] - 7s - loss: 0.0207 - acc: 0.9932 - val_loss: 0.0257 - val_acc: 0.9920
Epoch 5/5
60000/60000 [==============================] - 7s - loss: 0.0151 - acc: 0.9954 - val_loss: 0.0232 - val_acc: 0.9929
MXNet
INFO:root:Start training with [gpu(0)]
INFO:root:Epoch[0] Batch [200]  Speed: 2960.54 samples/sec  Train-accuracy=0.845600
INFO:root:Epoch[0] Batch [400]  Speed: 2878.78 samples/sec  Train-accuracy=0.975150
INFO:root:Epoch[0] Batch [600]  Speed: 2875.59 samples/sec  Train-accuracy=0.980750
INFO:root:Epoch[0] Resetting Data Iterator
INFO:root:Epoch[0] Time cost=21.459
INFO:root:Epoch[0] Validation-accuracy=0.986700
INFO:root:Epoch[1] Batch [200]  Speed: 2888.17 samples/sec  Train-accuracy=0.985850
INFO:root:Epoch[1] Batch [400]  Speed: 2867.33 samples/sec  Train-accuracy=0.988150
INFO:root:Epoch[1] Batch [600]  Speed: 2867.63 samples/sec  Train-accuracy=0.990200
INFO:root:Epoch[1] Resetting Data Iterator
INFO:root:Epoch[1] Time cost=20.874
INFO:root:Epoch[1] Validation-accuracy=0.980700
INFO:root:Epoch[2] Batch [200]  Speed: 2894.78 samples/sec  Train-accuracy=0.992200
INFO:root:Epoch[2] Batch [400]  Speed: 2876.13 samples/sec  Train-accuracy=0.993150
INFO:root:Epoch[2] Batch [600]  Speed: 2858.85 samples/sec  Train-accuracy=0.994650
INFO:root:Epoch[2] Resetting Data Iterator
INFO:root:Epoch[2] Time cost=20.875
INFO:root:Epoch[2] Validation-accuracy=0.990300
INFO:root:Epoch[3] Batch [200]  Speed: 2879.48 samples/sec  Train-accuracy=0.994600
INFO:root:Epoch[3] Batch [400]  Speed: 2859.86 samples/sec  Train-accuracy=0.995800
INFO:root:Epoch[3] Batch [600]  Speed: 2860.25 samples/sec  Train-accuracy=0.995800
INFO:root:Epoch[3] Resetting Data Iterator
INFO:root:Epoch[3] Time cost=20.951
INFO:root:Epoch[3] Validation-accuracy=0.990300
INFO:root:Epoch[4] Batch [200]  Speed: 2887.86 samples/sec  Train-accuracy=0.995750
INFO:root:Epoch[4] Batch [400]  Speed: 2865.84 samples/sec  Train-accuracy=0.997100
INFO:root:Epoch[4] Batch [600]  Speed: 2868.30 samples/sec  Train-accuracy=0.997700
INFO:root:Epoch[4] Resetting Data Iterator
INFO:root:Epoch[4] Time cost=20.915
INFO:root:Epoch[4] Validation-accuracy=0.988300
I'm quite happy with Keras's speed: it's basically what a card in this class should deliver, and GPU utilization sits at 100% most of the time.
But compilation on the Theano backend is slow, slow, painfully slow!
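That stall is presumably the one-time cost of Theano compiling the graph down to C/CUDA code, not a per-epoch cost. The backend itself is just a config switch; a sketch, assuming Keras 1.x (the script name is a placeholder):

# ~/.keras/keras.json holds "backend": "theano" (or "tensorflow");
# or override it for a single run via an environment variable:
#   Windows:      set KERAS_BACKEND=tensorflow
#   Linux/macOS:  KERAS_BACKEND=tensorflow python train_mnist.py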
MXNet, though, is slow: roughly 3x the time per epoch, as the arithmetic below shows! Even an official example runs at half the speed of a GTX 980, so I suspect something in my setup is misconfigured.
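A quick sanity check from the logs above: Keras does 60000 samples in about 7 s per epoch, roughly 8570 samples/sec; MXNet's Speedometer reports around 2880 samples/sec, i.e. 60000 / 2880 ≈ 20.8 s, matching the logged Time cost of ~20.9 s. That is a factor of 8570 / 2880 ≈ 3.0.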
I did notice that CPU usage sits at 100% the whole time while MXNet trains, which may be the culprit (a tuning sketch follows)....
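If the CPU-side work really is the bottleneck, MXNet's engine threading can be tuned through environment variables; a minimal sketch to experiment with (the thread counts are guesses, not a verified fix):

import os
# These must be set before `import mxnet`; the values are experimental guesses.
os.environ['MXNET_CPU_WORKER_NTHREADS'] = '4'  # workers for CPU-side operators
os.environ['MXNET_GPU_WORKER_NTHREADS'] = '2'  # workers for GPU operators
import mxnet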
A sad story.