Deep Learning: Building a Simple Regression Model with a DNN
阿新 • Published: 2018-12-09
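The snippets below use the Keras functional API and assume a few variables the post never defines: size (the number of input features), features/labels for training, eval_features/eval_labels for validation, and test_features for prediction. A minimal sketch with synthetic data, shapes chosen purely for illustration, is:

import numpy as np

# Hypothetical stand-in data so the snippets below can run end to end.
size = 20                                    # number of input features (assumption)
features = np.random.rand(10000, size)       # training features
labels = np.random.rand(10000)               # training targets
eval_features = np.random.rand(2000, size)   # validation features
eval_labels = np.random.rand(2000)           # validation targets
test_features = np.random.rand(3000, size)   # test features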
(1) Build the regression neural network model
from keras.layers import Input, Dense, Dropout
from keras.models import Model

inputs = Input(shape=(size,), dtype='float32')
dropout = Dropout(0)(inputs)                     # rate 0, effectively a pass-through
output = Dense(512, activation='relu')(dropout)
dropout = Dropout(0.15)(output)
output = Dense(256, activation='relu')(dropout)
outputs = Dense(1)(output)                       # single linear unit for regression
model = Model(inputs=inputs, outputs=outputs)    # inputs=/outputs= replace the deprecated input=/output=
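To confirm the architecture matches the intent (size → 512 → 256 → 1), a quick sanity check is:

model.summary()  # prints each layer's output shape and parameter count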
(2) Compile the neural network; the Adam optimizer copes well with sparse inputs and gradients
from keras.optimizers import Adam

optimizer = Adam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08)  # newer Keras versions spell this learning_rate
model.compile(loss='mae', optimizer=optimizer, metrics=['mape', 'mae'])  # 'acc' dropped: accuracy is not meaningful for regression
(3) Train the neural network
from keras.callbacks import TensorBoard

tensor_board = TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True, write_images=False,
                           embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None)
model.fit(np.array(features), labels, batch_size=2000, epochs=1000,
          validation_data=(np.array(eval_features), eval_labels), callbacks=[tensor_board])
Run from the command line: tensorboard --logdir=./logs, then open http://127.0.0.1:6006/ in a browser.
(4) Evaluate the model on the validation set
cost = model.evaluate(np.array(eval_features), eval_labels, batch_size=100)
print('evaluate cost: %s' % str(cost))
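Since several metrics are compiled alongside the loss, cost comes back as a list; pairing it with model.metrics_names makes the output easier to read (a small usage sketch):

print(dict(zip(model.metrics_names, cost)))  # e.g. {'loss': ..., 'mean_absolute_percentage_error': ..., 'mean_absolute_error': ...}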
(5) Predict on the test data
preds = model.predict(np.array(test_features))
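predict returns an array of shape (n_samples, 1); a hypothetical follow-up (assuming pandas is available and a CSV file is the desired output) could flatten and save it:

import pandas as pd

# Hypothetical post-processing: flatten the (n, 1) predictions and write them to a CSV file.
pd.DataFrame({'prediction': preds.flatten()}).to_csv('predictions.csv', index=False)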