keras: 2) Functional models
阿新 • Published: 2018-11-10
Compared with the Sequential model, the functional model is more flexible (the Sequential model is a special case of the functional one). This post gives a brief introduction to the functional API; see the official site for the details.
The official example:
from keras.layers import Input, Dense
from keras.models import Model

# This returns a tensor
inputs = Input(shape=(784,))

# a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

# This creates a model that includes
# the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)  # starts training
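Note that `data` and `labels` are undefined in the official snippet. A minimal runnable sketch of the same model, substituting random arrays for the missing data (the random inputs and one-hot labels are stand-ins, not part of the original):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Random stand-ins for the undefined `data` and `labels`
data = np.random.rand(32, 784).astype('float32')
labels = np.eye(10)[np.random.randint(0, 10, 32)]  # one-hot, shape (32, 10)

history = model.fit(data, labels, epochs=1, batch_size=8, verbose=0)
print(len(history.history['loss']))  # 1 (one epoch)
```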
layers.Input is a special case: it directly returns a tensor object. Other layers such as layers.Dense,

Dense(64, activation='relu')

return a layer instance (which can also be viewed as a function); that layer is then applied to the tensor that follows in parentheses. The `function()()` pattern is strikingly similar to currying in Scala.
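One consequence of "a layer is a function" is that the same layer instance can be called on several tensors and its weights are shared between the calls. A small sketch (the layer sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# A Dense instance behaves like a function: calling it on a tensor
# returns a new tensor, and repeated calls reuse the same weights.
shared = Dense(8, activation='relu')

a = Input(shape=(16,))
b = Input(shape=(16,))
out_a = shared(a)  # first call builds the weights
out_b = shared(b)  # second call reuses them

model = Model(inputs=[a, b], outputs=[out_a, out_b])
x = np.ones((1, 16), dtype='float32')
ya, yb = model.predict([x, x], verbose=0)
print(ya.shape)  # (1, 8); identical input + shared weights -> ya == yb
```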
Q1: How can we use some of a model's layers as a sub-model, so that feeding it input data yields the corresponding intermediate output?
Take the handwritten-digit autoencoder as an example:
from keras.layers import Input, Dense
from keras.models import Model
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
What we end up with is the full encode-then-decode model, but suppose we want to inspect what the encoder produces for a given input. How do we do that?
encoder = Model(input_img, encoded)
# x_test is the input test data
encoded_imgs = encoder.predict(x_test)
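This works because `encoder` is built from the same tensors, so it shares the autoencoder's layers (and weights) and needs no separate compile or training. A runnable sketch, using random arrays in place of the MNIST `x_test` (an assumption for illustration):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)

# Sub-model from the same graph: shares layers and weights with autoencoder
encoder = Model(input_img, encoded)

x_test = np.random.rand(5, 784).astype('float32')  # stand-in for MNIST test data
encoded_imgs = encoder.predict(x_test, verbose=0)
print(encoded_imgs.shape)  # (5, 32)
```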
Decoding the encoded data on its own is slightly different from the above:
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.layers[-1] retrieves a layer of the model by index (again, a function), which is then applied to the input inside the Model call. Note that the inputs passed to Model must be tensors produced by Input.
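Putting the pieces together, the retrieved decoder layer keeps the autoencoder's weights, so the new `decoder` model maps a 32-dim code back to a 784-dim reconstruction. A sketch (the random codes are stand-ins for real encoder outputs):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Rebuild the autoencoder (same shapes as above)
encoding_dim = 32
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)

# Fresh Input placeholder for the 32-dim code, then reuse the
# autoencoder's last layer (and its weights) by index
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))

codes = np.random.rand(3, encoding_dim).astype('float32')  # stand-in codes
reconstructions = decoder.predict(codes, verbose=0)
print(reconstructions.shape)  # (3, 784)
```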
Finally, only the parent model autoencoder needs to be compiled; the sub-models decoder and encoder can then be used directly:
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
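End to end, the autoencoder is trained on (x, x) pairs, i.e. to reconstruct its own input, and the uncompiled encoder sub-model reflects the trained weights immediately. A sketch with random data in place of MNIST (the data and the single training epoch are assumptions for illustration):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)  # shares weights; never compiled

# Only the parent model is compiled; target == input for an autoencoder
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
x_train = np.random.rand(64, 784).astype('float32')  # stand-in for MNIST
autoencoder.fit(x_train, x_train, epochs=1, batch_size=32, verbose=0)

# The encoder sub-model sees the trained weights with no extra work
codes = encoder.predict(x_train[:2], verbose=0)
print(codes.shape)  # (2, 32)
```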
References:
https://keras-cn.readthedocs.io/en/latest/models/model
https://blog.keras.io/building-autoencoders-in-keras.html