Differences between Keras 2.x and 1.x
While recently modifying someone else's code, I noticed many differences between 1.x and 2.x, so I looked them up on the official site. Below is the original text of the official release notes.
Keras 2 release notes
This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.
Training
- The `nb_epoch` argument has been renamed `epochs` everywhere (see the sketch after this list).
- The methods `fit_generator`, `evaluate_generator` and `predict_generator` now work by drawing a number of batches from a generator (i.e. a number of steps), rather than a number of samples:
  - `samples_per_epoch` was renamed `steps_per_epoch` in `fit_generator`.
  - `nb_val_samples` was renamed `validation_steps` in `fit_generator`.
  - `val_samples` was renamed `steps` in `evaluate_generator` and `predict_generator`.
- It is now possible to manually add a loss to a model by calling `model.add_loss(loss_tensor)`.
- It is also possible to not apply any loss to a specific model output. If you pass `None` as the `loss` argument for an output (e.g. in compile, `loss={'output_1': None, 'output_2': 'mse'}`), the model will expect no Numpy arrays to be fed for this output when using `fit`, `train_on_batch`, or `fit_generator`. The output values are still returned as usual when using `predict`.
- In TensorFlow, models can now be trained using `fit` if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See this test for specific examples.
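To make the renames concrete, here is a minimal runnable sketch; the model, data, and `batch_gen` generator are invented for the example:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy model and data, purely for illustration.
x = np.random.random((100, 8))
y = np.random.random((100, 1))
model = Sequential([Dense(1, input_shape=(8,))])
model.compile(optimizer='sgd', loss='mse')

# Keras 1: model.fit(x, y, nb_epoch=10)
model.fit(x, y, epochs=10, batch_size=32)

def batch_gen():
    # Hypothetical endless generator yielding (inputs, targets) batches.
    while True:
        yield x[:32], y[:32]

# Keras 1 counted samples (samples_per_epoch); Keras 2 counts batches
# drawn from the generator (steps_per_epoch).
model.fit_generator(batch_gen(), steps_per_epoch=100, epochs=2)
```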
Losses & metrics
- The `objectives` module has been renamed `losses`.
- Several legacy metric functions have been removed, namely `matthews_correlation`, `precision`, `recall`, `fbeta_score`, `fmeasure`.
- Custom metric functions can no longer return a dict; they must return a single tensor (see the sketch below).
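A minimal sketch of a conforming custom metric; the metric name and model are invented for the example:

```python
import numpy as np
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

# A custom metric must return a single tensor, never a dict.
def mean_abs_error(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred))

model = Sequential([Dense(1, input_shape=(4,))])
# String loss names still work; the module behind them is now keras.losses.
model.compile(optimizer='sgd', loss='mse', metrics=[mean_abs_error])
model.fit(np.random.random((32, 4)), np.random.random((32, 1)), epochs=1)
```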
Models
- Constructor arguments for `Model` have been renamed:
  - `input` -> `inputs`
  - `output` -> `outputs`
- The `Sequential` model no longer supports the `set_input` method.
- For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.
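A quick sketch of the renamed constructor arguments:

```python
from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(16,))
y = Dense(1)(x)

# Keras 1: model = Model(input=x, output=y)
model = Model(inputs=x, outputs=y)
```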
Layers
Removals
Deprecated layers `MaxoutDense`, `Highway` and `TimeDistributedDense` have been removed.
Call method
- All layers that use the learning phase now support a `training` argument in `call` (Python boolean or symbolic tensor), allowing you to specify the learning phase on a layer-by-layer basis. E.g. by calling a `Dropout` instance as `dropout(inputs, training=True)` you obtain a layer that will always apply dropout, regardless of the current global learning phase. The `training` argument defaults to the global Keras learning phase everywhere.
- The `call` method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like `call(inputs, alpha=0.5)`, and then pass an `alpha` keyword argument when calling the layer (only with the functional API, naturally). Both points are sketched below.
- `__call__` now makes use of TensorFlow `name_scope`, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.
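A minimal sketch of both points; the `Scale` layer and its `alpha` argument are invented for the example:

```python
from keras.layers import Input, Dropout, Layer
from keras.models import Model
# On older Keras 2 releases: from keras.engine.topology import Layer

class Scale(Layer):
    # Hypothetical layer whose call accepts an extra keyword argument.
    def call(self, inputs, alpha=0.5):
        return alpha * inputs

x = Input(shape=(8,))
# training=True: dropout is always applied, whatever the global phase.
d = Dropout(0.5)(x, training=True)
# Extra keyword arguments are forwarded to call (functional API only).
y = Scale()(d, alpha=0.1)
model = Model(inputs=x, outputs=y)
```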
All layers taking a legacy `dim_ordering` argument
`dim_ordering` has been renamed `data_format`. It now takes two values: `"channels_first"` (formerly `"th"`) and `"channels_last"` (formerly `"tf"`).
Dense layer
Changed interface (sketched below):
- `output_dim` -> `units`
- `init` -> `kernel_initializer`
- added `bias_initializer` argument
- `W_regularizer` -> `kernel_regularizer`
- `b_regularizer` -> `bias_regularizer`
- `b_constraint` -> `bias_constraint`
- `bias` -> `use_bias`
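A hedged side-by-side sketch of the old and new signatures:

```python
from keras.layers import Dense
from keras.regularizers import l2

# Keras 1: Dense(64, init='glorot_uniform', W_regularizer=l2(0.01), bias=True)
layer = Dense(units=64,
              kernel_initializer='glorot_uniform',
              bias_initializer='zeros',
              kernel_regularizer=l2(0.01),
              use_bias=True)
```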
Dropout, SpatialDropout*D, GaussianDropout
Changed interface:
- `p` -> `rate`
Embedding
Changed interface:
- `init` -> `embeddings_initializer`
- `W_regularizer` -> `embeddings_regularizer`
- `W_constraint` -> `embeddings_constraint`
- the `dropout` argument has been removed
Convolutional layers
- The `AtrousConvolution1D` and `AtrousConvolution2D` layers have been deprecated. Their functionality is instead supported via the `dilation_rate` argument in `Convolution1D` and `Convolution2D` layers.
- `Convolution*` layers are renamed `Conv*`.
- The `Deconvolution2D` layer is renamed `Conv2DTranspose`.
- The `Conv2DTranspose` layer no longer requires an `output_shape` argument, making its use much easier.
Interface changes common to all convolutional layers (sketched after this list):
- `nb_filter` -> `filters`
- Kernel dimension arguments become a single tuple argument, `kernel_size`. E.g. a legacy call `Conv2D(10, 3, 3)` becomes `Conv2D(10, (3, 3))`.
- `kernel_size` can be set to an integer instead of a tuple, e.g. `Conv2D(10, 3)` is equivalent to `Conv2D(10, (3, 3))`.
- `subsample` -> `strides`. Can also be set to an integer.
- `border_mode` -> `padding`
- `init` -> `kernel_initializer`
- added `bias_initializer` argument
- `W_regularizer` -> `kernel_regularizer`
- `b_regularizer` -> `bias_regularizer`
- `b_constraint` -> `bias_constraint`
- `bias` -> `use_bias`
- `dim_ordering` -> `data_format`
- In the `SeparableConv2D` layers, `init` is split into `depthwise_initializer` and `pointwise_initializer`.
- Added `dilation_rate` argument in `Conv2D` and `Conv1D`.
- 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
- 2D and 3D convolution kernels are now saved in format `spatial_dims + (input_depth, depth)`, even with `data_format="channels_first"`.
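Putting the convolution changes together, a minimal sketch:

```python
from keras.layers import Conv1D, Conv2D, Conv2DTranspose

# Keras 1: Convolution2D(10, 3, 3, subsample=(2, 2), border_mode='same')
conv = Conv2D(10, (3, 3), strides=(2, 2), padding='same')

# An integer kernel_size (and stride) is shorthand for a square kernel:
conv_short = Conv2D(10, 3, strides=2, padding='same')

# dilation_rate replaces the deprecated AtrousConvolution* layers:
dilated = Conv1D(10, 3, dilation_rate=2, padding='same')

# Keras 1's Deconvolution2D required output_shape; Conv2DTranspose does not:
deconv = Conv2DTranspose(10, (3, 3), strides=(2, 2), padding='same')
```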
Pooling1D
- `pool_length` -> `pool_size`
- `stride` -> `strides`
- `border_mode` -> `padding`
Pooling2D, 3D
- `border_mode` -> `padding`
- `dim_ordering` -> `data_format`
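The corresponding pooling calls, as a short sketch:

```python
from keras.layers import MaxPooling1D, MaxPooling2D

# Keras 1: MaxPooling1D(pool_length=2, stride=2, border_mode='valid')
pool1d = MaxPooling1D(pool_size=2, strides=2, padding='valid')

# Keras 1: MaxPooling2D(border_mode='same', dim_ordering='tf')
pool2d = MaxPooling2D(pool_size=(2, 2), padding='same',
                      data_format='channels_last')
```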
ZeroPadding layers
The `padding` argument of the `ZeroPadding2D` and `ZeroPadding3D` layers must be a tuple of length 2 and 3 respectively. Each entry `i` contains by how much to pad the spatial dimension `i`. If it's an integer, symmetric padding is applied. If it's a tuple of integers, asymmetric padding is applied.
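For example, a minimal sketch:

```python
from keras.layers import ZeroPadding2D

# Symmetric: pad height by 1 and width by 2 on both sides.
symmetric = ZeroPadding2D(padding=(1, 2))

# Asymmetric: ((top, bottom), (left, right)) per spatial dimension.
asymmetric = ZeroPadding2D(padding=((1, 0), (2, 2)))
```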
Upsampling1D
- `length` -> `size`
BatchNormalization
The `mode` argument of `BatchNormalization` has been removed; BatchNorm now only supports mode 0 (use batch metrics for feature-wise normalization during training, and use moving metrics for feature-wise normalization during testing).
- `beta_init` -> `beta_initializer`
- `gamma_init` -> `gamma_initializer`
- added arguments `center`, `scale` (booleans, whether to use a `beta` and a `gamma` respectively)
- added arguments `moving_mean_initializer`, `moving_variance_initializer`
- added arguments `beta_regularizer`, `gamma_regularizer`
- added arguments `beta_constraint`, `gamma_constraint`
- attribute `running_mean` is renamed `moving_mean`
- attribute `running_std` is renamed `moving_variance` (it is in fact a variance with the current implementation)
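A sketch of the new arguments, using their default values for clarity:

```python
from keras.layers import BatchNormalization

# Keras 1: BatchNormalization(mode=0, beta_init='zero', gamma_init='one')
bn = BatchNormalization(center=True, scale=True,
                        beta_initializer='zeros',
                        gamma_initializer='ones',
                        moving_mean_initializer='zeros',
                        moving_variance_initializer='ones')
```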
ConvLSTM2D
Same changes as for convolutional layers and recurrent layers apply.
PReLU
- `init` -> `alpha_initializer`
GaussianNoise
- `sigma` -> `stddev`
Recurrent layers
- `output_dim` -> `units`
- `init` -> `kernel_initializer`
- `inner_init` -> `recurrent_initializer`
- added argument `bias_initializer`
- `W_regularizer` -> `kernel_regularizer`
- `b_regularizer` -> `bias_regularizer`
- added arguments `kernel_constraint`, `recurrent_constraint`, `bias_constraint`
- `dropout_W` -> `dropout`
- `dropout_U` -> `recurrent_dropout`
- `consume_less` -> `implementation`. String values have been replaced with integers: implementation 0 (default), 1 or 2.
- LSTM only: the argument `forget_bias_init` has been removed. Instead there is a boolean argument `unit_forget_bias`, defaulting to `True` (see the sketch after this list).
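Applied to an LSTM, the renames look like this (a minimal sketch):

```python
from keras.layers import LSTM

# Keras 1: LSTM(32, init='glorot_uniform', inner_init='orthogonal',
#               dropout_W=0.2, dropout_U=0.2, consume_less='gpu')
lstm = LSTM(units=32,
            kernel_initializer='glorot_uniform',
            recurrent_initializer='orthogonal',
            dropout=0.2, recurrent_dropout=0.2,
            implementation=2,
            unit_forget_bias=True)
```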
Lambda
The `Lambda` layer now supports a `mask` argument.
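A hedged sketch of a `Lambda` layer that simply passes the incoming mask through; the `doubler` name is invented for the example:

```python
from keras.layers import Lambda

# The mask argument may be a tensor or a callable (inputs, previous_mask).
doubler = Lambda(lambda x: 2 * x, mask=lambda inputs, mask: mask)
```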
Utilities
Utilities should now be imported from `keras.utils` rather than from specific submodules (e.g. no more `keras.utils.np_utils...`).
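For example:

```python
# Keras 1: from keras.utils.np_utils import to_categorical
from keras.utils import to_categorical

labels = to_categorical([0, 2, 1], num_classes=3)  # was nb_classes in Keras 1
```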
Backend
random_normal and truncated_normal
- `std` -> `stddev`
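A one-line sketch of the rename:

```python
import keras.backend as K

# Keras 1: K.random_normal(shape, mean=0.0, std=1.0)
noise = K.random_normal((2, 3), mean=0.0, stddev=1.0)
```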
Misc
- In the backend, `set_image_ordering` and `image_ordering` are now `set_data_format` and `data_format`.
- Any arguments (other than `nb_epoch`) prefixed with `nb_` have been renamed to be prefixed with `num_` instead. This affects two datasets and one preprocessing utility.
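Note that in released Keras 2 the backend functions are exposed as `image_data_format` and `set_image_data_format`; a minimal sketch:

```python
import keras.backend as K

# Keras 1 used dim-ordering functions with 'th'/'tf' values.
K.set_image_data_format('channels_first')
print(K.image_data_format())  # 'channels_first'
```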