Keras Applications are deep learning models that are made available alongside pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.
Weights are downloaded automatically when instantiating a model. They are stored at ~/.keras/models/.
The following image classification models (with weights trained on ImageNet) are available, including Xception, VGG16, VGG19, ResNet50, InceptionV3, and MobileNet.
All of these architectures are compatible with all the backends (TensorFlow, Theano, and CNTK), and upon instantiation the models will be built according to the image data format set in your Keras configuration file at ~/.keras/keras.json. For instance, if you have set image_data_format=channels_last, then any model loaded from this repository will get built according to the TensorFlow data format convention, "Height-Width-Depth".
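You can verify the configured data format directly from R via the backend accessor k_image_data_format() exported by the keras package (the printed value below is just an example):

library(keras)

# query the image data format the backend will use when building models
k_image_data_format()
# [1] "channels_last"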
On Keras < 2.2.0, the Xception model is only available for TensorFlow, due to its reliance on SeparableConvolution layers. On Keras < 2.1.5, the MobileNet model is only available for TensorFlow, due to its reliance on DepthwiseConvolution layers.

Classify ImageNet classes with ResNet50:

# instantiate the model
model <- application_resnet50(weights = 'imagenet')
# load the image
img_path <- "elephant.jpg"
img <- image_load(img_path, target_size = c(224, 224))
x <- image_to_array(img)

# ensure we have a 4d tensor with a single element in the batch dimension,
# then preprocess the input for prediction using resnet50
x <- array_reshape(x, c(1, dim(x)))
x <- imagenet_preprocess_input(x)

# make predictions, then decode and print them
preds <- model %>% predict(x)
imagenet_decode_predictions(preds, top = 3)[[1]]
  class_name class_description      score
1  n02504013   Indian_elephant 0.90117526
2  n01871265            tusker 0.08774310
3  n02504458  African_elephant 0.01046011
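Because predict() operates over the batch dimension, the same pipeline extends to several images at once: stack the preprocessed arrays into a single 4d tensor and decode each row of the result. A minimal sketch (the second file name is hypothetical):

library(keras)

model <- application_resnet50(weights = 'imagenet')

# hypothetical image files
img_paths <- c("elephant.jpg", "tiger.jpg")

# build a (batch, height, width, channels) tensor
batch <- array(0, dim = c(length(img_paths), 224, 224, 3))
for (i in seq_along(img_paths)) {
  img <- image_load(img_paths[i], target_size = c(224, 224))
  batch[i, , , ] <- image_to_array(img)
}
batch <- imagenet_preprocess_input(batch)

preds <- model %>% predict(batch)

# imagenet_decode_predictions() returns one data frame per batch element
imagenet_decode_predictions(preds, top = 3)[[2]]  # results for the second image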
Extract features with VGG16:

model <- application_vgg16(weights = 'imagenet', include_top = FALSE)

img_path <- "elephant.jpg"
img <- image_load(img_path, target_size = c(224, 224))
x <- image_to_array(img)
x <- array_reshape(x, c(1, dim(x)))
x <- imagenet_preprocess_input(x)

features <- model %>% predict(x)
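With include_top = FALSE, the model returns convolutional feature maps rather than class probabilities; for a 224x224 input, VGG16 produces a 7x7x512 map. A short sketch of inspecting the result and flattening it for use with a downstream classifier:

dim(features)
# 1 7 7 512

# flatten each example's feature map into a single feature vector
feature_vec <- array_reshape(features, c(dim(features)[1], prod(dim(features)[-1])))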
Extract features from an arbitrary intermediate layer with VGG19:

base_model <- application_vgg19(weights = 'imagenet')
model <- keras_model(inputs = base_model$input,
                     outputs = get_layer(base_model, 'block4_pool')$output)

img_path <- "elephant.jpg"
img <- image_load(img_path, target_size = c(224, 224))
x <- image_to_array(img)
x <- array_reshape(x, c(1, dim(x)))
x <- imagenet_preprocess_input(x)

block4_pool_features <- model %>% predict(x)
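The name passed to get_layer() must match one of the model's layer names. To see which names are available as extraction points:

# print the name of every layer in the base model
for (layer in base_model$layers)
  cat(layer$name, "\n")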
Fine-tune InceptionV3 on a new set of classes:

# create the base pre-trained model
base_model <- application_inception_v3(weights = 'imagenet', include_top = FALSE)

# add our custom layers
predictions <- base_model$output %>%
  layer_global_average_pooling_2d() %>%
  layer_dense(units = 1024, activation = 'relu') %>%
  layer_dense(units = 200, activation = 'softmax')

# this is the model we will train
model <- keras_model(inputs = base_model$input, outputs = predictions)

# first: train only the top layers (which were randomly initialized),
# i.e. freeze all convolutional InceptionV3 layers
freeze_weights(base_model)

# compile the model (should be done *after* setting layers to non-trainable)
model %>% compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy')

# train the model on the new data for a few epochs
model %>% fit_generator(...)
# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from InceptionV3. We will freeze the bottom N layers
# and train the remaining top layers.

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
layers <- base_model$layers
for (i in 1:length(layers))
  cat(i, layers[[i]]$name, "\n")

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 172 layers and unfreeze the rest:
freeze_weights(base_model, from = 1, to = 172)
unfreeze_weights(base_model, from = 173)

# we need to recompile the model for these modifications to take effect;
# we use SGD with a low learning rate
model %>% compile(
  optimizer = optimizer_sgd(lr = 0.0001, momentum = 0.9),
  loss = 'categorical_crossentropy'
)

# we train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model %>% fit_generator(...)
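The fit_generator(...) calls above are deliberately elided. As a hedged sketch of what they might look like, assuming training images stored one subdirectory per class (the directory path, image size, and step counts below are hypothetical; inception_v3_preprocess_input() is the InceptionV3-specific preprocessor exported alongside application_inception_v3()):

train_generator <- flow_images_from_directory(
  "data/train",  # hypothetical directory with one subfolder per class
  generator = image_data_generator(
    preprocessing_function = inception_v3_preprocess_input
  ),
  target_size = c(299, 299),  # InceptionV3's native input size
  batch_size = 32,
  class_mode = "categorical"  # matches the categorical_crossentropy loss
)

model %>% fit_generator(
  train_generator,
  steps_per_epoch = 100,
  epochs = 3
)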
Build InceptionV3 over a custom input tensor:

# this could also be the output of a different Keras model or layer
input_tensor <- layer_input(shape = c(224, 224, 3))

model <- application_inception_v3(input_tensor = input_tensor,
                                  weights = 'imagenet',
                                  include_top = TRUE)
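One common use of input_tensor is to share a single input among several models so they can be composed into one graph. A minimal sketch combining two applications over the same input:

input_tensor <- layer_input(shape = c(224, 224, 3))

# two pre-trained models built over the same input tensor
resnet <- application_resnet50(input_tensor = input_tensor, weights = 'imagenet')
vgg <- application_vgg16(input_tensor = input_tensor, weights = 'imagenet')

# a single model producing both models' predictions
combined <- keras_model(inputs = input_tensor,
                        outputs = list(resnet$output, vgg$output))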
The VGG16 model is the basis for the Deep Dream Keras example script.