Removing then Inserting a New Middle Layer in a Keras Model

Another way to do this is by rebuilding the model as a Sequential model. See the following example, where I swap ReLU activations for PReLU layers: while copying the layers over, you simply skip the ones you don't want and add new layers where needed.

def convert_model_relu(model):
    from keras.layers.advanced_activations import PReLU
    from keras.activations import linear as linear_activation
    from keras.models import Sequential
    new_model = Sequential()
    # Go through all layers; if one has a ReLU activation, replace it with PReLU
    for layer in tuple(model.layers):
        layer_type = type(layer).__name__
        if hasattr(layer, 'activation') and layer.activation.__name__ == 'relu':
            # Set the activation to linear, then append a PReLU layer after it
            prelu_name = layer.name + "_prelu"
            if layer_type == "Conv2D":
                # Share the PReLU slope across the spatial dimensions
                prelu = PReLU(shared_axes=(1, 2), name=prelu_name)
            else:
                prelu = PReLU(name=prelu_name)
            layer.activation = linear_activation
            new_model.add(layer)
            new_model.add(prelu)
        else:
            new_model.add(layer)
    return new_model
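
As a quick usage sketch (the vgg variable below is just for illustration; any linear, Sequential-compatible model with ReLU activations should work the same way):

from keras.applications import VGG16

vgg = VGG16(weights='imagenet')
prelu_model = convert_model_relu(vgg)
prelu_model.summary()  # each formerly-ReLU layer is now followed by a PReLU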

Now assume you have a model vgg16_model, initialized either by a function of your own or by keras.applications.VGG16(weights='imagenet'), and you need to insert a new layer in the middle in such a way that the weights of the other layers are preserved.

The idea is to disassemble the whole network into separate layers and then assemble it back. Here is the code specifically for your task:

from keras import applications
from keras.layers import Conv2D
from keras.models import Model

vgg_model = applications.VGG16(include_top=True, weights='imagenet')

# Disassemble the layers
layers = [l for l in vgg_model.layers]

# Define the new convolutional layer.
# Important: the number of filters must stay the same!
# Note: the receptive field of two stacked 3x3 convolutions is 5x5,
#       so a single 5x5 convolution replaces block1_conv1 and block1_conv2.
new_conv = Conv2D(filters=64,
                  kernel_size=(5, 5),
                  name='new_conv',
                  padding='same')(layers[0].output)

# Now stack everything back together.
# Note: if you are going to fine-tune the model, do not forget to
#       mark the other layers as non-trainable.

x = new_conv
# Start at index 3 (block1_pool): indices 1 and 2 are the two 3x3
# convolutions that new_conv replaces.
for i in range(3, len(layers)):
    layers[i].trainable = False
    x = layers[i](x)

# Final touch
result_model = Model(inputs=layers[0].input, outputs=x)
result_model.summary()

And the output of the above code is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_50 (InputLayer)        (None, 224, 224, 3)       0         
_________________________________________________________________
new_conv (Conv2D)            (None, 224, 224, 64)      4864      
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 138,323,688
Trainable params: 4,864
Non-trainable params: 138,318,824
_________________________________________________________________
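
As a sanity check, the trainable parameter count matches the new layer: 5 x 5 x 3 x 64 weights plus 64 biases = 4,864, while every other layer keeps its stock VGG16 weights. From here you can fine-tune just the new layer; a minimal sketch, assuming x_train and y_train (placeholder names) hold data prepared for the 1000-class head:

result_model.compile(optimizer='adam',
                     loss='categorical_crossentropy',
                     metrics=['accuracy'])
# Only new_conv's 4,864 parameters are updated; every other layer is frozen.
result_model.fit(x_train, y_train, batch_size=32, epochs=5)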