L2 normalised output with Keras

I found the problem!

So I am using TensorFlow as the backend, and K.l2_normalize(x, axis) calls tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None). Notice that this method has one extra parameter, epsilon, and it is implemented as follows:

with ops.name_scope(name, "l2_normalize", [x]) as name:
  x = ops.convert_to_tensor(x, name="x")
  square_sum = math_ops.reduce_sum(math_ops.square(x), dim, keep_dims=True)
  x_inv_norm = math_ops.rsqrt(math_ops.maximum(square_sum, epsilon))
  return math_ops.mul(x, x_inv_norm, name=name)

So if the output of the net contains values so small that the sum of their squares falls below epsilon (which is 1e-12 by default), then maximum returns epsilon instead of the true squared norm, and the output is not normalised correctly, which is exactly what happens in my case.
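You can reproduce this without TensorFlow at all. The sketch below mirrors the arithmetic of the snippet above in plain NumPy (the function name `l2_normalize` here is just my stand-in, not the TF implementation) and shows that a vector whose squared sum is below epsilon keeps a norm far from 1:

```python
import numpy as np

def l2_normalize(x, epsilon=1e-12):
    # Mirrors the TF formula: x * rsqrt(max(sum(x**2), epsilon))
    square_sum = np.sum(np.square(x))
    x_inv_norm = 1.0 / np.sqrt(np.maximum(square_sum, epsilon))
    return x * x_inv_norm

large = np.array([3e-3, 4e-3])  # squared sum 2.5e-5, well above epsilon
small = np.array([3e-7, 4e-7])  # squared sum 2.5e-13, BELOW epsilon

print(np.linalg.norm(l2_normalize(large)))  # 1.0 -- properly normalised
print(np.linalg.norm(l2_normalize(small)))  # 0.5 -- NOT unit length
```

For the second vector, `maximum` picks epsilon = 1e-12, so the scale factor is 1/sqrt(1e-12) = 1e6 regardless of the actual (smaller) norm, and the result stays short of unit length.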