ResNet50 produces different predictions when image loading and resizing are done with OpenCV

# Keras prediction
from tensorflow.keras.preprocessing import image

img = image.load_img(img_path, target_size=(224, 224))

# OpenCV prediction
import cv2

imgcv = cv2.imread(img_path)
dim = (224, 224)
imgcv_resized = cv2.resize(imgcv, dim, interpolation=cv2.INTER_LINEAR)
  1. The interpolation methods differ: for cv2 you explicitly request cv2.INTER_LINEAR (bilinear interpolation), whereas image.load_img() defaults to nearest-neighbor interpolation (interpolation="nearest").

  2. img_to_array(img) is called with its default dtype argument, which is None. According to the documentation:

Default to None, in which case the global setting tf.keras.backend.floatx() is used (unless you changed it, it defaults to "float32").

Therefore, img_to_array(img) gives you an image of float32 values, while cv2.imread(img_path) returns a numpy array of uint8 values.
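This dtype mismatch can be reproduced with plain NumPy (a minimal sketch; the two arrays are made-up stand-ins for what cv2.imread and img_to_array would return for the same pixels):

```python
import numpy as np

# Stand-in for cv2.imread output: integer pixel values
cv2_style = np.array([[233, 12], [7, 255]], dtype=np.uint8)

# Stand-in for img_to_array output: the same pixels as float32
keras_style = cv2_style.astype(np.float32)

print(cv2_style.dtype, keras_style.dtype)  # uint8 float32
```

Casting one array to the other's dtype aligns the two before any preprocessing is applied.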

  3. Ensure you convert from BGR to RGB, since OpenCV loads images directly in BGR channel order. You can use image = image[:, :, ::-1] or image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB); otherwise the R and B channels are swapped and the comparison is invalid.

Since the preprocessing you apply is the same in both cases, the differences listed above are the only ones; applying those changes should make the two pipelines produce identical predictions.
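To see how much point 1 alone matters, you can resize the same data with the two modes that are effectively in play (a sketch using PIL, which is what image.load_img uses under the hood; the 2x2 checkerboard is a made-up stand-in for the real image at img_path):

```python
import numpy as np
from PIL import Image

# Made-up 2x2 checkerboard standing in for the real image
arr = np.array([[0, 255], [255, 0]], dtype=np.uint8)
img = Image.fromarray(arr)

# load_img's default resize mode vs. the cv2.INTER_LINEAR equivalent
nearest = np.asarray(img.resize((224, 224), Image.NEAREST))
bilinear = np.asarray(img.resize((224, 224), Image.BILINEAR))

# The two modes produce different pixel values, hence different predictions
print(np.array_equal(nearest, bilinear))  # False
```

Passing interpolation="bilinear" to image.load_img (or cv2.INTER_NEAREST to cv2.resize) makes the two loaders agree on this step.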

One further observation: given that cv2 automatically loads pixel data as integers (uint8) rather than floats, the only correct way to compare is to cast the first (Keras) array down to uint8, because casting the cv2 array up to float32 cannot recover information that was already lost. For example, cv2 loads a pixel as the uint8 value 233, and casting gives you 233.0; but the original intensity may well have been 233.3 before the integer conversion discarded the fractional part.
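The loss is easy to demonstrate (a sketch with a made-up pixel value):

```python
import numpy as np

# Hypothetical "true" intensity before any integer conversion
true_value = np.float32(233.3)

# A cv2-style load stores it as uint8 ...
loaded = np.uint8(true_value)    # 233

# ... and casting back to float32 cannot recover the fraction
recovered = np.float32(loaded)   # 233.0

print(loaded, recovered)
```

Comparing both arrays as uint8 therefore compares exactly what both pipelines actually saw.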