Numpy Resize/Rescale Image

Yes, you can install OpenCV (a library for image processing and computer vision) and use its cv2.resize function. For instance:

import cv2
import numpy as np

img = cv2.imread('your_image.jpg')
res = cv2.resize(img, dsize=(54, 140), interpolation=cv2.INTER_CUBIC)

Here img is a numpy array containing the original image, whereas res is a numpy array containing the resized image. An important aspect is the interpolation parameter: there are several ways to resize an image, and the choice matters especially when you scale the image down and the original size is not a multiple of the target size. Possible interpolation schemes are:

  • INTER_NEAREST - a nearest-neighbor interpolation
  • INTER_LINEAR - a bilinear interpolation (used by default)
  • INTER_AREA - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method.
  • INTER_CUBIC - a bicubic interpolation over 4x4 pixel neighborhood
  • INTER_LANCZOS4 - a Lanczos interpolation over 8x8 pixel neighborhood

As with most options, there is no single "best" choice: for every resize scheme there are scenarios where another strategy is preferable.
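For comparison, here is a minimal sketch (assuming img has been loaded with cv2.imread as above) that resizes to the same target size with each interpolation flag, so you can inspect the differences yourself:

import cv2

# img is assumed to be the numpy array loaded with cv2.imread above
flags = {
    'nearest': cv2.INTER_NEAREST,
    'linear': cv2.INTER_LINEAR,
    'area': cv2.INTER_AREA,
    'cubic': cv2.INTER_CUBIC,
    'lanczos4': cv2.INTER_LANCZOS4,
}
resized = {name: cv2.resize(img, dsize=(54, 140), interpolation=flag)
           for name, flag in flags.items()}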


While it might be possible to use numpy alone to do this, the operation is not built-in. That said, you can use scikit-image (which is built on numpy) to do this kind of image manipulation.

See the scikit-image rescaling documentation for details.

For example, you could do the following with your image:

from skimage.transform import resize
bottle_resized = resize(bottle, (140, 54))

This will take care of things like interpolation, anti-aliasing, etc. for you.
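As a rough sketch, the related rescale function works by scale factor instead of target shape (assuming bottle is an RGB numpy array; the channel_axis argument requires a recent scikit-image version, older releases used multichannel=True instead):

from skimage.transform import rescale

# Downscale to half size in each dimension; anti_aliasing pre-smooths
# the image before resampling to avoid aliasing artifacts
bottle_half = rescale(bottle, 0.5, anti_aliasing=True, channel_axis=-1)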


One-line numpy solution for downsampling (by 2):

smaller_img = bigger_img[::2, ::2]

And upsampling (by 2):

bigger_img = smaller_img.repeat(2, axis=0).repeat(2, axis=1)

(This assumes an HxWxC-shaped image; h/t to L. Kärkkäinen in the comments. Note that this method only allows whole-integer resizing, e.g. 2x but not 1.5x.)
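The same slicing trick generalizes to any whole-number factor n; a quick sketch, again assuming img is an HxWxC numpy array:

n = 3  # any positive integer factor

# Downsample by keeping every n-th pixel along height and width
smaller = img[::n, ::n]

# Upsample by repeating each pixel n times along height and width
bigger = img.repeat(n, axis=0).repeat(n, axis=1)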


For people coming here from Google looking for a fast way to downsample images in numpy arrays for use in machine learning applications, here's a super fast method (adapted from here). Note that this method only works when the input dimensions are a multiple of the output dimensions.

The following examples downsample from 128x128 to 64x64 (this can be easily changed).

Channels last ordering

# large image is shape (128, 128, 3)
# small image is shape (64, 64, 3)
input_size = 128
output_size = 64
bin_size = input_size // output_size
small_image = large_image.reshape((output_size, bin_size, 
                                   output_size, bin_size, 3)).max(3).max(1)

Channels first ordering

# large image is shape (3, 128, 128)
# small image is shape (3, 64, 64)
input_size = 128
output_size = 64
bin_size = input_size // output_size
small_image = large_image.reshape((3, output_size, bin_size, 
                                      output_size, bin_size)).max(4).max(2)

For grayscale images, just change the 3 to a 1, like this:

Channels first ordering

# large image is shape (1, 128, 128)
# small image is shape (1, 64, 64)
input_size = 128
output_size = 64
bin_size = input_size // output_size
small_image = large_image.reshape((1, output_size, bin_size,
                                      output_size, bin_size)).max(4).max(2)

This method uses the equivalent of max pooling. It's the fastest way to do this that I've found.
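If you want a smoother result than max pooling, the same reshape trick works with mean instead of max. A sketch for the channels-last case (the cast back to the original dtype is there because mean promotes integer images to float):

# large image is shape (128, 128, 3)
# small image is shape (64, 64, 3)
input_size = 128
output_size = 64
bin_size = input_size // output_size
small_image = (large_image.reshape((output_size, bin_size,
                                    output_size, bin_size, 3))
               .mean(axis=(1, 3))          # average pooling over each bin
               .astype(large_image.dtype))  # restore the original dtype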