How to locate QR code in large image to improve decoding performance?

I think I have found a simple yet reliable way to detect the corners of the QR code. However, my approach assumes there is some contrast (the more the better) between the QR code and its surrounding area. Also, keep in mind that neither pyzbar nor OpenCV's cv2.QRCodeDetector is 100% reliable.
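If pyzbar struggles with a particular image, a quick cross-check with OpenCV's detector can be useful. This is only a minimal sketch (it runs on the whole image read from a hypothetical "image.jpg"); detectAndDecode returns an empty string when it cannot decode anything:

import cv2

# Optional cross-check with OpenCV's built-in QR detector.
image = cv2.imread("image.jpg")
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(image)
if data:
  print("OpenCV decoded:", data)
else:
  print("OpenCV could not decode a QR code in this image")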

So, here is my approach:

  1. Resize image. After some experimentation, I have come to the conclusion that pyzbar is not completely scale invariant. Although I don't have references to back this claim, I still use small to medium images for barcode detection as a rule of thumb. You can skip this step, as it might seem completely arbitrary.
import cv2
import numpy as np
from pyzbar import pyzbar

image = cv2.imread("image.jpg")
scale = 0.3
width = int(image.shape[1] * scale)
height = int(image.shape[0] * scale)
image = cv2.resize(image, (width, height))
  2. Thresholding. We can take advantage of the fact that barcodes are generally printed black on white surfaces. The more contrast, the better.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Otsu picks the threshold automatically (the 120 is ignored);
# THRESH_BINARY_INV makes the dark QR modules white.
_, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

*image after masking*

  3. Dilation + contours. This step is a little trickier, and I apologize if my English is not completely clear here. We can see from the previous image that there are black gaps between the white regions inside the QR code. If we just find the contours, OpenCV will treat those gaps as separate entities rather than as parts of a whole. If we want the QR code to appear as a single white square, we have to apply a bit of morphology. Namely, we have to dilate the image.

# The bigger the kernel, the more the white region increases.
# If the resizing step was ignored, then the kernel will have to be bigger
# than the one given here.
kernel = np.ones((3, 3), np.uint8)
thresh = cv2.dilate(thresh, kernel, iterations=1)
contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

*threshold after dilation*

  4. Filtering and getting bounding boxes. Most of the contours we find are too small to contain a barcode, so we have to filter them in order to shrink our search space. After filtering out the weak candidates, we can fetch the bounding boxes of the strong ones.

EDIT: In this case we are filtering by area (small area = weak candidate), but we can also filter by the extent of the detection. The extent measures the rectangularity of an object, i.e. the ratio of the contour area to the area of its bounding box, and we can use that information since we know a QR code is roughly square. I chose the extent threshold to be pi / 4, since that is the extent of a perfect circle inscribed in its bounding box, which means circular objects are filtered out as well (a quick sanity check of this threshold follows the code below).

bboxes = []
for cnt in contours:
  area = cv2.contourArea(cnt)
  xmin, ymin, width, height = cv2.boundingRect(cnt)
  extent = area / (width * height)
  
  # filter non-rectangular objects and small objects
  if (extent > np.pi / 4) and (area > 100):
    bboxes.append((xmin, ymin, xmin + width, ymin + height))

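As a quick sanity check of the pi / 4 threshold from the edit above, you can compute the extent of two synthetic shapes. This is purely illustrative and independent of the actual image; a filled circle should land close to pi / 4 (~0.785), while a filled square should land close to 1:

import cv2
import numpy as np

# Extent of a filled circle relative to its bounding box.
canvas = np.zeros((200, 200), np.uint8)
cv2.circle(canvas, (100, 100), 80, 255, -1)
contours, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(contours[0])
print("circle extent:", cv2.contourArea(contours[0]) / (w * h))  # ~0.78

# Extent of a filled square relative to its bounding box.
canvas = np.zeros((200, 200), np.uint8)
cv2.rectangle(canvas, (20, 20), (180, 180), 255, -1)
contours, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(contours[0])
print("square extent:", cv2.contourArea(contours[0]) / (w * h))  # ~0.99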
*search space*

  5. Detect barcodes. We have reduced our search space to just the actual QR codes! Now we can finally use pyzbar without worrying too much about it taking too long to do barcode detection.

qrs = []
info = set()
for xmin, ymin, xmax, ymax in bboxes:
  roi = image[ymin:ymax, xmin:xmax]
  detections = pyzbar.decode(roi, symbols=[pyzbar.ZBarSymbol.QRCODE])
  for barcode in detections:
    info.add(barcode.data)
    # bounding box coordinates, mapped back to the full image
    x, y, w, h = barcode.rect
    qrs.append((xmin + x, ymin + y, xmin + x + w, ymin + y + h))

Unfortunately, pyzbar was only able to decode the information of the largest QR code (b'3280406-001'), even though both barcodes were inside the search space. If you want to know how many times a particular code was detected, you can use a Counter from the collections standard module instead of a set (see the sketch below). If you don't need that information, a set, as I used here, is enough.
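For reference, here is a minimal sketch of the Counter variant mentioned above; only the container changes, the decoding loop stays the same (the two increments simply simulate the same payload being decoded twice):

from collections import Counter

# Replace `info = set()` with a Counter and increment instead of add():
info = Counter()

# Inside the pyzbar loop you would do:
#   info[barcode.data] += 1
info[b'3280406-001'] += 1
info[b'3280406-001'] += 1

print(info[b'3280406-001'])  # 2
print(info.most_common())    # [(b'3280406-001', 2)]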

Hope this helps :).