Pixelate Image With Pillow

You're doing a few things wrong.

First of all, you should use PNG, not JPG, for your output. JPG introduces so many compression artifacts that small images like your output end up completely degraded.

Then, you should reduce your palette. It's much easier to work with input containing no noise.

First, the boring initialization:

from PIL import Image
from functools import reduce  # reduce is not a builtin in Python 3
import operator
from collections import defaultdict

input_path = 'input.jpg'
output_path = 'output.png'
size = (4,4)

Then we declare the palette - it should contain the colors of all the possible LEGO bricks. I sampled the values below from your image, but you can use just black and white like your code does, or any colors you want, as long as they're similar to the colors in the source image:

palette = [
    (45,  50,  50),  #black
    (240, 68,  64),  #red
    (211, 223, 223), #white
    (160, 161, 67),  #green
    (233, 129, 76),  #orange
]
# a PIL palette needs 256 entries, so pad the rest with black
while len(palette) < 256:
    palette.append((0, 0, 0))

The code below flattens the palette for PIL, since putpalette() needs a flat array rather than an array of tuples:

flat_palette = reduce(lambda a, b: a+b, palette)
assert len(flat_palette) == 768

Now we can declare an image that will hold the palette. We'll use it to reduce the colors from the original image later.

palette_img = Image.new('P', (1, 1), 0)
palette_img.putpalette(flat_palette)

Here we open the image and quantize it. We resize it to eight times the target size first, since we're going to sample each 8x8 block for its most common color later.

multiplier = 8
img = Image.open(input_path)
img = img.resize((size[0] * multiplier, size[1] * multiplier), Image.BICUBIC)
img = img.quantize(palette=palette_img) #reduce the palette

After this, our image looks like this:

[quantized image]

We need to convert it back to RGB so that we can sample pixels now:

img = img.convert('RGB')

Now we're going to construct our final image. To do this, we'll sample how many pixels of each palette color each square in the bigger image contains. Then we'll choose the color that occurs most often.

out = Image.new('RGB', size)
for x in range(size[0]):
    for y in range(size[1]):
        #count how many pixels of each palette color the corresponding square contains
        histogram = defaultdict(int)
        for x2 in range(x * multiplier, (x + 1) * multiplier):
            for y2 in range(y * multiplier, (y + 1) * multiplier):
                histogram[img.getpixel((x2,y2))] += 1
        color = max(histogram.items(), key=operator.itemgetter(1))[0]
        out.putpixel((x, y), color)
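
As an aside, you can let Pillow do the counting for you by cropping each square and calling getcolors() on it; a rough equivalent of the loop above would be:

out = Image.new('RGB', size)
for x in range(size[0]):
    for y in range(size[1]):
        #crop out the square that maps to this output pixel
        box = (x * multiplier, y * multiplier, (x + 1) * multiplier, (y + 1) * multiplier)
        #getcolors() returns [(count, color), ...]; an 8x8 crop has at most 64 distinct colors
        colors = img.crop(box).getcolors(multiplier * multiplier)
        out.putpixel((x, y), max(colors)[1]) #the color with the highest count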

Finally, we save the output:

out.save(output_path)

The result:

[small image]

Upscaled by 1600%:

[big image]
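
If you want to produce that kind of preview yourself, a nearest-neighbour resize keeps the pixel edges sharp; for example (the preview.png name is just for illustration):

preview = out.resize((size[0] * 16, size[1] * 16), Image.NEAREST) #1600%
preview.save('preview.png')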


Just for fun, I tackled this with ImageMagick - which is also callable from Python...

First off, I create a little custom palette to match your colours - your white is not very white and your green is different from ImageMagick's idea of green, so I used explicit RGB values for them instead of colour names.

convert xc:black xc:red xc:"rgb(200,200,200)" xc:"rgb(168,228,23)"  xc:orange +append palette.png

If I scale that palette up, it looks like this:

[palette, scaled up]

Then I resize your image down to 4x4, map the result to the custom palette, and scale it back up so you can see it, like this:

convert lego.jpg -resize 4x4! +dither -remap palette.png -scale 1600 result.png

and here is the result

[result]

The white is deliberately a little off, to match the "white" in your original.
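
Since ImageMagick is callable from Python, here's a rough sketch of running those same two commands through subprocess (this assumes the convert binary is on your PATH; on ImageMagick 7 the command is called magick):

import subprocess

# build the five-colour palette image
subprocess.run([
    'convert',
    'xc:black', 'xc:red', 'xc:rgb(200,200,200)', 'xc:rgb(168,228,23)', 'xc:orange',
    '+append', 'palette.png',
], check=True)

# resize to 4x4, remap to the palette without dithering, then scale up for viewing
subprocess.run([
    'convert', 'lego.jpg',
    '-resize', '4x4!', '+dither', '-remap', 'palette.png',
    '-scale', '1600', 'result.png',
], check=True)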