What resampling technique should be used when projecting aerial photos?

Aerial photos are continuous data. Each pixel represents the response of a region of a sensor to the light directed at it, and as that light varies, the response varies continuously. The result is usually discretized (often into 256 levels), but that doesn't change the nature of the data. Therefore you want to interpolate rather than use categorical algorithms like nearest neighbor or majority. Bilinear interpolation is usually just fine; at some cost in execution time, cubic convolution will retain local contrast a tiny bit better. A small amount of additional blurriness is unavoidable, but it's almost impossible to notice until the image has undergone many such transformations. The errors made by nearest neighbor are much worse in comparison.
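To make this concrete, here is a minimal sketch of such a reprojection with bilinear interpolation using rasterio's warp module (the library choice, file names, and target CRS are my own assumptions, not from the original posts):

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

with rasterio.open("aerial.tif") as src:  # hypothetical input path
    # Compute the transform and dimensions of the reprojected grid.
    transform, width, height = calculate_default_transform(
        src.crs, "EPSG:32633", src.width, src.height, *src.bounds
    )
    profile = src.profile.copy()
    profile.update(crs="EPSG:32633", transform=transform,
                   width=width, height=height)

    with rasterio.open("aerial_utm.tif", "w", **profile) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=dst.crs,
                # Interpolate because the imagery is continuous data;
                # Resampling.cubic trades speed for slightly better contrast.
                resampling=Resampling.bilinear,
            )
```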


I lack the "reputation" to comment, so...

If radiometric analysis is going to be performed on the aerial photos, then it should be done prior to resampling/reprojecting; otherwise you will almost certainly introduce unintended bias into the final product, as per blord-castillo's helpful comment above.

If the proximate and final uses of the aerials are for visual appeal or background mapping, then I would go with the fastest method that gives you a usable product.

  • If the cell size of the new aerial is the same as the original, then NEAREST works best IMHO.

  • If the cell size of the new aerial is larger than the original, then BILINEAR works best.

  • If (for some crazy reason) the cell size of the new aerial is smaller than the original, then I would go back to using NEAREST.

The other options, CUBIC and MAJORITY, will produce artifacts in the resampled product, take longer to process, and otherwise don't seem to apply to what you're trying to do.
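For what it's worth, that rule of thumb could be encoded in a small helper like this sketch (the function name and the use of rasterio's Resampling enum are my own choices, not anything prescribed above):

```python
from rasterio.enums import Resampling

def pick_resampling(src_cell: float, dst_cell: float) -> Resampling:
    """Choose a resampling method from source and target cell sizes."""
    if dst_cell > src_cell:
        # The new grid is coarser than the original: interpolate.
        return Resampling.bilinear
    # Same cell size, or (for some crazy reason) a finer grid:
    # keep the original values.
    return Resampling.nearest

# e.g. warping a 0.5 m aerial onto a 2 m grid -> bilinear
print(pick_resampling(0.5, 2.0))  # Resampling.bilinear
```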

As a final point: while it's true that the process of sampling light emanating/reflecting from the surface of the Earth is conceptually continuous, it is also true that the Earth's surface exhibits both continuous and discrete phenomena.

  • In general, human activity tends to produce discrete transitions, and

  • "Natural" features are often (but not always) continuously varying or at least have fuzzy edges.

So, as indicated in my first portion above, how you manipulate the aerials will depend on how you expect to use them.


I know that this question is rather old, but I wanted to add my 2 cents, in case others come across this thread trying to answer the same question...

The previous answers are correct when you truly wish to RESAMPLE your data, such as when you are aggregating from a 30 m pixel size to a 90 m pixel size. In this case you are attempting to create a new value for each individual pixel, based on a collection of nearby pixels. So yes, here for discrete datasets you would select Nearest Neighbor, while for continuous data you would choose either Bilinear or Cubic Convolution.
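As an illustration of that kind of true resampling, here is a sketch of a 30 m to 90 m aggregation with rasterio (the input file name is hypothetical):

```python
import rasterio
from rasterio.enums import Resampling

with rasterio.open("landsat.tif") as src:  # hypothetical 30 m raster
    scale = 30 / 90  # 30 m -> 90 m: one-third as many cells per axis
    data = src.read(
        out_shape=(src.count,
                   int(src.height * scale),
                   int(src.width * scale)),
        # For continuous data; Resampling.nearest for discrete data.
        resampling=Resampling.bilinear,
    )
```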

In this question, however, the goal is NOT actually to resample the data, but simply to convert the existing data to a new projection - you want the same values, just in a new projection. In this case, you DO want to use Nearest Neighbor resampling for discrete as well as continuous datasets, to maintain the integrity of your original data values. I know this statement goes against everything you read about "resampling", but really think critically about what you want to achieve and what you are doing to the data. And I don't make this recommendation on a whim... I've spent 5 years working on a PhD specializing in GIS/Remote Sensing, as well as teaching GIS/remote sensing undergrad courses.
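A sketch of what that value-preserving reprojection might look like, using rasterio's WarpedVRT (the input path and target CRS are hypothetical):

```python
import rasterio
from rasterio.enums import Resampling
from rasterio.vrt import WarpedVRT

with rasterio.open("dem.tif") as src:  # hypothetical input path
    # WarpedVRT reprojects on the fly; nearest neighbor copies existing
    # cell values onto the new grid instead of computing new ones.
    with WarpedVRT(src, crs="EPSG:4326",
                   resampling=Resampling.nearest) as vrt:
        data = vrt.read()
```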

Another note: the original poster asked about zero and/or negative values... If these values are true data values (i.e., the altitude can actually be 0 or -34.5), then you want to include them. However, if the value(s) in question are not true data, and are instead used to represent NoData (say 0 or -9999), then you need to mask those pixels out of your raster (remove them) prior to resampling via bilinear or cubic convolution. Otherwise, the -9999 pixels will be included in the resampling calculation as if those pixels had a real altitude of -9999, and you will end up with invalid data values. As a VERY simplified example with bilinear interpolation (which draws on the 4 nearest cells), if your 4 nearest cell values are 4, 5, 16, and -9999, including the -9999 could drag the simple average down to about -2493.5, which is not valid data.
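To illustrate, here is a sketch of declaring -9999 as NoData so it is excluded from the interpolation, again using rasterio (file names and target CRS are hypothetical):

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

with rasterio.open("altitude.tif") as src:  # hypothetical DEM with -9999 gaps
    transform, width, height = calculate_default_transform(
        src.crs, "EPSG:3857", src.width, src.height, *src.bounds
    )
    profile = src.profile.copy()
    profile.update(crs="EPSG:3857", transform=transform,
                   width=width, height=height, nodata=-9999)

    with rasterio.open("altitude_3857.tif", "w", **profile) as dst:
        reproject(
            source=rasterio.band(src, 1),
            destination=rasterio.band(dst, 1),
            src_transform=src.transform,
            src_crs=src.crs,
            dst_transform=transform,
            dst_crs=dst.crs,
            src_nodata=-9999,  # excluded from the interpolation
            dst_nodata=-9999,
            resampling=Resampling.bilinear,
        )
```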