Creating a photogrammetric point cloud of an object from distributed photographs — here: making an accurate 3D model of a building using UAV imagery?

I've done this before with success using the Photosynth Toolkit (http://www.visual-experiments.com/demos/photosynthtoolkit/), except instead of a drone I was hanging my head out of a small plane, taking pictures of the downtown area of a small town. You could also check out VisualSFM (http://ccwu.me/vsfm/); I haven't used it, but it seems to be another tool that accomplishes the same task.

I recently got a drone as well and intend to use both of these methodologies for the same project. I'll post some examples of the Photosynth Toolkit project when I get a chance.

EDIT: Here's an example of the output of the Photosynth Toolkit (as viewed in MeshLab, http://meshlab.sourceforge.net/):

[Image: colored point cloud generated from the aerial photos]

This is the point cloud data (with color information) resulting from a batch of aerial photos I took from the airplane. I clustered the images to focus processing on one block at a time, which is why that one block is so much denser than the rest.

Here's the same point cloud with a triangulated irregular network (TIN) overlaid. It's not perfect, but it's a cool reconstruction.

[Image: the same point cloud with the TIN overlaid]
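A TIN like the one above is, at its core, a Delaunay triangulation of the points' horizontal (x, y) coordinates, with elevation carried along on each vertex. A minimal sketch with SciPy (not the exact tool used for the screenshot, just an illustration of the idea):

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic stand-in for a photogrammetric point cloud: x, y, z columns.
points = np.array([
    [0.0, 0.0, 1.2],
    [1.0, 0.0, 1.5],
    [0.0, 1.0, 1.1],
    [1.0, 1.0, 2.0],
    [0.5, 0.5, 1.8],
])

# A TIN triangulates in the horizontal plane only; z is just an
# attribute of each vertex, not part of the triangulation itself.
tin = Delaunay(points[:, :2])

print(len(tin.simplices))  # 4 triangles: square corners fanned around the center point
```

Each row of `tin.simplices` indexes three points that form one triangle face, which is exactly what MeshLab renders as the wireframe overlay.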

So, in answer to your question of whether using a UAV to generate point cloud data is a viable alternative to a terrestrial laser scanner: yes, it is!

Keep in mind that automated methodologies for stitching the photos together don't work well in high-contrast lighting; if one side of your building is in sunlight while the other is in shade, you may have trouble getting the photos to line up. The best time to take photos like that is when it is overcast: the clouds diffuse the sunlight, making the lighting more even and consistent.

If your lighting is good, you can take pictures at relatively close range to produce a very detailed point cloud dataset. You can see in the TIN above that there's a line on the left side that looks like it goes from the ground up to space; that's an outlier that was not removed from the dataset. One thing you should look into is methods for smoothing point cloud data and removing outliers, perhaps using a nearest-neighbor analysis.
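One simple version of that nearest-neighbor filtering (a sketch of the common "statistical outlier removal" heuristic, not the specific tool used here): flag any point whose mean distance to its k nearest neighbors is far above the dataset-wide average.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_outliers(points, k=8, std_ratio=2.0):
    """Flag points whose mean k-nearest-neighbor distance exceeds the
    global mean by more than std_ratio standard deviations."""
    tree = cKDTree(points)
    # Query k + 1 neighbors because each point's nearest neighbor is
    # itself, at distance 0; we drop that first column.
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return mean_knn > threshold

# Dense synthetic cluster plus one far-away point, like the
# "line up to space" artifact in the TIN above.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 1.0, size=(500, 3))
cloud = np.vstack([cloud, [[0.0, 0.0, 100.0]]])

mask = flag_outliers(cloud, k=8)
print(mask[-1])  # the artificial outlier is flagged
```

Drop the flagged points (`cloud[~mask]`) before building the TIN and artifacts like that spike disappear.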

If you're taking very close-up photos of the building, you may want to put targets on the building to help relate the photos to one another. If you use targets, make sure each one is unique so that photos don't get matched to the wrong location, and try to get two or three targets in each photo. If you have some targets on the ground, you can take GPS readings at each one to georeference your point cloud dataset, so that any measurements you take from the building represent real-world measurements.

If you want to look into georeferencing your point cloud data, check out Mark Willis' how-to guide (http://palentier.blogspot.com/2010/12/how-to-create-digital-elevation-model.html). It's an old blog post, but the methodology is a good one.
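The core of that georeferencing step is fitting a transform from the point cloud's arbitrary model coordinates to real-world coordinates using the surveyed targets. A minimal 2D sketch (a least-squares similarity transform; the blog's actual workflow goes through GIS tools, and the coordinates below are made up):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (scale + rotation +
    translation) mapping src points onto dst points, e.g. model
    coordinates onto GPS/UTM coordinates at the ground targets."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Umeyama-style closed form via SVD of the cross-covariance.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        U[:, -1] *= -1
        R = U @ Vt
    scale = S.sum() / (src_c ** 2).sum()
    t = dst.mean(axis=0) - scale * R @ src.mean(axis=0)
    return scale, R, t

# Hypothetical GCPs: model-space coords and matching UTM-style coords
# (rotated 90 degrees, scaled 2.5x, shifted by a large UTM offset).
model = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
world = 2.5 * model @ np.array([[0.0, -1.0], [1.0, 0.0]]) + [500000.0, 4100000.0]

s, R, t = fit_similarity(model, world)
mapped = s * model @ R.T + t
print(np.allclose(mapped, world))  # True: the transform recovers the GCPs
```

Apply the fitted `(s, R, t)` to every point in the cloud and distances measured in the model become real-world distances.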

EDIT 2: One last comment: make sure you are using a camera without much distortion. For example, the GoPro is an awesome little camera to put on drones, but the significant distortion caused by its wide-angle lens rules out the standard GoPro for photogrammetric projects. There is a solution to this problem, though it may require taking apart your GoPro: http://www.peauproductions.com/collections/survey-and-ndvi-cameras

Peau Productions sells modified GoPro cameras with different lenses that have significantly less distortion than the lens that comes with the camera. They also sell the lenses themselves if you're up for modifying your camera on your own.

EDIT 3: I know this is an old question, but I thought I'd share OpenDroneMap, an open-source tool for doing exactly this kind of project: http://opendronemap.org/


I think a good way to do this is to use VisualSFM to match the photos (the stronger the GPU, the better) and create a dense point cloud, then MeshLab to create a textured, triangulated model from that point cloud.

VisualSFM:

http://ccwu.me/vsfm/

http://ptak.felk.cvut.cz/sfmservice/websfm.pl?menu=cmpmvs (see especially the 'Technology' page and the paper referenced there)

MeshLab:

https://sourceforge.net/projects/meshlab/

See these for some how-tos and applications (including a UAV one!):

https://www.youtube.com/watch?v=V4iBb_j6k_g

https://www.youtube.com/watch?v=wBKidr0e-XA

https://www.youtube.com/watch?v=-S7HeJvIKcs


- https://www.mapsmadeeasy.com/point_estimator — you can use this to make a flight plan. Set the variables to what you want, and make sure to pick the Inspire/Phantom 3 as the camera near the bottom. You can export this plan as a KML for APM.

Or, if you are more adept, you can use the GIS software of your choice to create a KML grid flight path for upload to Litchi in the following step.

- https://flylitchi.com/ — for flight planning, upload your KML from Maps Made Easy to Mission Hub, and make sure to change the flight height. It is really slick and allows for awesome waypoint missions.

- Now you can fly your mission with the camera settings of your choice.

- Post-mission, use Lightroom to correct distortion (it's the same distortion as the Inspire 1: http://www.inspirepilots.com/threads/inspire-camera-lens-correction-profiles.1270/). If you skip this step, your elevation models will have a kind of concave effect.

- For SfM processing, I would also recommend trying Maps Made Easy as well; they let you use GCPs and have a point-based system: free points at the beginning, and small jobs are free.
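The "concave effect" in the distortion-correction step above comes from uncorrected radial lens distortion biasing the reconstruction (often called "doming" in SfM work). What a Lightroom profile applies is essentially an inverse Brown-type radial model; a toy sketch of the idea (the coefficients here are made up for illustration, not a real Inspire/Phantom profile):

```python
import numpy as np

def radial_distort(xy, k1, k2):
    """Brown radial model: maps ideal (undistorted) normalized image
    coordinates to the distorted coordinates the lens records."""
    r2 = (xy ** 2).sum(axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def radial_undistort(xy_d, k1, k2, iters=20):
    """Invert the model by fixed-point iteration — the standard trick,
    since the distortion polynomial has no closed-form inverse."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = (xy ** 2).sum(axis=1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

# Made-up barrel-distortion coefficients, NOT a real camera profile.
k1, k2 = -0.12, 0.01
ideal = np.array([[0.0, 0.0], [0.3, 0.1], [-0.5, 0.4], [0.8, -0.6]])
distorted = radial_distort(ideal, k1, k2)
recovered = radial_undistort(distorted, k1, k2)
print(np.allclose(recovered, ideal, atol=1e-6))  # True
```

If this correction is skipped, the residual radial error gets absorbed into the reconstructed geometry instead, which is exactly the bowl/dome shape you see in the elevation model.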