This presentation is part of the 2017 3D Digital Documentation Summit.

LIDAR and Photogrammetry: Comprehensive Data Collection for Historical Preservation

Speaker 1:                           Thank you, Jason. So before I get into it, I just want to say this has been a great conference, and the music on Frenchmen Street was a very pleasant surprise. I was not expecting that. It’s been awesome.

Okay, so who is DJS? We are an engineering consulting firm. We’re based out of Abington, Pennsylvania, which is a suburb of Philadelphia. We do a lot of work in forensic accident reconstruction, and that’s kind of where we initially got into laser scanning, through that route in about 2003. I know some people said they started with the Cyrax 2500. I think we also have one of those, unfortunately sitting in a closet just doing nothing.

And we kind of progressed through the years, advancing the laser scanning units that we have as we went along. We’re about 34 full-time employees, and we have a consultant network for different … like, slip and falls, or biomechanical experts. Things of that nature, more related to the forensic end of things.

But recently we started to offer 3D documentation services, and we’ll kind of go into a little bit of that here as far as some of our projects. So just a quick agenda: we’ll go over the tools that we use (not necessarily the absolute tools of the trade, just the tools we have in our toolbox that we like to use for digital documentation), why use them, considerations when using laser scanners, considerations when going about photogrammetry, and then a couple of case studies: the Lincoln Memorial, and the ferry boat Binghamton.

So yeah, the tools that we currently use … we have a Leica ScanStation 310, and also a couple of FARO Focus 330s. So we do get to experience both sets of proprietary software, Cyclone and SCENE. It is kind of an interesting thing to get in and out of those and get our data into a common place in the end.

We also have an Artec Eva for structured-light scanning of smaller objects, and also the DPI-8 hand scanner, which allows us to get into kind of difficult areas to document, so basically we can supplement the terrestrial LiDAR with that. We actually use it quite a bit on vehicle interiors, because it can be kind of hard to get either one of these laser scanners in a vehicle and get good clean data.

As far as image capture technology, we’re currently using the Phantom 4 Pro Plus and the GoPro 5, and those are just upgrades for us. We previously were using the Phantom 2 Vision Plus and GoPro 4. The GoPro 5 is nice because it GPS-tags its image captures, which is useful for pulling that data into photogrammetry applications that take advantage of GPS tags. And of course a DSLR. When it comes down to it, there’s always a cell phone, and you can still get usable data from cell phone image capture.
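
As an aside on why those tags are useful: a minimal Python sketch (not from the talk; it assumes the Pillow imaging library and a hypothetical file name) of reading the GPS position a geotagged camera writes into each photo's EXIF data, which photogrammetry packages can use as rough position priors.

```python
# Minimal sketch, assuming Pillow is installed: pull GPS EXIF tags from a
# geotagged photo so a photogrammetry package can use them as rough priors.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def read_gps(path):
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)          # 34853 = GPSInfo IFD
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    lat = gps_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = gps_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

# Example (hypothetical filename): print(read_gps("DJI_0001.JPG"))
```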

Right, so why use 3D documentation? This has kind of been said already over these past couple of days, but I’ll kind of just go through it quickly. So one of our main concerns is the speed of data capture, especially for forensic matters. We want to get in and out of scenes as quickly as possible and capture the most data we can that may be relevant at the time or in the future.

So traditional methods here, I would say at an optimal amount of time, maybe you get one point per second. And with the laser scanners, these are the ratings that I’ve read. I’ve not seen all those points come in at once, but especially the FARO, the fact that it can collect nearly a million points per second in some scenarios is pretty amazing.

And of course, with the image capture from the DJI and the GoPro, you can set these to capture time-lapse images every couple of seconds. So you can get a lot of data as you’re moving through a site or around an object, and capture it quickly.

And then, the accuracy of measurement. The measurements taken by traditional methods are accurate, but it’s in writing them down and then getting that data over to somewhere else that some accuracy can be lost. The laser scanners obviously maintain all the measurements within their internal storage, and when you get that all put together, it’s kind of a clear picture of what you’re looking at, in this case a floor plan.

And then, robust data sets. With the high number of data points they’re collecting in that short period of time, you end up with tons of data. And people have talked about it: it can be good, it can be bad. You don’t want to over-scan, but you don’t want to under-scan either. But you are collecting a lot of information. So we like that we can collect a ton of information in a short period of time and end up with really large, comprehensive data sets. The same is true for photogrammetric image reconstruction point clouds, which is what we’re looking at here. You can also get nice robust data sets out of photogrammetry.

Another item is data integration with common software packages. For us, like many others, we use a lot of Autodesk products, so it was great when ReCap came about in 2014 and made it easy to bring that data into the other software packages we use. Before ReCap, we were actually still working with kind of the same underlying technology, because it used to be Studio Clouds. Prior to ReCap coming out, we were always trying to find different plug-ins that would allow us to get the point cloud data into our software to do any post work. And for us, a lot of our stuff in the forensic realm ends up in 3ds Max.
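
For readers less familiar with that kind of format shuffling, here is a small, purely illustrative Python sketch (not the workflow described above) that assumes one common flavor of ASCII PTS export, with a point count on the first line and "x y z intensity r g b" on each following line, and converts it to an ASCII PLY that most downstream packages can open.

```python
# Illustrative sketch only: convert an ASCII PTS export (assumed layout:
# count, then "x y z intensity r g b" per line) into an ASCII PLY file.
def pts_to_ply(pts_path, ply_path):
    with open(pts_path) as src:
        count = int(src.readline())
        points = [src.readline().split() for _ in range(count)]

    with open(ply_path, "w") as dst:
        dst.write("ply\nformat ascii 1.0\n")
        dst.write(f"element vertex {len(points)}\n")
        dst.write("property float x\nproperty float y\nproperty float z\n")
        dst.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        dst.write("end_header\n")
        for p in points:
            x, y, z = p[0], p[1], p[2]
            r, g, b = p[4], p[5], p[6]   # columns after the intensity value
            dst.write(f"{x} {y} {z} {r} {g} {b}\n")

# pts_to_ply("scan_export.pts", "scan_export.ply")   # hypothetical filenames
```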

So, laser scanning considerations. These may be old hat at this point, but let’s just go over them. You have line-of-sight data collection considerations, scan density versus time, and data display options. These are all things to think about when planning how you’re going to go about laser scanning an object or a site. So line of sight: obviously, if there’s not a direct line of sight for the laser, you may end up with holes, or really, you will end up with holes. So it’s just a consideration to know that. You need to place the scanner in multiple locations to account for that.

And then scan density versus time. So in this example, this is a two-minute scan with a C10, and you have an average of about 14-millimeter spacing on the wall there, which is, I think, maybe 20 feet away from the scanner. This one is a seven-minute scan, and then you’re down to higher detail, an average of six millimeters on that same wall. So the consideration needs to be there for how much time you have on site and how much detail you’re looking to get in whatever it is that’s being scanned.
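
As a rough illustration of that trade-off (the angular steps below are assumed example values, not figures quoted in the talk), point spacing on a flat surface is approximately the range times the scanner's angular step.

```python
# Back-of-the-envelope check: point spacing on a flat wall is roughly
# range x angular step (in radians). Angular steps here are illustrative.
import math

def spacing_mm(range_m, step_deg):
    """Approximate point spacing (mm) at a given range for a given angular step."""
    return range_m * math.radians(step_deg) * 1000.0

# At ~6 m (about 20 ft), a 0.13-degree step gives ~14 mm spacing;
# tightening the step to ~0.057 degrees gives ~6 mm, at the cost of scan time.
print(spacing_mm(6.0, 0.13))    # ~13.6 mm
print(spacing_mm(6.0, 0.057))   # ~6.0 mm
```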

And then of course the data display options. So there’s the option of no color at all, which is that intensity view in the top left. There’s the option of the onboard camera, which in this one is in the middle; here it’s from the C10. And from my understanding, the onboard image capture has improved in later variants of [inaudible 00:09:52] products, so the third option is less and less needed, but the third one down at the bottom there is basically supplementing the data by taking those pictures yourself with, say, a DSLR on a Nodal Ninja.

So, photogrammetric considerations. What I’m going over right now mainly has to do with drone use, but it also applies to other photogrammetric work. So particularly with a drone, you just want to be cognizant of the ground sampling distance. That basically means that, based on the sensor of the camera that’s on your drone and the altitude at which you’re flying, you’re going to get different levels of detail on the object that you’re trying to capture. So lower altitude, higher detail.
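
A minimal sketch of that relationship (the sensor and image numbers below are approximate Phantom 4 Pro values, used only for illustration): ground sampling distance grows with altitude and shrinks with longer focal length and higher image resolution.

```python
# Ground sampling distance (GSD) for a nadir photo: how much ground each
# pixel covers. Camera values below are approximate and illustrative only.
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """GSD in cm/pixel for a nadir photo."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# ~13.2 mm sensor width, ~8.8 mm lens, 5472 px wide image (approximate P4 Pro)
print(gsd_cm_per_px(13.2, 8.8, 60, 5472))   # ~1.6 cm/px at 60 m
print(gsd_cm_per_px(13.2, 8.8, 30, 5472))   # ~0.8 cm/px at 30 m
```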

And of course, just like with scanning, overlap is important in order to be able to tie these images together, which people have said in their talks. And of course vantage points. It’s great to have a drone and be able to move freely through the air, and you get a lot of different views of objects that you otherwise maybe wouldn’t get from ground positions. So it’s key to get as many different views as you can to cover it.
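
To make the overlap point concrete, here is a small illustrative calculation (the footprint, speed, and interval are assumed example values, not figures from this project) of the forward overlap between consecutive photos.

```python
# Forward overlap is the fraction of the ground footprint shared between two
# consecutive photos. All numbers here are illustrative assumptions.
def forward_overlap(footprint_along_track_m, speed_m_s, interval_s):
    """Fraction of overlap between consecutive nadir photos."""
    advance = speed_m_s * interval_s          # ground distance between exposures
    return max(0.0, 1.0 - advance / footprint_along_track_m)

# e.g. a ~45 m along-track footprint, flying 5 m/s with a photo every 2 s
print(forward_overlap(45.0, 5.0, 2.0))   # ~0.78, i.e. about 78% overlap
```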

And of course F.A.A. regulations, always something to consider. So Part 107 went into effect on August 29 of 2016, and that was a change from the Section 333 exemption. It’s just something to be aware of. This could be a whole talk on its own, the Part 107 rules and regulations, but definitely be cognizant of it. And part of that knowledge is knowing what airspace you can work in without being where you shouldn’t be as far as controlled airspace.

So this image here is from SkyVector, which is a web page that gives you the sectional charts, which, even though some of us have passed the test, are still pretty confusing to look at. This is the one for Philadelphia. So that is a good place to look and understand as well, but what we mainly use to kind of digest the same type of information for drone use is AirMap.

And that’s kind of an application that’s been tailored for drone pilots, where it takes that same information you’re getting about the different airspaces in your area and lets you know via these kind of checkboxes on the side. You can turn them on to see what’s what for where you are.

This is obviously right here where we are in New Orleans right now. None of these checkboxes are turned on, so if you do go ahead and turn them on, it shows you different airspace considerations that you need to deal with. In this case, the main one I was seeing was this very large blue block here, and zooming even further out shows how far it extends and what exactly it is. It’s restricted airspace. Restricted special-use airspace is just something to know is there, and then you can figure out what your needs are in order to operate in that area.

So, a case study: the Lincoln Memorial. This one actually does not include any drone use, of course, because it’s in D.C., but it’s included because it was a nice National Park project that we did. So this was done in conjunction, obviously, with the National Park Service and also with [inaudible 00:14:16], who you guys saw yesterday. It was a joint effort to document the Lincoln Memorial. And for anything that we go about doing, obviously, like others have said, planning is key. If it’s possible to visit the area beforehand, you can kind of scope it out and see what you’re going to be dealing with.

So these are just some images of different areas that we were concerned about, trying to see how we could approach them. Being on site is one thing; gathering any type of aerial or satellite imagery and kind of pre-planning where you might place your scanners is also a good idea. You can see here it’s a little haphazard with the foliage around. We wanted to see how we would be able to get in and out of there and still have those scan positions be able to tie back to the other ones.

And then any type of floor plan you can get access to is also helpful … obviously others have spoken to this. These are just plans from the Library of Congress that are available for the Lincoln Memorial. This is actually the basement area, which, prior to seeing this, we didn’t really know anything about, and it was good that we saw that, because it’s massive.

So just some views of the data we collected. Pretty comprehensive. Some voids, but they’re minimal, and no real noticeable voids on the building itself. There are some around the site where there are foliage issues, but pretty good coverage.

And then we did do the supplemental photography in this case, with the [inaudible 00:16:07] Ninja HDR photography, and then mapped that to those points. Just another view here. A couple more views, cross sections, and then another cross section showing that basement, which, like I said, we were glad we found out was there before we went about doing it, because that on its own is quite an endeavor.

This data is up on the CyArk website; there’s the page there if you want to check it out. And then the National Park Service and CyArk crew members here, along with our guys from DJS, down in the basement. And as far as some metrics on it: 609 laser scans, roughly 4 billion data points from those laser scans, and 5,000 photos, which are all those HDR panoramic images. So each setup, I think, was 20 or 30 photos, maybe more. Four days total data collection, and that was running two C10s and two FAROs all day.

This next one, the ferry boat. This is the one where we actually did combine LiDAR with photogrammetry, so this would be more in line with the rest of the talks that you’ll hear today. The ferry boat Binghamton was launched in 1905 and operated for 50-plus years between Manhattan and Hoboken, carrying cars and people across.

In 1969 it was sold for conversion into a restaurant, and so it was essentially anchored there in Edgewater, New Jersey. It opened for restaurant and nightclub operations in 1975, and then was listed on the National Register of Historic Places in 1982.

So there’s a pretty interesting backstory on this; you can look it up if you like. I’m not going to go into it today, but it did cease restaurant operations in 2007, and it was still anchored there, basically sitting there deteriorating from then until now. Hurricane Sandy came through causing major damage to many areas, of course, and the ferry boat was definitely one of them, with major flooding on the boat itself.

So when we were brought into the project, of course we went into it thinking, “Okay, let’s plan. Let’s figure out what we’re going to do.” So this is its location there between New Jersey and New York on the Hudson River, and a closer view here. And with it being in New Jersey, it’s not actually too far from where we are in Abington, so we went up ahead of time and took a look at the current conditions. But essentially the walkway that goes across to the boat itself is blocked off, inaccessible, so actual physical presence on the ship was not possible.

So this was all going to be very much remote documentation, so we wanted to see what that would entail, and visiting the site was key. Since it had been sitting there for a while, many animals have made it home, so here are some geese, which was a consideration for us knowing that we were going to be flying a drone around. You know, another thing to watch out for, that some people may not think of, is the wildlife that may interact with the drone.

And of course, being on water, we wanted to be aware of the tides and pick the best time, when the ship itself was most exposed, which turned out to be in the morning. And then of course the airspace considerations for this area. This image was captured close to when we went, but this isn’t necessarily how it would look right now. It has a lot of temporary flight restrictions shown here in red for V.I.P. movement, sporting events, things of that nature.

But looking at it on SkyVector (this was kind of before we knew about AirMap), it was really kind of hard to figure out where our area of operation was in here. But it turns out it was right about here, which is just on the edge of that Class D airspace, I believe, for Teterboro. So we actually did request FAA permission and went through a long process of waiting to hear back. It turned out we were okay; we didn’t need any permission there, but we wanted to make sure, because this is kind of a sensitive area around New York City.

So the actual execution: we did laser scan from the shore to get the western elevation. And as I said, we were there early in the morning to account for the low tide at that point, which did mean that the sun was directly in our faces as we were going about it. That was one thing we didn’t consider, because flying the drone and trying to capture images with the sun directly in line with the camera would be bad. So we actually just had to wait a bit and let the sun rise up a bit higher before we went on flying.

So the drone documentation. These are just some images here, going around, showing kind of the current status at the time. This was August, September of last year. I’m just going to run through these. And actually, this image shows a little bit of what can happen with the sun being a factor. You see these kind of stripes on the image? That has to do with shadows cast by the sun through the spinning props onto the sensor. These kind of wiggle around if you’re shooting video, and I’m surprised that it was caught in a still, but it did show up there as well.

And then an overhead view here. And the last portion of how we went about this: knowing that we needed to maintain visual contact, and that with the drone behind the ship as seen from the shore we could only get so much while complying with that, we did utilize a boat with the GoPro to go around the ship once the tide was at high tide. So these are some images of that.

And then the data from this. These are the camera positions, both from the drone and from the GoPro on the boat. And this is the kind of coarse image matching in Pix4D, and you can also see in there some manual tie points that we put in to kind of check and validate that the images were aligning well. And then once you have the coarse cloud, you can generate the densified point cloud, which is a much fuller cloud. So that’s a view of it here.

And then these are the results from the laser scanning. You can see the different scan positions on the shore, so six scans with the FARO scanner. And then this is the combination of both the Pix4D photogrammetric image matching and the LiDAR data. Just to kind of show a few different angles here.
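
The alignment on this project was handled in the vendors' own packages; purely as an illustrative sketch of the general idea of combining the two data sets, and assuming the open-source Open3D library plus hypothetical file names, one could register a photogrammetry cloud to a LiDAR cloud with ICP roughly like this.

```python
# Illustrative sketch only (assumes Open3D and hypothetical filenames):
# align a photogrammetry point cloud to a LiDAR point cloud with ICP.
import numpy as np
import open3d as o3d

lidar = o3d.io.read_point_cloud("lidar_scans.ply")
photo = o3d.io.read_point_cloud("photogrammetry.ply")

# Downsample so ICP runs on a manageable number of points.
lidar_ds = lidar.voxel_down_sample(voxel_size=0.05)
photo_ds = photo.voxel_down_sample(voxel_size=0.05)

# Refine an initial (here: identity) alignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    photo_ds, lidar_ds, max_correspondence_distance=0.5,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

photo.transform(result.transformation)          # apply to the full-resolution cloud
o3d.io.write_point_cloud("combined_aligned.ply", photo + lidar)
```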

We were really impressed, again, with the photogrammetric image matching and its ability to really give us that data on the other side of the boat that we wouldn’t have been able to get otherwise. A few more images.

Then we worked with a partner to go ahead and generate the CAD drawings. And this … hopefully it comes up. It may not. There we go. So yeah. Essentially this boat is going to be destroyed because it’s beyond repair, so it needed to be documented properly before they went ahead and did that. That’s why we were there, and that’s why we went through the effort of getting this documentation.

So wrapping up. Just going to get to a quick video that is at the end of the presentation here, which is a low orbit of the scan data. Not sure why these are bogging it down a bit, these images.

Okay, so I’ll keep hitting the button. Hopefully the image plays. If anyone has questions or anything while I press the button. Go ahead.

Audience Member:        Were you manually flying the drone, or [inaudible 00:27:08]?

Speaker 1:                           We manually flew the drone, yeah. We kind of find that that’s best for us. We’ve just had some weird things happen when we fly autonomously; it’s kind of a level of control that we want to have. So we manually orbited around.

Audience Member:        [inaudible 00:27:29]

Speaker 1:                           Actually we just set it to take a picture every two seconds, and we figure … there’s kind of no such thing to us as too much data. In that realm of pictures, we can always take some out later if we needed to.

Go ahead.

Audience Member:        Was the boat rising and falling at all? Or was it just stuck there?

Speaker 1:                           It was stuck, yeah. That was a concern, yeah. Obviously, if it had been moving around, that could have made tying the datasets together difficult, because you have to go around it.

Sure.

Audience Member:        I just wanted to ask if that Lincoln Memorial [inaudible 00:28:01] there?

Speaker 1:                           It’s stored by CyArk, so whatever the National Park Service kind of decided to do as far as availability is where it’s at. I’m not sure personally where that is now.

Uh huh?

Audience Member:        I’m shopping for gear right now and wondering if you could talk about whether you should spend a lot of money on a drone, or whether you can spend a little money on a drone?

Speaker 1:                           Right.

Audience Member:        Also, how do you feel about the software and usability between [inaudible 00:28:31] and FARO?

Speaker 1:                           I’ll go with the drone question first. We wanted to get into drones and had that same concern about cost and whatnot. Some systems we looked at were $70,000, and we were like, “That’s a bit much to bite off when you’re just trying to figure things out.” So that’s why we went with the DJI Phantom 2 Vision Plus initially, because it was only $1,500. That’s what actually collected the data on this one. So my opinion is that the DJI, for what we’re using it for here, is perfectly adequate.

As far as laser scanner usability, I kind of think the FARO scanner is still easier to use as far as the interface and understanding what kind of point density you’ll get at what distance. So yeah, I’d probably say the FARO is a little easier to use. But we actually process that data in Cyclone, so Cyclone has the advantage on the software side.

Thank you.

 

Abstract

New technology has altered the manner in which three dimensional data is collected and processed. From terrestrial laser scanners to unmanned aerial systems (UAS, a.k.a. drones), three dimensional capabilities are expanding exponentially. The three dimensional point cloud data, whether collected by a laser scanner, or produced via matching common points in a collection of aerial/land-based photographs (photogrammetry), is revolutionizing the way that we view and utilize measurement information. Dense point clouds produce the appearance of solid walls, floors, ceilings, and roofs, allowing us to understand the context of existing conditions directly within design software, accelerating the workflow of producing high quality drawings and Building Information Models (BIM) for use in new construction and/or renovation.

This session will cover the different methodologies for collecting and working with data from both terrestrial laser scanning platforms, and high flying aerial photography captured by drones.

When using terrestrial laser scanning, the user places a laser scanner in a fixed position, typically on a tripod. The laser scanner rotates 360° horizontally, while an oscillating mirror reflects the projected laser beam vertically over a ranged distance. This fixed position and range typically results in a large number of positions being needed to collect measurements on all surfaces.

In contrast to laser scanning, small Unmanned Aerial Systems (UASs) provide a platform of data capture that can acquire information from a large variety of perspectives, in a short period of time. With a large variety of photos of the subject, complex photogrammetric algorithms can identify similar features, and place them appropriately in 3D space.

While both of these data collection methods are extremely powerful on their own, the session will additionally discuss how they can be combined to allow for more comprehensive coverage in building/site/object documentation.

Before taking to the skies, it is imperative that technicians understand the rules and regulations which govern the National Airspace System (NAS) relating to the use of Unmanned Aerial Systems (UASs). The session will briefly summarize these critical regulations, and include some tips to help others plan successful UAS flight missions. The session will also touch on the ability to work with point cloud and 3D surface data in major software packages, including Revit, AutoCAD, and Navisworks. With access to rich data such as these, design, restoration, and preservation efforts are hastened tremendously.

Brief case studies will include our recent collaborations with the National Park Service (NPS) in documenting the Lincoln Memorial and Washington Monument on the National Mall, as well as project examples which combine laser scanning with photogrammetry.

Speaker Bio

Jon W. Adams oversees the development, monitoring, and implementation of procedures for high-quality BIM documentation and data processing. Jon has been with DJS Associates, Inc., since 2008 and is involved with the planning, overseeing and execution of site documentation, as well as the creation of 2D and 3D as-built models. Through this hands-on approach, and attention to detail, he is able to help clients determine how to best address their needs on a case-by-case basis. He has been the coordinator and team leader of many complicated preservation projects including the documentation of the Lincoln Memorial and the Washington Monument.
