This paper is part of the 2017 3D Digital Documentation Summit.

Scientific photogrammetric imaging of a large-scale Diego Rivera fresco mural

Mark:                    I’d like to first thank NCPTT, and particularly Jason Church, for making it possible for Carla and me to address you remotely today.

What we’re gonna do is show you our approach to scientific photogrammetry on a large scale. In this case, with a Diego Rivera mural.

So here it is. It’s called “Pan American Unity.” Not the original name. But it’s 74′ x 22′, and it’s installed in the Diego Rivera Theater.

Originally, Diego wanted it called “The Marriage of Artistic Expression of the North and of the South on this Continent.” It was done during the Golden Gate International Exposition at Treasure Island in 1940.

Here’s Diego’s full scale charcoal drawing, and you can see he was doing it in public. People watched him both do this drawing and paint this subsequent mural. And here we see Diego at work.

Now, I’d like to spend some time talking about the photogrammetry methodology that we employ at CHI and also teach in our imaging classes.

We’d like to thank Tom Noble and Neffra Matthews, who both mentored us and invented the primary mechanisms of the following method.

When we do photogrammetry, our goal is to capture photographic image sequences in a way that can be qualitatively evaluated, which means other people can judge for themselves the usefulness and re-usability of this data for their purposes, and that contains quantitative measurements of the uncertainty of the point locations comprising the photogrammetric surface of the 3D material. And we want this preserved for future generations. Photogrammetry and computational photography generally have a real advantage for long-term preservation.

Here’s the basic equipment we need: a camera, a tripod, and calibrated scale bars. The calibrated scale bars are very simple to use and provide a very low level of measurement uncertainty.

So what we’re gonna show you is a rule-based method of capturing this material. It creates successful 2D and 3D results with quantifiable measurement uncertainty. It’s usable in any kind of photogrammetry software, and it will give you essentially the same result if you choose good photogrammetry software. If you do it correctly, every time you build the 3D result, it will be the same, again qualified by the fact that you’re using good software.

Some people say, “Just take a lot of pictures of your subject. You’re gonna be okay.” The problem is that today’s photogrammetry software is so good that if you take a lot of pictures, it’s gonna look great. Well, if you’re doing video gaming, that’s just dandy, and that’s a perfect outcome.

However, if you’re doing scientific imaging, these haphazard photo sets will generate models with widespread uncertainties that are unknowable and cannot be backed out of the result when you’re trying to eliminate error in the processing.

So here’s what you have to have: good geometry. That basically means you want good base-to-distance ratios (or base-to-height, if you’re flying a drone). The base is the distance between any two sequential photographs. So if I take a shot, step a meter, and take another shot, that’s a one-meter base.

We also want a consistent overlap between photos of 66%, which means you’re moving a third of the field of view each time. Very importantly, we need multiple look angles on every point of the surface, from these good geometric positions, in a way that solves problems with specular materials and drives your uncertainties down to very small levels.

Here’s base to distance. If you have a wide base, the intersection of the two projections is small, and projections do have a diameter that increases with distance, so you get small depth uncertainty. If you have a relatively shorter base between photos, the area where the projections overlap is longer, and that gives you more depth uncertainty. Of course, we wanna make sure the photos overlap by two-thirds.
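
To make the base and overlap rules concrete, here is a minimal sketch, assuming a simple pinhole model and hypothetical numbers (a 36mm-wide sensor, 50mm lens, and 1.5m working distance, none of which come from the talk), of how the step between camera stations could be computed:

```python
def footprint_width(sensor_width_mm, focal_length_mm, distance_m):
    """Width of the subject area covered by one frame, in meters.
    Simple pinhole model: footprint = sensor_width * distance / focal_length."""
    return sensor_width_mm * distance_m / focal_length_mm

def base_for_overlap(footprint_m, overlap=2 / 3):
    """Camera-to-camera spacing (the base) that yields the given overlap.
    A two-thirds overlap means each step advances one third of the field of view."""
    return footprint_m * (1 - overlap)

# Hypothetical setup: full-frame sensor (36 mm wide), 50 mm lens, 1.5 m away.
fp = footprint_width(36, 50, 1.5)   # ~1.08 m field of view
base = base_for_overlap(fp)         # ~0.36 m step between stations
print(f"footprint {fp:.2f} m, base {base:.2f} m, "
      f"base-to-distance ratio {base / 1.5:.2f}")
```

With a two-thirds overlap, every surface point appears in at least three images along a strip, which is what sets up the nine look angles described next.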

So, we also wanna give you a few other tips. Use invariant focus: set the focus the way you need it, referring to a depth of field chart if you need that for your focusing decision, and then tape the lens. You also wanna use an invariant focal length, which means you can’t zoom. There are very good reasons for all of these things. So use a prime lens, or if you must use a zoom, tape it at either its front or back extent.

Then you also wanna have an invariant aperture, meaning don’t change your f-stop. Whether you have a cloudy sky or a sunny sky, you can put the camera on aperture priority and it’ll change the shutter speed to give you a similar exposure; changing shutter speed has no impact. Also, set the aperture so that you’re getting your sharpest image. Don’t use very small apertures, even if software like PhotoScan recommends it, because that decreases your sharpness significantly, and that has bad effects for your photogrammetry projects.

Use low ISO so that you get a good signal-to-noise ratio; the signal is the information from your photo.

Finally, you want a well-designed sensor array. Your sensor array is where you stand when you take your photographs. Redundancy, which is the scientific term for having multiple sensors recording the same data, is a crucial element of all scientific measurement systems. In photogrammetry, we require nine points of redundancy for every point on the surface. This has very positive effects, such as allowing you to capture more specular surfaces, and it reduces the uncertainty volume down to very, very low levels.

Let’s give you an example. Here’s our Area of Interest. We take our first image with two-thirds of the field of view off the Area of Interest. Then we move a third of the field of view, which gives a two-thirds overlap, and move a third of the field of view again, another two-thirds overlap. Each step we take is the base between the photos. Then we take a final photograph where, again, two-thirds of the field of view is off the Area of Interest.

Now we’ve got three look angles at every point on the subject. What we do next is take the camera, move it up slightly, tilt it down about 15 degrees, and rotate the camera 90 degrees to portrait mode. Then we do two-thirds-overlapping shots covering the Area of Interest as we have described.

Then you take the camera, lower it below your first strip of images, tilt it up 15 degrees, and repeat the two-thirds overlap process.

So what this gives you is nine look angles at each point on your surface. You will find that if you have nine look angles on each part of your surface, you will have a really good model that will develop well in any competent software package.
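
As a sketch of the pattern just described, the following illustrative Python generates the three strips: a level landscape strip bracketing the Area of Interest, a raised strip tilted down 15 degrees in portrait mode, and a lowered strip tilted up 15 degrees. The spacings and heights are hypothetical stand-ins, not CHI’s prescription:

```python
from dataclasses import dataclass

@dataclass
class Station:
    x: float          # position along the wall, meters
    y: float          # camera height, meters
    pitch_deg: float  # tilt: negative = down, positive = up
    mode: str         # "landscape" or "portrait"

def strip(x_start, x_end, base, y, pitch_deg, mode):
    """One strip of stations spaced one base apart (two-thirds overlap)."""
    n = int(round((x_end - x_start) / base)) + 1
    return [Station(x_start + i * base, y, pitch_deg, mode) for i in range(n)]

# Hypothetical numbers: 10 m wide Area of Interest, 1.08 m frame footprint,
# 0.36 m base (two-thirds overlap), camera at 1.5 m height.
fp, base, aoi_width, y = 1.08, 0.36, 10.0, 1.5
x0, x1 = -2 * fp / 3, aoi_width + 2 * fp / 3  # start and end 2/3 of a frame off the AOI

stations  = strip(x0, x1, base, y,        0.0,  "landscape")
stations += strip(x0, x1, base, y + 0.3, -15.0, "portrait")  # raised, tilted down
stations += strip(x0, x1, base, y - 0.3, +15.0, "portrait")  # lowered, tilted up
# Portrait mode narrows the horizontal field, so in practice those strips
# need a tighter base; a single base is kept here for simplicity.
print(f"{len(stations)} stations, three look angles each: nine per surface point")
```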

For web resources on photogrammetry, we have much of this material on our website at culturalheritageimaging.org. The method that you’ve just seen is totally software independent and can be used with today’s, or tomorrow’s, software. Currently we use Agisoft PhotoScan Pro, but that recommendation can, and will, change as photogrammetry software develops.

So at this point, I’d like to turn it over to Carla. And she can proceed from here.

Carla:                     So, I’d like to take a little bit of time and walk through the Diego Rivera mural project in a little more detail. Essentially we had a team of three people on site with the mural for four days. We rented a lifter and mounted our camera and lights and so forth. We shot the mural two times: once with a 24mm lens, and once with a 50mm lens, so that we could get really good base-to-height geometry from the 24, and then the 50 to give us the fine detail that we were looking for.

We had a number of goals with the project, which included really just getting a good baseline of the mural’s shape and color. One of the things driving this project is a plan to move the mural. We’re interested in giving the conservation folks the ability to identify areas that may be problematic and need to be dealt with before the move, and also a baseline that could allow people in the future to monitor changes to the mural by acquiring the data again. We also wanted to be able to give data to the folks doing the architectural modeling of a new home for the mural.

But we also had some research goals in mind, which were to make this data accessible; to make the mural itself accessible through digital representation, since not that many people actually know about it and visit it; and to enable distributed collaboration on mural research.

So here’s the mural. You can see it’s pretty interesting the way it’s installed in this space. The mural goes all the way up to the ceiling and all the way to the walls, and there’s a railing that is in a very inconvenient place when you’re trying to image the bottom of it.

Here’s a little time lapse of us; this was the basic approach, going up and down in the lifter. We had the camera mounted so that it was talking to a laptop, so we could see what the camera was seeing as we went and stopped the lifter in the different locations we needed to get the proper overlaps that Mark just went over. There was a lot of that for four days.

The mural also isn’t installed flat in the space; the panels aren’t flush with each other. It was painted flat, but the space didn’t quite accommodate the mural, so it was installed in a kind of quarter-moon shape. That added a challenge to the imaging and would also add a challenge to any kind of stitching, although an orthomosaic approach resolves that.

Here’s the full set of images from that campaign. Here you can see the overlaps as we zoom in.

This is another short video that shows the mural’s surface. You can see here, as the color goes off, that we actually have the information about Rivera’s brushstrokes in the geometry of the mural’s surface. So we can see real details within the surface.

Because the mural was done as part of an Art in Action exhibit at a World’s Fair, there’s a lot of good historic documentation of the creation of this artwork, which is really great to go back and compare and to have when you’re looking at it.

Take a look at the hands, and the tool the hands are gripping here, as the color goes away. You’ll see that we have that information right in the 3D surface, in the 3D model that we were able to create.

So now what I want to do is zoom in and look at a few detail areas. You can see here a few of the areas that we’re gonna be looking at.

First I wanna do a little demo of this detail of the mural within the PhotoScan software. The reason I wanna show you this is that there’s a crack that runs through this area. Now, we know there’s a crack; it goes all the way to the right edge of the mural, but it’s more than ten feet off the ground and it’s hard to see the details of it. As it moves farther into the center of the mural, it gets thinner and more difficult to see.

What we’re looking at here is the dense cloud. You can see as I manipulate this a little bit, the crack in the surface. Now I’m gonna show you the solid model with no color. You can see here that we actually have in the geometry the crack that’s in the mural.

Also, we were able to detect that the crack actually splits here and becomes kinda two cracks, which was something the conservators had not noticed in looking at the mural before. You can see it right along here, and the split.

So, as I mentioned, there were some challenges. This thing’s big, and it’s a lot of work that way. The way it’s installed also creates some difficulties in capturing it correctly. And because the surface of the mural is so subtle, our ground sample distance requirements were driven to much higher resolution than we would normally shoot for most of the 3D subjects that we tackle.

Also, because it’s a fresco painting, we really needed to think about the color management issues for the data. We’re not gonna have time to get into that in this short talk, but we did pay attention to properly color profiling our images any time the lighting changed.

This was from the first campaign that I talked about, the four days at the mural, done in December of 2015. You can see from this PhotoScan report, which is color-coded; here’s the scale right here on the left. You want everything to be this good, deep purple, which shows that you have nine or more look angles; that’s our goal, as Mark presented to you earlier. You can see that along the top and the bottom, and even in the corners, we have missing data: that number decreases, and we don’t really have enough look angles.

So what we decided to do was go back to the mural and spend two more days there in January of this year, just to reshoot the top and the bottom, and the corners in particular. So here is the output from the sparse cloud after we added the additional photos. You can see now we’re getting that good purple color almost everywhere, and we’re down to about eight images in a few places. But overall, we’ve now got good, solid coverage of the entire surface.

Let me talk a little bit more about the resolution of the data. As calculated by the PhotoScan software, we have essentially 38 samples per square millimeter. By a sample, I mean an XYZ point in space with color. That’s just over six pixels per linear millimeter.

The medium-quality dense cloud that we were able to create has almost half a billion points, and at ultra high we would have greater than seven billion points for the full mural. That covers an area of a hundred and forty-five square meters of painted surface.
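
As a quick arithmetic check, using only the figures quoted above, the areal density converts to a linear one, and scaling it across the painted surface shows why the point counts land in the billions:

```python
import math

samples_per_mm2 = 38   # areal sampling density quoted above
area_m2 = 145          # painted surface of the mural

print(f"{math.sqrt(samples_per_mm2):.1f} samples per linear mm")  # ~6.2
total = samples_per_mm2 * area_m2 * 1_000_000   # 1 m^2 = 1,000,000 mm^2
print(f"~{total / 1e9:.1f} billion points at that density")       # ~5.5 billion
```

That figure sits between the half-billion medium-quality cloud and the greater-than-seven-billion ultra-high estimate.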

Our target, our plan for this project, was to shoot the mural with a 50mm lens on a 50 megapixel camera at five feet away. That would give us just under eight pixels per linear millimeter, or roughly 200ppi, which for those of you who do digitization may not sound like a high number. But we’re talking about doing that for a surface that’s 22′ x 74′. What we got back from PhotoScan is that we had a 5’2″ average distance and were getting about 7.6 pixels per linear millimeter. So pretty close to our plan.
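
The plan’s arithmetic can be reproduced with the standard pinhole ground-sample-distance formula. The sensor width below is an assumption (a typical 50-megapixel full-frame body, matching the 8,688-pixel image width quoted later in the talk):

```python
def gsd_mm(sensor_width_mm, image_width_px, focal_mm, distance_mm):
    """Ground sample distance: the size of one pixel projected on the subject."""
    pixel_pitch = sensor_width_mm / image_width_px   # mm per pixel on the sensor
    return pixel_pitch * distance_mm / focal_mm      # mm per pixel on the subject

# Assumed body: 8,688 px across a 36 mm wide sensor, with a 50 mm lens.
for label, dist_mm in [("planned 5 ft", 5 * 304.8), ("actual 5 ft 2 in", 62 * 25.4)]:
    g = gsd_mm(36.0, 8688, 50.0, dist_mm)
    print(f"{label}: {1 / g:.1f} px per linear mm (~{25.4 / g:.0f} ppi)")
# planned 5 ft: ~7.9 px/mm (~201 ppi); actual 5 ft 2 in: ~7.7 px/mm,
# consistent with the ~7.6 px/mm PhotoScan reported.
```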

In terms of precision, we follow an error-reduction workflow, where we remove outlying points in an iterative process within the PhotoScan tool. We were able to get to a root mean square error of just under 0.1 pixel, which was our target.
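
PhotoScan’s gradual-selection tools do this filtering internally; the sketch below only illustrates the iterate, cull, and re-optimize logic, and its helper names are invented for the example rather than taken from any real API:

```python
import math

def rmse(errors):
    """Root mean square of a set of reprojection errors, in pixels."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def reduce_error(points, error_of, target=0.1, cull_fraction=0.10):
    """Iteratively drop the worst-fitting tie points until RMSE <= target.
    `error_of` maps a point to its current reprojection error; in real
    software the cameras are re-optimized after each culling pass."""
    pts = list(points)
    while pts and rmse([error_of(p) for p in pts]) > target:
        pts.sort(key=error_of, reverse=True)       # worst errors first
        pts = pts[int(len(pts) * cull_fraction):]  # remove the worst 10%
        # ... re-run camera optimization here before the next pass ...
    return pts
```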

We also used calibrated scale bars in this project. For three scale bars we entered the known calibrated values and had PhotoScan tell us the average error, which was just over a tenth of a millimeter: 1.4 tenths of a millimeter.

Then we had a check scale bar, where we did not give the software any information and just had it estimate the distance, and we were at 1.2 tenths of a millimeter in that case. So that’s another way to get some idea of how well we’re doing.

One of the ways we feel it’s gonna be really helpful to share the mural data is through orthomosaics. As most of you know, if you have good 3D data along with good photographic data, as we do with photogrammetry, you can create really high quality two-dimensional image views that are perspective-corrected for any view. That’s gonna be a great way to get the data out there.

This little piece of the mural is just about 70″ tall, and in this case the orthomosaic I built of it is 12,288 pixels. But let’s look a little closer at the orthomosaic potential here. This is one of the input images; with a 50 megapixel camera that’s a lot of pixels: 8,688 x 5,792.

Let’s take a little closer look at this beautiful cat. I don’t know, I’m just really drawn to this cat. This area of the cat, cropped from one of the input images, is just over 3,000 pixels across. And then here it is from an orthomosaic I created of that area, at slightly higher resolution, but pretty close.

And if we zoom in a little tighter on just the eyes of the cat, this area is just under 5 inches wide. It’s just over a thousand pixels as cropped from an input photo. And then here it is from the orthomosaic.
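
A rough check on these crop sizes, using just the dimensions quoted:

```python
# Dimensions as quoted in the talk.
print(f"{12_288 / 70:.0f} ppi over the ~70-inch orthomosaic strip")  # ~176 ppi
print(f"{1_000 / 5:.0f} ppi over the ~5-inch eye crop")              # ~200 ppi
# Both sit near the ~7.6 px/mm (~193 ppi) capture resolution reported earlier.
```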

So we feel like we’re getting pretty good quality data down to this level of detail from the orthomosaic. Again, we’ll be able to put that out as a very high resolution gigapixel image.

The previous high resolution image of the mural was made about 20 years ago by scanning medium format film images, and then stitching them. That was over 14,000 pixels across, which in its day, 20 years ago, was very high resolution. Obviously, we’ve come a long way in 20 years.

We’re still working on the best way to share the mural results and get the images out there. As I mentioned, we think orthomosaics are a great way to go, because it’s already pretty straightforward to share high resolution images over the web. We’ve already produced some videos, as you’ve seen, and we’ll be producing additional videos, and we’ll probably share some detail areas using Sketchfab. We’re also looking at the 3D Heritage Online Presenter tool, and some other ways we might share data.

If we’re talking about researchers and conservators, then we need to think a little differently; they probably want more resolution than the general public. So we’ll be doing some work with that audience to figure out what level of detail is useful for them, and how we might make it available.

And I want to take just a minute here, wrapping up the talk, to talk a little bit about our approach to the metadata and the process history for this project.

At Cultural Heritage Imaging, at our core, we really believe in scientific approaches to imaging. That’s so that our data can be used by others, can be evaluated by others, and is useful into the future. To really meet the requirements of science, we have to make the original data available in some way, and we also need to keep track of what we did. In other words, you can’t just show the results; you have to show your work.

We’ve been working on some software tools in this area. One of them is called the Capture Context tool; DLN stands for the Digital Lab Notebook. The idea here is that you can take information about all kinds of things: the location, the subjects, who was involved, who was paying for the project, and what the equipment is. We make that really easy to collect in a natural language way, and we store the data so it can easily be turned into templates and reused.

Our approach here would apply to any computational photography, or photograph-based digital representation. The initial versions of these tools we built with RTI in mind; they’re tuned for that. We’re working on these tools for photogrammetry, and our idea is that they could also be modified to support multispectral imaging and other kinds of image sets.

But at the end of the day we want, under the hood, without the user needing to know how to do it, to produce linked open data, and to have that data be mapped to the CIDOC Conceptual Reference Model. We’re actually using what’s called CRMdig, which is specifically for digital provenance.

Our focus here is on the metadata about the digital representation, and doing that in a way that it can link to the information about the subject matter itself.

Here’s the main interface. There’s a database behind this that helps you reuse the data and create templates and so forth. But what you produce out of here is an XML file and an RDF file, which are pretty well understood formats for a linked open data approach. That’s what you get out the other end.
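
As an illustration of what that RDF output can look like, here is a minimal rdflib sketch. The CIDOC CRM classes and properties used are real CRM terms, but this particular mapping and the identifiers are illustrative, not the DLN’s actual CRMdig output:

```python
# pip install rdflib
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/dln/")   # hypothetical project namespace

g = Graph()
g.bind("crm", CRM)

capture = EX["capture-session-001"]
g.add((capture, RDF.type, CRM["E7_Activity"]))              # the imaging event
g.add((EX["photographer-01"], RDF.type, CRM["E21_Person"]))
g.add((capture, CRM["P14_carried_out_by"], EX["photographer-01"]))
g.add((capture, CRM["P16_used_specific_object"], EX["camera-01"]))
g.add((capture, CRM["P3_has_note"], Literal("Photogrammetry capture, 50mm lens")))

g.serialize("capture.rdf", format="xml")   # the RDF/XML file mentioned above
print(g.serialize(format="turtle"))        # human-readable view of the same graph
```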

We have a secondary tool called the Inspector. The idea here is that you want to validate that the image sets meet various rules and requirements for the type of digital surrogate that you’re trying to create. You want that data to also be added to this Digital Lab Notebook.

So we are very happy to have some support from NCPTT to add photogrammetry data sets to these two tools. We have RTI versions that will be available very shortly as betas, and we’re adding the photogrammetry support right now.

This work comes out of a lot of years and bits and pieces of various grants we’ve had in the past that have provided support. We’ve been partnering with various developers, including Martin Doerr, who’s been providing the CRM expertise.

One of the goals of this work, the whole idea of the Digital Lab Notebook, is really democratizing technology. We think a key to that is separating authenticity from authority. What we mean is that the data should stand on its own: people should be able to assess the quality of the data based on information about the data, rather than on who created it.

So the work of any person that follows good practice should stand up to the work from the British Museum or Harvard or the MET, or pick your favorite top-notch institution. So that’s really one of the goals here, is to enable that.

I’d like to close with some quick acknowledgements. Collecting the image data for the mural was primarily funded by a group called The Friends of the Diego Rivera Mural. The City College of San Francisco is the steward, owner, and home of the mural, and they also supported this project. Will Maynez, a historian of the mural from City College, went to great lengths to help support the project. And various other folks also dove in to help with specific aspects of the project.

You can find us in various places online. We’d love it for you to do that. Thank you very much.

Abstract

This paper will share our experience capturing the fine surface details of a 6.7 by 22.5 meter (~151 square meter) 1940 fresco by Diego Rivera.

The project had multiple motivations: to produce benchmark historic documentation of the current state of the mural; provide details of the mural’s surface for conservation and restoration planning; promote awareness and research of the mural iconography and the brushwork of the artist.

In 1940, Mexican artist Diego Rivera (1886-1957) painted a huge mural (22 feet x 74 feet), an inspiring vision of the unity of art, religion, history, politics, and technology in the Americas. Originally titled The Marriage of the Artistic Expression of the North and of the South on this Continent, it is commonly known as Pan American Unity. Rivera created the work during the 1940 season of the Golden Gate International Exposition (GGIE) at Treasure Island on the Bay in San Francisco, California. The mural was the centerpiece of a program called “Art In Action” that featured many artists creating their works while the public watched.

Several aspects of the project will be highlighted, including: the determination and achievement of the resolution and precision requirements; the metadata strategy for the imaging data; and considerations for viable outputs for the web and other distribution channels.

Because the fresco surface is so subtle, a high-resolution (sub-millimeter) capture was required. Approximately 1,500 overlapping 50MP images were collected following a rule-based, error-minimizing, and software-independent data acquisition methodology.

Another key goal of the work was to acquire appropriate metadata about the imaging project to aid in data reuse and scholarship. We employed a novel metadata acquisition and management approach using software tools developed by Cultural Heritage Imaging in collaboration with the Centre for Cultural Informatics of the Foundation for Research and Technology Hellas in Heraklion, Crete.

The methodology and tools are designed for digital representations that are built with computational photography technologies, and that are intended for use in interdisciplinary science and humanities scholarship. The software builds a “Digital Lab Notebook” and takes the form of a user-friendly toolkit, which makes it possible to document not only the algorithmic transformation of photographic data, but also the context in which the photographs were created. Current computational photography technologies are based on the algorithmic extraction of information from multiple photographs, generating new information not found in any single photo. This software’s new metadata and knowledge management methodology produces metadata-rich empirical digital data. In turn this managed metadata enhances the likelihood of the information’s sustainability.

The result is CIDOC Conceptual Reference Model (CRM) mapped Linked Open Data (LOD) describing the capture context and data validity. The tools use a natural language interface to collect relevant information about the subject, people, project, and equipment. The user needs no CRM or LOD experience to produce this rich metadata result.

With a sound metadata foundation, the advanced photogrammetric imaging data collected for this project will assure a reliable baseline of the mural’s current condition. It will also ensure the reusability of the data for future generations.

Author Biographies

Carla Schroer is co-founder and Director of Cultural Heritage Imaging (CHI), a nonprofit corporation dedicated to advancing the state of the art in digital capture and documentation of the world’s cultural, historic, and artistic treasures. Carla leads the training and software development programs, while also working on field documentation projects using Reflectance Transformation Imaging and photogrammetry. She spent 20 years in the commercial software industry directing a range of projects including Sun Microsystems’ Java technology.

Mark Mudge is co-founder and President of CHI. He has worked as a professional bronze sculptor and has been involved in photography and 3D imaging for over 25 years. He is a co-inventor, with Tom Malzbender, of the computational photography technique Highlight Reflectance Transformation Imaging. He has published twelve articles and book chapters related to cultural heritage imaging and serves on several international committees, including the International Council of Museums’ (ICOM) Documentation Committee (CIDOC).
